Intel Quad RAID-0 Optane Memory 32GB Bootable Without VROC Key!

Subject: Storage | October 4, 2017 - 09:24 PM |
Tagged: x299, VROC, skylake-x, RAID-0, Optane, Intel, bootable, boot

We've been playing around a bit with Intel VROC lately. This new tech lets you create a RAID of NVMe SSDs connected directly to newer Intel Skylake-X CPUs, without the assistance of the chipset or any other RAID-controlling hardware on the X299 platform. While the technology is not fully rolled out, we did manage to get it working and tested a few different array types as a secondary volume. One of the pieces of conflicting info we had been trying to clear up was whether you can boot from a VROC array without the currently unobtanium VROC key...


Well, it seems that question has been answered with our own tinkering. While there was absolutely no indication in the BIOS that our Optane Memory quad RAID-0 was bootable (the array is configurable but does not appear in the bootable devices list), I'm sitting here looking at Windows installed directly to a VROC array!

Important relevant screenshots below:


For the moment this will only work with Intel SSDs. Intel's VROC FAQ states that 'selected third-party SSDs' will be supported, but it is unclear whether that includes bootability (future support changes would come as BIOS updates, since they must be applied at the CPU level). We're still digging into VROC as well as AMD's RAID implementation. Much more to follow, so stay tuned!


October 4, 2017 | 09:32 PM - Posted by Cyclops

They finally listened to your DRM warcry and are softening up their defenses.

Next objective is to get them out of the closet.

October 4, 2017 | 09:42 PM - Posted by Allyn Malventano

Oh, this had nothing to do with me. Nothing has been updated since I posted the last article. Heck, it could be something that's not supposed to be allowed for all we know!

October 5, 2017 | 12:41 AM - Posted by James

Is it possible that the motherboard includes the regular, but not the premium key? Or is it in some kind of free trial mode?

October 5, 2017 | 01:00 PM - Posted by Allyn Malventano

The motherboard is in pass-through mode, but RAID-0 *may* be supported without the key at all.

October 5, 2017 | 04:39 AM - Posted by psuedonymous

I can't recall any of Intel's marketing material ever stating that the key was needed for bootable VROC, only that it was needed for non-RAID-0 arrays.

October 4, 2017 | 11:09 PM - Posted by Paul A. Mitchell (not verified)

YOU GO, Allyn!

G-R-E-A-T stuff.

100GB is a decent C: system partition.

October 4, 2017 | 11:37 PM - Posted by IntelGiant

Allyn is the absolute best we have, Go Man Go! :)

I'm using a single 32GB Optane for my X99 workrig right now, with amazing results.

I tried to RAID two Optane modules, but X99 would only recognize a single drive.

Upgrading to Z370 Maximus 10 Extreme and 8700K as soon as I can, and adding an Intel Optane 900P SSD into slot number 4. :)

Thank you Allyn,

You are da man of the hour,

peace and love.

October 9, 2017 | 05:52 AM - Posted by fullbododydenim (not verified)

Do you mean your X299 platform? I didn't think the optane drives were compatible with the older X99 platform.

October 5, 2017 | 12:34 AM - Posted by James

"One of the pieces of conflicting into we had been trying to clear up was can you boot from a VROC array without the currently unobtanium VROC key..."

Conflicting into? I assume you mean info.

In the earlier article, you said something about a 90 day free trial or something. Does the RAID just stop working after 90 days?

October 5, 2017 | 01:03 PM - Posted by Allyn Malventano

The trial appears to only affect volumes created within the GUI that are beyond what the installed key allows. The trial stuff can't apply to an array seen by the BIOS, as we're way before any trial counters within the installed OS driver. The conflict I was referencing was that the Intel VROC FAQ states that pass-through mode won't even support RAID-0.

October 5, 2017 | 12:39 AM - Posted by James

Any idea how much of a CPU performance hit you would take for the RAID 5 mode for parity calculations? Is that trivial these days? I was thinking it might not be at many GB per second. I think I would rather have an actual completely hardware RAID card, but I probably don't want to pay for one.

October 5, 2017 | 02:17 AM - Posted by Paul A. Mitchell (not verified)

With so many multi-core CPUs available now, there is a very real probability that one or more of those cores is idle and available to do the processing necessary to support software RAID arrays. Also, it appears from initial measurements that AMD's X399 chipset has a UEFI feature that allows "interleaving" between 2 or more x16 PCIe slots. As such, the era of dedicated hardware RAID IOPs may be waning in the face of super powerful multi-core CPUs like AMD's Threadripper. If I had to guess, I would speculate that these factors played heavily in the minds of the ASUS engineers who designed their Hyper M.2 X16 Card. Supporting bifurcation / quad-furcation in the chipset also eliminates the need for a PLX-type switch to be integrated onto the card's PCB. Here, compare Highpoint's SSD7101A-1, which does have a PLX chip. In general, motherboard vendors need to embrace a goal of supporting all modern RAID modes for as many NVMe SSDs as their motherboards can accommodate. And that appears to be the case for AMD's recently announced NVMe RAID support, albeit only for their top-end X399 chipset. Expect this technology to trickle down over time. We can also predict natural evolutions, e.g. an ASUS DIMM.2 slot that accepts an AIC with 4 x M.2s.
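The CPU work being offloaded to those idle cores is, at its heart, just XOR arithmetic. Here is a minimal, hypothetical sketch (not any actual RAID driver's code) of the per-stripe parity computation a software RAID-5 implementation performs, and why it allows rebuilding any single lost chunk:

```python
# Hypothetical sketch of software RAID-5 parity math. Parity is the
# bytewise XOR of all data chunks in a stripe; because XOR is its own
# inverse, any one missing chunk can be rebuilt by XOR-ing the
# surviving chunks with the parity chunk.
def xor_parity(chunks):
    parity = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            parity[i] ^= b
    return bytes(parity)

# Three data chunks from one stripe of a 4-drive array (3 data + 1 parity).
data = [b"\x0f\x0f", b"\xf0\xf0", b"\xaa\xaa"]
p = xor_parity(data)

# Simulate losing the second drive: rebuild its chunk from the
# remaining data chunks plus the parity chunk.
rebuilt = xor_parity([data[0], data[2], p])
assert rebuilt == data[1]
```

Real implementations do this with wide SIMD instructions rather than a byte loop, which is why a single modern core can keep up with multiple GB/s of parity traffic.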

October 5, 2017 | 09:44 AM - Posted by 35orMoreWattsSVP (not verified)

I like the features on that Gigabyte MZ31-AR0 Extended ATX Server Socket SP3/Epyc motherboard. That's plenty of PCIe 3.0 connectivity: four full x16 slots, one x16-length slot wired x8, and two x8 slots. So maybe someone will get this and do some benchmarking with the Epyc 7401P 24-core/48-thread CPU SKU. The Epyc 7401P, at $1075, is only $76 more than a Threadripper 1950X, and that Epyc/SP3 single-socket board supports twice the PCIe lanes (128) and twice the memory channels (8) of any TR/X399 motherboard. The Gigabyte Epyc/SP3 board also has dual 10Gb Ethernet plus a 1Gb port and a lot of other workstation-grade features that consumer motherboards can't match.

That Gigabyte MZ31-AR0 is back in stock at Newegg and others, and at Newegg it only costs $610, which is not bad for the features it offers(1). And that includes actual certification/validation for ECC memory use, with that support covered by the warranty, unlike the consumer variants that may "support" ECC but are not certified/validated to do so.


"MZ31-AR0 (rev. 1.0)"

October 5, 2017 | 12:01 PM - Posted by Paul A. Mitchell (not verified)

Is there enough room between the DIMM slots
and the x16 PCIe slots on that MZ31-AR0?

October 5, 2017 | 02:38 AM - Posted by Kev (not verified)

I set up a RAID once and had no idea what to put for the "Data stripe size" and googling just brought up Tom's Hardware forum idiots pretending like they knew what they were talking about.

I left it at 4k because the sticker on the HDDs said 4k advanced format, and it seems to work okay. I just would like to have my stuff optimized so I'm not leaving performance on the table.

October 9, 2017 | 06:54 PM - Posted by Allyn Malventano

It really just boils down to a few things you have to compromise on:

  • Smaller stripe sizes lead to greater overhead for the RAID itself. A single 128KB sequential request must be divided into 32 requests with a stripe size of 4KB. Conversely, that single request may see faster throughputs as it can potentially spread across 32 separate devices in the array.
  • Larger stripe sizes reduce overhead and improve sequentials to a certain point, but lower QD requests of smaller sizes have a higher chance of hitting just a single device at a time, hurting performance.

Generally, it's best to go with the default for the given setup, unless you are willing to put in some additional time tuning for your specific workload.
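The arithmetic behind that trade-off can be sketched in a few lines. This is a hypothetical illustration (the function name and numbers are mine, not from any RAID tool), showing how many per-drive pieces a single sequential request gets split into at different stripe sizes:

```python
# Hypothetical illustration of striping overhead in a RAID-0 array:
# a request is split into ceil(request / stripe) pieces, which can be
# serviced by at most `drives` devices concurrently.
def split_request(request_kb, stripe_kb, drives):
    """Return (pieces, drives_touched) for one sequential request."""
    pieces = -(-request_kb // stripe_kb)  # ceiling division
    return pieces, min(pieces, drives)

# A 128KB request with a 4KB stripe becomes 32 pieces, engaging all
# 4 drives of a quad array (high overhead, high parallelism).
print(split_request(128, 4, 4))    # (32, 4)

# With a 128KB stripe the same request is one piece hitting a single
# drive (low overhead, no parallelism for this request).
print(split_request(128, 128, 4))  # (1, 1)
```

This is why low-queue-depth small requests favor the larger default stripes, while a single large transfer benefits from being spread thin.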

January 8, 2018 | 11:47 AM - Posted by nosirrahx (not verified)

I have a question about this setup and using M.2-to-U.2 adapters to leverage the far larger Optane U.2 SSDs.

Is there any indication that this would or would not work?

Even if the M.2 form factor Optane drives get bigger, they won't be as big as the U.2 ones, so if I wait, the U.2 drives will still be the tempting way to go.

BTW, I already have my ASUS 16x card, just looking for 4 SSDs to install into it as my boot drive. I am currently torn between waiting for the new Samsung SSDs and going with a Threadripper platform, or going all Intel.
