Review Index:

Intel VROC Tested! - X299 VROC vs. Z270 RST, Quad Optane vs. Quad 960 PRO

Subject: Storage
Manufacturer: PC Perspective

More Confusion, Configuration, and Test System Setup

Quick note: After this article went live we did manage to boot from an Intel SSD VROC array *without* a VROC key installed! Now let's continue.

I'd first like to point out that we have no idea how or why this is even working in the first place. Observe these two conflicting pieces of information. First from the Intel VROC FAQ:

[Image: excerpt from the Intel VROC FAQ]

...and second from the ASUS Hyper M.2 X16 Card press slides:

[Image: ASUS Hyper M.2 X16 Card press slide]

Alright, so we have Intel saying 'No RAID support' without a key, but ASUS saying there is 'No need' for a hardware key with RAID-0. But here I am staring at this:

[Image: available RAID options in the VROC interface]

I can choose any of those options. They all work. They all create usable arrays that report as bootable, though the interface reports that I am in a 90-day 'Trial mode'. Everything seems to work except boot support (which we can only assume requires a key installed in the BIOS to function), but there are additional points of confusion to bring up:

[Image: VROC array creation within the BIOS]

Once VROC is enabled, the BIOS allows you to configure arrays directly. Note that the pair of Intel 600P SSDs can be selected, while a pair of Samsung 960 PROs cannot. You might expect this, since Intel initially said only Intel SSDs would be supported; however:

[Image: the Samsung 960 PRO pair configured as an array in the VROC Windows driver GUI]

There are the same two SSDs configured in an array within the VROC Windows driver GUI. Note that they also report as bootable (right pane). This array actually works and is completely usable.

After all of this, we are left wondering if we are even using true VROC here. There are multiple points in the 'for' column (needs a driver installed to see any connected SSDs in this mode, etc), but there are also plenty of points in the 'against' column as well (SSDs that shouldn't work yet do, etc). We even have another 'WTF' column (bootability is broken even with Intel products, etc). Maybe it's all just half-baked and incomplete at this point, but hey, it appears to work well enough to throw some tests at it, so I guess we can see how it looks, eh?


We've already skimmed a lot of the basics here, but there are a few additional steps needed beyond configuring the BIOS, installing the SSDs on the card, installing the card, etc.:

[Image: loading the F6 driver during Windows installation]

While you can configure the array within the BIOS (assuming you had a VROC key for it to all work properly), you would likely need to load the 'F6 Driver' during the Windows install in order to see the array. We're all spoiled with operating systems having many common RAID drivers built in, but VROC is a new thing, so be sure to copy those drivers onto your USB installer.

If you used the simple 'F6 driver' while installing Windows directly to a VROC array, or if your OS was new enough to see the array without one, you will still need to install the full VROC driver package once within Windows if you wish to get the RSTe GUI, where you can check on array status and configure additional arrays.
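Staging those F6 drivers is just a folder copy. Here is a minimal sketch of that step; the paths and file names (`iaVROC.inf`, etc.) are illustrative stand-ins, not the actual package contents:

```python
# Hypothetical sketch: stage an extracted VROC 'F6' driver package onto the
# root of a Windows USB installer. All paths here are made-up stand-ins.
import pathlib
import shutil

driver_dir = pathlib.Path("/tmp/vroc_f6_driver")   # extracted F6 package
usb_root = pathlib.Path("/tmp/win_usb")            # mounted installer stick

# Stand-ins for the real .inf/.sys files shipped in the package:
driver_dir.mkdir(parents=True, exist_ok=True)
(driver_dir / "iaVROC.inf").write_text("stub")
(driver_dir / "iaVROC.sys").write_bytes(b"stub")

# Copy the whole folder so Setup's 'Load driver' dialog can browse to it:
dest = usb_root / "VROC"
shutil.copytree(driver_dir, dest, dirs_exist_ok=True)
print(sorted(p.name for p in dest.iterdir()))  # ['iaVROC.inf', 'iaVROC.sys']
```

During Windows Setup you would then point the 'Load driver' browse dialog at that folder on the stick.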

Test System Setup

For these tests, we will be using two platforms:

  • X299 (VROC) Testing:
    • ASUS X299
    • Intel Core i9-7960X (16 core, 32 thread. No overclock)
    • 32GB DDR4
  • Z270 (RST) Testing:
    • ASUS Z270
    • Intel Core i7-7700K (4 core, 8 thread. No overclock)
    • 16GB DDR4

To better represent real-world QD=1 performance, c-states were disabled on both platforms. QD=1 benchmark tests on some platforms do not load the CPU enough to keep it at its higher clock rates, and the resulting lower clocks negatively impact storage performance. This occurs because storage benchmarks focus only on the storage and nothing else; real-world applications would be performing calculations or otherwise doing something with the accessed data, keeping the system at a higher clock rate. Disabling c-states gets us closer to that real-world state while running these simpler tests.

October 2, 2017 | 09:19 AM - Posted by Anonymously Anonymous (not verified)

I like big butts and I can not lie.

on topic: impressive perf results!

October 2, 2017 | 10:18 AM - Posted by Anonymous2 (not verified)

Yeah, butt why no WRITE results?

October 2, 2017 | 10:43 AM - Posted by Allyn Malventano

Mainly in the interest of halving the number of charts needed, as well as speeding up testing. Everything does what you’d expect for writes (scales the same way as reads, etc), but getting steady state random writes for the 960 PROs would have meant way more testing time since I’d have to run that workload for much longer to get through the sequential to random transition of the FTL. Optane doesn’t have this problem, but didn’t want to be unfair to the NAND stuff.

October 2, 2017 | 06:04 PM - Posted by asdff (not verified)

I am interested to see if the 4K write cache is available in the VROC type of RAID, because it's usually only available via IRST, which goes through the PCH.

October 2, 2017 | 09:22 AM - Posted by Anonymously Anonymous (not verified)

Also, AMD just announced this:

Now available: Free NVMe RAID upgrade for AMD X399 chipset!

October 2, 2017 | 10:54 AM - Posted by Allyn Malventano

Indeed they did, and we’re testing it now!

October 2, 2017 | 01:56 PM - Posted by Jim Lahey

Excellent!! This is the last piece of information I need before deciding on threadripper vs x299 vs 8700k.

October 2, 2017 | 01:11 PM - Posted by Anonym (not verified)

More importantly, it's free.

October 2, 2017 | 09:34 AM - Posted by mAxius

AMD needs to start getting parity with Intel when it comes to Intel's storage tech.

October 2, 2017 | 02:02 PM - Posted by GottaTestEmAll (not verified)

Optane/XPoint is not an Intel-only storage technology. XPoint is an Intel/Micron technology, and Micron has its QuantX brand of XPoint that's supposed to be available at the end of 2017. So AMD/others can offer XPoint devices if they license from Micron, as Micron appears to want to license its XPoint IP to other makers in addition to branding some QuantX/XPoint products of its own.

Intel requires RAID keys, compared to AMD's support for RAID booting with the latest firmware update, which comes at no extra cost across the AM4/X399/SP3 motherboards that support AMD's Ryzen/Threadripper/Epyc CPU SKUs.

I'd like to see RAID testing done across Intel's consumer and Xeon platform SKUs, as well as AMD's consumer and Epyc platform SKUs. And I'd really love to see that Gigabyte Epyc/SP3 single socket motherboard tested by PCPER with all the single socket Epyc "P" CPU SKUs, compared to any Intel Xeon or consumer SKUs as well as any consumer/Threadripper SKUs, because the single socket Epyc "P" variants on an SP3 motherboard are an even better feature-for-feature and core-for-core deal than even consumer Threadripper.

October 2, 2017 | 02:09 PM - Posted by psuedonymous

Micron and Intel collaborated on the physical storage chips, but Intel alone have added the OS-transparent drive caching to their CPUs and chipsets. That's not something Micron can license to others.

October 2, 2017 | 02:22 PM - Posted by ThinkSometimesBeforeYouReflex (not verified)

Really, you think that AMD/other CPU makers do not have the skills necessary to make XPoint work on their systems/platforms? So your claim is not very valid. And Intel's IP is proprietary, while Micron may just go with some open-standard ways, as there are many open-standard methods that are transparent to the OS also.

And see here, with no costly ($$$$) RAID key required:

"Threadripper owners can now fire up their NVMe RAID arrays"

techreport dot com/news/32632/threadripper-owners-can-now-fire-up-their-nvme-raid-arrays

[Spam Filter is blocking Legit Websites so that techreport dot com will have to be made to actually work for the link to properly work]

October 3, 2017 | 08:35 AM - Posted by psuedonymous

You did not actually read what I said: AMD have not once, on any of their platforms, demonstrated transparent drive caching. Neither have ASMedia (the designer of AMD's recent chipsets).

Sure, you can use Optane on an AMD platform. You can do it TODAY: just plug it right in; it exposes as a normal PCIe NVMe drive. What you can't do is the transparent caching, be it with Optane or any other drive. That's a completely separate problem to solve; Micron aren't even peripherally involved.

October 2, 2017 | 09:58 AM - Posted by Mobile_Dom

I mean, it's definitely one way to get a 128GB Optane SSD :P

October 2, 2017 | 10:44 AM - Posted by Allyn Malventano

True. Too bad you can’t boot from a pair (or trio!?) of stacked X16 cards to go even larger!

October 2, 2017 | 11:53 AM - Posted by Benjamins (not verified)

Well, you could on X399. It would be funny to see 8-10 Optane drives in a RAID 0 on X399.

October 2, 2017 | 05:09 PM - Posted by Anonymously (not verified)

Have you seen der8auer's X399 RAID video?

October 2, 2017 | 01:15 PM - Posted by Paul A. Mitchell (not verified)

> pair (or trio!?) of stacked X16 cards

Users who have contacted Highpoint are being told that they are working on making their SSD7101A-1 bootable.

The specs for that AIC state that multiple cards are also supported.

Many thanks, Allyn, for pushing the envelope.

Can't wait for your comparisons with Threadripper!

October 2, 2017 | 10:18 AM - Posted by Anony (not verified)

The neon glow lines are still frustrating to look at on the graphs.

October 2, 2017 | 02:03 PM - Posted by vyvyvv6565898 (not verified)

Is ASUS ever going to actually sell the Hyper M.2 x16, or is it more vaporware?

October 2, 2017 | 02:44 PM - Posted by Paul A. Mitchell (not verified)

FYI: Highpoint have announced three NVMe add-in cards.

One user at another forum reported success getting their SSD7110 driver to work with the SSD7101.

The specs for the SSD7110 say it's bootable:

"Bootable & Data Storage"

October 2, 2017 | 11:54 PM - Posted by Allyn Malventano

I already linked to two of those in the article :)

October 3, 2017 | 12:35 AM - Posted by Paul A. Mitchell (not verified)

I missed those links because I didn't click on them.

p.s. Readers should know that I'm a "Highpoint Fanboy", and Allyn graciously tested a Syba 2.5" U.2-to-M.2 enclosure for me, before the Highpoint SSD7101A-1 was released, after they announced their model 3840A.

Here's the Newegg link to that Syba enclosure:

Thanks again to Allyn for doing that test.

An add-in card is superior for adding 4 x M.2 SSDs, because it eliminates the need for U.2 cables and additional enclosures.

October 2, 2017 | 04:59 PM - Posted by Oddbjørn Kvalsund (not verified)

Great test, Allyn! I'm disappointed to see that random 4K reads at QD1 for the VROC Optane RAID don't scale at all as you add drives. What's the deal with that?

October 3, 2017 | 03:31 AM - Posted by Oddbjørn Kvalsund (not verified)

From the 86k IOPS @ QD1 with one drive, I was hoping to see 300K+ IOPS @ QD1 with four drives... :(

Is this a driver issue or is the actual hardware pipeline saturated at around ~100K IOPS?

October 3, 2017 | 02:13 PM - Posted by Allyn Malventano

QD1 can't scale with additional drives in RAID because each individual request is only going to a single drive. It *can* scale sequential performance, for example if you had 16KB stripe size and did 128KB sequential, each request would spread across multiple drives and you would get higher throughput. Not so for small (4KB) random access, where it's the latency of the device that comes into play more than the straight line throughput.
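That stripe-mapping logic can be sketched in a few lines. This is a rough illustration (not PC Perspective's code, and parameters like the 16KB stripe size are just the example values from the comment) of why one small random request lands on a single drive while a large sequential request spreads across all of them:

```python
# Sketch of why a QD=1 4KB random read cannot scale with drive count while
# large sequential reads can: map one request onto the members of a striped
# (RAID-0) array using the usual round-robin stripe layout.
STRIPE = 16 * 1024  # 16KB stripe size, the example used above

def drives_touched(offset, length, stripe=STRIPE, n_drives=4):
    """Return the set of member drives a single request lands on."""
    first = offset // stripe
    last = (offset + length - 1) // stripe
    return {s % n_drives for s in range(first, last + 1)}

# A 128KB sequential read spans 8 stripes -> all 4 drives work in parallel.
print(sorted(drives_touched(0, 128 * 1024)))        # [0, 1, 2, 3]
# A 4KB random read fits inside one stripe -> exactly one drive services it,
# so completion time is that one drive's latency, whatever the array size.
print(sorted(drives_touched(40 * 1024, 4 * 1024)))  # [2]
```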

What *is* significant about increasing the number of low latency devices in a RAID is that latencies will remain lower as QD increases, since the load is being spread across several SSDs. I dug deeper into this using Latency Percentile data in my triple M.2 Z170 piece. Keeping latencies lower helps 'shallow the queue' since a given workload will naturally settle at a lower queue depth when applied to very low latency storage (Optane).

The latency results of this piece also used Latency Percentile data, only I referenced the 50% point of the results to get 'latency weighted average' figures instead of the IO weighted numbers you'd get from simpler benchmark apps. Trying to make this many comparisons across this many dimensions (number of drives, different drive types, different platforms, different workloads, different queue depths, etc) meant that there was no room left for a 701 data point plot line of each individual test result :).
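The '50% point' readout described above amounts to interpolating a latency-percentile curve at the median. Here is a hypothetical sketch of that readout; the curve data is made up for illustration and is not from the article's test results:

```python
# Given a latency-percentile curve (cumulative % of IOs completed at or
# below each latency, in microseconds), interpolate the latency at the
# 50th percentile to get a single 'latency weighted average' figure.
def percentile_latency(points, pct=50.0):
    """points: sorted (latency_us, cumulative_pct) pairs; linear interpolation."""
    for (l0, p0), (l1, p1) in zip(points, points[1:]):
        if p0 <= pct <= p1:
            return l0 + (pct - p0) / (p1 - p0) * (l1 - l0)
    raise ValueError("pct outside data range")

curve = [(8, 0.0), (10, 35.0), (12, 70.0), (16, 95.0), (20, 100.0)]
print(round(percentile_latency(curve), 2))  # 10.86 (us)
```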

October 3, 2017 | 03:41 PM - Posted by Oddbjørn Kvalsund (not verified)

Excellent clarification, thanks!

October 2, 2017 | 05:07 PM - Posted by Paul A. Mitchell (not verified)

Does Allyn have an AMD Threadripper in his lab?

October 2, 2017 | 07:53 PM - Posted by Allyn Malventano

He does, and yes, he is testing AMD's implementation.

October 2, 2017 | 08:09 PM - Posted by Paul A. Mitchell (not verified)

Can't wait!

We appreciate your consistent excellence, Allyn.

October 2, 2017 | 08:03 PM - Posted by Paul A. Mitchell (not verified)

Allyn, this YouTube video is back up:

Finally figured out why THREADRIPPER has so many PCIe lanes (en)

In answer to a subtle issue we have already discussed, he does show how to enable the "interleave" option in the Zenith Extreme UEFI/BIOS. As such, it appears that it is possible to interleave 2 such add-in cards.

October 2, 2017 | 09:19 PM - Posted by EpycIsThreadripperTimes2 (not verified)

And for every PCIe lane/memory channel that the Threadripper/X399 motherboard platform offers, the Epyc/SP3 single socket motherboard platform offers 2! So Epyc SP3 motherboards support 128 PCIe lanes and 8 memory channels.

And Anandtech makes me LOL with its, along with servethehome's, crappy "testing" of the single socket Epyc "P" SKUs: both are using dual socket Epyc/SP3 motherboards and non-"P" Epyc SKUs, populating only a single socket on a dual socket board in an attempt to estimate how a single socket Epyc SKU would perform. An Epyc 7401 (dual socket SKU) is not an Epyc 7401P (single socket SKU), and the Epyc 7401 costs more than the 7401P.

Really, Anandtech and Servethehome have hit new lows, having neglected to even test the Epyc 7401P against the Threadripper 1950X (Anandtech), or testing only with 2P Epyc/SP3 motherboards and non-P Epyc CPU SKUs (Anandtech and Servethehome).

Look at what Anandtech says:

"Anyone looking to build a new workstation is probably in a good position to start doing so today. The only real limitation is going to be if parts are at retail or can only be found by OEMs, how many motherboards will be available, and how quickly AMD plans to ramp up production of EPYC for the workstation market. We’re getting all the EPYC 1P processors in for review here shortly, and we’re hoping Intel reaches out for Xeon-W. Put your benchmark requests in the comments below."(1)

How the hell can Anandtech ignore the Epyc 7401P, with its 24 cores and 48 threads for only $1075, compared to the Threadripper 1950X ($999), in Anandtech's "Best" CPUs for workstations article! And servethehome is doing the same thing by trying out the 7401 (dual socket SKU) with only one socket populated on a 2 socket SP3 motherboard.

There is currently a great Epyc/SP3 single socket motherboard up for sale (the GIGABYTE MZ31-AR0 Extended ATX Server Motherboard, Socket SP3, single socket MB) for $609 (back in stock again at Newegg), and no one is using that for their Epyc "P" single socket processor testing. Anandtech even states [above: see quoted statement] that they are "getting" all the Epyc "P" single socket SKUs in for testing, but still published an article that should not have been published until Anandtech had a single socket Epyc/SP3 motherboard to test the Epyc 7401P and the other "P" variants! And that Gigabyte Epyc/SP3 single socket motherboard has been on sale for over a month now, and do not tell me that Anandtech does not know that, with all the contacts Anandtech has in the motherboard/CPU industry.

Damn, folks are screaming all across the web on the Blender/Adobe/SolidWorks graphics forums for some single socket Epyc CPU/SP3 motherboard testing, and they have to rely on enthusiast websites like Anandtech/others but are mostly being ignored by them. And the enthusiast websites are trying like hell to sell everyone on Threadripper, and Threadripper is not even a workstation grade part.


"Best CPUs for Workstations: 2017"[what a joke this article is and Anand Lal Shimpi would have never published this under his watch]

October 2, 2017 | 11:57 PM - Posted by Allyn Malventano

‘Interleave’ as it relates to that BIOS option is likely for tweaking performance. I don’t think it is required to make a bootable volume of >4 SSDs - that’s an Intel VMD limitation.

October 3, 2017 | 12:19 AM - Posted by Paul A. Mitchell (not verified)

> that BIOS option is likely for tweaking performance.

Yes: I thought the same thing, e.g. somewhat similar to the way DRAM is interleaved.

And I thought this tweaking option might also help to explain what you observed earlier:

"Each drive would have to do 3.55GB/s to accomplish this speed. 960 PROs only go 3,300 MB/s when they are reading actual data."

October 3, 2017 | 03:18 AM - Posted by Martin Trautvetter

Page 3:

"With Optane's in a RAID,"

should read

"With Optanes in a RAID," or "With Optane drives in a RAID,"

October 3, 2017 | 02:23 PM - Posted by Allyn Malventano

The offending apostrophe has been sacked.


October 3, 2017 | 12:49 PM - Posted by James

I skimmed through the article, but I still don't know exactly what Intel is trying to lock behind a ridiculous paid hardware key. I could see them trying to keep RAID 5 (parity) mode for professional use/market segmentation, although I don't know if many people would choose to pay for just RAID 5. It would presumably use less valuable SSD space than mirroring everything. Massive RAID bandwidth isn't that useful to your average consumer anyway; it is mostly for professional applications. Locking out RAID modes that can be used on the low end consumer chipset is bogus though. Intel has used the chipset bottleneck as a way to segment the market for quite a while. You can have all kinds of connections on the chipset, but you can't use many of them at once due to the upstream bandwidth limitation. If you wanted more via more PCIe off the CPU, you needed to upgrade to a higher end platform. AMD's Ryzen (non-Threadripper) is very limited on PCIe also, so this isn't really unique. Threadripper is already basically a low end Epyc processor, and a significantly more expensive platform than standard Ryzen.

Locking parity mode out would probably just set people up to lose data, because they will configure just striping without mirroring. It is kind of like keeping ECC for the enterprise market. I am of the opinion that ECC should be everywhere now; I had a huge number of files get corrupted by an undetected memory error. I am tempted to buy a server level board.

October 3, 2017 | 02:22 PM - Posted by Allyn Malventano

I believe their concern is that the VROC technology can scale very high on drive counts and has its roots in the enterprise side. The worry would be that some creative enterprise IT guy would just buy a batch of the cheaper desktop parts and roll them out in a larger cluster. The parity or not question is a bit moot since higher performing storage would generally be RAID-0 with a redundant complete system sitting alongside it with a mirror of the data for failover. That's why I suggest at the end of the piece that the hardware key limits should be in drive counts and not in RAID levels. This way pro users could benefit from the same parity volume reliability benefits that Z270 users currently enjoy, but limited to a reasonable consumer-level drive count of 4 (which is the bootable VMD limit anyway).

October 3, 2017 | 09:19 PM - Posted by Dapple (not verified)

Nvidia is facing a similar problem right now. A lot of folks are aware that miners are purchasing a ton of graphics cards, which has led to higher graphics card pricing. What folks don't realize is that consumer Nvidia graphics cards are getting snapped up by the professional market as well. That is why it's hard to find any of the blower-style 1080s in stock (FE and 3rd party).

October 3, 2017 | 09:21 PM - Posted by Dapple (not verified)

I meant 1080 Tis. But they are buying 1080s, 1070s, and 1060s as well. The 1080 Ti blower is preferred due to its size and fan setup being the best fit for server cases.

October 4, 2017 | 04:14 AM - Posted by James

Even limiting it to 4 drives would offer huge bandwidth, more than any consumer level application really requires. It seems like they could have done the consumer version of the chipset with a 4 drive limit (1 VMD controller) and a workstation/server variant with the full 3 VMD controllers enabled, without hardware keys. That still would not really look very good with AMD offering support for a large number of drives at no upcharge. It is just a fact that massive bandwidth can be supported with high-end, but still consumer level, hardware. This is similar to when high end RISC workstations fell to cheap PC hardware.

October 3, 2017 | 06:46 PM - Posted by Paul A. Mitchell (not verified)

> The parity or not question is a bit moot since higher performing storage would generally be RAID-0 with a redundant complete system sitting alongside it with a mirror of the data for failover.

This is EXACTLY what we have been doing with our production workstation, and it works G-R-E-A-T!

We actually run 3 active "tiers" on that workstation, and a fourth tier is backup storage servers:

(1) 14GB ramdisk
(2) RAID-0 w/ 4 x 6G Nand Flash SSDs
(3) 2TB rotating HDD for archiving (5 year warranty)
(4) networked storage servers (e.g. old LGA-775 PCs)

It's also very easy to "sync" our ramdisk with our RAID-0 array, e.g.:

xcopy R:\folder O:\folder /s/e/v/d
xcopy O:\folder R:\folder /s/e/v/d


R: = ramdisk drive letter
O: = RAID-0 drive letter

The entire ramdisk is SAVEd and RESTOREd on O: automatically at SHUTDOWN and STARTUP, respectively.

October 3, 2017 | 08:37 PM - Posted by Dapple (not verified)

I have a premium VROC key. It was actually fairly easy to get. I have offered it to a couple of tech folks to test, but no takers so far.

October 3, 2017 | 08:40 PM - Posted by Dapple (not verified)

By the way, I have some additional notes about VROC and its setup that may be of help. I assume you can view the email address I've entered for this comment. If you want to discuss, just let me know.

October 4, 2017 | 03:50 PM - Posted by preferredcustomer (not verified)

To answer your question about using VROC or not... I have to say not, because you did not create the array within the BIOS, since the silly key was not engaged. It still baffles me that Intel will charge for non-Intel M.2s and has not released the key. AMD giving it away for free and having more PCIe lanes will hurt Intel.

I have been bouncing between the 1950X and 7900X for my next system... I already have a Hyper M.2 x16 card for whichever I choose. Where I am, it is difficult to obtain cooling for the 1950X, and the VROC key for the 7900X...


OK, why I think you are not using VROC... #1: you are in "pass-thru" mode for VROC. The best way I can explain it is that VROC is acting like an HBA in your setup. This terminology makes sense.

Using RSTe gives you the ability to create a software RAID. This was the only way you were able to create the stripe in the article.

You do not have the DMI 3.0 bottleneck in the results (so it is using CPU lanes, as expected with x16 PCIe), but there is a little overhead for the software stripe. My guess is a pure VROC RAID 0 setup will yield slightly better results if managed from the BIOS.

This can be tested by installing Windows on formatted (secure erased, etc.) M.2s... F6 the driver, and if it doesn't see an array or want to create an array, we should have the answer... I'm predicting software RAID via VROC pass-thru (aka HBA)...

Lastly, it is possible VROC is using the same formatting of the array as RSTe, just doing it from the BIOS... Intel has kept us in the dark on this crap. I am curious: after building the RAID 0 in RSTe, when you go to the BIOS, does it show RAID 0, or unconfigured like your screenshot?

Thanks for reading my long attempt to explain my idea... //R//

October 9, 2017 | 02:21 PM - Posted by Allyn Malventano

VROC arrays *can* be created in the BIOS, but only with Intel SSDs; Samsung SSDs currently show as not compatible. Further, after this piece went up, I was able to create, install Windows to, and boot from a RAID-0 array. Stands to reason it's true VROC.

October 4, 2017 | 04:26 PM - Posted by preferredcustomer (not verified)

Hi, just to add... I found a Xeon-based VROC article that explains how VROC works:

Q7: Is Intel VROC software or hardware RAID? Ans: Hybrid... It seems VROC uses mainly hardware, with software to calculate the RAID logic.

Q11: References what needs a VROC key... I think Intel should revisit this for the X299 chipset.

Q12: How is Intel VROC different from Intel RSTe?

I tried to copy and paste, but Intel locked the PDF from the link above... This is a good read to understand more in depth how VROC works... //R//

October 5, 2017 | 01:42 AM - Posted by Habibird (not verified)

Hi Allyn,

Excellent review and very thorough testing. A couple of questions:

1. I can't seem to find the VROC key for sale anywhere. How did you obtain yours?

2. The SM961 is listed as being compatible with VROC. I wonder if the SM960 is also compatible to create a bootable VROC Raid 0 or Raid 1 array since it is just the consumer version of the OEM SM961?

ps: Above questions also open to anyone who has the hardware and has tested this.

Thanks in advance gents/ladies.

October 12, 2017 | 05:43 PM - Posted by Allyn Malventano

1. All testing in this article was done without a VROC key.

2. They should be, but no way to know for sure as we don't know how Intel is limiting the compatibility.

October 13, 2017 | 09:11 AM - Posted by Michael M (not verified)

Third party SSDs like the SM961 are supported only by Intel Xeon chipsets, like the C622 or C4xx.

Even then, Intel does not mention if they are bootable under VROC.

Intel informed me by email that Intel does not support VROC on the X299, and that the motherboard manufacturers offering VROC are responsible for its proper functioning.

Very frustrating that ASUS puts the blame on Intel, not offering any information, besides spreading false information - at least from the German support.

I want to run real VROC! Not in pass-through mode.

Anyhow, thanks for sharing this excellent test!


October 5, 2017 | 01:28 PM - Posted by Paul A. Mitchell (not verified)

More details are in Podcast #470.

October 13, 2017 | 02:02 AM - Posted by Michael M (not verified)

I am already struggling with a very basic problem. As far as I understood, I will have to install the RSTe (enterprise) driver rather than the RST. However, the RSTe cannot be installed "on my platform". Presumably it does not support Win10 x64. In general, Intel does not officially support the X299 with that driver...

Have you installed the RSTe on a Win10 platform? Which version?

Many thanks

October 26, 2017 | 11:42 PM - Posted by Allyn Malventano

You might have found one of the older RSTe versions. Look for

December 1, 2017 | 02:44 PM - Posted by PhantomLimb (not verified)

Hi Allyn, I found an Intel document explaining the VROC trial mode. Searching the web, I found the document “Intel Virtual RAID on CPU (Intel® VROC) and Intel Rapid Storage Technology enterprise (Intel® RSTe)”.

In that document they explain the 90 day trial period and its limitations. What I understood was that the trial period acts as if you had the Premium key installed. That is why you are able to use 3rd party SSDs. It will show the RAID array in the RSTe GUI, but it won't show it in the BIOS, where the attached SSDs will appear as independent non-RAID disks, and it might report that there is no RAID volume on the system.

The RSTe VROC implementation has a feature to configure a RAID in the BIOS, but you need to have the Intel VMD configured in the BIOS (the guide has an example of doing this on a Purley chipset motherboard) and the upgrade key installed.

They made the following important clarifications: you can only configure data RAID arrays in the BIOS, not spanned system volumes. You also need to use the correct F6 driver when installing Windows to a bootable RAID to see the device during installation: the iaStorE drivers are for SATA and sSATA drives, while iaVROC is for NVMe drives. You need to load the iaVROC driver.

The guide also comes with a couple of warnings about what happens when the 90 day trial period finishes. Your RAID volumes will appear in the RSTe GUI but won't be accessible. They will only become accessible when you install the upgrade key. They don't guarantee the safety of the data during the trial period.

On an Intel forum, an Intel representative informed a customer that for X299 only the standard mode could be activated after the trial period. He even advised purchasing a key from Mouser Electronics to obtain the correct VROC upgrade key.

I really appreciated the depth of your article, and your latency analysis was awesome. We were all interested in VROC because it provides a direct connection to the CPU, bypassing the DMI bottleneck, hopefully reducing access latencies and improving RAID performance. But the poor latency results and the Intel guide cause certain questions to arise.
1) The first one is about PCIe bifurcation.

A customer asked Gigabyte whether their motherboards supported PCIe bifurcation and was informed that none of their boards actually supported it. However, Gigabyte has options in the BIOS to configure PCIe slots for VROC on some motherboards. Another customer purchased an ASUS Hyper M.2 card, wanted to use it on an EVGA motherboard, and was able to do so after an update to a BIOS that let him configure the PCIe slots. Is PCIe bifurcation something that can be done in software only?

The Hyper M.2 NVMe adapter doesn't feature a PLX chip for PCIe bifurcation, but it is capable of dividing the slot bandwidth when configured in the BIOS. There is no mention of 4x/4x/4x/4x bifurcation in the ASUS motherboard manuals. The only motherboards that I know of that state bifurcation are mini-ITX motherboards that bifurcate the only PCIe slot they have to support riser cards for dual GPUs.
2) The guide mentions OCuLink technology, a high speed PCIe transmission technology.

In one of the examples in the guide, an Intel reference board will only work with VROC through an OCuLink connection. Maybe to get the most out of VROC you need a specialized connection like OCuLink. Reading an article on OCuLink, it says some U.2 devices are able to utilize the OCuLink protocol. Since the VMDs accept these types of connections, is it safe to assume that VROC uses this protocol in some form? The RSTe driver lets you connect RAID arrays to the CPU, but it also lets you configure RAID arrays that connect to the PCH. If OCuLink is not available, will it fall back to a PCH RAID?

I found a Supermicro Xeon WS motherboard, the X11SRM-VF, with these types of connections. Will this motherboard, with this type of connection, reduce the access latency? Could a U.2 connection reduce the latency if it uses the OCuLink protocol? Do Intel U.2 drives support this protocol? Is this the reason why Intel drives are the only ones supported in some modes? 3rd party SSDs are supported in Premium mode, but do they fall back to the PCH connection due to the lack of this protocol? Is there a way to check whether the I/O traffic is going through the PCH in the VROC trial mode?

I found a reseller of this board in my country, and I would love to hand it to you for an OCuLink latency analysis.
3) The guide mentions a special F6 driver for VROC, the iaVROC driver. Is this driver only used to see the BIOS-configured RAID array during the Windows installation? If I load a non-VROC driver like iaStorE (which is for SATA), could that cause performance degradation, or will the RAID array fall back to the PCH because of it?

4) The guide also mentions that each motherboard has a different way to configure the Intel VMD in the BIOS, which is a prerequisite for using VROC. Do you know where I can check this in an ASUS X299 motherboard BIOS?
Your article was an excellent example of technology journalism, and I'm looking forward to testing VROC over an OCuLink connection. Sorry if my English appears harsh or broken; I'm not a native speaker.

December 2, 2017 | 05:36 PM - Posted by HotGarbage (not verified)

I downloaded the guide, and I think that in the review you might have missed a step to configure VROC. I see that you configured the hardware for connecting multiple drives to a configured set of lanes. In the guide they set specific VMD ports through a specific OCuLink connection, whatever that is. They also configured the Volume Management Device as an OCuLink connection, and they did the same for every CPU the platform had. I'm assuming the ASUS board has the ability to do this with a PCIe 3.0 connection. Correct me if I'm wrong, but I'm assuming that any RAID array created in the RSTe GUI will run over the PCH connection if the VMD ports aren't linked to the PCIe 3.0 connection in the BIOS.

December 4, 2017 | 05:40 AM - Posted by Flake (not verified)

Does anyone know where I can find the VROC key, and the price? Intel says "contact your mainboard manufacturer" and Gigabyte (I have a GA-X299-UD4 with 2 x Samsung 960 PRO) says "contact your dealer", but I'm the dealer and I can't find the key!
Thank you!

December 5, 2017 | 05:06 PM - Posted by wwilso91 (not verified)

Hi, I have a couple questions about bandwidth if someone can answer them for me:

1. Would I experience a bottleneck with 4 x Samsung 960 Pros if I use this card in an x8 slot rather than an x16 slot? Will it make any noticeable difference?

2. How does this card compare to the DIMM.2 risers on ASUS boards (Rampage VI Apex & Extreme)? The riser card provides 2 PCIe x4 connections directly to the CPU. Does the Hyper M.2 x16 card have additional overhead that would cause more latency than the riser cards?
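A back-of-envelope sketch for question 1, under two stated assumptions: PCIe 3.0 carries roughly 985 MB/s of usable payload per lane (after 128b/130b encoding), and each 960 PRO reads sequentially at about 3300 MB/s (the figure quoted earlier in this thread):

```python
# Compare the slot's link ceiling to what four SSDs could deliver.
LANE_MBPS = 985    # ~usable PCIe 3.0 payload per lane (assumption)
DRIVE_MBPS = 3300  # ~960 PRO sequential read (assumption)
DRIVES = 4

for lanes in (8, 16):
    link = lanes * LANE_MBPS         # slot bandwidth ceiling
    combined = DRIVES * DRIVE_MBPS   # what the SSDs could deliver together
    print(f"x{lanes}: link ~{link} MB/s, drives ~{combined} MB/s, "
          f"expect ~{min(link, combined)} MB/s sequential")
```

So an x8 slot would cap four-drive sequential reads at roughly 8 GB/s while an x16 slot would not; low-queue-depth 4K random access is latency-bound and would be unlikely to notice the narrower link.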
