
Intel VROC Tested! - X299 VROC vs. Z270 RST, Quad Optane vs. Quad 960 PRO

Subject: Storage
Manufacturer: PC Perspective

Introduction

We've been hearing about Intel's VROC (NVMe RAID) technology for a few months now. ASUS began slipping clues into their X299 motherboard releases back in May. The idea was very exciting, as prior NVMe RAID implementations on the Z170 and Z270 platforms were bottlenecked by the chipset's PCIe 3.0 x4 DMI link to the CPU, and they also had to trade away SATA ports for M.2 PCIe lanes in order to accomplish the feat. X99 motherboards supported SATA RAID and even sported four additional ports, but they were left out of bootable NVMe RAID altogether. It would be foolish of Intel to launch a successor to their higher-end workstation-class platform without a feature available in two (soon to be three) generations of their consumer platform.

To get a grip on what VROC is all about, let's set up some context with a few slides:


First, we have a slide laying out what the acronyms mean:

  • VROC = Virtual RAID on CPU
  • VMD = Volume Management Device

What's a VMD you say?


...so the VMD is extra logic present on Intel Skylake-SP CPUs, which enables the processor to group up to 16 lanes of storage (4x4) into a single PCIe storage domain. There are three VMD controllers per CPU.
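
To put rough numbers on that slide (a quick back-of-the-envelope sketch in Python; the per-SSD lane count assumes x4 NVMe devices, which is not spelled out on the slide):

VMDS_PER_CPU = 3       # three VMD controllers per Skylake-SP CPU (from the slide)
LANES_PER_VMD = 16     # each VMD groups up to 16 lanes of storage
LANES_PER_SSD = 4      # assumption: each NVMe SSD sits on an x4 link

ssds_per_vmd = LANES_PER_VMD // LANES_PER_SSD   # 4 SSDs per VMD domain
ssds_per_cpu = VMDS_PER_CPU * ssds_per_vmd      # 12 directly attached SSDs per CPU
print(ssds_per_vmd, ssds_per_cpu)               # -> 4 12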


VROC is the next logical step, and takes things a bit further. While boot support is restricted to a single VMD, PCIe switches can be added downstream to create a bootable RAID of potentially more than 4 SSDs. So long as the array need not be bootable, VROC enables spanning across multiple VMDs and even across CPUs!

Assembling the Missing Pieces

Unlike prior Intel storage technology launches, the VROC launch has been piecemeal at best and contradictory at worst. We initially heard that VROC would only support Intel SSDs, but Intel later published a FAQ stating that 'selected third-party SSDs' would also be supported. One thing they have remained steadfast on is the requirement for a hardware key to unlock RAID-1 and RAID-5 modes - a seemingly silly requirement given that their consumer chipsets support bootable RAID 0/1/5 without any key (and a bootable VROC array only supports one additional SSD over Z170/Z270/Z370, which can boot from 3-drive arrays).

On the 'piecemeal' topic, we need three things for VROC to work:

  • BIOS support for enabling VMD Domains for select groups of PCIe lanes.
  • Hardware for connecting a group of NVMe SSDs to that group of PCIe lanes.
  • A driver for OS mounting and managing of the array.

Let's run down this list and see what is currently available:

BIOS support?


Check. Hardware for connecting multiple drives to the configured set of lanes?


Check (960 PRO pic here). Note that the ASUS Hyper M.2 X16 Card will only work on motherboards supporting PCIe bifurcation, which allows the CPU to split its PCIe lanes into subgroups without the need for a PLX chip. You can see two bifurcated modes in the above screenshot - one intended for VMD/VROC, while the other (data) selection enables bifurcation without enabling the VMD controller. That option presents the four SSDs to the OS without the need for any special driver.
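
To illustrate what that bifurcation setting does (a minimal sketch, not ASUS or Intel code), the slot's 16 lanes are simply carved into four independent x4 links, one per M.2 socket on the Hyper M.2 card:

for lane in range(16):
    link = lane // 4   # lanes 0-3 -> M.2 socket 0, lanes 4-7 -> socket 1, and so on
    print(f"PCIe lane {lane:2d} -> x4 link / M.2 socket {link}")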

With the above installed, and the slot configured for VROC in the BIOS, we are greeted by the expected disappointing result:


Now for that pesky driver. After a bit of digging around the dark corners of the internet:


Check! (well, that's what it looked like after I rapidly clicked my way through the array creation)

Don't even pretend like you won't read the rest of this review! (click here now!)


October 2, 2017 | 09:19 AM - Posted by Anonymously Anonymous (not verified)

I like big butts and I can not lie.

on topic: impressive perf results!

October 2, 2017 | 10:18 AM - Posted by Anonymous2 (not verified)

Yeah, butt why no WRITE results?

October 2, 2017 | 10:43 AM - Posted by Allyn Malventano

Mainly in the interest of halving the number of charts needed, as well as speeding up testing. Everything does what you’d expect for writes (scales the same way as reads, etc), but getting steady state random writes for the 960 PROs would have meant way more testing time since I’d have to run that workload for much longer to get through the sequential to random transition of the FTL. Optane doesn’t have this problem, but didn’t want to be unfair to the NAND stuff.

October 2, 2017 | 06:04 PM - Posted by asdff (not verified)

I am interested to see if the 4K write cache is available in the VROC type of RAID, because it is usually only available via IRST, which goes through the PCH.

October 3, 2017 | 07:28 PM - Posted by Anonymous23 (not verified)

>butt

October 2, 2017 | 09:22 AM - Posted by Anonymously Anonymous (not verified)

Also, AMD just announced this:

Now available: Free NVMe RAID upgrade for AMD X399 chipset!

https://community.amd.com/community/gaming/blog/2017/10/02/now-available...

October 2, 2017 | 10:54 AM - Posted by Allyn Malventano

Indeed they did, and we’re testing it now!

October 2, 2017 | 01:56 PM - Posted by Jim Lahey

Excellent!! This is the last piece of information I need before deciding on threadripper vs x299 vs 8700k.

October 2, 2017 | 01:11 PM - Posted by Anonym (not verified)

More importantly, it's free.

October 2, 2017 | 09:34 AM - Posted by mAxius

AMD needs to start getting parity with Intel when it comes to Intel's storage tech.

October 2, 2017 | 02:02 PM - Posted by GottaTestEmAll (not verified)

Optane/XPoint is not an Intel-only storage technology. XPoint is an Intel/Micron technology, and Micron has its QuantX brand of XPoint that's supposed to be available at the end of 2017. So AMD and others can offer XPoint devices if they license from Micron, as Micron appears to want to license its XPoint IP to other makers in addition to branding some QuantX/XPoint products of its own.

Compared to Intel's required RAID keys, AMD's support for RAID booting via the latest firmware update comes at no extra cost on AMD's platforms, whether on the AM4, X399, or SP3 motherboards that support AMD's Ryzen/Threadripper/Epyc CPU SKUs.

I'd like to see RAID testing done across Intel's consumer and Xeon platform SKUs as well as AMD's consumer and Epyc platform SKUs. And I'd really love to see that Gigabyte Epyc/SP3 single socket motherboard tested by PCPER with all the single socket Epyc "P" CPU SKUs, compared to any Intel Xeon or consumer SKUs as well as any consumer/Threadripper SKUs, because the Epyc single socket SP3 motherboard and Epyc "P" CPU variants are an even better feature-for-feature and core-for-core deal than even consumer Threadripper.

October 2, 2017 | 02:09 PM - Posted by psuedonymous

Micron and Intel collaborated on the physical storage chips, but Intel alone have added the OS-transparent drive caching to their CPUs and chipsets. That's not something Micron can license to others.

October 2, 2017 | 02:22 PM - Posted by ThinkSometimesBeforeYouReflex (not verified)

Really, you think that AMD/other CPU makers do not have the skills necessary to make XPoint work on their systems/platforms! So your claim is not very valid. And Intel's IP is proprietary, while Micron may just go with some open standard ways, as there are many open standard methods that are transparent to the OS also.

And see here, with no costly ($$$$) RAID key required:

"Threadripper owners can now fire up their NVMe RAID arrays"

techreport dot com/news/32632/threadripper-owners-can-now-fire-up-their-nvme-raid-arrays

[Spam Filter is blocking Legit Websites so that techreport dot com will have to be made to actually work for the link to properly work]

October 3, 2017 | 08:35 AM - Posted by psuedonymous

You did not actually read what I said: AMD have not once, on any of their platforms, demonstrated transparent drive caching. Neither have ASMedia (the designer of AMD's recent chipsets).

Sure, you can use Optane on an AMD platform. You can do it TODAY: just plug it right in, and it exposes itself as a normal PCIe NVMe drive. What you can't do is the transparent caching, be it with Optane or any other drive. That's a completely separate problem to solve, and Micron aren't even peripherally involved.

October 2, 2017 | 09:58 AM - Posted by Mobile_Dom

I mean, it's definitely one way to get a 128GB Optane SSD :P

October 2, 2017 | 10:44 AM - Posted by Allyn Malventano

True. Too bad you can’t boot from a pair (or trio!?) of stacked X16 cards to go even larger!

October 2, 2017 | 11:53 AM - Posted by Benjamins (not verified)

Well you could on X399. It would be funny to see 8-10 Optane drives on a RAID 0 on x399.

October 2, 2017 | 05:09 PM - Posted by Anonymously (not verified)

Have you seen der8auer x399 raid video?

October 2, 2017 | 01:15 PM - Posted by Paul A. Mitchell (not verified)

> pair (or trio!?) of stacked X16 cards

Users who have contacted Highpoint are being told that they are working on making their SSD7101A-1 bootable.

The specs for that AIC state that multiple cards are also supported.

Many thanks, Allyn, for pushing the envelope.

Can't wait for your comparisons with Threadripper!

October 2, 2017 | 10:18 AM - Posted by Anony (not verified)

The neon glow lines are still frustrating to look at on the graphs.

October 2, 2017 | 02:03 PM - Posted by vyvyvv6565898 (not verified)

Is Asus ever going to actually sell the Hyper M.2 x16 or is it more vaporware?

October 2, 2017 | 02:44 PM - Posted by Paul A. Mitchell (not verified)

FYI: Highpoint have announced three NVMe add-in cards:

http://www.highpoint-tech.com/USA_new/CS-product_nvme.htm

One user at another forum reported success getting their SSD7110 driver to work with the SSD7101.

The specs for the SSD7110 say it's bootable:

http://www.highpoint-tech.com/PDF/NVMe/SSD7110/Datasheet_SSD7110_17_09_2...

"Bootable & Data Storage"

October 2, 2017 | 11:54 PM - Posted by Allyn Malventano

I already linked to two of those in the article :)

October 3, 2017 | 12:35 AM - Posted by Paul A. Mitchell (not verified)

I missed those links because I didn't click on them:

http://www.highpoint-tech.com/USA_new/series-ssd7120-overview.htm
http://www.highpoint-tech.com/USA_new/series-ssd7101a-1-overview.htm

p.s. Readers should know that I'm a "Highpoint Fanboy", and Allyn graciously tested a Syba 2.5" U.2-to-M.2 enclosure for me before the Highpoint SSD7101A-1 was released, after they announced their model 3840A.

Here's the Newegg link to that Syba enclosure:

https://www.newegg.com/Product/Product.aspx?Item=N82E16817801139&Tpk=N82...

Thanks again to Allyn for doing that test.

An add-in card is superior for adding 4 x M.2 SSDs, because it eliminates the need for U.2 cables and additional enclosures.

October 2, 2017 | 04:59 PM - Posted by Oddbjørn Kvalsund (not verified)

Great test, Allyn! I'm disappointed to see that random 4K reads at QD1 for the VROC Optane RAID don't scale at all as you add drives - what's the deal with that?

October 3, 2017 | 03:31 AM - Posted by Oddbjørn Kvalsund (not verified)

From the 86k IOPS @ QD1 with one drive, I was hoping to see 300K+ IOPS @ QD1 with four drives... :(

Is this a driver issue or is the actual hardware pipeline saturated at around ~100K IOPS?

October 3, 2017 | 02:13 PM - Posted by Allyn Malventano

QD1 can't scale with additional drives in RAID because each individual request is only going to a single drive. It *can* scale sequential performance, for example if you had 16KB stripe size and did 128KB sequential, each request would spread across multiple drives and you would get higher throughput. Not so for small (4KB) random access, where it's the latency of the device that comes into play more than the straight line throughput.
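
A small sketch of that stripe math (a hypothetical illustration using the 16KB stripe example above, not the actual RSTe/VROC code) shows why a 128KB sequential request spans all four drives while a 4KB random request only ever lands on one:

STRIPE_SIZE = 16 * 1024   # 16KB stripe, as in the example above
NUM_DRIVES = 4

def drives_touched(offset, length):
    """Which member drives does a single request of this size hit?"""
    first = offset // STRIPE_SIZE
    last = (offset + length - 1) // STRIPE_SIZE
    return sorted({stripe % NUM_DRIVES for stripe in range(first, last + 1)})

print(drives_touched(0, 128 * 1024))        # 128KB sequential -> [0, 1, 2, 3]
print(drives_touched(40 * 1024, 4 * 1024))  # 4KB random       -> [2] (a single drive)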

What *is* significant about increasing the number of low latency devices in a RAID is that latencies will remain lower as QD increases, since the load is being spread across several SSDs. I dug deeper into this using Latency Percentile data in my triple M.2 Z170 piece. Keeping latencies lower helps 'shallow the queue' since a given workload will naturally settle at a lower queue depth when applied to very low latency storage (Optane).

The latency results of this piece also used Latency Percentile data, only I referenced the 50% point of the results to get 'latency weighted average' figures instead of the IO weighted numbers you'd get from simpler benchmark apps. Trying to make this many comparisons across this many dimensions (number of drives, different drive types, different platforms, different workloads, different queue depths, etc) meant that there was no room left for a 701 data point plot line of each individual test result :).
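
For readers wondering how a 'latency weighted' 50% point differs from a simple IO-count median, here is a rough sketch of the concept (an illustration only, not the actual test tooling): sort the individual IO latencies, then find the latency below which half of the total time (rather than half of the IOs) has accrued.

import numpy as np

def latency_weighted_median(latencies_us):
    """Latency below which half of the total *time* (not half of the IOs) was spent."""
    lat = np.sort(np.asarray(latencies_us, dtype=float))
    cum = np.cumsum(lat)
    return lat[np.searchsorted(cum, cum[-1] / 2.0)]

# 990 fast IOs at ~10us plus 10 slow stragglers at 1000us:
sample = [10] * 990 + [1000] * 10
print(np.median(sample))                # 10.0   (IO-weighted view)
print(latency_weighted_median(sample))  # 1000.0 (time-weighted view)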

October 3, 2017 | 03:41 PM - Posted by Oddbjørn Kvalsund (not verified)

Excellent clarification, thanks!

October 2, 2017 | 05:07 PM - Posted by Paul A. Mitchell (not verified)

Does Allyn have an AMD Threadripper in his lab?

October 2, 2017 | 07:53 PM - Posted by Allyn Malventano

He does, and yes, he is testing AMD's implementation.

October 2, 2017 | 08:09 PM - Posted by Paul A. Mitchell (not verified)

Can't wait!

We appreciate your consistent excellence, Allyn.

October 2, 2017 | 09:23 PM - Posted by Jim Lahey

+1!

October 2, 2017 | 08:03 PM - Posted by Paul A. Mitchell (not verified)

Allyn, this YouTube video is back up:

Finally figured out why THREADRIPPER has so many PCIe lanes (en)

https://www.youtube.com/watch?v=9CoAyjzJWfw

In answer to a subtle issue we have already discussed, he does show how to enable the "interleave" option in the Zenith Extreme UEFI/BIOS. As such, it appears that it is possible to interleave 2 such add-in cards.

October 2, 2017 | 09:19 PM - Posted by EpycIsThreadripperTimes2 (not verified)

And for every PCIe lane/memory channel that the Threadripper/X399 motherboard platform offers the Epyc/SP3 single socket motherboard platform offers 2! So the Epyc SP3 motherboards support 128 PCIe lanes and 8 memory channels.

And Anandtech, along with servethehome, makes me LOL with their crappy "testing" of the single socket Epyc "P" SKUs. Both sites are using dual socket Epyc/SP3 motherboards and non-"P" Epyc SKUs, populating only a single socket on a dual socket motherboard in an attempt to estimate how a single socket Epyc SKU would perform. But an Epyc 7401 (dual socket SKU) is not an Epyc 7401P (single socket SKU), and the Epyc 7401 costs more than the 7401P.

Really, Anandtech and servethehome have hit new lows and neglected to even test the Epyc 7401P against the Threadripper 1950X (Anandtech), or are only testing with 2P Epyc/SP3 motherboards and non-P Epyc CPU SKUs (Anandtech and servethehome).

Look at what Anandtech says:

"Anyone looking to build a new workstation is probably in a good position to start doing so today. The only real limitation is going to be if parts are at retail or can only be found by OEMs, how many motherboards will be available, and how quickly AMD plans to ramp up production of EPYC for the workstation market. We’re getting all the EPYC 1P processors in for review here shortly, and we’re hoping Intel reaches out for Xeon-W. Put your benchmark requests in the comments below."(1)

How the hell can Anandtech ignore the Epyc 7401P, with its 24 cores and 48 threads for only $1075, compared to the Threadripper 1950X ($999) in Anandtech's "Best" CPUs for workstations article! And servethehome is doing the same thing by trying out the 7401 (dual socket SKU) with only one socket populated on a 2-socket SP3 motherboard.

There is currently a great Epyc/SP3 single socket motherboard offering up for sale (the GIGABYTE MZ31-AR0 Extended ATX Server Motherboard, Socket SP3, single socket) for $609 (back in stock again at Newegg), and no one is using that for their Epyc "P" single socket processor testing. Anandtech even states [see the quoted statement above] that they are "getting" all the Epyc "P" single socket SKUs in for testing, but still published an article that should not have been published until Anandtech had a single socket Epyc/SP3 motherboard to test the Epyc 7401P and its other "P" variants! And that Gigabyte Epyc/SP3 single socket motherboard has been on sale for over a month now, and do not tell me that Anandtech does not know that, with all the contacts Anandtech has in the motherboard/CPU industry.

Damn, folks are screaming all across the web on the Blender/Adobe/SolidWorks graphics forums for some single socket Epyc CPU/SP3 motherboard testing, and folks have to rely on enthusiast websites like Anandtech and others but are mostly being ignored. And the enthusiast websites are trying like hell to sell everyone on Threadripper, and Threadripper is not even a workstation grade part.

(1)

"Best CPUs for Workstations: 2017"[what a joke this article is and Anand Lal Shimpi would have never published this under his watch]

https://www.anandtech.com/show/11891/best-cpus-for-workstations-2017

October 2, 2017 | 11:57 PM - Posted by Allyn Malventano

‘Interleave’ as it relates to that BIOS option is likely for tweaking performance. I don’t think it is required to make a bootable volume >4 SSDs - that’s an Intel VMD limitation.

October 3, 2017 | 12:19 AM - Posted by Paul A. Mitchell (not verified)

> that BIOS option is likely for tweaking performance.

Yes: I thought the same thing, e.g. somewhat similar to the way DRAM is interleaved.

And I thought this tweaking option might also help to explain what you observed earlier:

"Each drive would have to do 3.55GB/s to accomplish this speed. 960 PROs only go 3,300 MB/s when they are reading actual data."

October 3, 2017 | 03:18 AM - Posted by Martin Trautvetter

Page 3:

"With Optane's in a RAID,"

should read

"With Optanes in a RAID," or "With Optane drives in a RAID,"

October 3, 2017 | 02:23 PM - Posted by Allyn Malventano

The offending apostrophe has been sacked.

(Thanks)

October 3, 2017 | 12:49 PM - Posted by James

I skimmed through the article, but I still don't know exactly what Intel is trying to lock out without paying for a ridiculous hardware key. I could see them trying to keep RAID 5 (parity) mode for professional use/market segmentation, although I don't know if many people would choose to pay for just RAID 5. It would presumably use less valuable SSD space than mirroring everything. Massive RAID bandwidth isn't that useful to your average consumer anyway. It is mostly professional applications. Locking RAID modes out that can be used on the low end consumer chipset is bogus though. Intel has used the chipset bottleneck as a way to segment the market for quite a while. You can have all kinds of connections on the chipset, but you can't use many of them at once due to the upstream bandwidth limitation. If you wanted more via more PCI-e off the CPU, you needed to upgrade to a higher end platform. AMD's Ryzen (non-ThreadRipper) is very limited on PCI-e also, so this isn't really unique. Threadripper is already basically a low end Epyc processor, and significantly more expensive platform than standard Ryzen.

Locking parity mode out would probably just set people up to lose data, because they will configure just striping without mirroring. It is kind of like keeping ECC for the enterprise market. I am of the opinion that ECC should be everywhere now; I had a huge number of files get corrupted by an undetected memory error. I am tempted to buy a server-level board.

October 3, 2017 | 02:22 PM - Posted by Allyn Malventano

I believe their concern is that the VROC technology can scale very high on drive counts and has its roots in the enterprise side. The worry would be that some creative enterprise IT guy would just buy a batch of the cheaper desktop parts and roll them out in a larger cluster. The parity or not question is a bit moot since higher performing storage would generally be RAID-0 with a redundant complete system sitting alongside it with a mirror of the data for failover. That's why I suggest at the end of the piece that the hardware key limits should be in drive counts and not in RAID levels. This way pro users could benefit from the same parity volume reliability benefits that Z270 users currently enjoy, but limited to a reasonable consumer-level drive count of 4 (which is the bootable VMD limit anyway).

October 3, 2017 | 09:19 PM - Posted by Dapple (not verified)

Nvidia is facing a similar problem right now. A lot of folks are aware that miners are purchasing a ton of graphics cards, which has led to higher graphics card pricing. What folks don't realize is that consumer Nvidia graphics cards are getting snapped up by the professional market as well. That is why it's hard to find any of the blower-style 1080s in stock (FE and 3rd party).

October 3, 2017 | 09:21 PM - Posted by Dapple (not verified)

I meant 1080 Tis. But they are buying 1080, 1070, and 1060 cards as well. The 1080 Ti blower is preferred because its size and fan setup are the best fit for server cases.

October 4, 2017 | 04:14 AM - Posted by James

Even limiting it to 4 drives would offer huge bandwidth, more than any consumer-level applications really require. It seems like they could have done the consumer version of the chipset with a 4-drive limit (1 VMD controller), and a workstation/server variant with the full 3 VMD controllers enabled, without hardware keys. That still would not really look very good with AMD offering support for a large number of drives with no up charge. It is just a fact that massive bandwidth can be supported with high-end, but still consumer-level, hardware. This is similar to when high end RISC workstations fell to cheap PC hardware.

October 3, 2017 | 06:46 PM - Posted by Paul A. Mitchell (not verified)

> The parity or not question is a bit moot since higher performing storage would generally be RAID-0 with a redundant complete system sitting alongside it with a mirror of the data for failover.

This is EXACTLY what we have been doing with our production workstation, and it works G-R-E-A-T!

We actually run 3 active "tiers" on that workstation, and a fourth tier is backup storage servers:

(1) 14GB ramdisk
(2) RAID-0 w/ 4 x 6G Nand Flash SSDs
(3) 2TB rotating HDD for archiving (5 year warranty)
(4) networked storage servers (e.g. old LGA-775 PCs)

It's also very easy to "sync" our ramdisk with our RAID-0 array e.g.:

xcopy R:\folder O:\folder /s/e/v/d
xcopy O:\folder R:\folder /s/e/v/d
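rem xcopy flag reference: /s copies subfolders, /e includes empty ones,
rem /v verifies each copied file, /d copies only files newer than the destination copy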

Where,

R: = ramdisk drive letter
O: = RAID-0 drive letter

The entire ramdisk is SAVEd and RESTOREd on O: automatically at SHUTDOWN and STARTUP, respectively.

October 3, 2017 | 08:37 PM - Posted by Dapple (not verified)

I have a premium VROC key. It was actually fairly easy to get. Have offered it to a couple of tech folks to test but no takers so far.

October 3, 2017 | 08:40 PM - Posted by Dapple (not verified)

By the way, I have some additional notes about VROC and its setup that may be of help. I assume you can view the email address I've input for this comment. If you want to discuss, just let me know.

October 4, 2017 | 03:50 PM - Posted by preferredcustomer (not verified)

To answer your question about whether you are using VROC or not... I have to say not, because you did not create the array within the BIOS, which means the silly key was not engaged. It still baffles me that Intel will charge for non-Intel M.2s and has not released the key. AMD giving it away for free and having more PCIe lanes will hurt Intel.

I have been bouncing between the 1950X and 7900X for my next system... I already have a Hyper M.2 x16 card for whichever I choose. Where I am, it is difficult to obtain cooling for the 1950X and VROC for the 7900X...

-----------------

OK, why I think you are not using VROC... #1, you are in "pass-thru" mode for VROC. The best way I can explain it is that VROC is acting like an HBA in your setup. This terminology makes sense.

Using the RSTe gives you the ability to create a Software Raid. This was the only way you were able to create the Stripe from the article.

You do not have the DMI 3.0 bottleneck in the results (so you are using CPU lanes, as expected with x16 PCIe), but there is a little overhead for the software stripe. My guess is the pure VROC RAID 0 setup will yield slightly better results if managed from the BIOS.

This can be tested by installing Windows on formatted (secure erased, etc.) M.2s... F6 the driver, and if it doesn't see an array or wants to create an array, we should have the answer... predicting software RAID via VROC pass-thru (aka HBA)...

Lastly, it is possible VROC is using the same formatting of the array as RSTe, just doing it from the BIOS... Intel has kept us in the dark on this crap. I am curious: after building the RAID 0 in RSTe, when you go to the BIOS, does it show RAID 0 or unconfigured like your screenshot...

This is my best guess on what is happening with VROC presently... AMD's style looks to be CPU pass-thru, software RAID...

Thanks for reading my long attempt to explain my idea... //R//

October 9, 2017 | 02:21 PM - Posted by Allyn Malventano

VROC arrays *can* be created in the BIOS, but only with Intel SSDs. Samsung SSDs currently show as not compatible. Further, after this piece went up I was able to create, install windows to, and boot from a RAID-0 array. Stands to reason it's true VROC.

October 4, 2017 | 04:26 PM - Posted by preferredcustomer (not verified)

Hi, just to add... I found a XEON based VROC article that explains how VROC works:

https://www.intel.com/content/www/us/en/software/virtual-raid-on-cpu-vro...

Q7: Is Intel VROC software or hardware RAID... Ans: Hybrid... Seems the VROC uses mainly hardware with software to calculate RAID logic.

Q11: References what needs a VROC Key... I think Intel should revisit for the X299 chipsets

Q12: How is Intel VROC different from Intel RSTe...

I tried to copy and paste... however Intel locked the PDF from the link above... This is a good read to understand more in depth how VROC works... //R//

October 5, 2017 | 01:42 AM - Posted by Habibird (not verified)

Hi Allyn,

Excellent review and very thorough testing. A couple of questions:

1. I can't seem to find the VROC key for sale anywhere. How did you obtain yours?

2. The SM961 is listed as being compatible with VROC. I wonder if the SM960 is also compatible to create a bootable VROC Raid 0 or Raid 1 array since it is just the consumer version of the OEM SM961?

ps: Above questions also open to anyone who has the hardware and has tested this.

Thanks in advance gents/ladies.

October 12, 2017 | 05:43 PM - Posted by Allyn Malventano

1. All testing in this article was done without a VROC key.

2. They should be, but no way to know for sure as we don't know how Intel is limiting the compatibility.

October 13, 2017 | 09:11 AM - Posted by Michael M (not verified)

Third party SSDs like the SM961 are supported only by Intel XEON Chipsets, like the C622 or C4xx.

Even then, Intel does not mention if they are bootable under VROC.

Intel informed me by email that Intel does not support VROC for the X299, and that the mainboard manufacturers offering VROC are responsible for its proper functioning.

Very frustrating that ASUS puts the blame on Intel, not offering any information, besides spreading false information - at least from the German support.

I want to run a real VROC! Not in pass-through-mode.

Anyhow, thanks for sharing this excellent test!

Michael

October 5, 2017 | 01:28 PM - Posted by Paul A. Mitchell (not verified)

More details are in Podcast #470:
https://www.youtube.com/watch?v=4V2o91CSWXc

October 13, 2017 | 02:02 AM - Posted by Michael M (not verified)

I am struggling already with a very basic problem.
As far as I understood, I will have to install the RSTe (enterprise) driver rather than the RST.
However, the RSTe cannot be installed "on my platform".
Presumably it does not support Win10 x64.
In general, Intel does not officially support the X299 with that driver...

Have you installed the RSTe on a Win10-platform? Which version?

Many thanks
Michael
