
Intel Skylake / Z170 Rapid Storage Technology Tested - PCIe and SATA RAID *updated*

Subject: Storage
Manufacturer: Intel
Tagged: Z170, Skylake, rst, raid, Intel

PCIe RAID Results

...so after some more communication with ASUS, we were told that for PCIe RAID to work properly, we *must* flip this additional switch:


As it turns out, the auto setting is not intelligent enough to pick up on all of the other options we flipped. Do note that switching to the x4 setting disables the 5th and 6th ports of the Intel SATA controller. What is happening at an internal level is that the pair of PCIe lanes that were driving that last pair of SATA ports on the Intel controller are switched over to become the 3rd and 4th PCIe lanes, now linked to PCIe_3 on this particular motherboard. Other boards may be configured differently. Here is a diagram that will be helpful in understanding how RST will work on Z170-series motherboards moving forward:


The above diagram shows the portion of the 20 PCIe lanes that can be linked to RST. Note this is a 12-lane chunk *only*, meaning that if you intend to pair up two PCIe x4 SSDs, only four lanes will remain, which translates to a maximum of four remaining RST-linked SATA ports. Based on this logic, there is no way to have all six Intel SATA ports enabled while also using a PCIe RAID. For the Z170 Deluxe configuration we were testing, the center four lanes were SATA, M.2 uses the left four, and PCIe slot 3 uses the right four. Note that to use those last four, the last two Intel SATA ports are disabled. The Z170 Deluxe does have eight total SATA ports, so the total dropped to six, but only four of those are usable with RST in a RAID configuration.
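To make that budget concrete, here is a minimal sketch of the tradeoff described above. This is a very rough model based on the Z170 Deluxe layout as we tested it; the lane-per-port assumption and the names are ours for illustration, not Intel's official HSIO map:

```python
def rst_sata_ports_available(pcie_x4_ssds, total_rst_lanes=12, wired_sata_ports=6):
    """How many RST-linked SATA ports can remain once PCIe RAID members take lanes.

    Rough model of the diagram above: each PCIe SSD takes 4 lanes out of the
    12-lane RST-capable chunk, and each leftover lane can back one SATA port,
    up to the number of ports the board actually wires to RST.
    """
    lanes_left = total_rst_lanes - 4 * pcie_x4_ssds
    if lanes_left < 0:
        raise ValueError("the RST-capable chunk cannot host that many x4 SSDs")
    return min(wired_sata_ports, lanes_left)

for ssds in (0, 1, 2):
    print(f"{ssds} x4 PCIe SSD(s) in RST -> up to {rst_sata_ports_available(ssds)} RST SATA ports")
# 0 -> 6, 1 -> 6, 2 -> 4: pairing two x4 SSDs is exactly what forces SATA
# ports 5 and 6 off, as described above.
```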

Once we made the x4 change noted above, everything just started working as it should have in the first place (even booting from SATA, which was previously disabled when the PCIe-related values were not perfectly lined up). Upon booting off of that SATA SSD, we were greeted with this when we fired up RST:


...now there's something you don't see every day. PCIe SSDs sitting in a console that only ever had SATA devices previously. Here's the process of creating a RAID:


You can now choose SATA or PCIe modes when creating an array.


Stripe size selection is similar to what was offered in prior versions. I recommend sticking with the default that RST chooses based on the capacity of the drives being added to the array. You can select a larger stripe size at the cost of lower small file performance. I don't recommend going lower than the recommended value, as it typically results in reduced efficiency of the RST driver (the thread handling the RAID spends its CPU time and memory allocation just handling the lookups), which results in a performance hit on the RAID.
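As a rough illustration of why a smaller stripe means more work per I/O, here is generic RAID-0 address math (not RST's actual implementation) counting how many per-drive segments one large transfer splits into at different stripe sizes:

```python
def split_into_segments(offset_bytes, length_bytes, stripe_size, num_members):
    """Yield (member, member_offset, segment_length) for one striped request."""
    pos, remaining = offset_bytes, length_bytes
    while remaining > 0:
        stripe_index = pos // stripe_size           # which stripe across the array
        member = stripe_index % num_members         # which drive holds that stripe
        within = pos % stripe_size                  # offset inside the stripe
        seg_len = min(stripe_size - within, remaining)
        member_offset = (stripe_index // num_members) * stripe_size + within
        yield member, member_offset, seg_len
        pos += seg_len
        remaining -= seg_len

for stripe_kib in (4, 16, 64):
    segments = list(split_into_segments(0, 128 * 1024, stripe_kib * 1024, 2))
    print(f"{stripe_kib:>2}KB stripe: one 128KB read splits into {len(segments)} segments")
# 4KB -> 32 segments, 16KB -> 8, 64KB -> 2: the smaller the stripe, the more
# per-request bookkeeping the RAID layer has to do.
```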


With the array created, I turned up all of the caching settings (only do this in production if you really need the added performance, and only if the system is behind a UPS). I will also be doing this testing with TRIMmed / empty SSD 750s. This is normally taboo, but I want to test the maximum performance of RST here - *not* the steady-state performance of the SSDs being tested.

The following results were attained with a pair of 1.2TB SSD 750s in a RAID-0 at the default stripe size of 16KB.

Iometer sequential 128KB read QD32:


Iometer random 4K read QD128 (4 workers at QD32 each):


ATTO (QD=4):


For comparison, here is a single SSD 750, also ATTO QD=4:


So what we are seeing here is that the DMI 3.0 bandwidth saturates at ~3.5 GB/sec, so this is the upper limit for sequentials on whichever pair of PCIe SSDs you decide to RAID together. This chops off some of the maximum read speed of the 750s, but 2x the write speed of a single 750 is still less than 3.5 GB/sec, so writes get a neat doubling in performance for an effective 100% scaling. I was a bit more hopeful for 4K random performance, but a quick comparison of the 4K results in the two ATTO runs above suggests that even with caching at the maximum setting, RST topped out at a 4K random figure equal to that of a single SSD 750. Testing at stripe sizes smaller than the default 16K did not change this result (and performed lower overall).
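For reference, the arithmetic behind that ceiling works out as follows. DMI 3.0 is an x4 link at 8 GT/s with 128b/130b encoding, and the single-drive figures used below are Intel's rated sequential specs for the 1.2TB SSD 750 (roughly 2.4 GB/s read, 1.2 GB/s write), not our measurements:

```latex
% DMI 3.0 raw link rate (x4 at 8 GT/s, 128b/130b encoding):
4 \times 8\,\mathrm{GT/s} \times \tfrac{128}{130} \div 8\,\tfrac{\mathrm{bits}}{\mathrm{byte}} \approx 3.94\ \mathrm{GB/s} \quad (\approx 3.5\ \mathrm{GB/s}\ \text{usable after packet overhead})

% RAID-0 scaling of two SSD 750s against that ceiling:
2 \times 1.2\ \mathrm{GB/s}\ \text{(write)} = 2.4\ \mathrm{GB/s} < 3.5\ \mathrm{GB/s} \;\Rightarrow\; \text{writes scale fully}
2 \times 2.4\ \mathrm{GB/s}\ \text{(read)} = 4.8\ \mathrm{GB/s} > 3.5\ \mathrm{GB/s} \;\Rightarrow\; \text{reads cap at the DMI limit}
```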

The takeaway here is that PCIe RAID is here! There are a few tricks to getting it running, and it may be a bit buggy for the first few BIOS revisions from motherboard makers, but it is a feature that is more than welcome, and it works reasonably well considering the complications Intel had to overcome to get all of this performing as well as it does. So long as you are aware of the 3.5 GB/sec hard limit of DMI 3.0, and realize that 4K performance may not scale like it used to with SATA devices, you'll still be getting better performance than you can get from any single PCIe SSD to date!


August 5, 2015 | 12:17 PM - Posted by Edwin Liew (not verified)

Hey Allyn, have you tested the boot up speeds with the new z170 boards using the intel 750? I was hoping the enumeration fault was gone and we wouldn't need to use the csm to boot into them, guess we'll have to wait for another gen..?

August 5, 2015 | 01:27 PM - Posted by Allyn Malventano

I can't say for sure since the implementation we got in for testing still had a few bugs to work out. One issue was that we couldn't get the CSM disabled - it would re-enable again after a reboot. 

August 11, 2015 | 07:14 AM - Posted by Edwin Liew (not verified)

Any update to this, Allyn? :P

August 11, 2015 | 07:16 AM - Posted by Edwin Liew (not verified)

oh and er, maybe some academic tests of boot times with:
raid 750x2
single 750

August 5, 2015 | 03:26 PM - Posted by Cyclops

Why are you so focused on the SATA performance, Allyn? In the real world, would you rather RAID 0 6 x 240GB SSDs or grab a single 1.2TB SSD 750?

SATA is dead to me. It's great for general usage, but you can't compare the 65k/65k queues/commands of NVMe with the 1/32 of SATA. On the other hand, I am disappointed that getting an NVMe RAID solution is still problematic.

Another thing. The PCH can provide 20 PCIe 3.0 lanes, which translates roughly to 20 GB/s. The DMI, however, can't process more than 8 GT/s. What's the point in including all of those lanes when DMI is still the bottleneck?

August 5, 2015 | 05:36 PM - Posted by Anonymous (not verified)

That, and no real specifications on the GPU that's integrated into the processor. The DMI is not the only problem; there are plenty of older Intel SKUs in the retail channels, and plenty of time to wait for Zen's arrival. I'm still coming up empty in my search for a Carrizo-based laptop with downgrade rights to Windows 7, and I'll pay a premium to get Carrizo's graphics rather than wait even longer while the i7's quad cores on my current laptop struggle at 100% for a few hours just to render a single image. Carrizo's latest GCN cores and the latest software to utilize HSA have moved rendering more fully onto the GPU, so good riddance to the need for quad cores and 8 threads for rendering workloads! Come on HP, where are the ProBooks with a 35 watt Carrizo and a 1920 x 1080 screen, and maybe a ProBook with a discrete GPU to pair with the integrated one? I like business laptops because they have the Windows 7 options; just show me some options for better screen resolutions and Carrizo.

August 5, 2015 | 09:56 PM - Posted by Allyn Malventano

SATA still has its uses. Cost/GB is still lower than PCIe devices and there are still some who want GPUs in all of their PCIe slots. Booting NVMe PCIe is still not as elegant / trouble free as it should be, either. Also, if it wasn't bottlenecked somewhere, 6xSATA in a RAID-0 would give many PCIe SSDs a run for their money, especially in write speeds.
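To put a rough number on that last point (assuming roughly 500 MB/s sequential per SATA 6Gb/s SSD, which is our assumption rather than a measured figure):

```latex
6 \times 500\ \mathrm{MB/s} \approx 3.0\ \mathrm{GB/s}
```

That is within sight of the ~3.5 GB/sec DMI 3.0 ceiling and well above the ~1.2 GB/sec rated sequential write of a single SSD 750.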

August 14, 2015 | 04:37 PM - Posted by Anonymous (not verified)

This is so that you can physically hook up a ton of devices. The expectation is that you don't use them all at once.

The CPU, in the end, can only process data at a certain speed. If you were able to hook up enough SATA drives to saturate 20 PCI-Express 3.0 lanes, and let's go with your figure of 20MB/s, that's awfully close to the maximum total memory bandwidth of a DDR3 system. You'd choke everything else.. it just doesn't make sense at least on a consumer system.

And hey, SATA is a dog now, but remember how far that has come since the IDE days. Now it's time for NVMe, certainly the next evolution. But in all honesty, while the geek in me loves all this, at the moment I am left wondering how much I/O I really need. As it is, I'm mostly blocked by a 75/75 internet connection in terms of I/O, and that's easily taken care of by one SATA port.

August 14, 2015 | 04:38 PM - Posted by Anonymous (not verified)

.. and certainly 20GB/s not MB/s lol :)

August 5, 2015 | 11:36 PM - Posted by MRFS (not verified)

How about any of the stable PCIe 3.0 add-on RAID controllers with 2 fan-out ports supporting a total of 8 x 6G SSDs?

If my math is correct, such x8 edge connectors support an upstream bandwidth of 8.0 GT/s / 8.125 bits per byte x 8 lanes ~= 7.88 GB/s
(with PCIe 3.0's 128b/130b "jumbo frame" encoding, 130 bits carry 16 bytes, i.e. 8.125 bits per byte).

Allyn, I know you don't prefer Highpoint AOCs, but doesn't PCPER have a few of the latest PCIe 3.0 RAID controllers in your parts inventory e.g. LSI, ATTO, Adaptec et al.?

p.s. Plug-and-Play, where are you?

MRFS

August 5, 2015 | 11:54 PM - Posted by MRFS (not verified)

At Newegg, search for "12Gb/s RAID Controller".

It would be nice if SATA SSD manufacturers
upped their clock speeds to 12 Gbps.

Or, am I dreaming again?

MRFS

August 6, 2015 | 12:17 AM - Posted by MRFS (not verified)

Not to be forgotten: 12Gbps SAS SSDs:

Toshiba Raises The Bar For 12 Gb/s SAS SSD Performance With PX04S Series

http://www.tomsitpro.com/articles/toshiba-ssd-12gbs-sas-px04s,1-2778.html

August 6, 2015 | 03:55 AM - Posted by Epicman (not verified)

Really simple question here: can I just grab any Z170 mobo (with a 32Gbps M.2 slot) and BOOT Win 8.1 / 10 from a SM951?

August 6, 2015 | 03:48 PM - Posted by Allyn Malventano

Yes, that should work without issue (even with the NVMe version of the SM951)

October 14, 2015 | 09:41 PM - Posted by Md (not verified)

Hi Allyn,
Follow up simple question, please. Can you give, or direct me to, steps to installing Windows 7 Pro onto SM951 installed on m.2 slot of Asus Z-170-A mobo as boot drive?
Have read endless threads on problems and still not clear on how to.
So far your thread is great...but not seeking raid...just booting off the SM951 with Windows 7..
Thanks in advance..

August 6, 2015 | 05:21 AM - Posted by Frost (not verified)

Do you plan on testing NVMe PCIe Gen 3 SSD connected to the Z170 PCH vs connected to native CPU lanes on X99 to see the difference in performance (latency etc.)?

August 6, 2015 | 03:49 PM - Posted by Allyn Malventano

Some quick tests show that a single SSD 750 is very close in performance on either side of the chipset. Will do more detailed testing in the future, but it's nothing to be concerned about.

August 10, 2015 | 05:10 AM - Posted by Frost (not verified)

Please also include test cases when there's simultaneous I/O going via PCIe SSD and USB 3 over DMI 3 to see how things slow down when DMI is saturated from multiple sources.

October 1, 2015 | 02:23 PM - Posted by Livern (not verified)

Same question I had!

August 6, 2015 | 12:23 PM - Posted by Anonymous (not verified)

I've been waiting for an article like this, many thanks.

As far as the results, what a shame. Turns out the DMI bottleneck was not the only one at work.

August 6, 2015 | 12:50 PM - Posted by Dark_wizzie

Does the 750 or sm951 use CPU lanes on x99 mobos? If so, how does its performance compare to PCH lanes of the Skylake stuff?

On a side note, a storage guy on OCN told me that Skyrim with a ton of mods probably requires a lot of sequential reads. That seems contrary to what I know about SSDs. Shouldn't it be random reads? 4k, 16k.

And finally, I'd make a trace analysis joke but I think that's pretty dead at this point, lol.

August 6, 2015 | 03:53 PM - Posted by Allyn Malventano

Skyrim mods are likely being read out at 128k sequential. A given texture is probably >16k in size, and would look more sequential to an SSD than random.

August 6, 2015 | 09:40 PM - Posted by Dark_wizzie

Thanks a lot Allyn, that's very useful information to me.

What about the PCH/CPU pcie lane stuff?

August 7, 2015 | 01:01 PM - Posted by Allyn Malventano

DMI doesn't count against the 16 CPU lanes of Skylake. Also, I initially misunderstood the info we were passed - there are 20 downstream lanes from the PCH (not 20-4 as I initially thought). This means you can have 36 total PCIe lanes connected to a Skylake CPU (though 20 of those are bottlenecked by an x4 DMI link).

August 9, 2015 | 05:59 AM - Posted by Dark_wizzie

So the difference between the 16 lanes in Skylake for the GPU and the rest of the PCIe lanes (20 left) for other stuff is that the 16 lanes for the GPU have a much higher bandwidth compared to the other 20 lanes?

I heard people say 'the 16 lanes are native, direct connections to the CPU whereas the rest are handled by the chipset' which hints that there might be some sort of caveat with the rest of the PCIE lanes beyond just bandwidth. Higher latencies? Or something.

Nice work on these articles, you produce some good stuff. Really hope you get around to answering this question; it's my last burning SSD question.

December 22, 2015 | 04:00 AM - Posted by patrickpoet (not verified)

There wouldn't, in general, be much of a latency difference, since the DMI 3.0 connection is very similar to PCIe 3.0, i.e. a packet bus. The problem, though, is that they will all share 3.93 GB/s (31.44 Gb/s) via DMI 3.0, and they will also share that bandwidth with 6 other HSIO ports that are used for USB 3.x.

August 6, 2015 | 02:19 PM - Posted by MRFS (not verified)

cf.
http://forums.storagereview.com/index.php/topic/37949-toshiba-px04s-ente...

Why aren't SATA SSD manufacturers upping their clock speeds to 12 Gb/s too?

Even though SATA SSDs are necessarily single-ported (as compared to SAS double-porting), an 8G clock is already a standard feature of the PCIe 3.0 specification, and 12 Gb/s RAID controllers are now available from a number of reputable vendors e.g. Areca, LSI, Adaptec, ATTO, Intel etc.

Moreover, SAS backplanes are designed to work with SATA drives.

What am I missing here, please?

MRFS

August 6, 2015 | 04:35 PM - Posted by Anonymous (not verified)

It seems like DMI is still a limiting factor. If they are going to put 20 PCIe links on the chipset, then it seems like they should increase the speed of the uplink to the CPU more significantly.

August 6, 2015 | 05:51 PM - Posted by fredey (not verified)

Could you test the Intel RST tech with a bunch of SATA SSDs? Because then having a RAID 0 cache would be like an ultimate all-in-one solution.

August 6, 2015 | 06:10 PM - Posted by Rick0502 (not verified)

so if 3.5gb/s is the limit of dmi 3.0, why have all these ports with a limit so low? you have 10-12 usb 3.0 ports, usb 3.1, m.2, sata 3 ports. i feel like it would be easy to max that out with all the connections you have.

August 6, 2015 | 06:33 PM - Posted by wizpig64 (not verified)

PCIe RAID is really exciting, but for the money I think I'd still just RAID-0 a pair of 850 EVOs for daily use.

August 7, 2015 | 01:06 AM - Posted by MRFS (not verified)

We experimented with a RAID-0 of 4 x Samsung 840 Pro SSDs, and our sequential READ speed exceeded 1,800 MB/sec with PCIe 2.0 and a cheap Highpoint RocketRAID 2720SGL:

http://supremelaw.org/systems/io.tests/4xSamsung.840.Pro.SSD.RR2720.P5Q....

http://supremelaw.org/systems/io.tests/4xSamsung.840.Pro.SSD.RR2720.P5Q....

http://supremelaw.org/systems/io.tests/4xSamsung.840.Pro.SSD.RR2720.P5Q....

http://supremelaw.org/systems/io.tests/4xSamsung.840.Pro.SSD.RR2720.P5Q....

The P5Q Deluxe machine above remains very snappy: we plan to do the same with 4 x Samsung 850 Pro SSDs.

With a 10-year factory warranty, the speed will be fast enough e.g. to SAVE and RESTORE 12GB ramdisks at SHUTDOWN and STARTUP using RamDisk Plus.

Icy Dock now makes a nice 8-slot 5.25" enclosure:

http://www.newegg.com/Product/Product.aspx?Item=N82E16817994178&Tpk=N82E...

August 6, 2015 | 08:02 PM - Posted by Austin (not verified)

Allyn,

How well do we think z170 boards will boot from nvme SSDs?

August 6, 2015 | 08:04 PM - Posted by Austin (not verified)

specifically m.2 nvme

August 7, 2015 | 04:16 AM - Posted by John H (not verified)

Hey Allyn - are you 100% sure all 6 of those sata SSDs were communicating at 6 gbps?

August 7, 2015 | 09:56 AM - Posted by Allyn Malventano

Individually they were going full speed. 

August 7, 2015 | 06:35 AM - Posted by JoshZ (not verified)

hi Allyn,

(1) when testing the PCIe raid0, did you try Windows Write-cache buffer flushing set to default Enabled (i.e. disabling Intel proprietary caching options)? i think it maybe actually gave me better results in some tests a while back with an ssd raid..

(2) if you create 6-ssd SATA raid0 in Windows intel RST GUI, does the wizard also happen to give you choice between SATA and PCIe controller options? could it be that with PCIe controller setting chosen, the 6-ssd raid could get the full 3+ GB/s bandwidth?

August 7, 2015 | 09:57 AM - Posted by Allyn Malventano

Performance was lower with it enabled. 

When you select PCIe, you can only choose PCIe devices. 

August 7, 2015 | 07:44 AM - Posted by CharlieK (not verified)

So 3x sm951 on the asrock z170 extreme7 would still be capped at 3.5GBps?

August 7, 2015 | 01:03 PM - Posted by Allyn Malventano

There would be no way to connect the third one and use it with RST.

August 10, 2015 | 03:03 PM - Posted by DavidF (not verified)

Are you sure, Allyn? Reading the manual, this looks very do-able.

It would mean no SATA drives tho...

August 16, 2015 | 02:22 PM - Posted by Anonymous (not verified)

http://www.pcper.com/reviews/Storage/Intel-Skylake-Z170-Rapid-Storage-Te...

Why did ASRock make three Ultra M.2 slots on the Z170 Extreme7+, then?

December 22, 2015 | 04:03 AM - Posted by patrickpoet (not verified)

That isn't true; people have tested with three drives connected to M.2 in RAID. It works fine. It does away with all of the SATA ports except for four, as per their manual.

August 7, 2015 | 08:06 AM - Posted by Anonymous (not verified)

Can I run 2 or 3 Samsung 951s in RAID 0???

Can I get a bus speed of 200MHz?

August 8, 2015 | 11:30 PM - Posted by MRFS (not verified)

BTW, Allyn,

NICE WORK!

You be THE BEST, Man :-)

MRFS

August 8, 2015 | 11:44 PM - Posted by CSR (not verified)

Hey guys, what's the version of Intel Rapid Storage Technology that you have installed, and where did you get it? I am not seeing the options shown in yours.
My setup: ASUS Z170-DELUXE, XP951 (512GB) and Intel 750 1.2TB drives.
I created a RAID0 of the Intel 750 + XP951. Capacity lowered to 953GB.
Able to see the drives in RAID0 during install, but why did the capacity lower to 953GB?
W10 install went smoothly but it does not boot; I tried various changes, still unable to boot.

Can I have the Bios settings to make UEFI boot with raid0.

Thanks for your article.

August 10, 2015 | 08:00 PM - Posted by MRFS (not verified)

2 members of a RAID-0 array must contribute the exact same amount of storage to that array, hence -- as summarized in a reply above -- RST allocated only 512GB (unformatted) from the Intel 750, as follows:

512GB (xp951) + 512GB (Intel 750) = 1,024 GB unformatted

Using a much older version of Intel Matrix Storage Console, it reports 48.83GB after formatting a 50GB C: partition on 2 x 750GB WDC HDDs in RAID-0.

Using that ratio, then, we predict:

48.83 / 50 = .9766 x 1,024 = 1,000.03 GB (formatted)

You got 953GB == close enough, considering that we are looking at entirely new storage technologies here, and later operating systems will default to creating shadow and backup partitions, which explain the differences between 1,000 and 953.

Hope this helps.

MRFS

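(The 953GB figure also matches a straight decimal-to-binary unit conversion; Windows reports capacity in GiB while drive capacities are specified in decimal GB:)

```latex
2 \times 512\ \mathrm{GB} = 1024 \times 10^{9}\ \mathrm{bytes} = \frac{1024 \times 10^{9}}{2^{30}}\ \mathrm{GiB} \approx 953.7\ \mathrm{GiB}
```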

August 10, 2015 | 08:06 PM - Posted by MRFS (not verified)

Did you format with GPT or MBR?

I seem to recall that UEFI booting requires a GPT C: partition.

Please double-check this, because I've only formatted
one GPT partition in my experience (I'm still back at XP/Pro).

Allyn, if you are reading this, please advise.

THANKS!

MRFS

August 10, 2015 | 03:16 AM - Posted by Anonymous (not verified)

Capacity lowered to match the smaller drive. Anyone able to boot Windows from a RAID 0 PCIe setup?

August 16, 2015 | 12:26 PM - Posted by Anonymous (not verified)

http://www.pcper.com/reviews/Storage/Intel-Skylake-Z170-Rapid-Storage-Technology-Tested-PCIe-and-SATA-RAID/PCIe-RAID-Resu

Can the Samsung SM951 128GB AHCI work in a 3-disk RAID 0?
If I buy 3 disks like this:
http://www.amazon.com/Samsung-SM951-128GB-AHCI-MZHPV128HDGM-00000/dp/B00VELD5D8/ref=sr_1_3?s=pc&ie=UTF8&qid=1439745389&sr=1-3&keywords=Samsung+SM951

What speed can it reach?
What is the difference between the Samsung SM951 and the SSD 750?

August 31, 2015 | 05:13 PM - Posted by Anonymous (not verified)

I am buying a Gigabyte G1 Gaming board; it has 2 M.2 slots on the board and says it can do RAID0. If I put 2 SM951 256GB M.2 cards on the board, should I be able to RAID 0 them and also boot, and how would I set it up? I was also planning on a 2TB HDD for storage. Any thoughts?

September 27, 2015 | 12:36 PM - Posted by Anonymous (not verified)

Good luck. I just returned my G1 Gaming; I could not get CSM disabled to stay, and never got the Intel RAID setup to appear in the BIOS.

October 8, 2015 | 01:47 PM - Posted by Grandstar (not verified)

This was just a fine article, advanced and I enjoyed reading it. Thanks

Way back, I was one of the first to run bootable dual SSDs in RAID 0 with an LSI RAID controller. This month I hope to purchase the new Samsung 950 Pro V-NAND M.2.

I want to run a pair in RAID 0.

But, and I quote the author: So what we are seeing here is that the DMI 3.0 bandwidth saturates at ~3.5 GB/sec, so this is the upper limit for sequentials on whichever pair of PCIe SSDs you decide to RAID together.

Since the new Samsungs are in broad terms:
Sequential Read 2,500 MB/s
Sequential Write 1,500 MB/s

Ergo, running dual Samsung 950 Pro V-NAND M.2 drives would have a theoretical sequential read of 5 GB/s, thus surpassing the bandwidth of DMI 3.0 by 1.5 GB/s.

Question:

Even if a hardware raid controller is used, it will be limited to DMI bandwidth?

I take it there is no workaround for this bottleneck at the moment...

October 9, 2015 | 05:19 PM - Posted by Allyn Malventano

Even if a hardware raid controller is used, it will be limited to DMI bandwidth?

Only if it is connected to a PCIe slot behind DMI.

October 10, 2015 | 12:23 PM - Posted by parrotts (not verified)

Hi Allyn,
(sorry for my bad English)
I'm trying to install 2 SM951 NVMe drives in RAID 0 on my Sabertooth X99 and I'm having some difficulty doing it.
One SSD is installed in the native M.2 slot and one in a PCIe adapter in the second slot.
The BIOS sees both.
I can install the OS on either one.
I tried some changes in the BIOS like in your article.
I tried modding the BIOS file to update to version 14 of the Intel RST driver (in fact, in the advanced mode of the BIOS the Intel Rapid Storage page shows version 14, but no disk is detected).
In the NVMe configuration they are listed as
bus 3 sm951.....
bus 5 sm951.....
Can you help me in some way?

thanks
parrotts

October 12, 2015 | 12:05 PM - Posted by Grandstar (not verified)

Allyn,

By implementing Raid 0 on a ASRock Z170 Extreme7 using 3 x Samsung 950 Pro V-NAND M.2 (upcoming) inserted into the three available Ultra M.2 ports (which are connected directly to the CPU, i.e not behind a chipset with DMI 3.0 limit of 3.5 GB/sec), wouldn't it be possible to achieve about 7.5GB/s sequential read?

October 16, 2015 | 01:14 PM - Posted by Grandstar (not verified)

As it turns out, the Intel 750 was tested by some folks at ASUS with a sequential read of over 2.7 GB/s and a write of 1.3 GB/s. Who needs anything else? I don't think Samsung will be as fast, and it's late to the market.

October 18, 2015 | 06:08 PM - Posted by Md (not verified)

October 14, 2015 | 09:41 PM - Posted by Md (not verified)
Hi Allyn,
Follow up simple question, please. Can you give, or direct me to, steps to installing Windows 7 Pro onto SM951 installed on m.2 slot of Asus Z-170-A mobo as boot drive?
Have read endless threads on problems and still not clear on how to.
So far your thread is great...but not seeking raid...just booting off the SM951 with Windows 7..
Thanks in advance..

November 3, 2015 | 10:09 AM - Posted by Hugo Páscoa (not verified)

Hi, did you try to install on a normal SATA SSD, then clone to the SM951?

November 21, 2015 | 11:02 AM - Posted by david (not verified)

Hi,

If I understand correctly, the max bandwidth over the DMI 3.0 Z170 SATA Express connector is 3.5 GB/s? What about on the X99 chipset? Is it also possible to RAID M.2 on X99? And why do motherboard manufacturers advertise the speed limit at 32Gb/sec?

Confused.

Thanks

December 15, 2015 | 07:20 AM - Posted by Jade (not verified)

Hi Ryan, I have some questions about the RAID 0 configuration of NVMe M.2 SSDs. If I make a RAID 0 setup with 2 NVMe SSDs running on 2 PCIe 3.0 x4 links, is DMI 3.0 sufficient to carry such bandwidth? From what I know, one PCIe 3.0 x4 is equal to 32Gbps.. So in RAID 0, it will be 64 Gbps, yes?

January 22, 2016 | 11:28 AM - Posted by Anonymous (not verified)

I really appreciate the testing you have done here. I built a Z170 Deluxe system with a PCIe 750 SSD and a GTX 980 Ti, and called Intel for support after I buggered up my UEFI settings. My support guy couldn't find any answers (I'm sure he was googling everything I had done), but having him on the line gave me the courage to forge ahead, and I finally got the drive back and working. I do think I have a ways to go, though. I only have two SATA ports available now, SATA 3 and 4. In reading your posts, I believe I forgot to go back and disable Hyperkit. Obviously Hyperkit isn't needed for a drive installed in PCIe slot 3. Doh!
Keep up the good work

October 22, 2016 | 12:22 PM - Posted by Anonymous (not verified)

For those of you wanting to do m.2 RST SSD caching, to get the drive to show up just change the boot option as mentioned in the article. Thank you guys so much for finding and sharing these settings!!

December 12, 2016 | 03:09 PM - Posted by David Byrne (not verified)

Thanks guys, the diagram on the next page on the lanes helped me a lot. I have been struggling to get my PCIe RAID running alongside a SATA one; now I understand why, but I also see how I can do it.
