
Intel Skylake / Z170 Rapid Storage Technology Tested - PCIe and SATA RAID *updated*

Subject: Storage
Manufacturer: Intel
Tagged: Z170, Skylake, rst, raid, Intel

A quick look at storage

** This piece has been updated to reflect changes since first posting. See page two for PCIe RAID results! **

Our Intel Skylake launch coverage is intense! Make sure you hit up all the stories and videos that are interesting for you!

When I saw the small amount of press information provided with the launch of Intel Skylake, I was both surprised and impressed. The new Z170 chipset was going to have an upgraded DMI link, nearly doubling throughput. DMI has long been suspected as the reason Intel SATA controllers peg at ~1.8 GB/sec, which limits the effectiveness of a RAID with more than three SSDs. Improved DMI throughput could enable a 6-SSD RAID-0 exceeding 3 GB/sec, which would compete with PCIe SSDs.
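Those DMI figures can be sanity-checked with quick back-of-the-envelope math. This is a rough sketch only: it assumes DMI behaves like a four-lane PCIe link (2.0 signaling for DMI 2.0, 3.0 signaling for DMI 3.0) and ignores everything beyond line-coding overhead.

```python
def link_gbytes_per_sec(gt_per_sec, lanes, encoding_efficiency):
    """Raw transfer rate x lanes x line-code efficiency, converted to GB/s."""
    return gt_per_sec * lanes * encoding_efficiency / 8  # 8 bits per byte

# DMI 2.0: PCIe 2.0 signaling -- 5 GT/s per lane, x4, 8b/10b encoding
dmi2 = link_gbytes_per_sec(5.0, 4, 8 / 10)

# DMI 3.0: PCIe 3.0 signaling -- 8 GT/s per lane, x4, 128b/130b encoding
dmi3 = link_gbytes_per_sec(8.0, 4, 128 / 130)

print(f"DMI 2.0: ~{dmi2:.2f} GB/s")   # ~2.00 GB/s
print(f"DMI 3.0: ~{dmi3:.2f} GB/s")   # ~3.94 GB/s
```

The jump from 8b/10b to 128b/130b encoding, combined with the 5 → 8 GT/s signaling rate, is what "nearly doubles" the usable throughput.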

Speaking of PCIe SSDs, that’s the other big addition to Z170. Intel’s Rapid Storage Technology was going to be expanded to include PCIe (even NVMe) SSDs, with the caveat that they must be physically connected to PCIe lanes falling under the DMI-connected chipset. This is not as big of an issue as you might think, as Skylake does not have the 28 or 40 PCIe lanes seen with X99 solutions. Z170 motherboards only have to route 16 PCIe lanes from the CPU to either two (8x8) or three (8x4x4) PCIe slots, and the remaining slots must all hang off of the chipset. This includes the PCIe portion of M.2 and SATA Express devices.

Continue reading our preview of the new storage options on the Z170 chipset!

I spent yesterday connecting a Skylake system to many different storage devices, starting with the PCIe side. As you can see above, the UEFI has been updated to include additional options specific to Intel’s new RST additions. Flipping the various switches diverts control of the connected device over to RST. With a pair of Intel SSD 750s installed, one via PCIe_3 and the other via the U.2/M.2 Hyper Kit adapter, we were supposed to find an additional option elsewhere in the BIOS. As it turned out, this option did not appear until we forced UEFI in the Compatibility Support Module (CSM) options:

With that last option tweaked, we found what we were looking for:

This is an interesting addition as well, as in the past you could only create RAID volumes from within the option ROM presented during boot (Ctrl-I).

Creating a PCIe RAID here was no more difficult than creating one from SATA devices.

With the PCIe RAID enabled, all we could boot from was our USB Windows installer drive.

Unfortunately that is where the fun ended (**EDIT** We did get this working! Check out Page 2 for the details and results). While we could create a RAID of PCIe devices, the same combination of hardware and software configuration changes that made the RAID possible also removed our ability to boot the system. We could not even test the PCIe RAID from within Windows when booting from a single SATA device. Flipping any single option the other way would enable booting from SATA, but that same change would also make the PCIe RAID disappear. I ran the same gauntlet with a pair of Plextor M6E SSDs, with the same result. It was probably the most frustrating game of catch-22 I’ve ever played, so we had to shelve this testing until we can get some higher-level support from ASUS and Intel – I’m guessing in the form of a bug-fixed UEFI firmware.

SATA RAID Testing:

With PCIe testing on hold, I moved on to SATA. If the new DMI link could handle upwards of the claimed 3.5 GB/sec to a connected PCIe RAID, I wanted to see how that higher ceiling would affect a SATA RAID. I broke out a six pack of recent Intel SATA 6Gb/sec SSDs and scoured the office for SATA power cables. With all six SATA 6Gb/sec ports populated, I created a RAID-0, enabled the highest level of caching, and ran a quick throughput check:

With SATA speeds apparently still capped at less than 2 GB/sec, I was once again disappointed. While these throughput figures are ~100-200 MB/sec faster than what I’ve seen on Z97 / X99 RAIDs, it appears the link between the SATA controller and the rest of the chipset is to blame for the limit we are seeing here. It stands to reason that in those older chipsets, the SATA controller was designed to run just slightly faster than the DMI 2.0 throughput of 20 Gb/s. The new Z170 chipset may have a faster and more capable DMI, but the SATA controller appears to be based on the legacy design. It may still be relying on a PCIe 2.0 x4 link.
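If the SATA controller really is riding a legacy PCIe 2.0 x4-class internal link, the observed ceiling falls right out of the math. A rough sketch (the ~10% protocol overhead figure below is my own assumption, not a measured value):

```python
# Effective throughput of a PCIe 2.0 x4-class link, which would explain
# the ~1.8-2.0 GB/s plateau seen on Intel SATA RAID arrays.

RAW_GT_PER_SEC = 5.0        # PCIe 2.0 per-lane transfer rate
LANES = 4
LINE_CODE = 8 / 10          # 8b/10b encoding: 20% of raw bits are overhead
PROTOCOL_OVERHEAD = 0.90    # assumed allowance for packet headers, ACKs, etc.

wire_limit = RAW_GT_PER_SEC * LANES * LINE_CODE / 8   # GB/s after line coding
practical = wire_limit * PROTOCOL_OVERHEAD            # GB/s after protocol cost

print(f"wire limit: {wire_limit:.2f} GB/s")   # 2.00 GB/s
print(f"practical:  {practical:.2f} GB/s")    # 1.80 GB/s
```

A practical figure in the 1.8 GB/s neighborhood lines up neatly with the plateau seen on Z97 / X99 and, apparently, still on Z170.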

Conclusion:

The lack of increased SATA performance on the new Z170 chipset is disappointing for those who were holding out for an update, but I can see why Intel would shift their focus toward PCIe SSD RAID support. Their SATA solution is still more than sufficient for HDDs performing mass storage duties, while PCIe SSDs can take over the more latency-sensitive tasks. As for that fire-breathing PCIe RAID we have all been waiting for, we will have to wait a few more days for an updated firmware before we can provide those results. If you have been chomping at the bit to boot off of a PCIe RAID, I recommend holding off on that Z170 motherboard purchase until we can confirm that this issue is corrected.


August 5, 2015 | 12:17 PM - Posted by Edwin Liew (not verified)

Hey Allyn, have you tested the boot up speeds with the new z170 boards using the intel 750? I was hoping the enumeration fault was gone and we wouldn't need to use the csm to boot into them, guess we'll have to wait for another gen..?

August 5, 2015 | 01:27 PM - Posted by Allyn Malventano

I can't say for sure since the implementation we got in for testing still had a few bugs to work out. One issue was that we couldn't get the CSM disabled - it would re-enable again after a reboot. 

August 11, 2015 | 07:14 AM - Posted by Edwin Liew (not verified)

Any update to this, Allyn? :P

August 11, 2015 | 07:16 AM - Posted by Edwin Liew (not verified)

oh and er, maybe some academic tests of boot times with:
raid 750x2
single 750

August 5, 2015 | 03:26 PM - Posted by Cyclops

Why are you so focused on the SATA performance, Allyn? In the real world, would you rather RAID 0 6 x 240GB SSDs or grab a single 1.2TB SSD 750?

SATA is dead to me. It's great for general usage but you can't compare the 65k/65k queues/commands of NVMe vs the 1/32 of SATA. On the other hand, I am disappointed that getting a NVMe RAID solution is still problematic.

Another thing. The PCH can provide 20 PCIe 3.0 lanes, which translates roughly to 20 GB/s. The DMI, however, can't process more than 8 GT/s. What's the point in including all of those lanes when DMI is still the bottleneck?
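The mismatch Cyclops describes is easy to quantify. A sketch comparing the aggregate downstream PCH lanes against the x4-wide DMI uplink, both at PCIe 3.0 rates with 128b/130b coding:

```python
# Effective GB/s of one PCIe 3.0 lane: 8 GT/s with 128b/130b line coding
per_lane = 8.0 * (128 / 130) / 8      # ~0.985 GB/s

pch_downstream = 20 * per_lane        # 20 lanes hanging off the Z170 PCH
dmi_uplink = 4 * per_lane             # DMI 3.0 is effectively a x4 link

print(f"PCH downstream aggregate: {pch_downstream:.1f} GB/s")  # ~19.7 GB/s
print(f"DMI uplink ceiling:       {dmi_uplink:.2f} GB/s")      # ~3.94 GB/s
```

In other words, the PCH can accept roughly five times more downstream traffic than its uplink can carry; the design assumes you won't drive all of it at once.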

August 5, 2015 | 05:36 PM - Posted by Anonymous (not verified)

That and no real specifications on the GPU that's integrated into the processor, The DMI is not the only problem, there are plenty of older Intel SKUs in the retail channels, and plenty of time to wait for Zen's arrival. I'm still coming up empty in my search for a Carrizo Based laptop with downgrade rights to windows 7, and I'll pay a premium to get Carrizo's graphics and not have to wait for even longer for the i7's quad cores on my current laptop to struggle at 100% for a few hours just to render a single image. Carrizo's latest GCN cores and the latest software to utilize HSA have moved the rendering more fully onto the GPU, and so long with the need for Quad cores and 8 threads for rendering workloads! Come on HP where are the Probooks with a 35 watt Carrizo, and a 1920 x 1080 screen, and maybe a Probook with a discrete GPU to pair with the integrated one, I like business laptops because they have the windows 7 options, just show me some options for better screen resolutions and Carrizo.

August 5, 2015 | 09:56 PM - Posted by Allyn Malventano

SATA still has its uses. Cost/GB is still lower than PCIe devices and there are still some who want GPUs in all of their PCIe slots. Booting NVMe PCIe is still not as elegant / trouble free as it should be, either. Also, if it wasn't bottlenecked somewhere, 6xSATA in a RAID-0 would give many PCIe SSDs a run for their money, especially in write speeds.

August 14, 2015 | 04:37 PM - Posted by Anonymous (not verified)

This is so that you can physically hook up a ton of devices. The expectation is that you don't use them all at once.

The CPU, in the end, can only process data at a certain speed. If you were able to hook up enough SATA drives to saturate 20 PCI-Express 3.0 lanes, and let's go with your figure of 20MB/s, that's awfully close to the maximum total memory bandwidth of a DDR3 system. You'd choke everything else.. it just doesn't make sense at least on a consumer system.

And hey, SATA is a dog now but remember how far that has come since the IDE days. Now it's time for NVMe, certainly the next evolution. But in all honesty, while the geek in me loves all this, at the moment I am left wondering how much I/O I really need. As it is, I'm mostly blocked by a 75/75 internet connection in terms of I/O, and that's easily taken care of by one SATA port.

August 14, 2015 | 04:38 PM - Posted by Anonymous (not verified)

.. and certainly 20GB/s not MB/s lol :)

August 5, 2015 | 11:36 PM - Posted by MRFS (not verified)

How about any of the stable PCIe 3.0 add-on RAID controllers with 2 fan-out ports supporting a total of 8 x 6G SSDs?

If my math is correct, such x8 edge connectors support an upstream bandwidth of 8.0 GHz / 8.125 x 8 lanes ~= 7.88 GBps
(with the 128b/130b "jumbo frame", 130 bits / 16 bytes = 8.125 bits per byte in PCIe 3.0 chipsets).
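A quick sketch reproducing the arithmetic above:

```python
# PCIe 3.0 x8 upstream bandwidth with 128b/130b encoding:
# 130 wire bits carry 128 payload bits (16 bytes),
# so 130 / 16 = 8.125 wire bits per payload byte.

gt_per_sec = 8.0                # PCIe 3.0 per-lane transfer rate (GT/s)
lanes = 8
wire_bits_per_byte = 130 / 16   # 8.125

bandwidth = gt_per_sec / wire_bits_per_byte * lanes
print(f"~{bandwidth:.2f} GB/s")   # ~7.88 GB/s
```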

Allyn, I know you don't prefer Highpoint AOCs, but doesn't PCPER have a few of the latest PCIe 3.0 RAID controllers in your parts inventory e.g. LSI, ATTO, Adaptec et al.?

p.s. Plug-and-Play, where are you?

MRFS

August 5, 2015 | 11:54 PM - Posted by MRFS (not verified)

At Newegg, search for "12Gb/s RAID Controller".

It would be nice if SATA SSD manufacturers
upped their clock speeds to 12 Gbps.

Or, am I dreaming again?

MRFS

August 6, 2015 | 12:17 AM - Posted by MRFS (not verified)

Not to be forgotten: 12Gbps SAS SSDs:

Toshiba Raises The Bar For 12 Gb/s SAS SSD Performance With PX04S Series

http://www.tomsitpro.com/articles/toshiba-ssd-12gbs-sas-px04s,1-2778.html

August 6, 2015 | 03:55 AM - Posted by Epicman (not verified)

Really simple question here: can I just grab any Z170 mobo (with a 32Gbps M.2 slot) and BOOT Win 8.1 / 10 from a SM951?

August 6, 2015 | 03:48 PM - Posted by Allyn Malventano

Yes, that should work without issue (even with the NVMe version of the SM951)

October 14, 2015 | 09:41 PM - Posted by Md (not verified)

Hi Allyn,
Follow up simple question, please. Can you give, or direct me to, steps to installing Windows 7 Pro onto SM951 installed on m.2 slot of Asus Z-170-A mobo as boot drive?
Have read endless threads on problems and still not clear on how to.
So far your thread is great...but not seeking raid...just booting off small with windows 7..
Thanks in advance..

August 6, 2015 | 05:21 AM - Posted by Frost (not verified)

Do you plan on testing NVMe PCIe Gen 3 SSD connected to the Z170 PCH vs connected to native CPU lanes on X99 to see the difference in performance (latency etc.)?

August 6, 2015 | 03:49 PM - Posted by Allyn Malventano

Some quick tests show that a single SSD 750 is very close in performance on either side of the chipset. Will do more detailed testing in the future, but it's nothing to be concerned about.

August 10, 2015 | 05:10 AM - Posted by Frost (not verified)

Please also include test cases when there's simultaneous I/O going via PCIe SSD and USB 3 over DMI 3 to see how things slow down when DMI is saturated from multiple sources.

October 1, 2015 | 02:23 PM - Posted by Livern (not verified)

Same question I had!

August 6, 2015 | 12:23 PM - Posted by Anonymous (not verified)

I've been waiting for an article like this, many thanks.

As far as the results, what a shame. Turns out the DMI bottleneck was not the only one at work.

August 6, 2015 | 12:50 PM - Posted by Dark_wizzie

Does the 750 or sm951 use CPU lanes on x99 mobos? If so, how does its performance compare to PCH lanes of the Skylake stuff?

On a side note, a storage guy on OCN told me that Skyrim with a ton of mods probably requires a lot of sequential reads. That seems contrary to what I know about SSDs. Shouldn't it be random reads? 4k, 16k.

And finally, I'd make a trace analysis joke but I think that's pretty dead at this point, lol.

August 6, 2015 | 03:53 PM - Posted by Allyn Malventano

Skyrim mods are likely being read out at 128k sequential. A given texture is probably >16k in size, and would look more sequential to an SSD than random.

August 6, 2015 | 09:40 PM - Posted by Dark_wizzie

Thanks a lot Allyn, that's very useful information to me.

What about the PCH/CPU pcie lane stuff?

August 7, 2015 | 01:01 PM - Posted by Allyn Malventano

DMI doesn't count against the 16 CPU lanes of Skylake. Also, I initially misunderstood the info we were passed - there are 20 downstream lanes from the PCH (not 20-4 as I initially thought). This means you can have 36 total PCIe lanes connected to a Skylake CPU (though 20 of those are bottlenecked by a x4 DMI link).

August 9, 2015 | 05:59 AM - Posted by Dark_wizzie

So the difference between the 16 lanes in Skylake for the GPU and the rest of the PCIE lanes (20 left) for other stuff, is that the 16 lanes for the GPU has a much higher bandwidth compared to the other 20 lanes?

I heard people say 'the 16 lanes are native, direct connections to the CPU whereas the rest are handled by the chipset' which hints that there might be some sort of caveat with the rest of the PCIE lanes beyond just bandwidth. Higher latencies? Or something.

Nice work on these articles, you produce some good stuff. really hope you get around to answering this question, it's my last burning SSD question.

December 22, 2015 | 04:00 AM - Posted by patrickpoet (not verified)

There wouldn't, in general, be much of a latency difference, since the DMI 3.0 connection is very similar to PCIe 3.0, i.e. a packet bus. The problem, though, is that they will all share 3.93 GB/s (31.44 Gb/s) via DMI 3.0, and they will also share that bandwidth with 6 other HSIO lanes that are used for USB 3.x.

August 6, 2015 | 02:19 PM - Posted by MRFS (not verified)

cf.
http://forums.storagereview.com/index.php/topic/37949-toshiba-px04s-ente...

Why aren't SATA SSD manufacturers upping their clock speeds to 12 Gb/s too?

Even though SATA SSDs are necessarily single-ported (as compared to SAS double-porting), an 8G clock is already a standard feature of the PCIe 3.0 specification, and 12 Gb/s RAID controllers are now available from a number of reputable vendors e.g. Areca, LSI, Adaptec, ATTO, Intel etc.

Moreover, SAS backplanes are designed to work with SATA drives.

What am I missing here, please?

MRFS

August 6, 2015 | 04:35 PM - Posted by Anonymous (not verified)

It seems like DMI is still a limiting factor. If they are going to put 20 PCIe links on the chipset, then it seems like they should increase the speed of the up link to the CPU more significantly.

August 6, 2015 | 05:51 PM - Posted by fredey (not verified)

could you the intel rst tech with a bunch of sata ssd? cause then having a raid 0 zero cache would be a like a ultimate all in one solution

August 6, 2015 | 06:10 PM - Posted by Rick0502 (not verified)

so if 3.5gb/s is the limit of dmi 3.0, why have all these ports with a limit so low? you have 10-12 usb 3.0 ports, usb 3.1, m.2, sata 3 ports. i feel like it would be easy to max that out with all the connections you have.

August 6, 2015 | 06:33 PM - Posted by wizpig64 (not verified)

PCIe raid is really exciting but for the money I think i'd still just raid-0 pair of 850 evos for daily use.

August 7, 2015 | 01:06 AM - Posted by MRFS (not verified)

We experimented with a RAID-0 of 4 x Samsung 840 Pro SSDs,
and our sequential READ speed exceeded 1,800 MB/sec.
with PCIe 2.0 and a cheap Highpoint RocketRAID 2720SGL:

http://supremelaw.org/systems/io.tests/4xSamsung.840.Pro.SSD.RR2720.P5Q....

http://supremelaw.org/systems/io.tests/4xSamsung.840.Pro.SSD.RR2720.P5Q....

http://supremelaw.org/systems/io.tests/4xSamsung.840.Pro.SSD.RR2720.P5Q....

http://supremelaw.org/systems/io.tests/4xSamsung.840.Pro.SSD.RR2720.P5Q....

The P5Q Deluxe machine above remains very snappy:
we plan to do the same with 4 x Samsung 850 Pro SSDs.

With a 10-year factory warranty, the speed will be
fast enough e.g. to SAVE and RESTORE 12GB ramdisks
at SHUTDOWN and STARTUP using RamDisk Plus.

Icy Dock now make a nice 8-slot 5.25" enclosure:

http://www.newegg.com/Product/Product.aspx?Item=N82E16817994178&Tpk=N82E...

August 6, 2015 | 08:02 PM - Posted by Austin (not verified)

Allyn,

How well do we think z170 boards will boot from nvme SSDs?

August 6, 2015 | 08:04 PM - Posted by Austin (not verified)

specifically m.2 nvme

August 7, 2015 | 04:16 AM - Posted by John H (not verified)

Hey Allyn - are you 100% sure all 6 of those sata SSDs were communicating at 6 gbps?

August 7, 2015 | 09:56 AM - Posted by Allyn Malventano

Individually they were going full speed. 

August 7, 2015 | 06:35 AM - Posted by JoshZ (not verified)

hi Allyn,

(1) when testing the PCIe raid0, did you try Windows Write-cache buffer flushing set to default Enabled (i.e. disabling Intel proprietary caching options)? i think it maybe actually gave me better results in some tests a while back with an ssd raid..

(2) if you create 6-ssd SATA raid0 in Windows intel RST GUI, does the wizard also happen to give you choice between SATA and PCIe controller options? could it be that with PCIe controller setting chosen, the 6-ssd raid could get the full 3+ GB/s bandwidth?

August 7, 2015 | 09:57 AM - Posted by Allyn Malventano

Performance was lower with it enabled. 

When you select PCIe, you can only choose PCIe devices. 

August 7, 2015 | 07:44 AM - Posted by CharlieK (not verified)

So 3x sm951 on the asrock z170 extreme7 would still be capped at 3.5GBps?

August 7, 2015 | 01:03 PM - Posted by Allyn Malventano

There would be no way to connect the third one and use it with RST.

August 10, 2015 | 03:03 PM - Posted by DavidF (not verified)

Are you sure Allyn -reading the manual this looks very do-able

It would mean no SATA drives tho...

August 16, 2015 | 02:22 PM - Posted by Anonymous (not verified)

http://www.pcper.com/reviews/Storage/Intel-Skylake-Z170-Rapid-Storage-Te...

then why does the ASRock Z170 Extreme7+ have three Ultra M.2 slots?

December 22, 2015 | 04:03 AM - Posted by patrickpoet (not verified)

That isn't true, people have tested with three drives connected to m.2 in RAID. It works fine. It does away with all of the SATA 3 except for 4, as per their manual.

August 7, 2015 | 08:06 AM - Posted by Anonymous (not verified)

can i work with 2 or 3 samsung 951 in raid 0 ???

can i get bus speed 200MHZ ?

August 8, 2015 | 11:30 PM - Posted by MRFS (not verified)

BTW, Allyn,

NICE WORK!

You be THE BEST, Man :-)

MRFS

August 8, 2015 | 11:44 PM - Posted by CSR (not verified)

Hey guys what's the version of intel rapid storage technology that you have installed and where did you get it? I'm not seeing the options shown as yours.
My setup ASUS Z170-DELUXE xp951(512GB) and intel 750 1.2TB drives
I created raid0 intel 750+Xp951.Capacity lowered 953GB
Able to see the drives on raid0 during install but capacity (953GB)lowered why?
W10 install went smoothly but does not boot tried various changes unable to boot.

Can I have the Bios settings to make UEFI boot with raid0.

Thanks for your article.

August 10, 2015 | 08:00 PM - Posted by MRFS (not verified)

2 members of a RAID-0 array must contribute
the exact same amount of storage to that array,
hence -- as summarized in a reply above --
RST allocated only 512GB (unformatted) from
the Intel 750, as follows:

512GB (xp951) + 512GB (Intel 750) = 1,024 GB unformatted

Using a much older version of Intel Matrix Storage Console,
it reports 48.83GB after formatting a 50GB C: partition on
2 x 750GB WDC HDDs in RAID-0.

Using that ratio, then, we predict:

48.83 / 50 = .9766 x 1,024 = 1,000.03 GB (formatted)

You got 953GB == close enough, considering that we
are looking at entirely new storage technologies here,
and later operating systems will default to creating
shadow and backup partitions, which explain the
differences between 1,000 and 953.
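The same ratio-based estimate in a few lines (a sketch; the 48.83/50 formatting ratio comes from the older Matrix Storage array described above):

```python
# RAID-0: each member contributes the capacity of the smallest member.
members_gb = [512, 512]   # 512GB XP951 + Intel 750 truncated to match
unformatted = min(members_gb) * len(members_gb)    # 1024 GB

# Formatted/unformatted ratio observed on an older 2-drive RAID-0
format_ratio = 48.83 / 50.0    # ~0.9766

formatted_estimate = unformatted * format_ratio
print(f"unformatted: {unformatted} GB")                    # 1024 GB
print(f"formatted estimate: {formatted_estimate:.0f} GB")  # ~1000 GB
```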

Hope this helps.

MRFS

August 10, 2015 | 08:06 PM - Posted by MRFS (not verified)

Did you format with GPT or MBR?

I seem to recall that UEFI booting requires a GPT C: partition.

Please double-check this, because I've only formatted
one GPT partition in my experience (I'm still back at XP/Pro).

Allyn, if you are reading this, please advise.

THANKS!

MRFS

August 10, 2015 | 03:16 AM - Posted by Anonymous (not verified)

capacity lowered to match the smaller drive,anyone able to boot windows from a raid 0 pcie setup?

August 16, 2015 | 12:26 PM - Posted by Anonymous (not verified)

[url]http://www.pcper.com/reviews/Storage/Intel-Skylake-Z170-Rapid-Storage-Technology-Tested-PCIe-and-SATA-RAID/PCIe-RAID-Resu[/url]

Samsung SM951 128GB AHCI can work in raid 0 X 3 disk ?
if i buy 3 disk like that
[url]http://www.amazon.com/Samsung-SM951-128GB-AHCI-MZHPV128HDGM-00000/dp/B00VELD5D8/ref=sr_1_3?s=pc&ie=UTF8&qid=1439745389&sr=1-3&keywords=Samsung+SM951[/url]

what speed can reach ?
what is the difference between Samsung SM951 vs SSD 750 ?

August 31, 2015 | 05:13 PM - Posted by Anonymous (not verified)

I am buying a gigabyte g1 gaming board and it has 2 m.2 slots on the board and says it can do raid 0. If I put 2 sm951 256 gb m.2 cards on board, should I be able to raid 0 and also boot, and how would I set it up? I was also planning on a 2 tb hdd for storage. any thoughts

September 27, 2015 | 12:36 PM - Posted by Anonymous (not verified)

Good luck just returned my G-1 gaming could not get CSM disabled to stay and never got the Intel raid setup to appear in the Bios.

October 8, 2015 | 01:47 PM - Posted by Grandstar (not verified)

This was just a fine article, advanced and I enjoyed reading it. Thanks

Way back, I was one of the first to run bootable dual SSD`s in Raid 0 with an LSI raid controller. This month I hope to purchase the new Samsung 950 Pro V-NAND M.2.

I want to run a pair in RAID 0.

But, and I quote the author: So what we are seeing here is that the DMI 3.0 bandwidth saturates at ~3.5 GB/sec, so this is the upper limit for sequentials on whichever pair of PCIe SSDs you decide to RAID together.

Since the new Samsungs are in broad terms:
Sequential Read 2,500 MB/s
Sequential Write 1,500 MB/s

Ergo, running dual Samsung 950 Pro V-NAND M.2 would have a theoretical sequential read of 5 GB/s, thus surpassing the bandwidth of DMI 3.0 by 1.5 GB/s.

Question:

Even if a hardware raid controller is used, it will be limited to DMI bandwidth?

I take it there is no workaround for this bottleneck at the moment...

October 9, 2015 | 05:19 PM - Posted by Allyn Malventano

Even if a hardware raid controller is used, it will be limited to DMI bandwidth?

Only if it is connected to a PCIe slot behind DMI.

October 10, 2015 | 12:23 PM - Posted by parrotts (not verified)

hi Allyn
( sorry for my bad english)
i'm trying to install 2 sm951 nvme in raid 0 on my sabertooth x99 and i'm having some difficulty doing so.
the ssd is installed one in m.2 native slot and one in a pcie adaptor in the second slot.
bios see both.
i can install os in both.
i try some changes in bios like your article.
i try to modding bios file to update with version 14 of intel rst driver ( in fact in the advanced mode of bios in intel rapid storage flag it shown version 14 but no disc is detected)
in the nvme configuration they are listed
bus 3 sm951.....
bus 5 sm951.....
can you help me in some way?

thanks
parrotts

October 12, 2015 | 12:05 PM - Posted by Grandstar (not verified)

Allyn,

By implementing Raid 0 on a ASRock Z170 Extreme7 using 3 x Samsung 950 Pro V-NAND M.2 (upcoming) inserted into the three available Ultra M.2 ports (which are connected directly to the CPU, i.e not behind a chipset with DMI 3.0 limit of 3.5 GB/sec), wouldn't it be possible to achieve about 7.5GB/s sequential read?

October 16, 2015 | 01:14 PM - Posted by Grandstar (not verified)

As it turns out the Intel 750 was tested by some folks at ASUS with a sequential read of over 2.7 GB/s read and 1.3 GB/s write. Who needs anything else? I don't think Samsung will be as fast, and it's late to the market.

November 3, 2015 | 10:09 AM - Posted by Hugo Páscoa (not verified)

Hi, do you try to install in a normal sata ssd, then you clone to sm951?

November 21, 2015 | 11:02 AM - Posted by david (not verified)

Hi,

If I understand correctly the max bandwidth over the dmi 3.0 Z170 sata express connector is 3.5 Gb/s? What about on the X99 chipset? Is it also possible to raid m.2 on X99? And why do motherboard manufactures advertise the speed limit at 32Gb/sec?

Confused.

Thanks

December 15, 2015 | 07:20 AM - Posted by Jade (not verified)

Hi Ryan, I have some question with the Raid 0 Configuration of NVME M.2 SSD's.. If I make a RAID 0 set up for 2 NVME SSDs running on 2 PCIe 3.0 X4, is the DMI 3.0 be sufficient enough to carry such bandwidth? From what I know, One PCIe 3.0 x4 is equal to 32Gbps.. So in RAID 0, it will be 64 Gbps yes?

January 22, 2016 | 11:28 AM - Posted by Anonymous (not verified)

I really appreciate the testing you have done here. I built a z170 deluxe, pcie 750 ssd, GTX 980ti and called intel for support after I buggered up my UEFI settings. My support guy couldn't find any answers (i'm sure he was googling everything I did) but having him on the line gave me the courage to forge ahead and finally got the drive back and working. I do think I have a ways to go though. I only have two SATA ports available now, SATA 3 and 4. In reading your posts I believe I forgot to go back and disable Hyperkit. Obviously Hyperkit isn't needed for a drive installed in PCIe slot 3. Doh!
Keep up the good work

October 22, 2016 | 12:22 PM - Posted by Anonymous (not verified)

For those of you wanting to do m.2 RST SSD caching, to get the drive to show up just change the boot option as mentioned in the article. Thank you guys so much for finding and sharing these settings!!

December 12, 2016 | 03:09 PM - Posted by David Byrne (not verified)

Thanks guys, the diagram on the next page on the lanes helped me a lot, have been struggling to get my PCIe raid running along side a SATA one, now I understand why but see how I can do it
