Subject: Storage | January 10, 2018 - 07:24 PM | Tim Verry
Tagged: Mushkin, silicon motion, SM2262, SM2263XT, 3d nand, tlc, M.2, NVMe, CES, CES 2018
Mushkin is on site at CES, where it is launching a slew of new products. On the storage front, Mushkin is showing off three new M.2 2280 form factor NVMe solid state drives aimed at various price points. The Pilot, Pilot-E, and Helix-L M.2 drives all use Silicon Motion controllers and 3D TLC NAND flash memory. Mushkin backs the drives with a three-year warranty and the company's MEDS Reliability Suite, which includes end-to-end data path protection, LDPC ECC, and global wear leveling algorithms to ensure data integrity and longevity.
At the top end of performance is the Pilot-E M.2 SSD, based on the SM2262EN controller, which offers eight channels for connecting the 3D NAND. This 250 GB to 2 TB drive achieves extremely speedy 3.5 GB/s sequential reads and 3.0 GB/s sequential writes along with 370K read IOPS and 300K write IOPS. Essentially, the Pilot-E should be able to easily max out its PCI-E x4 connection with the right workloads.
Stepping down a bit, the Pilot drive uses an eight channel SM2262 controller. This drive gets close to the Pilot-E in reads, but has much lower sequential write performance. Capacities for this SSD range from 120 GB to 2 TB. Specifically, the Pilot SSD is rated at 3.2 GB/s sequential reads, 1.9 GB/s sequential writes, 370K random read IOPS, and 300K random write IOPS. This drive should be cheaper than the Pilot-E and will be aimed at the consumer space where reads are more important than writes.
Finally, Mushkin's Helix-L is a lower cost SSD that trims the bill of materials with a DRAM-less design and a cheaper four channel SM2263XT controller. Capacities range from 120 GB to 1 TB. This SSD supports the Host Memory Buffer feature, which allows it to use system memory as a cache to improve performance. The Helix-L is rated at 2.4 GB/s sequential reads, 1.7 GB/s sequential writes, 280K random read IOPS (140K without HMB), and 250K random write IOPS.
Mushkin has not yet revealed pricing or availability on its new NVMe 1.3 drives. You can read more about the Silicon Motion controllers used here.
Subject: Storage, Shows and Expos | January 8, 2018 - 03:04 PM | Allyn Malventano
Tagged: XS700, toshiba, ssd, RC100, portable, ocz, NVMe, CES 2018, CES
Toshiba announced a couple of new additions to their SSD lineup. First up is the RC100:
This is a DRAM-less design intended to target budget builds - something much needed in the current (pricey) SSD landscape. Just because there is no DRAM present in the design does not mean that the RC100 can't perform well. Toshiba has implemented the Host Memory Buffer (HMB) feature, which allows the NVMe driver to share a small (38MB) portion of host memory via the same PCIe 3.0 x2 link used to transfer user data. This memory effectively caches a portion of the FTL, which should bring the random performance of smaller sections of the SSD up to what you would expect from a higher performance product. Specs are as follows:
- Capacities: 120/240/480GB
- PCIe 3.0 x2
- Random read/write: 160/120k IOPS
- Sequential read/write: 1620/1130 MB/s
- Warranty: 3 years
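As a rough sketch of why that 38MB of host memory matters: assuming the common scheme of one 4-byte FTL mapping entry per 4 KiB logical page (an illustration only - Toshiba has not published the RC100's actual mapping granularity), the HMB can only hold the mappings for a slice of the drive, which is why the benefit shows up on "smaller sections" of the SSD:

```python
# Back-of-envelope: LBA span a 38 MB Host Memory Buffer can map, assuming
# one 4-byte FTL entry per 4 KiB page (assumed values, not Toshiba's spec).
ENTRY_BYTES = 4          # size of one mapping-table entry
PAGE_BYTES = 4 * 1024    # logical page each entry maps

def hmb_coverage_bytes(hmb_bytes: int) -> int:
    """Bytes of user LBA space whose mappings fit in the HMB."""
    return (hmb_bytes // ENTRY_BYTES) * PAGE_BYTES

coverage = hmb_coverage_bytes(38 * 10**6)
print(f"38 MB HMB maps about {coverage / 10**9:.0f} GB of LBA space")
```

Under those assumptions the buffer covers roughly 39 GB of the drive - plenty for an OS volume's hot data, but far short of the full 480GB model's mapping table.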
Up next is the XS700, Toshiba's first portable SSD:
- 240GB only
- USB 3.1 Gen2 (type-c connector on device)
- Ships with type-c to type-a cable
The XS700 is the first portable SSD I've seen out of Toshiba. It was just a matter of time here as just about every other major SSD maker has offered a similar product.
We don't have pricing yet, but these should shape up to be highly price-competitive products offering decent performance. Both models will be coming later this year.
Press blast after the break.
Introduction and Specifications
Back in April, we finally got our mitts on some actual 3D XPoint to test, but there was a catch: we had to do so remotely. The initial round of XPoint testing was done (by all review sites) on a set of machines located on the Intel campus. Intel had their reasons for this unorthodox review method, but we were satisfied that everything was done above board. Intel even went as far as walking me over to the very server that we would be remoting into for testing. Despite this, there were still a few skeptics out there, and today we can put all of that to bed.
This is a 750GB Intel Optane SSD DC P4800X - in the flesh and this time on *our* turf. I'll be putting it through the same initial round of tests we conducted remotely back in April. I intend to follow up at a later date with additional testing depth, as well as evaluating kernel response times across Windows and Linux (IRQ, Polling, Hybrid Polling, etc), but for now, we're here to confirm the results on our own testbed as well as evaluate if the higher capacity point takes any sort of hit to performance. We may actually see a performance increase in some areas as Intel has had several months to further tune the P4800X.
This video is for the earlier 375GB model launch, but all points apply here
(except that the 900P has now already launched)
The baseline specs remain the same as they were back in April, with a few notable exceptions:
The endurance figure for the 375GB capacity has nearly doubled to 20.5 PBW (PetaBytes Written), with the 750GB capacity logically following suit at 41 PBW. These figures are based on a 30 DWPD (Drive Writes Per Day) rating spanned across a 5-year period. The original product brief is located here, but do note that it may be out of date.
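Those PBW figures check out against the DWPD rating - the math is just capacity times writes per day times the warranty period. A quick sanity check:

```python
# Sanity check: PBW = capacity * DWPD * days in the warranty period.
def pbw(capacity_gb: float, dwpd: float = 30, years: float = 5) -> float:
    """Rated petabytes written over the warranty period."""
    return capacity_gb * 10**9 * dwpd * 365 * years / 10**15

print(round(pbw(375), 1))  # 375 GB model -> 20.5 PBW, matching Intel's figure
print(round(pbw(750), 1))  # 750 GB model -> ~41 PBW
```

Both results line up with Intel's stated ratings.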
We now have official sequential throughput ratings: 2.0 GB/s writes and 2.4 GB/s reads.
We also have been provided detailed QoS figures and those will be noted as we cover the results throughout the review.
Subject: Storage | November 6, 2017 - 03:22 PM | Jeremy Hellstrom
Tagged: crucial, Momentum Cache, NVMe, Crucial Storage Executive
The SSD Review noticed something very interesting in the latest update to Crucial's Storage Executive software: the Momentum Cache feature now works with a variety of non-Crucial NVMe SSDs. The software turns part of your RAM into a cache so that reads and writes can initially be sent to that cache, which results in improved performance thanks to RAM's significantly quicker response time. If you have a Crucial SSD installed as well as another NVMe SSD and are using the default Windows NVMe driver, you can set up caching on the non-Crucial SSD if you so desire. Stop by for a look at the performance impact as well as a list of the drives that have been successfully tested.
"Crucial’s Momentum Cache feature, part of Crucial Storage Executive, is unlocked for all NVMe SSDs, or at least the ones we have tested in our Z170 test system; the key here, of course, is that a compatible Crucial SSD must initially be on the system to enable this feature at all."
Here are some more Storage reviews from around the web:
- Patriot Hellfire 240GB @ Benchmark Reviews
- Team Group CARDEA Zero 240GB M2 SSD @ Guru of 3D
- HP S700 SSD Review @ OCC
- Western Digital (WD) My Cloud Home 6TB @ Kitguru
- Seagate IronWolf Pro 12TB SATA III HDD Review @ NikKTech
- Seagate BarraCuda Pro 12TB HDD @ Kitguru
Introduction, Specifications and Packaging
It’s been two long years since we first heard about 3D XPoint Technology. Intel and Micron serenaded us with tales of ultra-low latency and very high endurance, but when would we have this new media in our hot little hands? We got a taste of things with Optane Memory (caching) back in April, and later that same month we got a much bigger, albeit remotely-tested taste in the form of the P4800X. Since April all was quiet, with all of us storage freaks waiting for a consumer version of Optane with enough capacity to act as a system drive. Sure we’ve played around with Optane Memory parts in various forms of RAID, but as we found in our testing, Optane’s strongest benefits are the very performance traits that do not effectively scale with additional drives added to an array. The preferred route is to just get a larger single SSD with more 3D XPoint memory installed on it, and we have that very thing today (and in two separate capacities)!
You might have seen various rumors centered around the 900P lately. The first is that the 900P would supposedly support PCIe 4.0. This is not true; after digging back a bit, it appears to be a foreign vendor confusing PCIe x4 (four lanes) with the recently drafted PCIe 4.0 specification. Another set of rumors centered around pre-order listings and potential pricing for the 280 and 480 GB variants of the 900P. We are happy to report that those prices (at the time of this writing) are way higher than Intel's stated MSRPs for these new models. I'll even go as far as to say that the 480GB model can be had for less than what the 280GB model is currently listed for! More on that later in the review.
Performance specs are one place where the rumors were all true, but since all the folks had to go on was a leaked Intel press deck slide listing figures identical to the P4800X, we’re not really surprised here.
Lots of technical stuff above, but the high points are <10us typical latency (‘regular’ SSDs run between 60-100us), 2.5/2.0 GB/s sequential reads/writes, and 550k/500k random read/write performance. Yes I know, don’t tell me, you’ve seen higher sequentials on smaller form factor devices. I agree, and we’ve even seen higher maximum performance from unreleased 3D XPoint-equipped parts from Micron, but Intel has done what they needed to do in order to make this a viable shipping retail product, which likely means sacrificing the ‘megapixel race’ figures in favor of offering the lowest possible latencies and best possible endurance at this price point.
Packaging is among the nicest we’ve seen from an Intel SSD. It actually reminds me of how the Fusion-io ioDrives used to come.
Also included with the 900P is a Star Citizen ship. The Sabre Raven has been a topic of gossip and speculation for months now, and it appears to be a pretty sweet looking fighter. For those unaware, Star Citizen is a space-based MMO, and with a ‘ship purchase’ also comes a license to play the game. The Sabre Raven counts as such a purchase and apparently comes with lifetime insurance, meaning it will always be tied to your account in case it gets blown up doing data runs. Long story short, you get the game for free with the purchase of a 900P.
Subject: General Tech | October 2, 2017 - 04:12 PM | Tim Verry
Tagged: X399, Threadripper, nvme raid, NVMe, amd, 960 PRO
A recent support page and community update posting suggest that NVMe RAID support is coming to Threadripper and the X399 platform imminently (as soon as motherboard manufacturers release an updated BIOS/UEFI). AMD will support up to six NVMe drives without adapters in a RAID 0, 1, or 10 array with all the drives wired directly to the PCI-E controller in the CPU rather than being routed through the chipset (meaning no DMI bottlenecking). There are no limits on the brand of drives and the NVMe RAID update is free with no hardware or software keys needed to unlock it.
NVMe SSDs are very fast on their own, but when combined in a RAID array directly wired to the CPU, things really get interesting. AMD claims that it saw read speeds of 21.2 GB/s when reading from six Samsung 960 Pro 512 GB drives in a RAID 0 array! The company also saw near perfect scaling with its test array, with throughput scaling 6x on reads and 5.38x on writes when going from one drive to six. Intel's VROC seems to have the theoretical performance advantage here with the ability to RAID more total drives (four per VMD and three VMDs per CPU), but only after purchasing a hardware key, and when using more than one VMD it can't be a bootable OS array. When it comes to bootable arrays, AMD would appear to have the upper hand with free support for up to six drives that can be used to run your bootable OS array! Windows has never booted faster! (heh)
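AMD's 21.2 GB/s figure passes a quick plausibility check against the 960 Pro's roughly 3.5 GB/s rated sequential read (the per-drive rating here is our assumption from Samsung's published spec, not AMD's test data):

```python
# Per-drive throughput implied by AMD's six-drive RAID 0 read figure,
# compared against the 960 Pro's ~3.5 GB/s rated sequential read.
array_read_gbs = 21.2   # AMD's claimed array read speed
drives = 6
rated_single_gbs = 3.5  # assumed per-drive rating (Samsung spec sheet)

per_drive = array_read_gbs / drives
print(f"{per_drive:.2f} GB/s per drive")  # right at the drive's rating

scaling = array_read_gbs / rated_single_gbs
print(f"{scaling:.2f}x scaling vs. one drive's rated read speed")
```

Each drive would need to sustain about 3.53 GB/s - essentially its full rated speed - which is consistent with AMD's near-perfect-scaling claim.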
Along with its partners releasing BIOS updates, AMD is releasing updates to its NVMe RAID driver (version 17.50) and the RAIDXpert2 Windows management utility. Currently, Windows 10 x64 build 1703 is officially supported and fresh installs of Windows are recommended (and if you are currently running your Windows OS off of a RAID array, a fresh install is required).
Once BIOS updates are available (and they are coming shortly), users will have to jump through a few hoops to get an NVMe RAID up and running, but those hoops may just be worth it for enthusiasts wanting the best storage performance. For one, if you have a RAID array (bootable or not), you will not be able to do an in-place upgrade. If you have a SATA RAID, you must back up your data and break down the array before updating the UEFI/BIOS and installing the Windows driver. Further, if your existing array is bootable with your operating system installed on it, you will need to back up your data, upgrade the BIOS, and perform a fresh install of Windows with the AMD-supplied F6 driver. After upgrading the BIOS, there will be a new menu item (the exact name will vary by manufacturer, but SATA Mode and SATA Configuration are likely suspects) where users will need to change the mode from SATA or AHCI to RAID.
Oh, and did I mention to back up your data before diving into this? NVMe RAID support for Threadripper is a long-awaited feature and has a lot of promise, with Threadripper offering up 64 PCI-E lanes and, according to AMD, many boards offering 7 slots (6 with a graphics card), which is where AMD is getting the six drive support number. It appears that using adapters like the Asus Hyper M.2 cards or DIMM.2 slots would allow users to go past that six drive limit, though.
NVMe RAID support on X399 / Threadripper is a feature we are in the process of testing now (see comments) and I am very interested in what the results are! Stay tuned for more information as it develops!
- Finally figured out why THREADRIPPER has so many PCIe lanes (en) [VIDEO] @ der8auer
- Intel VROC Tested! - X299 VROC vs. Z270 RST, Quad Optane vs. Quad 960 PRO
- ASUS X299 Enables Intel Virtual RAID on CPU - RAID-0 up to 20 SSDs!
- Triple M.2 Samsung 950 Pro Z170 PCIe NVMe RAID Tested - Why So Snappy?
We've been hearing about Intel's VROC (NVMe RAID) technology for a few months now. ASUS started slipping clues in with their X299 motherboard releases starting back in May. The idea was very exciting, as prior NVMe RAID implementations on Z170 and Z270 platforms were bottlenecked by the chipset's PCIe 3.0 x4 DMI link to the CPU, and they also had to trade away SATA ports for M.2 PCIe lanes in order to accomplish the feat. X99 motherboards supported SATA RAID and even sported four additional ports, but they were left out of NVMe bootable RAID altogether. It would be foolish of Intel to launch a successor to their higher end workstation-class platform without a feature available in two (soon to be three) generations of their consumer platform.
To get a grip on what VROC is all about, let's set up some context with a few slides:
First, we have a slide laying out what the acronyms mean:
- VROC = Virtual RAID on CPU
- VMD = Volume Management Device
What's a VMD you say?
...so the VMD is extra logic present on Intel Skylake-SP CPUs, which enables the processor to group up to 16 lanes of storage (4x4) into a single PCIe storage domain. There are three VMD controllers per CPU.
VROC is the next logical step, and takes things a bit further. While boot support is restricted to within a single VMD, PCIe switches can be added downstream to create a bootable RAID possibly exceeding 4 SSDs. So long as the array need not be bootable, VROC enables spanning across multiple VMDs and even across CPUs!
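The drive-count ceilings implied by those slides are simple lane math (treating each NVMe SSD as a x4 device, which is the typical case):

```python
# Drive counts implied by the VMD slides: each VMD groups 16 lanes as 4x4,
# and there are three VMD controllers per Skylake-SP CPU.
lanes_per_vmd = 16
lanes_per_drive = 4      # a typical x4 NVMe SSD
vmds_per_cpu = 3

drives_per_vmd = lanes_per_vmd // lanes_per_drive    # bootable ceiling w/o switches
max_span_per_cpu = drives_per_vmd * vmds_per_cpu     # non-bootable span, one CPU
print(drives_per_vmd, max_span_per_cpu)
```

That works out to four drives per VMD (the bootable limit without PCIe switches) and twelve drives spanned across one CPU's three VMDs - and non-bootable arrays can go further still by spanning CPUs.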
Assembling the Missing Pieces
Unlike prior Intel storage technology launches, the VROC launch has been piecemeal at best and contradictory at worst. We initially heard that VROC would only support Intel SSDs, but Intel later published a FAQ stating that 'selected third-party SSDs' would also be supported. One thing they have remained steadfast on is the requirement for a hardware key to unlock RAID 1 and RAID 5 modes - a seemingly silly requirement given that their consumer chipsets support bootable RAID 0/1/5 without any key (and VROC only supports one additional SSD over Z170/Z270/Z370, which can boot from 3-drive arrays).
On the 'piecemeal' topic, we need three things for VROC to work:
- BIOS support for enabling VMD Domains for select groups of PCIe lanes.
- Hardware for connecting a group of NVMe SSDs to that group of PCIe lanes.
- A driver for OS mounting and managing of the array.
Let's run down this list and see what is currently available:
Check. Hardware for connecting multiple drives to the configured set of lanes?
Check (960 PRO pic here). Note that the ASUS Hyper M.2 X16 Card will only work on motherboards supporting PCIe bifurcation, which allows the CPU to split PCIe lanes into subgroups without the need of a PLX chip. You can see two bifurcated modes in the above screenshot - one intended for VMD/VROC, while the other (data) selection enables bifurcation without enabling the VMD controller. This option presents the four SSDs to the OS without the need of any special driver.
With the above installed, and the slot configured for VROC in the BIOS, we are greeted by the expected disappointing result:
Now for that pesky driver. After a bit of digging around the dark corners of the internet:
Check! (well, that's what it looked like after I rapidly clicked my way through the array creation)
Don't even pretend like you won't read the rest of this review! (click here now!)
Subject: Storage | August 14, 2017 - 08:09 AM | Allyn Malventano
Tagged: P4800X, XPoint, NVMe, HHHL, Optane, Intel, ssd, DC
We reviewed the Intel P4800X - Intel's first 3D XPoint SSD, back in April of this year. The one thing missing from that review was product pictures. Sure we had stock photos, but we did not have the product in hand due to the extremely limited number of samples and the need for Intel to be able to make more real-time updates to the hardware based on our feedback during the testing process (reviewers making hardware better FTW!). After the reviews were done, sample priority shifted to the software vendors who needed time to further develop their code bases to take better advantage of the very low latency that Optane can offer. One of those companies is VMware, and one of our friends from over there was able to get some tinker time with one of their samples.
Paul whipped up a few videos showing the installation process as well as timing a server boot directly from the P4800X (something we could not do in our review since we were testing on a remote server). I highly encourage those interested in the P4800X (and the upcoming consumer versions of the same) to check out the article on TinkerTry. I also recommend those wanting to know what Optane / XPoint is and how it works to check out our article here.
Introduction, Specifications and Packaging
Today Corsair launched their first ever HHHL form factor SSD, the NX500:
Just from the looks of this part, it is clear they were pulling out all the stops with respect to product design. This is certainly one of the most impressive looking SSDs we have seen come through our lab, and it will certainly be the type of thing enthusiasts would show off in their system builds. The NX500 is also likely to be the best showcase of Phison's new E7 controller. I'm just as eager to see if this SSD performs as well as it looks, so let's get to the review!
The specifications here are in line with what we would expect for a modern day NVMe SSD. Note that ratings are identical for the 400GB and 800GB models, aside from a doubling of endurance due to the corresponding doubling of flash. There were some additional details in our press kit:
Extreme Performance
The Phison PS5007-E7
- Description: PS5007-E7 is Phison's first NVMe controller designed for high performance applications, supporting up to 8 channels in its NAND flash interface.
Extreme Reliability
Multiple features are built into the PS5007-E7 to ensure stability and reliability.
- SmartECC™ – Reconstructs defective/faulty pages when regular ECC fails
- SmartRefresh™ – Monitors block ECC health status and refreshes blocks periodically to improve data retention
- SmartFlush™ – Minimizes time data spends in cache to ensure data retention in the event of power loss
Extreme Control
The Neutron NX500 SSD with Phison PS5007-E7 controller works with the CORSAIR SSD Toolbox.
- Drive monitoring – Monitor the health of your drive
- Secure wipe – For security purposes, completely clear the drive of any recoverable data
- Firmware update – Install updated firmware as needed
As the Phison E7 is a new controller, it's worth taking a look at the internals:
Highlights from above are 8 channels to the flash, ONFI 3.2 and Toggle 2.0 support (covering most flash memory types), along with support for all modes (SLC/MLC/TLC).
I haven't seen SSD packaging this nice since the Fusion-io ioDrive, and those parts were far more expensive. Great touch here by Corsair.
Subject: Storage, Shows and Expos | August 9, 2017 - 09:19 PM | Allyn Malventano
Tagged: FMS 2017, ssd, S4600, S4500, ruler, pcie, NVMe, Intel, EDSFF
Yesterday we saw Samsung introduce their 'NGSFF' form factor during the keynote. Intel has been at work on a similar standard, this one named EDSFF (Enterprise & Datacenter Storage Form Factor), with the simpler working name of 'Ruler', mainly because it bears a resemblance:
Note that the etching states P4500 Series. P4500 was launched a couple of days ago and is Intel's next generation NVMe PCIe Datacenter SSD. It's available in the typical form factors (U.2, HHHL), but this new Ruler form factor contains the exact same 12 channel controller and flash counts, only arranged differently.
SFF-TA-1002 connector (aka 'Gen-Z'), shown next to an AA battery for scale. This connector spec is electrically rated for speeds up to 4th and 5th generation PCIe, so future proofing was definitely a consideration here. In short, this is a beefed up M.2 style connector that can handle more throughput and also has a few additional pins to support remote power and power-loss-protection (capacitors outside the Ruler), as well as support for activity LEDs, etc.
Here is a slide showing the layout of the Ruler. 36 flash packages can be installed, with the possibility of pushing that figure to 42.
Thermals were a main consideration in the design, and the increased surface area compared to U.2 designs (with stacked PCBs) makes for far cooler operation.
Intel's play here is fitting as much flash as possible into a 1U chassis. 1PB in a 1U is definitely a bold claim, but absolutely doable in the near future.
I'll leave you with the quick sniper shot I grabbed of their demo system. I'll be posting more details on the P4500 and P4600 series products later this week (remember, same guts as the Ruler), so stay tuned!