Back in April of this year we first took a look at the storage performance of the then-new X470 chipset for the 2nd generation of Ryzen processors. Allyn dove into NVMe RAID performance and also a new offering called StoreMI. Based on a software tiered storage solution from Enmotus, StoreMI was a way for AMD to offer storage features and capabilities matching or exceeding that of Intel’s mainstream consumer platforms without the need for extensive in-house development.
Allyn described the technology well:
AMD has also launched their answer to Intel RST caching. StoreMI is actually a more flexible solution that offers some unique advantages over Intel's approach. Instead of copying a section of HDD data to the SSD cache, StoreMI combines the total available storage space of both the HDD and SSD, and is able to seamlessly shuffle the more active data blocks to the SSD. StoreMI also offers more cache capacity than Intel - up to 256GB SSD caches are possible (60GB limit on Intel). Lastly, the user can opt to donate 2GB of RAM as an additional caching layer.
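To make the distinction concrete, here is a minimal sketch (in Python, purely illustrative and not AMD's or Enmotus' actual implementation) of how tiering differs from caching: blocks are tracked by access "heat" and the hottest ones are *moved* to the fast tier rather than copied, so the SSD and HDD capacities add together into one pool.

```python
# Illustrative block-tiering sketch. SSD_BLOCKS and HDD_BLOCKS are made-up
# tiny capacities; a real tiering driver works at a much finer granularity.
SSD_BLOCKS = 4
HDD_BLOCKS = 16

access_counts = {}  # block id -> access count ("heat")

def access(block):
    """Record one access to a block."""
    access_counts[block] = access_counts.get(block, 0) + 1

def rebalance():
    """Return (ssd_set, hdd_set): the hottest blocks live on the SSD tier.

    Unlike a cache, blocks are moved, not duplicated, so the usable
    capacity is SSD_BLOCKS + HDD_BLOCKS, not just HDD_BLOCKS.
    """
    ranked = sorted(access_counts, key=access_counts.get, reverse=True)
    ssd = set(ranked[:SSD_BLOCKS])
    hdd = set(ranked[SSD_BLOCKS:])
    return ssd, hdd
```

After a few calls to `access()`, `rebalance()` places frequently touched blocks on the SSD tier and demotes cold ones to the HDD tier, which is the behavior the quote above describes as "seamlessly shuffling the more active data blocks."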
We recently did some testing with StoreMI, once the 2nd generation Threadripper processor evaluation was out of the way, to get a feel for the current state of the software offering and whether or not it could really close the gap with the Optane caching solutions that Intel has been putting forward for enthusiasts.
A little Optane for your HDD
Intel's Optane Memory caching solution, launched in April of 2017, was a straightforward feature. On supported hardware platforms, consisting of 7th and 8th generation Core processor-based computers, users could add a 16GB or 32GB Optane M.2 module to their PC and enable acceleration for their slower boot device (generally a hard drive). Beyond that, there weren't any additional options; you could only enable or disable the caching solution.
Users who were looking for more flexibility, however, were out of luck. If you already had a fast boot device, such as an NVMe SSD, you had no use for these Optane Memory modules, even if you had a slow hard drive in your system for mass storage that you wanted to speed up.
At GDC this year, alongside the announcement of 64GB Optane Memory modules, Intel announced that it was bringing support for secondary drive acceleration to the Optane Memory application.
Now that we've gotten our hands on this new 64GB module and the appropriate software, it's time to put it through its paces and see if it was worth the wait.
The full test setup is as follows:
|Test System Setup|
|Processor||Intel Core i7-8700K|
|Motherboard||Gigabyte H370 Aorus Gaming 3|
|Memory||16GB Crucial DDR4-2666 (running at DDR4-2666)|
|Storage||Intel Optane SSD 800P; Intel Optane Memory 64GB + 1TB Western Digital Black|
|Graphics Card||NVIDIA GeForce GTX 1080Ti 11GB|
|Graphics Drivers||NVIDIA 397.93|
|Power Supply||Corsair RM1000x|
|Operating System||Windows 10 Pro x64 RS4|
In coming up with test scenarios to properly evaluate drive caching on a secondary, mass storage device, we had a few criteria. First, we were looking for scenarios that require lots of storage, meaning they wouldn't fit on a smaller SSD. Second, the applications must also be sensitive to storage performance, so that faster storage makes a measurable difference.
Subject: Storage | July 3, 2012 - 12:21 AM | Tim Verry
Tagged: ssd, slc, server, sandisk, PCIe SSD, flash, enterprise, caching
Flash storage company SanDisk has recently jumped into the world of enterprise PCIe caching SSDs – what it is calling Solid State Accelerators. Currently, the company is offering 200GB and 400GB models under its Lightning PCIe series. The SSDs feature a proprietary SanDisk controller driving 24nm SLC NAND flash, a PCIe 2.0 x4 interface, and a maximum power draw of 15 watts.
The Lightning Accelerators use NAND flash from SanDisk's own foundry and offer a large performance boost for servers and workstations over hard drives and SATA SSDs. The drives are capable of 410 MB/s sequential reads or 110,000 IOPS. Further, when using 4KB and 8KB blocks, the drives can reach 23,000 and 17,000 read/write IOPS respectively. Other specifications include an average response time of 245 microseconds and maximum response times under 30 milliseconds. The Solid State Accelerators also feature sustained read and write latencies as low as 50 microseconds.
SanDisk has built the drives so that they can be configured as boot drives, storage drives, or caching drives. The company supports up to five drives in a single system, for a maximum of 2TB of flash storage. In addition, SanDisk is offering its FlashSoft software, which allows the Lightning Accelerators to be used as caching drives on Windows-based systems. Unfortunately, that is an additional cost not included in the already pricey SSDs (good thing for corporate expense accounts!).
Speaking of pricing, the 200GB LP206M has an MSRP of $1,350 while the 400GB LP406M has an MSRP of $2,350. Both cards carry five-year warranties and an MTBF rating of 2 million hours. You can find more information on the SanDisk website.
It will be interesting to see how this SanDisk accelerator stacks up against the likes of the Intel 910 and Fusion-io drives! The Fusion-io ioFX, for example, gives you 420GB of MLC NAND for $2,495, which works out such that SanDisk offers a slightly lower cost per gigabyte while also using SLC flash. We will have to wait for some independent reviews to say which drive is actually faster, however.
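The cost-per-gigabyte comparison above is simple enough to check by hand; a quick sketch (model names and MSRPs taken from the figures quoted in this post) confirms that SanDisk comes in slightly cheaper per gigabyte despite using SLC:

```python
# Cost-per-gigabyte check using the MSRPs and capacities quoted above.
drives = {
    "SanDisk LP406M (SLC)": (2350, 400),   # (MSRP in USD, capacity in GB)
    "Fusion-io ioFX (MLC)": (2495, 420),
}

def cost_per_gb(price, capacity_gb):
    return price / capacity_gb

for name, (price, capacity) in drives.items():
    print(f"{name}: ${cost_per_gb(price, capacity):.2f}/GB")
```

That works out to roughly $5.88/GB for the SanDisk card versus roughly $5.94/GB for the ioFX, a narrow edge before any performance differences are considered.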
Subject: Storage | April 7, 2012 - 11:15 PM | Tim Verry
Tagged: Intel SRT, Intel, caching, 313, 25nm
Intel is continuing its SRT caching technology push with two new single-level cell (SLC) SSDs in both 2.5” SATA and mSATA form factors. The new Intel 313 series SSDs come in 20 GB and 24 GB capacities and are available for purchase now. Intel hopes that vendors will integrate the caching drives into their machines to improve performance while still offering lots of storage via a mechanical hard drive. The company further advertises the drives as "ultrabook ready."
The specifications can be found in the chart below, but they do seem a little strange in that the larger capacity drive is actually slower in 4K random reads and sequential reads (which does not seem right). After all, who would pay extra money for a slower caching drive (and a measly 4GB of extra capacity) when read speeds matter more than write speeds for general desktop performance?
|Drive||Intel 311||Intel 313 20 GB||Intel 313 24 GB|
|Random 4K Read IOPS||37,000||36,000||33,000|
|Random 4K Write IOPS||3,300||3,300||4,000|
|Sequential Read||200 MB/s||220 MB/s||160 MB/s|
|Sequential Write||105 MB/s||100 MB/s||115 MB/s|
|Price ($USD)||119.99 (retail)||119.99 (retail)||139.99 (retail)|
Compared to the previous generation "Larsen Creek" Intel 311 series SSD, the new "Hawley Creek" drives offer faster sequential reads (the 20 GB model) and faster sequential writes (the 24 GB model). The 24 GB Intel 313 drive does manage to beat both the 20 GB Hawley Creek drive and the previous generation Intel 311 drive in 4K random writes, but otherwise the new drives are equal to, or slower than, the previous generation in 4K random IOPS (input/output operations per second). Considering the new drives are retailing for the same price or more than the previous generation, the new Intel 313 SSDs really are not looking all that promising, despite the move to a 25nm NAND manufacturing process.
I am personally waiting for reviews to come out on the new Intel 313 drives before making a final decision, but they are nonetheless perplexing. More information is available here (PDF).
*Edit by Allyn*:
The 'odd' differences in performance are due to the channel routing. The 20GB model has the standard Intel 3Gb/sec controller using 5 of its 10 data channels (similar to the old 40GB X25-V). Each of those channels is routed to a 4GB SLC die, which lays out to 5 TSOP packages with 1 die each. The 24GB model uses the same controller and channel layout, but those 5 channels are routed to 6x 4GB dies. This is an odd configuration: assuming Intel kept the same PCB layouts, the 2.5" model has provision for additional mounted TSOPs, but the mSATA PCB is too tight on room, meaning they would have had to shift one of the 5 flash packages to a double-stacked configuration. More to follow on that once we see these in the flesh.