Subject: Storage | August 16, 2016 - 02:00 PM | Allyn Malventano
Tagged: XPoint, Testbed, Optane, Intel, IDF 2016, idf
IDF 2016 is up and running, and Intel will no doubt be announcing and presenting on a few items of interest. Of note for this Storage Editor are multiple announcements pertaining to upcoming Intel Optane technology products.
Optane is Intel's branding of their joint XPoint venture with Micron. Intel launched this branding at last year's IDF, and while the base technology is as much as 1000x faster than NAND flash memory, full solutions wrapped around an NVMe-capable controller have so far delivered roughly a 10x improvement over NAND. That's still nothing to sneeze at, and XPoint settles nicely into the performance gap between NAND and DRAM.
Since modern M.2 NVMe SSDs are encroaching on the point of diminishing returns for consumer products, Intel’s initial Optane push will be into the enterprise sector. There are plenty of use cases for a persistent storage tier faster than NAND, but most enterprise software is not currently equipped to take full advantage of the gains seen from such a disruptive technology.
XPoint die. 128Gbit of storage at a ~20nm process.
In an effort to accelerate the development and adoption of 3D XPoint optimized software, Intel will be offering enterprise customers access to an Optane Testbed. This will allow for performance testing and tuning of customers’ software and applications ahead of the shipment of Optane hardware.
I did note something interesting in Micron's FMS 2016 presentation. QD=1 random performance appears to start at ~320,000 IOPS, while the Intel demo from a year ago (first photo in this post) showed a prototype running at only 76,600 IOPS. It appears that as controller technology matures to handle the speed of raw XPoint, delivered performance climbs right along with it. Given that a NAND-based SSD only turns in 10-20k IOPS at that same queue depth, the Micron prototype represents something more like a 16-32x gain. Those with a realistic understanding of how queues work will realize that gains at such low queue depths have a significant impact on the real-world performance of these products.
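To put those QD=1 numbers in perspective, here's a quick back-of-the-envelope conversion to latency (Little's Law with a single outstanding command; the NAND figure is just the midpoint of the 10-20k range above):

```python
# At QD=1, average completion latency is simply 1 / IOPS (Little's Law
# with one command in flight). IOPS figures are from the post above;
# the NAND entry is the midpoint of the quoted 10-20k range.
for label, iops in [("Intel prototype (IDF 2015)", 76_600),
                    ("Micron prototype (FMS 2016)", 320_000),
                    ("typical NAND NVMe SSD", 15_000)]:
    latency_us = 1e6 / iops
    print(f"{label}: {iops:,} IOPS -> ~{latency_us:.1f} us per IO")
```

That works out to roughly 13 µs for the year-old Intel demo, ~3 µs for the Micron prototype, and ~67 µs for NAND, which is exactly why low-QD gains matter so much.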
The speed of 3D XPoint immediately shifts the bottleneck back to the controller, PCIe bus, and OS/software. True 1000x performance gains will not be realized until second generation XPoint DIMMs are directly linked to the CPU.
The raw die 1000x performance gains simply can't be fully realized when there is a storage stack in place (even an NVMe one). That's not to say XPoint will be slow, and based on what I've seen so far, I suspect XPoint haters will still end up burying their heads in the sand once we get a look at the performance results of production parts.
Leaked roadmap including upcoming Optane products
Intel is expected to show a demo of their own more recent Optane prototype, and we suspect similar performance gains there, as their controller tech has likely matured. We'll keep an eye out and fill you in once we've seen Intel's newer Optane goodness in action!
Subject: Storage | August 11, 2016 - 12:27 PM | Allyn Malventano
Tagged: ssd, PS5008-E8/E8T, PS5008-E8, PS5007-E7, phison, PCIe 3.0 x2, NVMe, FMS 2016, FMS, E8
I visited Phison to check out their new E8 controller:
Phison opted to take a step back from the higher performance PCIe 3.0 x4 NVMe controllers out there, offering a solution with half the lanes. PCIe 3.0 x2 can still handle 1.5 GB/s, and this controller can exceed 200,000 random IOPS. Those specs are actually in line with what most shipping x4 solutions offer today, meaning the E8 more effectively saturates its more limited connectivity. Halving the lane count also brings the controller's component cost down to that of typical SATA controllers while tripling performance, greatly reducing the cost to produce NVMe SSDs.
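For reference, here's the raw math on what two lanes of PCIe 3.0 can move (this is the raw link rate; real-world numbers land lower once packet overhead is factored in):

```python
# PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding. This is the
# raw link rate; TLP/protocol overhead eats another chunk in practice.
def pcie3_gb_per_s(lanes):
    return 8e9 * lanes * (128 / 130) / 8 / 1e9

print(f"x2: {pcie3_gb_per_s(2):.2f} GB/s raw")  # ~1.97 GB/s
print(f"x4: {pcie3_gb_per_s(4):.2f} GB/s raw")  # ~3.94 GB/s
```

After overhead, x2 delivers right around the 1.5 GB/s mark quoted for the E8.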
In addition to supporting 3D flash, the E8 is a DRAM-less controller, meaning it has a small internal SRAM cache and has been architected to not need external DRAM on the PCB. DRAM-less means even lower costs. This can only be a good thing, since high-performing NVMe parts at SATA prices will drive down the cost of even faster NVMe solutions, which is great for future buyers.
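For the curious, here's a toy sketch of the general idea behind DRAM-less designs: keep only the hot slice of the logical-to-physical map in on-chip SRAM and fetch the rest from flash on demand. The cache size and structure here are pure assumptions for illustration, not Phison's actual architecture:

```python
from collections import OrderedDict

class MapCache:
    """Illustrative SRAM-resident cache of logical->physical mappings."""
    def __init__(self, entries=4096):            # assumed SRAM budget
        self.cache, self.limit = OrderedDict(), entries

    def lookup(self, lpn):
        if lpn in self.cache:
            self.cache.move_to_end(lpn)          # refresh LRU position
            return self.cache[lpn]
        ppn = self._read_map_page(lpn)           # slow path: map lives in NAND
        self.cache[lpn] = ppn
        if len(self.cache) > self.limit:
            self.cache.popitem(last=False)       # evict least recently used
        return ppn

    def _read_map_page(self, lpn):
        return lpn  # stand-in; a real FTL reads the map page from flash
```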
Subject: Storage | August 11, 2016 - 12:06 PM | Allyn Malventano
Tagged: FMS, FMS 2016, XPoint, micron, QuantX, nand, ram
Earlier this week, Micron launched their QuantX branding for XPoint devices and gave us some good detail on the expected IOPS performance of solutions containing these new parts:
Thanks to the very low latency of XPoint, the QuantX solution sees very high IOPS at very low queue depths, and random performance scales to fully saturate PCIe 3.0 x4 with only four queued commands. Micron's own 9100 MAX SSD (reviewed here) requires QD=256 (a 64x increase) just to come close to this level of performance! At that same presentation, a PCIe 3.0 x8 QuantX device was able to double that throughput at QD=8. But what are these things going to look like?
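Before we get to that, it's worth a quick sanity check on why saturation at QD=4 is such a big deal. A sketch using Little's Law (the ~3.2 GB/s of usable x4 bandwidth and ~5 µs per IO are my assumptions, not Micron's figures):

```python
# 4KB IOPS needed to fill a PCIe 3.0 x4 link, assuming ~3.2 GB/s of
# usable bandwidth after protocol overhead:
usable_bw = 3.2e9
iops_needed = usable_bw / 4096
print(f"~{iops_needed:,.0f} IOPS to saturate x4")   # ~781,000

# Little's Law: queue depth = IOPS x latency. At an assumed ~5 us per
# IO, four outstanding commands are already enough to sustain that rate.
print(f"QD required: {iops_needed * 5e-6:.1f}")      # ~3.9
```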
The real answer is: just like modern-day SSDs. For the time being, though, we have the prototype unit pictured above. This is essentially an FPGA development board that Micron is using to prototype potential controller designs. Dedicated ASICs based on the final designs may be faster, but those take a while to ramp up volume production.
So there it is, in the flesh, nicely packaged and installed on a complete SSD. Sure it's a prototype, but Intel has promised we will see XPoint before the end of the year, and I'm excited to see this NAND-to-DRAM performance-gap-filling tech come to the masses!
Subject: Storage | August 11, 2016 - 11:18 AM | Allyn Malventano
Tagged: FMS, FMS 2016, Liqid, kingston, toshiba, phison, U.2, HHHL, NVMe, ssd
A relative newcomer this year at Flash Memory Summit was Liqid. These guys are essentially creating an ecosystem from a subset of parts. Let's start with Toshiba:
At Toshiba's booth, we spotted their XG3 being promoted as part of the Liqid solution. We also saw a similar demo at the Phison booth, meaning any M.2 parts can be included in their design. Now let's take a closer look at the full package options and what they do:
This demo, at the Kingston booth, showed a single U.2 device cranking out 835,000 4K IOPS. This essentially saturates its PCIe 3.0 x4 link with random IOs, and it actually beats the Micron 9100 that we just reviewed!
How can it pull this off? The trick is that there are actually four M.2 SSDs in that package, along with a PLX switch. The RAID must be handled on the host side, but so long as you have software that can talk to multiple drives, you'll get full speed from this part.
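A minimal sketch of what that host-side striping looks like logically (RAID-0 style round-robin; the 128KB stripe size is an assumption for illustration):

```python
STRIPE = 128 * 1024   # assumed stripe size
DRIVES = 4            # the four M.2 SSDs behind the PLX switch

def locate(byte_addr):
    """Map a logical byte address to (drive, offset on that drive)."""
    stripe_no = byte_addr // STRIPE
    drive = stripe_no % DRIVES                        # round-robin
    offset = (stripe_no // DRIVES) * STRIPE + byte_addr % STRIPE
    return drive, offset

print(locate(0))            # (0, 0)
print(locate(256 * 1024))   # (2, 0) -- third stripe lands on drive 2
```

Spread random IOs across a map like that and all four SSDs stay busy at once, which is where the aggregate numbers come from.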
More throughput can be had by switching to a PCIe 3.0 x8 link on a HHHL form factor card:
That's 1.3 million IOPS from a single HHHL device! Technically this is four SSDs, but still, that's impressively fast, and it's again saturating the bus; this time it's PCIe 3.0 x8 being pegged!
We'll be tracking Liqid's progress over the coming months, and we will definitely test these solutions as they come to market (we're not there just yet). More to follow from FMS 2016!
Subject: Storage | August 11, 2016 - 10:59 AM | Allyn Malventano
Tagged: FMS, SYS-2028U-TN24R4T+, SYS-1028U-TN10RT+, supermicro, SSG-2028R-NR48N, server, NVMe, FMS 2016
Supermicro was at FMS 2016, showing off some of their NVMe chassis:
The first model is the SYS-1028U-TN10RT+. This 1U chassis lets you hot-swap ten 2.5" U.2 SSDs, with all lanes connected directly to the host CPUs.
Supermicro's custom PCB and interposer link all 40 PCIe lanes to the motherboard / CPUs.
Need more drives installed? Next up is the SYS-2028U-TN24R4T+, which uses a pair of PCIe switches to connect 24 U.2 SSDs to the same pair of CPUs.
Need EVEN MORE drives installed? The SSG-2028R-NR48N uses multiple switches to connect 48 U.2 SSDs in a single 2U chassis! While the switches will limit the ultimate sequential throughput of the whole package to PCIe 3.0 x40, we know that when it comes to spreading workloads across multiple SSDs, bandwidth bottlenecks are not the whole story, as latency is greatly reduced for a given workload. With a fast set of U.2 parts installed in this chassis, the raw IOPS performance would likely saturate all threads / cores of the installed Xeons before it saturated the PCIe bus!
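The oversubscription math illustrates why that works out fine in practice (using the lane counts above):

```python
# 48 U.2 drives at x4 each, funneled through switches into an x40 uplink.
downstream_lanes = 48 * 4          # 192 lanes of SSD connectivity
upstream_lanes = 40                # lanes actually reaching the CPUs
print(f"{downstream_lanes / upstream_lanes:.1f}:1 oversubscription")  # 4.8:1
```

A 4.8:1 ratio sounds brutal for sequential streaming, but random-IO workloads rarely drive every SSD at full line rate simultaneously.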
More to follow as we wrap up FMS 2016!
Subject: Storage | August 10, 2016 - 02:00 PM | Allyn Malventano
Tagged: 2.5, V-NAND, ssd, Samsung, nand, FMS 2016, FMS, flash, 64-Layer, 32TB, SAS, datacenter
..now this picture has been corrected for extreme parallax and was taken in far from ideal conditions, but you get the point. Samsung's keynote is coming up later today, and I have a hunch this will be a big part of what they present. We did know 64-Layer was coming, as it was mentioned in Samsung's last earnings announcement, but confirmation is nice.
*edit* now that the press conference has taken place, here are a few relevant slides:
With 48-Layer V-NAND announced last year (and still rolling out), it's good to see Samsung pushing hard into higher capacity dies. 64-Layer enables 512Gbits (64GB) per die, and 100MB/s per die maximum throughput means even lower capacity SSDs should offer impressive sequentials.
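Some quick napkin math on what those per-die figures imply (raw NAND throughput only; controller and interface limits are not included):

```python
die_gb = 512 / 8      # 512 Gbit = 64 GB per die
die_mb_s = 100        # maximum throughput per die, per the slide

for ssd_gb in (512, 1024, 2048):
    dies = ssd_gb / die_gb
    print(f"{ssd_gb} GB SSD: {dies:.0f} dies, "
          f"up to {dies * die_mb_s / 1000:.1f} GB/s raw")
```

Even an 8-die 512GB drive has ~0.8 GB/s of raw die throughput on tap, which supports the point about lower capacities still turning in impressive sequentials.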
Samsung 48-Layer V-NAND. Pic courtesy of TechInsights.
We will know more shortly, but for now, dream of even higher capacity SSDs :)
*edit* and this just happened:
*additional edit* - here's a better picture taken after the keynote:
The 32TB model in their 2.5" form factor displaces last year's 16TB model. The drive itself is essentially identical, but the flash packages now contain 64-layer dies, doubling the available capacity of the device.
Subject: Storage | August 10, 2016 - 01:59 PM | Allyn Malventano
Tagged: FMS 2016, ssd, Seagate, Lightning, facebook, 60TB
Seagate showed off some impressive Solid State Storage at Flash Memory Summit 2016.
First up is the Nytro XM1440. This is a 2TB M.2 22110 SSD complete with enterprise firmware and power loss protection. Nice little package, but what's it for?
..well if you have 60 of them, you can put them into this impressive 1U chassis. This is Facebook's Lightning chassis (discussed yesterday). With Seagate's 2TB parts, this makes for 120TB of flash in a 1U footprint. Great for hyperscale datacenters.
Now onto what you came to see:
This is the 'Seagate 60TB SAS SSD'. It really doesn't need a unique name; the capacity takes care of that for us! This is a 3.5" form factor, 12Gb/s SAS beast of a drive.
They pulled this density off with a few tricks which I'll walk through. First was the stacking of three PCBs with flash packages on both sides. 80 packages in total.
Next up is Seagate's ONFi fan-out ASIC. This is required because you can only have so many devices connected to a single channel / bus of a given SSD controller. The ASIC acts as a switch for data between the controller and flash dies.
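A rough model of why the fan-out is needed: a controller can only address a fixed number of chip-enable targets per channel, and 80 packages blows past that budget. All counts below are assumptions for illustration, not Seagate's actual figures:

```python
channels = 8                 # assumed controller channel count
targets_per_channel = 8      # assumed chip-enable limit per channel
fanout_factor = 2            # each fan-out ASIC multiplies the targets

direct = channels * targets_per_channel
switched = direct * fanout_factor
print(f"{direct} packages direct -> {switched} with fan-out")  # 64 -> 128
```

With a 2:1 fan-out, an eight-channel controller comfortably clears the 80 packages in this drive.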
With so much flash present, we could use a bit of fault tolerance. You may recall RAISE from SandForce (who Seagate now owns). This is effectively RAID for flash dies, enabling greater resistance to individual errors across the array.
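Conceptually, RAISE works like RAID-5 applied to dies rather than whole drives: parity computed across the array lets the controller rebuild a failed member. A toy sketch:

```python
from functools import reduce

# Three "dies" worth of data plus XOR parity computed across them:
dies = [b"\x11" * 4, b"\x22" * 4, b"\x33" * 4]
parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*dies))

# If die 1 fails, rebuild it from the survivors plus the parity:
rebuilt = bytes(a ^ b ^ p for a, b, p in zip(dies[0], dies[2], parity))
assert rebuilt == dies[1]
```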
Finally we have the specs. With a dual 12 Gb/s SAS interface, the 60TB SAS SSD can handle 1.5 GB/s reads, 1.0 GB/s writes, and offers 150,000 IOPS at 4KB QD32 random (SAS tops out at QD32). The idea behind drives like these is to cram as much storage into the smallest space possible, and this is certainly a step in the right direction.
We also saw the XP7200 add-in card. I found this one interesting as it is a PCIe 3.0 x16 card with four M.2 PCIe 3.0 x4 SSDs installed, but *without* a PLX switch to link them to the host system. This is possible only in server systems supporting PCIe Bifurcation, where the host can recognize that certain sets of lanes are linked to individual components.
More to follow from FMS 2016! Press blast after the break.
Subject: Storage | August 9, 2016 - 06:44 PM | Jeremy Hellstrom
Tagged: ssd t3, Samsung, portable storage
Just because you are on the road, there is no reason to subject yourself to HDD speeds when transferring files. Not only will an SSD be quieter and more resilient, but the USB 3.1 Gen 1 Type-C port theoretically offers up to 450MB/s transfer speeds. This particular 2TB portable SSD uses the same MGX controller as the 850 EVO, and the NAND is Samsung's 48-layer TLC V-NAND. The Tech Report previously tried out the T1 model, so their expectation was that this drive would improve performance in addition to offering larger capacities. Does it live up to expectations? Find out in their full review.
"Not all new SSDs go inside your computer. We take a quick look at Samsung's latest V-NAND-powered external drive, the Portable SSD T3, to see what it's like to put 2TB of fast storage in one's pocket."
Here are some more Storage reviews from around the web:
- Corsair Neutron XTi SSD Review (480GB) @ The SSD Review
- Kingston SSDNow UV400 SSD Review (480GB) @ The SSD Review
- Crucial MX300 750GB Limited Edition SSD Review @ NikKTech
- Apricorn Aegis Secure Key 3.0 Review – Data Protection For Every Security Need @ The SSD Review
- Kingston 512GB SDXC Card @ The SSD Review
- QNAP TurboNAS TS-531P-8G NAS Server Review @ NikKTech
- Synology DiskStation DS916+ 4-Bay SMB NAS @ eTeknix
- QNAP TS-453A QTS-Ubuntu Combo NAS @ eTeknix
- Synology DS916+ 4-bay NAS @ techPowerUp
Subject: Storage | August 9, 2016 - 05:59 PM | Allyn Malventano
Tagged: XPoint, Worm, storage, ssd, RocksDB, Optane, nand, flash, facebook
At their FMS 2016 Keynote, Facebook gave us some details on the various storage technologies that fuel their massive operation:
In the four corners above, they covered the full spectrum of storing bits: from NVMe to Lightning (huge racks of flash (JBOF)), to AVA (quad M.2 22110 NVMe SSDs), to the new kid on the block, WORM storage. WORM stands for Write Once Read Many, and as you might imagine, Facebook has lots of archival data that they would like to be able to read quickly, so this sort of storage fits the bill nicely.

How do you pull off massive capacity in flash devices? QLC. Forget MLC or TLC; QLC stores four bits per cell, meaning there are 16 individual voltage states for each cell. This requires extremely precise writing techniques, and reads must compensate for cell drift over time. While this was a near impossibility with planar NAND, 3D NAND has more volume in which to store those electrons, so one can trade the endurance gains of 3D NAND for higher bit density, ultimately enabling SSDs upwards of ~100TB in capacity. The catch is that they are rated at only ~150 write cycles. This is fine for archival storage with WORM workloads, and you still maintain NAND speeds when reading that data later on, meaning that decade-old Facebook post will appear in your browser just as quickly as the one you posted ten minutes ago.
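The QLC numbers above are worth running yourself, using the post's own ~100TB and ~150-cycle figures:

```python
bits_per_cell = 4
print(2 ** bits_per_cell, "voltage states per cell")   # 16

# ~150 P/E cycles sounds tiny, but at ~100TB per drive (figures from
# the post above), lifetime write headroom is still enormous:
capacity_tb, pe_cycles = 100, 150
print(f"~{capacity_tb * pe_cycles / 1000:.0f} PB of lifetime writes")  # ~15 PB
```

For a write-once archive, ~15 PB of endurance is effectively never a concern.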
Next up was a look at some preliminary Intel Optane SSD results using RocksDB. Compared to a P3600, the prototype Optane part offers impressive gains in Facebook's real-world workload. Throughput jumped by 3x, and latency dropped to 1/10th of its previous value. These are impressive gains given this fairly heavy mixed workload.
More to follow from FMS 2016!
Subject: Storage | August 9, 2016 - 03:33 PM | Allyn Malventano
Tagged: XPoint, QuantX, nand, micron
Micron just completed their keynote address at Flash Memory Summit, and as part of the presentation, we got our first look at raw queue-depth-scaled IOPS performance figures from devices utilizing XPoint memory:
These are the performance figures from a U.2 device with a PCIe 3.0 x4 link. Note the outstanding ramp up to full saturation of the bus at a QD of only 4. Slower flash devices require much more parallelism and a deeper queue to achieve sufficient IOPS throughput to saturate that same bus. That 'slow' device on the bottom there, I'm pretty certain, is Micron's own 9100 MAX, which was the fastest thing we had tested to date, and it's being walked all over by this new XPoint prototype!
Ok, so that's damn fast, but what if you had an add-in card with PCIe 3.0 x8?
Ok, now that's just insane! While the queue had to climb to ~8 to reach these figures, that's 1.8 MILLION IOPS from a single HHHL add-in card. That's greater than 7 GB/s worth of 4KB random performance!
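That throughput claim checks out with simple arithmetic:

```python
iops, io_size = 1_800_000, 4096             # 1.8M IOPS at 4KB per IO
print(f"{iops * io_size / 1e9:.2f} GB/s")   # 7.37 GB/s of random transfer
```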
In addition to the crazy throughput and IOPS figures, we also see latencies running at 1/10th that of flash-based NVMe devices.
..so it appears that while the cell-level performance of XPoint boasts 1000x improvements over flash, once you implement it into an actual solution that must operate within the bounds of current systems (NVMe and PCIe 3.0), we currently get only a 10x improvement over NAND flash. Given how fast NAND already is, 10x is no small improvement, and XPoint still opens the door for further improvement as the technology and implementations mature over time.
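A simple latency-stack model shows why that happens: the media time essentially vanishes, but everything wrapped around it does not. All numbers below are illustrative assumptions, not measurements:

```python
# Total IO latency = media time + everything else (controller, PCIe,
# NVMe driver, OS). Speeding up only the media runs into Amdahl's Law.
nand_media_us, xpoint_media_us = 80.0, 0.08   # assumed 1000x media gain
stack_overhead_us = 8.0                       # assumed fixed stack overhead

nand_total = nand_media_us + stack_overhead_us
xpoint_total = xpoint_media_us + stack_overhead_us
print(f"{nand_total / xpoint_total:.1f}x system-level gain")  # ~10.9x
```

This is also why XPoint DIMMs hanging directly off the CPU, bypassing most of that stack, are where the bigger multiples should eventually show up.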
More to follow as FMS continues!