Subject: Storage | August 10, 2016 - 01:59 PM | Allyn Malventano
Tagged: FMS 2016, ssd, Seagate, Lightning, facebook, 60TB
Seagate showed off some impressive Solid State Storage at Flash Memory Summit 2016.
First up is the Nytro XM1440. This is a 2TB M.2 22110 SSD complete with enterprise firmware and power loss protection. Nice little package, but what's it for?
Well, if you have 60 of them, you can put them into this impressive 1U chassis. This is Facebook's Lightning chassis (discussed yesterday). With Seagate's 2TB parts, this makes for 120TB of flash in a 1U footprint. Great for hyperscale datacenters.
Now onto what you came to see:
This is the 'Seagate 60TB SAS SSD'. It really doesn't need a unique name because that capacity takes care of that for us! This is a 3.5" form factor SAS 12Gbit beast of a drive.
They pulled this density off with a few tricks which I'll walk through. First was the stacking of three PCBs with flash packages on both sides. 80 packages in total.
Next up is Seagate's ONFi fan-out ASIC. This is required because you can only have so many devices connected to a single channel / bus of a given SSD controller. The ASIC acts as a switch for data between the controller and flash dies.
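To see why a fan-out ASIC is needed, here is a back-of-the-envelope sketch. The channel count and chip-enable limit below are assumptions for illustration; Seagate did not disclose the actual figures for this controller:

```python
PACKAGES = 80            # total flash packages in the drive (from the article)
CHANNELS = 16            # assumed controller channel count
MAX_CE_PER_CHANNEL = 4   # assumed chip-enable limit per channel

packages_per_channel = PACKAGES / CHANNELS
print(f"{packages_per_channel:.0f} packages must share each channel")

# Without a fan-out switch, the controller could only address this many packages:
native_limit = CHANNELS * MAX_CE_PER_CHANNEL
print(f"native limit: {native_limit} packages; the fan-out ASIC bridges the gap to {PACKAGES}")
```

Under these assumed numbers the controller alone tops out at 64 packages, short of the 80 on board, which is exactly the gap a switch sitting between controller and flash can close.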
With so much flash present, we could use a bit of fault tolerance. You may recall RAISE from SandForce (who Seagate now owns). This is effectively RAID for flash dies, enabling greater resistance to individual errors across the array.
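SandForce never published RAISE's internals, but the general idea is die-level parity, similar in spirit to this toy XOR example (purely illustrative, not the actual RAISE algorithm):

```python
from functools import reduce

# Toy model: one parity "die" protects a stripe of data dies via XOR,
# so the contents of any single failed die can be rebuilt from the survivors.
data_dies = [b"\x12\x34", b"\xab\xcd", b"\x0f\xf0"]
parity = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), data_dies)

# Simulate losing die 1 and rebuilding it from parity plus the remaining dies:
survivors = [data_dies[0], data_dies[2], parity]
rebuilt = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), survivors)
assert rebuilt == data_dies[1]  # the lost die's data is recovered
```

The same XOR property that powers RAID-5 works at die granularity, which is what lets a drive with 80 packages shrug off localized flash failures.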
Finally we have the specs. With a dual 12 Gbit SAS interface, the 60TB SAS SSD can handle 1.5 GB/s reads, 1.0 GB/s writes, and offers 150,000 IOPS at 4KB QD32 random (SAS tops out at QD32). The idea behind drives like these is to cram as much storage into the smallest space possible, and this is certainly a step in the right direction.
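As a quick sanity check on those figures, 4KB random throughput at the rated IOPS works out well below the sequential rating, as expected:

```python
iops = 150_000
block_bytes = 4096  # 4KB random transfers

mb_per_s = iops * block_bytes / 1e6
print(f"{mb_per_s:.1f} MB/s at 4KB random")  # ~614 MB/s, vs. 1500 MB/s sequential reads
```

So even at QD32 the drive's random workload moves less than half the data of its sequential ceiling, which is typical for flash of this era.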
We also saw the XP7200 add-in card. I found this one interesting as it is a PCIe 3.0 x16 card with four M.2 PCIe 3.0 x4 SSDs installed, but *without* a PLX switch to link them to the host system. This is possible only in server systems supporting PCIe Bifurcation, where the host can recognize that certain sets of lanes are linked to individual components.
More to follow from FMS 2016! Press blast after the break.
Subject: Storage | August 9, 2016 - 05:59 PM | Allyn Malventano
Tagged: XPoint, Worm, storage, ssd, RocksDB, Optane, nand, flash, facebook
At their FMS 2016 Keynote, Facebook gave us some details on the various storage technologies that fuel their massive operation:
In the four corners above, they covered the full spectrum of storing bits: from NVMe to Lightning (huge racks of flash (JBOF)), to AVA (quad M.2 22110 NVMe SSDs), to the new kid on the block, WORM storage. WORM stands for Write Once Read Many, and as you might imagine, Facebook has lots of archival data that they would like to be able to read quickly, so this sort of storage fits the bill nicely.

How do you pull off massive capacity in flash devices? QLC. Forget MLC or TLC; QLC stores four bits per cell, meaning there are 16 individual voltage states for each cell. This requires extremely precise writing techniques, and reads must appropriately compensate for cell drift over time. While this was a near impossibility with planar NAND, 3D NAND has more volume to store those electrons, meaning one can trade the endurance gains of 3D NAND for higher bit density, ultimately enabling SSDs upwards of ~100TB in capacity.

The catch is that these cells are rated at only ~150 write cycles. This is fine for archival storage with WORM workloads, and you still maintain NAND speeds when it comes to reading that data later on, meaning that decade-old Facebook post will appear in your browser just as quickly as the one you posted ten minutes ago.
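The bits-to-states relationship mentioned above is simple binary math: each extra bit per cell doubles the number of voltage states the cell must distinguish.

```python
# Each additional bit per cell doubles the voltage states (2^bits),
# which is why QLC needs far tighter read/write precision than SLC.
for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4)]:
    print(f"{name}: {bits} bit(s)/cell -> {2 ** bits} voltage states")
```

Going from TLC's 8 states to QLC's 16 halves the voltage margin between adjacent states, which is the root of both the precision requirement and the ~150-cycle endurance rating.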
Next up was a look at some preliminary Intel Optane SSD results using RocksDB. Compared to a P3600, the prototype Optane part offers impressive gains in Facebook's real-world workload. Throughput jumped by 3x, and latency reduced to 1/10th of its previous value. These are impressive gains given this fairly heavy mixed workload.
More to follow from FMS 2016!
Subject: Storage | August 8, 2016 - 10:40 AM | Sebastian Peak
Tagged: storage, ssd, solid state drive, PCIe 3.0 x8, PCI-E 3.0, NVMe2032, NVMe2016, NVMe, Microsemi, Flashtec
Microsemi's Flashtec NVMe SSD controllers are now in production, and as Computer Base reports (Google-translated version of the page available here), these controllers use twice as many PCIe lanes as current offerings, with an x8 PCI-E 3.0 connection, and can support up to 20 TB of flash capacity.
Image credit: Computer Base
"The NVMe controllers are destined for the professional high-performance segment and work with PCIe 3.0 x8 or two PCIe 3.0 x4 links. The NVMe2032 has 32 memory channels and the NVMe2016 has 16. When using 256-Gbit flash, SSDs with up to 20 terabytes of storage can be implemented."
The 32-channel NVMe2032 boasts up to 1 million IOPS in 4K random read performance, and the controller supports DDR4 memory for faster cache performance. The announcement of the availability of these chips comes just before the start of Flash Memory Summit, which our own Allyn Malventano will be attending. Stay tuned for more flashy SSD news to come!
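Microsemi did not detail the die loadout, but a quick sketch shows how 256-Gbit dies add up to that 20 TB figure (assuming decimal terabytes):

```python
die_gbit = 256
die_gb = die_gbit / 8          # 32 GB per die
target_tb = 20

dies_needed = target_tb * 1000 / die_gb   # assuming decimal TB
print(f"{dies_needed:.0f} dies to reach {target_tb} TB")

# Spread across each controller's channel count:
for channels in (16, 32):
    print(f"{channels} channels -> {dies_needed / channels:.1f} dies per channel")
```

Roughly 625 dies, or about 20 dies per channel on the 32-channel NVMe2032, which is plausible with stacked packages and fan-out on each channel.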
Subject: Storage | August 1, 2016 - 11:03 PM | Scott Michaud
Tagged: ssd, Samsung, enterprise ssd
Allyn first mentioned this device last year, but they're apparently now shipping for a whopping $10,000 USD. To refresh, the PM1633a is an SSD from Samsung that packs 15.36TB into a 2.5-inch form factor. According to Samsung, it does this by stacking 16 dies, each containing 48 layers of flash cells, into a 512GB package.
It's unclear how many packages are installed in the device, because we don't know how much over-provisioning Samsung provides, but the advertised capacity equates to exactly 30 packages. Update @ 11:30pm: Turns out I was staring right at it in the old press release. The drive has 32 packages, for 16384 GB of raw flash; the difference from the advertised capacity goes to over-provisioning.
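For the curious, the over-provisioning works out like this:

```python
raw_gb = 32 * 512     # 32 packages of 512GB each
usable_gb = 15360     # advertised 15.36 TB
spare_gb = raw_gb - usable_gb

print(f"{raw_gb} GB raw, {usable_gb} GB usable")
print(f"{spare_gb} GB spare = {spare_gb / usable_gb:.1%} over-provisioning")
```

That is roughly 7% set aside for wear leveling and garbage collection, a fairly typical ratio for a consumer-leaning enterprise part.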
Image Credit: Samsung
Down at CDW, they are selling them for $10,311.99 USD with the option to lease for $321.73 / month. That's only 2.1c/GB... per month... for probably three whole years. No Ryan, that doesn't count. The warranty period doesn't seem to be listed, but Samsung will cover up to 15.36TB per day in writes. I mean, we knew it would be expensive, given its size and performance. At least it's only ~65c/GB.
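Checking that napkin math (the purchase and lease figures appear to use slightly different GB conventions, so both are shown):

```python
price, lease = 10311.99, 321.73

for label, gb in [("decimal GB", 15.36 * 1000), ("TB x 1024", 15.36 * 1024)]:
    buy_c = price / gb * 100    # cents per GB to purchase
    rent_c = lease / gb * 100   # cents per GB per month to lease
    print(f"{label}: {buy_c:.1f} c/GB to buy, {rent_c:.2f} c/GB/month to lease")
```

The ~65c/GB purchase figure lines up with counting 15.36 TB as 15.36 × 1024 GB, while the 2.1c/GB monthly lease figure matches the plain decimal reading.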
Subject: Storage | August 1, 2016 - 03:14 PM | Sebastian Peak
Tagged: M8PeG, ssd, solid state drive, preview, plextor, nand, M8Pe, M.2, CES 2016, M8PeY
Plextor announced their first M.2 SSD at CES 2016, and now the M8Pe series is officially set for a release this month. Computer Base (German language) had a chance to preview the new drive, and supplied a detailed look at the M.2 version (this is model M8PeG, and the version with a riser card is M8PeY).
The Plextor M8PeG SSD (Image credit: Computer Base)
Even the M.2 form-factor version of the SSD includes a heatsink, which Plextor warns creates incompatibility with notebooks as the M8PeG is 4.79 mm in height with the heatsink in place.
Specifications for the drives are as follows:
| | Plextor M8PeG | Plextor M8PeY |
|---|---|---|
| Controller | Marvell 88SS1093 (8-channel) | Marvell 88SS1093 (8-channel) |
| DRAM | 512MB LPDDR3 (1024MB variant) | 512MB LPDDR3 (1024MB variant) |
| Capacity | 128 GB, 256 GB, 512 GB | 128 GB, 256 GB, 512 GB |
| NAND | Toshiba 15nm Toggle 2.0 MLC | Toshiba 15nm Toggle 2.0 MLC |
| Form Factor | M.2 (80 mm) | PCIe card (HH, HL) |
| Interface | PCIe 3.0 x4 | PCIe 3.0 x4 |
So what did Computer Base have to report with their hands-on preview of the new drive? Here's their CrystalDiskMark result:
(Image credit: Computer Base)
Naturally we'll have to wait for a full-scale AllynReview™ to get a better idea of performance in all situations, but until then it's good to know we'll soon have another option to consider in the M.2 SSD market. As to pricing, we don't have anything just yet.
The M8Pe SSD lineup (Image credit: Computer Base)
Introduction, Packaging, and Internals
Being a bit of a storage nut, I have run into my share of failed and/or corrupted hard drives over the years. I have therefore used many different data recovery tools to try to get that data back when needed. Thankfully, I now employ a backup strategy that should minimize the need for such a tool, but there will always be instances of fresh data on a drive that went down before a recent backup took place, or a neighbor or friend who did not have a backup at all.
I’ve got a few data recovery pieces in the cooker, but this one will be focusing on ‘physical data recovery’ from drives with physically damaged or degraded sectors and/or heads. I’m not talking about so-called ‘logical data recovery’, where the drive is physically fine but has suffered some corruption that makes the data inaccessible by normal means (undelete programs also fall into this category).

There are plenty of ‘hard drive recovery’ apps out there, and most if not all of them claim seemingly miraculous results on your physically failing hard drive. While there are absolutely success stories out there (most plastered all over testimonial pages at those respective sites), one must take those with an appropriate grain of salt. Someone who just got their data back with a <$100 program is going to be very vocal about it, while those who had their drive permanently fail during the process are likely to go cry quietly in a corner while saving up for a clean-room capable service to repair their drive and attempt to get their stuff back. I'll focus more on the exact issues with using software tools for hardware problems later in this article, but for now, surely there has to be some way to attempt these first few steps of data recovery without resorting to software tools that can potentially cause more damage?
Well now there is. Enter the RapidSpar, made by DeepSpar, who hope this little box can bridge the gap between dedicated data recovery operations and home users risking software-based hardware recoveries. DeepSpar is best known for making advanced tools used by big data recovery operations, so they know a thing or two about this stuff. I could go on and on here, but I’m going to save that for after the intro page. For now let’s get into what comes in the box.
Note: In this video, I read the MFT prior to performing RapidNebula Analysis. It's optimal to reverse those steps. More on that later in this article.
Introduction, Specifications, and Packaging
Everyone expects SSD makers to keep pushing out higher and higher capacity SSDs, but the thing holding them back has been a lack of sufficient market demand for those capacities. With that, it appears Samsung has decided it was high time for a 4TB model of their 850 EVO. Today we will be looking at this huge capacity point, paying close attention to any performance dips that sometimes result from pushing a given SSD controller / architecture to extreme capacities.
This new 4TB model benefits from the higher density of Samsung’s 48-layer V-NAND. We performed a side-by-side comparison of 32 and 48 layer products back in March, and found the newer flash brought Latency Percentile profiles closer to those of the MLC-equipped Pro model than the 32-layer (TLC) EVO could manage:
Latency Percentile showing reduced latency of Samsung’s new 48-layer V-NAND
We’ll be looking into all of this in today’s review, along with trying our hand at some new mixed paced workload testing, so let’s get to it!
Introduction, Specifications and Packaging
It's been too long since we took a look at enterprise SSDs here at PC Perspective, so it's high time we get back to it! The delay has stemmed from some low-level re-engineering of our test suite to unlock some really cool QoS and Latency Percentile possibilities involving PACED workloads. We've also done a lot of work to distill hundreds of hours of test results into fewer yet more meaningful charts. More on that as we get into the article. For now, let's focus on today's test subject:
Behold the Micron 9100 MAX Series. Inside that unassuming 2.5" U.2 enclosure sits 4TB of flash and over 4GB of DRAM. It's capable of 3 GB/s reads, 2 GB/s writes, and 750,000 IOPS. All from inside that little silver box! There's not a lot more to say here because nobody is going to read much past that 3/4 MILLION IOPS figure I just slipped in, so I'll just get into the rest of the article now :).
The 9100s come in two flavors and form factors. The MAX series (1.2TB and 2.4TB in the above list) comes with very high levels of performance and endurance, while the PRO series comes with lower overprovisioning, enabling higher capacity points for a given flash loadout (800GB, 1.6TB, 3.2TB). Those five different capacity / performance points are available in both PCIe (HHHL) and U.2 (2.5") form factors, making for 10 total available SKUs. All products are PCIe 3.0 x4, using NVMe as their protocol. They should all be bootable on systems capable of UEFI/NVMe BIOS enumeration.
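The SKU matrix described above is just the cross product of capacity points and form factors:

```python
from itertools import product

# Five capacity/performance points across the MAX and PRO tiers:
capacities = ["1.2TB MAX", "2.4TB MAX", "800GB PRO", "1.6TB PRO", "3.2TB PRO"]
form_factors = ["HHHL PCIe card", "2.5-inch U.2"]

skus = [f"Micron 9100 {cap} ({ff})" for cap, ff in product(capacities, form_factors)]
print(len(skus))  # 10 total SKUs
```

Five capacities times two form factors yields the 10 SKUs mentioned in the article.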
Idle power consumption is a respectable 7W, while active consumption is selectable between 20W, 25W, and 'unlimited' settings. While >25W operation technically exceeds the PCIe specification for non-GPU devices, we know that the physical slot is capable of 75W for GPUs, so why can't SSDs have some more fun too! That said, even in unlimited mode, the 9100s should still stick relatively close to 25W, and in our testing they did not exceed 29W in any workload. Detailed power testing is coming in future enterprise articles, but for now, the extent will be what was measured and noted in this paragraph.
Our 9100 MAX samples came only in anti-static bags, so no fancy packaging to show here. Enterprise parts typically come in white/brown boxes with little flair.
Pre and Post Update Testing
Samsung launched their 840 Series SSDs back in May of 2013, which is over three years ago as of this writing. They were well received as a budget option but were rapidly eclipsed by the follow-on release of the 840 EVO.
A quick check of our test 840 revealed inconsistent read speeds.
We broke news of Samsung’s TLC SSDs being affected by time-based degradation of read speeds in September of 2014, and since then we have seen nearly every affected product patched by Samsung, with one glaring exception: the original 840 SSD. While the 840 EVO was a TLC SSD with a built-in SLC static data cache, the preceding 840 was a pure TLC drive. With the focus being on the newer / more popular drives, I had done only spot-check testing of our base 840 sample here at the lab, but once I heard there was finally a patch for this unit, I set out to do some pre-update testing so that I could gauge any improvements to read speed from this update.
As a refresher, ‘stale’ data on an 840 EVO would see reduced read speeds over a period of months after those files were written to the drive. This issue was properly addressed in a firmware update issued back in April of 2015, but there were continued grumbles from owners of other affected drives, namely the base model 840. With the Advanced Performance Optimization patch being issued so long after the other drives were patched, I’m left wondering why there was such a long delay on this one. Differences in the base-840’s demonstration of this issue revealed themselves in my pre-patch testing:
Subject: Storage | June 13, 2016 - 03:46 AM | Allyn Malventano
Tagged: XPoint, tlc, Stony Beach, ssd, pcie, Optane, NVMe, mlc, Mansion Beach, M.2, kaby lake, Intel, imft, Brighton Beach, 3DNAND, 3d nand
For those unaware, XPoint (spoken 'cross-point') is a new type of storage technology that is persistent like NAND Flash but with speeds closer to that of RAM. Intel's brand name for devices implementing XPoint is Optane.
Starting at the bottom of the slide, we see a new 'System Acceleration' segment with a 'Stony Beach PCIe/NVMe m.2 System Accelerator'. This is likely a new take on Larson Creek, which was a 20GB SLC SSD launched in 2011. This small yet very fast SLC flash was tied into the storage subsystem via Intel's Rapid Storage Technology and acted as a caching tier for HDDs, which comprised most of the storage market at that time. Since Optane excels at random access, even a PCIe 3.0 x2 part could outmaneuver the fastest available NAND, meaning these new System Accelerators could act as a caching tier for Flash-based SSDs or even HDDs. These accelerators can also be good for boosting the performance of mobile products, potentially enabling the use of cheaper / lower performing Flash / HDD for bulk storage.
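Intel has not detailed how Stony Beach caching will work, but the general shape of a read-caching tier, where a small fast device absorbs hot reads in front of slower bulk storage, can be sketched like this. This is purely illustrative and not Intel's RST implementation:

```python
from collections import OrderedDict

class CacheTier:
    """Toy read cache: a small fast device (think Optane accelerator)
    sitting in front of a large, slower backing store."""
    def __init__(self, backing, capacity_blocks):
        self.backing = backing
        self.cap = capacity_blocks
        self.cache = OrderedDict()   # maintained in LRU order

    def read(self, lba):
        if lba in self.cache:
            self.cache.move_to_end(lba)      # fast path: cache hit
            return self.cache[lba]
        data = self.backing[lba]             # slow path: fetch from bulk storage
        self.cache[lba] = data
        if len(self.cache) > self.cap:
            self.cache.popitem(last=False)   # evict least-recently-used block
        return data

# Hypothetical backing store of three blocks, fronted by a two-block cache:
disk = {0: b"boot", 1: b"apps", 2: b"bulk"}
tier = CacheTier(disk, capacity_blocks=2)
tier.read(0)
tier.read(0)  # second read is served from the fast tier
```

The win comes from random access: since XPoint excels at exactly the small, scattered reads that hurt NAND and HDDs most, even a narrow PCIe 3.0 x2 cache device can front a much larger, slower store effectively.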
Skipping past the mainstream parts for now, enthusiasts can expect to see Brighton Beach and Mansion Beach, which are Optane SSDs linked via PCIe 3.0 x2 or x4, respectively. Not just accelerators, these products should have considerably more storage capacity, which may bring costs fairly high unless XPoint production is very efficient or there is also NAND Flash present on those parts for bulk storage (think XPoint cache for NAND Flash all in one product).
We're not sure if or how the recent delays to Kaby Lake will impact the other blocks on the above slide, but we do know that many of the other blocks present are on track. The SSD 540s and 5400s were in fact announced in Q2, and are Intel's first shipping products using IMFT 3D NAND. Parts not yet announced are the Pro 6000p and 600p, long-overdue m.2 SSDs that may compete against Samsung's 950 Pro. Do note that those are marked as TLC products (purple), though I suspect they may actually be a hybrid TLC+SLC cache solution.
Going further out on the timeline we naturally see refreshes to all of the Optane parts, but we also see the first mention of second-generation IMFT 3DNAND. As I hinted at in an article back in February, second-gen 3D NAND will very likely *double* the per-die capacity to 512Gbit (64GB) for MLC and 768Gbit (96GB) for TLC. While die counts will be cut in half for a given total SSD capacity, speed reductions will be partially mitigated by this flash having at least four planes per die (most previous flash was double-plane). A plane is an effective partitioning of flash within the die, with each section having its own buffer. Each plane can perform erase/program/read operations independently, and for operations where the Flash is more limiting than the interface (writes), doubling the number of planes also doubles the throughput. In short, doubling planes roughly negates the speed drop caused by halving the die count on an SSD (until you reach the point where controller-to-NAND channels become the bottleneck, of course).
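A rough model shows why doubled planes offset the halved die count. The per-plane throughput and channel bandwidth cap below are made-up illustrative numbers, not Intel/Micron figures:

```python
# Rough model: aggregate program (write) throughput scales with the total
# number of planes (dies x planes per die) until the controller's channel
# bandwidth caps it. All numbers here are illustrative assumptions.
def write_speed_mb(dies, planes_per_die, mb_per_plane=40, channel_cap_mb=3200):
    return min(dies * planes_per_die * mb_per_plane, channel_cap_mb)

gen1 = write_speed_mb(dies=16, planes_per_die=2)  # 256Gbit double-plane dies
gen2 = write_speed_mb(dies=8, planes_per_die=4)   # 512Gbit quad-plane dies
print(gen1, gen2)  # same total plane count, so the same write throughput
```

Half the dies times twice the planes gives the same total plane count, so write throughput holds, right up until the channel bandwidth (not flash parallelism) becomes the bottleneck.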
IMFT XPoint Die shot I caught at the Intel / Micron launch event.
Well, that's all I have for now. I'm excited to see that XPoint is making its way into consumer products (and Storage Accelerators) within the next year's time. I certainly look forward to testing these products, and I hope to show them running faster than they did back at that IDF demo...