Subject: Editorial, Storage
Manufacturer: PC Perspective
Tagged: tlc, Samsung, bug, 840 evo, 840

Investigating the issue

Over the past week or two, there have been growing rumblings from owners of Samsung 840 and 840 EVO SSDs. A few reports scattered across internet forums gradually snowballed into lengthy threads as more and more people took a longer look at their own TLC-based Samsung SSD's performance. I've spent the past week following these threads, and the past few days evaluating this issue on the 840 and 840 EVO samples we have here at PC Perspective. This post is meant to inform you of our current 'best guess' as to just what is happening with these drives, and just what you should do about it.

The issue at hand is an apparent slowdown in the reading of 'stale' data on TLC-based Samsung SSDs. Allow me to demonstrate:

840 EVO 512 test hdtach-2-.png

You might have seen what looks like similar issues before, but after much research and testing, I can say with some confidence that this is a completely different and unique issue. The old X25-M bug was the result of random writes to the drive over time, but the above result is from a drive that only ever saw a single large file written to a clean drive. The above drive was the very same 500GB 840 EVO sample used in our prior review. It did just fine in that review, and afterwards I needed a quick temporary place to put an HDD image file and just happened to grab that EVO. The file was written to the drive in December of 2013, and if it wasn't already apparent from the above HDTach pass, it was 442GB in size. This raises some questions:

  • If random writes (i.e. flash fragmentation) are not causing the slowdown, then what is?
  • How long does it take for this slowdown to manifest after a file is written?
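
Curious owners can get a rough feel for this on their own drives with nothing more exotic than a large, months-old file. Below is a minimal sketch of that idea; it is a crude file-level approximation, not what HDTach actually does (HDTach reads at the device level), the file paths are hypothetical, and you would want a cold OS cache (e.g. right after a reboot) before trusting the numbers.

```python
import time

CHUNK = 8 * 1024 * 1024  # read in 8 MB pieces

def read_throughput(path):
    """Sequentially read a file and return the average throughput in MB/s."""
    total = 0
    start = time.time()
    with open(path, 'rb', buffering=0) as f:
        while True:
            data = f.read(CHUNK)
            if not data:
                break
            total += len(data)
    return (total / (1024 * 1024)) / (time.time() - start)

# Hypothetical paths: a file that has sat untouched for months versus a fresh
# copy of the same data written moments ago. A large gap between the two is
# consistent with the stale-data slowdown described above.
print("stale file:", round(read_throughput("D:/old_image.img")), "MB/s")
print("fresh copy:", round(read_throughput("D:/fresh_copy.img")), "MB/s")
```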

Read on for the full scoop!

Micron's M600 SSD, SLC in the front, MLC in the back

Subject: Storage | September 18, 2014 - 04:10 PM |
Tagged: micron, M600, SLC, MLC, DWA

Micron's M600 SSD has a new trick up its sleeve called dynamic write acceleration, which is somewhat similar to hybrid HDDs that use a NAND cache to speed up access to frequently used data, but with a twist.  In this case SLC NAND acts as the cache for MLC NAND, and it does so dynamically: the flash can switch from SLC to MLC mode and back depending on how full the drive is.  There is a cost, though; flash in SLC mode holds half as much as it does in MLC mode, so the larger the cache, the less total storage is available.  The endurance rating is also higher than on previous drives, not because of better NAND but because of new trim techniques being used.  This is not yet a retail product, so The Tech Report does not have benchmarks, but it goes to show there are plenty more tricks we can teach SSDs.

drives.jpg

"Micron's new M600 SSD can flip its NAND cells between SLC and MLC modes on the fly, enabling a dynamic write cache that scales with the drive's unused capacity. We've outlined how this dynamic write acceleration is supposed to impact performance, power consumption, and endurance."


Subject: Storage
Manufacturer: ADATA

Introduction, Specifications and Packaging

Introduction:

It seems a lot of folks have been incorporating Silicon Motion's SM2246EN controller into their product lines. We first reviewed the Angelbird SSD wrk, but only in a 512GB capacity. We then reviewed a pair of Corsair Force LX's (256GB and 512GB). ADATA has joined the club with their new Premier SP610 product line, and today we are going to take a look at all available capacities of this new model:

DSC05020.JPG

It's fortunate that ADATA was able to sample us a full capacity spread, as this will let us evaluate all shipping SSD capacities that exist for the Silicon Motion SM2246EN controller.

Continue reading as we evaluate the ADATA Premier SP610!

Micron launches M600 SATA SSD with innovative SLC/MLC Dynamic Write Acceleration

Subject: Storage, Shows and Expos | September 16, 2014 - 11:29 AM |
Tagged: ssd, slc, sata, mlc, micron, M600, crucial

You may already be familiar with the Micron Crucial M550 line of SSDs (if not, familiarize yourself with our full capacity roundup here). Today Micron is pushing their tech further by releasing a new M600 line. The M600's are the first full lineup from Micron to use their 16nm flash (previously only in their MX100 line). Aside from the die shrink, Micron has addressed the glaring issue we noted in our M550 review - that issue being the sharp falloff in write speeds in lower capacities of that line. Their solution is rather innovative, to say the least.

Recall the Samsung 840 EVO's 'TurboWrite' cache, which gave that drive a burst of write speed during short periods of sustained writes. The 840 EVO accomplished this by setting aside a small SLC-mode section of flash on each TLC die. All written data passed through this cache, and once it filled (a few GB, varying with drive capacity), write speed dropped to TLC levels until the host system stopped writing for long enough for the SSD to flush the cached data from SLC to TLC.

high_res_M600D_form_factors_1.jpg

The Micron M600 SSD in 2.5" SATA, MSATA, and M.2 form factors.

Micron flips the 'typical' concept of caching methods on its head. It does employ two different types of flash writing (SLC and MLC), but the first big difference is that the SLC is not really cache at all - not in the traditional sense, at least. The M600 controller, coupled with some changes made to Micron's 16nm flash, is able to dynamically change the mode of each flash memory die *on the fly*. For example, the M600 can place most of the individual 16GB (MLC) dies into SLC mode when the SSD is empty. This halves the capacity of each die, but with the added benefit of much faster and more power efficient writes. This means the M600 would really perform more like an SLC-only SSD so long as it was kept less than half full.

M600-1.png

As you fill the SSD towards (and beyond) half capacity, the controller incrementally clears the SLC-written data, moving that data onto dies configured to MLC mode. Once empty, the SLC die is switched over to MLC mode, effectively clearing more flash area for the increasing amount of user data to be stored on the SSD. This process repeats over time as the drive is filled, meaning you will see less SLC area available for accelerated writing (see chart above). Writing to the SLC area is also advantageous in mobile devices, as those writes not only occur more quickly, they consume less power in the process:

M600-2.png

For those worst case / power user scenarios, here is a graph of what a sustained sequential write to the entire drive area would look like:

M600-3.png

Realize this is not typical usage, but if it happened, you would see SLC speeds for the first ~45% of the drive, followed by MLC speeds for another 10%. After the 65% point, the drive is forced to initiate the process of clearing SLC and flipping dies over to MLC, doing so while the host write is still in progress, and therefore resulting in the relatively slow write speed (~50 MB/sec) seen above. Bear in mind that in normal use (i.e. not filling the entire drive at full speed in one go), garbage collection would be able to rearrange data in the background during idle time, meaning write speeds should be near full SLC speed for the majority of the time. Even with the SSD nearly full, there should be at least a few GB of SLC-mode flash available for short bursts of SLC speed writes.
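
To make the relationship between fill level and remaining SLC-mode area more concrete, here is a toy model. The 16GB die size, the halving of capacity in SLC mode, and the idea that idle-time garbage collection folds user data onto MLC-mode dies all come from the description above; the exact reserve policy is my simplification, not Micron's firmware, and the numbers are illustrative only.

```python
# Toy model of dynamic write acceleration (not Micron's actual firmware logic).
DIE_MLC_GB = 16               # capacity of one die in MLC mode (per the article)
DIE_SLC_GB = DIE_MLC_GB / 2   # the same die holds half as much in SLC mode

def slc_burst_area(total_dies, user_data_gb):
    """Return GB of SLC-mode flash left for fast writes at a given fill level."""
    # Dies needed to hold the user data once it has been folded into MLC mode.
    mlc_dies = -(-user_data_gb // DIE_MLC_GB)   # ceiling division
    free_dies = max(total_dies - int(mlc_dies), 0)
    # Every fully free die can be flipped to SLC mode for accelerated writes.
    return free_dies * DIE_SLC_GB

# A hypothetical 256 GB-class drive built from sixteen 16 GB dies:
for filled in (0, 64, 128, 192, 240):
    print(f"{filled:3d} GB stored -> {slc_burst_area(16, filled):5.1f} GB of SLC-mode area")
```

Even this crude model reproduces the behavior described above: an empty drive behaves almost like an SLC-only SSD, and a nearly full one still keeps a few GB of SLC-mode flash for short bursts.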

This caching has enabled some increased specs over the prior generation models:

M600-4.png

M600-5.png

Note the differences in write speeds, particularly in the lower capacity models. The 128GB M550 was limited to 190MB/sec, while the M600 can write at 400MB/sec in SLC mode (which is where it should sit most of the time).

We'll be testing the M600 shortly and will come back with a full evaluation of the SSD as a whole and more specifically how it handles this new tech under real usage scenarios.

Full press blast after the break.

Source: Micron

IDF 2014 Storage Roundup - RAM and NVMe and IOPS! Oh my!

Subject: Storage, Shows and Expos | September 16, 2014 - 09:49 AM |
Tagged: ram, NVMe, IOPS, idf 2014, idf, ddr4, DDR

The Intel Developer Forum was last week, and there were many things to be seen for sure. Mixed in with all of the wearable and miniature technology news, there was a sprinkling of storage goodness. Kicking off the show, we saw new cold storage announcements from both HGST and Western Digital, but that was about it for HDD news, as the growing trend these days is with solid state storage technologies. I'll start with RAM:

First up was ADATA, who were showing off 64GB DDR3 (!) DIMMs:

DSC05446.JPG

Next up were various manufacturers pushing DDR4 technology quite far. First were SK Hynix's TSV 128GB DIMMs (covered in much greater depth last week):

DSC05415.JPG

Next up is Kingston, who were showing a server chassis equipped with 256GB of DDR4:

DSC05460.JPG

If you look closer at the stats, you'll note there is more RAM in this system than flash:

DSC05462.JPG

Next up is IDT, who were showing off their LRDIMM technology:

DSC05454.JPG

This technology adds special data buffers to the DIMM modules, enabling significantly more installed RAM in a single system, with a one-to-two-step de-rating of clock speeds as you push capacities to the extremes. The above server has 768GB of DDR4 installed and running:

DSC05455.JPG

Moving on to flash-related news: Scott covered Intel's new 40 Gbit Ethernet technology last week. At IDF, Intel had a demo showing off some of the potential of these new, faster links:

DSC05430.JPG

This demo used a custom network stack that allowed a P3700 in a local system to be matched in IOPS by an identical P3700 *being accessed over the network*. Both local and networked storage turned in the same 450k IOPS, with the remote link adding only 8ms of latency. Here's a close-up of one of the SFF-8639 (2.5" PCIe 3.0 x4) SSDs and the 40 Gbit network card above it (low speed fans were installed in these demo systems to keep some air flowing across the cards):

DSC05438.JPG
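
As a rough sanity check on why a link this fast is needed, here is some quick arithmetic. The 450K IOPS figure comes from the demo above; the 4KB transfer size is my assumption, since the slide does not state it.

```python
# Can a 40 Gbit link carry the ~450K IOPS a single P3700 delivers locally?
iops = 450_000
block_bytes = 4 * 1024          # assumed 4 KB transfers
gbits = iops * block_bytes * 8 / 1e9
print(f"~{gbits:.1f} Gbit/s of payload")   # ~14.7 Gbit/s, comfortably within 40 GbE
```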

Stepping up the IOPS a bit further, Microsoft was showing off the capabilities of their inbox NVMe driver, shown here driving a pair of P3700's at a total of 1.5 million IOPS:

DSC05445.JPG

...for those who want to get their hands on this 'Inbox driver', guess what? You already have it! "Inbox" is Microsoft's way of saying the driver is 'in the box', meaning it comes with Windows 8. Bear in mind you may get better performance with manufacturer specific drivers, but it's still a decent showing for a default driver.

Now for even more IOPS:

DSC05441.JPG

Yes, you are reading that correctly. That screen is showing a system running over 11 million IOPS. Think it's RAM? Wrong. This is flash memory pulling those numbers. Remember the 2.5" P3700 from a few pics back? How about 24 of them:

DSC05443.JPG

The above photo shows three 2U systems (bottom), which are all connected to a single 2U flash memory chassis (top). The top chassis supports three submodules, each with eight SFF-8639 SSDs. The system, assembled by Newisys, demonstrates just how much high speed flash you can fit within an 8U space. The main reason for connecting three systems to one flash chassis is that it takes all three to process the full IOPS capability of 24 low latency NVMe SSDs (that's 96 lanes of PCIe 3.0 in total!).
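
Some quick, hedged arithmetic on those figures; the per-drive IOPS number is simply the single-drive P3700 result from the networking demo above, not a Newisys specification.

```python
drives = 24
lanes_per_drive = 4          # each SFF-8639 P3700 is a PCIe 3.0 x4 device
iops_per_drive = 450_000     # single-drive figure quoted earlier

print("Total PCIe 3.0 lanes:", drives * lanes_per_drive)   # 96
print("Aggregate IOPS      :", drives * iops_per_drive)    # 10.8 million, in line with the >11M shown
```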

So there you have it, IDF storage tech in a nutshell. More to come as we follow these emerging technologies to their maturity.

Apotop S3C SSD, Silicon Motion's new controller for less than $0.50/GB

Subject: Storage | September 12, 2014 - 02:30 PM |
Tagged: SM2246EN, S3C, mlc, Apotop

The Apotop S3C SSD uses the same controller as the Angelbird drive Al reviewed recently.  It pairs synchronous MLC NAND with the four channels present on the Silicon Motion controller and can exceed the specified 490 MB/s read and 275 MB/s write in some benchmarks.  It often reads faster than the wrk SSD, though its writes cannot always keep up; the difference is not likely to be noticeable in real usage.  The MSRP is very attractive, with the 512GB model expected to launch at $200.  With this mix of price and performance, Silicon Motion controllers are likely to start appearing in a lot more SSDs in the near future.  Read the full review at KitGuru.

first-page2.jpg

"The new Apotop S3C SSD features the Silicon Motion 2246EN controller which we first reviewed in the Angelbird 512GB wrk SSD back in August this year. The controller impressed us, so we have already high hopes for the Apotop S3C."


Source: KitGuru

SanDisk Launches 512GB SDXC Card for $799.99

Subject: General Tech, Storage | September 12, 2014 - 01:08 PM |
Tagged: sandisk, sdxc, sdhc, sd card, 512GB

Assuming your camera, card reader, or other device fully conforms to the SDXC standard, SanDisk has developed a half-terabyte (512GB) memory card. Beyond being gigantic, it can be read at up to 95 MB/s and written at up to 90 MB/s, which should be enough to stream 4K video. SanDisk claims that it is temperature proof, shock proof, waterproof, and X-ray proof. It also comes with a lifetime warranty and "RescuePRO Deluxe" recovery software but, honestly, I expect people would just use PhotoRec or something.

It should be noted that the SDXC standard covers memory cards up to 2TB so it will probably not be too long before we see another standard get ratified. What is next? SDUC? SDYC? SDALLTHEC? Blah! This is why IEEE assigns names sequentially.

The SanDisk Extreme PRO UHS-I SDHC/SDXC 512GB memory card should be available now for $799.99 MSRP, although I cannot yet find it listed online.

Source: SanDisk

IDF 2014: Through Silicon Via - Connecting memory dies without wires

Subject: Storage, Shows and Expos | September 10, 2014 - 12:34 PM |
Tagged: TSV, Through Silicon Via, memory, idf 2014, idf

If you're a general computer user, you might have never heard the term "Through Silicon Via". If you geek out on photos of chip dies and wafers, and how chips are assembled and packaged, you might have heard about it. Regardless of your current knowledge of TSV, it's about to be a thing that impacts all of you in the near future.

Let's go into a bit of background first. We're going to talk about how chips are packaged. Micron has an excellent video on the process here:

The part we are going to focus on appears at 1:31 in the above video:

die wiring.png

This is how chip dies are currently connected to the outside world. The dies are stacked (four high in the above pic) and a machine has to individually wire them to a substrate, which in turn communicates with the rest of the system. As you might imagine, things get more complex with this process as you stack more and more dies on top of each other:

chip stacking.png

16 layer die stack, pic courtesy NovaChips

...so we have these microchips with extremely small features, but to connect them we are limited to a relatively bulky process (called package-on-package). Stacking these flat planes of storage is a tricky thing to do, and you would naturally want to limit how many of those wires you need to connect. The catch is that those wires also equate to available throughput from the device (i.e. one wire per bit of a data bus). So, just how can we improve this method and increase data bus widths, throughput, etc?

Before I answer that, let me lead up to it by showing how flash memory has just taken a leap in performance. Samsung has recently made the jump to VNAND:

vnand crop--.png

By stacking flash memory cells vertically within a die, Samsung was able to make many advances in flash memory, simply because they had more room within each die. Because of the complexity of the process, they also had to revert to an older (larger) feature size. That compromise meant the capacity of each die is similar to current 2D NAND tech, but the new process brings speed, longevity, and power-consumption advantages.

I showed you the VNAND example because it bears a striking resemblance to what is now happening in the area of die stacking and packaging. Imagine if you could stack dies by punching holes straight through them and making the connections directly through the bottom of each die. As it turns out, that's actually a thing:

tsv cross section.png

Read on for more info about TSV!

IDF 2014: Western Digital announces new Ae HDD series for archival / cold storage

Subject: Storage, Shows and Expos | September 9, 2014 - 01:51 PM |
Tagged: WDC, Western Digital, WD, idf 2014, idf, hdd, Cold, Archival, Ae

We talked about helium filled, shingled HDD's from HGST earlier today. Helium may give you reduced power demands, but at the added expense of hermetically sealed enclosures compared to conventional HDD's. Shingling may give added capacity, but at the expense of being forced into specific writing methods. Now we know Western Digital's angle on archival / cold storage:

WD_AE_PRN.jpg

...so instead of going with newer, higher-cost technologies, WD is taking their consumer products and making them more robust. They are also moving away from conventional capacity increments to finer 100GB steps. The idea is that once a large company or distributor has qualified a specific HDD model on their hardware, that model will stick around for a while, but will continue at increased capacities as platter density yields improve over time. WD has also told me that capacities may even be mixed and matched within a 20-box of drives, so long as the average capacity matches the box label. This works in the field of archival / cold storage for a few reasons:

  • Archival storage systems generally do not use conventional RAID (where an entire array of matching capacity disks are spinning simultaneously). Drives are spun up and written to individually, or spun up individually to service the occasional read request. This saves power overall, and it also means the individual drives can vary in capacity with no ill effects.
  • Allowing for variable capacity binning helps WD ship more usable platters/drives overall (i.e. not rejecting drives that can't meet 6TB). This should drive overall costs down.
  • Increasing capacity by only a few hundred GB per drive turns into *huge* differences in cost when you scale that difference up to the number of drives you would need to handle a very large total capacity (i.e. Exabytes); a quick worked example follows this list.
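
Here is that back-of-the-envelope example. The drive capacities are hypothetical bins, chosen only to show the scale of the effect, not figures WD has published.

```python
# Hypothetical capacities; the point is only the scale of the difference at 1 exabyte.
EXABYTE_TB = 1_000_000   # 1 EB expressed in TB

for capacity_tb in (6.0, 6.3):
    drives_needed = EXABYTE_TB / capacity_tb
    print(f"{capacity_tb} TB drives -> ~{drives_needed:,.0f} drives for 1 EB")

# The ~8,000-drive gap between those two lines is racks of hardware, power,
# and floor space at archival scale.
```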

So the idea here is that WD is choosing to stick with what they do best, which they can potentially do even more cheaply than their consumer products. That said, this is really meant for enterprise use and not as a way for a home power user to save a few bucks on a half-dozen drives for their home NAS. You really need an infrastructure in place that can handle variable capacity drives seamlessly. While these drives do not employ SMR to gain greater capacity, that may work out as a bonus, as writes can be performed in a way that all current systems are compatible with (even though I suspect they will be tuned more for sequential write workloads).

Here's an illustration of this difference:

capacity 1.png

The 'old' method meant that drives on the left half of the above bell curve would have to be sold as 5TB units.

capacity 2.png

With the 'new' method, drives can be sold based on a spec closer to their actual capacity yield. For a given model, shipping capacities would increase as time goes on (top to bottom of the above graphic).

To further clarify what is meant by the term 'cold storage' - the data itself is cold, as in rarely if ever accessed:

tiers.png

Examples of this would be Facebook posts / images from months or years ago. That data may be rarely touched, but it needs to be accessible enough to be browsed to via the internet. The few-second spinup of an archival HDD can handle this sort of request, while a tape system would take far too long and would likely time out the request.

WD's Ae press blast after the break.

IDF 2014: HGST announces 3.2TB NVMe SSDs, shingled 10TB HDDs

Subject: Storage, Shows and Expos | September 9, 2014 - 11:00 AM |
Tagged: ssd, SMR, pcie, NVMe, idf 2014, idf, hgst, hdd, 10TB

It's the first day of IDF, so it's only natural that we see a bunch of non-IDF news start pouring out :). I'll kick them off with a few announcements from HGST. First item up is their new SN100 line of PCIe SSDs:

Ultrastar_SN100_Family_CMYK_Master.jpg

These are NVMe-capable PCIe SSDs, available in capacities from 800GB to 3.2TB and in 2.5" (PCIe-based, not SATA) as well as half-height PCIe card form factors.

Next up is an expansion of their HelioSeal (Helium filled) drive line:

10TB_Market_applications_HR.jpg

Through the use of Shingled Magnetic Recording (SMR), HGST can make an even bigger improvement in storage densities. This does not come completely free: due to the way SMR writes to the disk, it is primarily meant to be a sequential write / random access read storage device. Picture roofing shingles, but for hard drives. The tracks are slightly overlapped as they are written to disk. This increases density greatly, but writing to the middle of a shingled section is not possible without potentially overwriting two shingled tracks simultaneously. Think of it as CD-RW writing, but for hard disks. This tech is primarily geared towards 'cold storage', or data that is not actively being written. Think archival data. The ability to still read that data randomly and on demand makes these drives more appealing than retrieving that same data from tape-based archival methods.
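
For those who think better in code, here is a conceptual sketch of the shingled-write constraint described above. It is a toy model of a single SMR band, not HGST's actual firmware or zone interface.

```python
class SmrZone:
    """A band of overlapping tracks: appending is cheap, rewriting the middle is not."""

    def __init__(self, tracks):
        self.tracks = [None] * tracks
        self.write_pointer = 0           # next track that can be written safely

    def append(self, data):
        # Sequential writes simply advance through the band.
        if self.write_pointer >= len(self.tracks):
            raise IOError("band full - the band must be reset before more writes")
        self.tracks[self.write_pointer] = data
        self.write_pointer += 1

    def read(self, index):
        # Random reads are unaffected by the shingling.
        return self.tracks[index]

    def rewrite(self, index, data):
        # Overwriting a middle track would clobber the tracks shingled on top of it,
        # so everything from that point onward has to be read back and rewritten.
        tail = self.tracks[index:self.write_pointer]
        tail[0] = data
        self.write_pointer = index
        for item in tail:
            self.append(item)
```

In other words, random reads stay cheap while an in-place write balloons into a read-modify-rewrite of everything shingled after it, which is why these drives are aimed at sequential-write, archival workloads.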

Further details on the above releases are scarce at present, but we will keep you posted as things develop.

Full press blast for the SN100 after the break.

Source: HGST