** Edit **
The tool is now available for download from Samsung here. Also of note: they intend to release an ISO / DOS version of the tool at the end of the month (for Linux and Mac users). We assume this would be a file-system-agnostic version of the tool, which would either update all flash or wipe the drive; we suspect the former.
** End edit **
As some of you may have been tracking, there was an issue with Samsung 840 EVO SSDs where 'stale' data (data which had not been touched for some period after being written) suffered slower read speeds as the time since writing stretched into weeks or months. The rough effect was that the read speed of old data would begin to slow roughly one month after being written, eventually bottoming out at ~50-100 MB/sec after a few more months, varying slightly with room temperature. Speeds would plateau at this low figure, and more importantly, even at this slow speed, no users reported lost data while this effect was taking place.
An example of file read speeds slowing relative to file age.
Since we first published on this, we have been coordinating with Samsung to learn the root causes of this issue, how they will be fixed, and we have most recently been testing a pre-release version of the fix for this issue. First let's look at the newest statement from Samsung:
Because of an error in the flash management software algorithm in the 840 EVO, a drop in performance occurs on data stored for a long period of time AND has been written only once. SSDs usually calibrate changes in the statuses of cells over time via the flash management software algorithm. Due to the error in the software algorithm, the 840 EVO performed read-retry processes aggressively, resulting in a drop in overall read performance. This only occurs if the data was kept in its initial cell without changing, and there are no symptoms of reduced read performance if the data was subsequently migrated from those cells or overwritten. In other words, as the SSD is used more and more over time, the performance decrease disappears naturally. For those who want to solve the issue quickly, this software restores the read performance by rewriting the old data. The time taken to complete the procedure depends on the amount of data stored.
This partially confirms my initial theory that the slowdown was related to cell voltage drift over time. Here's what that looks like:
As you can see above, cell voltages will shift to the left over time. The above example is for MLC. TLC in the EVO will have not 4 but 8 divisions, meaning even smaller voltage shifts might cause the apparent flipping of bits when a read is attempted. An important point here is that all flash does this - the key is to correct for it, and that correction is what was not happening with the EVO. The correction is quite simple really. If the controller sees errors during reading, it follows a procedure that in part adapts to and adjusts for cell drift by adjusting the voltage thresholds for how the bits are interpreted. With the thresholds adapted properly, the SSD can then read at full speed and without the need for error correction. This process was broken in the EVO, and that adaptation was not taking place, forcing the controller to perform error correction on *all* data once those voltages had drifted near their default thresholds. This slowed the read speed tremendously. Below is a worst case example:
We are happy to say that there is a fix, and while it won't be public until some time tomorrow, we have been green-lighted by Samsung to publish our findings.
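The threshold adaptation described above can be sketched with a toy model. All voltages, thresholds, and retry steps here are invented for illustration (the actual controller logic is proprietary), and a real drive detects misreads via ECC rather than by comparing against known data:

```python
# Toy model of read-retry threshold adaptation. All numbers are
# invented for illustration; real controller behavior is proprietary.

def read_cell(voltage, threshold):
    """Interpret a cell as bit 1 if its voltage is above the threshold."""
    return 1 if voltage > threshold else 0

def read_with_retry(voltages, written_bits, threshold, step=0.05, max_retries=10):
    """Re-read with progressively lower thresholds until the data checks out.

    Returns (bits, retries). A real controller uses ECC, not the original
    written data, to decide whether a read succeeded.
    """
    for retries in range(max_retries + 1):
        bits = [read_cell(v, threshold - retries * step) for v in voltages]
        if bits == written_bits:   # stand-in for "ECC check passes"
            return bits, retries
    raise IOError("uncorrectable")

# Cells programmed as 1 start well above the 0.8 V read threshold;
# charge leakage drags them down toward (and below) it over time.
written = [1, 1, 0, 1]
fresh   = [1.00, 1.00, 0.20, 1.00]
drifted = [0.75, 0.78, 0.10, 0.76]   # months later

_, r0 = read_with_retry(fresh, written, threshold=0.8)
_, r1 = read_with_retry(drifted, written, threshold=0.8)
print(r0, r1)  # fresh data reads cleanly; drifted data needs retries
```

With the threshold left at its default, every read of drifted data burns retries (and error correction time), which is exactly the EVO's symptom; persisting the adapted threshold is what restores full-speed reads.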
Subject: Storage | October 6, 2014 - 02:51 PM | Jeremy Hellstrom
Tagged: ssd, slc, mlc, micron, M600, Dynamic Write Acceleration
The Tech Report took a different look at Micron's M600 SSD than Al did in his review. Their benchmarks focused more on a performance comparison against the rest of the market, with over two dozen SSDs listed in their charts. As you would expect, the 1TB model outperformed the 256GB model, but it was interesting to note that the 256GB MX100 outperformed the newer M600 in many tests. In the final tally, the new caching technology helped the 256GB model perform quite well, but it was the 1TB model, which supposedly lacks that technology, that proved to be one of the fastest they have tested.
"Micron's new M600 SSD has a dynamic write cache that can treat any block on the drive as high-speed SLC NAND. This unique feature is designed to help lower-capacity SSDs keep up with larger drives that have more NAND-level parallelism, and we've tested the 256GB and 1TB versions to see how well it works."
Here are some more Storage reviews from around the web:
- MyDigitalSSD BP4e mSATA SSD @ The SSD Review
- Top 10 SSDs: Price, performance and capacity @ The Register
- Micron M600 M.2 SATA SSD @ The SSD Review
- Some thoughts on the performance of SSD RAID 0 arrays @ Hardware Secrets
- Transcend SSD370 128GB SSD Review @ Legit Reviews
- Micron M600 SSD @ The SSD Review
- QNAP TS-653 Pro @ Legion Hardware
- QNAP SilentNAS HS-251 NAS Server Review @ NikKTech
- Mach Xtreme MX-ES Ultra SLC USB 3.0 Flash Drive @ The SSD Review
- Silicon Power Marvel M70 64GB USB 3.0 Flash Drive Review @ NikKTech
Introduction and Test System Setup
A while ago, in our review of the WD Red 6TB HDD, we noted an issue with the performance of queued commands, which could potentially impact the performance of those drives in multithreaded usage scenarios. While Western Digital acted quickly to get updated drives into the supply chain, some of the first orders might have shipped with unpatched drives. To be clear, an unpatched 5TB or 6TB Red still performs well, just not as well as it *could* perform with the corrected firmware installed.
We received updated samples from WD and also applied a firmware update to the samples used in our original review. We were able to confirm that the update does in fact work, bringing a WD60EFRX-68MYMN0 up to the identical, improved performance characteristics of a WD60EFRX-68MYMN1 (note the last digit). In this article we will briefly clarify those performance differences, now that we have data more consistent with the vast majority of 5 and 6TB Reds out in the wild.
Test System Setup
We currently employ a pair of testbeds: an ASUS P8Z77-V Pro/Thunderbolt and a newer ASUS Z87-PRO. Storage performance variance between the two boards has been deemed negligible.
PC Perspective would like to thank ASUS, Corsair, and Kingston for supplying some of the components of our test rigs.
**Hard Drive Test System Setup**

| Component | Details |
| --- | --- |
| CPU | Intel Core i7-4770K |
| Motherboard | ASUS P8Z77-V Pro/TB / ASUS Z87-PRO |
| Memory | Kingston HyperX 4GB DDR3-2133 CL9 |
| Hard Drive | G.Skill 32GB SLC SSD |
| Video Card | Intel® HD Graphics 4600 |
| Power Supply | Corsair CMPSU-650TX |
| Operating System | Windows 8.1 X64 (Update 1) |
- PCMark Vantage and 7
- HDTach *omitted due to incompatibility with >2TB devices*
- PCPer File Copy Test
Introduction and Specifications
Today Micron lifted the review embargo on their new M600 SSD lineup. We covered their press launch a couple of weeks ago, but as a recap, the headline addition is the new Dynamic Write Acceleration feature. As this is a new (and untested) feature that completely changes the way an SSD must be tested, we will be diving deep into it later in this article. For the moment, let's dispense with the formalities.
Here are the samples we received for testing:
It's worth noting that since all M600 models use 16nm 128Gbit dies, packaging is expected to have a negligible impact on performance. This means the 256GB mSATA sample should perform identically to its 2.5" SATA counterpart, and the same goes for comparisons against M.2 form factor units. More detail is present in the specs below:
Highlights from the above specs are the increased write speeds (no doubt thanks to Dynamic Write Acceleration) and improved endurance figures. For reference, the prior gen Micron models were rated at 72TB (mostly regardless of capacity), so seeing figures upwards of 400TB indicates Micron's confidence in their 16nm process.
Sorry to disappoint here, but the M600 is an OEM targeted drive, meaning its 'packaging' will likely be the computer it comes installed in. If you manage to find it through a reseller, it will likely come in OEM-style brown/white box packaging.
We have been evaluating these samples for just under a week and have logged *many* hours on them, so let's get to it!
Subject: Storage | September 25, 2014 - 06:36 PM | Jeremy Hellstrom
Tagged: corsair, Voyager Air 2, wireless hdd
The Corsair Voyager Air 2 is the second iteration of Corsair's wireless drive; this year's model comes with a 1TB drive, a totally redesigned shell, and a $20 drop in price. Legit Reviews warns that while the price drop is appreciated, the drive no longer comes with the charging kit, which will cost you extra. It supports USB 3.0 and 802.11b/g/n transfers as well as Internet passthrough; keep in mind that WiFi is disabled once the USB plug is connected. Overall speeds were in line with expectations, and the battery life is impressive for 720p streaming, though 1080p streaming drains it much more quickly. See the Voyager in action right here.
"Last year we took a look at Corsair’s first wireless hard drive, called Voyager Air, which was a very sleek and impressive unit that we really liked. Today, we’re going to take a look at the more recently revamped version, conveniently called Voyager Air 2. We’ll take a look and see what this drive all has to offer and if there is anything new brought to the table."
Here are some more Storage reviews from around the web:
- RAIDON Runner GR2660 SSD/HDD RAID Enclosure @ Kitguru
- Silicon Power Stream S03 2TB USB 3.0 Portable Hard Drive Review @ NikKTech
- QNAP TS-251 High Performance NAS for SOHO and Home Users Review @ Madshrimps
- Team Group Micro SDHC UHS-1 U3 32GB Review @ Madshrimps
- SanDisk Ultra II 240GB SSD Review @ Legit Reviews
- Corsair Force LX 256GB @ eTeknix
- Kingston SM2280S3 M.2 SATA 120 GiB SSD Review @ Hardware Secrets
- Kingston SM2280S3 M.2 SATA SSD @ The SSD Review
Subject: General Tech, Storage | September 21, 2014 - 08:41 PM | Scott Michaud
Tagged: ssd, Samsung, kingston hyper x, kingston, endurance, corsair neutron gtx, corsair, 840 pro
Many drives have died over the last year and a bit. The Tech Report has been torturing SSDs with writes until they drop. Before a full petabyte of data was written, three of the six drives kicked the bucket. They are now at 1500TB of total writes and one of the three survivors, the 240GB Corsair Neutron GTX, dropped out. This was a bit surprising as it was reporting fairly high health when it entered "the petabyte club" aside from a dip in read speeds.
The two remaining drives are the Samsung 840 Pro (256GB) and Kingston HyperX 3K (240GB).
Two stand, one fell (Image Credit: Tech Report)
Between those two, the Samsung 840 Pro is given the nod as the Kingston drive lived through uncorrectable errors; meanwhile, the Samsung has yet to report any true errors (only reallocations). Since the test considers a failure to be a whole drive failure, though, the lashings will persist until the final drive gives out (or until Scott Wasson gives up in a glorious sledgehammer apocalypse -- could you imagine if one of them lasted a decade? :3).
Of course, with just one unit from each model, it is difficult to faithfully compare brands with this marathon. While each drive lasted a ridiculously long time, the worst of the bunch enduring some 2,800 full-drive writes, it would not be fair to derive an average lifespan for a given model from one data point each. It is fair to suggest that your SSD probably did not die from a defrag run -- but defragging an SSD is still a complete waste of your time and you should never do it.
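The "full-drive writes" metric is simple arithmetic: total bytes written divided by drive capacity. As a quick sketch (the 672 TB figure below is back-derived from the ~2,800 number for a 240GB-class drive, not a reported measurement):

```python
# Back-of-envelope conversion between total data written and
# "full-drive writes". The 672 TB input is back-derived from the
# article's ~2,800 figure for a 240GB drive, not a reported value.

def full_drive_writes(total_written_tb, capacity_gb):
    """How many times the drive's entire capacity has been written."""
    return total_written_tb * 1000 / capacity_gb

print(full_drive_writes(672, 240))  # → 2800.0
```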
Investigating the issue
** Edit ** (24 Sep)
We have updated this story with temperature effects on the read speed of old data. Additional info on page 3.
** End edit **
** Edit 2 ** (26 Sep)
New quote from Samsung:
"We acknowledge the recent issue associated with the Samsung 840 EVO SSDs and are qualifying a firmware update to address the issue. While this issue only affects a small subset of all 840 EVO users, we regret any inconvenience experienced by our customers. A firmware update that resolves the issue will be available on the Samsung SSD website soon. We appreciate our customer’s support and patience as we work diligently to resolve this issue."
** End edit 2 **
** Edit 3 **
The firmware update and performance restoration tool has been tested. Results are found here.
** End edit 3 **
Over the past week or two, there have been growing rumblings from owners of Samsung 840 and 840 EVO SSDs. A few reports scattered across internet forums gradually snowballed into lengthy threads as more and more people took a longer look at their own TLC-based Samsung SSD's performance. I've spent the past week following these threads, and the past few days evaluating this issue on the 840 and 840 EVO samples we have here at PC Perspective. This post is meant to inform you of our current 'best guess' as to just what is happening with these drives, and just what you should do about it.
The issue at hand is an apparent slow down in the reading of 'stale' data on TLC-based Samsung SSDs. Allow me to demonstrate:
You might have seen what looks like similar issues before, but after much research and testing, I can say with some confidence that this is a completely different and unique issue. The old X25-M bug was the result of random writes to the drive over time, but the above result is from a drive that only ever saw a single large file written to a clean drive. That drive was the very same 500GB 840 EVO sample used in our prior review. It did just fine in that review, and afterwards I needed a quick temporary place to put an HDD image file and just happened to grab that EVO. The file was written to the drive in December of 2013, and if it wasn't already apparent from the above HDTach pass, it was 442GB in size. This brings on some questions:
- If random writes (i.e. flash fragmentation) are not causing the slow down, then what is?
- How long does it take for this slow down to manifest after a file is written?
Subject: Storage | September 18, 2014 - 07:10 PM | Jeremy Hellstrom
Tagged: micron, M600, slc, mlc, DWA
Micron's M600 SSD has a new trick up its sleeve called dynamic write acceleration. It is somewhat similar to hybrid HDDs that use a NAND cache to accelerate reads of frequently accessed data, but with a twist: here SLC NAND acts as the cache for MLC NAND, and it does so dynamically, with the NAND able to switch from SLC to MLC mode and back depending on usage. There is a cost, though: SLC mode stores half as much as MLC, so the larger the cache, the less total storage is available. The endurance rating is also higher than on previous drives, not because of better NAND but because of new trim techniques. This is not yet a retail product, so The Tech Report does not have benchmarks, but it goes to show there are plenty more tricks we can teach SSDs.
"Micron's new M600 SSD can flip its NAND cells between SLC and MLC modes on the fly, enabling a dynamic write cache that scales with the drive's unused capacity. We've outlined how this dynamic write acceleration is supposed to impact performance, power consumption, and endurance."
Here are some more Storage reviews from around the web:
- Adata's Premier SP610 @ The Tech Report
- A SCORCHIO fatboy SSD: Samsung SSD850 PRO 3D V-NAND @ The Register
- Silicon Power Blaze B06 64GB USB 3.0 Flash Drive Review @ NikKTech
- SanDisk Ultra II SSD Review (240GB) - TLC Memory becomes Mainstream @ The SSD Review
- Thecus N2310 2-bay NAS @ Kitguru
- QNAP TS-451 @ techPowerUp
- Kingston HyperX FURY 64GB USB 3.0 Flash Drive Review @ OCC
Introduction, Specifications and Packaging
It seems a lot of folks have been incorporating Silicon Motion's SM2246EN controller into their product lines. We first reviewed the Angelbird SSD wrk, but only in a 512GB capacity. We then reviewed a pair of Corsair Force LX's (256GB and 512GB). ADATA has joined the club with their new Premier SP610 product line, and today we are going to take a look at all available capacities of this new model:
It's fortunate that ADATA was able to sample us a full capacity spread, as this lets us evaluate all shipping SSD capacities that exist for the Silicon Motion SM2246EN controller.
Subject: Storage, Shows and Expos | September 16, 2014 - 02:29 PM | Allyn Malventano
Tagged: ssd, slc, sata, mlc, micron, M600, crucial
You may already be familiar with the Micron Crucial M550 line of SSDs (if not, familiarize yourself with our full capacity roundup here). Today Micron is pushing their tech further by releasing a new M600 line. The M600's are the first full lineup from Micron to use their 16nm flash (previously only in their MX100 line). Aside from the die shrink, Micron has addressed the glaring issue we noted in our M550 review - that issue being the sharp falloff in write speeds in lower capacities of that line. Their solution is rather innovative, to say the least.
Recall the Samsung 840 EVO's 'TurboWrite' cache, which gave that drive a burst of write speed during short sustained write periods. The 840 EVO accomplished this by reserving a small SLC section of flash on each TLC die. All written data passed through this cache, and once it filled (a few GB, varying with drive capacity), write speed slowed to TLC levels until the host system stopped writing long enough for the SSD to flush the cached data from SLC to TLC.
The Micron M600 SSD in 2.5" SATA, mSATA, and M.2 form factors.
Micron flips the 'typical' concept of caching methods on its head. It does employ two different types of flash writing (SLC and MLC), but the first big difference is that the SLC is not really cache at all - not in the traditional sense, at least. The M600 controller, coupled with some changes made to Micron's 16nm flash, is able to dynamically change the mode of each flash memory die *on the fly*. For example, the M600 can place most of the individual 16GB (MLC) dies into SLC mode when the SSD is empty. This halves the capacity of each die, but with the added benefit of much faster and more power efficient writes. This means the M600 would really perform more like an SLC-only SSD so long as it was kept less than half full.
As you fill the SSD towards (and beyond) half capacity, the controller incrementally clears the SLC-written data, moving that data onto dies configured to MLC mode. Once empty, the SLC die is switched over to MLC mode, effectively clearing more flash area for the increasing amount of user data to be stored on the SSD. This process repeats over time as the drive is filled, meaning you will see less SLC area available for accelerated writing (see chart above). Writing to the SLC area is also advantageous in mobile devices, as those writes not only occur more quickly, they consume less power in the process:
For those worst case / power user scenarios, here is a graph of what a sustained sequential write to the entire drive area would look like:
Realize this is not typical usage, but if it happened, you would see SLC speeds for the first ~45% of the drive, followed by MLC speeds for another 10%. Past that point, the drive is forced to initiate the process of clearing SLC and flipping dies over to MLC while the host write is still in progress, resulting in the relatively slow write speed (~50 MB/sec) seen above. In normal use (i.e. not filling the entire drive at full speed in one go), garbage collection would rearrange data in the background during idle time, meaning write speeds should be near full SLC speed the majority of the time. Even with the SSD nearly full, there should be at least a few GB of SLC-mode flash available for short bursts of SLC-speed writes.
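The worst-case behavior described above can be approximated with a tiny piecewise model. The breakpoints are rough readings of the graph, the 400 MB/s figure comes from the lower-capacity SLC-mode write spec, and the 150 MB/s direct-to-MLC value is a placeholder, not a Micron specification:

```python
# Piecewise sketch of the M600's worst-case sustained sequential write.
# Breakpoints and speeds are approximate readings of the graph; the
# 150 MB/s direct-to-MLC value is a placeholder, not a Micron spec.

def write_speed(fill_fraction):
    """Approximate sequential write speed (MB/s) at a given fill level."""
    if fill_fraction < 0.45:
        return 400   # dies still in SLC mode absorb the host writes
    elif fill_fraction < 0.55:
        return 150   # direct-to-MLC writes
    else:
        return 50    # controller is folding SLC data and flipping dies
                     # to MLC mode while the host write is still running

for f in (0.10, 0.50, 0.80):
    print(f"{f:.0%} full: ~{write_speed(f)} MB/s")
```

In real use the third phase is largely avoided, since idle-time garbage collection performs the SLC-to-MLC folding in the background rather than in the middle of a host write.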
This caching has enabled some increased specs over the prior generation models:
Note the differences in write speeds, particularly in the lower capacity models. The 128GB M550 was limited to 190MB/sec, while the M600 can write at 400MB/sec in SLC mode (which is where it should sit most of the time).
We'll be testing the M600 shortly and will come back with a full evaluation of the SSD as a whole and more specifically how it handles this new tech under real usage scenarios.