Introduction and Background
We first got a peek at USB 3.1 at CES 2015. MSI had a cool demo showing throughput figures including read and write speeds as high as 690 MB/s, well over the ~450 MB/s we see on USB 3.0 options shipping today.
We were of course eager to play around with this for ourselves, and MSI was happy to oblige, sending along a box of goodies:
Stuff we will be testing today (Samsung T1 was not part of the MSI demo).
For those unaware, USB 3.1 (also known as SuperSpeed+), while only a 0.1 increment in numbering, doubles raw throughput and dramatically reduces the software overhead of the interface.
Don't confuse the USB 3.1 standard with the new USB Type-C connector - they are unrelated and independent of each other.
Yes, you’re all going to have to buy *more* cables in the future.
Type-C connectors will enable simpler cable designs and thinner connections, but USB 3.1 will exist in both Type-A/B and Type-C form factors going forward. Our benchmarking today will utilize Type-A.
Introduction, Specifications and Packaging
Plextor launched their M6e PCIe SSD in mid-2014. It was the first natively PCIe consumer SSD available at retail. While previous solutions such as the OCZ RevoDrive bridged SATA SSD controllers to PCIe through a RAID or VCA device, the M6e went with a Marvell controller that could speak directly to the host system over a PCIe 2.0 x2 link. Since M.2 was not widely available at launch time, Plextor also made the M6e available with a half-height PCIe interposer, making for a painless upgrade for those on older non-M.2 motherboards (which at that time was the vast majority).
With the M6e out for only a few months (and in multiple versions), I was surprised to see Plextor launch an additional version of it at the 2015 CES this past January. Announced alongside the upcoming M7e, the M6e Black Edition is essentially a pimped out version of the original M6e PCIe:
We left CES with a sample of the M6e Black, but had to divert our attention to a few other pressing issues shortly after. With all of that behind us, it's time to get back to cranking out the storage goodness, so let's get to it!
Well, here we are again, with the Samsung 840 EVO slow down issue cropping up here, there, and everywhere. The story for this one is so long and convoluted that I'm just going to kick this piece off with a walk-through of what was happening with this particular SSD, and what has been attempted so far to fix it:
The Samsung 840 EVO is a consumer-focused TLC SSD. Normally TLC SSDs suffer from reduced write speeds when compared to their MLC counterparts, as writing operations take longer for TLC than for MLC (SLC is even faster). Samsung introduced a novel way of speeding things up with their TurboWrite caching method, which adds a fast SLC buffer alongside the slower flash. This buffer is several GB in size, and helps the 840 EVO maintain fast write speeds in most typical usage scenarios. But the issue with the 840 EVO is not its write speed – the problem is read speed. Initial reviews did not catch this issue as it only impacted data that had been stagnant for a period of roughly 6-8 weeks. As files aged, their read speeds were reduced, starting from the speedy (and expected) 500 MB/sec and ultimately reaching a worst case speed of 50-100 MB/sec:
There were other variables that impacted the end result, which further complicated the flurry of reports coming in from seemingly everywhere. The slow speeds turned out to be the result of the SSD controller working extra hard to apply error correction to the data coming in from flash that was (reportedly) miscalibrated at the factory. This miscalibration caused the EVO to incorrectly adapt to cell voltage drifts over time (an effect that occurs in all flash-based storage – TLC being the most sensitive). Ambient temperature could even impact the slower read speeds as the controller was working outside of its expected load envelope and thermally throttled itself when faced with bulk amounts of error correction.
An example of file read speed slowing relative to age, thanks to a tool developed by Techie007.
Once the community reached sufficient critical mass to get Samsung's attention, they issued a few statements and ultimately pushed out a combination firmware and tool to fix EVOs that were seeing this issue. The 840 EVO Performance Restoration Tool was released just under two months after the original thread on the Overclock.net forums was started. Even counting a quick follow-up update a few weeks later, that was not a bad turnaround considering Intel took three months to correct a firmware issue on one of their own early SSDs. While the Intel patch restored full performance to their X25-M, the Samsung update does not appear to be faring so well now that users have logged a few additional months after applying the fix.
Introduction, Specifications and Packaging
Today Samsung has lifted the review embargo on their new Portable SSD T1. This is Samsung's first portable SSD, and aims to serve as another way to make their super speedy VNAND available. We first saw the Samsung T1 at CES, and I've been evaluating the performance of this little drive for the past week:
We'll dive more into the details as this review progresses.
The T1 comes well packaged, with a small instruction manual and a flat style short USB 3.0 cable. The drive itself is very light - ours weighed in right at 1 ounce.
Drobo is frequently referred to as ‘the Apple of external storage products’. They got this name because their products go for the simplest possible out-of-the-box experience. Despite their simplicity, the BeyondRAID concept these units employ remains extremely robust and highly resistant to data loss even in the most extreme cases of drive failure. I reviewed the DroboPro 8-bay unit over 5 years ago and was so impressed by it that I continue to use one to this day (and it has never lost data, despite occasional hard drive failures).
In the five years since our review of the DroboPro, Drobo (then known as Data Robotics) has also had a bit of an Apple story. Their original CEO started the company but was ousted by the board in late 2009. He then started Connected Data in 2011, quickly growing it to the point where it merged with Drobo in 2013. This was not just a merger of companies, it was a merger of their respective products. The original Transporter was only a single-drive unit; Drobo's tech supercharged that personal cloud capability to scale all the way up to corporate environments.
Many would say that during the period when their original CEO was absent, Drobo's products turned more towards profitability, perhaps too soon for the company, as the products released during that period were less than stellar. We actually got a few of those Drobos in for review, but their performance was so inconsistent that we spent more time trying to figure out what was causing the issues than completing a review we could stand behind. With their founder back in the CEO chair, Drobo's path has turned back to its roots - making a good, fast, and low cost product for their customers. This was what they wanted to accomplish back in 2009, but in many ways the available tech was not up to speed yet. USB 2.0 was the fastest widely available standard, aside from iSCSI over Gigabit Ethernet (which was pricey to implement and only appeared in the DroboPro). Nowadays things are very different. USB 3.0 controllers are vastly more compatible and faster than they used to be, as are SATA controllers and ARM microcontrollers. These developments would ultimately enable Drobo to introduce what they wanted to in the first place:
This is the third generation 4-Bay Drobo. The 4-Bay model is what started it all for them, but it was a bit underpowered and limited to USB 2.0 speeds. The second gen unit launched in mid-2008, adding FireWire as a faster connection option, but it was still slower than most would have liked given its $500 price tag. This third generation unit promises to change all of that.
USB is once again the only connectivity option, but this time it's USB 3.0. There have previously been other Drobos with USB 3.0 as an option (Drobo S, S gen 2, 5D, Mini), but many of those units saw compatibility issues with some USB 3.0 host controllers. We experienced some of these frustrating incompatibilities first hand. Drobo is putting that behind them with a revised chipset, and today we will put it all to the test.
Introduction and Internals
We've seen USB 3.0 in devices for a few years now, but it has only recently started taking off, as controllers, drivers, and operating systems have incorporated support for the USB Attached SCSI Protocol. UASP takes care of one of the big disadvantages of connecting high-speed storage devices over USB. USB adds a relatively long, multi-step path for each and every transaction, and the initial spec did not allow for any sort of parallel queuing. The 'Bulk-Only Transport' method was actually carried forward all the way from USB 1.0, and it simply didn't scale well for very low latency devices. The end result was that a USB 3.0 connected SSD performed at a fraction of its capability. UASP fixes that by effectively layering the SCSI protocol over the USB 3.0 link. Perhaps its biggest contribution to the speed boost is SCSI's ability to queue commands. We saw big speed improvements with the Corsair Flash Voyager GTX and other newer UASP enabled flash drives, but it's time we look at some ways to link external SATA devices using this faster protocol. Our first piece will focus on a product from Inateck - their FE2005 2.5" SATA enclosure:
This is a very simple enclosure, with a sliding design and a flip open door at the front.
Introduction, Specifications and Packaging
In the middle of last year, Samsung introduced the 840 EVO. This was their evolutionary step from the 840 Pro, which had launched a year prior. While the Pro was a performance MLC SSD, the EVO was TLC, and in most typical use proved just as speedy. The reason for this was Samsung's inclusion of a small SLC cache on each TLC die. Dubbed TurboWrite, this write-back cache gave the EVO the best write performance of any TLC-based SSD on the market. Samsung had also introduced a DRAM cache based RAPID mode, included with their Magician value-added software. The EVO has been among the top selling SSDs since its launch, despite a small hiccup quickly corrected by Samsung.
Fast forward to June of this year, when we saw the 850 Pro. Having tested the waters with 24-layer 3D VNAND, Samsung revised the design, increasing the layer count to 32 and reducing the die capacity from 128Gbit to 86Gbit. The smaller die capacity enables a 50% performance gain, stacked on top of the 100% write speed gain accomplished by the reduced cross talk of the 3D VNAND architecture. These changes did great things for the performance of the 850 Pro, especially in the lower capacities. While competing 120/128GB SSDs were typically limited to 150 MB/sec write speeds, the 128GB 850 Pro cruises along at over 3x that speed, nearly saturating the SATA interface. The performance might have been great, but so was the cost - 850 Pros have stuck around $0.70/GB since their launch, forcing budget conscious upgraders to seek competing solutions. What we needed was an 850 EVO, and now I can happily say here it is:
As the 840 EVO was a pretty big deal, I believe the 850 EVO has an equal chance of success, so instead of going for a capacity roundup, this first piece will cover the 120GB and 500GB capacities. A surprising number of our readers run a pair of smaller capacity 840 EVOs in a RAID, so we will be testing a matched pair of 850 EVOs in RAID-0. To demonstrate the transparent performance boosting of RAPID, I’ll also run both capacities through our full test suite with RAPID mode enabled. There is lots of testing to get through, so let’s get cracking!
It has become increasingly apparent that flash memory die shrinks have hit a bit of a brick wall in recent years. The issues faced by the standard 2D Planar NAND process were apparent very early on. This was no real secret - here's a slide seen at the 2009 Flash Memory Summit:
Despite this, most flash manufacturers pushed the envelope as far as they could within the limits of 2D process technology, balancing shrinks with reliability and performance. One of the largest flash manufacturers was Intel, having joined forces with Micron in a joint venture dubbed IMFT (IM Flash Technologies). Intel remained in lock-step with Micron all the way to 20nm, but chose to hold back at the 16nm step, presumably in order to shift full focus towards alternative flash technologies. This was essentially confirmed late last week, with Intel's announcement of a shift to 3D NAND production.
Intel's press briefing seemed to focus more on cost efficiency than performance, and after reviewing the very few specs they released about this new flash, I believe we can do some theorizing as to the potential performance of this new flash memory. From the above illustration, you can see that Intel has chosen to go with the same sort of 3D technology used by Samsung - a 32 layer vertical stack of flash cells. This requires the use of an older / larger process technology, as it is too difficult to etch these holes at a 2x nm size. What keeps the die size reasonable is the fact that you get a 32x increase in bit density. Going off of a rough approximation from the above photo, imagine that 50nm die (8 Gbit), but with 32 vertical NAND layers. That would yield a 256 Gbit (32 GB) die within roughly the same footprint.
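The density arithmetic above can be sketched in a few lines (the ~8 Gbit figure for a 50nm-class planar die and the 32-layer count come from the article itself):

```python
# Back-of-envelope: scaling a planar die's capacity by vertical layer count.
# Figures from the article: ~8 Gbit for a 50nm-class planar die, 32 layers.
planar_die_gbit = 8
layers = 32

stacked_die_gbit = planar_die_gbit * layers
print(stacked_die_gbit)        # 256 Gbit per die
print(stacked_die_gbit // 8)   # 32 GB per die, in roughly the same footprint
```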
Representation of Samsung's 3D VNAND in 128Gbit and 86 Gbit variants.
20nm planar (2D) = yellow square, 16nm planar (2D) = blue square.
Image republished with permission from Schiltron Corporation.
It's likely a safe bet that IMFT flash will be going for a cost/GB far cheaper than the competing Samsung VNAND, and going with a relatively large 256 Gbit (vs. VNAND's 86 Gbit) per-die capacity is a smart move there, but let's not forget that there is a catch - write speed. Most NAND is very fast on reads, but limited on writes. Shifting from 2D to 3D NAND netted Samsung a 2x speed boost per die, and another effective 1.5x speed boost due to their choice to reduce per-die capacity from 128 Gbit to 86 Gbit. This effective speed boost came from the fact that a given VNAND SSD has 50% more dies to reach the same capacity as an SSD using 128 Gbit dies.
Now let's examine how Intel's choice of a 256 Gbit die impacts performance:
- Intel SSD 730 240GB = 16x128 Gbit 20nm dies
- 270 MB/sec writes and ~17 MB/sec/die
- Crucial MX100 128GB = 8x128Gbit 16nm dies
- 150 MB/sec writes and ~19 MB/sec/die
- Samsung 850 Pro 128GB = 12x86Gbit VNAND dies
- 470MB/sec writes and ~40 MB/sec/die
If we do some extrapolation based on the assumption that IMFT's move to 3D will net the same ~2x write speed improvement seen by Samsung, combined with their die capacity choice of 256Gbit, we get this:
- Future IMFT 128GB SSD = 4x256Gbit 3D dies
- 40 MB/sec/die x 4 dies = 160MB/sec
Even rounding up to 40 MB/sec/die, we can see that doubling the die capacity effectively negates the performance improvement. While an SSD equipped with this IMFT flash will very likely be a lower cost product, it will (theoretically) see the same write speed limits as today's SSDs equipped with IMFT planar NAND. Now let's go one layer deeper on theoretical products and assume that Intel took the 18-channel NVMe controller from their P3700 Series and adapted it for a consumer PCIe SSD using this new 3D NAND. The larger die size limits the minimum capacity you can attain while still fully utilizing the 18-channel controller, so with one die per channel, you end up with this product:
- Theoretical 18 channel IMFT PCIE 3D NAND SSD = 18x256Gbit 3D dies
- 40 MB/sec/die x 18 dies = 720 MB/sec
- 18x32GB (die capacity) = 576GB total capacity
Overprovisioning decisions aside, the above would be the lowest capacity product that could fully utilize the Intel PCIe controller. While the write performance is on the low side by PCIe SSD standards, the cost of such a product could easily be in the $0.50/GB range, or even less.
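To make the arithmetic easy to follow, here is the per-die math from the comparison above collected into a short script. The drive figures are those quoted above; the 40 MB/sec/die figure is the rounded-up extrapolation, not a measured spec:

```python
# Reproducing the per-die write-speed arithmetic from the article.
# (drive, total write MB/s, die count) -- figures as quoted above.
drives = {
    "Intel SSD 730 240GB":   (270, 16),
    "Crucial MX100 128GB":   (150, 8),
    "Samsung 850 Pro 128GB": (470, 12),
}
for name, (write_mb_s, dies) in drives.items():
    print(f"{name}: ~{write_mb_s / dies:.0f} MB/sec/die")

# Extrapolation: assume IMFT 3D NAND reaches ~40 MB/sec/die
# (a Samsung-like ~2x gain, rounded up as in the article).
per_die = 40
print("4-die 128GB SSD:", per_die * 4, "MB/sec")    # 160 MB/sec
print("18-die PCIe SSD:", per_die * 18, "MB/sec")   # 720 MB/sec
print("Minimum capacity:", 18 * 32, "GB")           # 576 GB
```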
In summary, while we don't have any solid performance data, it appears that Intel's new 3D NAND is not likely to lead to a performance breakthrough in SSD speeds, but their choice on a more cost-effective per-die capacity for their new 3D NAND is likely to give them significant margins and the wiggle room to offer SSDs at a far lower cost/GB than we've seen in recent years. This may be the step that was needed to push SSD costs into a range that can truly compete with HDD technology.
Introduction, Specifications and Packaging
In recent years, Plextor has branched beyond their renowned lines of optical storage devices, and into the realm of SSDs. They have done fairly well so far, treading carefully on their selection of controllers and form factors. Their most recent offerings include the M6S and M6M (reviewed here), and are based on Marvell controllers coupled with Toshiba flash. Given that the most recent Marvell controllers are also available in a PCIe variant, Plextor also chose to offer their M6 series in PCIe half height and M.2 form factor. These last two offerings are not simply SATA SSDs bridged over to PCIe, they are natively PCIe 2.0 x2 (1 GB/s), which gives a nice boost over the current SATA limit of 6Gb/sec (600 MB/sec). Today we are going to kill two birds with one stone by evaluating the half-height PCIe version:
As you can see, this is nothing more than the M.2 version on a Plextor branded interposer board. All results of this review should be identical to the bare M.2 unit plugged into a PCIe 2.0 x2 capable M.2 port on either a motherboard or mobile device. Note that those devices need to support the 2280 form factor, which is 80mm in length.
Here's the M.2 version installed on an ASUS X99-Deluxe, as tested by Morry.
Introduction, Specifications and Packaging
At that time we only knew that Phison was going to team up with another SSD manufacturer to get these to market. We now know that manufacturer is Corsair, and their new product is to be called the Neutron XT. How do we know this? Well, we've got one sitting right here:
While the Neutron has not officially launched (pricing is not even available), we have been afforded an early look into the performance of this new controller / SSD. While this is suspected to be a cost effective entry into the SSD marketplace, for now all we can do is evaluate the performance, so let's get to it!
Meet the Inateck barebones tool-free HDD enclosure
Recently Inateck sent over two products to test out: the FEU3NS-1 USB 3.0 HDD Tool Free External Enclosure and the BP2001 10W Bluetooth Stereo Speaker. Inateck has been around for a while; though their products were originally only available in the EU, they have recently expanded to North America. They sell a variety of peripherals such as PCIe USB cards, cables, and chargers, as well as Bluetooth input devices and mobile device protectors, in addition to external HDD enclosures and, of course, Bluetooth speakers.
The first product we'll look at is the USB 3.0 enclosure, which ships with a USB cable and manual in addition to the tool-free enclosure itself. It is a very simple product at a very low price, and small enough to stick in a laptop bag without leaving an unsightly bulge. The base model is currently $14 on Amazon, and for an extra $5 you can get one that supports the USB Attached SCSI Protocol, allowing an SSD to hit full speed when installed in the enclosure. The USB 3.0 cable is a dual-male cable; no proprietary plugs or breakable adapters are needed, and since enough power can be provided over USB, this is the only cable you will need. The only compatibility issue concerns the relatively uncommon 12mm 2.5" drives, which will not fit; 9.5mm and 7mm drives are both acceptable, and there is a removable cushion to keep a 7mm drive nice and snug.
Introduction, Specifications and Packaging
G.Skill is likely better known for their RAM offerings, but they have actually been in the SSD field since the early days. My first SSD RAID was on a pair of G.Skill Flash SSDs. While they were outmaneuvered by the X25-M, they were equipped with SLC flash, and G.Skill offered them at a significantly lower price than the Samsung OEM units they were based on.
Since those early days of flash, G.Skill has introduced a few additional models but has not been known as a major player in the SSD market. That is set to change today, with their introduction of the Phoenix Blade PCIe SSD:
If you're eager to know what is inside or how it works, I'll set your mind at ease with this brief summary. The Phoenix Blade is essentially an OCZ RevoDrive 350, but with beefier specs and improved performance. The same SandForce 2281 controllers and Toshiba flash are used. The difference comes in the form of a smaller form factor (half height vs. full height PCIe), and the type of PCIe to SATA bridge chip used. More on that on the disassembly page.
Given that we are anticipating a launch of the Samsung 850 EVO very shortly, it is a good time to backfill the complete performance picture of the 850 Pro series. We have done several full capacity roundups of various SSD models over the past months, and the common theme with all of them is that as the die count is reduced in lower capacity models, so is the parallelism that can be achieved. This effect varies based on the type of flash memory die used, but the end result is mostly an apparent reduction in write performance. Fueling this issue is the increase in flash memory die capacity over time.
There are two different ways to counteract the effects of write speed reductions caused by larger capacity / fewer dies:
- Reduce die capacity.
- Increase write performance per die.
Recently there has been a trend towards *lower* capacity dies. Micron makes their 16nm flash in both 128Gbit and 64Gbit. Shifting back towards the 64Gbit dies in lower capacity SSD models helps them keep the die count up, increasing overall parallelism, and therefore keeping write speeds and random IO performance relatively high.
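As a rough sketch of why the die capacity choice matters, consider how die count (and therefore parallelism) falls out of drive capacity. The per-die write speed used here is an assumed placeholder for planar-NAND-class flash, not a measured figure:

```python
# Die count (and hence parallelism) for a given drive capacity,
# comparing 128Gbit vs 64Gbit dies. The per-die write speed is an
# assumed illustrative value, not a measured spec.
PER_DIE_WRITE_MB_S = 19  # assumption, roughly planar-NAND class

def die_count(drive_gb: int, die_gbit: int) -> int:
    # 8 bits per byte: convert drive capacity (GB) to Gbit, divide by die size.
    return (drive_gb * 8) // die_gbit

for die_gbit in (128, 64):
    dies = die_count(128, die_gbit)
    print(f"{die_gbit}Gbit dies: {dies} dies, "
          f"~{dies * PER_DIE_WRITE_MB_S} MB/sec aggregate write")
```

Halving the die capacity doubles the die count at the same drive capacity, which (all else being equal) roughly doubles the aggregate write throughput available to the controller.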
Introduction and Features
EZ-Clone in Standalone disk-cloning mode
Kingwin’s new EZ-Clone (Model: USI-2535CLU3) is a HDD/SSD adapter that can be used as a standalone disk-cloning device or as an external hard drive adapter. When used in standalone mode, the self-powered EZ-Clone can clone one SATA/IDE drive to a new SATA drive in minutes (IDE to SATA or SATA to SATA) without being connected to a PC.
EZ-Clone being used as an external drive adapter
When used as an external drive adapter the EZ-Clone provides connectors for attaching two SATA drives (SSD or HDD) and one IDE hard drive in 2.5” or 3.5” form factors. The EZ-Clone adapter connects to a PC using the high-speed USB 3.0 interface. When used as an external drive adapter, the user can access up to two external drives at the same time (two SATA drives or one SATA and one IDE drive).
Kingwin EZ-Clone Key Features: (from the Kingwin website)
• EZ-Clone model: USI-2535CLU3
• External USB 3.0 to dual-SATA & single-IDE clone adapter
• Standalone disk duplicator with One-Touch Clone Button (no PC required)
• Supports 2.5” and 3.5” IDE and SATA drives (HDD or SSD)
• Compatible with SATA I/II/III (1.5/3.0/6.0 Gbps)
• SATA Drive Hot-swap compatibility
• Supports hard drives up to 3TB disk size
• Dual output power supply with standard 4-pin and SATA power connectors
• Up to 5 Gbps data transfer rate with USB 3.0 (also compatible with USB 2.0)
• USB Plug-and-play capability
• Two Drive LEDs (red) and four Clone Progress LEDs (blue)
• Screw-less, easy to attach connectors
• Windows and Mac OS compatible (no driver installation required)
• 1-Year Warranty from Kingwin
• MSRP $39.99 USD ($33.99 from Amazon.com, Oct. 2014)
** Edit **
The tool is now available for download from Samsung here. Another note is that they intend to release an ISO / DOS version of the tool at the end of the month (for Linux and Mac users). We assume this would be a file system agnostic version of the tool, which would either update all flash or wipe the drive. We suspect it would be the former.
** End edit **
As some of you may have been tracking, there was an issue with Samsung 840 EVO SSDs where ‘stale’ data (data which had not been touched for some period of time after writing it) saw slower read speeds as time since written extended beyond a period of weeks or months. The rough effect was that the read speed of old data would begin to slow roughly one month after written, and after a few more months would eventually reach a speed of ~50-100 MB/sec, varying slightly with room temperature. Speeds would plateau at this low figure, and more importantly, even at this slow speed, no users reported lost data while this effect was taking place.
An example of file read speeds slowing relative to file age.
Since we first published on this, we have been coordinating with Samsung to learn the root causes of this issue and how they will be fixed, and we have most recently been testing a pre-release version of the fix. First let's look at the newest statement from Samsung:
Because of an error in the flash management software algorithm in the 840 EVO, a drop in performance occurs on data stored for a long period of time AND has been written only once. SSDs usually calibrate changes in the statuses of cells over time via the flash management software algorithm. Due to the error in the software algorithm, the 840 EVO performed read-retry processes aggressively, resulting in a drop in overall read performance. This only occurs if the data was kept in its initial cell without changing, and there are no symptoms of reduced read performance if the data was subsequently migrated from those cells or overwritten. In other words, as the SSD is used more and more over time, the performance decrease disappears naturally. For those who want to solve the issue quickly, this software restores the read performance by rewriting the old data. The time taken to complete the procedure depends on the amount of data stored.
This partially confirms my initial theory in that the slow down was related to cell voltage drift over time. Here's what that looks like:
As you can see above, cell voltages will shift to the left over time. The above example is for MLC. TLC in the EVO will have not 4 but 8 divisions, meaning even smaller voltage shifts might cause the apparent flipping of bits when a read is attempted. An important point here is that all flash does this - the key is to correct for it, and that correction is what was not happening with the EVO. The correction is quite simple really. If the controller sees errors during reading, it follows a procedure that in part adapts to and adjusts for cell drift by adjusting the voltage thresholds for how the bits are interpreted. With the thresholds adapted properly, the SSD can then read at full speed and without the need for error correction. This process was broken in the EVO, and that adaptation was not taking place, forcing the controller to perform error correction on *all* data once those voltages had drifted near their default thresholds. This slowed the read speed tremendously. Below is a worst case example:
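To make the threshold-adaptation idea concrete, here is a toy numeric model. This is purely illustrative and in no way represents Samsung's actual algorithm; all voltages and the drift amount are made-up values:

```python
# Toy model of read-threshold adaptation (illustrative only -- this is
# NOT Samsung's actual algorithm; all values are made up). Cell voltages
# drift downward over time; fixed read thresholds begin misreading aged
# cells, while thresholds shifted to follow the drift keep reads clean.
programmed = [0.8, 1.6, 2.4, 3.2]   # idealized MLC state voltages (4 states)
drift = 0.5                         # voltage lost after long storage
aged = [v - drift for v in programmed]

def read_state(voltage, thresholds):
    # Return the index of the first threshold the voltage falls under.
    for i, t in enumerate(thresholds):
        if voltage < t:
            return i
    return len(thresholds)

fixed = [1.2, 2.0, 2.8]                  # thresholds left at factory values
adapted = [t - drift for t in fixed]     # thresholds shifted with the drift

fixed_reads = [read_state(v, fixed) for v in aged]
adapted_reads = [read_state(v, adapted) for v in aged]
print(fixed_reads)     # [0, 0, 1, 2] -- three of four states misread
print(adapted_reads)   # [0, 1, 2, 3] -- all states read correctly
```

In the misread case, the controller falls back to error correction and read-retry on every access, which is exactly the kind of extra work that was dragging EVO read speeds down.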
We are happy to say that there is a fix, and while it won't be public until some time tomorrow, we have been green-lit by Samsung to publish our findings.
Introduction and Test System Setup
A while ago, in our review of the WD Red 6TB HDD, we noted an issue with the performance of queued commands. This could potentially impact the performance of those drives in multithreaded usage scenarios. While Western Digital acted quickly to get updated drives into the supply chain, some of the first orders might have shipped with unpatched drives. To be clear, an unpatched 5TB or 6TB Red still performs well, just not as well as it *could* perform with the corrected firmware installed.
We received updated samples from WD and also applied a firmware update to the samples used in our original review. We were able to confirm that the update does in fact work, and brings a WD60EFRX-68MYMN0 to the identical and improved performance characteristics of a WD60EFRX-68MYMN1 (note the last digit). In this article we will briefly clarify those performance differences, now that we have data more consistent with the vast majority of 5 and 6TB Reds that are out in the wild.
Test System Setup
We currently employ a pair of testbeds: a newer ASUS P8Z77-V Pro/Thunderbolt and an ASUS Z87-PRO. Storage performance variance between the two boards has been deemed negligible.
PC Perspective would like to thank ASUS, Corsair, and Kingston for supplying some of the components of our test rigs.
| Hard Drive Test System Setup | |
| --- | --- |
| CPU | Intel Core i7-4770K |
| Motherboard | ASUS P8Z77-V Pro/TB / ASUS Z87-PRO |
| Memory | Kingston HyperX 4GB DDR3-2133 CL9 |
| Hard Drive | G.Skill 32GB SLC SSD |
| Video Card | Intel® HD Graphics 4600 |
| Power Supply | Corsair CMPSU-650TX |
| Operating System | Windows 8.1 X64 (Update 1) |
- PCMark Vantage and 7
- HDTach *omitted due to incompatibility with >2TB devices*
- PCPer File Copy Test
Introduction and Specifications
Today Micron lifted the review embargo on their new M600 SSD lineup. We covered their press launch a couple of weeks ago, but as a recap, the headline addition is the new Dynamic Write Acceleration feature. As this is a new (and untested) feature that completely changes the way an SSD must be tested, we will be diving deep on this one later in this article. For the moment, let's dispense with the formalities.
Here are the samples we received for testing:
It's worth noting that since all M600 models use 16nm 128Gbit dies, packaging is expected to have a negligible impact on performance. This means the 256GB mSATA sample should perform identically to its 2.5" SATA counterpart. The same goes for comparisons against M.2 form factor units. More detail is present in the specs below:
Highlights from the above specs are the increased write speeds (no doubt thanks to Dynamic Write Acceleration) and improved endurance figures. For reference, the prior gen Micron models were rated at 72TB (mostly regardless of capacity), so seeing figures upwards of 400TB indicates Micron's confidence in their 16nm process.
Sorry to disappoint here, but the M600 is an OEM targeted drive, meaning its 'packaging' will likely be the computer it comes installed in. If you manage to find it through a reseller, it will likely come in OEM-style brown/white box packaging.
We have been evaluating these samples for just under a week and have logged *many* hours on them, so let's get to it!
Investigating the issue
** Edit ** (24 Sep)
We have updated this story with temperature effects on the read speed of old data. Additional info on page 3.
** End edit **
** Edit 2 ** (26 Sep)
New quote from Samsung:
"We acknowledge the recent issue associated with the Samsung 840 EVO SSDs and are qualifying a firmware update to address the issue. While this issue only affects a small subset of all 840 EVO users, we regret any inconvenience experienced by our customers. A firmware update that resolves the issue will be available on the Samsung SSD website soon. We appreciate our customer’s support and patience as we work diligently to resolve this issue."
** End edit 2 **
** Edit 3 **
The firmware update and performance restoration tool has been tested. Results are found here.
** End edit 3 **
Over the past week or two, there have been growing rumblings from owners of Samsung 840 and 840 EVO SSDs. A few reports scattered across internet forums gradually snowballed into lengthy threads as more and more people took a longer look at their own TLC-based Samsung SSD's performance. I've spent the past week following these threads, and the past few days evaluating this issue on the 840 and 840 EVO samples we have here at PC Perspective. This post is meant to inform you of our current 'best guess' as to just what is happening with these drives, and just what you should do about it.
The issue at hand is an apparent slow down in the reading of 'stale' data on TLC-based Samsung SSDs. Allow me to demonstrate:
You might have seen what looks like similar issues before, but after much research and testing, I can say with some confidence that this is a completely different and unique issue. The old X25-M bug was the result of random writes to the drive over time, but the above result is from a drive that only ever saw a single large file written to a clean drive. The drive in question is the very same 500GB 840 EVO sample used in our prior review. It did just fine in that review, and afterwards I needed a quick temporary place to put a HDD image file and just happened to grab that EVO. The file was written to the drive in December of 2013, and if it wasn't already apparent from the above HDTach pass, it was 442GB in size. This brings on some questions:
- If random writes (i.e. flash fragmentation) are not causing the slow down, then what is?
- How long does it take for this slow down to manifest after a file is written?
Introduction, Specifications and Packaging
It seems a lot of folks have been incorporating Silicon Motion's SM2246EN controller into their product lines. We first reviewed the Angelbird SSD wrk, but only in a 512GB capacity. We then reviewed a pair of Corsair Force LX's (256GB and 512GB). ADATA has joined the club with their new Premier SP610 product line, and today we are going to take a look at all available capacities of this new model:
It's fortunate that ADATA was able to sample us a full capacity spread, as this will let us evaluate all shipping SSD capacities that exist for the Silicon Motion SM2246EN controller.
Introduction, Specifications and Packaging
We first looked at the Silicon Motion 2246EN controller in our Angelbird SSD wrk review. In that review, we noted the highest sequential performance seen in any SATA SSD reviewed to date. Eager to expand our testing to include additional vendors and capacities, our next review touching on this controller is the Corsair Force LX series of SSDs. The Force LX Series is available in 128GB, 256GB, and 512GB capacities, and today we will look at the 256GB and 512GB iterations of this line: