
Samsung 840 / 840 EVO susceptible to flash read speed degradation over time

Subject: Editorial, Storage
Manufacturer: PC Perspective
Tagged: tlc, Samsung, bug, 840 evo, 840

Investigating the issue

** Edit ** (24 Sep)

We have updated this story with temperature effects on the read speed of old data. Additional info on page 3.

** End edit **

** Edit 2 ** (26 Sep)

New quote from Samsung:

"We acknowledge the recent issue associated with the Samsung 840 EVO SSDs and are qualifying a firmware update to address the issue.  While this issue only affects a small subset of all 840 EVO users, we regret any inconvenience experienced by our customers.  A firmware update that resolves the issue will be available on the Samsung SSD website soon.  We appreciate our customer’s support and patience as we work diligently to resolve this issue."

** End edit 2 **

** Edit 3 **

The firmware update and performance restoration tool have been tested. Results are found here.

** End edit 3 **

Over the past week or two, there have been growing rumblings from owners of Samsung 840 and 840 EVO SSDs. A few reports scattered across internet forums gradually snowballed into lengthy threads as more and more people took a longer look at the performance of their own TLC-based Samsung SSDs. I've spent the past week following these threads, and the past few days evaluating this issue on the 840 and 840 EVO samples we have here at PC Perspective. This post is meant to inform you of our current 'best guess' as to what is happening with these drives, and what you should do about it.

The issue at hand is an apparent slowdown in the reading of 'stale' data on TLC-based Samsung SSDs. Allow me to demonstrate:

[Screenshot: HDTach read pass of the 500GB 840 EVO sample, showing degraded read speeds across the old data]

You might have seen what looks like similar issues before, but after much research and testing, I can say with some confidence that this is a completely different and unique issue. The old X25-M bug was the result of random writes to the drive over time, but the above result is from a drive that only ever saw a single large file written to a clean drive. The above drive was the very same 500GB 840 EVO sample used in our prior review. It did just fine in that review, and afterwards I needed a quick temporary place to put a HDD image file and just happened to grab that EVO. The file was written to the drive in December of 2013, and if it wasn't already apparent from the above HDTach pass, it was 442GB in size. This raises some questions:

  • If random writes (i.e. flash fragmentation) are not causing the slow down, then what is?
  • How long does it take for this slow down to manifest after a file is written?

Read on for the full scoop!

Just to double check myself, and to try and disturb our 'stale data' sample as little as possible, I added a small 10GB test file and repeated the test:

[Screenshot: HDTach read pass repeated after adding a 10GB test file]

Two important things here:

  • An additional HDTach read pass did not impact the slow read speeds in any way. This means that if there is some sort of error process occurring, nothing is being done to correct it from pass to pass.
  • The 10GB file appears at the end of the drive. For those curious, the saturation speed (the nice flat line at the max SATA speed) is simply how Samsung controllers return requests for unallocated (i.e. TRIMmed) data. Since the SSD has zero work to do for those requests, it can instantly return zeroed-out data.

Now to verify actual file reads within Windows. The simplest way is shown here:

New 10GB file:

[Screenshot: Windows copy of the new 10GB file]

'Old' image file:

[Screenshot: Windows copy of the 'old' image file]

The above copies (especially the large older file) show nearly identical speed profiles to what was seen in HDTach. It's important to do this double check when using HDTach as a test, since it uses a QD=1 access pattern that doesn't play well with some SSD controllers. That's not the case here: despite the slowdowns, the EVO's controller itself is snappy, but it appears to be dealing with something else that slows down data retrieval.
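
If you want to run the same sanity check on your own drive, a timed sequential read of an old file versus a freshly written one tells the same story. Below is a minimal sketch (not the exact tool used for the screenshots above; the file paths are whatever stale and fresh files you have on the drive). Note that the files should be much larger than free RAM, since an OS-cached read will report implausibly high speeds.

    # Minimal sketch: time a sequential read of one or more existing files.
    # Compare an old ("stale") file against a freshly written one on the same drive.
    # Caveat: results only mean something if the file isn't sitting in the OS cache.
    import sys
    import time

    CHUNK = 8 * 1024 * 1024  # 8 MiB sequential reads

    def read_throughput(path):
        total = 0
        start = time.perf_counter()
        with open(path, "rb", buffering=0) as f:
            while True:
                chunk = f.read(CHUNK)
                if not chunk:
                    break
                total += len(chunk)
        elapsed = time.perf_counter() - start
        return total / (1024 * 1024) / elapsed  # MB/s

    if __name__ == "__main__":
        for path in sys.argv[1:]:
            print(f"{path}: {read_throughput(path):.1f} MB/s")

Run it against one of the old files and one of the new files (the script name and paths are hypothetical, e.g. python read_check.py old_image.img new_testfile.bin); on an affected drive the old file comes in far below the new one.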

Let's dig further, with some help from the community:


September 20, 2014 | 02:28 AM - Posted by Anonymous (not verified)

What's a good alternative to a Samsung EVO ?

September 20, 2014 | 02:56 AM - Posted by Morry Teitelman

The OCZ Vertex 460 is a fast alternative that can be found on sale at a good price...

September 20, 2014 | 07:01 PM - Posted by Tim Verry

Agreed, I bought a Vertex 460 120GB for about $75 on Amazon the other day :). So far so good!

September 28, 2014 | 11:15 PM - Posted by Anonymous (not verified)

OCZ drives are of very poor quality. They use low-quality components and push them too hard to achieve the high speeds and low prices, and the result is a drive with a high failure rate.

September 20, 2014 | 04:19 AM - Posted by Allyn Malventano

With a new EVO coming from Samsung, as well as the M600 replacing the older Micron M550, you're going to see good deals on all of the soon-to-be prior gen stuff (840 EVO included - just as soon as they issue a firmware update for it). 

September 20, 2014 | 02:52 AM - Posted by jtiger102

Is it just the 840 series that is affected by this bug? Would the 830 or 850 potentially be susceptible as well?

September 20, 2014 | 03:15 AM - Posted by Scott Michaud

Just the 840 and the 840 EVO. The 840 Pro is not affected. The 830 is not affected. The 850 Pro is also not affected. It is only the TLC-based models, apparently.

September 20, 2014 | 03:25 AM - Posted by jtiger102

Ok. Thanks!

September 20, 2014 | 09:37 AM - Posted by arbiter

I don't remember there being an 840 version; the only ones I ever heard of were the 840 Pro and 840 EVO. I never heard of a third model in there.

September 20, 2014 | 01:59 PM - Posted by Scott Michaud

Yeah, they were soon made obsolete by the EVO. It was Samsung's first attempt at TLC.

September 20, 2014 | 04:02 AM - Posted by Chaitanya Shukla

I have a couple of 840 EVOs in my office machines. I am eagerly waiting for an update from Samsung.

September 25, 2014 | 05:13 PM - Posted by Briggsy (not verified)

ditto

September 20, 2014 | 04:48 AM - Posted by Josyd (not verified)

How long does it take for this slow down to manifest after a file is written?

My 2 EVOs are <6mo old and already have this issue. So I'm guessing not long at all.

September 20, 2014 | 11:23 AM - Posted by BrainSplatter (not verified)

My 1TB EVO is just 4 weeks old and has clear signs of this problem.

My latest tests also seem to indicate that higher temperatures (but still in the normal range, below 60 °C) further reduce the read speed.

September 20, 2014 | 05:32 AM - Posted by jchambers2586

No issues here in RAID 0

http://i1333.photobucket.com/albums/w630/jchambers2586/ssd_zpsaffcb5ea.png

September 20, 2014 | 07:19 AM - Posted by BrainSplatter (not verified)

You don't understand. It's on old data that the bug shows, not new. Most benchmark programs like Crystal, AS-SSD, etc. write new files for testing, and then the speed is fine.

September 20, 2014 | 05:43 AM - Posted by Simon Zerafa (not verified)

Hi,

Has anyone tried using SpinRite at level 2 (or better, level 4) to see if this fixes the issue?

Regards

Simon

September 20, 2014 | 06:29 PM - Posted by ironpossum (not verified)

Simon,

I was thinking of SpinRite, as well, when reading this article.

My suggestion would be SpinRite level 3 (only one write and two reads per sector).

I have done level 3 on an SSD in the past to bring a drive back to life after older firmware had created some issues - haven't had a problem since!

-ironpossum

September 21, 2014 | 03:25 AM - Posted by Allyn Malventano

Any Spinrite mode that writes data back will restore full speed. I would use the lowest mode since higher modes may unnecessarily wear the flash.

September 22, 2014 | 07:14 PM - Posted by Josyd (not verified)

You don't even have to do that. A simple wipe/reinstall, or in my case imaging, gets rid of the problem. It doesn't last long though. I've seen some cases where it's happening within weeks.

September 20, 2014 | 08:06 AM - Posted by Uncle T (not verified)

I have this speed degradation problem on my 840 SSD 120GB (Basic, not Pro) sitting in a travel laptop I rarely use. These last 9 months it has been powered on only on Patch Tuesdays, to update the system. I mostly use portable versions of programs on it, all located nicely in a Portable Programs folder, so they can be copied elsewhere and then copied back easily as instructed here in this fine article. My question however is: since this laptop gets very little use, does my 840 SSD consider the whole OS as "old data" as well? If so, how on earth would I go about rewriting those files? There's a snowball's chance in Hell I am reinstalling Windows or cloning this drive; too much hassle for me because Samsung sold me a defective product. Will the MyDefrag solution move/refresh the whole OS? And also, the fact that I have to DEFRAG my SSD to temporarily fix this means that if/when Samsung finally decides to try and correct this with a firmware update, it's too late for me. They have lost a customer. I am going to demand a replacement. There. I got to vent a little bit... Very frustrating when you spend a few days unscrewing a gazillion screws to get the SSD placed into a Fort Knox certified laptop, install the OS and drivers, update and tweak the system, and then put it away for later use. Then fire it up a month later only to see that the performance has degraded to worse than IDE speed, while sitting in the closet. This is so bad it's hilarious... Maybe it's time we stop shopping for deals; data is important. Let's all just buy the most expensive crap we can find and shut up. Let's be Apple people.

September 20, 2014 | 08:27 AM - Posted by BrainSplatter (not verified)

Best temporary fix at the moment is to use:
http://www.mydefrag.com/
with the 'data disk monthly' script.

This works because it will essentially move/rewrite every file (and not because there is bad fragmentation).

September 20, 2014 | 11:16 AM - Posted by Uncle T (not verified)

Ok, thanks. Still absurd having to shorten the drive's lifespan by defragging/rewriting every month or so. Kinda like having to sandblast your face every morning before applying makeup =) Samsung will pay for this dearly; word of mouth is more powerful than they think. My trusty old Intel 520 in my main computer still performs like a champ - tested that one too in the wake of this Samsung fiasco. From this day forward Intel will get my money.

September 21, 2014 | 04:18 AM - Posted by arbiter

From what was seen in the story, you don't need to do it monthly - maybe more like every 6 months. Going by the graph, it takes 30+ weeks before it starts to really become a problem.

http://www.pcper.com/files/review/2014-09-19/900x900px-LL-4985de76_2014-...

September 25, 2014 | 03:34 PM - Posted by Anonymous (not verified)

I thought it was a no-no to defrag an SSD.

September 26, 2014 | 07:46 AM - Posted by fred (not verified)

It's true, defragging is not needed on SSDs because there are zero benefits and the rewrites only shorten the drive life.

But this is not a normal situation. The only reason the others are suggesting a defrag run is to "freshen" the written data so that it now falls before the date threshold when the drive slows down.

Nobody is suggesting the same use of defrag as on a spinning drive. Nobody is saying "turn on continuous or frequent defrag." What they are saying is, sacrificing a relatively small number of rewrites every few weeks may be worth restoring the full performance of the drive. A very light defrag just happens to fulfill those criteria. But so does a quick backup and restore.

And if Samsung gets a firmware fix out soon, the number of rewrites sacrificed may be pretty small.
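
For illustration only, here is a hypothetical sketch of that backup-and-restore / rewrite idea in script form: anything older than a chosen age is copied and swapped back into place so its data lands in freshly programmed flash. The folder and age threshold are made-up placeholders, every touched file costs one full rewrite, and this is no substitute for the firmware fix; back up first and skip files that are in use.

    # Hypothetical sketch of the "refresh by rewriting" workaround discussed above.
    # Rewrites files not modified within AGE_DAYS so their data is freshly written.
    # ROOT and AGE_DAYS are placeholder assumptions, not recommendations.
    import os
    import shutil
    import tempfile
    import time

    ROOT = r"D:\stale-data"   # hypothetical folder on the affected drive
    AGE_DAYS = 60             # only rewrite files older than this

    def refresh_file(path):
        # Copy to a temp file on the same volume, then swap it over the original.
        fd, tmp = tempfile.mkstemp(dir=os.path.dirname(path))
        os.close(fd)
        try:
            shutil.copy2(path, tmp)   # fresh copy lands in newly programmed flash
            os.replace(tmp, path)     # atomically replace the stale original
        except OSError:
            os.remove(tmp)
            raise

    cutoff = time.time() - AGE_DAYS * 86400
    for dirpath, _, filenames in os.walk(ROOT):
        for name in filenames:
            full = os.path.join(dirpath, name)
            if os.path.getmtime(full) < cutoff:
                refresh_file(full)

    # Note: shutil.copy2 preserves timestamps, so a repeat run will rewrite the
    # same files again; sector-level tools like DiskFresh cover the whole drive.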

September 20, 2014 | 11:22 AM - Posted by PCPerFan (not verified)

My drive and many other drives are showing ~1MB/s in benchmarks and explorer transfers. The minimum speed is not ~50MB/s as stated in the article.

September 20, 2014 | 12:12 PM - Posted by BrainSplatter (not verified)

You have to differentiate between smaller (< 100KB) and larger files. Smaller files are inherently slower because the SSD can't read the data in parallel from different internal flash blocks.

The bug was explicitly tested with larger files (>=500KB), which should always be read at something like 300-500MB/s. That was because the tests wanted to eliminate the chance that the slow speeds come from a lot of small files rather than from the degradation.

It's likely that smaller files are affected as well and might read much lower than the normal small-file speed of 20-35MB/s. It has just not been examined closely so far.

September 20, 2014 | 12:26 PM - Posted by PCPerFan (not verified)

The benchmark I used does not differentiate between file sizes, but when I try to copy an 8-month-old 1.5GB file, I see under ~1MB/s in Explorer.

September 20, 2014 | 01:29 PM - Posted by BrainSplatter (not verified)

Wow. That would be the worst case by far. Maybe you could run FileBench, the tool I wrote to test this issue:
http://www.overclock.net/attachments/26208

September 29, 2014 | 04:21 PM - Posted by Alex Antonio (not verified)

That's about how bad my drive is performing: http://i.imgur.com/8EXZz9c.jpg

September 20, 2014 | 01:34 PM - Posted by Anonymous (not verified)

I work with Samsung as a vendor for my company. I can tell you without a doubt they will be on top of this. Their track record for providing software/firmware fixes is top notch.

September 20, 2014 | 02:26 PM - Posted by Humanitarian

Here's my completely uneducated hypothesis;

The cells are slowly drifting over time; they start to level out as the energy in the cell reaches a point where... ehh... less voltage = less leakage?, but not so much that it flips a bit, just enough so that the error correction has a big job to do on every cell it comes across. And it doesn't correct errors until the flash needs to be overwritten.

And that's what's wrong. Samsung, I await your employment offer.

September 20, 2014 | 02:40 PM - Posted by Anonymous (not verified)

On hearing this news, the Samsung CEO should have had the executives responsible in the office, with apologies to their families, working on the problem; this includes the necessary engineering personnel, software and hardware. In order to even be able to afford holidays with the family, there need to be sales, and sales require customer satisfaction. Better to have a little disappointment around holiday time than big layoffs later; the competition is not waiting for the holidays to be over.

September 20, 2014 | 02:52 PM - Posted by AnonymousAI (not verified)

This is why I won't buy Samsung products. Their hard drives and optical drives had such horrible reliability (is it any wonder that they sold hard drives for $10-15 cheaper than every other brand?) that MDG pulled their contract years ago. I know too many people that have been burned by their TV reliability too. Samsung couldn't handle the support problems, so they sold off the hard drive division of the company. And yes, OCZ suffered a similar fate when they couldn't handle the warranty exchanges on their failing SSDs.

Now it's Samsung's SSD's.

September 20, 2014 | 05:38 PM - Posted by Anonymous (not verified)

I am also hearing that these Samsung SSDs are not so good in RAID 0 configurations with uneven numbers of SSDs, per the user posts on the AnandTech article about this issue. I know that the Samsung system software sucks; my Series 3 laptop cannot keep the WiFi off at bootup. Half of the time it boots up with the WiFi on, and it will also auto-connect to any available WiFi router, which is not so good for security or for air travel, where there may be a need to keep the WiFi off. Even disabling the WiFi in Windows will not keep the WiFi off. It appears that Samsung's QC in the software/firmware department is bad all around.

September 20, 2014 | 07:42 PM - Posted by Anonymous (not verified)

I haven't had any problems with ANY Crucial SSD's (knock on wood), and I've sold probably over fifty of them in my shop in the last year. I've had exactly 5 OCZ SSD's (a mix of models) and all of them were dead within a year, so I stopped carrying them. That was before the company selloff. I've had the odd Kingston SSD fail too, but nothing as bad as OCZ. I've seen way too many Samsung hard drives fail in various machines to ever recommend or use them though.

September 24, 2014 | 02:18 PM - Posted by BBMan (not verified)

I've got an 830 and an 840 PRO and I have had no issues for 2 years. And the EVO appears to have workarounds and fixes. That said, they DO appear to have a shorter life than the PROs, but you will probably go through a chunk of a petabyte before you get there.
http://techreport.com/review/27062/the-ssd-endurance-experiment-only-two...
Regardless, I have to be a hard-drive snot and go for the high-warranty stuff - and the 850 PRO has a 10-year warranty. Overall I've been pleased with a lot coming out of Samsung and Korea in general.

September 25, 2014 | 11:01 PM - Posted by Anonymous (not verified)

Neither of my two OCZ SSDs failed or slowed down!

I have used an OCZ Vertex 2 since December 2010 and it still performs without error from factory stock condition. I never flashed newer firmware on the Vertex 2.

I have used an OCZ Vertex 3 SSD since November 2011 on its stock factory firmware too. It has never been flashed with newer firmware either. It performs perfectly.

Are you sure you bought legitimate OCZ SSDs?

I think you are a complete liar.

September 26, 2014 | 02:45 PM - Posted by Allyn Malventano

Your sample size is two. For a more informed decision, try reading Amazon reviews on those drives. Plenty of them died. I personally killed two in normal usage.

September 20, 2014 | 11:47 PM - Posted by Wharrrgarbl (not verified)

I had an 840 500GB SSD. Roughly 1 month ago, transfer speeds dropped to around 11MB/s regardless of file size. Also, I began receiving this error while transferring files from the 840 SSD: "Error 0x8007045D: the request could not be performed because of an I/O device error". All this began occurring after less than 1 year of use and 2.5TB written to the drive (according to Samsung Magician). My Samsung 128GB 830, however, has had 4TB written to it and is still running strong. I think that the above situation speaks for itself.

September 21, 2014 | 01:11 AM - Posted by Musafir_86 (not verified)

-Excuse me, but don't ALL SSDs (and NAND flash storage) have wear-leveling algorithms that are supposed to move around (shuffle) the data internally? So that old data never gets shuffled at all?

-What about data retention? What'll happen to the data in a drive that has been unplugged for months/years?

-And how about the earlier 840 (non-EVO, non-Pro), which uses 21nm TLC NAND (versus 19nm TLC NAND in the 840 EVO)? Does it have the same issue too? I helped a relative upgrade his ProBook 6555b to a 120GB 840. :(

Regards.

September 21, 2014 | 02:57 AM - Posted by Anonymous (not verified)

I thought the wear leveling was only concerned with new data and with old released/freed space not being used over and over again; it keeps track of the number of write cycles for each block and tries to utilize the least-used free/freed space when it becomes available. The current problem appears to revolve around old files that are never accessed or moved for longer periods of time losing their states and having to have error correction applied in order to retrieve the data in an error-free form; this error correction takes time, and that is what is slowing things down. Of course, moving the file by reading it to another location and back to the SSD, by various methods, refreshes the TLC in the short term, but does not alleviate the problem. Likewise, the short-term fix does impose more reading and writing on the SSD, and increases the number of cycles on the SSD, that would not be needed had this problem not arisen in the first place.

September 21, 2014 | 03:30 AM - Posted by Anonymous (not verified)

There may be something to this happening on a smaller process node, with the 21nm TLC NAND being able to hold a more stable state for a longer period of time than the 19nm TLC NAND in the 840 EVO, and further node shrinks will see this problem magnified. If this problem is intrinsic to the smaller process nodes, then some form of re-engineering of the process chemistry/NAND geometry may have to be instituted on the smaller process node shrinks in the future, or maybe die stacking at a slightly larger node can alleviate the problem in the shorter term, while more study is done into the causes in the long run. The firmware solution may include reading and writing in place, but that won't help the wear leveling tradeoff of having to do more read/write cycles to refresh the TLC.

September 23, 2014 | 07:17 PM - Posted by Allyn Malventano

Excellent point, and that's something we are looking into now. Wear leveling *should* spread writes across flash - even flash that contained the old data, meaning that writes taking place in regular use would act to 'freshen up' the stale data, even though those LBA's were not explicitly rewritten by the host OS. This does not appear to be happening with our samples.

September 24, 2014 | 05:05 PM - Posted by razor512

Wear leveling usually takes place at the start of a write: the SSD buffers a tiny bit of data into its memory, and then does some processing to figure out the best locations to place the data in order to make sure no cell is getting overused.
The problem with this is that spare capacity is also used, so by the time your SSD begins reallocating sectors, the drive is effectively on its last legs, since it means that all of the cells are close to death and the slightly weaker ones are dying first.

When the reallocation process starts, there is a chance for ECC to encounter an unrecoverable error during a write and thus cause you to lose data. E.g., in the Tech Report SSD endurance test, the EVO experienced some data corruption at around 100TB of writes (likely a few flash cells were leaky and could not retain data for very long).

http://techreport.com/review/26523/the-ssd-endurance-experiment-casualti...

September 22, 2014 | 12:40 PM - Posted by Anonymous (not verified)

New Volume(E) = Samsung 840 evo 512GB?

September 23, 2014 | 04:34 PM - Posted by Allyn Malventano

Correct.

September 22, 2014 | 09:42 PM - Posted by Rick (not verified)

I am seeing this on my RAID 0 setup. I really only use my PC for gaming, so most of the data is old. I have two 250GB drives in RAID 0 on a Z87 motherboard. I only have about 10% of the array filled with data. The first 10% of the array runs at 15-20MB/s. Once I get past 10%, the array goes to 1000MB/s.

September 24, 2014 | 04:58 AM - Posted by KCPDX (not verified)

I tested out my EVO, and sure enough saw the exact same issue. I ran a program called Diskfresh (free for home users) http://www.puransoftware.com/DiskFresh.html . It reads and writes all of the drive. After doing that the drive returned to decent speeds again.
Short term fix until the firmware comes out.

September 24, 2014 | 05:16 PM - Posted by razor512

It is one of the many major issues of TLC flash: with more bits per cell at smaller process sizes, you begin to rely more heavily on error correction (increasing error rates over time), which slows the performance. While not recommended, you can significantly lower the error rate on an SSD by running SpinRite on it at level 4, though it will eat up a ton of writes (I tried it on a 120GB EVO that a friend bought for a system that was not used much, with barely any writes).
To see the error rate, run SpinRite on level 2 (read-only test), then check out the error rates.
The smaller the process size is, the fewer electrons can be stored, and thus the loss of even a small number of electrons can significantly increase the error rate.
MLC SSDs do not have this issue due to their low error rates.

If the issue is truly one of electron retention in the cell, then any fix they come up with will either be more efficient error correction to minimize the speed loss, or something that will cause the drive to periodically rewrite cells when the drive is idle.

September 24, 2014 | 07:15 PM - Posted by Anonymous (not verified)

Any TLC 3D V-NAND will have to be looked at and tested more for this issue, especially on smaller process nodes.

As far as more efficient/faster error correction goes, maybe adding more processing cores to the controller, plus some internal background idle-time read testing, with rewriting/refreshing of stagnant data if the error rates test too high, maybe on a per-file basis.

It would be nice if the SSD could be managed by some software included with the SSD that could move the old data onto a hard drive, after informing the user and giving them the option, as even a regular hard drive would be faster if the SSD error rate is too high on old SSD-based data/files.

What are the data retention rates on hard drives compared to SSDs, as far as being slowed down by read errors and error-correction-induced delays in read speeds?

September 24, 2014 | 06:19 PM - Posted by Mav'Erik

Should we be expecting Samsung to issue a targeted announcement to 840 EVO purchasers? I don't recall if my purchase (Oct 2013, Amazon) included registration with Samsung. I'll be checking PC Perspective for updates. This is disturbing news.

September 25, 2014 | 12:20 AM - Posted by Rantor (not verified)

So, I should probably hold off on reinstalling my Win7 machine on an 840 EVO I have, as I planned to do this weekend. Guess I'll use the MX100 I have instead. Thanks for the heads up, guys.

September 25, 2014 | 12:53 AM - Posted by KingKookaluke (not verified)

So prior to Samsung jumping through the hoops of the QC process and corporate BS, where do I get free software to rewrite my 840 EVO occasionally and get my speed back? I've noticed that my system boot time has increased dramatically over the last few weeks. I realize that the drive will actually die earlier from doing full rewrites because of this.

September 25, 2014 | 02:49 AM - Posted by Anonymous (not verified)

Depending on the warranty/implied warranty rules and regulations, Samsung may be forced to offer a longer period on the affected model(s) if the firmware solution results in such increased re-writing/extra wear to maintain data integrity. TLC drives should come with enough over-provisioning to allow the initial warranty to be extended, should there be a need or a requirement/judgment for a warranty extension. Any TLC-based SSDs in the future may need a little more extra over-provisioning.

If the firmware fix is not coming soon, Samsung should make some free software available as a temporary solution for the speed degradation issues, and offer some extra warranty time if the firmware fix results in excess wear and tear on the affected SKU(s).

Maybe tiered storage software will make it into the PC/laptop market from the server/HPC market: hierarchical storage management systems, where old stagnant data is moved from SSD to hard drive automatically based on usage algorithms. The user should have the ability to have the SSD's software/firmware rotate the older data off of the SSD before it becomes so stagnant that the SSD's controller becomes overtaxed with error correction loads, leading to the speed issue. Maybe even some driver software that can create a mirrored partition on a hard drive acting as a mirrored store for all/some SSD writes, so that any files beyond a certain age have the mirrored copy on the hard drive overwrite the stagnant file's content on the SSD in the background, to top off the TLC's charge/state. With this type of mirroring, the firmware/software could keep track of the amount of error correction the SSD is using, and if it reaches a certain threshold value on an old SSD file, it could redirect the reads to the hard drive's mirrored copy on a block-by-block basis, to at least keep the transfer from degrading below even a normal hard drive's transfer speed.

Gaming PCs with large game libraries would benefit from a tiered hierarchical storage management system, keeping the most recently used fresh data on the SSD and automatically managing the file stores to prevent SSD speed degradation and excess SSD wear and tear.

September 25, 2014 | 09:30 AM - Posted by Justin150 (not verified)

All I can say is that I am a happy user of many Samsung products. The TV may be old, but works just fine. The monitors likewise. I have had optical drives that just did the job as you would expect.

I even have an old Samsung netbook which gets lugged around everywhere. It just works without fuss

In my HTPC I have a Samsung 840 SSD and have not noticed this issue, maybe none of my data is old.

September 25, 2014 | 01:06 PM - Posted by Anonymous (not verified)

Really? So let's cherry-pick: that Samsung 840 SSD, is it fabricated on a larger process node, and what year was it manufactured? That old TV, is it CRT or LCD/other? Your testimonial, does it even include the product in question? User satisfaction with outdated products that may no longer be made should not factor in as much, if at all, as with the more current products that are still in production.

Samsung's TLC 840 EVO did not receive very much TLC (of a different kind) from Samsung on the firmware/engineering side, so I'll go with SLC for the 100,000+ P/E cycles, and stick with spinning rust for longer-term storage.
Tape is still an option for backups too, when losing the data means losing your A$$.

Samsung had better make good use of the third dimension with that chip-stacking NAND, and forgo some of the process node shrinks, to keep its flash-based storage from becoming very slow DRAM.

September 25, 2014 | 03:04 PM - Posted by ZoA (not verified)

So, to confirm I understand this news update correctly: to fix this issue, all one has to do is keep his 840 SSD cool? So placing it right behind a working fan could do the trick?

September 25, 2014 | 03:42 PM - Posted by Anonymous (not verified)

First, thank you for the article. I have not seen any other review site go into detail about this issue. Other sites have just skimmed the surface mostly.

Second, is it possible the Samsung enterprise drives that also use TLC NAND suffer from this issue? The models are the 845DC EVO, the PM853T (OEM version of the 845DC EVO) and the PM843.

These drives haven't been examined for the issue yet. I'd be very worried if the TLC enterprise drives didn't have this bug discovered during their development.

September 26, 2014 | 02:55 PM - Posted by Allyn Malventano

I'm evaluating that now, but our samples haven't been here long enough to have stale data on them.

September 26, 2014 | 04:35 PM - Posted by Anonymous (not verified)

Nice. Very nice. Good to know you are trying to cover as many angles as you can.

September 27, 2014 | 01:52 AM - Posted by CrisisHawk

In the quote in edit two, they mention the 840 EVO twice, but not the 840. That has to be an oversight, right? I have a 250GB 840 and I have noticed it being slow, but didn't know what to do about it until now. (Thanks for the DiskFresh tip, btw.)

October 1, 2014 | 11:17 PM - Posted by DrWattsOn

FYI: a couple of years ago I put Win7 Pro, 32-bit on one of the then-new Kingston 64GB SSDs and, after it all proved installed & running well, I used an Aluratek MACHINE to clone it to a WD 320GB 7200rpm 2.5in. HDD. After booting the PC with the HDD & confirming it all worked, I used a defragger to look at it before defragging. As expected, almost 100% fragmented. Proving what someone else wisely said, that SSD controllers "lie" to their systems (a good thing)!
After a long defrag session, the HDD ran much more quickly. And it worked well as a "dated" backup.
Incidentally, I have built about 6 or 7 PCs with these WD 320GB HDDs which have been in continuous use for over 5 years. No signs of failure. I'll bet that's because since 1995 I have never installed a HDD without a fan bolted to its bottom, usually running through a resistor to slow it down to reduce noise. Always at considerable effort modding to get them mounted. But it proved worth the effort.
BTW, on the "antique" Kingston 64GB SSD, after noticing a slowdown I ran my SpinRite at level (gasp!) 4. New speed regained (it was about 8 months old). Since then I have run level 1 once a month & level 2 a couple of times, and it has been running well. I also do SpinRite on my stock Dell laptop HDD every few weeks because I don't trust its Seagate 750.

October 5, 2014 | 04:08 AM - Posted by DrWattsOn

re:
** Edit 2 ** (26 Sep)
New quote from Samsung:
"We acknowledge the recent issue associated with the Samsung 840 EVO SSDs and are qualifying a firmware update to address the issue. While this issue only affects a small subset of all 840 EVO users, we regret any inconvenience experienced by our customers. A firmware update that resolves the issue will be available on the Samsung SSD website soon. We appreciate our customer’s support and patience as we work diligently to resolve this issue."
** End edit 2 **
Ha ha ha ha ha "small subset"!

October 6, 2014 | 04:10 PM - Posted by Anonymous (not verified)

With the new firmware, will the data be lost? I.e., will we have to format the drive to install the new firmware?

October 10, 2014 | 10:20 AM - Posted by DrWattsOn

I think it has been stated that no data has ever been lost. Thus just the /relatively/ "slow" transfer or access times. So if the new F/W requires reformatting, it may take considerably longer to back up and reinstall. But so far not one "bit" of evidence or a claim of lost data.
(Ya see what I did there)?

October 7, 2014 | 08:11 PM - Posted by tolou (not verified)

The strangest thing here is that a "severe" issue like this only gets discovered after about 1 year?!
The conclusion is that degradation starts after just a couple of weeks. Has anyone who ran DiskFresh (or similar) some time ago already started to see the read speed go down again?

October 8, 2014 | 11:16 PM - Posted by Paranoidroid (not verified)

So I have an 840 Pro, and I know the consensus seems to be that they are unaffected, but I just ran a defrag and my ultra slow bootup seems to have disappeared...

October 10, 2014 | 10:03 AM - Posted by DrWattsOn

It may do us well to keep an eye on the 850 Pro. It still uses TLC, but maybe their claim that the 3D vertical stacking "locks in" the bits fixes the problem? I think they also use a thicker process that would help the bits remain at their original levels and need less error correction. Maybe better R/W longevity too?

October 15, 2014 | 07:21 AM - Posted by Pig Biting Mad (not verified)

So today there is "Samsung SSD 840 EVO Performance Restoration Software" available, but NOTHING for the 840 Basic... maybe next year, Samsung? Or 2017? Take your time, it's not like it's important or anything. I give it another week, then this 840 comes out of the laptop and gets a spectacular Viking funeral, because the way it is now it's unusable. Lesson learned: never a Samsung product again. Already got a little revenge when my daughter wanted a Galaxy S5 for her birthday, but I bought her the LG G3 instead. Smells like VICTORY!

October 16, 2014 | 02:18 AM - Posted by Anonymous (not verified)

My slow 840 non-EVO sits in an Acer netbook from 2012, and you can be sure that IF Samsung releases that performance restoration software for my drive, it will not support the netbook resolution =) The same thing happened with the Magician software; I was stuck with old firmware because I had to use an old Magician version which failed to update the firmware all the time. Samsung spent over a year correcting the resolution problem, and still every time I start Magician it throws a warning about the resolution... it works though, but come on Samsung, really? So yeah, that's my two cents. A fix is MAYBE underway for the non-EVO and most likely will not work for netbooks until 2018. Thank you. Crucial, here I come.

October 17, 2014 | 10:09 PM - Posted by Tiberius

So this issue only affects read speed correct? What if your write speeds are greatly reduced but your read speed is fine?

November 1, 2014 | 03:24 PM - Posted by kkurt (not verified)

Addressing the issue now, but I already knew about it.

The read speed of one of the partitions, which has been unused since creating and filling it, dropped to 2.5MB/s.
I'm searching for scans on image search, but I never saw a speed as slow as the one I got.

At this moment I'm restoring the data using the Samsung tool; it's near the end, so I think it is fixing the problem.

November 1, 2014 | 04:17 PM - Posted by kkurt (not verified)

Restored; from 2.5~4, using HDDScan (a diagnostic tool, not a speed test) I get a fixed 480~490.

November 4, 2014 | 03:17 AM - Posted by Jay Jones (not verified)

Is this an issue with Windows only and not with Linux? I am a Linux user and formatted this 840 EVO with ext4.

March 15, 2015 | 01:55 PM - Posted by FlyingPenguinOFC (not verified)

Well, as we all know, the issue persists after the "Performance Restoration Tool" fix and firmware update. Over time, stale data is losing performance, and we're supposed to see another release of a restoration tool and a new firmware sometime soon.

Meantime, I just wanted to mention that the recommendation of using MyDefrag with the 'data disk monthly' script still works like a charm. I just ran it on my 840 EVO 1TB and it restored performance - for now.
