Long-term performance analysis of Intel Mainstream SSDs

Subject: Storage
Manufacturer: Intel

Intel Option #1: Write Big

Quoteth Intel: “One method is to use IOMeter to sequentially write content to the entire drive. This can be done by configuring IOMeter to perform a 1 second long sequential read test on the SSD drive with a blank NTFS partition installed on it. In this case, IOMeter will “Prepare” the drive for the read test by first filling all of the available space sequentially with an IOBW.tst file, before running the 1 second long read test. This is the most “user-like” method to accomplish the defragmentation process, as it fills all SSD LBAs with “valid user data” and causes the drive to quickly adapt for a typical client user workload.”

So we gave it a shot.  First we tried repeatedly writing large files to a ‘live’ drive, filling all available space in an effort to defragment it without resorting to drastic measures like imaging off the OS and wiping the drive completely.  Despite our efforts, the gains were negligible.  We had no choice but to try defragmenting our drive with the OS partition moved elsewhere.  Taking X25-Ms at various stages of usage, we ran HDTach RW in ‘Full’ bench mode, which reads and writes across the entire drive space.  In every case the first run reflected the drive in its current state, yet subsequent runs came out looking like a new drive.  Just as Intel suggested, our X25-M was able to return to full speed with this method.  Remember, the process needs access to the entire drive space, so you cannot accomplish this feat with your OS installed on the X25-M.
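As a concrete illustration, here is a minimal Python sketch of the “write big” approach: sequentially write one huge file until the volume is full, sync each chunk to the drive, then delete the file.  The path and chunk size are assumptions, and the `limit` parameter exists only so the routine can be exercised without actually filling a disk — this is not Intel's tool, just the same idea in script form.

```python
import os

CHUNK = 4 * 1024 * 1024  # 4 MiB sequential writes (assumed size)

def fill_drive(path, chunk_size=CHUNK, limit=None):
    """Sequentially write one big file until the disk is full, then
    delete it.  `limit` caps the bytes written (useful for testing)."""
    written = 0
    buf = b"\xff" * chunk_size
    try:
        with open(path, "wb") as f:
            while limit is None or written < limit:
                f.write(buf)
                os.fsync(f.fileno())  # push each chunk out to the SSD
                written += chunk_size
    except OSError:
        pass  # disk full: every free LBA has now been written once
    finally:
        if os.path.exists(path):
            os.remove(path)  # hand the space back to the filesystem
    return written
```

Deleting the file afterwards returns the space to the filesystem, but a drive of this era (no TRIM) still regards those LBAs as holding valid user data — which, per Intel's description, is exactly what nudges the controller to reorganize itself.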

HDTach runs performed at various stages of fragmentation.

Second HDTach pass showing a defragmented drive after each of the above runs.

The Tipping Point

While it might have been annoying to image off the OS just to get our X25-M back to its prior glory, a single large write across the entire drive did a good job of coaxing the drive into defragmenting itself.  Unfortunately, this was not always the case: we found that some usage patterns would throw the drive into a seemingly irreversible downward spiral of craptastic performance.

A very unhappy SSD.

Once internal fragmentation crossed a threshold (somewhere around a 40 MB/s average write speed), the drive seemed to simply give up on ‘adapting’ its way back to solid performance.  Without the mechanism that normally works the drive back to 100%, large writes do little to help, and small writes only compound the issue by causing further fragmentation.  In several tests our write speeds dropped to 25-30 MB/s and simply refused to recover on their own, even after several successive passes of HDTach and every other application we could find that would write a single solid file across the entire drive.

6 successive HDTach passes run on an X25-M that was too far gone to recover.

Average write speeds were (top to bottom): 25.3 | 17.2 | 19.1 | 21.2 | 22.1 | 23.2 MB/s.

While the testing above showed the X25-M making some small headway, we were only performing a single large write over and over again.  An in-use drive would also see small writes, which would counteract any potential gains.  Even if several HDTach passes could eventually bring the drive back to normal, the 6 passes above took over 6 hours in total.  It is unreasonable to expect anyone to put their drive (and themselves) through such an ordeal just to get the drive back to spec.  Also noteworthy: since flash supports only a finite number of erase cycles, batch-writing hundreds of GB to any SSD takes a chunk out of its usable life.
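A quick back-of-envelope check on that time and wear cost, using the 80 GB capacity of our test drive and the per-pass write averages listed above (the capacity is an assumption, and HDTach's read phases are ignored, so the real sessions ran even longer):

```python
CAPACITY_GB = 80  # assumed: the 80 GB X25-M under test
speeds_mb_s = [25.3, 17.2, 19.1, 21.2, 22.1, 23.2]  # per-pass averages

# Hours to write the full drive once at each pass's average speed
hours = [CAPACITY_GB * 1024 / s / 3600 for s in speeds_mb_s]
total_hours = sum(hours)
total_written_gb = CAPACITY_GB * len(speeds_mb_s)
print(f"{total_hours:.1f} h of writing, {total_written_gb} GB total")
# -> 6.5 h of writing, 480 GB total
```

The write time alone lands right around the “over 6 hours” we observed, and nearly half a terabyte of flash wear buys you nothing but a partially recovered drive.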

A secure disk wipe pass with Paragon Hard Disk Manager confirmed our findings (~21 MB/sec).

We were not the only ones to see this happen: other X25-M reviews around the internet reported similar findings in their own tests.  While normal use takes weeks to months to bring the drive to this point of no return, the same result can be achieved in under an hour with IOMeter configured to mimic a typical file server access pattern.
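For a feel of what that kind of traffic looks like, here is a loose, hypothetical imitation of the small random writes a file-server pattern generates — not IOMeter's actual File Server access specification (which also mixes in plenty of reads), just the write side of the idea, with the block sizes and offsets as assumptions:

```python
import os
import random

def random_small_writes(path, file_size, n_ops, seed=0):
    """Issue small writes at random offsets within an existing file --
    a rough, hypothetical stand-in for file-server-style traffic."""
    rng = random.Random(seed)
    sizes = [4096, 8192, 16384, 65536]  # 4-64 KiB small-IO sizes (assumed)
    with open(path, "r+b") as f:
        for _ in range(n_ops):
            size = rng.choice(sizes)
            offset = rng.randrange(0, file_size - size)
            f.seek(offset)
            f.write(os.urandom(size))  # random payload at a random offset
```

Every operation lands on a different LBA range, so the drive's internal mapping fragments far faster than under the big sequential writes above — which is why a workload like this can drag the drive down in under an hour instead of months.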