
Western Digital Red 3.5" 4TB and 2.5" 1TB NAS HDD Full Review - WD Red Mini?

Subject: Storage
Manufacturer: Western Digital

Specs, Testing Methodology and System Setup


Here's my translation of Western Digital's marketing speak for the specs of the WD Red Series. Some of these are not specifically listed by WD, but we know they are in there, so we will keep them here for your education:  

  • NoTouch™ ramp load technology — Previously called "IntelliPark". Drive heads take an 'exit ramp' off of the platters instead of landing on them when the drive is spun down. Much like a car engine takes the most wear when started on a cold morning, drive heads take the most abuse breaking stiction at spin-up; with ramp loading, the heads leave the ramp and float onto the already-spinning disk instead.
  • Native Command Queuing (NCQ) — The drive can reorder groups of reads/writes to minimize overall head movement, and therefore reduce effective access time. Beware - this is only effective with an AHCI-enabled SATA controller.
  • Perpendicular Magnetic Recording (PMR) — Bits are aligned vertically instead of horizontally to get more packed onto each platter.  Think dominoes (the game, not the food).
  • 64MB cache — Basically standard across most current WD models, though the cache on this part is faster than on previous models. Increased cache speed helps boost random access performance.
  • Dual processors — Introduced with the RE4-GP line, the additional core helps the drive keep track of the added cache and increased throughput streaming off of the head pack.
  • Advanced Format — Introduced back in late 2009, this increases storage efficiency and robustness by having the drive handle data as 4KB internal blocks. This means error correction routines are not limited to 512B segments. ECC works better on larger chunks of data, and this gives an ~50% improvement in that area. The trade-off is random access for blocks <4KB will suffer, but this is not much of an issue as the vast majority of file access is >= 4KB.
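To illustrate what NCQ-style reordering buys you, here's a minimal sketch of a greedy nearest-first pass over a queue of LBAs. This is an illustration only: real drive firmware uses proprietary logic and also accounts for rotational position, which this toy model ignores.

```python
# Minimal sketch of NCQ-style reordering: given a queue of LBAs, service the
# nearest pending request next (greedy shortest-seek-first). Real firmware
# also weighs rotational latency; this only models head travel distance.

def total_travel(start, order):
    """Sum of head movement across LBAs serviced in the given order."""
    travel, pos = 0, start
    for lba in order:
        travel += abs(lba - pos)
        pos = lba
    return travel

def reorder_nearest_first(start, queue):
    """Greedily pick the closest pending LBA at each step."""
    pending, order, pos = list(queue), [], start
    while pending:
        nxt = min(pending, key=lambda lba: abs(lba - pos))
        pending.remove(nxt)
        order.append(nxt)
        pos = nxt
    return order

queue = [900, 100, 850, 150, 800]   # arrival (FIFO) order
start = 120
print(total_travel(start, queue))                                # FIFO travel
print(total_travel(start, reorder_nearest_first(start, queue)))  # reordered travel
```

With this queue, the reordered pass covers well under a quarter of the head travel of strict FIFO service, which is the effect NCQ is after.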

Now for the standouts:

  • RAID-specific time-limited error recovery (TLER) — The drive limits the 'hang' experienced on a read error in order to avoid a RAID controller considering the drive dead / offline.
  • 3D Active Balance Plus — WD's enhanced dual-plane balance control technology. Hard drives that are not properly balanced can cause excessive vibration and noise in a multi-drive system, shorten the drive's life span, and degrade performance over time; balancing the platter assembly in two planes counters all of that.

The WD Active Balance tech does a good job of making the drive as vibration-free, and therefore as quiet, as possible during operation. This is particularly handy when you have several of them stacked in a relatively lightweight NAS enclosure.

The load/unload spec (for the ramp load technology) remains at 600,000 cycles. 
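To see why TLER's bounded error recovery matters in an array, here's a minimal sketch. The timings are illustrative assumptions: 7 seconds is the commonly cited TLER cap, and the controller drop threshold varies by RAID card.

```python
# Sketch of why TLER matters in RAID: a consumer drive may retry a bad sector
# far longer than the controller's drop timeout, so the controller marks the
# whole drive dead instead of handling one bad sector. All timings here are
# illustrative assumptions, not measured values.

CONTROLLER_TIMEOUT_S = 8   # hypothetical drop threshold; varies by controller

def raid_read(drive_recovery_time_s):
    """Return how the RAID controller sees a read that hits a bad sector."""
    if drive_recovery_time_s > CONTROLLER_TIMEOUT_S:
        return "drive dropped from array"   # deep recovery looks like a dead drive
    return "sector error reported; rebuilt from parity"

print(raid_read(60))   # consumer drive: deep recovery can run a minute or more
print(raid_read(7))    # TLER drive: gives up quickly and reports the error
```

The point is that the array never sees the error itself, only the latency: a drive that hangs past the controller's patience is indistinguishable from a dead drive.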

Specific to this Review

Back in our Caviar Black WD1002FAEX (6Gb/sec SATA) review, we determined that the then-current Marvell 6Gb/sec controllers and drivers, while decent, fall a bit short of Intel's native controllers. Luckily these days we have native 6Gb/sec from Intel and are not stuck with those stopgap measures. My standing recommendation is to always use the native chipset when employing high speed storage. We've seen 15% drops in write speeds in some instances with the Marvell controller paired to high-end HDDs (for some benches) - and that was compared to a 3Gb/sec native controller!
Marvell controllers and drivers are not sufficiently refined for comparative HDD testing.

Test System Setup

The Red was tested on our Sandy Bridge test bed, as the Ivy Bridge bed was busy doing some dynamic SSD testing. PC Perspective would like to thank ASUS, Corsair, and Kingston for supplying some of the components of our test rig.

Hard Drive Test System Setup
CPU: Intel Core i5-2500K
Motherboard: ASUS P8Z68-V Pro
Memory: Kingston HyperX 4GB DDR3-2133 CL9
Hard Drive: G.Skill 32GB SLC SSD
Sound Card: N/A
Video Card: Intel® HD Graphics 3000
Video Drivers: Intel
Power Supply: Corsair CMPSU-650TX
DirectX Version: DX9.0c
Operating System: Windows 7 x64

Benchmarks:
  • PCMark05
  • Yapt
  • IOMeter
  • HDTach *omitted due to incompatibility with 3TB devices*
  • HDTune
  • PCPer File Copy Test


September 3, 2013 | 12:06 PM - Posted by Zicoz (not verified)

Have they removed the limitation on how many of them you can have in one server?

September 3, 2013 | 02:53 PM - Posted by Allyn Malventano

It's really more of a recommendation than a limit. There is no way an individual Red knows how many other drives are in the array. I'd say that if you are running more than 8 drives, you should be looking into the SE or RE series drives, as they have additional accelerometers that help deal with seek issues that crop up when you have the entire chassis vibrating at the harmonics of more than 8 HDDs.

September 4, 2013 | 11:16 PM - Posted by arbiter

The reason it is a recommendation - I think this video explains a bit of it.

September 5, 2013 | 11:07 AM - Posted by Zicoz (not verified)

Ok, thank you. Yeah, I've been using the 3TB RE, but they're expensive :/

September 3, 2013 | 01:27 PM - Posted by castlefox (not verified)

Thanks for doing this review. I am looking forward to finally buying that NAS i've always wanted.

September 3, 2013 | 02:22 PM - Posted by Anonymous (not verified)

Software RAID solutions such as SnapRAID, FlexRAID, unRAID, etc., will not fail in the manner you describe when a non-TLER drive encounters a bad sector. In fact, they should recover from bad sectors quite reliably.

The fact that these RAID solutions can use consumer-grade drives is only one of the advantages over hardware RAID.

September 3, 2013 | 03:01 PM - Posted by Allyn Malventano

I shy away from those solutions as there is not yet a solid way to recover data from a failed array. With conventional RAID, there exist many tools that can re-stitch a RAID back together from drive images, even with a missing drive or partially missing data. This is not so with most software implementations, and it's also the reason I tried so hard to break the DroboPro prior to blessing it off in our review. There is a new iteration of FlexRAID that's supposed to merge individual drive file systems transparently (and leave those file systems present). That's an idea that I like, but it's a bit 'pro' for people who are just looking to populate an off-the-shelf NAS. Besides, the Reds are a minimal cost premium over Greens, and I have no issue throwing in an additional $10/HDD for dynamic spindle balancing alone. 24/7 support line and TLER are added bonuses.

September 3, 2013 | 03:51 PM - Posted by Thedarklord

Good review, I've been thinking about grabbing some WD Red's for a future Server build.


You recommend not using 3rd party controllers for high speed storage, does that recommendation stay when it comes to say a good (think $350+) RAID controller for a RAID 5 setup?

And what brand or specific controllers do you recommend for a RAID 5 with 4 drives?, up to RAID 6 with 8 drives.

September 4, 2013 | 12:23 PM - Posted by Allyn Malventano

I personally use (and recommend) an Areca RAID card, driving 8x 3TB Reds in a RAID-5. Yes, I have a backup of that (to a DroboPro).

September 3, 2013 | 07:56 PM - Posted by Anonymous (not verified)

Hey Allyn, can you explain WD's drive color branding to me like I'm 5 again?

I know that black/caviar was to be their high end stuff for consumers and Green is their really slow "environmentally friendly" budget brand.

I think there is at least Blue and Red, red being for slow performance and 24/7 hardcore read/writes. Am I correct on this? Anyways what the hell are the other colors? Blue does what again?

September 4, 2013 | 12:24 PM - Posted by Allyn Malventano

Blue is more for their Caviar Blue line, which is more for the mobile market (i.e. 2.5" drives). It's sort-of like the equivalent of Green, but for laptops. There are 3.5" Blues, but they are OEMish and have reduced warranty (2-year) over Reds.

September 3, 2013 | 09:27 PM - Posted by Anonymous (not verified)

@Allyn - Your failure scenario is probably a little naive, as most hardware controllers these days do periodic scrubs and also support SMART data.

Moreover, you can mark drives as online even after they have displayed a URE, and then scrub the disks, which will fix the problem for you.

Not trying to be an arse, and it does make sense to get reds or REs but your scenario does seem to have some holes in it.

September 4, 2013 | 12:40 PM - Posted by Allyn Malventano

Forcing drives back online does no good if the read error remains in place and is not mapped out and/or ignored by the controller. This means that when you attempt a rebuild after forcing that drive back online, it will once again fail. There are *some* controllers with enough flexibility to get around this, but it's a crapshoot at best. In most scenarios, one drive failing hard and a second with a permanent read error will time out the entire array each and every time access is attempted, be it during the attempted rebuild or when the related array LBA is accessed. Most people are not experienced enough with RAID configs and are more likely to break the array for good as opposed to resurrecting it. In addition, it doesn't matter how 'pro' you are if that bad sector on the second failed drive happens to be in the MFT. It's going to time out and die over and over, regardless of how many times you go back and force it online.

I speak from personal experience on the above. The only way to recover was to reinitialize the array using an image in place of the second failure, removing the problem HDD (and timeout) from the equation entirely. As an added data point, another staff member here had a similar array failure which required an even more complicated recovery (imaging 2 of 3 drives of the RAID, as both had a handful of read errors that were causing timeouts). In both cases, hours of work could have been very easily avoided if TLER was present. The drives would have reported their read errors as opposed to timing out, and everything would have functioned properly. It really boils down to the HDD calling a sector unreadable in a reasonable amount of time.

Further, in both above scenarios, only 1 of the 4 failed drives reported SMART errors. This is not unheard of, as similar findings were reported in the Google HDD analysis paper. In my scenario, the array was scrubbed weekly (scheduled). The first drive failed hard *during* a scrub, and the second read error occurred during the rebuild after the first failed drive was replaced. So in that fairly standard scenario of discovering and replacing a failed drive, a subsequent error caused the need for pro-level steps to restore the array. The fictional scenario in my article assumes no scrubs mainly because you'd be surprised just how many consumer / home office arrays are not performing any scheduled scrubs at all.
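That failure sequence can be sketched as a toy RAID-5 rebuild. All drive numbers, LBAs, and the stripe layout below are hypothetical, and real controllers differ in how (or whether) they flag a bad stripe:

```python
# Toy model of the rebuild scenario above: RAID-5 loses one drive, and during
# the rebuild a second drive hits an unreadable sector. Without TLER the read
# hangs past the controller timeout and the rebuild dies; with TLER the drive
# reports the error quickly and the rebuild can flag that stripe and continue.
# Layout and values are hypothetical illustrations only.

def rebuild(surviving_drives, tler_enabled, bad=(1, 500)):
    """Walk every stripe of the surviving drives; return the rebuild outcome."""
    bad_drive, bad_lba = bad
    for lba in range(0, 1000, 100):          # toy stripe granularity
        for drive in surviving_drives:
            if drive == bad_drive and lba == bad_lba:
                if not tler_enabled:
                    # deep recovery exceeds controller timeout -> array offline
                    return f"rebuild aborted: drive {drive} timed out at LBA {lba}"
                # TLER: error reported promptly; stripe flagged, rebuild continues
    return "rebuild completed (bad stripe flagged for repair)"

print(rebuild([1, 2, 3], tler_enabled=False))
print(rebuild([1, 2, 3], tler_enabled=True))
```

The key asymmetry: without TLER the rebuild hits the same wall on every attempt, which is exactly the "times out over and over" behavior described above.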

September 5, 2013 | 06:59 PM - Posted by Anonymous (not verified)

Holy sh!t, that Allyn is an unstoppable beast when it comes to well thought out and informative replies.

/bows...I'M NOT WORTHY!!!

September 5, 2013 | 11:59 AM - Posted by Anonymous (not verified)

I have a 3TB version of the WD NAS drive and an empty slot. What do you recommend for my NAS? I'm a storage addict and every TB counts for me, despite the RAID case.

September 5, 2013 | 06:14 PM - Posted by Moochacho (not verified)

I recently purchased a 2TB model for use as ReadyNAS attached to router. Even though my setup is a single drive config I have to say it is awesome. Compared to a WD Black 1TB I was using before it is just about silent and vibrationless. I could hear the old hdd thru the walls at night, but not this one.
The biggest difference by far is the amount of time it takes to "re-read" drive content. For those annoying times when either the router or smart tv gets confused when files get renamed/moved and a restart is required, it takes half the time (or better) than the original WD Black to re-scan content.... even though they both have 64MB cache.

September 8, 2013 | 04:38 AM - Posted by MarkF (not verified)

I assume that the scenario described by Allyn is only relevant for RAID 5 or RAID 0?

I have a RAID 1 mirror setup in my Synology 2-Bay with 2 x Hitachi 4TB, and also a Zyxel NSA325 with 2 x WD Green 2TB drives. Both NAS are running RAID 1. I have had no issues with either so far.
I used to have 2 x WD Black 750GB drives running RAID 1 in my PC, and at some point the RAID failed due to a fault on the drive. I subsequently found that the Blacks don't like RAID setups due to the firmware involved (I assume the bug described in Allyn's write-up is the same).

September 9, 2013 | 07:17 AM - Posted by Bob (not verified)

According to Synology's website, the WD Black class is compatible with their systems. A coworker of mine has a DS411 totally populated with WD Blacks, and has had zero problems. Other than luck on his part, is it possible that Synology disables TLER through a software command?

I'll be buying a DS412+ within the next few weeks, and I'm scratching my head over what HDDs to use as well. I'd like the black series for performance, but based on what I've read (here and elsewhere), red seems to be the better choice.

September 9, 2013 | 01:27 PM - Posted by Allyn Malventano

Aside from a minimal boost to seeks, I'd favor Reds over Blacks for a home NAS. You'd only see a measurable benefit if the NAS was seeing heavy multi-stream multi-user random access usage. Reds can handle a few sequential streams (i.e. data copies mixed with video streaming) just fine.

September 9, 2013 | 01:24 PM - Posted by Allyn Malventano

Yup, same for RAID-1 as RAID-5. My scenario doesn't really apply for RAID-0, as once a drive times out the array goes away until you force it back online.

Synology boxes will work with Blacks, but if they are not TLER enabled, you still risk the timeout scenario with a RAID-1 mirror.

September 8, 2013 | 09:48 PM - Posted by Tim (not verified)

Does "better dead than RED" still hold ?
*Cold-war humor*

September 9, 2013 | 07:20 AM - Posted by Bob (not verified)

Нет мой друг! "Красно всегда лучше!"

(No my friend! Red is always better)

:-) :-) :-)

September 10, 2013 | 06:28 PM - Posted by Anonymous (not verified)

Ruskis gave in to Capitalism...we won !

September 20, 2013 | 07:51 PM - Posted by Synology1813+ (not verified)

Looking at populating this 8-slot NAS box with some WD drives. Can I get away with the 4TB Reds or do I really need to invest in RE drives? Mild usage...

November 1, 2013 | 08:35 PM - Posted by Alg (not verified)

The most wanted Reds are still missing - 2.5" drives 2TB in size (they would be 15mm in height). Low power usage, twice the capacity.

January 8, 2014 | 09:25 AM - Posted by no_connection (not verified)

Does the default head parking value of 8 sec on the 4TB Red cause additional wear?
I have seen a lot of complaints about WD Red no longer disabling IntelliPark for 4TB drives, but no actual data on whether it's bad.
I'm surprised, though, that your review does not mention this.

September 23, 2017 | 11:49 AM - Posted by (not verified)

The WD 1TB 2.5" Red (Model WD10JFCX) was the first hard drive to fail on me since 2005. It did so without any warning - it just did not start one day. This was barely two years after I purchased it. Not impressed at all.
I thought the lousy performance would give some certainty that the drive would last longer; apparently not.
