
Western Digital Red 3.5" 4TB and 2.5" 1TB NAS HDD Full Review - WD Red Mini?

Subject: Storage
Manufacturer: Western Digital

Introduction

Last July, I went on a bit of a mini-rant about how using a bunch of drives not meant for RAID could potentially lead to loss of an entire array from only a few bad sectors spread across several disks. Western Digital addressed this problem with the introduction of the WD Red series. That series capped out at 3TB, and users were pushing for larger storage capacities for their NAS devices. Alongside the need for larger disks came a need for *smaller* disks as well, as some manufacturers wish to create NAS / HTPC type devices that house multiple 2.5" HDDs. One such device is the Drobo Mini - a 4x2.5" device which has not really had a 'proper' NAS storage option available - until now.


Today Western Digital has announced a twofold expansion to their Red Series. First is a 4TB capacity in their 3.5" series, and second is a 2.5" iteration of the Red, available in both 750GB and 1TB capacities.

As a recap, here is what can potentially happen if you have a large RAID built from 'normal' consumer-grade HDDs (by consumer-grade I mean those without any form of Time Limited Error Recovery, or TLER for short):

  • Array starts off operating as normal, but drive 3 has a bad sector that cropped up a few months back. This has gone unnoticed because the bad sector was part of a rarely accessed file.
  • During operation, drive 1 encounters a new bad sector.
  • Since drive 1 is a consumer drive it goes into a retry loop, repeatedly attempting to read and correct the bad sector.
  • The RAID controller exceeds its timeout threshold waiting on drive 1 and marks it offline.
  • Array is now in degraded status with drive 1 marked as failed.
  • User replaces drive 1. RAID controller initiates rebuild using parity data from the other drives.
  • During rebuild, RAID controller encounters the bad sector on drive 3.
  • Since drive 3 is a consumer drive it goes into a retry loop, repeatedly attempting to read and correct the bad sector.
  • The RAID controller exceeds its timeout threshold waiting on drive 3 and marks it offline.
  • Rebuild fails.
  • Blamo, your data is now (mostly) inaccessible.
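The cascade above boils down to a race between the drive's error recovery and the controller's patience. Here is a minimal toy model of that race in Python. All of the timing values are illustrative assumptions (desktop drives can retry for minutes; controllers typically give up after several seconds; TLER caps recovery at roughly 7 seconds), not measured figures:

```python
# Toy model of the timeout cascade described above.
# ASSUMED values for illustration only -- not measurements:
CONTROLLER_TIMEOUT_S = 8    # how long the controller waits on a request
DESKTOP_RETRY_S = 120       # deep-recovery retry loop on a non-TLER drive
TLER_LIMIT_S = 7            # TLER cap on in-drive error recovery

def read_sector(has_bad_sector: bool, tler: bool) -> str:
    """Return what the RAID controller sees when reading one sector."""
    if not has_bad_sector:
        return "ok"
    # How long the drive spends before responding at all:
    recovery_time = TLER_LIMIT_S if tler else DESKTOP_RETRY_S
    if recovery_time > CONTROLLER_TIMEOUT_S:
        return "timeout"      # controller gives up, marks drive offline
    return "read-error"       # drive reports the bad sector; array survives

# Desktop drive: the controller never hears back in time.
print(read_sector(has_bad_sector=True, tler=False))  # timeout
# TLER drive: same bad sector, but the error is *reported*, so the
# controller can map it out or repair just that stripe.
print(read_sector(has_bad_sector=True, tler=True))   # read-error
```

The point is that both drives have the same physical defect; the only difference is whether the drive admits failure before the controller's deadline.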

I went into much further detail on this back in the intro to the WD 3TB Red piece, but the short of it is that you absolutely should use a HDD intended for RAID when building one, and Western Digital is removing that last excuse for not doing so by introducing a flagship 4TB capacity to the Red Series.
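The rebuild step in the scenario above relies on simple parity math, which also explains why one unreadable sector on a *surviving* drive is fatal. A minimal sketch (illustrative only, using XOR parity as in RAID-5):

```python
# RAID-5 parity in miniature: the parity block is the XOR of the data
# blocks, so any ONE missing block can be recomputed from the rest.
# This is why a rebuild must successfully read *every* other drive:
# one unreadable sector elsewhere and the math is missing an input.

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

d1, d2, d3 = b"AAAA", b"BBBB", b"CCCC"   # data blocks on three drives
parity = xor_blocks([d1, d2, d3])        # stored on the parity drive

# Drive 2 dies; its block is rebuilt from parity plus the survivors.
rebuilt = xor_blocks([d1, d3, parity])
print(rebuilt == d2)  # True
```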


September 3, 2013 | 09:06 AM - Posted by Zicoz (not verified)

Have they removed the limitation on how many of them you can have in one server?

September 3, 2013 | 11:53 AM - Posted by Allyn Malventano

It's really more of a recommendation than a limit. There is no way an individual Red knows how many other drives are in the array. I'd say that if you are running more than 8 drives, you should be looking into the SE or RE series drives, as they have additional accelerometers that help deal with seek issues that crop up when you have the entire chassis vibrating at the harmonics of more than 8 HDDs.

September 4, 2013 | 08:16 PM - Posted by arbiter

The reason it is a recommendation - I think this video explains a bit of it:

http://www.youtube.com/watch?v=tDacjrSCeq4

September 5, 2013 | 08:07 AM - Posted by Zicoz (not verified)

Ok, thank you. Yeah, I've been using the 3TB RE, but they're expensive :/

September 3, 2013 | 10:27 AM - Posted by castlefox (not verified)

Thanks for doing this review. I am looking forward to finally buying that NAS I've always wanted.

September 3, 2013 | 11:22 AM - Posted by Anonymous (not verified)

Software RAID solutions such as SnapRAID, FlexRAID, unRAID, etc., will not fail in the manner you describe when a non-TLER drive encounters a bad sector. In fact, they should recover from bad sectors quite reliably.

The fact that these RAID solutions can use consumer-grade drives is only one of the advantages over hardware RAID.

September 3, 2013 | 12:01 PM - Posted by Allyn Malventano

I shy away from those solutions as there is not yet a solid way to recover data from a failed array. With conventional RAID, there exist many tools that can re-stitch a RAID back together from drive images, even with a missing drive or partially missing data. This is not so with most software implementations, and it's also the reason I tried so hard to break the DroboPro prior to blessing it off in our review. There is a new iteration of FlexRAID that's supposed to merge individual drive file systems transparently (and leave those file systems present). That's an idea that I like, but it's a bit 'pro' for people who are just looking to populate an off-the-shelf NAS. Besides, the Reds are a minimal cost premium over Greens, and I have no issue throwing in an additional $10/HDD for dynamic spindle balancing alone. 24/7 support line and TLER are added bonuses.

September 3, 2013 | 12:51 PM - Posted by Thedarklord

Good review, I've been thinking about grabbing some WD Red's for a future Server build.

Question:

You recommend not using 3rd party controllers for high speed storage; does that recommendation stand when it comes to, say, a good (think $350+) RAID controller for a RAID 5 setup?

And what brand or specific controllers do you recommend for a RAID 5 with 4 drives, up to a RAID 6 with 8 drives?

September 4, 2013 | 09:23 AM - Posted by Allyn Malventano

I personally use (and recommend) an Areca RAID card, driving 8x 3TB Reds in a RAID-5. Yes, I have a backup of that (to a DroboPro).

September 3, 2013 | 04:56 PM - Posted by Anonymous (not verified)

Hey Allyn, can you explain the color branding of WD drives to me like I'm 5 again?

I know that Black/Caviar is meant to be their high-end stuff for consumers, and Green is their really slow "environmentally friendly" budget brand.

I think there is at least Blue and Red - Red being for slower performance and 24/7 hardcore reads/writes. Am I correct on this? Anyway, what the hell are the other colors? Blue does what again?

September 4, 2013 | 09:24 AM - Posted by Allyn Malventano

Blue mostly refers to their Caviar Blue line, which is aimed more at the mobile market (i.e. 2.5" drives). It's sort of like the equivalent of Green, but for laptops. There are 3.5" Blues, but they are OEM-ish and carry a reduced (2-year) warranty compared to Reds.

September 3, 2013 | 06:27 PM - Posted by Anonymous (not verified)

@Allyn - Your failure scenario is probably a little naive, as most hardware controllers these days do periodic scrubs and also support SMART data.

Moreover, you can mark drives as online even after they have displayed a URE, and then scrub the disks, which will fix the problem for you.

Not trying to be an arse, and it does make sense to get reds or REs but your scenario does seem to have some holes in it.

September 4, 2013 | 09:40 AM - Posted by Allyn Malventano

Forcing drives back online does no good if the read error remains in place and is not mapped out and/or ignored by the controller. This means that when you attempt a rebuild after forcing that drive back online, it will once again fail. There are *some* controllers with enough flexibility to get around this, but it's a crapshoot at best. In most scenarios, one drive failing hard and a second with a permanent read error will time out the entire array each and every time access is attempted, be it during the attempted rebuild or when the related array LBA is accessed. Most people are not experienced enough with RAID configs and are more likely to break the array for good as opposed to resurrecting it. In addition, it doesn't matter how 'pro' you are if that bad sector on the second failed drive happens to be in the MFT. It's going to time out and die over and over, regardless of how many times you go back and force it online.

I speak from personal experience on the above. The only way to recover was to reinitialize the array using an image in place of the second failure, removing the problem HDD (and its timeouts) from the equation entirely. As an added data point, another staff member here had a similar array failure which required an even more complicated recovery (imaging 2 of the 3 drives of the RAID, as both had a handful of read errors that were causing timeouts). In both cases, hours of work could have been very easily avoided if TLER was present. The drives would have reported their read errors as opposed to timing out, and everything would have functioned properly. It really boils down to the HDD calling a sector unreadable in a reasonable amount of time.
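The imaging approach described above (in the spirit of tools like ddrescue) works because the copy gives up on an unreadable sector quickly, substitutes filler, and moves on, so the timeout never reaches the controller. A minimal sketch of the idea in Python; the sector list, filler value, and `BAD` marker are all hypothetical stand-ins for illustration:

```python
# Sketch of recovery-by-imaging: rather than letting the controller
# hammer a drive that times out on one sector, copy the drive sector by
# sector, filling in unreadable spots, then rebuild from the image.
# Only that one sector's data is sacrificed, not the whole array.

BAD = None  # hypothetical marker for an unreadable sector on the source

def image_drive(sectors, filler=b"\x00" * 4):
    """Copy a drive sector-by-sector, skipping (not retrying) bad ones."""
    image, lost = [], []
    for lba, sector in enumerate(sectors):
        if sector is BAD:
            image.append(filler)   # give up quickly, fill, move on
            lost.append(lba)
        else:
            image.append(sector)
    return image, lost

drive3 = [b"aaaa", BAD, b"cccc"]   # one permanent read error at LBA 1
image, lost = image_drive(drive3)
print(lost)   # [1] -- a single sector lost, the array becomes rebuildable
```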

Further, in both of the above scenarios, only 1 of the 4 failed drives reported SMART errors. This is not unheard of, as similar findings were reported in the Google HDD analysis paper. In my scenario, the array was scrubbed weekly (scheduled). The first drive failed hard *during* a scrub, and the second read error occurred during the rebuild after the first failed drive was replaced. So in that fairly standard scenario of discovering and replacing a failed drive, a subsequent error created the need for pro-level steps to restore the array. The fictional scenario in my article assumes no scrubs, mainly because you'd be surprised just how many consumer / home office arrays are not performing any scheduled scrubs.
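For readers unfamiliar with the term: a scrub simply reads every sector of every member drive, so latent bad sectors surface while redundancy still exists to repair them, rather than mid-rebuild when it is too late. A minimal sketch of the idea (the array layout and `None` bad-sector marker are illustrative assumptions):

```python
# Why scheduled scrubs matter: a scrub reads every sector of every
# drive, surfacing latent bad sectors while parity can still repair
# them. Purely illustrative model of an array.

def scrub(drives):
    """Read every sector of every drive; report bad (drive, lba) pairs."""
    found = []
    for d, sectors in enumerate(drives):
        for lba, sector in enumerate(sectors):
            if sector is None:        # None marks an unreadable sector
                found.append((d, lba))
    return found

array = [
    [b"aa", b"bb", b"cc"],   # drive 0: healthy
    [b"aa", b"bb", b"cc"],   # drive 1: healthy
    [b"aa", None, b"cc"],    # drive 2: latent bad sector at LBA 1
]
print(scrub(array))  # [(2, 1)] -- caught while the array is still healthy
```

As the scenarios above show, scrubs reduce but do not eliminate the risk; a drive can still fail during the scrub itself or during the subsequent rebuild.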

September 5, 2013 | 03:59 PM - Posted by Anonymous (not verified)

Holy sh!t, that Allyn is an unstoppable beast when it comes to well thought out and informative replies.

/bows...I'M NOT WORTHY!!!

September 5, 2013 | 08:59 AM - Posted by Anonymous (not verified)

I have the 3TB version of the WD NAS drive and an empty slot. What do you recommend for my NAS drive? I'm a storage addict and every TB counts for me, RAID considerations aside.

September 5, 2013 | 03:14 PM - Posted by Moochacho (not verified)

I recently purchased a 2TB model for use as a ReadyNAS attached to my router. Even though my setup is a single-drive config, I have to say it is awesome. Compared to the WD Black 1TB I was using before, it is just about silent and vibration-free. I could hear the old HDD through the walls at night, but not this one.
The biggest difference by far is the amount of time it takes to "re-read" drive content. For those annoying times when either the router or smart TV gets confused after files are renamed/moved and a restart is required, it takes half the time (or less) of the original WD Black to re-scan content, even though they both have 64MB of cache.

September 8, 2013 | 01:38 AM - Posted by MarkF (not verified)

I assume that the scenario described by Allyn is only relevant for RAID 5 or RAID 0?

I have a RAID 1 mirror set up in my Synology 2-bay with 2 x Hitachi 4TB drives, and also a Zyxel NSA325 with 2 x WD Green 2TB drives. Both NAS are running RAID 1, and I have had no issues with either so far.
I used to have 2 x WD Black 750GB drives running RAID 1 in my PC, and at some point the RAID failed due to a fault on one drive. I subsequently found that the Blacks don't like RAID setups due to the firmware involved (I assume the issue described in Allyn's write-up is the same).

September 9, 2013 | 04:17 AM - Posted by Bob (not verified)

According to Synology's website, the WD Black class is compatible with their systems. A coworker of mine has a DS411 totally populated with WD Blacks, and has had zero problems. Other than luck on his part, is it possible that Synology disables TLER through a software command?

I'll be buying a DS412+ within the next few weeks, and I'm scratching my head over what HDDs to use as well. I'd like the black series for performance, but based on what I've read (here and elsewhere), red seems to be the better choice.

September 9, 2013 | 10:27 AM - Posted by Allyn Malventano

Aside from a minimal boost to seeks, I'd favor Reds over Blacks for a home NAS. You'd only see a measurable benefit if the NAS was seeing heavy multi-stream multi-user random access usage. Reds can handle a few sequential streams (i.e. data copies mixed with video streaming) just fine.

September 9, 2013 | 10:24 AM - Posted by Allyn Malventano

Yup, same for RAID-1 as RAID-5. My scenario doesn't really apply for RAID-0, as once a drive times out the array goes away until you force it back online.

Synology boxes will work with Blacks, but if they are not TLER enabled, you still risk the timeout scenario with a RAID-1 mirror.

September 8, 2013 | 06:48 PM - Posted by Tim (not verified)

Does "better dead than RED" still hold?
*Cold-war humor*

September 9, 2013 | 04:20 AM - Posted by Bob (not verified)

Нет мой друг! "Красно всегда лучше!"

(No my friend! Red is always better)

:-) :-) :-)

September 10, 2013 | 03:28 PM - Posted by Anonymous (not verified)

Ruskis gave in to Capitalism...we won !

September 20, 2013 | 04:51 PM - Posted by Synology1813+ (not verified)

Looking at populating this 8-slot NAS box with some WD drives. Can I get away with the 4TB Reds, or do I really need to invest in RE drives? Mild usage...

November 1, 2013 | 05:35 PM - Posted by Alg (not verified)

The most wanted Reds are still missing - 2.5" drives at 2TB (they would be 15mm in height). Low power usage, twice the capacity.

January 8, 2014 | 06:25 AM - Posted by no_connection (not verified)

Does the default head-parking timer of 8 seconds on the 4TB Red cause additional wear?
I have seen a lot of complaints about the WD Red no longer disabling IntelliPark on the 4TB drives, but no actual data on whether it's harmful.
I'm surprised, though, that your review does not mention this.
