
Western Digital Red 3TB SATA SOHO NAS Drive - Full Review

Subject: Storage

PCPer File Copy Test

Our custom PCPer-FC test runs some fairly simple file creation and copy routines to test the storage system for speed. The script creates a set of files of varying sizes and times the creation process, then copies the same files to another partition on the same hard drive and times the copy as well. We used four file patterns to try to find any strong or weak points in the hardware: 10 files @ 1000 MB each, 100 files @ 100 MB each, 500 files @ 10 MB each, and 1000 files @ 1 MB each.
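For readers who want to approximate the test at home, here is a rough Python sketch of the same idea. The paths are placeholders and this is not the actual PCPer script, just an illustration of the create-then-copy timing loop:

```python
import os
import shutil
import time

# Illustrative approximation of the four PCPer-FC patterns: (file count, size in MB)
PATTERNS = [(10, 1000), (100, 100), (500, 10), (1000, 1)]
SRC, DST = "/mnt/test_src", "/mnt/test_dst"  # two partitions on the drive under test
CHUNK = os.urandom(1 << 20)                  # 1 MiB of random data, reused per write

def run_pattern(count, size_mb):
    # Time file creation on the source partition
    t0 = time.time()
    for i in range(count):
        with open(f"{SRC}/f{i:04d}.bin", "wb") as f:
            for _ in range(size_mb):
                f.write(CHUNK)
            f.flush()
            os.fsync(f.fileno())             # make sure the data actually hits the disk
    create_s = time.time() - t0

    # Time copying the same files to the second partition
    t0 = time.time()
    for i in range(count):
        shutil.copy(f"{SRC}/f{i:04d}.bin", f"{DST}/f{i:04d}.bin")
    copy_s = time.time() - t0

    total_mb = count * size_mb
    print(f"{count} x {size_mb} MB: create {total_mb / create_s:.1f} MB/s, "
          f"copy {total_mb / copy_s:.1f} MB/s")

for count, size_mb in PATTERNS:
    run_pattern(count, size_mb)
```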

[Chart: PCPer-FC file creation results]

Wow! The Red beat out all but the VelociRaptor in file creation. This is surely due to increased platter density (all Red models are 1TB / platter). That higher density enables it to fly past even the older 7200RPM RE4 and Black drives.

[Chart: PCPer-FC file copy results]

During this test I immediately noticed something - no seek noise. None. I had to nearly lay my head on the drive to hear even the slightest scratchiness of the head pack actuator. No rattles, ticks, or other sounds you'd hear from the other drives. The Red is clearly tuned for dead-silent NAS operation.

Those slower seeks likely added up a bit in file copies, letting the Black get a slight leg up along with the VR. The Red still beat the rest of the pack though.


July 11, 2012 | 04:06 PM - Posted by dreamer77dd

Put that in a Drobo and I think you'd be good to go.

July 12, 2012 | 12:47 AM - Posted by biblicabeebli

Actually, do we know whether Drobos are or are not affected by a drive dropping out for "too long"?

I ask because the blurb on their website for their "BeyondRAID" technology says: "Built on an advanced virtualization platform..." [<-- this tells me nothing]
and also
"Since the technology works at the block level, it can write blocks of data that alternate between data protection approaches."

My thinking here is that Drobos seem to be really smart about not exploding their own data, even on cheap and buggy drives. (Also, it would be nice if someone told me I don't have to suddenly start worrying about this exact problem on the several Drobos I am acquainted with...)

[edited for poor grammar]

July 12, 2012 | 11:38 AM - Posted by JSL

You're talking about the TLER issue non-enterprise WD drives had in RAID setups... I don't believe it'll be an issue here.

July 13, 2012 | 10:12 PM - Posted by Drobo User (not verified)

DROBO is NOT MAGIC. The quality of your drives CLEARLY counts.

I have used WD 1.5TB/2.0TB Greens in my 5-bay Drobo, and twice now in two years one of the drives has failed, corrupting my data. Almost identical failures: the drive has bad sectors, but its SMART reporting says it's perfectly healthy. Drobo still thinks the drives are healthy; the only thing that alerts you is that on a reboot, Win7 says your partition has a few unrecoverable files and it cannot fix them when it tries. The only way to find out ~which~ is the bad drive is to run WD's diagnostic tools for many hours per drive, and sure enough you'll find the bad one in the stack. Replace it and then restore from backup - I'm getting tired of this, and it's what I spent big bucks (over an internal RAID) to try to avoid.

Backup, backup, backup.

Now that I know this stuff about TLER etc., I just ordered 5 WD Reds as a complete replacement. I'll slide 'em in and we'll see how they do. Thanks for the great article Allyn.

July 16, 2012 | 12:54 AM - Posted by Allyn Malventano

Totally agree here. While the Drobo is very flexible with migrations, rebuilds, etc., it treats drive timeouts just like any other high-end RAID system, and will benefit just as greatly from TLER-enabled drives.

July 25, 2012 | 10:43 AM - Posted by Anonymous (not verified)

It depends on how BeyondRAID works, and that I am not sure of.

It is worth mentioning that software RAID solutions (mdadm and ZFS primarily) aren't subject to this issue. In the case of a drive taking too long to recover, they will just cancel the operation and resubmit it, and should that fail again, they will rebuild the bad stripe.
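As an aside, one way to trigger that kind of stripe repair by hand on mdadm arrays is the kernel's md sync_action interface; a minimal Python sketch, assuming an array at /dev/md0 and root privileges:

```python
import pathlib
import time

MD = pathlib.Path("/sys/block/md0/md")  # assumes an md array at /dev/md0

# "check" reads every stripe and counts mismatches; "repair" also rewrites
# bad stripes from parity - this is the periodic scrub many distros run from cron.
(MD / "sync_action").write_text("repair\n")

# Poll until the pass finishes, then report how many mismatches were found.
while (MD / "sync_action").read_text().strip() != "idle":
    time.sleep(60)
print("mismatch_cnt:", (MD / "mismatch_cnt").read_text().strip())
```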

Also, TLER is in the firmware of every drive WD makes, but ever since they locked the firmware you cannot enable it on the Green drives. Dirty money-grubbing bastards. That said, the RAID drives do have other features (rotational vibration compensation, floating head, feed-forward reading, and all of that). Because of this, though, I will probably never buy a WD drive again. Seagate wins the price vs. performance debate hands down.

March 18, 2014 | 02:52 PM - Posted by Benjamin (not verified)

But ZFS takes integrity further with background checking of blocks for corruption, and repairing / remapping PROACTIVELY.

February 24, 2013 | 11:29 PM - Posted by Drobo drive-by (not verified)

I have owned various odd-sized drives (mostly WD consumer line) that eventually exceeded their SMART thresholds & would instantly draw a flurry of errors when directly attached in FreeBSD or even Windows 7--yet Drobo Dashboard would give them a clean bill upon initialization in my 2nd gen "appliance." Since discovering this lovely quirk of the good old storage robot's mysterious workings I've been wary of keeping anything on it too long without a fresh offline backup.

July 11, 2012 | 05:55 PM - Posted by IronMikeAce

I wish you guys had put in a comparison with the Blue drive as well. I think the Blue version is very popular given its great prices and decent performance. I really would have liked to see this Red drive compared to the Blue.

July 11, 2012 | 06:17 PM - Posted by jtiger102

Are these drives using the 1TB platters? (i.e. Do you think there would be any big performance drops between the 2TB and 3TB models?)

July 11, 2012 | 07:49 PM - Posted by Tim Verry

Yes, these are 1TB/platter drives afaik. I'll ask Allyn about that second question. I'm guessing the 2TB would be a bit slower than the 3TB, but whether it's enough to really be noticeable, I dunno.

July 11, 2012 | 09:34 PM - Posted by Allyn Malventano

The gouge I got from WD suggests all three capacities should be nearly identical in performance. All three use 1TB per platter - lower capacities just have fewer platters at the same density - so yeah, performance should not vary wildly among them.

July 11, 2012 | 09:25 PM - Posted by Wolvenmoon (not verified)

TLER used to be something you could enable on WD consumer drives. They removed this capability in newer drives. Do these dock their heads every few seconds like Green drives did the last time I looked at platter drives?

I hate to be a cynic with nothing nice to say, but I'm not looking to do an enterprise datacenter RAID, nor am I looking for a home server RAID. I'm looking for a workstation RAID-1 that is hardware-failure resistant (and potentially with faster read speeds) and high performance (like my TLER-enabled Caviar Black 1TB drives)... or just a single drive that's backed with a long warranty and a data recovery guarantee.

I certainly don't; I bought my last few drives in pairs to RAID-1 against hardware failure, with the understanding that if the volume failed to recover, I'd be whipping out Recuva and other utilities to pick my files up.

How hard is it to physically take apart a drive and remove the platters? I'd like to see it done!

July 11, 2012 | 09:42 PM - Posted by Allyn Malventano

I believe the head dock time has been extended to minimize repeated cycling.

I would recommend these over Greens for anything involving any sort of redundant RAID (your pairs included). I say this because the features are definitely worth it vs. the cost, and TLER is absolutely a good thing to have for those configurations. The flip side to this is if you also need superior random access performance for that same RAID, in which case I'd say go with VelociRaptors or even RE series drives (both also have TLER). Since TLER 'gives up' on bad sectors faster than a normal drive would, you should use them in redundant arrays. For single mass storage drives I'd say go Green, Blue, or Black - those will try harder to recover a bad sector, but that will only work properly outside of an array.

Finally - modern drives are *way* too dense and precise to open outside of a clean room. The last drive I did a head pack swap with was an 80GB unit. You can't do the steamy shower trick either :).

July 12, 2012 | 12:24 PM - Posted by Anonymous (not verified)

not only sleeps with them but also showers with them ...nobody knows drives better than Allyn :)

July 12, 2012 | 12:59 PM - Posted by Adrian (not verified)

> How hard is it to physically take apart a drive and remove the platters? I'd like to see it done!

If you have a set of TORX screwdrivers, it is very easy to take apart a drive and remove its platters.

Nevertheless, you must never do this unless you intend to dump the HDD as garbage afterwards. An HDD cannot be put back together after being taken apart unless it is all done in a special clean room, with special tools.

July 11, 2012 | 11:52 PM - Posted by Anonymous (not verified)

Something I've yet to see in a review of these Red drives is a comparison to other manufacturers' drives. Yes, WD only offers TLER on the RE and Red lines, but many manufacturers offer the same functionality on all their drives under the name CCTL. The warranty and MTBF are nice, but I'd rather have 7200rpm Barracudas with CCTL than these 5xxxrpm Reds. Especially in a RAID5 environment where random IO performance is important, spindle speed matters.

July 12, 2012 | 03:37 AM - Posted by Steve (not verified)

Could you post some details of the script you run on your RAID? I am sure there are plenty of us who already have a WD Green drive RAID and can't afford to replace the whole array in the short term.
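Allyn's exact script isn't posted in the thread, but the general idea - read every sector of the volume on a schedule so latent bad sectors surface while the array still has redundancy - can be sketched in Python; the device path below is a placeholder:

```python
import sys

DEVICE = "/dev/md0"           # placeholder: the RAID block device to exercise
CHUNK = 4 * 1024 * 1024       # read in 4 MiB chunks; opening raw devices needs root

errors = 0
offset = 0
with open(DEVICE, "rb", buffering=0) as dev:
    while True:
        dev.seek(offset)
        try:
            block = dev.read(CHUNK)
        except OSError as e:
            # An I/O error here means the array hit a sector it could not recover
            print(f"read error near byte {offset}: {e}", file=sys.stderr)
            errors += 1
            offset += CHUNK   # skip past the bad region and keep scanning
            continue
        if not block:         # end of device
            break
        offset += len(block)
print(f"scan complete, {errors} unreadable region(s)")
```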

July 12, 2012 | 11:35 AM - Posted by Seb (not verified)

Hi

From what I've heard and implemented myself, for large arrays like the one mentioned, RAID-6 is recommended for increased safety, given the increased likelihood of read failures.
Regarding the read issues on specific drives, SMART should be used to continuously monitor drive status: temperature, number of bad sectors, spin-up count, ... Further, SMART can (and on large arrays should) be configured for periodic tests:
* short self-tests
* long self-tests
* or partial long self-tests

This recently allowed me, for example, to detect a failing 2TB WD EARS drive and replace it in time.
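On Linux, smartd can schedule exactly these self-tests from smartd.conf; the equivalent done by hand, as a Python sketch that shells out to smartctl (the drive list is an assumption):

```python
import subprocess

DRIVES = ["/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd"]  # adjust to your array

def smartctl(drive, *args):
    return subprocess.run(["smartctl", *args, drive],
                          capture_output=True, text=True).stdout

for d in DRIVES:
    # Kick off a short self-test; swap in "-t long" for a full surface scan
    smartctl(d, "-t", "short")
    # Dump overall health plus the attribute table: temperature, reallocated
    # and pending sector counts, spin-up and load-cycle counts, etc.
    print(f"=== {d} ===")
    print(smartctl(d, "-H", "-A"))
```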

July 12, 2012 | 11:45 AM - Posted by Claus (not verified)

Since TLER causes a timeout in the recovery of a sector, how does it affect the possible relocation of the sector in case it's no good anymore? Won't it prohibit the drive from finding out whether or not the sector is bad?

July 16, 2012 | 12:57 AM - Posted by Allyn Malventano

TLER just means the drive considers the sector bad sooner than it would if it was a consumer drive. It won't be able to relocate the bad sector while it's bad, but it will know it's bad so hopefully the next time a write is attempted to the same spot it will then write to the spare area.

Note: for Advanced Format drives, logical sectors are part of larger physical sectors on the disk, so you're remapping 4K at a time, not 512B.
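As an aside, on drives whose firmware allows it, this recovery timeout can be queried and adjusted through SCT Error Recovery Control using smartctl; a minimal sketch (device path is a placeholder, timeouts are in tenths of a second):

```python
import subprocess

DRIVE = "/dev/sda"  # placeholder

# Read the current SCT Error Recovery Control (TLER/ERC) timeouts, if supported
print(subprocess.run(["smartctl", "-l", "scterc", DRIVE],
                     capture_output=True, text=True).stdout)

# Set read/write recovery timeouts to 7.0 s (70 deciseconds) - the usual
# RAID value. Locked consumer firmware will simply refuse the command, and
# on drives that do accept it, the setting typically resets at power-off.
subprocess.run(["smartctl", "-l", "scterc,70,70", DRIVE], check=False)
```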

July 12, 2012 | 12:51 PM - Posted by Taowm (not verified)

Allyn,

You mentioned that you've been running a batch job to read the whole RAID array on a regular basis. That sounds like a good idea which I should do on my home NAS.

How are you accomplishing that feat? Do you get some type of a report that would tell you Disk 3 had a bad sector so you could deal with that before encountering a failure of Disk 1 (and subsequent RAID rebuild failure)? I assume that if you're aware of this type of problem then simply trying to copy the file will trigger a problem for the RAID controller? And in the case of the Areca controller it should automatically recover the problem for you?

Do you know if the integrated RAID controllers on Intel and AMD motherboards also recover bad sectors the way you mentioned the Areca handles this problem?

One thing that was unclear to me about RAID5 ... does one bad sector on any disk guarantee a RAID rebuild failure? Shouldn't it be possible to rebuild the rest of the RAID array and simply have one corrupted file instead of losing the whole RAID array?

For several years WD had been recommending not using their Green drives in RAID arrays. Recently I saw they okayed their use in 2-drive (RAID0 or RAID1) configurations. Given WD's recommendation, I have avoided using them in RAID5.

Too bad WD didn't make these Red drives with an Unrecoverable Read Error rate of 1 in 10^15 bits versus the typical 1 in 10^14 for a consumer drive. That seems to be one advantage that Samsung drives have - many are rated at 10^15 - but now that Seagate has taken over Samsung's HDD business, I'm not sure how long that will remain true.

Great article. Appreciated all your hard work compiling and sharing these results.

July 12, 2012 | 12:52 PM - Posted by Anonymous (not verified)

Yeah.. the scenario described is precisely why I don't run any kind of hardware RAID on a home server.
Right now the flavor of choice is ZFS and RAID-Z. zpool scrub for the win. :)
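A minimal Python sketch of kicking off that scrub from a script (the pool name is an assumption):

```python
import subprocess

POOL = "tank"  # assumed pool name

# Kick off a scrub: ZFS walks every allocated block, verifies checksums,
# and repairs anything bad from redundancy. The scrub runs in the background.
subprocess.run(["zpool", "scrub", POOL], check=True)

# Check on pool health and scrub progress
print(subprocess.run(["zpool", "status", POOL],
                     capture_output=True, text=True).stdout)
```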

July 12, 2012 | 05:19 PM - Posted by Anonymous (not verified)

This may be a stupid question, but here goes. Could regular use of some kind of HDD utility such as SpinRite go some way to preventing problems? (I say stupid because I don't even know if such utilities can be used with a RAID setup.)

July 13, 2012 | 04:01 PM - Posted by jgstew

You could use a utility like SpinRite, but you would have to take the RAID offline, run SpinRite on each drive individually, and then bring the RAID back online. I'm not sure I'd recommend this option.

You could try to run SpinRite at level 2 on a RAID while it is online, and it would do something similar to reading the entire RAID, but SpinRite does not have the true block-level access to the drive itself that it expects when drives are in a RAID (other than RAID 0).

July 16, 2012 | 01:01 AM - Posted by Allyn Malventano

With the controller logic being on the *drive* side of modern drives, there are severe limits to what SpinRite can accomplish. It's a great tool for exercising the entire drive surface by reading / writing / etc. SpinRite is coded not to time out so easily on read errors, so it has a much better chance of making it across the whole disk span and prompting bad sectors to be mapped out, etc. Attempting the same feat under Windows would likely cause the drive to disappear after Windows gives up trying to access it after repeated errors.

July 13, 2012 | 03:20 AM - Posted by spacekiek

Yes, I’ve been obsessing about this “bit error” issue on my Synology RAID-5 setup for some weeks now.
I thought my setup was safe as long as there was no fire or theft … but it seems that bit rot can creep into your array without you noticing it until it's too late.
Me too, I’m using Caviar Greens because those Enterprise drives are just WAY too pricy for a consumer like me.
Those RED drives really seem like the answer for me.

I’m just wondering whether there is a difference between using a true hardware RAID (Areca/Promise) controller and a NAS like Synology/QNAP?
As Synology is more of a software RAID solution, I think they compensate for the lack of TLER on consumer drives, so the drive won’t be kicked out of the array. Is this correct?

Would RAID-6 offer any benefits over RAID-5 concerning these ‘bit errors’?

Until I can get my hands on some REDs, I’m using another NAS in JBOD to make a weekly backup.
But I’d prefer a solid solution that provides a “disk failure and bit-error proof“ setup…

July 13, 2012 | 05:19 AM - Posted by Anonymous (not verified)

RAID-6 should theoretically be better able to cope with bit rot. If bit rot is detected during a regular read, or preferably during a scheduled scan, it will have double the parity available. Remember that bit rot could just as easily corrupt the parity information as the data stored. With a dual-parity setup it's possible to compare all three solutions, and with any luck two of the answers will come up equal, meaning that the third, dissimilar answer is corrupt.

Now all of that is theoretical and only one possibility of how it can be handled by the controller or RAID software. AFAIK each sector on the HDD has a CRC checksum that should allow the drive to detect most cases of bit rot at the root. And with big sectors such as the WD Advanced Format they can even correct some cases of bit errors on the fly.

All of this is assuming that the bit rot is caused by instabilities in the media, however much more data is actually lost to software bugs and user errors, and unfortunately there is no RAID level that will help protect data from PEBKAC or programming errors.
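The "two of three answers agree" logic can be made concrete with a toy arithmetic stand-in for dual parity (real RAID-6 computes its P and Q syndromes in a Galois field, but the localization idea is analogous):

```python
# Toy dual parity over integers: P = plain sum, Q = position-weighted sum.
data = [12, 7, 33, 5]                          # symbols as originally written
P = sum(data)                                  # first parity
Q = sum(i * d for i, d in enumerate(data, 1))  # second, position-weighted parity

data[2] = 40                                   # bit rot silently corrupts one symbol

# On scrub, both parities disagree with the stored values:
dP = sum(data) - P                             # error magnitude
dQ = sum(i * d for i, d in enumerate(data, 1)) - Q
pos = dQ // dP                                 # weighted mismatch reveals *where*
data[pos - 1] -= dP                            # ...and dP says by how much
assert data == [12, 7, 33, 5]
print("corrupt symbol located at position", pos, "and repaired")
```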

July 16, 2012 | 09:57 AM - Posted by Simon (not verified)

Hi Allyn,

Would using the WD red make sense in a non-RAID environment?

If you just wish to store data which is not regularly read or written to - just for online storage - would the Red be a good option?

Kind Regards

Simon Zerafa

July 17, 2012 | 07:36 PM - Posted by Dr_b_ (not verified)

WD badly needs a refresh and update to their RE4s.

July 19, 2012 | 11:28 PM - Posted by Gustin (not verified)

How are software RAID setups (like mdadm) affected by these features (or lack thereof)?

July 25, 2012 | 10:49 AM - Posted by Anonymous (not verified)

They aren't. See my reply above for more information.

Software RAID is quickly becoming the de facto standard among home users for this reason.

July 20, 2012 | 12:50 AM - Posted by Gustin (not verified)

Never mind - I did some RTFMing, and the lack of TLER does not seem to be a problem for mdadm, which matches my anecdotal experience.

July 23, 2012 | 12:13 AM - Posted by Peter (not verified)

Hi guys, I have a 4-drive NAS that's configured as solo drives (non-RAID configuration). Are these new Red drives recommended for such a configuration? Or would it be better to just use Green drives, since my NAS isn't configured as a RAID? Hope someone can enlighten me here. Thanks

July 23, 2012 | 05:54 PM - Posted by Rob (not verified)

Front-load washing machines have been using this method to counter-balance your dirty ginch for years and years ;)

July 26, 2012 | 02:45 AM - Posted by Anonymous (not verified)

Quick question!

I'm not a RAID expert, but the example scenario you outline pops a few questions into my head.

Example 1:
Drive 1 goes offline, and during rebuild another hidden bad sector pops up in Drive 3 and kills the rebuild.

Example 2:
Drive 1 stays online because TLER reports the bad sector to the RAID card, which quickly addresses the issue using parity data from somewhere else.

There seem to be three factors: TLER vs. non-TLER, how a RAID card addresses a bad sector, and the chance another bad sector would appear during a rebuild. The article makes it seem as if the Areca controller saved the day.

I draw this conclusion because Example 1 indicates the array fails because the RAID controller failed to address the bad sector on Drive 1, leaving the chance that a bad sector could appear during a rebuild, aka Drive 3. Whereas Example 2 has the array functioning because the drive reported the bad sector and the RAID card addressed the issue correctly.

So the question I have is: what is the timeout duration of a consumer drive versus a TLER-equipped drive? Is it a matter of hours? Or days?

The question stems from the fact that I have several consumer drives in RAID-5 on an Areca 1231ML. I'd consider upgrading over time, but if a TLER and a non-TLER drive report a bad sector within days of each other, the card I have should address it correctly?

Appreciate the help.

July 27, 2012 | 01:06 PM - Posted by Anonymous (not verified)

Interesting. I just ordered 8 WD REDs and I am looking at RAID cards now. Areca seems to be one of the best, or am I wrong? I want a hardware RAID card that can recover from "small" errors. RE4s are too pricey. I already have a QNAP 639 Pro (6x WD Black 1TB). That's worked perfectly since the beginning. They (6x WD Black 1TB) have run 24/7 for well over 2 years now and I expect a failure... So I am building a new server after replacing the drives in my QNAP NAS with WD REDs, plus 2 for another RAID. Then I'll build a second server with 2012 Essentials Server when it appears and use the NAS as a backup for the new server. But I don't know if I am going to use WD REDs or the pricey RE4/RE4-GP...

July 28, 2012 | 01:09 AM - Posted by Anonymous (not verified)

Areca cards are great! However, very pricey - which is why I posted the question above your post.

BUT as a person who owns one of these cards, I'd move away from it. Why? ZFS is a better storage system. Grab an AMD CPU (because it supports ECC), get ECC RAM, and load an OS that supports ZFS. From what I hear it is a much more reliable storage system than RAID.

I would have done this myself, but I didn't learn about ZFS until after my purchase. Secondly, I have nearly 6TB across 8 drives. Moving that amount of data would be a pain.

Lastly, if you really decide to go with an Areca card, try to find one second-hand. I picked up my 1231ML for $375 used. Run a Google search on the key phrase "FS Areca" and sort by date.

Good luck!

August 1, 2012 | 09:38 AM - Posted by Anonymous (not verified)

Do any of you see a problem using these drives in a 12-bay NAS running FreeNAS with ZFS RAID-Z2?

The WD site just says for up to 5 bays... Is this just marketing hype? Or do you think these drives will be OK for large-bay NAS enclosures?

Thoughts? Thanks

August 5, 2012 | 02:27 AM - Posted by Anonymous (not verified)

As I understand it, it's because the RED drives lack vibration sensors and pressure sensors.

However, I'm also considering using 15 of these babies in a file server for private use.

I'm really wondering if this will actually be a problem or not....

-JKJK-

October 16, 2012 | 01:41 PM - Posted by Anonymous (not verified)

Hi,

I would really love to hear more about your 15 drive setup. Care to share some more details?

Thanks,
-jj

August 5, 2012 | 02:29 AM - Posted by Anonymous (not verified)

Sorry about that triple post ... I got a "page could not be found" each time I tried to post.

September 14, 2012 | 03:11 PM - Posted by Shimi (not verified)

A while back the WD Red 3TB was selling for $169.99; now that I want to buy, it is about $259.99. Any idea if prices will drop back below $200, and why the sudden increase in price?

October 1, 2012 | 04:18 PM - Posted by Tree (not verified)

Would you recommend the RED series if you don't use a RAID solution? Why I ask: I'm running Windows Home Server 2011 with 4 drives, and (lessons learned) I robocopy my important data once a week to an external eSATA drive.

December 5, 2012 | 08:46 AM - Posted by Anonymous (not verified)

"I have an 8-drive array of 3TB WD Caviar Greens in a RAID-5."

This array is just a disaster waiting to happen. Consider the situation when one of your drives fails: you're left with a 7-drive RAID 0 array! And if the thought of housing your important data on such a large RAID 0 array doesn't make your pulse race, you're either an idiot (sorry) or the data wasn't that important to begin with.

When you replace the failed drive of an (n-1)-drive degraded array, the rebuild process places the greatest strain on the array when it is most vulnerable. It may seem counterintuitive, but at some point an n-drive RAID 5 array is more likely to fail and suffer complete data loss than a single large disk with no redundancy. I think n=8 is well beyond that point. This has actually been studied scientifically here: http://media.netapp.com/documents/rp-0046.pdf
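To put rough numbers on it, using the 1-error-per-10^14-bits URE spec quoted earlier in the thread for consumer drives (a back-of-envelope Python sketch, not the paper's model):

```python
# Expected unrecoverable read errors while rebuilding the degraded 8-drive
# RAID-5 above: the 7 surviving 3 TB disks must be read end to end.
URE_PER_BIT = 1e-14                    # consumer spec: ~1 error per 10^14 bits
bits_read = 7 * 3e12 * 8               # 7 drives x 3 TB x 8 bits/byte
expected_ures = bits_read * URE_PER_BIT
p_clean_rebuild = (1 - URE_PER_BIT) ** bits_read
print(f"expected UREs during rebuild: {expected_ures:.2f}")   # ~1.68
print(f"chance of a clean rebuild:   {p_clean_rebuild:.0%}")  # ~19%
```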

Conclusion: large RAID 5 arrays are not safe. You need at least a RAID 6 configuration.

As for whether TLER actually improves the safety of the array, I think some of the other comments have covered this already.

January 10, 2013 | 03:47 AM - Posted by Danny (not verified)

Hi,

I had some Green disks in my NAS and had to do the update with wdidle3 to get around the 8-second park issue. Is the same fix also needed on these drives?

January 30, 2013 | 11:30 AM - Posted by PhotoNeil (not verified)

I find it interesting (and very depressing and confusing) that the only RAID storage devices listed on WD's certified Red drive compatibility list are devices that previously included Green drives in the manufacturer's own compatibility lists.

These include Drobos, and the Synology, QNAP, et al. software RAID NAS boxes. It is my understanding that the NAS boxes all use Linux mdadm under the covers of their proprietary user interfaces. It is also my understanding that Synology SHR, for example, is simply a well-built user interface over Linux LVM.

Suspiciously absent from the list are any of the hardware RAID controllers, or the devices that use them.

If the above is factually correct, then it calls into question the true value of Red drives. We can argue the merits of using Red drives in hardware RAID solutions, but the fact is that if you do have a problem and you attempt to get support from your RAID controller maker, he will give you a simple response: "We don't support Red drives, so we cannot tell you why your RAID array dropped (or regularly drops)."

Seems to me there is little or no value in attempting to use RAID for an increased level of protection if the controller maker will not support it. It is, in that way, no better than using Green drives.

It calls into question the value of Red drives. I've read a lot of discussion (mostly speculation) about these new drives but never seen my concerns mentioned or discussed.

January 30, 2013 | 11:44 AM - Posted by PhotoNeil (not verified)

As an afterthought: I use a SansDigital TR4UTBPN 4-bay eSATA/USB3 external RAID/JBOD enclosure. I know, having read their now-defunct support forum for many years, that in every case where someone reported RAID array failure or frequent rebuilding problems, they washed their hands of the matter by simply pointing out that they don't support those drives in that enclosure (or any other RAID enclosure they sell). You are totally on your own.

They are not listed on WD's Red drive compatibility lists (nor are any other hardware RAID devices that previously only recommended or certified Enterprise drives). Nor do they specifically address Reds on their site (and their HCL is not easy to find).

I recently submitted a support ticket asking if they supported Red drives. They initially just said "no, they have never been tested".

When I persisted and asked *why* they have not tested those drives, they responded by saying that because WD did not include them on their list, they had no interest in testing them. I found that a strange response - does the tail wag the dog, or the dog wag the tail?

Anyway, since I have no interest in using non-supported drives in a RAID device, I will continue to run the box in JBOD mode, as I always have, for the same reason, with the Green drives I currently use.

IOW, nothing has changed, except we consumers can now speculate about all the various vague claims by the various manufacturers of hard drives and the boxes that use them, and how the RAID system might respond to bad sectors.

A sad state of affairs.

February 21, 2013 | 10:23 AM - Posted by SupaGabba (not verified)

Can you persistently patch the Red drives to disable TLER, or change its setting, for desktop use?

April 8, 2013 | 09:23 PM - Posted by Fred (not verified)

Great review, Allyn!

Very informative.
Bought 2 WD20EFRX (2TB)

July 23, 2013 | 11:20 PM - Posted by South.Aussie (not verified)

Seriously, this is a piece of crap drive. I just lost all my data at 2 weeks old. Trying to recover, and the whole drive shows as RAW and is extremely slow to access.
Was good for the first week. Then boom!!!! Instant poop.

September 13, 2013 | 08:59 AM - Posted by Anonymous (not verified)

Really, the Reds seem to be Greens rebranded with new firmware/RAM... Unless you are using hardware RAID (and running 24/7) they are not worth it... Go up to enterprise (actual enterprise) drives and never look back...

Also, people with early failures aren't stressing their builds before putting drives etc. into production... I've never lost data due to early failure... I have lost a Green to head parking in a NAS, though... WDIDLE is a must for Greens (and probably Reds)...
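For anyone wanting to check whether aggressive head parking is racking up load cycles on their own Greens or Reds, SMART attribute 193 tracks it; a Python sketch, assuming smartctl is installed and the device path is adjusted:

```python
import re
import subprocess

DRIVE = "/dev/sda"  # placeholder

out = subprocess.run(["smartctl", "-A", DRIVE],
                     capture_output=True, text=True).stdout
# Attribute 193 (Load_Cycle_Count) is the number the 8-second idle timer
# inflates; Greens are typically rated for around 300,000 cycles.
match = re.search(r"Load_Cycle_Count.*?(\d+)\s*$", out, re.MULTILINE)
if match:
    print("Load_Cycle_Count:", match.group(1))
else:
    print("attribute not reported")
```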
