Any Experiences With Newer WD Red Drives?


Might be slightly OT as it isn’t necessarily a CentOS related issue.

I’ve been using WD Reds as mdraid components which worked pretty well for non-IOPS intensive workloads.

However, the latest C7 server I built ran into problems with them on an Intel C236 board (SuperMicro X11SSH), with tons of “ata bus error write fpdma queued”. Googling turned up old suggestions to limit the SATA link speed to 1.5Gbps using libata.force boot options and/or to disable NCQ. Lowering the link speed helped reduce the frequency of the errors (from not getting a smartctl output at all to getting a complete listing within 2 tries).
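For reference, the boot options those old suggestions point at look roughly like this. This is a sketch, not a confirmed fix; the `ata1` port number here is a placeholder, so check dmesg to see which ata port the drive is actually on:

```shell
# Appended to the kernel command line (e.g. GRUB_CMDLINE_LINUX in
# /etc/default/grub, then rebuild the grub config). The port number
# "1" below is hypothetical -- match it to the ataN port in dmesg.
#
#   libata.force=1:1.5Gbps      limit port ata1 to a 1.5 Gbps link
#   libata.force=1:noncq        disable NCQ on port ata1
#
# Multiple settings can be combined in one comma-separated parameter:
#   libata.force=1:1.5Gbps,1:noncq
```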

I tried two Reds produced in Jan 2016, two HGSTs, and two SSDs on the same combination of cable/bay/SATA port as well as different combos. The SSDs maxed out at around 450 MB/s in a dd write test, so it doesn’t appear to be a lousy-cable or board problem. Basically only the Reds were having problems.
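The dd write check I used was along these lines (a sketch; the mount point is a placeholder for a filesystem on the drive under test):

```shell
# Write 1 GiB of zeros with O_DIRECT so the figure reflects the drive
# rather than the page cache. /mnt/test is a hypothetical mount point
# on the drive being tested.
dd if=/dev/zero of=/mnt/test/ddtest.bin bs=1M count=1024 oflag=direct
# dd prints the throughput when it finishes; clean up afterwards:
rm -f /mnt/test/ddtest.bin
```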

Strange thing is, a netinstall of CentOS 7.0 “minimal” worked with one of the Reds before it started to cough. Now that I think about it, that could be due to an update to 7.2 after installing.

I needed to get the server out the door ASAP, so I didn’t have time to try C6; once I confirmed it was the drive, I promptly replaced it with another HGST.

Since I’m likely to use Reds again, it is a bit of a concern. So I’m wondering: did I just happen to get an unlucky batch, or is there some incompatibility between the Reds and the Intel C236 chipset, or between the Red/C236/CentOS 7 combo? Or, the unlikely chance, has WD decided to do something with the firmware to make them work on NAS chipsets but not workstation/server ones, to make people buy better stuff?

Anybody have recent experience with them on the same chipset with C7?

10 thoughts on - Any Experiences With Newer WD Red Drives?

  • Unfortunately no; I had to get the server out ASAP, so I’d already swapped the Reds with the vendor for HGSTs.

  • Emmanuel Noobadmin wrote:

    Sorry, we don’t seem to have any Supermicros with that m/b, but with the ones we have (all X9* m/bs), as well as our many Dells, old Penguins
    (rebranded Supermicro), and HPs, we’ve had no trouble at all with them, other than the occasional one that dies.

    mark

  • One data point, possibly not what you’re looking for, but it may be useful to someone.

    I’m NOT using that motherboard; it’s an Asus M5A99X
    (http://www.newegg.com/Product/Product.aspx?Item=N82E16813131874) with an AMD six-core processor. It has been running since around late Dec 2015
    with two 1TB Reds using software RAID1, so far without trouble. (Prior to that it ran a pair of smaller WD Blues, also in RAID1.)

    I’m also using two more of the same drive in an external two-slot RAID box for backup purposes. I had some trouble a year and a half ago with it, but it turned out to be the box, not the drives, though it took some considerable troubleshooting to figure that out.

    One would assume that if your drives were built in Jan 2016, they have the latest firmware installed already, but it might be worth checking.
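    One way to check, assuming smartmontools is installed (the device path below is a placeholder for the actual drive):

```shell
# smartctl -i prints the drive's identity block; the "Firmware Version"
# line shows the installed firmware. /dev/sdX is a placeholder.
smartctl -i /dev/sdX | grep -i 'firmware'
```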

    good luck!

    Fred

  • any chance your SATA cables aren’t up to SATA3 (6 Gbps) performance levels?

    the C236 chipset is the server version of the Z170, the latest Skylake-family chipset.

  • I discovered, amidst great initial pain, that most, if not all, of the problems I had with SATA disks were caused by SATA cables and not by the disks themselves. Intermittent problems, such as disks randomly not showing up in RAID groups, were solved when I replaced the cables with proper ones. Some of the bad cables even came from well known names.

  • Coincidence or not, all of the cables I had problems with were of the same general type: thin and covered with wrapped aluminum foil. I don’t think I ever had problems with the flat, wider ones.

  • The cables came with the SuperMicro board, so I certainly hope they haven’t started cheaping out on those :D

    In any case, the cables shouldn’t be the problem, because I swapped other drives (SSD and HGST HDD) into the same drive bay, swapped cables, and put the Red into 3 different drive bays/SATA
    cables/ports without any improvement. Both the SSDs I tried were able to hit around 450 MB/s sequential write speed, which is in the general ballpark of figures from online sites, so that should eliminate cabling/connection as the source.

  • I haven’t had any problems with past Reds on the X9* and X10* boards we used before either. This is the first time we’re using an X11
    board with the new chipset, so I was wondering if that might have a part to play.