Might be slightly OT as it isn’t necessarily a CentOS related issue.
I’ve been using WD Reds as mdraid components which worked pretty well for non-IOPS intensive workloads.
However, on the latest C7 server I built, on an Intel C236 board (SuperMicro X11SSH), they ran into problems with tons of “ata bus error write fpdma queued” messages. Googling turned up old suggestions to limit the SATA link speed to 1.5Gbps using the libata.force boot option and/or to disable NCQ. Lowering the link speed helped reduce the frequency of the errors (from not getting any smartctl output at all to getting a complete listing within two tries).
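For anyone else hitting this, both workarounds can be passed in one libata.force parameter (values per the kernel's boot-parameter docs). A sketch using grubby, which is stock on C7; the port ID is an assumption, adjust to the ata port from your dmesg:

```shell
# Limit all ports to 1.5Gbps and disable NCQ, on every installed kernel.
# Takes effect on the next reboot.
grubby --update-kernel=ALL --args="libata.force=1.5Gbps,noncq"

# Or scope it to one port (ata1 here, hypothetical) with an ID prefix:
#   grubby --update-kernel=ALL --args="libata.force=1:1.5Gbps,1:noncq"

# After rebooting, check what the kernel actually negotiated:
dmesg | grep -i 'SATA link'
```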
Tried two Reds produced in Jan 2016, two HGSTs, and two SSDs on the same combination of cable/bay/SATA port as well as different combos. The SSDs maxed out at ~450 MB/s in a dd write test, so it doesn't appear to be lousy cables or a board problem. Basically only the Reds were having problems.
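The write test was along these lines; run it from a directory on the drive under test (the ~1 GB size is just my choice, not anything magic):

```shell
# Quick sequential-write sanity check. conv=fdatasync forces the data to
# disk before dd reports its rate, so the page cache doesn't inflate the
# number. A SATA III SSD should sustain roughly 450-500 MB/s here.
dd if=/dev/zero of=ddtest.bin bs=1M count=1024 conv=fdatasync

# Clean up the scratch file afterwards.
rm -f ddtest.bin
```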
Strange thing is a netinstall of CentOS 7.0 “minimal” worked with one of the Reds before it started to cough. Now that I think about it, that could be due to the update to 7.2 after installing.
Needed to get the server out the door ASAP, so I didn't have time to try C6; once I confirmed it was the drive, I promptly replaced it with another HGST.
Since I'm likely to use Reds again, it is a bit of a concern. So I'm wondering: did I just get an unlucky batch? Is there some incompatibility between the Reds and the Intel C236 chipset, or specific to the Red / C236 / CentOS 7 combo? Or, the unlikely chance that WD has done something with the firmware to make them work on NAS chipsets but not workstation/server ones, to make people buy better stuff?
Anybody have recent experience with them on the same chipset with C7?