Disk Choice For Workstation?


Hi,

My workstation is currently equipped with a pair of Western Digital Red 1 TB
SATA disks in a software RAID 1 setup.

Some tasks, like working with virtual machines, are a bit slow, so I’m thinking about replacing the disks with SSDs.

I’m hesitating between three different setups:

1) Use a relatively small SSD (120 to 240 GB) and reinstall the system on it. Keep the two SATA disks in a RAID 1 array and mount /home on the array.

2) Use a larger SSD (500 GB to 1 TB), install everything (including /home) on it. Keep the two SATA disks in a RAID 1 array and mount them on /data for storage.

3) Get rid of the disks and go full SSD, with a 1 TB disk.
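For what it’s worth, option 2 boils down to an /etc/fstab along these lines (a sketch only: the UUIDs are placeholders, and I’m assuming the SSD holds a single ext4 root plus swap, with the two WD Reds staying assembled as /dev/md0):

```
# /etc/fstab sketch for option 2: system and /home on the SSD,
# the existing RAID 1 array kept and mounted on /data.
# UUIDs are placeholders; substitute the output of blkid.
UUID=<ssd-root-uuid>   /      ext4  defaults         1 1
UUID=<ssd-swap-uuid>   swap   swap  defaults         0 0
/dev/md0               /data  ext4  defaults,nofail  0 2
```

The nofail option keeps the machine bootable even if the array is unavailable, which is worth having when /data is not needed for the system itself.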

Any advice from the hardware gurus on this list?

Cheers,

Niki

Microlinux – Solutions informatiques durables
7, place de l’église – 30730 Montpezat
Site : https://www.microlinux.fr
Blog : https://blog.microlinux.fr
Mail : info@microlinux.fr
Tél. : 04 66 63 10 32
Mob. : 06 51 80 12 12

10 thoughts on - Disk Choice For Workstation?

  • If I were you, I’d go with the second option: use a larger SSD (1 TB) and
    keep the mirror set (RAID 1) for /data.

    Walter

  • I use one SSD for the OS, deployed with my own automation.

    The SATA drives are in software RAID 5 with a data partition.

    And a cache SSD for the filesystem sitting on the RAID, with compression enabled…

    But you need to choose your own “freak” level, whatever you would enjoy having on your home workstation.

    MOST important: you can easily reinstall, YOU have fun just knowing how it is deployed ;) and you can use it at any second and feel good while doing so!

  • Oops, sorry, you asked for disk choice.

    Choose a different vendor but the same size for the data drives… Why? Different times to failure, so both mirror members are less likely to die at once. Also, if you are OK with ‘tweaking’ firmware, maybe go for Green drives and disable head parking if you use them 24/7.

    Also, double-check the temperature range they can operate at, and consider where the machine will be standing. Under and behind a table?

    Keep in mind, some SSDs may run hotter than a spindle disk, which suggests thinking about an extra fan, but an extra fan can make as much noise as a spindle drive would… Especially if cheap fans are used ;)

    So… Hope this helps ;)

  • If you are planning to directly replace the SATA magnetic disks with SATA
    SSDs, then although you will significantly reduce the seek time, the bandwidth of SATA is nowhere near the bandwidth of NVMe. SSDs are intrinsically better than magnetic disks, although magnetic disks are now available in capacities up to 20 TByte.

    If I were you, I would get a PCIe 3.0 or 4.0 dual-NVMe RAID 0/1 carrier, install two SSDs onto it, and mirror them. Even though SSDs are very reliable, your data is more valuable.

    Don’t go for super-cheap SSDs, as their write endurance will be low. I would look at Samsung SSDs for performance or Kioxia (Toshiba) SSDs for price. As regards the carrier, I would look at Sonnet or Highpoint. Bear in mind that the commercial sweet spot for SSDs is 1 or 2 TByte.

    Mark Woolfson


  • I have seen significant improvement when virtual machine disks are on their own spindle/SSD. I would add an SSD and put the VMs on it.

    Mike

  • The most reliable SATA drive models from the most reliable manufacturer,
    constantly powered up, spinning, and used in RAIDs, had about 3% or fewer fail over the course of more than 10 years. The most reliable manufacturer in my book is Hitachi: formerly IBM, later HGST, with the production line now bought out by Western Digital.

    I doubt the same longevity statistics exist for SSDs. Also: hard drives can take a theoretically infinite number of writes to the same area, whereas SSDs allow only a finite number of write operations to a given cell. In view of this difference, any comparison can be argued to be unfair.

    I’m sure people who have run large numbers of SSDs for a long time will add their observations. I have been happy with Samsung 2.5-inch SATA SSDs so far.

    Valeri


  • I do know that one of our clients (a datacenter) was constantly having failures with OCZ. They replaced them with Samsungs and had far better reliability. I don’t know if OCZ still has issues. For our own use, we use Samsung and they’ve been pretty good, though they do seem to fail more than our spinning drives. However, that makes it sound worse than it is; it’s maybe one or two drive failures a year.

    For my own personal use, as I have little money, I use Crucial, but that’s for some towers that just get home use, the FreeBSD one using ZFS, and they’ve been fine. So, for the moment, I would just echo Valeri and say Samsung is the one that my company and I have found to last the longest.


    Scott Robbins PGP keyID EB3467D6
    ( 1B48 077D 66F6 9DB0 FDC2 A409 FA54 EB34 67D6 )
    gpg --keyserver pgp.mit.edu --recv-keys EB3467D6

  • I think your request lacks at least one critical consideration: What is the cost of down time?

    You’ve got a RAID1 setup now, so I have to assume that you decided at some point in the past that the down time you’d incur replacing a disk when one fails was costly enough to justify buying a second disk. SSDs aren’t immune to failure. Right now, I’m operating on a degraded RAID1 volume while I wait for an RMA on a Samsung 860 QVO that I installed just over one year ago. For me, the cost of the outage justified the redundant storage device. It was expensive, but it’s a cost that paid off this year.
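    Tangentially, a quick way to spot that kind of degraded mirror is to look at
    /proc/mdstat, where a missing member shows up as an underscore in the status
    brackets (e.g. [U_] instead of [UU]). A minimal sketch of such a check (the
    check_degraded name is my own; it only greps for the underscore pattern):

    ```shell
    #!/bin/sh
    # Report whether any md array is running degraded. A line like
    #   976630464 blocks [2/1] [U_]
    # in /proc/mdstat means one of two mirror members is missing.
    # Usage: check_degraded [mdstat-file]   (defaults to /proc/mdstat)
    check_degraded() {
        mdstat="${1:-/proc/mdstat}"
        # An underscore inside the [U...] brackets marks a failed/absent member.
        if grep -q '\[U*_U*\]' "$mdstat"; then
            echo "DEGRADED"
            return 1
        fi
        echo "OK"
        return 0
    }
    ```

    Wired into cron or a monitoring agent, this turns a silent degraded mirror
    into an alert instead of a surprise at the second failure.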

  • Crucial MX500 1TB drive. Previously, they were on an HGST Travelstar 1.5TB
    drive spinning at 5400rpm.

    You’d get better performance from a stripe set as opposed to a mirror set on that spinning silicon.

    When I noticed this improvement, I started digging into why it was so marked. During my k8s setup, I recall measuring IOPS using fio [0] in order to ensure etcd functioned appropriately. When I measured IOPS on my 1.5TB drive, it recorded a value of 37 IOPS. With the MX500, that number is 1092 IOPS.

    [0]:
    https://www.ibm.com/cloud/blog/using-fio-to-tell-whether-your-storage-is-fast-enough-for-etcd
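    For reference, the kind of fio run described in that article can also be
    expressed as a job file. This is a sketch using the parameters the article
    gives for its etcd-style fsync test; bs=2300 approximates etcd's typical
    write size, and fdatasync=1 forces a sync after every write, which is what
    makes the test etcd-like:

    ```
    ; etcd-style fsync test as an fio job file -- run with: fio etcd-fsync.fio
    ; (create the test-data scratch directory first)
    [etcd-fsync]
    rw=write
    ioengine=sync
    fdatasync=1
    directory=test-data
    size=22m
    bs=2300
    ```

    fio then reports, among other things, the fdatasync latency percentiles,
    which are the numbers that matter for etcd's write-ahead log.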