RAID On CentOS


OK, I’ve got an HP MicroServer that I’m building up.

It’s got 4 bays to be used for data, which I’m considering setting up with software RAID (mdadm).

I’ve got 2 x 2TB, 2 x 2.5TB and 2 x 1TB disks. I’m leaning towards using the four 2.xTB disks
in a RAID 5 array to get 6TB.

The data is currently on the 2.5TB disks.

So, the plan so far:

1) Build the array as a degraded RAID 5 with the two 2TB disks that are empty.
2) Copy the data from one of the 2.5TB disks onto the array.
3) Add the now-empty 2.5TB disk to the array and wait for it to rebuild.
4) Copy the contents of the remaining 2.5TB disk to the array.
5) Add that now-empty 2.5TB disk to the array and wait for it to rebuild.
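
Roughly, with mdadm that would look something like the following (a sketch only: the device names are placeholders, and whether the grow step needs a --backup-file depends on the mdadm/kernel version):

    # Create a 3-device RAID 5 with one member marked "missing" (degraded),
    # using the two empty 2TB disks (device names are examples)
    mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 missing

    # ... copy the first 2.5TB disk's data onto the filesystem on /dev/md0 ...

    # Add the now-empty 2.5TB disk; the array rebuilds onto it
    mdadm --add /dev/md0 /dev/sdd1

    # ... after the rebuild, copy the second 2.5TB disk's data, then add it
    # and grow the array from 3 to 4 devices
    mdadm --add /dev/md0 /dev/sde1
    mdadm --grow /dev/md0 --raid-devices=4

    # Once the reshape finishes, grow the filesystem to match
    # (e.g. resize2fs /dev/md0 for ext4)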

So questions…

Is the above a stupid idea?
Do I need to get involved with GPT on disks this size?
What format should I create the filesystem as?
Should I get LVM involved somewhere?

Any pointers would be greatly appreciated.

Jeff

12 thoughts on - RAID On CentOS

  • From: Jeff Allison

    Just a reminder that (at least in the Gen8) bays 1+2 are 6Gb/s, but bays 3+4 are 3Gb/s.

  • He said HP Microserver, which is a bit different …

    The OP should be aware of alternative BIOSes available for it, such as this one, though:

    http://www.avforums.com/threads/hp-n36l-n40l-n54l-microserver-updated-ahci-bios-support.1521657/

    This then opens up the possibilities, as there are then a full six SATA III 6Gb/s
    ports available rather than the original four internal ports, one SATA optical port (no AHCI etc.) and one eSATA …

    My N40L, for example, has 4 data drives in the cages and a 5th drive, mounted where the optical would usually go, as a system disk…

    I frankly don’t care if I lose the system disk – it’s quick to rebuild the system – it’s the data I care about.

    On this I’m running F20 rather than C6, primarily for the better BTRFS support (when EL7 rolls around I’ll contemplate a rebuild to that), with the four data disks in a BTRFS pool with a raid1 profile …

    The specific benefits of this, and the reasons I did so, are (in no particular order, non-exhaustive; a rough command sketch follows the list):
    1) Daily snapshots at no space ‘cost’ due to COW
    2) Dedup (see bedup) of large files when I need to expose them in multiple places without resorting to hard links – each file remains independent if I need to change it.
    3) Filesystem level transparent compression of files where it’s worthwhile.
    4) Trivial to change/add disks to provide additional space when I’m close to capacity.
    5) Regular scrubs (every 2-4 weeks, usually) to catch any bitrot that might occur.
    6) Unevenly sized disks still work in RAID1 (I have 2x2TB and 2x3TB) without incurring the ‘minimum size of disk’ penalty.
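
    Something like the following gives a rough idea of the commands behind those points (a sketch only: the device names, mount point and compression option are assumptions):

        # Create the pool with data and metadata in the raid1 profile across the four disks
        mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc /dev/sdd /dev/sde

        # Mount with transparent compression (point 3)
        mount -o compress=zlib /dev/sdb /data

        # Daily snapshot, essentially free thanks to COW (point 1)
        mkdir -p /data/.snapshots
        btrfs subvolume snapshot /data /data/.snapshots/$(date +%F)

        # Add another disk and rebalance when close to capacity (point 4)
        btrfs device add /dev/sdf /data
        btrfs balance start /data

        # Periodic scrub to catch bitrot (point 5)
        btrfs scrub start /data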

  • My Microserver N40L home server runs FreeNAS on a USB stick stuck into the internal slot, with ZFS on the four 3TB drives and a USB drive as a hot spare (lame, I know, but better than nothing). Eventually it will get an eSATA card and a 4-bay eSATA expander.
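
    For reference, a pool along those lines on FreeNAS would be created with something roughly like this (a sketch only: the raidz layout, pool name and device names are assumptions, since the post doesn’t say how the pool is arranged):

        # Single raidz vdev across the four 3TB drives, with the USB disk as a hot spare
        zpool create tank raidz ada0 ada1 ada2 ada3 spare da1

        # Check pool layout and health
        zpool status tank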

  • I do like the idea of the expander though … power might be an issue given the low-power PSU that comes with it – I’d probably have to swap that part out.

  • The eSATA expander has its own PSU. The Microserver would just be powering the eSATA card, which is nothing.

  • I don’t suppose you have a link to one you’ve looked at already, do you?

    This sounds like something useful to play with in a few weeks or so …

  • Well, it would have to be a 4-6 port EXTERNAL SATA card, and I don’t think you can find those in low-profile PCIe as there just isn’t room for that many connectors on the LP backplate. You could use a 4-port SAS card, but that’s more expensive.

    Yes, 1 eSATA port expanded to 4 drives is going to be slower than 4 dedicated ports. Not sure that matters enough on my home NAS.

  • The two I linked are internal units to go in a 5.25″ bay … that’s why you’d need an internal 4-6 port card to make them worthwhile.

  • Ah, I thought we were talking about eSATA external 4-bays, since we were talking about Microservers, which don’t HAVE said 5.25″ bays.

    The desktop chassis I usually get have 6-8 3.5″ bays in them already, albeit they are side-load internals.

  • Different generation of Microserver … I hadn’t seen the Gen8s till you mentioned them today (I actually thought you were referring to the rack-based DL* Gen8s) …

    The previous-generation Microservers (N40/45/54L) all have a 5.25″ bay at the top, principally for an optical drive, but it’s common to put extra drives in there (caddied or just plain wedged … my OS drive is a single
    3.5″ drive wedged in it, for instance).

    See the N40L wiki here for an idea of what I’m talking about:

    http://n40l.wikia.com/wiki/HP_MicroServer_N40L_Wiki