CentOS 7 : Create RAID Arrays Manually Using mdadm --create ?


Hi,

When installing CentOS 7, is there a way to

1. leave the GUI installer and open up a console
2. create RAID arrays manually using mdadm --create
3. get back in the GUI installer and use the freshly created /dev/mdX
arrays?

I tried to do this, but the installer always exits, informing me that it can't create the RAID arrays (since they're already created, duh).
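
For reference, step 2 above means something roughly like this (device names,
RAID level and partitions are just an example, not my actual layout):

  # tty2 (Ctrl+Alt+F2) should give a shell while the installer is running
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2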

Any suggestions?

Cheers,

Niki

Microlinux – Solutions informatiques 100% Linux et logiciels libres
7, place de l’église – 30730 Montpezat Web : http://www.microlinux.fr Mail : info@microlinux.fr Tél. : 04 66 63 10 32

8 thoughts on - CentOS 7 : Create RAID Arrays Manually Using mdadm --create ?

  • It’s useful to know what layout you want. The installer will neither create, nor let you use, what it thinks are ill-advised layouts. The main reason I can think of for pre-creating md devices is to use a non-default chunk/strip size (there’s a sketch of that at the end of this comment).

    The other thing is if you want to use LVM on top of an md device, I’m pretty sure you have to create the whole thing in advance because the installer UI won’t add LVM on top of an existing md device. It expects that you create a mount point, define it as an LVM device, then within the modify options you choose what RAID type you want, which you can’t do if the md device is already created.

    Anyway, it’s a lot easier if you just state what you want first. And then it’s also useful to understand that the installer’s UI is mount point centric; it kinda de-emphasizes the specifics of how that mount point gets assembled.
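
    For example (just a sketch; the device names and the 128K chunk are
    arbitrary), this is the kind of thing the installer UI won't let you
    specify but mdadm will:

      mdadm --create /dev/md0 --level=10 --raid-devices=4 --chunk=128 \
            /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2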

  • The installer can create either of these layouts in manual partitioning.

    This can also be exactly reproduced with the installer using manual partitioning. However:

    – I’d use ext4 or xfs for /boot instead of ext2.
    – I’d make /boot bigger than 100MB, which is almost certainly too small to hold 3 kernels and their initramfs images.
    – I would not put swap on an md device; I’d put a plain swap partition on each drive instead. First create two swap mount points (by default this puts both swaps on one device), then select one of them, click the screwdriver+wrench icon (configure selected mount point), choose a specific drive, click Select, then click Update Settings. Repeat for each additional swap, making sure each ends up on its own drive. There’s a rough command-line sketch of this layout below.
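
    Just to illustrate the layout I'm describing, here's a rough command-line
    equivalent (device names and partition numbers are only an example; in
    practice the installer does all of this for you from the mount point
    screen):

      # /boot and / mirrored across both drives
      mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
      mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
      mkfs.xfs /dev/md0    # /boot, xfs or ext4 rather than ext2
      mkfs.xfs /dev/md1    # /
      # plain swap on each drive, not on an md device
      mkswap /dev/sda3
      mkswap /dev/sdb3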

  • If one of the devices fails, doesn’t that mean that any processes with swap on the associated space will be killed? Avoiding that is kind of the point of having mirrors….

  • It’s a good question.

    I try to avoid swap use, especially on hard drives, though for some use cases it’s better to slow to a crawl than to implode under memory pressure. For more cases I think swap on SSD makes more sense; the system won’t slow down nearly as much.

    The reason I avoid swap on md RAID 1/10 is the swap caveat listed under
    man 4 md. It’s possible for a page in memory to change between the writes
    to the two member devices, such that the mirrors really are different.
    The man page only suggests this makes scrub check results unreliable, and
    that such a differing block wouldn’t be read back (?), but I don’t
    understand this. So I just avoid it, because I haven’t thoroughly tested it.

    So if anyone has, that’d be useful info. If not, it might be worth asking in linux-raid@ for clarification.

    But sure, if swap is actively in use and vanishes due to a drive failure, there’s a decent chance it’s a problem. How it would manifest, though, is another question.
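
    If someone does want to test it, the basic check is simple enough (a
    sketch, assuming the array holding swap is /dev/md0):

      echo check > /sys/block/md0/md/sync_action   # kick off a scrub
      cat /proc/mdstat                             # wait for the check to finish
      cat /sys/block/md0/md/mismatch_cnt           # non-zero means the copies differ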

  • I suggest not taking my word for it, and reading man 4 md, starting with
    the paragraph “The most likely cause for an unexpected mismatch on RAID1
    or RAID10 occurs if a swap partition or swap file is stored on the array”
    and the following 4 paragraphs, and letting me know what you think it’s
    saying. It raised my eyebrows, but it seems to be saying this doesn’t
    actually result in corruption. The part I don’t understand is how a page
    changing between the writes to the two (swap) mirrors translates into
    unused swap, and thus into it not being a problem that there’s a
    (meaningful) mismatch between the two mirrors. If the page write to disk
    happened at all, it seems like that is used rather than unused swap.

    For data (not swap), there’s a related set of known issues for all RAID 1
    and 5 setups: regularly scheduled scrubs are necessary to make sure bad
    sectors are identified and corrected, yet this isn’t the default behavior
    and has to be configured; further, a reported mismatch doesn’t
    unambiguously tell us which copy is good (or bad), only that they’re
    different. Ergo, regularly scheduled “checks” are a good idea, while
    “repair” is more of a last resort because it might cause the good copy to
    get overwritten.
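
    For what it's worth, on CentOS the mdadm package ships a weekly raid-check
    cron job (if I remember right), so scheduling is mostly a matter of
    reviewing its config rather than writing your own; "repair" uses the same
    sysfs knob as "check":

      cat /etc/cron.d/raid-check                    # the shipped schedule
      cat /etc/sysconfig/raid-check                 # ENABLED, CHECK=check|repair, etc.
      echo repair > /sys/block/md0/md/sync_action   # manual repair, last resort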

    This isn’t broken; it’s just the way it’s designed. This is what DIF/DIX (now PI), Btrfs and ZFS are meant to address. There’s also been some intermittent talk on linux-raid@ about whether and how to get checksums integrated there.