CentOS 7 Install On One RAID 1 [not-so-SOLVED]


Let me see if I can, um, reboot this thread….

I made a RAID 1 of two raw disks, /dev/sda and /dev/sdb, *not* /dev/sdaX and /dev/sdbX partitions. Then I installed CentOS 7 on the RAID, with /boot, /, and swap as partitions on the RAID. My problem is that grub2-install absolutely and resolutely refuses to install on /dev/sda or /dev/sdb.

I’ve currently got it up in a half-assed rescue mode, have bind-mounted /dev, /proc, and /sys under /mnt/sysimage, and chrooted there. That’s where I’m trying to do my grub2-install.
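
For reference, the sequence I ran from the rescue shell was roughly this (from memory, so treat it as approximate):

$ mount --bind /dev  /mnt/sysimage/dev
$ mount --bind /proc /mnt/sysimage/proc
$ mount --bind /sys  /mnt/sysimage/sys
$ chroot /mnt/sysimage
# and then, inside the chroot:
$ grub2-install /dev/sda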

So:
1. *Is* there space for grub2 to install the bootloader below where the mdadm metadata starts? Or do I have to partition the disks (/dev/sda1 at 100%, ditto /dev/sdb1), then create the RAID 1 from the partitions, and *then* run grub2-install?
2. I *think* that one thing grub2-install is complaining about is that it can’t find /boot/grub2. I’ve tried doing it with
$ grub2-install --boot-directory=/boot /dev/sda
and
$ grub2-install --boot-directory=/dev/md127p1/ /dev/sda
and
$ grub2-install --boot-directory=/dev/md127p1/boot /dev/sda
and it tells me it cannot find the canonical path for the grub2 directory. Is there some way to specify where it should find /boot/grub2 that I’ve missed?
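
For what it’s worth, my reading of the grub2-install man page (and I may well be misreading it) is that --boot-directory wants a directory on a mounted filesystem, which it then populates as DIR/grub2, not a device node; i.e. presumably something like:

# /mnt/boot here is just an arbitrary mount point for the md partition holding /boot
$ mkdir -p /mnt/boot
$ mount /dev/md127p1 /mnt/boot
$ grub2-install --boot-directory=/mnt/boot /dev/sda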

mark

11 thoughts on - CentOS 7 Install On One RAID 1 [not-so-SOLVED]

  • To the best of my knowledge: no. To have mdadm started (md devices created) you already need the kernel loaded, and at this stage you don’t have it; you only have it in memory after you load the initial ramdisk. So an initial ramdisk kept on md devices is useless: those devices do not yet exist at the moment the ramdisk has to be loaded.

    Not necessarily: you can have a software RAID/mirror of /dev/sda and /dev/sdb
    (without those having disk labels).

    However, to boot you need a regular drive partition present that hosts /boot
    (and a bootloader somewhere on a drive that does have a disk label). You can have it all on a separate tiny drive.

    Several years back it was done the way I and one other poster described in the thread before it was “rebooted”. It is possible grub has progressed since then, but I doubt that grub supports Linux software RAID devices: for that it would need the appropriate Linux portion of code, which is rather large, and GRUB, being used to boot other systems as well, would then likely need their implementations of the same… But that might be outdated; I hope someone with current knowledge will chime in.

    Valeri

    ++++++++++++++++++++++++++++++++++++++++
    Valeri Galtsev
    Sr System Administrator
    Department of Astronomy and Astrophysics
    Kavli Institute for Cosmological Physics
    University of Chicago
    Phone: 773-702-4247
    ++++++++++++++++++++++++++++++++++++++++

  • In article <1485342377.3072.6.camel@biggs.org.uk>, Pete Biggs wrote:

    If you are using RAID 1 kernel mirroring, you can do that with /boot too, and Grub finds the kernel just fine. I’ve done it many times:

    1. Primary partition 1, type FD, size 200M: /dev/sda1 and /dev/sdb1.
    2. Create /dev/md0 as RAID 1 from /dev/sda1 and /dev/sdb1.
    3. Assign /dev/md0 to /boot, ext3 format (presumably ext4 would work too?)
    4. Make sure to set up both drives separately in grub.

    Typically I then go on to have /dev/sda2+/dev/sdb2 => /dev/md1 => swap, and /dev/sda3+/dev/sdb3 => /dev/md2 => /
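
    From memory, the commands for that layout look roughly like this (device names, sizes and filesystem types will obviously vary):

    # 1+2: identical partitions (type fd) on both disks, then mirror them
    $ sfdisk -d /dev/sda | sfdisk /dev/sdb      # copy sda's layout to sdb
    $ mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
    $ mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
    $ mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
    # (for the /boot mirror you may want --metadata=1.0 so the superblock sits at the end)
    # 3: filesystems and swap on the md devices
    $ mkfs.ext4 /dev/md0        # /boot
    $ mkswap    /dev/md1        # swap
    $ mkfs.ext4 /dev/md2        # /
    # 4: boot loader on *both* disks (grub2-install on C7; plain grub on older releases)
    $ grub2-install /dev/sda
    $ grub2-install /dev/sdb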

    Cheers Tony

  • You didn’t answer all of the questions I asked, but I’ll answer as best I can with the information you gave.

    OK, so right off the bat we have to note that this is not a configuration supported by Red Hat. It is possible to set such a system up, but it may require advanced knowledge of grub2 and mdadm. Because the vendor doesn’t support this configuration, and as you’ve seen, the tools don’t always parse out the information they need, you’ll forever be responsible for fixing any boot problems that come up. Do you really want that?

    I sympathize. I wanted to use full disk RAID, too. I thought that replacing disks would be much easier this way, since there’d just be one md RAID device to manage. That was an attractive option after working with hardware RAID controllers that were easy to manage but expensive, unreliable, and performed very poorly in some conditions. But after a thorough review, I found my earlier suggestion of partitioned RAID with the kickstart and RAID management script I provided was the least work for me, in the long term.

    I assume you’re booting with BIOS, then?

    One explanation for fdisk showing nothing is that you’re using GPT
    instead of MBR (I think). In order to boot on such a system, you’d need a bios_boot partition at the beginning of the RAID volume to provide enough room for grub2 not to stomp on the first partition with a filesystem.
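
    Creating one normally looks something like this (illustration only: mklabel wipes the existing partition table, and /dev/sdX is just a placeholder):

    # reserve a small bios_grub partition for grub2's core image on a GPT disk
    $ parted -s /dev/sdX mklabel gpt
    $ parted -s /dev/sdX mkpart biosboot 1MiB 2MiB
    $ parted -s /dev/sdX set 1 bios_grub on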

    The other explanation that comes to mind is that you’re using an mdadm metadata version stored at the beginning of the drive instead of the end. Do you know what metadata version you used?
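
    You can check with mdadm, e.g.:

    # on a component device; 1.1/1.2 metadata sits near the start, 0.90/1.0 at the end
    $ mdadm --examine /dev/sda | grep -i version
    # or on the assembled array:
    $ mdadm --detail /dev/md127 | grep -i version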

    That’s one option, but it still won’t be a supported configuration.

  • Gordon Messmer wrote:
    Manitu ate my email, *again*.



    Thank you.

    Yup.

    Nope. fdisk sees it as an MBR. The SSDs are only 128G. They just run the server, and the LSI card takes care of the 12 hot-swap drives….
    (It’s a storage server.)

    I took CentOS 7’s default for mdadm.

    Yeah, I see. Well, time to go rebuild, and this time with three separate RAID 1 partitions….

    mark

  • Hmm, OK. I wonder why anaconda doesn’t do it then.

    Reading various websites, it looks like grub2 can do it, but you have to make sure that various grub modules are installed first, i.e. do something like

    grub-install --modules='biosdisk ext2 msdos raid mdraid' /dev/xxx

    I don’t know if they are added by default these days.
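
    One way to check what you’ve got on an installed system would be something like:

    # grub2-install copies its modules onto the boot filesystem; on CentOS 7 the
    # md-related ones are called mdraid09 and mdraid1x (plus diskfilter)
    $ ls /boot/grub2/i386-pc/ | grep -E 'mdraid|diskfilter'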

    The other gotcha is, of course, that the boot sectors aren’t RAID’d:
    if /dev/sda goes, its replacement won’t contain a boot sector, so the system can’t boot from it. Hot swap will keep the system running, but you have to remember to re-install the correct boot sector before the next reboot. If you have to bring the machine down to change the disk, then things could get interesting!
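
    i.e. after swapping the disk you need something along these lines (device names are illustrative; grub2-install on C7, plain grub on older releases):

    $ sfdisk -d /dev/sda | sfdisk /dev/sdb      # copy the partition table to the new disk
    $ mdadm /dev/md0 --add /dev/sdb1            # re-add its partitions to each mirror
    $ mdadm /dev/md1 --add /dev/sdb2
    $ mdadm /dev/md2 --add /dev/sdb3
    $ grub2-install /dev/sdb                    # and re-install the boot sector on it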

    P.

  • In article <1485416344.2047.1.camel@biggs.org.uk>, Pete Biggs wrote:

    I don’t know, but I’ve never had to do it, when using plain mirroring, on either C4, C5 or C6. I can imagine you would need to if /boot was RAID 0 striped, if indeed that is even possible.

    Yup, been there, done that. So long as you use grub to install the boot sector on both drives, then you can always tell the BIOS to boot from the other drive to bring the system up after replacing the first disk.

    Anaconda doesn’t set up the boot sector on the second drive by default, so I put some grub commands in the post-install section of kickstart to do so.
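
    Roughly like this (legacy grub, from my old kickstarts, so treat the details as approximate):

    %post
    # install a boot sector on the second drive as well (legacy grub)
    echo -e "device (hd1) /dev/sdb\nroot (hd1,0)\nsetup (hd1)\nquit" | /sbin/grub --batch
    %end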

    Cheers Tony

  • In article <5ef97952-14c0-6ad2-0803-c24691a6816b@gmail.com>, Gordon Messmer wrote:

    Thanks, that’s interesting to know. When I first started doing this it was on CentOS 4, and I’m pretty sure the second drive didn’t get grubbed back then, which would be what prompted me to add the post-install grub for the second drive at that time.

    I never went back to check whether the need had been obviated in CentOS 5 or 6.

    Cheers Tony

  • First let me say I am not a true expert, but I am experienced.

    If this machine you purchased was some name brand, you must be speaking about hardware raid, true? If this is true, it normally presents you with what looks like a standard drive (/dev/sda) for every 2 drives configured as raid-1. Also, most name brand servers give you a bootable machine day one.

    If you are using software raid, you must have configured it yourself.
    Here is what my custom machine has:

    2 – 120 GB SSD

    2 – 4 TB spinning drives

    I performed the software raid-1 configuration during the CentOS 7 install. I never use the default partition layout, so here is my setup (from fdisk -l):

    Disk /dev/sda: 120.0 GB, 120034123776 bytes, 234441648 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk label type: dos
    Disk identifier: 0x0001d7e8

    Device Boot Start End Blocks Id System
    /dev/sda1 2048 134604799 67301376 fd Linux raid autodetect
    /dev/sda2 134604800 184752127 25073664 fd Linux raid autodetect
    /dev/sda3 184752128 233191423 24219648 fd Linux raid autodetect
    /dev/sda4 233191424 234440703 624640 5 Extended
    /dev/sda5 * 233195520 234440703 622592 fd Linux raid autodetect

    Disk /dev/sdb: 120.0 GB, 120034123776 bytes, 234441648 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk label type: dos
    Disk identifier: 0x000085a6

    Device Boot Start End Blocks Id System
    /dev/sdb1 2048 134604799 67301376 fd Linux raid autodetect
    /dev/sdb2 134604800 184752127 25073664 fd Linux raid autodetect
    /dev/sdb3 184752128 233191423 24219648 fd Linux raid autodetect
    /dev/sdb4 233191424 234440703 624640 5 Extended
    /dev/sdb5 * 233195520 234440703 622592 fd Linux raid autodetect

    Disk /dev/sdc: 4000.8 GB, 4000787030016 bytes, 7814037168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk label type: gpt

    # Start End Size Type Name
    1 2048 1699579903 810.4G Linux RAID
    2 1699579904 3399157759 810.4G Linux RAID
    3 3399157760 3911510015 244.3G Linux RAID
    4 3911510016 5611087871 810.4G Linux RAID

    Disk /dev/sdd: 4000.8 GB, 4000787030016 bytes, 7814037168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 4096 bytes
    I/O size (minimum/optimal): 4096 bytes / 4096 bytes
    Disk label type: gpt

    # Start End Size Type Name
    1 2048 1699579903 810.4G Linux RAID
    2 1699579904 3399157759 810.4G Linux RAID
    3 3399157760 3911510015 244.3G Linux RAID
    4 3911510016 5611087871 810.4G Linux RAID

    My df -h display shows me the following:

    /dev/md126 583M 317M 224M 59% /boot

    I have basically the same definitions on CentOS 6 and CentOS 7, and it’s my understanding that you must have a /boot device. Also, during installation of CentOS 7, when writing the MBR for the MD device (in my case md126), it writes the information to both sda and sdb. With CentOS 6, according to HowToForge, there are extra steps required to get the MBR onto both sda and sdb.

    I have not had to replace either of these SSDs, but I have had to replace spinning drives on my CentOS 6 machines in the past.

    Gene