CentOS 7 Install Software Raid On Large Drives Error


Greetings –

I am trying to install CentOS 7 on a new Dell Precision 3610. I have two
3 TB drives that I want to set up in software RAID1. I followed the guide here for my install, as it looked fairly detailed and complete
(http://www.ictdude.com/howto/install-CentOS-7-software-raid-lvm/). I only changed the size of the partitions from what is described, but ended up with a disk configuration error that won’t allow the installation to complete. The error is:

You have not created a bootloader stage 1 target device. You have not created a bootable partition.

So I am clearly missing a step in setting up the drives, likely before running the installer. My disks are blank, raw disks right now with nothing on them. Reading the RHEL Storage Administration Guide (Sec. 18.6, RAID Support in the Installer), this should be supported, but I am assuming I may need to do something different because the drives are larger than 2 TB. I have a SystemRescueCD with GParted that I can use to do some setup in advance of the installer, but I am not sure exactly what I need to do.
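
For reference, the sort of pre-setup I am imagining from the SystemRescueCD would be roughly the following with parted (just a sketch of my thinking; the device names, sizes, and partition layout are placeholders I have not verified):

    # GPT label plus small EFI and /boot partitions; the rest of each 3 TB
    # disk is left unpartitioned for the installer to use.
    parted -s /dev/sda mklabel gpt
    parted -s /dev/sda mkpart ESP fat32 1MiB 513MiB
    parted -s /dev/sda set 1 boot on
    parted -s /dev/sda mkpart boot ext4 513MiB 1537MiB
    # Repeat the identical layout on the second disk.
    parted -s /dev/sdb mklabel gpt
    parted -s /dev/sdb mkpart ESP fat32 1MiB 513MiB
    parted -s /dev/sdb set 1 boot on
    parted -s /dev/sdb mkpart boot ext4 513MiB 1537MiB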

My objective is to RAID1 the two drives, use LVM on top of the RAID, and install CentOS 7 as a KVM host system with two KVM guests (Linux Mint and Windows 7). Can anyone tell me the steps I am missing, or point me to a better tutorial than what I have found in my extensive Google searches? Thanks.

Please cc me directly on replies as I am only subscribed to the daily digest. Thanks.

Jeff Boyce www.meridianenv.com

6 thoughts on - CentOS 7 Install Software Raid On Large Drives Error

  • btrfs will meet your functional objectives (mirroring, management, and expandability) and should be simpler to set up within the CentOS 7 installer. It also gets you a more durable filesystem, thanks to features like checksummed metadata and scrubbing.

    This is the first release where that is a supported option, which is why your search results aren’t turning up much about it.
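
    If you end up doing it outside the installer instead, a mirrored btrfs setup is roughly this (a sketch only; the partition names are placeholders and not tailored to your disks):

      # Mirror both data and metadata across the two devices.
      mkfs.btrfs -m raid1 -d raid1 /dev/sda3 /dev/sdb3
      mount /dev/sda3 /mnt
      # A periodic scrub verifies checksums and repairs from the good copy.
      btrfs scrub start /mnt
      btrfs scrub status /mnt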

  • The last time I actually needed to do this was probably CentOS 5, so someone will correct me if I have not kept up with all the changes.

    1. Even though GRUB2 is capable of booting off of an LVM drive, that capability is disabled in RHEL & CentOS. Apparently RH doesn’t feel it is mature yet. Therefore, you need the separate boot partition. (I have a computer running a non-RH grub2 installation, and it boots off of LVM OK, but apparently it falls into the “works for me” category).

    2. I cannot comment from experience about the separate drive for /boot/efi, but needing a separate partition surprises me. I have not read about others needing that. I would think that having an accessible /boot partition would suffice.

    3. When grub (legacy or grub2) boots off of a RAID1 drive, it doesn’t “really” boot off of the RAID. It just finds one of the pair and boots off of that “half” of the RAID. It doesn’t understand that this is a RAID drive, but the on-disk structure for RAID1 is such that it just looks like a regular drive to grub. Basically, it always boots off of sda1. If sda fails, you have to physically (or in the BIOS) swap sda and sdb in order for grub to find the RAID copy.

    4. At one time, I recall that the process for setting up RAID for the boot drive was basically the following (a rough sketch of the commands follows after the list):
    a. Create identical boot partitions on both drives (they used to have to be at the beginning of the drive, but I don’t think that is necessary any more).
    b. Partition the rest of your drive as desired.
    c. Do the install using sda1 as the boot partition (ignore sdb1).
    d. After the installation, convert sda1 and sdb1 into a RAID1 array (probably md1 in your case).
    e. Go through a process that copies the boot sector information from sda to sdb, so sdb is ready for the scenario mentioned in step 3.

    In summary: grub doesn’t understand RAID arrays, but it can be tricked into booting off of a RAID1 disk partition. However, you don’t get full RAID benefits. Yes, you have a backup copy, but grub doesn’t know it is there. It’s more like you have to put it in grub’s way, so that grub trips over it and uses it.

    The only way to find out if your setup has all the pieces in place is to physically remove sda, and see if the boot off of sdb completes or not.

    Ted Miller Indiana, USA

  • A few comments in-line and at the bottom.

    Now that you say that, I do recall seeing someone mention that before on this list, but I had not run across it recently in all my Google searching.

    I tried a lot of different combinations with the installer and pre-partitioning the drives, but I don’t recall if I tried putting the /boot and /boot/efi on the same partition outside of the RAID. That may work, but I am not going back to try that combination now.

    This seems reasonable, and appears to jibe with a lot of the information that I read this weekend.

    Yep, I created an sda1 and sda2 (for /boot/efi and /boot), then created an identical sdb1 and sdb2 using GParted prior to running the installer.

    What I did here was leave the remaining portion of the drive unpartitioned in GParted, so that I would then use the installer to create the RAID and LVM volume group.
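
    For the record, my understanding is that what the installer does there is roughly equivalent to doing this by hand (sketch only; the partition numbers, volume group, and LV names are just placeholders, not what the installer actually picks):

      # RAID1 across the large remaining partitions, then LVM on top of it.
      mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
      pvcreate /dev/md0
      vgcreate vg0 /dev/md0
      lvcreate -L 50G -n root vg0
      lvcreate -L 16G -n swap vg0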

    Yep, I had the installer put /boot/efi on sda1 and /boot on sda2. Ignored sdb1 and sdb2 during the installation.

    I think I am going to leave those partitions outside of a RAID configuration and just use rsync periodically to keep them synchronized. It is my understanding that there will not be many file changes within these partitions, and this way I don’t have two RAID1 arrays on the same set of disks.

    I haven’t done this yet; that is my next step. I see plenty of advice for using dd to copy sda1 and sda2 to sdb1 and sdb2, and then also needing to make them bootable. I will have to check my notes again to see exactly what to do here.
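
    From my notes so far it will probably be something along these lines (an untested sketch; the partition numbers match my layout, but the EFI entry label and loader path are assumptions I still need to verify):

      # Clone the EFI system partition and /boot from sda to sdb (the sdb
      # partitions were created identical in GParted, so a raw copy fits).
      dd if=/dev/sda1 of=/dev/sdb1 bs=4M
      dd if=/dev/sda2 of=/dev/sdb2 bs=4M
      # Add a firmware boot entry pointing at the copy on the second drive
      # (shim path is from memory; double-check it on the installed system).
      efibootmgr -c -d /dev/sdb -p 1 -L "CentOS (sdb)" -l '\EFI\centos\shimx64.efi'
      # Keep the copies in sync later with rsync (mount point is just an example):
      # mount /dev/sdb2 /mnt/boot-sdb && rsync -a --delete /boot/ /mnt/boot-sdb/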

    I like that description; put it in grub’s way so that it trips over it and uses it.

    Once I get my boot partitions copied over to sdb and made bootable, I plan on disconnecting sda and verifying that everything boots up properly, probably repeating that a couple of times back and forth with each drive to be sure. Then I will finish my notes on what to do to restore the system when I have to replace a failed drive.

    Thanks for your summary of the situation. It confirms most of the information I waded through in Google searches this weekend to see if what I
    had prepared up to this point was the proper way to meet my objective.

    Jeff

  • Systems that boot with UEFI instead of BIOS require separate partitions for /boot and /boot/efi.

    UEFI must boot from a FAT filesystem (the EFI System Partition, normally FAT32). That filesystem will include the UEFI shim that’s signed by Microsoft’s UEFI CA for trusted boot, a version of GRUB2 that’s signed by Red Hat (IIRC), and the GRUB2 configuration file. The /boot filesystem will contain the same thing on UEFI that it does under BIOS, namely the kernel and initrd.
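
    If you want to see those pieces on an installed system, something like this shows them (the paths are the usual CentOS 7 locations, from memory):

      lsblk -f                      # the ESP shows up as a small vfat partition
      ls /boot/efi/EFI/centos/      # shim, the signed grubx64.efi, and grub.cfg
      efibootmgr -v                 # firmware boot entries pointing at the shim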

  • For RAID-1 on CentOS 7, have a look at the following:
    <http://binblog.info/2014/10/25/CentOS-7-on-md-raid-1/>

    It deals with the situation, including mirroring of /boot. Note that in my case I disabled UEFI in the BIOS, so /boot/efi didn’t come up in the automatic partitioning.

    That URL also implies a not-so-stellar assessment of getting help on this list. In that particular case, the assessment was IMO accurate. I.e., if you’re going to answer the question, then answer the question and don’t go off on a tangent.

    Devin