Raid 1 Question


Hi,

I have a server with two disks. I installed CentOS 5.9 with RAID 1. I
created /dev/md0 to hold “/” and /dev/md1 for swap, and nothing else. GRUB is installed on /dev/md0. After the successful installation, the server does not boot. I don’t see the boot loader; I just see a blinking cursor on a blank screen.

What have I done wrong?

Thanks, Paras.

20 thoughts on - Raid 1 Question

  • On 03/07/2013 05:43 PM, thus Paras pradhan spake:

    Replying off-list: Use the rescue mode of your installation CD/DVD. Then you can apply the commands described there.

    HTH,

    Timo

  • Install GRUB on both sda and sdb. Installing GRUB on the MBR of both disks ensures that your system can still boot if one disk has failed (see the example below).

    Although the Linux OS sees those two drives as a software RAID 1, GRUB looks at a single drive when booting.
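
    With GRUB legacy (CentOS 5) that can be done from the grub shell. The partition numbers below are only illustrative and assume “/boot” (or “/”) is the first partition on each disk:

      grub> device (hd0) /dev/sda
      grub> root (hd0,0)
      grub> setup (hd0)
      grub> device (hd0) /dev/sdb
      grub> root (hd0,0)
      grub> setup (hd0)
      grub> quit

    Mapping each disk to (hd0) in turn means either disk can boot as the BIOS first drive if the other one is missing.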

  • I have a CentOS 5.4 installation with GRUB on /dev/md0 and no problem at all. My primary disk failed, I replaced the disk, and still no problem at all. What has changed in the 5.9 and 6 releases? It’s not easy anymore.

    Paras.

  • When you boot to your rescue CD, check the metadata version of your RAID 1
    array. You might reply back with the output from “mdadm -D --scan”.

    This is similar to what I saw when using an unsupported metadata version (too new for the GRUB version). If you didn’t set up any partitions by hand, I
    would expect Anaconda to set the proper metadata version.
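
    From the rescue environment, something along these lines (device names are illustrative):

      mdadm -D --scan              # one-line summary of each array
      mdadm --examine /dev/sda1    # per-member superblock details, including the metadata version
      cat /proc/mdstat             # quick check that the arrays assembled

    If I recall correctly, GRUB legacy can only boot from members whose superblock sits at the end of the partition (metadata 0.90 or 1.0); the 1.1/1.2 formats put it at the start, so GRUB no longer sees a filesystem it recognizes.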

  • Does the same warning against using software RAID also apply to CentOS 6?

    Thanks, Dave

  • Dave, I’ve been using software RAID with every type of Red Hat distro (RH/CentOS/Fedora) for over 10 years without any serious difficulties. I don’t quite understand the logic in all those negative statements about software RAID on that wiki page. The worst I run into is having to boot from a boot disk if the MBR gets corrupted for any reason. No big deal. Just rerun grub.

    Gerry

  • Have you been putting /boot on an md RAID? That’s what the article is recommending against.

    I’ve always put a static /boot partition on each drive, then made the REST of each drive an md RAID mirror and put LVM in it. All that needs to be done is to rsync the primary /boot to the backup copy following any kernel updates.
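
    The sync itself is only a couple of commands; the device name and mount point below are just examples:

      mkdir -p /mnt/boot-backup
      mount /dev/sdb1 /mnt/boot-backup
      rsync -a --delete /boot/ /mnt/boot-backup/
      umount /mnt/boot-backup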

  • +1

    I don’t understand why either. As long as you remember to install GRUB on each drive, you’re good to go.

    Steve

  • Yes, I have put /boot on /dev/md0 many times. Some distros’ Anaconda versions give great trouble with this. For those I just create the entire filesystem outside of Anaconda and then tell it to use an existing Linux install (roughly as sketched below). Works fine.

    Gerry
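
    Roughly like this, from a rescue shell before running the installer; the partition names are just an example, and metadata 0.90 keeps GRUB legacy happy:

      mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=0.90 /dev/sda1 /dev/sdb1
      mkfs.ext3 /dev/md0

    Then point Anaconda at the existing /dev/md0 instead of letting it create the array itself.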

  • I’ve put /boot on md RAID 1 on a lot of machines (always drives small enough to be MBR based) and never had any problem with the partition looking enough like a native one for GRUB to boot it. The worst thing I’ve seen is that some machines change their idea of BIOS disk 0 and 1 when the first one fails, so your GRUB setup might be wrong even after you do it on the 2nd disk – and that would be the same with or without RAID. As long as you are prepared to boot from a rescue disk, you can fix it easily anyway.

  • No problems here either – I have had /boot on software RAID 1 on quite a few systems, past and present.

    If I do have a drive fail, I can frequently hot-remove it and hot-add the replacement drive to get it resyncing without powering off (see the sketch below).

    I think we could argue that rescue disks are a necessity regardless of whether one is using software RAID or not. :)
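
    The swap usually looks something like this (device names are illustrative, assuming an MBR-partitioned mirror):

      mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1   # if the kernel has not already kicked it out
      sfdisk -d /dev/sda | sfdisk /dev/sdb                 # after the physical swap, copy the partition table from the survivor
      mdadm /dev/md0 --add /dev/sdb1                       # starts the resync
      cat /proc/mdstat                                     # watch the rebuild progress

    Reinstalling GRUB on the new disk afterwards is a good idea as well.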

  • Thanks for all of the helpful info, and now I have a follow on question. I have a Dell m6500 that I’ve been running as a RAID 1 using the BIOS RAID on RHEL 5. The issue is that when you switch one of the drives, the BIOS renames the RAID and then RHEL 5 doesn’t recognize it anymore. So here are my questions:

    1) Has this issue of handling the renaming been resolved in RHEL 6?
    (my guess is no)
    2) Would a software RAID be a better choice than using the BIOS RAID?
    3) If a software RAID is the better choice, is there going to be an impact on performance/stability/etc.?

    Thanks, Dave

  • I’ve not seen any weird bios naming issues.

    It depends! In another thread on this list someone said they prefer the reliable Linux toolset over the manufacturer tools/RAID controllers.

    In a way it comes down to what you can afford and what you are comfortable with. And then there are dedicated chips on hardware RAID controllers to which the RAID operations are offloaded.

    Supposedly hardware RAID performs better. I’ve not done any tests to quantify that claim; I’ve only heard others make it.

    Since it’s software RAID, the OS will be using a few CPU cycles to handle the array. But I doubt you’ll miss those CPU cycles … I haven’t, and I have a mix of hardware RAID and software RAID systems. In the end it will come down to drive performance, in terms of how many IOPS you can squeeze out of your drives.

    Make certain to set up checks for your array health and your disk health
    (smartd); there is a rough sketch after the links below. If you use hardware RAID, many controllers don’t allow accessing the drives directly with smartctl … you have to use a vendor binary/utility (or an open source one if available).

    And then there’s aligning your hardware RAID boundaries… [0] [1] :)

    [0] http://www.mysqlperformanceblog.com/2011/06/09/aligning-io-on-a-hard-disk-raid-the-theory/
    [1] http://www.mysqlperformanceblog.com/2011/12/16/setting-up-xfs-the-simple-edition/
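
    As a rough starting point for those health checks (array and disk names are illustrative):

      cat /proc/mdstat            # quick view of all md arrays and any rebuild in progress
      mdadm --detail /dev/md0     # per-array state and the failed/active members
      smartctl -H /dev/sda        # SMART health summary for one disk
      # and in /etc/mdadm.conf, so the mdmonitor service can mail you about failures:
      #   MAILADDR root@example.com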

  • Almost certainly, yes.

    You’d have to measure it on your own workload, but I wouldn’t expect any. Hardware RAID frequently offers a significant performance benefit as a result of having a battery-backed write cache. As long as you don’t fill the cache, writing to the RAID card’s RAM is very fast, and writes go to disk as idle time becomes available. Your BIOS RAID probably doesn’t have a write cache, and so is probably no better than software RAID for performance.

  • It’s an Intel software RAID, and whenever either of the drives is switched the system won’t boot at all because it says it can’t find the device. I end up having to boot to a rescue disk and then manually edit the init image to get it to boot.

    Thanks to everyone for all the info and help. It sounds like the best solution is software RAID with /boot on a non-RAID partition that is then rsynced onto the second drive, so I’ll give that a whirl.

    Thanks again, Dave

  • Just a bit of emphasis here:

    I’ve had success with /boot being part of a RAID1 software array and installing GRUB on the MBR of both disks. That way if the main/first disk fails the other disk still has GRUB installed and the system will boot if rebooted.

    A software RAID 1 is viewed by GRUB as individual disks. That is, it boots off the _first drive_; it is not until the Linux kernel has loaded and mdadm has assembled the array that you actually have a RAID 1. Since GRUB only boots off one drive, it is prudent to install GRUB on both disks when set up this way.

    Putting /boot on a RAID 1 software array saves you from having to rsync /boot to the other drive, and from booting to a rescue CD to install GRUB on the new drive after the primary drive dies. The above configuration is some work up front, but less hassle in the wake of a drive failure.

    Try it on a test system if you don’t trust the configuration … it will boot. :)
    I have this configuration on fifteen or more systems (rough estimate) and some have been in service for years.

    Best Regards,

  • That does sound like a simpler solution in the longer term and I’m more concerned with maintenance/use than with the difficulty of setting it up, so I will give this a whirl. Thanks, Dave
