Intel RST RAID 1, Partition Tables And UUIDs

I have been having some problems with hardware RAID 1 on the motherboard of the system I am running CentOS 7 on. After a BIOS upgrade, I lost the RAID 1 setup and was no longer able to boot the system.

Testdisk revealed that the partition tables had been damaged, and because I had earlier saved information from fdisk, I was able to recreate the partitions. However, after booting into the BIOS and recreating/synchronizing the RAID from one of the disks (which took around 20 hours for 256 GB disks), I again lost the ability to boot and the partition tables were similarly damaged. Eventually I was able to boot the system again from one of the two disks, and it is now up and running without RAID.

– Should I expect that the Intel RAID 1 setup changes the partition tables? I should add that the disks were originally partitioned after the RAID 1 was set up.

– After fooling around with testdisk (and prior to that, parted), it turned out that I had lost the disk UUID for both disks; they were both set to 000… Partition UUIDs seemed unchanged from before, including the LUKS partitions. Are disk UUIDs (not partition UUIDs) simply not used by Linux? I ask because I have not – yet – seen any effects of the missing disk UUIDs.

Because there seem to be a couple of other issues with the motherboard, I expect to have it replaced early this week, but I am still interested in learning more.

Thanks.

8 thoughts on - Intel RST RAID 1, Partition Tables And UUIDs

  • The Intel RST RAID (aka Intel Matrix RAID) is also known as a fakeraid. It isn’t a hardware RAID, but instead a software RAID that has a fancy BIOS interface. I believe that the mdadm tool can examine the RAID settings, and you can look at /proc/mdstat to see its status, although from what I remember from previous posts, it’s better to just let the BIOS think it’s a JBOD and use the Linux software RAID tools directly.
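
    For example (assuming the fakeraid member disks show up as /dev/sda and /dev/sdb – the device names here are just placeholders), something along these lines should show what mdadm makes of the Intel (IMSM) metadata:

      cat /proc/mdstat              # any assembled md arrays and their current status
      mdadm --detail-platform       # what the Intel RST firmware/option ROM supports
      mdadm --examine /dev/sda      # dump the RAID metadata stored on that disk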

  • The main advantage I know of for BIOS fake-raid is that the BIOS can boot off either of the two mirrored boot devices. Usually, if the sata0 device has failed, the BIOS isn’t smart enough to boot from sata1.

    The only other reason is if you’re running an MS Windows desktop, which can’t do mirroring on its own.

  • I see, thank you. Right now I am running off one of the disks because of the mishap. I am also waiting for a system board replacement, at which time I can decide whether to go with Linux software RAID, i.e. mdadm, or back to the Intel BIOS RAID.

    The latter lacks any progress indicators in the BIOS when rebuilding an array, which took around 20 hours for a 256 GB RAID 1 setup, and it is annoying not to know the status of the rebuild etc. Could mdadm in a command window have helped me answer that question?

    Also, it seemed that the BIOS RAID damaged the partition table on the disks – should I expect that this happens? My guess would be no but what do I know…

  • Thank you. As I mentioned, I am running from one disk, and the two disks have identical disk UUIDs and identical partition UUIDs, both of which I assume are an effect of the BIOS fake RAID.

    If I were to go with Linux mdadm and a RAID 1 configuration, am I correct in assuming that I would:

    – decide which one is the “master” disk

    – configure mdadm to sync to the other disk

    Would I need to change the disk UUIDs and partition UUIDs on the second disk prior to this, or would mdadm synchronize things as needed?
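
    From what I have read so far, I imagine something along these lines – just a rough, untested sketch, where /dev/sdb2 stands for the partition on the disk that gets wiped first and /dev/sda2 for the one currently holding the data:

      mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb2 missing   # degraded mirror on the empty disk
      # ...copy the data from /dev/sda2 onto /dev/md0, adjust fstab/boot loader...
      mdadm --add /dev/md0 /dev/sda2    # mdadm then syncs this partition into the mirror
      cat /proc/mdstat                  # shows the resync progress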

    Thanks.

  • I’d use software raid rather than the fakeraid. One of the advantages is that you are not limited to the mainboard and can use the disks in another machine if you need to. If you need to replace the board, you are not limited to one that provides a compatible fakeraid.

    Using software raid with mdadm will give you an indication of the progress of rebuilds and checks by looking at /proc/mdstat, and if so configured, you can automatically get an email when a disk fails. Being informed about disk failures is important.
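
    For example (assuming the array is /dev/md0 and using a placeholder mail address):

      cat /proc/mdstat                 # shows rebuild/resync progress as a percentage with an ETA
      mdadm --detail /dev/md0          # state of the array and of each member disk

      # in /etc/mdadm.conf, so the mdmonitor service can mail you about failures:
      MAILADDR you@example.com

      mdadm --monitor --scan --oneshot --test   # sends a test alert so you know the mail path works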

    I’ve used Linux software raid for at least 20 years and never had any problems with it other than questionable performance disadvantages compared to hardware raid. When a disk fails, you just replace it. I’ve also recently found that it can be impossible to get a rebuild started with hardware raid, which makes it virtually useless.

    I’ve never used the (Intel) fakeraid. Why would I?

    If you don’t require CentOS, you could go for Fedora instead. Fedora now has btrfs as its default file system, which has software raid built in, and Fedora can have advantages over CentOS.
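
    A btrfs RAID 1 over two disks is a one-liner, for example (placeholder device names, and this of course wipes both devices):

      mkfs.btrfs -m raid1 -d raid1 /dev/sdX /dev/sdY
      btrfs filesystem show       # lists the devices belonging to the filesystem
      btrfs scrub start /mnt      # verifies/repairs data using the mirrored copies (assuming it is mounted at /mnt)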

  • There are advantages in a bleeding edge that one can find useful. There is plausibly some bleeding too, so don’t be surprised.

    Valeri

  • There is bleeding with CentOS 7, too, and CentOS 8 is probably no different. One can always be surprised.

    I’m not so much referring to bleeding as to advantages like packages being available in Fedora that aren’t available in CentOS. And not being able to upgrade the distribution when a new release comes out is a killer for CentOS; there are things in CentOS 8 that make me wonder why I shouldn’t go for Fedora right away. At least then I get the goodies.

    But then, there are now things in Fedora that make me wonder if I should switch to Arch. Like how misguided is it to forcefully enable swapping to RAM by default? Either you have plenty of RAM and swapping has no disadvantages, or you don’t and swapping to RAM only makes it worse. I can see that it might have advantages when you don’t create a swap partition, but that’s already a bad idea in the first place unless you have special requirements that are far from any default. I don’t even dare wonder if it can get any more stupid, because unfortunately there is no limit to stupidity, and the only thing that helps against it is more stupidity.
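
    To see whether that swap-on-zram default is active you can check with:

      swapon --show     # /dev/zram0 shows up here if swap-on-zram is enabled
      zramctl           # sizes and compression of the zram devices

    If I remember correctly, removing the zram-generator-defaults package is the supported way to turn that default off again.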

    And systemd usurping the functionality of crond? The last thing we need is for systemd to become even more cryptic because of that. And how can I check whether I am getting an email when a failed disk is detected, or when errors are detected by raid-check? I can do that with crond, but not with systemd.
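
    With crond it is transparent: on CentOS 7 the weekly raid-check arrives as a plain cron job (shipped with the mdadm package, if I remember right), so mailing its output is just normal cron behaviour, roughly:

      # /etc/cron.d/raid-check (schedule approximate, not copied verbatim)
      MAILTO=root
      0 1 * * Sun root /usr/sbin/raid-check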

  • And that is why my servers run FreeBSD. But when I switched from Fedora to CentOS (quite a while back), it made a noticeable difference.

    Valeri

    And that is designed into the way distributions are maintained.

    Some of them are “sliding release” style, like Fedora, Debian… With those you often get surprises: upon a routine update something breaks, because a package is replaced with a higher version that has different internals. But these are a charm to “upgrade” to the next release. One can also mention FreeBSD and macOS as being close to this, IMHO.

    Others are “Enterprise”, with a very long life. They are patched by backporting fixes (very effort-consuming), but the packages remain mostly “unchanged” internally, so during the 10 years of such a system’s life cycle it is only rarely that things break. But when the end of the life cycle comes, you effectively have to build a new system, as virtually none of the software packages can just step up from a 10-year-old release to today’s. You effectively do at once everything you did over 10 years of a “sliding release” system. Examples of this style are Red Hat Enterprise Linux and CentOS (a “binary replica” of the former). With all the bad one can say about Microsoft, I would mention that on an MS Windows system, something you install when it is released will still work when the system’s maintenance ends 10 years later.

    So it is one’s choice which style of system to install and maintain. I for one chose CentOS for number crunchers and workstations, as it takes less of my time to maintain (and FreeBSD for servers, but that is a different story). Your choice appears to be different, and we are both right in our choices based on our goals.

    Systemd has a portion of code comparable to the mainstream Linux kernel (I bet experts will correct me where I’m wrong). You can try to go with a systemd-free Linux distro like Devuan (a fork of Debian that happened when Debian went the systemd way). Or you can try one of the BSD descendants, which, being such, are closer to the original UNIX philosophy: FreeBSD, NetBSD, OpenBSD (and a variety of others standing quite close to these, or slightly further apart; your duckduckgo search will be as good as mine).

    Valeri