Order of SATA/SAS RAID Cards

Hi.

I bought a new Adaptec 6405 card along with new (much larger) SAS drives (arrays).

I need to copy the content of the current SATA drives (on the old Adaptec 2405) to the new SAS drives.

When I put the new controller into the machine, the card is detected and I can see the kernel bringing up both the new drives and the old drives. The problem is that the new drives come up as sda and sdb, which stops the boot: the kernel cannot find root and panics.

Is there a way to tell the kernel in which order to probe the controllers and assign device names, so that the new drives become sdc and sdd and the old drives keep sda and sdb?

thanks Jobst

16 thoughts on - Order of SATA/SAS RAID Cards

  • use UUID= in fstab (lsblk -o NAME,KNAME,UUID) and you will get rid of all these headaches (if you have software RAID, assembly is done internally based on UUID, so you don’t have to worry about mdraid) – see the fstab sketch below

    HTH, Adrian
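
    For reference, a minimal sketch of what that change looks like (the device names and UUIDs below are made up for illustration; take the real values from blkid or lsblk on your box):

        # find the UUIDs of the filesystems on the existing drives
        blkid /dev/sda1 /dev/sda2
        lsblk -o NAME,KNAME,UUID

        # /etc/fstab -- refer to filesystems by UUID instead of /dev/sdX
        UUID=c05e449a-837b-4f2d-b257-9b949d550001  /      ext3  defaults  1 1
        UUID=c05e449a-837b-4f2d-b257-9b949d550002  /boot  ext3  defaults  1 2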

  • Hi Adrian

    Yes, this will do. Since I do not yet know the UUIDs of the new partitions (drives): if I specify UUIDs for the partitions on the known drives, will the kernel assign the new drives to higher sdX names?
    Is this correct?

    thanks Jobst

  • After reboot sdX could be sdY, as you noticed. The solution: you don’t access a drive via /dev/sdX. You access it by UUID, and the kernel maps that to the appropriate sdX name, which could be sdY after a reboot.

    I am not sure about initial ramdisks etc. – maybe there is stuff hardcoded to sdX in there. Maybe the initrd has to be rebuilt as well as updating fstab? (See the sketch below.)
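
    A rough sketch of that rebuild step, assuming a stock CentOS kernel (the image name matches whatever kernel you actually boot):

        # CentOS 5: regenerate the initrd for the running kernel
        mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)

        # CentOS 6: the same idea, done with dracut
        dracut -f /boot/initramfs-$(uname -r).img $(uname -r)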

  • Markus Falb wrote:

    You can also label it. I loathe UUIDs – there is *no* way you’re going to remember one when you need it. Labels are so much clearer.

    I’ve actually never seen a system *not* know what the first drive was, hardware-wise. And grub will point to root (hd0,x), normally, not UUID or anything else. You *can* (and I do, all the time) use LABEL= on the kernel line – see the sketch below.

    mark
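
    A minimal sketch of the label approach (the labels shown are just the installer-style "/" and "/boot"; the kernel version is a placeholder):

        # put labels on the ext3 filesystems
        e2label /dev/sda1 /boot
        e2label /dev/sda2 /

        # /etc/fstab -- mount by label
        LABEL=/      /      ext3  defaults  1 1
        LABEL=/boot  /boot  ext3  defaults  1 2

        # kernel line in /boot/grub/grub.conf
        kernel /vmlinuz-2.6.18-XXX.el5 ro root=LABEL=/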

  • Hi!

    AFAIK the sdX names follow the order in which the BIOS/kernel sees the controllers, which is why the sdX nomenclature changes when new hardware is added and/or something changes hardware-wise.
    If you decide to use UUIDs you should use them for ALL disks/partitions .. for CentOS that means, besides fstab, modifying grub so that the kernel command line has something like root=UUID= (see the stanza below).
    IMHO the easiest way to change everything to UUID is to boot a live CD, find out all the UUIDs, and modify fstab and grub accordingly. I don’t know about root(hd0,0) .. I have grub and / installed on a disk which the system recognizes as /dev/sdc, and in grub.conf I have hd(2,msdos1)

    HTH, Adrian
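
    Roughly, the grub.conf side of that looks like the stanza below (grub legacy on CentOS 5/6; the UUID and kernel version are illustrative only):

        title CentOS
            root (hd0,0)
            kernel /vmlinuz-2.6.18-XXX.el5 ro root=UUID=c05e449a-837b-4f2d-b257-9b949d550001
            initrd /initrd-2.6.18-XXX.el5.img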

  • Adrian Sevcenco wrote:


    I’ve never seen anything like msdos1 – I assume it’s a label, but I’ve seen nothing suggesting I could do that. At any rate, that’s hard drive 3, or the third drive as presented to the BIOS?

    mark

  • err, sorry, that was my mistake … I copy-pasted from the wrong terminal
    (from my desktop Fedora 16 with grub 2, instead of from the CentOS server I was looking at initially)

  • so, to wrap things up: on my CentOS 5 storage box I have root (hd0,0),
    which stayed the same no matter how many block devices I added or removed from my hardware RAID card… but in fstab I use only UUIDs

    HTH, Adrian

  • Adrian Sevcenco wrote:

    Oh, no problem. I can’t see how you could *possibly* have copied from the wrong term (says the guy with seven open for use, sudo -s on three, and one for later use for streaming media….)

    Yup, that’s what we’ve got. As I said, though, I hate UUIDs – for any non-RAIDed drives, we *always* label them, indicative of where they mount. That way, we all know what’s expected, whereas a UUID tells you nothing about what it is.

    mark

  • I agree with the UUID stuff; I do not like them for the exact same reason. I do not understand why Red Hat cannot include the device name in the UUID, e.g.

    dev-sda1-c05e-449a-837b-b2579b949d55

    As for the first drive: when the kernel boots, I think it assigns the drives in the order the controllers appear on the system bus/slots. As the new controller sits lower in the slot layout (i.e. closer to the CPUs) it is recognised first – I can see it being initialized first by the kernel. I can’t move it below the old card, as there is no other slot with the right PCIe x8 connector.

    I will try the LABEL way of doing it …

    I remember the same problem a few years back when one had multiple network interfaces … until MAC addresses were introduced into the ifcfg files (see the sketch below).

    Jobst
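
    For comparison, that NIC pinning is just a HWADDR line in the interface config (the MAC and IP below are made up):

        # /etc/sysconfig/network-scripts/ifcfg-eth0
        DEVICE=eth0
        HWADDR=00:1B:21:AA:BB:CC   # ties this config to the NIC with this MAC
        BOOTPROTO=static
        IPADDR=192.168.1.10
        NETMASK=255.255.255.0
        ONBOOT=yes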

  • The problem with labels is that there’s no guarantee they will be unique. The default labels that the CentOS installer uses are the same on every system, so if you plug a drive into another computer, the odds are pretty high there will be a collision.

  • I’ve done that before to get some old data off a drive, and the system appended a “1” to all matching label names.
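
    One way to spot (and fix) such a clash before it bites – the device name and new label below are examples only:

        # show the label, if any, on every detected filesystem
        blkid -s LABEL

        # relabel the newly attached ext2/3/4 filesystem so it no longer collides
        e2label /dev/sdc1 /olddata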

  • Ken godee wrote:
    I’ll step into this again: let’s look at the context.

    1. A drive has failed. No conflict.
    2. A server has failed, and you want something off one of its disks:
       a) You put the disk in a hot-swap bay and aren’t rebooting the server –
          you are going to be mounting it manually, so no conflict.
       b) You need to replace the server in <10 sec: you throw the drive(s)
          into a standby box, and either
          i. it’s got partitions labelled /boot and /; fine, you
             *want* it to use those, or
          ii. you want a drive from another disk on that failed
              system: no problem – see 2.a.
       c) You have a system without hot-swap bays, you install
          the drive from the failed system, and then you do have to
          power up; this is the only case I can think of, off the
          top of my head, where you have a collision. In this case,
          you need linux rescue and a relabel.

    So, where’s the big issue with std. labels?

    mark

  • You power down and add some disks that you want to re-use. Maybe you even add a controller. Just because a bay looks hot-swappable doesn’t mean hot-swapping is a good idea if you don’t have to. You boot up. When the label scheme was first rolled out, the machine wouldn’t boot if it found a duplicate. Now it will pick one. Possibly the wrong one. As you might when you do a rescue boot for the relabel, since you won’t know which controller is detected first.

  • Les Mikesell wrote:

    Okayyyy… We differ, here – I’ve come to adore hot-swap bays, and hate having to take a system apart to add another drive.

    Reused disks – I reformat them, usually in a hot swap bay.

    Of course, I *do* have some additional concerns – I have to worry about PII and HIPAA data that may, *possibly*, be on the drives.

    But you can do a rescue, mount, and look at what’s on what the controller found.

    mark

  • Same here, in terms of the actual swap. But I’m old enough to remember electronics that were sensitive to static, power fluctuations, etc., so I generally power down while doing it. And I
    don’t want to create a scenario where the machine might do something unexpected if it did happen to reboot with the disks added.

    Same here, but I’ve had unwanted surprises from duplicate labels before the format. Hence the conclusion that duplicate labels are as bad an idea as duplicate hostnames, IP addresses, or any other identifier would be.

    I normally don’t have to worry about contents unless the disks leave the site.

    And they all look alike…
