New Motherboard – Kernel Panic

I had to replace the motherboard on one of my CentOS 4 systems and am now getting a kernel panic.

When I try to boot up, I get this:

Volume group "VolGroup00" not found
ERROR: /bin/lvm exited abnormally! (pid 448)
mount: error 6 mounting ext3
mount: error 2 mounting none
switchroot: mount failed: 22
umount /initrd/dev failed: 2
Kernel panic - not syncing: Attempted to kill init!

(I apologize for any typos, I don’t have a way to copy/paste from that screen)

I can boot up with the boot CD, go into rescue mode and browse the files, so I know the drives are ok.

I found some stuff online saying that I should recreate my initrd from rescue mode if the motherboard changed. I tried this, but am still getting the same results.

Suggestions?

Thanks,

14 thoughts on "New Motherboard – Kernel Panic"

  • The old board was an ASUS A8N-VM CSM with the NVIDIA nForce 430 chipset.

    The new board is an ASRock A785GM-LE board with the AMD SB710 chipset.

  • Anyone have any ideas here? I can rebuild the machine if I have to, but that’s a last resort.

    The old board was an ASUS A8N-VM CSM with the NVIDIA nForce 430 chipset.

    The new board is an ASRock A785GM-LE board with the AMD SB710 chipset.

    Thanks,

  • Bowie Bailey wrote:

    Just sort of guessing – could the new m/b have resulted in a new UUID, and the configuration – fstab? hwconf? – is looking for the old?

    mark “I like LABEL=”
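
    For illustration only (not from the thread): a label can be checked or set with e2label, and fstab can then point at the label rather than a fixed device path, so the entry survives device renaming after a hardware change. The device and label names here are hypothetical:

    e2label /dev/sda1              # show the current label
    e2label /dev/sda1 /boot        # set a label
    LABEL=/boot  /boot  ext3  defaults  1 2    # matching /etc/fstab line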

  • Bowie Bailey wrote:

    Sorry, hit send and had another thought: I think you said you rebuilt the initrd… *could* you see the drives? *Did* the running system you rebuilt from have all the LVM drivers loaded when you rebuilt it?

    mark

  • CentOS 4 – seriously???

    You need to include whatever drivers loaded in rescue mode in the new initrd, but I’ve forgotten the exact details. In CentOS 5 you would add alias entries to /etc/modprobe.conf, but it might have been named something else in C4. Maybe you can see what is there before you chroot to the installed instance, change the file there to match, and then make the new initrd. Once, in a similar circumstance, I just copied the whole contents of /boot from a different machine with identical hardware so I didn’t have to know as much as anaconda about matching hardware and drivers.
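
    As a sketch of what that entry looks like: the ahci alias below matches what the rescue environment reports later in this thread; a NIC alias would change with the new board as well, and the driver name shown for it is purely hypothetical.

    # /etc/modprobe.conf on the installed system
    alias scsi_hostadapter ahci
    alias eth0 r8169               # hypothetical NIC driver for the new board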

  • The drives are in an md array (raid 1) which is used as the PV for the volume group that is producing the error. fstab simply references the logical volume. LVM configuration refers to /dev/md1. mdadm.conf simply says “device partitions”. I can’t go any farther than that without a running system.
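
    For reference, a sketch of the stack just described; /dev/md1 and VolGroup00 are from this thread, while the logical volume name is only a guess at the CentOS default:

    # /etc/mdadm.conf - assemble arrays by scanning all partitions
    DEVICE partitions

    # /dev/md1 (the raid 1 array) is the PV behind VolGroup00;
    # fstab mounts the logical volume, not a raw partition:
    /dev/VolGroup00/LogVol00  /  ext3  defaults  1 1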

  • Yea, it’s an old system.

    There is a /etc/modprobe.conf file on the original system. Among other things, it says:

    alias scsi_hostadapter sata_nv

    I assume that refers to the driver for the nvidia chipset.

    I found a modprobe.conf file in the rescue environment living in
    /tmp/modprobe.conf. This one says:

    alias scsi_hostadapter ahci

    I guess that’s a driver that works with the new hardware? I do not have the ports in ahci mode in the bios.

    What do I need to do to make sure the driver gets into initrd? Or do I
    just need to make the change to /etc/modprobe.conf on the hard drive?

  • If you have somewhere to copy the data, the best approach would be to back it up from the rescue-mode boot, reinstall with CentOS 6, and copy back anything you need – it would be good for many more years.

    lsmod from the running rescue system should show the loaded modules. Your initrd has to include anything needed to access the hard drive and filesystem before you can find the others.

    I think you would change the /etc/modprobe.conf on the hard drive and chroot there (/mnt/sysimage) before running mkinitrd.
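
    For example, a quick way to see from the rescue environment which storage drivers it picked up (the module names are the ones this thread ends up caring about):

    lsmod | grep -E 'ahci|sata|raid1|dm_mod'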

  • This system is running BackupPC. The number of hardlinks in the data makes copying impractical. I may rebuild it with CentOS 6 later and let the backups rebuild themselves, but I don’t want to do it now if I can avoid it.

    I figured that one out just before I received your response. That fixed the bootup problem. The changes to modprobe.conf were what was missing from the instructions I found online on Friday.

    For future reference, this is what I did:

    1) Boot into rescue mode.
    2) Look at /tmp/modprobe.conf in the rescue environment to see what driver was in use.
    3) Edit /mnt/sysimage/etc/modprobe.conf and add the driver there
    4) chroot /mnt/sysimage
    5) cd /boot
    6) mv initrd-(kernel version).img initrd-(kernel version).img.bkup
    7) mkinitrd initrd-(kernel version).img (kernel version)
    8) reboot

    Now I’ve just got to work on getting the network card going, but that (I
    hope!) should be much easier now that the system is booting.
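
    A concrete rendering of those steps, assuming the standard rescue-mode mount point of /mnt/sysimage; the kernel version string is only an example, so check ls /boot on the actual system:

    grep scsi_hostadapter /tmp/modprobe.conf         # 2) driver the rescue kernel chose
    vi /mnt/sysimage/etc/modprobe.conf               # 3) add: alias scsi_hostadapter ahci
    chroot /mnt/sysimage                             # 4)
    cd /boot                                         # 5)
    mv initrd-2.6.9-89.EL.img initrd-2.6.9-89.EL.img.bkup   # 6)
    mkinitrd initrd-2.6.9-89.EL.img 2.6.9-89.EL      # 7)
    exit
    reboot                                           # 8)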

  • I got it figured out. The problem was at the sata driver level. The instructions I found for rebuilding the initrd neglected to mention that I needed to edit modprobe.conf and add the appropriate driver information first.

    I’m still not sure why it was able to get as far as loading the kernel before suddenly being unable to see the drives. If it needs sata drivers to see the disks, why doesn’t it need them to read the boot partition? I didn’t have to mess with grub or the boot sector after changing motherboards.

  • Grub uses the system bios to load the kernel and initrd. Then the kernel takes over and has to either have the needed disk/raid/lvm/filesystem drivers compiled in or available as modules in the initrd to be able to continue and mount the drives.
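
    One way to sanity-check the rebuilt image is to list its contents and confirm the storage modules made it in. This assumes the CentOS 4 initrd is a gzipped cpio archive, which it normally is for 2.6 kernels; the kernel version shown is again only an example:

    zcat /boot/initrd-2.6.9-89.EL.img | cpio -it | grep -E 'ahci|raid1|dm-mod'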
