Mdraid Doesn’t Allow Creation: Device Or Resource Busy


Dear fellow CentOS users,

I have never experienced this problem with hard disk management before and cannot explain it to myself on any rational basis.

The setup:
I have a workstation for testing, running the latest CentOS 7.3 AMD64. I am evaluating oVirt and a storage HA setup as part of my bachelor’s thesis. I have already been running a RAID1 (mdraid, lvm2) for the system and some oVirt 4.1 testing. Now I have added 6x 500 GB platters from an old server running Debian 8 Jessie, which used a similar software RAID setup. Unexpectedly, this prevents the system from booting past the following message (copied here from the working setup). I left it running overnight, so it was actually stuck there for about 16 hours:
“A start job is running for dev-mapper-vg0\x2droot.device (13s / 1min 30s)”

Can it be just some kind of a scan that takes this long? The throughput implied by the time (16 h) and the capacity (3 TB) would be about 50 MB/s. (Those drives can be pretty slow when writing; dd showed about 30 MB/s, with the write cache off.)
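(Rough arithmetic behind that figure: 6x 500 GB ≈ 3,000,000 MB over 16 h = 57,600 s, which comes out to roughly 52 MB/s.)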

This is actually repeatable. If I unplug those drives and boot again, it all works.

I don’t know if it helps, but before that I had two screens full of:
“dracut-initqueue[331]: Warning: dracut-initqueue timeout - starting timeout scripts”

Well, I proceeded without this array, and after the system had booted I connected the array of 6 hard disks again. They were recognized etc. The problem is that I cannot do much with them. I can dd from and to the hard drives, and I can create and delete partitions, but I cannot create an md RAID array out of them, I cannot create a physical volume, and I cannot format them with a filesystem. I even tried overwriting all of those hard drives with zeroes, which worked but didn’t help at all with the creation of the array afterwards.

mdadm --create /dev/md6 --level=5 --raid-devices=6 /dev/sd[cdefgh]1

“mdadm: cannot open /dev/sdc1: Device or resource busy”

With pvcreate, it seems as if there were no device, but I can clearly see /dev/sdc1…

“Device /dev/sdc1 not found (or ignored by filtering).”

partprobe yields:

“device-mapper: remove ioctl on ST3500630NS_9QG0P0PY1 failed: Device or resource busy
Warning: parted was unable to re-read the partition table on /dev/mapper/ST3500630NS_9QG0P0PY (Device or resource busy). This means Linux won’t know anything about the modifications you made.
device-mapper: create ioctl on ST3500630NS_9QG0P0PY1part1-mpath-ST3500630NS_9QG0P0PY failed: Device or resource busy
device-mapper: remove ioctl on ST3500630NS_9QG0P0PY1 failed: Device or resource busy”

In some forums it was suggested that dmraid (yes, the old one) could be the trouble. I have eliminated that hypothesis (I overwrote everything with zeroes, and fakeraid was never present on these disks). Someone also suggested that multipathd/dm_multipath could be the trouble. The problem is that if I removed device-mapper-multipath, I would lose oVirt-engine, because it has multipath as a dependency for some reason.
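For reference, a generic way to check what is actually holding such partitions open is something like:

cat /proc/mdstat   (was an old md array auto-assembled?)
dmsetup ls         (are there device-mapper maps, e.g. from LVM or multipath, on top of the disks?)
lsblk              (what is stacked on each drive and partition?)
multipath -l       (is multipathd claiming the drives?)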

Do you have any ideas? What logs or other information should I provide if you want to have a look into this?

Best regards
Adam Kalisz

2 thoughts on - Mdraid Doesn’t Allow Creation: Device Or Resource Busy

  • Run “vgs” to see if the drives brought up an old volume group. Run “pvs”
    to see if they were imported into the volume group you were already
    using. Check the content of /proc/mdstat to see if an old RAID array was
    started.

    Use “vgchange -a n old_group” to stop an old volume group, and “wipefs -a
    /dev/md42” to delete the LVM2 data on the old volume group’s array. Use
    “mdadm --stop /dev/md42” to stop the old array, and “wipefs -a
    /dev/sd[cdefgh]1” to delete the md RAID data on all of those partitions.
    Use “wipefs -a /dev/sd[cdefgh]” to delete the partition tables on those
    drives.
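
    In short, roughly this sequence (the device and volume group names are
    just the placeholders used above):

    vgs                          (did an old volume group appear?)
    pvs                          (were its PVs pulled into the existing VG?)
    cat /proc/mdstat             (was an old md array auto-started?)
    vgchange -a n old_group      (deactivate the old volume group)
    wipefs -a /dev/md42          (remove the LVM2 signature from the old array)
    mdadm --stop /dev/md42       (stop the old array)
    wipefs -a /dev/sd[cdefgh]1   (remove the md superblocks from the partitions)
    wipefs -a /dev/sd[cdefgh]    (remove the partition tables from the drives)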

  • Dear CentOS users and administrators,

    I solved the problem! It really was multipath, and it probably had to do with some automatic mapping. I may have rebooted after I created new partition tables on each drive, which probably wasn’t very smart. Anyway, the solution was:

    # multipath -l
    ST3500630NS_9QG0P0PY dm-1 ATA     ,ST3500630NS
    size=466G features='0' hwhandler='0' wp=rw
    `-+- policy='service-time 0' prio=0 status=active
      `- 6:0:0:0 sdf 8:80  active undef running
    ST3500630NS_9QG0HLRX dm-6 ATA     ,ST3500630NS
    size=466G features='0' hwhandler='0' wp=rw
    `-+- policy='service-time 0' prio=0 status=active
      `- 4:0:0:0 sdd 8:48  active undef running
    ST3500630NS_9QG0H3N5 dm-3 ATA     ,ST3500630NS
    size=466G features='0' hwhandler='0' wp=rw
    `-+- policy='service-time 0' prio=0 status=active
      `- 5:0:0:0 sde 8:64  active undef running
    ST3500630NS_9QG0KYMH dm-4 ATA     ,ST3500630NS
    size=466G features='0' hwhandler='0' wp=rw
    `-+- policy='service-time 0' prio=0 status=active
      `- 3:0:0:0 sdc 8:32  active undef running
    ST3500630NS_9QG0KQH2 dm-5 ATA     ,ST3500630NS
    size=466G features='0' hwhandler='0' wp=rw
    `-+- policy='service-time 0' prio=0 status=active
      `- 8:0:0:0 sdh 8:112 active undef running
    ST3500630NS_9QG0H3JL dm-2 ATA     ,ST3500630NS
    size=466G features='0' hwhandler='0' wp=rw
    `-+- policy='service-time 0' prio=0 status=active

    and then remove the mappings one after another, or all at once with -F:

    # multipath -f ST3500630NS_9QG0KYMH

    It was then possible to:

    # mdadm --create /dev/md6 --assume-clean --level=5 --raid-devices=6 /dev/sd[cdefgh]1

    which worked without a problem…
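
    Note that after a reboot multipathd will most likely map these disks
    again. Assuming the map names printed by multipath -l above are the
    WWIDs multipathd uses for them (worth double-checking against
    /etc/multipath/wwids), a sketch of a blacklist in /etc/multipath.conf
    that keeps them out of multipath for good would be:

    blacklist {
        wwid "ST3500630NS_9QG0P0PY"
        wwid "ST3500630NS_9QG0HLRX"
        wwid "ST3500630NS_9QG0H3N5"
        wwid "ST3500630NS_9QG0KYMH"
        wwid "ST3500630NS_9QG0KQH2"
        wwid "ST3500630NS_9QG0H3JL"
    }

    followed by restarting multipathd (systemctl restart multipathd) and
    flushing any remaining maps with multipath -F.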

    Best regards

    Adam Kalisz