RAID: by host or within KVM?


Hi Virtualizers,

I just set up a CentOS 6 box (at home) to run as a KVM host. It’s replacing an absolutely ancient CentOS 5 server that’s running Xen. I have one OS drive, and two drives in RAID 1 with LVM on top, which is used as the KVM storage pool.

I created a KVM guest that will run OpenMediaVault (OMV). OMV requires an OS drive (which is really an LVM volume), and one or more separate drives to put all the media on. This is where I’m a little unsure how to proceed. I think I have two options, sketched below:

1. Let the KVM host manage the drives (i.e. RAID with LVM on top) and just assign the single volume to OMV. OMV will see it as one HD.
2. Assign the individual drives to the OMV KVM, and let OMV manage the RAID creation, management, etc.
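
For reference, a rough sketch of what option 1 looks like on the host side (the device names /dev/sdb and /dev/sdc, the volume names, and the guest name “omv” are all placeholders for whatever your setup actually uses):

    # Mirror the two media drives on the host
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

    # Layer LVM on top of the mirror
    pvcreate /dev/md0
    vgcreate vg_media /dev/md0
    lvcreate -L 900G -n lv_media vg_media    # size is a placeholder

    # Hand the single logical volume to the OMV guest as one virtual disk
    virsh attach-disk omv /dev/vg_media/lv_media vdb --persistent

OMV would then see the attached volume (vdb here) as a single plain disk.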

I’m not sure which one will perform better. My hunch is that if the RAID management is left at the host level, I’ll see better overall performance. Performance isn’t exactly my number one goal here, but I don’t want to kill it completely by going the wrong way either.

On the other hand, if I let OMV do the RAID management for the media storage disks, I’ll gain future flexibility because it’ll be much easier to move OMV to bare metal.
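
Option 2 would look more like this instead, with the raw drives handed straight through and the array built inside the guest (again, the device and guest names are just placeholders):

    # On the host: pass both physical drives to the OMV guest untouched
    virsh attach-disk omv /dev/sdb vdb --persistent
    virsh attach-disk omv /dev/sdc vdc --persistent

    # Inside the OMV guest: build the mirror there
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/vdb /dev/vdc

Using /dev/disk/by-id/ paths instead of /dev/sdX on the host would make the mapping survive drive reordering across reboots.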

Which way should I go? What would you guys do?

Regards,
Ranbir

4 thoughts on - RAID: by host or within KVM?

  • I recommend option 1 simply because of the recovery methodology. If the host controls the RAID, then when you lose a disk and replace it you have one point of repair and the VMs don’t even notice. If, however, each VM does RAID itself, then _each_ VM will need to perform the disk replace and rebuild, which is a lot of admin overhead. All those simultaneous rebuilds could also cause a lot of disk contention and slow the rebuild down. A sketch of the host-side replacement follows below.

    Today you only have one VM. Tomorrow? :-)
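
    To make the “one point of repair” concrete, a failed-disk swap with host-level RAID is a handful of commands on the host and nothing at all in the guests (device names hypothetical):

        # On the host only - the VMs never notice
        mdadm --manage /dev/md0 --fail /dev/sdb --remove /dev/sdb
        # ...physically swap the drive, then add the replacement...
        mdadm --manage /dev/md0 --add /dev/sdb
        # One rebuild to watch, instead of one per VM
        cat /proc/mdstat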

  • I usually go with #1 now because it makes the VM simpler and allows adding further VMs easily.

    You could probably plan for this by setting things up in advance to make it easier to move in the future.

    Right now, for want of a better/simpler solution, I’m setting up a degraded mdadm RAID 1 within the VM. The idea is that any time I want to move the VM to bare metal or to another host, I can just add a drive (or map one in), let it sync, then shut the VM down, shift the drive over, and theoretically boot it up on the new machine. See the sketch below.
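
    Roughly, that degraded-mirror trick looks like this inside the VM (the literal word “missing” stands in for the absent second member; device names are placeholders):

        # Create a RAID 1 with only one member present (degraded on purpose)
        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/vdb missing

        # At migration time: map in a second drive and let it sync
        mdadm --manage /dev/md0 --add /dev/vdc
        cat /proc/mdstat    # wait for the resync to complete

        # Shut down, move the synced drive, and assemble on the new machine
        mdadm --assemble --run /dev/md0 /dev/sdb    # --run allows starting degraded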

  • I have set up an equivalent system, with the RAID 1 mirror on the KVM host for the reasons above. The problem I have is that the weekly raid-check cron script on the host runs at glacial speed due to disk activity generated in the CentOS 5 VM, even though that system is largely idle.

    Without the VM active, the raid-check verify takes a couple of hours on 2x1TB drives. With the VM active it takes close to the full week!

    With both systems idle, there is about 10% utilisation of the mirror drives – and I’ve found it hard to work out what is generating this I/O traffic.
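
    A few hedged starting points for chasing both the slow check and the mystery I/O, assuming the stock md tunables and that iotop (or similar) is available:

        # Raise the floor on md check/resync throughput (KB/s per device)
        sysctl -w dev.raid.speed_limit_min=50000
        sysctl -w dev.raid.speed_limit_max=200000

        # See which processes are actually issuing the I/O
        # (the qemu-kvm process for the VM is the obvious suspect)
        iotop -o -a

        # Per-device utilisation figures, refreshed every 5 seconds
        iostat -dx 5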
