Kvm Cluster W/ C6


In our development lab, I am installing four new servers that I want to use for hosting KVM. Each server will have its own direct-attached RAID. I’d love to be able to ‘pool’ this storage, but over GigE I probably shouldn’t even try.

Most of the VMs will be running CentOS 5 and 6. Some of the VMs will be PostgreSQL database dev/test servers; others will be running Java messaging workloads and various test jobs.

To date, my experience with KVM is bringing up one C6 VM on a C6 host, manually with virt-install and virsh…

Stupid questions…

What’s the best storage setup for KVM when using direct-attached RAID?
Surely using disk image files on a parent ext4/XFS file system isn’t the best for performance? Should I use host LVM logical volumes as guest vdisks? We’re going to be running various database servers in dev/test, and we want at least one or another of them at a time to really be able to get some serious IOPS.

Is virt-manager worth using, or is it too simplistic/incomplete?

Will virt-manager or some other tool ‘unify’ management of these 4 VM
hosts, or will it pretty much be me-the-admin keeping track of which VM is on which host, running the right virt-manager, and managing it all fairly manually?

“That may be the easy way, but its not the Cowboy Way”

9 thoughts on - Kvm Cluster W/ C6

  • On Sat, 19 Oct 2013 23:22:12 -0700,
    John R Pierce wrote:

    I’m not sure if somebody has rebuilt RHEV on CentOS (I couldn’t find it with a quick Google search).

    RHEV = http://www.redhat.com/products/cloud-computing/virtualization/

    Then, on top of that, you’d need Red Hat Storage Server. It’s a stabilized build of GlusterFS with enterprise support – and an enterprise price…

    Also, you could try OpenStack – but I’m not sure if it’s worth the hassle, or if four nodes is actually enough to have a usable setup.

    Red Hat Storage Server recommends 10G Ethernet, BTW.

    For your setup, I’d invest more in the hardware itself (redundant PSUs, more redundancy in the disks, a more powerful RAID controller with battery-backed cache; the more of the hardware that is hot-pluggable, the better, etc.).

    Oh, and I’d love to hear success stories from people who actually use RHEV+RHSS. Any kind of distributed storage, actually.

  • I’ve built DRBD-backed shared storage using a 1 Gbit network for replication for years, and the network has not been an issue. As long as your apps can work with ~110 MB/sec max throughput, you’re fine.
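    For reference, that ~110 MB/s ceiling falls out of simple arithmetic (a rough sketch; the ~10% figure for Ethernet/IP/TCP framing and protocol overhead is an assumption, not a measurement):

```shell
# Gigabit Ethernet line rate is 1000 Mbit/s
raw_mb=$((1000 / 8))                 # 125 MB/s raw
# Assume roughly 10% lost to Ethernet/IP/TCP framing and protocol overhead
usable_mb=$((raw_mb * 90 / 100))     # ~112 MB/s, close to the observed ~110
echo "raw: ${raw_mb} MB/s, usable: ~${usable_mb} MB/s"
```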

    Latency is not affected, because the average seek time of a platter, even
    on 15k RPM SAS drives, is higher than the network latency (assuming decent equipment).

    What makes the most difference is not the RAID configuration but having battery-backed (or flash-backed) write caching. With multiple VMs generating heavy disk I/O, it will get random in a hurry. The caching keeps the systems responsive even under these highly random writes.

    As for the storage type: I use clustered LVM (with DRBD as the PVs) and give each VM a dedicated LV, as you mentioned above. This takes the FS overhead out of the equation.
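    As a sketch of that layout (the volume group name `vg_guests` and guest name `db01` are made up for the example), carving out an LV and handing it to a guest as a virtio disk looks roughly like:

```shell
# Carve a dedicated logical volume out of the volume group for one guest
lvcreate -L 40G -n db01 vg_guests

# Hand the raw LV to the guest as a virtio block device -- no filesystem
# or image file in between, so guest writes go straight to the LV
virt-install --connect qemu:///system -n db01 -r 4096 --vcpus=2 \
    --disk path=/dev/vg_guests/db01,device=disk,bus=virtio \
    --os-variant rhel6 --vnc
```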

    I use it from my laptop, via an SSH tunnel, to the hosts all the time. I
    treat it as a “remote KVM switch,” as it gives me access to the VMs regardless of their network state. I don’t use it for anything else.

    Depends what you mean by “manage” it. You can use ‘virt-manager’ on your main computer to connect to the four hosts (and even set them to auto-connect on start). From there, it’s trivial to boot/connect/shut down the guests.
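    Concretely, virt-manager can be pointed at each host over an SSH-tunneled libvirt connection (the hostname here is a placeholder):

```shell
# Open virt-manager connected to a remote host; qemu+ssh tunnels
# libvirt over SSH (File > Add Connection does the same thing)
virt-manager -c qemu+ssh://root@kvmhost1/system
```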

    If you’re looking for high-availability of your VMs (setting up your servers in pairs), this might be of interest;


  • Got all that: dual PSUs which will be plugged into two separate UPSes, and the direct-attached storage is on a SAS2 RAID card with 2 GB of
    flash-backed write-back cache.

  • Stay away from LVM if you want performance. There is something single-threaded in it which stops you from hitting really great performance. qcow2 will work pretty okay, but you can’t get away from the fact that you are running a filesystem on top of a filesystem, which is never going to be awesome.

    In CentOS, use ELRepo and install the “kernel-lt” package. This will give you a much later kernel, which is great for KVM, as much improvement has been made since the stock 2.6.32 kernel.
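    A sketch of that on CentOS 6 (check elrepo.org for the current release RPM version; kernel-lt lives in the elrepo-kernel repository, which is disabled by default):

```shell
# Import the ELRepo signing key and install the release RPM (CentOS 6)
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://www.elrepo.org/elrepo-release-6-5.el6.elrepo.noarch.rpm

# kernel-lt is in the (disabled by default) elrepo-kernel repo
yum --enablerepo=elrepo-kernel install kernel-lt
```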

    If you need really good I/O, then use 10G networking, turn one of the boxes into a ZFS/NFS server, and put your virtual images on that. My experiments with FreeNAS were extremely potent and quite stable.

    Remember that a single SATA hard drive has roughly the performance of 1G Ethernet: 100 MB/s or so.

    Virt-manager is an OK GUI tool, but I would recommend using the command-line tools virt-install and virsh. They require a little more learning but ultimately give you a better understanding of the stack. They are very powerful once you learn to script with them, if you don’t have the ability to write Python (which I don’t).
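    For example, a few lines of shell over virsh already cover the “which VM is on which host” bookkeeping across all four boxes (hostnames are placeholders):

```shell
# List every guest, running or not, on each of the four hosts
for host in kvmhost1 kvmhost2 kvmhost3 kvmhost4; do
    echo "== ${host} =="
    virsh -c "qemu+ssh://root@${host}/system" list --all
done
```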

    Yes, however this is just the creation and destruction of VMs. You can’t do live migration because you don’t have a shared storage device
    (unless you use my ZFS NAS idea).
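    For completeness: with shared storage in place (e.g. the NFS idea above, mounted at the same path on both hosts), live migration is a single virsh call (hostname and guest name are placeholders):

```shell
# Push guest "db01" live from the local host to kvmhost2;
# both hosts must see the guest's disk at the same path
virsh migrate --live db01 qemu+ssh://root@kvmhost2/system
```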

    I would stay away from OpenStack / RHEV (whose upstream is actually a community project called oVirt). They add a layer of complexity and inflexibility which is not so useful in a lab environment.




    Here is a virt-install command that I found in my bash history :)

    virt-install --connect qemu:///system -n nfs-server -r 2048 --vcpus=2
    --disk path=/vols/nfs-server.img,size= ,device=disk,bus=virtio --vnc

  • A) I have a SATA3 3 TB disk on my Windows PC that is twice that fast on sequential I/O.

    B) These new servers are SAS2 with 15k hybrid drives, and use RAID cards with 2 GB of flash-backed cache.

  • [ … ]

    [… ]

    In the area of virtualization, it is IOPS that count, not sequential read/write throughput. And in terms of IOPS, SATA drives are the worst option as a storage backend.


  • Indeed. However, I was just illustrating an example. People often don’t realise how slow Gbit Ethernet actually is.