BAD Disk I/O Performance


Hello,

I’m trying to convert my physical web servers to virtual guests. What I’m experiencing is poor disk I/O compared to the physical counterpart
(strace tells me that each write takes approximately 100 times as long as on the physical machine).
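
A rough way to compare per-write latency on the guest and on the physical box, assuming the PID of a web server worker process is known (the PID below is a placeholder):

    # time each write() syscall of the running worker
    strace -f -T -e trace=write -p <PID>

    # or collect an aggregate per-syscall time summary instead
    strace -c -f -p <PID>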

The tested hardware is pretty good (HP ProLiant 360p Gen8 with 2x 15k rpm SAS disks and 48 GB RAM).

The hypervisor part is a minimal CentOS 6.5 with libvirt. The guest is configured using: VirtIO as disk bus, qcow2 storage format
(thick allocation), cache mode: none (needed for live migration; this could be changed if it is the bottleneck), IO mode: default.
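
For reference, a disk definition with these settings would look roughly like this in the libvirt domain XML (the image path and target device name are placeholders):

    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/var/lib/libvirt/images/guest.qcow2'/>
      <target dev='vda' bus='virtio'/>
    </disk>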

Is someone willing to give me some advice? :)

Thanks

Luca

3 thoughts on - BAD Disk I/O Performance

  • Have you tried using a raw image just for testing? I’ve seen some pretty nasty performance degradation with qcow2, but unfortunately I was never able to track down what exactly caused it. Switching to raw images fixed the issue for me (a rough conversion example follows below).

    Regards,
    Dennis
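
    A minimal sketch of that kind of test, assuming the guest is shut down and there is enough free space for a second copy (paths and the guest name are placeholders):

        # convert the existing qcow2 image to a raw copy for comparison
        qemu-img convert -p -f qcow2 -O raw \
            /var/lib/libvirt/images/guest.qcow2 \
            /var/lib/libvirt/images/guest-test.raw

        # then point the disk definition at the raw file, e.g. via
        # "virsh edit guest", changing driver type='qcow2' to type='raw'
        # and the source file path, and re-run the benchmark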

  • I also experienced really bad disk I/O performance with qcow2 images
    (under CentOS 6.4 hosts). When I converted the disk image to a raw
    logical volume (created with lvm2), I got almost bare-metal disk I/O
    performance.

    Also worth mentioning: check that your disk partitions are properly aligned and begin at 4k block boundaries. I use parted for this. For more info see http://rainbow.chard.org/2013/01/30/how-to-align-partitions-for-best-performance-using-parted/
    or google it.

    There are more performance tuning options, e.g. you can set vm.swappiness = 0 on the host’s Linux kernel. You can also try different kernel scheduling options, etc. These gave me only minor performance gains. The most important steps were getting away from qcow2 and using properly aligned disk partitions (a rough sketch of these steps follows below).

    Zoltan
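
    A minimal sketch of those steps, assuming a volume group named vg_guests and /dev/sda as the data disk (both names are placeholders):

        # create a raw logical volume to use as the guest disk
        lvcreate -L 50G -n guest_disk vg_guests

        # reference it in the guest definition as a block device, e.g.:
        #   <disk type='block' device='disk'>
        #     <driver name='qemu' type='raw' cache='none'/>
        #     <source dev='/dev/vg_guests/guest_disk'/>
        #     <target dev='vda' bus='virtio'/>
        #   </disk>

        # check that an existing partition (here partition 1) starts on a
        # properly aligned boundary
        parted /dev/sda align-check optimal 1
        parted /dev/sda unit s print

        # reduce swapping on the host; add vm.swappiness = 0 to
        # /etc/sysctl.conf to make it persistent
        sysctl -w vm.swappiness=0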

  • Hi,

    Well, quickly reading this thread I didn’t see anyone mention the I/O scheduler, which is the component with the highest performance impact.

    You might want to check that you are using the “deadline”
    I/O scheduler.

    Extensive documentation on how to achieve this can be found on the web (a quick example of checking and setting it follows below).

    HTH
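
    A minimal sketch of checking and switching the scheduler at runtime, assuming the data disk is sda (adjust the device name for your setup):

        # show the available schedulers; the active one is in brackets
        cat /sys/block/sda/queue/scheduler

        # switch to deadline at runtime
        echo deadline > /sys/block/sda/queue/scheduler

        # to make it persistent across reboots on CentOS 6, add
        # "elevator=deadline" to the kernel line in /boot/grub/grub.conf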