KVM Virtual Machine And SAN Storage With FC

Has anybody got experience setting up a virtualized environment in which the VMs can access Fibre Channel SAN storage connected to the host? The host accesses the SAN through its own HBA, but the HBA is not recognized inside the virtual machines. Please let me know the steps to work through this.

Regards

8 thoughts on - KVM Virtual Machine And SAN Storage With FC

  • How you use this storage depends on whether you plan to migrate VMs from one server to another.

    If you’re not going to be migrating, then you can just allocate the FC LUN to an LVM volume group and carve off logical volumes for the KVM VMs to use. These can then have meaningful LVM names under /dev/vg_(VG)/lv_(LV) that can be allocated to the VMs. See system-config-lvm.
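
    For example, a minimal sketch of that layout (the device path /dev/sdb, the VG/LV names, and the guest name are placeholders for illustration):

      # Put the FC LUN under LVM control
      pvcreate /dev/sdb
      vgcreate vg_san /dev/sdb

      # Carve off one logical volume per VM, with a meaningful name
      lvcreate -n lv_web01 -L 20G vg_san

      # Hand the LV to the guest as an additional virtio disk
      virsh attach-disk web01 /dev/vg_san/lv_web01 vdb --persistent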

    If you’re planning to migrate between machines, then the LVM solution is not going to work. In that case you might need to create volumes on your FC controller that will be seen as individual devices/LUNs on the host servers. There is a consistent device name for each volume that appears under /dev/disk/by-id, and it will be identical on any host server that can see that volume. That path can be allocated to the VM and will remain consistent across a migration. Using this method requires careful management and meticulous documentation of which LUNs have been allocated to which VM; the LUN IDs are not very user friendly.
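
    A sketch of what that looks like (the WWID and guest name here are made up for illustration):

      # The by-id path is stable across hosts that can see the same LUN
      ls -l /dev/disk/by-id/
      # e.g. scsi-3600508b400105e210000900000490000 -> ../../sdc

      # Attach the LUN to the guest by its stable path, on whichever host runs it
      virsh attach-disk web01 \
          /dev/disk/by-id/scsi-3600508b400105e210000900000490000 \
          vdb --persistent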

    We’ve also had good results with DRBD for times when you want to be able to migrate between machines but do not have a SAN. You have to allocate all the storage on each server, but you gain a sort of backup in the process.
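
    A rough sketch of such a DRBD resource (host names, IPs, and backing devices are placeholders; syntax per DRBD 8.x):

      # /etc/drbd.d/vmstore.res -- replicate a VM's backing store between two hosts
      resource vmstore {
        on host1 {
          device    /dev/drbd0;
          disk      /dev/vg_local/lv_vmstore;
          address   192.168.1.1:7789;
          meta-disk internal;
        }
        on host2 {
          device    /dev/drbd0;
          disk      /dev/vg_local/lv_vmstore;
          address   192.168.1.2:7789;
          meta-disk internal;
        }
      }

      # Then, on both hosts:
      drbdadm create-md vmstore
      drbdadm up vmstore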

    Finally, I can recommend ConVirt as a good system management interface.

    Regards, Brett

  • Don’t… do this. Two database clients writing to the same database filesystem back end simultaneously is an enormous source of excited-sounding flow charts and proposals which simply do not work and are very, very likely to corrupt your database beyond recovery. These problems have been examined for *decades*, with shared home directories and saved email, and with high-performance or clustered databases that must avoid “split brain” skew: It Does Not Work.

    Set up a proper database *cluster* with distinct back ends.

    Survey says *bzzzt*. See above for databases. For shared storage, you should really be using some sort of network-based access to a filesystem back end. NetApp and EMC spend *billions* in research building high-availability shared storage, and even they don’t pull stunts like this, last I looked. I can vaguely imagine one of the hosts having write access and the other having read-only access. But really, most databases today support good clustering configurations that avoid precisely these issues.

    Multipath does not mean “multiple clients of the same hardware storage”. That’s effectively like letting two kernels write to the same actual disk at the same time, and it’s quite dangerous.

  • Now, if you want each client to access its own Fibre Channel disk resource, that should be workable. Even if you have to mount the Fibre Channel resources on the KVM host and make disk images for the KVM guest, that should at least get you a testable resource. But the normal approach is to have a Fibre Channel storage server that makes disk images available via NFS, so that the guest VMs can be migrated from one server to another with the shared storage more safely.
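
    A minimal sketch of that NFS approach (host names, the export path, and the guest name are assumptions for illustration):

      # On the storage server: export the image directory to both KVM hosts
      echo '/srv/vmstore kvmhost1(rw,no_root_squash) kvmhost2(rw,no_root_squash)' >> /etc/exports
      exportfs -ra

      # On each KVM host: mount the same directory for libvirt images
      mount -t nfs storage:/srv/vmstore /var/lib/libvirt/images

      # Live-migrate a guest between hosts that share that storage
      virsh migrate --live web01 qemu+ssh://kvmhost2/system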

  • It depends on what database product you’re using. If it’s Oracle Database Server, it’s designed to work with shared devices/file systems, and you shouldn’t have a problem running it in an active/active (load-balanced) configuration. If it’s MySQL/MariaDB, the best you can hope for, as far as I know, is an active/passive configuration with replication.

    I’m guessing you’re using MySQL. Make your database highly available in an active/passive configuration with replication, and use some sort of failover (heartbeat, CARP, etc.) or a network load balancer. Depending on your application, you can still run it in an active/active (load-balanced) configuration.
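
    A rough sketch of the replication side (server IDs, host names, and credentials are placeholders; CHANGE MASTER TO syntax per MySQL 5.x):

      # master my.cnf
      [mysqld]
      server-id = 1
      log-bin   = mysql-bin

      # replica my.cnf
      [mysqld]
      server-id = 2

      # On the replica, point it at the master and start replicating
      mysql -e "CHANGE MASTER TO MASTER_HOST='db1', MASTER_USER='repl',
                MASTER_PASSWORD='secret', MASTER_LOG_FILE='mysql-bin.000001',
                MASTER_LOG_POS=4; START SLAVE;"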

  • Oracle is hideously expensive in this mode. It basically uses a customized operating system, and it’s *still* prone to the basic problem of locking transactions to avoid conflicts. They just expend a *lot* of system and software resources to manage it, which is why such a clustered Oracle database consumes so many resources.

    There is “Multiple-Master MySQL”, which basically provides built-in election of the master node and interesting load factors to split the load, and uses a separate IP address for the “master” node. It works pretty well and is available in the “mysql-mmm” package from EPEL.

    Been there, done that, had *way* too many places just wave their hands at the failover and never actually configure it. mysql-mmm takes a lot of the guesswork out.
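
    For the flavor of it, a trimmed mmm_common.conf sketch (host names and IPs are made up; the structure follows the mysql-mmm documentation, so treat the details as approximate):

      active_master_role  writer

      <host db1>
          ip      192.168.0.31
          mode    master
          peer    db2
      </host>

      <host db2>
          ip      192.168.0.32
          mode    master
          peer    db1
      </host>

      # The writer role's virtual IP floats to whichever node is elected master
      <role writer>
          hosts   db1, db2
          ips     192.168.0.100
          mode    exclusive
      </role>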

  • Hi,

    While I am using mysql-mmm myself, it does have its quirks and tends to get the odd node out of sync, especially if you run additional slaves connected to the master-master setup. You might have a look at Galera Cluster, which is available standalone or as part of a special version of MariaDB. I have had good experience with it, although it’s InnoDB-only for now.
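
    If you want to try it, a minimal sketch of the Galera-specific settings (the provider path, cluster name, and node addresses are placeholders; wsrep option names per the Galera documentation):

      # my.cnf additions on each node
      [mysqld]
      binlog_format            = ROW
      default_storage_engine   = InnoDB
      innodb_autoinc_lock_mode = 2
      wsrep_provider           = /usr/lib64/galera/libgalera_smm.so
      wsrep_cluster_name       = example_cluster
      wsrep_cluster_address    = gcomm://192.168.0.31,192.168.0.32,192.168.0.33
      wsrep_node_address       = 192.168.0.31
      wsrep_sst_method         = rsync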

    Regards, Thomas

  • Heh. For good reason. MyISAM is being deprecated, by a lot of developers, for a lot of reasons. Keeping transactions atomic is apparently a *big* MyISAM problem, and one exacerbated by clustering software.

    I am curious about the multiple slave problem you mention. If this is a reasonable group to detail it, do tell!
