vSphere Storage vMotion (svMotion) enables the live migration of the disk files belonging to virtual machines (VMs). svMotion eliminates application downtime in VMs when the virtual disk files containing the applications' data must be moved between storage devices for hardware maintenance, upgrades, storage load balancing, or proactive disaster recovery.
svMotion is the missing piece in completely liberating VMs and their associated files from the physical hardware on which they reside. Because of the importance of svMotion in the virtual landscape, we at VMware Performance Engineering Labs conducted a study involving the svMotion of the virtual disk files of a VM hosting a large SQL Server database. The focus of the study was to understand:
- The performance impact on the SQL Server database when migrating the physical files of different database components such as data, index, and log.
- The effect of the I/O characteristics of the database components on the migration time of the virtual disk containing the files of those components.
The results from the study show:
- A consistent and predictable disk migration time that was largely influenced by the capabilities of the source and the destination storage hardware.
- That the I/O characteristics of the database components do influence disk migration time.
- A 5% to 22% increase, depending on the VM load conditions, in the CPU cost of a transaction of the database workload while migrating a virtual disk containing the physical files of the database.
For more details, refer to the white paper “Storage vMotion of a Virtualized SQL Server Database.”
One question that wasn’t perfectly clear when I read the white paper – I do see that you were using vSphere 5.0 – were the VMFS datastores involved in this testing all VMFS-5?
All the datastores [at both the source and the destination] used in the testing were VMFS-5 based.
This is very interesting – thanks for this paper. Do you think you can repeat the tests, or at least enhance them, by checking the same issue over a CNA (converged network adapter) type of infrastructure? At least for me, this is extremely important to know.
Thanks for the feedback. As of now, there are no plans to do any additional tests. But that can change depending on the demand from readers such as you 🙂 So, keep an eye on VROOM. If I do repeat the experiments using other storage protocols, I will definitely post the results and a blog entry here.