
VMI performance benefits

One of the hallmarks of VMware has been its continuous innovation. Every new release adds exciting new features to its products, but each release also carries many changes that are less obvious yet still quite important. Performance improvements often fall into this category. One such feature added in VMware ESX Server 3.5 is support for guest operating systems that are paravirtualized using the Virtual Machine Interface (VMI) specification.

A little history provides insight into the motivation for the VMI standard. Earlier Linux paravirtualization efforts produced paravirtualized kernels that were tightly coupled to the hypervisor. Frequent interface changes led to guest kernel and hypervisor version dependencies, impeding the independent evolution of the two. These paravirtualized kernels also could not run on native hardware. Wouldn't it be nice if a single kernel could run both natively and in a virtual machine, with improved performance from paravirtualization in the latter case? No kernel offered this capability, now called transparent paravirtualization, and its absence doubled the number of kernels to develop, test, and debug.

To address these issues, VMware proposed the VMI paravirtualization standard in 2006. As a proof of concept, VMware implemented the VMI standard in Linux kernel 2.6.16 and demonstrated it to the Linux open-source community at the Linux Symposium in Ottawa, Canada. The demonstration made clear that the performance gains of paravirtualization could be attained without sacrificing modularity and code cleanliness, and that VMI-enabled kernels could run transparently on native machines without any performance impact. Several VMI-style paravirtualization design philosophies were adopted by the Linux community, leading to the paravirt-ops interface in the 2.6.20 mainline kernel. VMware then worked with the Linux community to include VMI itself in the kernel; today VMI can be enabled in mainline Linux kernel 2.6.22 and above. See Knowledge Base article #1003644 for instructions on how to enable VMI in your custom Linux kernel.
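For readers building their own kernels, the relevant switches live under "Processor type and features" in the 32-bit x86 kernel configuration (VMI is a 32-bit interface). As a minimal sketch, assuming a mainline 2.6.22-or-later source tree, the .config fragment looks like the following; the Knowledge Base article above remains the authoritative reference:

    # Enable the paravirt-ops layer and the VMI backend
    # (32-bit x86 kernels, mainline 2.6.22 and later; VMI depends on PARAVIRT)
    CONFIG_PARAVIRT=y
    CONFIG_VMI=y

Note that the guest kernel is only half of the picture: in ESX Server 3.5 the virtual machine must also be configured for paravirtualization (the VMI paravirtualization setting under the virtual machine's Options tab in the VI Client) before the hypervisor exposes the VMI interface to the guest.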

Since VMI-enabled kernels can also run on native systems, the popular Linux distributions Ubuntu 7.04 (Feisty Fawn) and Ubuntu 7.10 (Gutsy Gibbon) ship with VMI enabled by default in their kernels, providing transparent performance benefits when they run on ESX Server 3.5. VMware is also working with Novell to include VMI in the SUSE Linux Enterprise Server distribution. We in the Performance group conducted a study to evaluate the performance benefits of using a distribution that supports VMI; the results can be found in this whitepaper.

The paper details the workloads we ran, the benchmark methodologies used, and the reasoning behind them. As the results show, VMware's VMI-style paravirtualization offers performance benefits for a wide variety of workloads in a completely transparent way. With support for virtualization techniques like binary translation and hardware assist, and now the addition of VMI paravirtualization, VMware provides customers the most comprehensive range of choices, allowing them to pick the virtualization technique best suited to each guest operating system and workload.

You can also learn more about VMI’s performance benefits at my upcoming talk, "VMI: Maximizing Linux virtual machine performance in ESX Server 3.5" at VMworld Europe 2008. I hope to see you there.