
White paper: Understanding Full Virtualization, Paravirtualization, and Hardware Assist

A very good intro to some of the history and challenges of x86 virtualization, the differences among the various approaches, what the various players are using today, what the coming generations of hardware assist will bring, and how VMware is supporting open standards to get us all to an interoperable, high-performance virtual future.

Link: Understanding Full Virtualization, Paravirtualization, and Hardware Assist

Multi-mode
VMware offers a flexible “multi-mode” VMM architecture, depicted in Figure 12, in which a separate VMM hosts each virtual machine. VMware allows you to select the mode that achieves the best workload-specific performance based on the CPU support available. The same VMM architecture is used for ESX Server, Player, Server, Workstation, and ACE. While today’s workloads can employ a 32-bit BT VMM or a 64-bit VMM with BT or VT-x, tomorrow’s workloads will be hosted on VMMs that support 32- and 64-bit versions of AMD-V + NPT and VT-x + EPT. VMware provides a flexible architecture to support emerging virtualization technologies. The multi-mode VMM draws on binary translation, hardware assist, and paravirtualization to select the best operating mode for each workload and processor combination. Hardware assist will continue to mature and broaden the range of workloads that can be readily virtualized.
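
To make the mode-selection idea concrete, here is a minimal C sketch of how a per-VM monitor mode might be chosen from CPU features and guest bitness. The type names, the enum of modes, and the decision order are all assumptions made up for illustration; they are not VMware's actual data structures or policy.

/* Hypothetical sketch of per-VM monitor-mode selection, loosely modeled
 * on the multi-mode VMM idea described above. All names and the decision
 * order are illustrative assumptions, not VMware's actual logic. */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    bool vtx;    /* Intel VT-x available        */
    bool amdv;   /* AMD-V available             */
    bool ept;    /* second generation: VT-x + EPT  */
    bool npt;    /* second generation: AMD-V + NPT */
} cpu_features;

typedef enum { MODE_BT32, MODE_BT64, MODE_HV, MODE_HV_NESTED } vmm_mode;

/* Pick a monitor mode for one VM: prefer hardware assist with nested
 * paging when present, fall back to first-generation hardware assist
 * for 64-bit guests, and use binary translation otherwise. */
static vmm_mode select_mode(cpu_features f, bool guest_is_64bit)
{
    if (f.ept || f.npt)
        return MODE_HV_NESTED;              /* VT-x + EPT or AMD-V + NPT */
    if (guest_is_64bit && (f.vtx || f.amdv))
        return MODE_HV;                     /* first-gen hardware assist */
    return guest_is_64bit ? MODE_BT64 : MODE_BT32;  /* binary translation */
}

int main(void)
{
    cpu_features cpu = { .vtx = true, .ept = false };
    printf("chosen mode: %d\n", select_mode(cpu, true));
    return 0;
}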

This is also an interesting point to keep in mind: Moore’s Law trumps everything else we’re doing with respect to performance:

Compute-intensive workloads already run well with binary translation of privileged instructions and direct execution of non-privileged instructions, but NPT/EPT will provide noticeable performance improvements for memory-remapping-intensive workloads by removing the need for shadow page tables, which consume system memory. Increased performance and reduced overhead expected in future CPUs will provide motivation to use hardware assist features much more broadly, but don’t expect revolutionary improvements. Processors get significantly faster each year, and those raw performance gains will likely have a greater impact on virtualization capacity and performance than future hardware assist optimizations.
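
A toy model can help show why nested paging removes the shadow-table bookkeeping. The C sketch below uses flat arrays as stand-ins for real multi-level page tables: with nested paging the guest and host mappings are composed at translation time, while with shadow paging the VMM precomputes the combined map and must keep that extra copy in sync. The names and the flat-array representation are illustrative assumptions, not how any shipping VMM stores its tables.

/* Toy model of the two memory-virtualization strategies mentioned above:
 * shadow page tables (the VMM keeps a combined guest-virtual -> host-physical
 * map) versus nested paging (hardware composes the two maps on every walk).
 * Flat arrays stand in for real multi-level page tables. */
#include <stdio.h>

#define PAGES 16

/* Guest page table: guest-virtual page -> guest-physical page. */
static int guest_pt[PAGES];
/* Host (nested) table: guest-physical page -> host-physical page. */
static int nested_pt[PAGES];
/* Shadow table maintained by the VMM: guest-virtual -> host-physical. */
static int shadow_pt[PAGES];

/* Nested paging: hardware walks both tables at translation time,
 * so no separate combined copy has to be stored or synchronized. */
static int translate_nested(int gva_page)
{
    return nested_pt[guest_pt[gva_page]];
}

/* Shadow paging: the VMM precomputes the combined mapping whenever the
 * guest edits its page tables; lookups are a single step, but the shadow
 * copy consumes memory and must be kept consistent. */
static void rebuild_shadow(void)
{
    for (int p = 0; p < PAGES; p++)
        shadow_pt[p] = nested_pt[guest_pt[p]];
}

int main(void)
{
    for (int p = 0; p < PAGES; p++) {
        guest_pt[p]  = (p + 1) % PAGES;   /* arbitrary toy mappings */
        nested_pt[p] = (p + 2) % PAGES;
    }
    rebuild_shadow();
    printf("nested walk: %d, shadow lookup: %d\n",
           translate_nested(3), shadow_pt[3]);
    return 0;
}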

And why paravirtualization won’t solve your problems today (but with open standards, may in the future):

To be clear, VMware does find that processor paravirtualization increases performance significantly on some workloads today, but the longer-term performance delta, once second-generation hardware assist features are available, is unclear. The difference may shrink, disappear, or even widen as enhancements to the paravirtualization interface create new opportunities. It’s an open question.

As VMware sees it, the major problem with processor paravirtualization is the need for guest OS modification, which makes the guest dependent on a specific hypervisor to run. The Xen interface, for example, implements deep paravirtualization with a strong hypervisor dependency: the OS kernel is closely tied to structures in the hypervisor implementation. This creates an incompatibility, as the XenLinux kernel can’t run on native hardware or other hypervisors, doubling the number of kernel distributions that have to be maintained. Additionally, the approach is limited to newer, open-source operating systems, since the intrusive changes to the guest OS kernel require OS vendor support. Finally, the strong hypervisor dependency impedes the independent evolution of the kernel.
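
That hypervisor dependency is easiest to see at the source level: in a deeply paravirtualized kernel, privileged operations are replaced by explicit calls into a particular hypervisor interface, so the resulting kernel only runs where that exact interface exists. The C sketch below is purely illustrative; the function names and hypercall number are invented and do not correspond to the real Xen or VMware interfaces.

/* Illustrative-only sketch of why a paravirtualized kernel becomes tied
 * to its hypervisor: a privileged operation is replaced at the source
 * level by a hypercall into a specific hypervisor interface. Names and
 * the hypercall number are invented for this example. */
#include <stdio.h>

#define HCALL_SET_CR3 1   /* hypothetical hypercall number */

/* Stand-in for trapping into the hypervisor (a real kernel would issue
 * a special instruction or call through a shared hypercall page). */
static long hypercall(int nr, unsigned long arg)
{
    printf("hypercall %d(0x%lx) -> handled by the hypervisor\n", nr, arg);
    return 0;
}

/* Native kernel: executes the privileged instruction directly. */
static void native_load_page_table(unsigned long root)
{
    /* e.g. "mov %0, %%cr3" on x86; only legal when running in ring 0 */
    printf("native: load CR3 = 0x%lx\n", root);
}

/* Paravirtualized kernel: the same operation is compiled as an explicit
 * call into the hypervisor, so this kernel binary only runs on a
 * hypervisor that implements exactly this interface. */
static void pv_load_page_table(unsigned long root)
{
    hypercall(HCALL_SET_CR3, root);
}

int main(void)
{
    native_load_page_table(0x1000);
    pv_load_page_table(0x1000);
    return 0;
}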