
Benchmarking ESX Server vs Xen

Let us first have Tarry Singh remind us that the real value of benchmarking lies in configuring and managing your systems better, not in raw competition over non-real-world speed races. From TarryBlogging:

Benchmarking is not about VMware is better than Parallels is better than Xen is better than whatever. It is about telling the customer to be careful when applying and deploying certain scenarios. VMware is doing a pretty good job at it.

Also, as VMware is explaining in excruciating detail, a virtual infrastructure solution is about much more than a hypervisor: you need to manage storage, networking, and other devices; you need resource pool management solutions like DRS, HA, and VCB; you need management automation tools; and you then need to combine all these things with other tools to take care of your business in areas like business continuity, virtual desktop, and software lifecycle. The complete solution is a heckuva lot more than just a hypervisor.

All that being said, let’s take a look at hypervisor performance: VMware ESX Server 3.0.1 vs. Xen 3.0.3 running Microsoft Windows Server 2003 here: A Performance Comparison of Hypervisors. Yes, VMware ESX Server is faster.

Single Virtual CPU Tests

For both SPECcpu2000 and Passmark CPU tests, the Xen hypervisor showed on average twice the overhead compared to VMware ESX Server. For enterprise applications that are sensitive to CPU resources, this means that the Xen hypervisor can deliver much lower throughput than VMware ESX Server for the same CPU utilization. Furthermore, most enterprises start implementing virtualization to consolidate underutilized servers. These results imply that VMware ESX Server can support many more virtual machines per core compared to the Xen hypervisor.
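
To make "twice the overhead" concrete, here is a tiny worked example. The scores below are hypothetical and are not taken from the paper; they only show how overhead translates into consolidation headroom:

    # Hypothetical illustration only -- these scores are NOT from the VMware/Xen paper.
    # Overhead is the fraction of native throughput lost when running virtualized.

    native_score = 100.0   # benchmark score on bare metal (hypothetical)
    esx_score = 96.0       # same benchmark inside an ESX Server VM (hypothetical)
    xen_score = 92.0       # same benchmark inside a Xen HVM VM (hypothetical)

    def overhead(native, virtual):
        """Fraction of native performance lost when running virtualized."""
        return (native - virtual) / native

    print(f"ESX overhead: {overhead(native_score, esx_score):.1%}")  # 4.0%
    print(f"Xen overhead: {overhead(native_score, xen_score):.1%}")  # 8.0%, twice ESX's

With those made-up numbers, every virtual machine on Xen gives back twice as many CPU cycles to the hypervisor, which is cycles you cannot spend on packing more guests onto the same core.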

The Xen hypervisor performed extremely poorly in the Netperf tests compared to VMware ESX Server. We believe this is because the Xen hypervisor lacks an open-source paravirtualized network driver for Windows comparable to the paravirtualized vmxnet driver that VMware ESX Server provides. The commercial versions of Xen are expected to offer paravirtualized network drivers similar to vmxnet, but such proprietary guest drivers will further fragment the open Xen source code and make it difficult for datacenter customers to migrate between the various flavors of Xen. Furthermore, unlike the Xen hypervisor, these commercial and supported versions will not be free, and so will change ROI and TCO calculations that were based on a free, open-source offering.
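
The paper does not reproduce its exact Netperf invocation, but if you want to run a comparable guest-to-client throughput test against your own setup, a minimal sketch looks like this. The host address and run length are placeholders, and it assumes the standard netperf client is installed in the guest with netserver already listening on the target:

    # Minimal sketch of driving a Netperf TCP_STREAM throughput run from Python.
    # Target host and duration are placeholders, not values from the paper.
    import subprocess

    target_host = "192.168.0.10"   # placeholder: machine running netserver
    duration_s = "60"              # placeholder: length of the run in seconds

    result = subprocess.run(
        ["netperf", "-H", target_host, "-l", duration_s, "-t", "TCP_STREAM"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)  # last column of the summary line is throughput in 10^6 bits/sec

Run it once inside each hypervisor's guest against the same physical client and you have your own version of this comparison, on your own network hardware.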

Virtual SMP Tests

The tests for both the virtual SMP configuration and virtual machine scalability could not be run due to issues with the Xen hypervisor. A two-virtual-CPU Windows guest could not be booted using the Xen hypervisor.

The virtual machine scalability tests could not be run because more than two uniprocessor Windows guests could not be booted using the Xen hypervisor. At this time, it is not known when this issue will be fixed so that the tests can be tried again.

While Xen claims to support virtual SMP and virtual machine scalability, the results from these experiments demonstrate that enterprise customers should run their own tests to make sure such configurations actually work.
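
For readers who do want to try this themselves, the failing configuration amounts to nothing more exotic than raising the vcpus count in a Xen 3.0-era HVM guest definition. A minimal sketch of such an xm config follows; the name, disk path, and bridge are placeholders, not the settings used in the paper:

    # Minimal sketch of a Xen 3.0.x HVM guest config for a two-vCPU Windows guest.
    # Name, disk path, and bridge are placeholders, not taken from the paper.
    name         = "win2003-smp"
    builder      = "hvm"
    kernel       = "/usr/lib/xen/boot/hvmloader"
    device_model = "/usr/lib/xen/bin/qemu-dm"
    memory       = 1024
    vcpus        = 2               # the setting these tests could not boot
    acpi         = 1
    apic         = 1
    disk         = ["phy:/dev/vg0/win2003,hda,w"]
    vif          = ["type=ioemu, bridge=xenbr0"]
    boot         = "c"
    vnc          = 1

You would boot it with the usual xm create; the point is simply that there is nothing unusual about the configuration that failed here, so it is an easy sanity check to repeat in your own lab.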

Linux tests are coming — remember that VMware has shown we can run paravirtualized guests as well, so I would be very surprised if those results come out differently.

Let's end where we started, though: the hypervisor is not the full solution. To make this technology anything but a partitioning toy, you need a full virtual infrastructure solution stack surrounding it. Benchmarks mean little compared to your own workloads running in your own infrastructure. Talk to other VMware customers to get a real sense of what can happen when you virtualize.