

A Performance Comparison of Hypervisors

At VMworld last November I had the opportunity to talk to many ESX users and to discover for myself which performance issues were most on their minds. As it turned out, this endeavor was not very successful; everybody was generally happy with ESX performance. On the other hand, the performance and best-practices talks were among the most popular, indicating that users were very interested in learning new ways of getting the most out of ESX. VMworld was simply the wrong audience for reaching people who had concerns about performance; I was preaching to the choir instead of the non-virtualized souls out there. At the same time, aggressive marketing by other virtualization companies creates confusion about ESX performance. So we decided we needed to make a better effort at clearing up misconceptions and providing real performance data, especially to enterprises just starting to consider their virtualization options.

A Performance Comparison of Hypervisors is the first fruit of this effort. In this paper we consider a variety of simple benchmarks running in a Windows guest on both ESX 3.0.1 and the open-source version of Xen 3.0.3. We chose Windows guests for this first paper since Windows is the most widely used OS on x86 systems, and we used open-source Xen 3.0.3 since it was the only Xen variant that supported Windows guests at the time we ran the tests. Everything was run on an IBM X3500 with two dual-core Intel Woodcrest processors. Xen used the hardware-assist capabilities of this processor (Intel VT) to run an unmodified guest, while ESX used VMware's very mature binary translation technology. The results might not be what you expect from reading marketing material! Even for CPU and memory benchmarks dominated by direct execution, Xen shows significantly more overhead than ESX. The difference is bigger for a compilation workload, and huge for networking. The latter is due mostly to the lack of open-source paravirtualized (PV) device drivers for Windows; PV drivers are available in some commercial products based on Xen and should give much better performance. Xen was not able to run SPECjbb2005 at all, since SMP Windows guests were not supported at the time the tests were done. This support was added very recently in Xen 3.0.4, but the commercial products are still on Xen 3.0.3. ESX has had PV network drivers (vmxnet) and has been able to run SMP Windows guests for years.

We are currently exploring the many dimensions of the performance matrix: 64-bit, Linux guests, AMD processors, more complex benchmarks, and so on. Results will be posted to VMTN as they are obtained. Readers are encouraged to perform their own tests and measure the performance for themselves.
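For readers who want a quick sanity check of their own before committing to a full benchmark suite, a minimal sketch along the following lines can be timed once on native hardware and once inside a guest to get a rough feel for CPU and memory overhead. This is an illustration only, not one of the benchmarks used in the paper; the workloads, sizes, and script name are arbitrary.

    # quick_overhead_check.py -- illustrative only; not one of the paper's benchmarks.
    # Run the same loops natively and inside a guest and compare elapsed times.
    import time

    def cpu_bound(iterations=5000000):
        # Tight integer loop; dominated by direct execution on the processor.
        total = 0
        for i in range(iterations):
            total += i * i
        return total

    def memory_bound(size=10000000):
        # Builds and sums a large list to exercise memory allocation and bandwidth.
        return sum(list(range(size)))

    if __name__ == "__main__":
        for name, work in (("cpu", cpu_bound), ("memory", memory_bound)):
            start = time.perf_counter()
            work()
            print("%s workload: %.2f s" % (name, time.perf_counter() - start))

Of course, a single in-guest timing run says nothing about hypervisor CPU time consumed on other cores, nor about I/O; standard benchmark suites like those used in the paper remain the right tool for a real comparison.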

Please give us your feedback on this paper and the usefulness to you of this kind of work in general. And if ESX fans find this paper informative, so much the better!

8 thoughts on “A Performance Comparison of Hypervisors”

  1. Shad Collins

The biggest performance problem we have had with ESX is IO. We are de-virtualizing our SQL and Exchange environments and have seen fairly major improvements in performance.

    Reply
  2. Carl Klemmer

We’ve also seen IO issues. It would be really nice to see some stats on IO virtualization. We are doing some of our own tests using IOMeter.

    Reply
  3. Jeff Buell

    Heavy IO workloads are among the most challenging to virtualize, and will show significant overhead. I agree that we need to publish more benchmarks in order to set the right expectations. Also, please take a look at our Performance Tuning Best Practices paper for hints on getting the best IO performance: http://www.vmware.com/vmtn/resources/707

    Reply
  4. Jody

Jeff, what kind of benchmark leaves out IO when you are comparing system performance? Obviously the numbers must not be that impressive.

    Reply
  5. Ken Zink

Thanks for the report; I think it is a great start. Additional information I would like to see in future tests and papers includes:
    – Overall host CPU utilization for each benchmark execution
    – Results of an IO throughput test using a variety of IO request sizes
    – Results of a network throughput test using a variety of message sizes
– The same tests on 64-bit hardware and software
    More of this type of work is needed by the industry.

    Reply
  6. Jeff Buell

Jody, certainly storage I/O is very important and has been studied extensively at VMware, but proper benchmarking requires a large SAN to generate significant CPU utilization. Such tests are planned for the near future. Limited I/O results have been published (e.g., at VMworld 2006); these used a single disk and showed essentially no throughput effect from virtualization.
Ken, I agree that more benchmarking is needed, and you will see more. All of the things you mention are being considered. As for CPU utilization, all of the benchmarks except 1-client netperf saturate the CPU available within the virtual machine, plus a little extra for the hypervisor, which runs on other physical CPUs.

    Reply
  7. John Mauceri

Has anyone had any comparison experiences between ESX and MS Windows Virtual Server? We are beginning to set up a bake-off of these two technologies to determine pros and cons.

    Reply
  8. Matt Watson

We used Microsoft Virtual Server and it was horrible. The free VMware version was MUCH better than Virtual Server, and VMware ESX is better still.
I wouldn’t recommend anyone waste their time on Microsoft Virtual Server.

    Reply
