
Monthly Archives: August 2011

It’s no surprise that vSphere 5 holds up under pressure, but what about Hyper-V?

Before we head out to VMworld, I want to share with you some fascinating test results just published by Principled Technologies that compare vSphere 5 performance and scalability to Microsoft Hyper-V Server R2 SP1.

When Microsoft released Windows Server 2008 R2 SP1, it added a feature called “Dynamic Memory” that it claimed brought Hyper-V into parity with vSphere in VM density – the number of VMs doing useful work a host can support. We’d tested previous releases of Hyper-V without Dynamic Memory and found that, without the ability to overcommit memory, Hyper-V would hit a VM density brick wall long before vSphere reached the point of diminishing returns. Would Dynamic Memory yield a breakthrough improvement for Hyper-V? We had our doubts because of Dynamic Memory’s reliance on in-guest ballooning as its only way to reclaim memory from guests to support memory overcommitment. We knew from our history with ESX, ESXi and vSphere that getting good, predictable performance when VM density gets high and host memory is overcommitted requires more than just ballooning. We’ve built an array of technologies into vSphere, optimized over more than a decade, to make it a platform our customers feel comfortable with when pushing resources to the limit.

To get an answer, VMware commissioned Principled Technologies to do a side-by-side comparison of vSphere 5 and Hyper-V R2 SP1 throughput when running a SQL Server workload at high VM densities. They used the well-respected DVD Store Version 2 benchmark to measure total throughput delivered by a host running 24 VMs, and then 30 VMs. With 24 VMs of 4GB each, the 96GB host was just reaching full memory commit, and 30 VMs pushed it to 25% memory overcommit – familiar territory for vSphere users.
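The memory math behind that setup is simple enough to sketch. The VM counts, VM size, and host memory below come from the article; the helper function itself is just an illustration, not anything from the report.

```python
# Memory overcommit arithmetic for the test configurations described above.
# Values (24 or 30 VMs, 4GB each, 96GB host) are from the article.
def overcommit_pct(vm_count, vm_mem_gb, host_mem_gb):
    """Percent by which configured VM memory exceeds physical host memory."""
    configured = vm_count * vm_mem_gb
    return 100.0 * (configured - host_mem_gb) / host_mem_gb

print(overcommit_pct(24, 4, 96))  # 0.0  -> exactly at full memory commit
print(overcommit_pct(30, 4, 96))  # 25.0 -> 25% memory overcommit
```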

The results won’t surprise vSphere customers – here’s how the VM-by-VM score looked:


When Principled Technologies added up the throughput of each VM, vSphere 5 delivered 19% more aggregate throughput (orders per minute as measured by DVD Store) on the host running 30 VMs.


Findings that really pleased our vSphere engineers became evident when Principled Technologies dug a little deeper into the benchmark results. One of the key behaviors we aim for with vSphere is fairness across VMs: assuming equal resource shares and limits, each VM should perform about as well as its neighbors. Too much variability would be unfair to users whose workloads happen to land on an underperforming VM. vSphere 5 came out ahead in fairness as well, with a tighter standard deviation in throughput across the 30 VMs, as the smaller height of the vSphere box in the chart below shows.
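To make the fairness metric concrete: a smaller standard deviation of per-VM throughput means a tighter, fairer spread. The throughput numbers below are made up for illustration – they are not from the Principled Technologies report.

```python
# Fairness as spread of per-VM throughput (orders per minute).
# Both hosts deliver the same total; the second spreads it unevenly.
from statistics import mean, pstdev

even_host   = [1000, 990, 1010, 1005, 995]   # tight spread: fair to every VM
uneven_host = [1400, 600, 1300, 700, 1000]   # wide spread: some VMs starve

for opm in (even_host, uneven_host):
    print(f"mean={mean(opm):.0f} opm, stdev={pstdev(opm):.1f}")
```

Both hypothetical hosts average the same throughput, but only the standard deviation reveals that the second one is short-changing some of its VMs.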


Another striking validation of vSphere 5’s scalability advantage over Hyper-V R2 SP1 came when Principled Technologies compared aggregate DVD Store throughput for the 24 VM and 30 VM cases. Hyper-V’s throughput dropped by 3% when six VMs were added. Evidently, Hyper-V with Dynamic Memory doesn’t hold up so well when you make your VMs do some real work once host memory becomes overcommitted. In contrast, vSphere 5 throughput increased by 11% as those six additional VMs were added. vSphere 5 clearly handles the 25% memory overcommit condition with ease.
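The scaling comparison boils down to percent change in aggregate throughput between the 24-VM and 30-VM runs. Only the +11% and -3% figures come from the article; the helper function and the absolute throughput numbers below are hypothetical, chosen to reproduce those percentages.

```python
# Percent change in aggregate throughput when six VMs are added.
# Absolute orders-per-minute totals here are illustrative, not measured.
def scaling_delta(total_24vm, total_30vm):
    """Percent change in aggregate throughput from the 24-VM to the 30-VM run."""
    return 100.0 * (total_30vm - total_24vm) / total_24vm

print(scaling_delta(100000, 111000))  # 11.0 -> vSphere 5's gain
print(scaling_delta(100000, 97000))   # -3.0 -> Hyper-V's drop
```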


So, thanks to Principled Technologies, we have the answer to our question: vSphere 5 holds up better under workload and memory pressure, letting our users reliably achieve higher VM densities – and that means better scalability and lower costs. You can access the full report from Principled Technologies, titled “Virtualization Performance: VMware vSphere 5 vs. Microsoft Hyper-V R2 SP1,” on their Web site here, or from the copy we’ve posted on our site here.