

Monster Performance with SQL Server VMs on vSphere 5.5

VMware vSphere provides an ideal platform for customers to virtualize their business-critical applications, including databases, ERP systems, email servers, and even newly emerging technologies such as Hadoop.  I’ve been focusing on the first one (databases), specifically Microsoft SQL Server, one of the most widely deployed database platforms in the world.  Many organizations have dozens or even hundreds of instances deployed in their environments. Consolidating these deployments onto modern multi-socket, multi-core, multi-threaded server hardware is an increasingly attractive proposition for IT administrators.

Achieving optimal SQL Server performance has been a continual focus for VMware; with current vSphere 5.x releases, VMware supports much larger “monster” virtual machines that can scale up to 64 virtual CPUs and 1 TB of RAM, including exposing virtual NUMA architecture to the guest. In fact, the main goal of this blog and accompanying whitepaper is to refresh a 2009 study that demonstrated SQL performance on vSphere 4, given the marked technology advancements on both the software and hardware fronts.
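For illustration, the scale-up limits described above map to a handful of virtual hardware settings in a VM's .vmx file. The option names below are standard vSphere settings, but the values are only an example of a 64-vCPU, 1 TB configuration, not necessarily the exact configuration used in the paper:

```ini
; Example .vmx fragment for a "monster" VM (illustrative values only)
numvcpus = "64"                     ; 64 virtual CPUs (the vSphere 5.5 maximum)
cpuid.coresPerSocket = "16"         ; 4 virtual sockets x 16 cores each
memSize = "1048576"                 ; 1 TB of RAM, specified in MB
numa.vcpu.maxPerVirtualNode = "16"  ; advanced option: vCPUs per virtual NUMA node
```

By default, vSphere exposes a virtual NUMA topology to VMs with more than 8 vCPUs; `cpuid.coresPerSocket` influences the vNUMA topology the guest sees, which is why the whitepaper examines different topologies.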

These tests show that large SQL Server 2012 databases run extremely efficiently with VMware, achieving great performance in a variety of virtual machine configurations with only minor tunings to SQL Server and the vSphere ESXi host. These tunings and other best practices for fully optimizing large virtual machines for SQL Server databases are presented in the paper.

One test in the paper shows the maximum host throughput achieved with different numbers of virtual CPUs per VM, starting at 8 vCPUs per VM and doubling to 16, 32, and finally 64 (the maximum supported by vSphere 5.5). DVD Store, a popular open-source database workload and a key component of the VMmark benchmark, was used to stress the VMs. Here is a graph from the paper showing the 8 vCPU x 8 VMs case, which achieved an aggregate of 493,804 opm (operations per minute) on the host:

[Figure: 8 x 8 vCPU VM throughput]
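DVD Store is driven from a separate load-generation client. As a rough, hedged sketch of what an invocation looks like (the flag names follow the DVD Store 2 SQL Server driver, but check your version's documentation; the values here are placeholders, not the parameters used in the paper):

```
REM Illustrative DVD Store 2 driver invocation, run from a load-driver client.
REM Flag names follow the DS2 SQL Server driver; values are placeholders only.
ds2sqlserverdriver.exe --target=sqlvm01 ^
    --n_threads=32 ^
    --db_size=50GB ^
    --warmup_time=5 ^
    --run_time=30 ^
    --think_time=0
```

The driver reports throughput in opm (operations per minute), which is the metric quoted in the results above; the aggregate host figure is the sum across all VMs under test.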

There are also tests using CPU affinity to show the performance differences between physical cores and logical processors (Hyper-Threads), the impact of various virtual NUMA (vNUMA) topologies, and experiments with the Latency Sensitivity advanced setting.
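These experiments correspond to a few per-VM advanced settings. As a hedged sketch (the setting names are real vSphere 5.5 advanced options, but the values are illustrative examples, not the paper's test matrix):

```ini
; Illustrative .vmx advanced settings (values are examples only)
sched.cpu.latencySensitivity = "high"  ; the Latency Sensitivity setting discussed above
sched.cpu.affinity = "0-15"            ; pin the VM's vCPUs to specific host logical CPUs
numa.vcpu.preferHT = "TRUE"            ; prefer hyper-threads within one NUMA node
```

Note that CPU affinity is generally discouraged outside of controlled experiments like these, since it constrains the ESXi scheduler's ability to balance load.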

For more details and the test results, please download the whitepaper: Performance and Scalability of Microsoft SQL Server on VMware vSphere 5.5.


About David Morse

David Morse is a member of the VMware Performance Engineering Group. He has 18 years of benchmarking experience at VMware, Dell, and NCR. He has led benchmarking teams which were responsible for numerous benchmark leadership positions. Since 1999, he has held a variety of roles within the Standard Performance Evaluation Corporation (SPEC), including chairing the Open Systems Steering Committee and serving on the Board of Directors. David has a B.S. in Computer Engineering from the University of South Carolina and is a Red Hat Certified Engineer (RHCE).

6 thoughts on “Monster Performance with SQL Server VMs on vSphere 5.5”

  1. Ganesh Ramaswamy

    David I just want to congratulate you on such a brilliant document. It is such a comprehensive and detailed study that has allowed me to infer many best practice recommendations and saved me weeks of work.
    The standard of that document is incredible, thank you so much.

