VMware vSphere includes a number of enhancements that enable it to deliver very high I/O performance. In this study, we demonstrate that vSphere can easily sustain the extreme I/O throughput demands made possible by new products such as the Enterprise Flash Drives (EFDs) offered by EMC. In experiments conducted at EMC labs, we achieved just over 350,000 I/O operations per second with:
- A single vSphere host running just three virtual machines
- Latencies under 2 ms
- An I/O block size of 8KB
What does such high throughput mean for customers? Consider this: the entire Wikipedia database is supported by 20 MySQL servers, each 200GB to 300GB in size. On average, Wikipedia receives 50,000 HTTP requests or 80,000 SQL queries per second [1], which translates to 4.3 billion hits per day. With the storage infrastructure used in our experiments, we could easily accommodate the entire Wikipedia database and still have space to spare. A single vSphere host driving more than 350,000 I/O requests per second could easily support the throughput requirements of Wikipedia.
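As a quick sanity check of the comparison above, the figures are simple to verify. The requests-per-second rate and database sizes come from the text; the midpoint database size is our own rough assumption:

```python
# Back-of-the-envelope check of the Wikipedia comparison.
# Requests/s and database sizes are taken from the article text.
http_requests_per_sec = 50_000
seconds_per_day = 24 * 60 * 60              # 86,400

hits_per_day = http_requests_per_sec * seconds_per_day
print(f"{hits_per_day / 1e9:.2f} billion hits per day")       # ~4.32 billion

# 20 MySQL servers of 200-300GB each; the 250GB midpoint is an assumption
total_db_size_gb = 20 * 250
print(f"~{total_db_size_gb / 1000:.0f} TB of database storage")  # ~5 TB
```

About 5TB of database storage is comfortably within the capacity of the arrays used in our tests, which is why the entire database would fit with room left over.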
In late May 2008, we published a blog article on achieving 100K I/O operations per second with ESX 3.5. To achieve that, we used 495 15K RPM Fibre Channel disks spread across three CX3-80 arrays. To push the envelope further with vSphere, we needed more storage bandwidth. It would have taken approximately 1,750 15K RPM Fibre Channel drives in 120 Disk Array Enclosures to provide 350,000 I/O operations per second. Adding redundancy would push the numbers higher still, to as many as 3,500 drives for a RAID 1/0 configuration, doubling the entire SAN infrastructure.
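The drive-count estimate above follows from standard planning figures. A minimal sketch, assuming roughly 200 IOPS per 15K RPM Fibre Channel disk and 15 drives per Disk Array Enclosure (typical rules of thumb, not measurements from this study):

```python
# Rough reconstruction of the Fibre Channel drive-count estimate.
# ~200 IOPS per 15K RPM FC disk and 15 drives per DAE are assumed
# planning figures, not results measured in this study.
target_iops = 350_000
iops_per_fc_disk = 200
drives_per_dae = 15

fc_drives = target_iops // iops_per_fc_disk    # 1,750 drives
daes = -(-fc_drives // drives_per_dae)         # ceil division -> 117 DAEs
raid10_drives = fc_drives * 2                  # 3,500 drives mirrored (RAID 1/0)

print(fc_drives, daes, raid10_drives)
```

The ceiling comes to 117 enclosures; the article's figure of 120 allows for spares and even fill across arrays. RAID 1/0 mirrors every drive, which is why redundancy doubles the count.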
Instead, only 30 EFDs housed in three CX4-960 arrays provided enough storage bandwidth for vSphere to drive just over 350,000 I/O requests per second. We could have achieved a higher I/O operations per second figure with a smaller block size, but we focused our studies on 8KB blocks because that size is most representative of real applications. We chose an I/O pattern that was 100% random in nature.
- 3 VMs on one vSphere host supported 350,000 I/O operations per second with an 8KB block size (Figure 1)
- A single VM with 2 vCPUs and 4GB of memory provided just under 120,000 I/O operations per second with 8KB blocks
- Latency as measured in ESX was just under 2 ms
- The new paravirtualized SCSI adapter (pvSCSI) offered a 12% improvement in throughput at 18% less CPU cost compared to the LSI virtual adapter
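The headline numbers above can be translated into aggregate bandwidth and per-device load with simple arithmetic. The IOPS, block size, and EFD count come from the text; the conversion itself is a sketch:

```python
# Converting the headline results into bandwidth and per-EFD load.
# IOPS, block size, and EFD count are taken from the article text.
iops = 350_000
block_size_kb = 8
efd_count = 30

bandwidth_mb_per_sec = iops * block_size_kb / 1024   # ~2,734 MB/s aggregate
iops_per_efd = iops / efd_count                      # ~11,667 IOPS per EFD

print(f"{bandwidth_mb_per_sec:.0f} MB/s aggregate throughput")
print(f"{iops_per_efd:,.0f} IOPS per EFD on average")
```

At roughly 2.7GB/s of sustained random I/O, each EFD carries the load of dozens of 15K RPM spindles, which is what makes the 30-drive configuration possible.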
Figure 1. Scaling I/O performance through vSphere
We are documenting all the experiments in detail in a white paper that will be posted on the VMware website; we encourage readers to refer to it for more details.
This testing was the result of a joint effort between VMware and EMC. We would like to thank the Midrange Partner Solutions Engineering team at EMC, Santa Clara for providing access to the hardware, for the use of their lab, and for their collaboration throughout this project.
For comments or questions, please join us on the VMware Performance Community website.
About the Authors:
Chethan Kumar is a member of the Performance Engineering team at VMware. Radhakrishnan Manga is a member of the Midrange Partner Solutions Engineering team at EMC.