Storage performance improvements in vSphere 4.0

We made a huge number of performance improvements in vSphere 4.0, and the ESX storage stack was no exception. We ran a wide variety of micro- and real-world benchmarks to thoroughly evaluate and optimize vSphere’s storage subsystem. It is now even more efficient for the enterprise and ready to support the cloud.

With all these improvements, a wide variety of I/O-intensive applications run efficiently on vSphere. You can find details on the architectural changes and storage performance improvements in this white paper.

Some of the noteworthy improvements are:

· VMware Paravirtualized SCSI (PVSCSI) driver: vSphere ships with this new high-performance virtual storage adapter; until now, BusLogic and LSI Logic were the only choices. PVSCSI is best suited to running highly I/O-intensive applications in the guest more efficiently (that is, with fewer CPU cycles), which is made possible by a series of optimizations explained in the paper. A configuration sketch follows this list.

· iSCSI support improvements: We made significant improvements in the iSCSI stack for both software and hardware iSCSI, not just in performance but in features as well. Noteworthy among these are CPU efficiency improvements ranging from 7% to 52%, depending on the type and size of I/O.

· Software iSCSI and NFS support with jumbo frames: vSphere adds jumbo frame and 10Gbit NIC networking support for both NFS and iSCSI, which helps drive bandwidth many times higher than previous ESX releases. An example of enabling jumbo frames follows this list.

· File system improvements for an enhanced virtual desktop experience and scalable cloud solutions: We made several optimizations in the VMware File System (VMFS), with a special focus on enterprise desktop and cloud solutions. Together with improvements in other parts of ESX, the file system changes dramatically speed up several provisioning operations. One example is “boot storm” performance, where several hundred virtual machines are booted simultaneously in a virtual desktop environment. With these improvements, booting a large number of virtual machines simultaneously is many times faster than on ESX 3.5 (see the boot-storm sketch after this list).
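
To make the PVSCSI item concrete, a virtual machine’s virtual SCSI adapter type is selected in its .vmx configuration file. The entries below are a minimal sketch; the controller number and disk file name are placeholders, and the adapter can also be chosen in the vSphere Client when editing the virtual machine’s hardware. The guest additionally needs the PVSCSI driver, which ships with VMware Tools.

    # Sketch of .vmx entries selecting the PVSCSI adapter for SCSI controller 0.
    # "lsilogic" or "buslogic" would select the older emulated adapters instead.
    scsi0.present = "TRUE"
    scsi0.virtualDev = "pvscsi"
    scsi0:0.present = "TRUE"
    scsi0:0.fileName = "example-disk.vmdk"   # placeholder disk name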
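
For the jumbo frames item, the larger MTU has to be enabled on both the vSwitch and the VMkernel interface that carry iSCSI or NFS traffic. The service console commands below are a sketch; the vSwitch name, port group name, and addresses are placeholders, and the physical switches and the storage target must support jumbo frames end to end as well.

    # Raise the MTU of the vSwitch carrying IP storage traffic to 9000 bytes.
    esxcfg-vswitch -m 9000 vSwitch1
    # Create a VMkernel interface with a matching 9000-byte MTU
    # (the port group "IPStorage" and the addresses are placeholders).
    esxcfg-vmknic -a -i 10.0.0.10 -n 255.255.255.0 -m 9000 IPStorage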
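
And for the boot-storm scenario mentioned in the file system item, a rough way to approximate it on a single host is to power on every registered virtual machine at once from the service console. This is only an experimental sketch, assuming the VM configuration paths contain no spaces; a real desktop boot storm would typically be driven through vCenter across many hosts.

    # Sketch: power on all VMs registered on this host at roughly the same time.
    # vmware-cmd -l lists the configuration file paths of registered VMs.
    for cfg in $(vmware-cmd -l); do
        vmware-cmd "$cfg" start &   # launch each power-on without waiting
    done
    wait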

ESX supports several different storage protocols, such as Fibre Channel, iSCSI, and NFS. We published a white paper that compares I/O performance using each of these protocols. Results show that line rate can be achieved with each of the storage protocols for single or multiple virtual machines. The paper also highlights CPU efficiency improvements in vSphere compared to the previous release, which means that more virtual machines can now run on the same hardware. The graph below shows one example (sequential reads, 64 KB block size) of the relative CPU cost for each of the storage protocols. Results on ESX 4.0 are shown next to ESX 3.5 to highlight efficiency improvements on all protocols.
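
To make the metric in the graph concrete: CPU cost per unit of I/O can be derived from CPU utilization and throughput, then normalized against a baseline to give a relative cost. The short Python sketch below uses made-up numbers purely for illustration; the actual measurements and methodology are in the white paper.

    # Sketch: relative CPU cost per I/O (all numbers below are hypothetical).
    def cycles_per_io(cpu_util, core_hz, cores, iops):
        """CPU cycles consumed per I/O at a given utilization and throughput."""
        return (cpu_util * core_hz * cores) / iops

    old = cycles_per_io(cpu_util=0.30, core_hz=3.0e9, cores=2, iops=20000)  # e.g. ESX 3.5
    new = cycles_per_io(cpu_util=0.24, core_hz=3.0e9, cores=2, iops=20000)  # e.g. ESX 4.0
    print(f"relative CPU cost (new / old): {new / old:.2f}")  # below 1.0 means more efficient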


Hardware configuration and detailed results can be found in this protocol comparison white paper.

Figure: Relative CPU cost of 64 KB sequential reads in a single virtual machine, shown for each storage protocol on ESX 3.5 and ESX 4.0 (lower is better).