
Five new performance papers from VMware

Performance of VMware VMI

VROOM! blog entry: VMI performance benefits; White paper: The Performance of VMware VMI. (Note that Krishna will be speaking at VMworld Europe 2008.)

Since VMI-enabled kernels can run on native systems, the popular
Linux distributions Ubuntu Feisty Fawn (7.04) and Ubuntu Gutsy Gibbon
(7.10) shipped with VMI enabled by default in the kernel,
providing transparent performance benefits when they are run on ESX
Server 3.5. VMware is also working with Novell to include VMI in the SUSE Linux Enterprise Server distribution. …
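
As a quick sanity check, you can see whether a given kernel was built with VMI support by looking for the CONFIG_VMI option in its build configuration. Below is a minimal sketch in Python, assuming the usual Linux locations for the kernel config (/proc/config.gz or /boot/config-<release>):

    import gzip
    import os
    import platform

    def kernel_has_vmi():
        """Return True if the running kernel was built with CONFIG_VMI=y."""
        release = platform.release()
        # Distributions typically expose the kernel config in one of these spots.
        for path in ("/proc/config.gz", "/boot/config-%s" % release):
            if not os.path.exists(path):
                continue
            opener = gzip.open if path.endswith(".gz") else open
            with opener(path, "rt") as f:
                return any(line.strip() == "CONFIG_VMI=y" for line in f)
        return False

    print("VMI-enabled kernel:", kernel_has_vmi())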

The paper has details on the workloads that we ran, the benchmark
methodologies used, and the reasoning behind them. It will be clear
from the paper that VMware’s VMI-style paravirtualization offers
performance benefits for a wide variety of workloads in a totally
transparent way.

SPECweb2005 Performance on VMware ESX Server 3.5

VROOM! blog entry: SPECweb2005 Performance on VMware ESX Server 3.5; Performance study: SPECweb Performance.

Truth be told, with the many new features and performance
optimizations in VMware ESX Server 3.5, performance is no longer a
barrier to virtualization, even for the most I/O-intensive workloads.
To dispel the misconceptions some customers still hold, we decided
to showcase the performance of ESX Server by benchmarking it with
industry-standard, I/O-intensive benchmarks covering the whole
spectrum of I/O-intensive workloads. My colleague has already addressed
database performance. Here, I’d like to focus on web server
performance: in particular, the performance of a single virtual machine
running the highly network-intensive SPECweb2005 benchmark.

SPECweb2005 is a SPEC benchmark for measuring a system’s ability to
act as a web server. It is designed with three workloads to
characterize different web usage patterns: Banking (emulates online
banking), E-commerce (emulates an E-commerce site), and Support
(emulates a vendor support site providing downloads). The three
benchmark components have vastly different workload characteristics,
so we look at results from all three.
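
For each workload, SPECweb2005 reports how many simultaneous user sessions the system can sustain while meeting the benchmark’s quality-of-service rules, which require that a high percentage of page requests complete within a “good” response-time bound and nearly all within a “tolerable” bound. The sketch below shows the shape of such a check; the specific thresholds and percentages are illustrative assumptions, not the official run rules.

    # Sketch of a SPECweb2005-style QoS check. The bounds and shares below
    # are illustrative assumptions, not the official run rules.
    GOOD_SECONDS = 2.0        # assumed "good" response-time bound
    TOLERABLE_SECONDS = 4.0   # assumed "tolerable" response-time bound
    GOOD_SHARE = 0.95         # share of requests that must be "good"
    TOLERABLE_SHARE = 0.99    # share that must be at least "tolerable"

    def qos_compliant(response_times):
        """Return True if per-request response times meet both QoS targets."""
        n = len(response_times)
        if n == 0:
            return False
        good = sum(1 for t in response_times if t <= GOOD_SECONDS)
        tolerable = sum(1 for t in response_times if t <= TOLERABLE_SECONDS)
        return good >= GOOD_SHARE * n and tolerable >= TOLERABLE_SHARE * n

    # Hypothetical data: response times measured at two session counts.
    runs = {1000: [0.4, 0.9, 1.2, 1.8], 1200: [1.1, 2.5, 4.8, 5.2]}
    passing = [s for s, times in sorted(runs.items()) if qos_compliant(times)]
    print("Highest compliant session count:", max(passing) if passing else None)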

Performance Characterization of VMFS and RDM Using a SAN

White paper: Performance Characterization of VMFS and RDM Using a SAN

The test results described in this study show that VMFS and RDM provide similar I/O throughput for most of the workloads we tested. The small differences in I/O performance we observed occurred only when the virtual machine was CPU-saturated. These differences would be even smaller with real-life workloads, because most applications do not drive virtual machines to their full capacity. Most enterprise applications can, therefore, use either VMFS or RDM when configuring virtual disks for a virtual machine.
However, a few cases require the use of raw disks: backup applications that rely on inherent SAN features such as snapshots, and clustering applications (for both data and quorum disks). RDM is recommended in these cases, not for performance reasons but because these applications need lower-level disk control.
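
For reference, an RDM is just a mapping file on a VMFS volume that points at the raw LUN, created on the ESX host with the vmkfstools utility: -z for physical compatibility (the lower-level control these applications need) or -r for virtual compatibility. A minimal sketch; the LUN and datastore paths are hypothetical placeholders:

    import subprocess

    # Hypothetical paths; substitute the actual LUN and datastore on your host.
    RAW_LUN = "/vmfs/devices/disks/vmhba1:0:0:0"
    MAPPING_FILE = "/vmfs/volumes/datastore1/dbvm/dbdata-rdm.vmdk"

    def create_rdm(physical=True):
        """Create an RDM mapping file for a raw LUN via vmkfstools.

        -z makes a physical-compatibility RDM, which passes SCSI commands
        through to the device (needed by SAN snapshot and clustering
        software); -r makes a virtual-compatibility RDM, which behaves like
        a regular virtual disk while still storing data on the raw LUN.
        """
        mode = "-z" if physical else "-r"
        subprocess.check_call(["vmkfstools", mode, RAW_LUN, MAPPING_FILE])

    create_rdm(physical=True)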

Large Page Performance

Performance study: Large Page Performance

The enhanced large page support in VMware ESX Server 3.5 and ESX Server 3i v3.5 enables 32-bit virtual machines in PAE mode and 64-bit virtual machines to make better use of large pages than they could when running on earlier versions of ESX Server. Our study, using SPECjbb2005, shows that large pages can significantly improve the performance of this workload compared with small pages. The results demonstrate that if an application benefits from large pages on a native machine, it can potentially achieve a similar performance improvement in a virtual machine running on ESX Server 3.5 or ESX Server 3i v3.5.
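
Note that the guest operating system and application must also use large pages. SPECjbb2005 is a Java workload, so in a Linux guest that typically means reserving huge pages and running the JVM with -XX:+UseLargePages. A small sketch that checks the guest’s huge page pool, assuming the standard /proc/meminfo fields:

    def hugepage_status(meminfo_path="/proc/meminfo"):
        """Read the Linux huge page pool counters from /proc/meminfo."""
        fields = {}
        with open(meminfo_path) as f:
            for line in f:
                key, _, rest = line.partition(":")
                fields[key.strip()] = rest.strip()
        return {k: fields.get(k, "0")
                for k in ("HugePages_Total", "HugePages_Free", "Hugepagesize")}

    status = hugepage_status()
    print(status)
    if status["HugePages_Total"] == "0":
        # Reserve pages first, e.g.: echo 512 > /proc/sys/vm/nr_hugepages
        print("No huge pages reserved; -XX:+UseLargePages has nothing to use.")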

What’s New in VMware Infrastructure 3: Performance Enhancements

White paper: What’s New in VMware Infrastructure 3: Performance Enhancements

Table of Contents

  • Scalability Enhancements
  • New Guest Operating System Support
  • Networking Enhancements
      ◦ VMXNET Enhancements
      ◦ TCP Segmentation Offload (TSO)
      ◦ Jumbo Frames
      ◦ 10 Gigabit Ethernet
      ◦ NetQueue
      ◦ Intel I/O Acceleration Technology Support (Experimental)
  • CPU Enhancements
      ◦ Paravirtualized Linux Guests
  • Memory Enhancements
      ◦ NUMA Improvements
  • Storage Enhancements
      ◦ InfiniBand Support
  • Summary