
Monthly Archives: September 2012

Impact of Enhanced vMotion Compatibility on Application Performance

Enhanced vMotion Compatibility (EVC) is a technique that allows vMotion to proceed even when the destination cluster contains ESXi hosts with CPUs of different generations. EVC assigns a baseline to all ESXi hosts in the destination cluster so that all of them present a compatible feature set for vMotion. An example is assigning a Nehalem baseline to a cluster containing ESXi hosts with both Westmere and Nehalem processors. In this case, the features available only in Westmere, the newer of the two processors, would be hidden, and all ESXi hosts would “broadcast” that they have Nehalem features.
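To make the masking idea concrete, here is a minimal Python sketch of how a baseline hides newer CPU features. The feature sets below are simplified stand-ins (Westmere adds AES-NI and PCLMULQDQ on top of Nehalem’s SSE4.2 and POPCNT); this illustrates the concept only, not VMware’s actual EVC implementation.

    # Conceptual sketch of EVC-style feature masking; the feature sets
    # are simplified stand-ins, not real EVC baseline definitions.
    NEHALEM = {"SSE4.2", "POPCNT"}
    WESTMERE = NEHALEM | {"AES-NI", "PCLMULQDQ"}  # Westmere adds AES instructions

    def evc_visible_features(host_features, baseline_features):
        """A host exposes only the features present in the cluster baseline."""
        return host_features & baseline_features

    # With a Nehalem baseline, a Westmere host hides its newer features,
    # so every host in the cluster presents the same feature set and
    # vMotion is safe in any direction.
    assert (evc_visible_features(WESTMERE, NEHALEM)
            == evc_visible_features(NEHALEM, NEHALEM) == NEHALEM)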

Tests showed how EVC affected the performance of different applications. Several workloads were chosen to represent typical applications running in enterprise datacenters, including database, Java, encryption, and multimedia applications. To see the results and learn some best practices for performance with EVC, read Impact of Enhanced vMotion Compatibility on Application Performance.

Performance Best Practices for VMware vSphere 5.1

We’re pleased to announce the availability of Performance Best Practices for vSphere 5.1.  This is a book designed to help system administrators obtain the best performance from vSphere 5.1 deployments.

The book addresses many of the new features in vSphere 5.1 from a performance perspective.  These include:

  • Use of a system swap file to reduce VMkernel and related memory usage
  • Flex SE linked clones that can relinquish storage space when it’s no longer needed
  • Use of jumbo frames for hardware iSCSI (see the configuration sketch after this list)
  • Single Root I/O virtualization (SR-IOV), allowing direct guest access to hardware devices
  • Enhancements to SplitRx mode, a feature allowing network packets received in a single network queue to be processed on multiple physical CPUs
  • Enhancements to the vSphere Web Client
  • VMware Cross-Host Storage vMotion, which allows virtual machines to be moved simultaneously across both hosts and datastores
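To illustrate the jumbo-frames item above, here is a hedged sketch that raises a standard vSwitch MTU to 9000 using pyVmomi, the vSphere Python SDK. The vCenter address, credentials, host name, and vSwitch name are placeholders for illustration only.

    # Sketch: enable jumbo frames (MTU 9000) on a standard vSwitch for
    # hardware iSCSI traffic. All names and credentials are placeholders.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com", user="administrator",
                      pwd="secret",
                      sslContext=ssl._create_unverified_context())
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        host = next(h for h in view.view if h.name == "esx01.example.com")
        net_sys = host.configManager.networkSystem
        vswitch = next(v for v in net_sys.networkInfo.vswitch
                       if v.name == "vSwitch1")
        spec = vswitch.spec        # reuse the existing switch spec ...
        spec.mtu = 9000            # ... changing only the MTU
        net_sys.UpdateVirtualSwitch(vswitchName=vswitch.name, spec=spec)
    finally:
        Disconnect(si)

Keep in mind that jumbo frames only help when the larger MTU is configured end to end: the iSCSI VMkernel interface, the physical NICs, and the physical switch ports all need to carry the larger frames.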

We’ve also updated and expanded on many of the topics in the book.

These topics include:

  • Choosing hardware for a vSphere deployment
  • Power management
  • Configuring ESXi for best performance
  • Guest operating system performance
  • vCenter and vCenter database performance
  • vMotion and Storage vMotion performance
  • Distributed Resource Scheduler (DRS), Distributed Power Management (DPM), and Storage DRS performance
  • High Availability (HA), Fault Tolerance (FT), and VMware vCenter Update Manager performance
  • VMware vSphere Storage Appliance (VSA) and vCenter Single Sign-On Server performance

The book can be found at: http://www.vmware.com/pdf/Perf_Best_Practices_vSphere5.1.pdf.

Storage I/O Performance on vSphere 5.1 over 16Gb Fibre Channel

Around the vSphere 5.1 release time frame, 16Gb Fibre Channel fabrics and 16Gb FC cards became generally available. With the release of a 16Gb FC driver, the vSphere platform can now take full advantage of the new 16Gb FC HBAs for better storage I/O performance.

As described in the paper “Storage I/O Performance on vSphere 5.1 over 16Gb Fibre Channel”, storage I/O throughput doubles for larger-block I/Os compared to 8Gb FC. Using a single storage I/O worker, the paper shows that throughput improves with better CPU efficiency per I/O. For small-block random I/Os, 16Gb FC can attain much higher I/Os per second than an 8Gb FC connection.
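A little back-of-the-envelope arithmetic (illustrative only; these are nominal link rates, not results from the paper) shows why large blocks benefit most: the link bounds I/Os per second in inverse proportion to block size, so only large transfers come close to the wire speed.

    # Nominal full-duplex data rates per direction, in MB/s.
    LINK_MBPS = {"8Gb FC": 800, "16Gb FC": 1600}

    def max_iops(link, block_kb):
        """Upper bound on IOPS if the link itself is the only bottleneck."""
        return LINK_MBPS[link] * 1024 / block_kb

    for link in LINK_MBPS:
        for block_kb in (4, 64, 512):
            print(f"{link:>8}, {block_kb:>3} KB blocks: "
                  f"<= {max_iops(link, block_kb):,.0f} IOPS")
    # With 512 KB blocks the wire is the limit, so 16Gb FC doubles the
    # ceiling; with 4 KB random I/Os the array and host CPU typically
    # saturate long before the link does.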

vCenter Server 5.1 Database Performance with Large Inventories


Better performance, lower latency, and streamlined statistics handling are just some of the improvements you can expect to find in vCenter Server 5.1. The VMware performance team has published a paper about vCenter Server 5.1 database performance in large environments. The paper shows that statistics collection creates the biggest performance impact on the vCenter Server database. In vSphere 5.1, several aspects of statistics collection have been changed to improve the overall performance of the database. There are three sources of I/O to the statistics tables in vCenter Server: inserting statistics, rolling up statistics between different intervals, and deleting statistics when they expire.

These activities have been improved by changing the way the relevant data is persisted to the tables: the statistics tables are now partitioned rather than being fed through staging tables. Removing the staging tables also makes statistics collection more robust, resolving the issues described in KB 2011523 and KB 1003878. Scalability is improved as well: larger inventories can be supported because data no longer has to be read from and written to the old staging tables. The paper also includes best practices for taking advantage of these changes in environments where vCenter Server has a large inventory. For more details, see vCenter Server 5.1 Database Performance in Large-Scale Environments.
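To make the rollup idea concrete, here is a purely illustrative Python sketch of rolling 5-minute samples up into hourly averages. The real vCenter statistics schema, intervals, and rollup jobs are considerably more involved, and the sample data below is invented.

    # Illustrative rollup: average 5-minute samples into hourly buckets.
    from collections import defaultdict
    from datetime import datetime

    samples = [  # (entity, timestamp, CPU usage %)
        ("vm-42", datetime(2012, 9, 1, 10, 5), 31.0),
        ("vm-42", datetime(2012, 9, 1, 10, 10), 35.0),
        ("vm-42", datetime(2012, 9, 1, 11, 5), 12.0),
    ]

    hourly = defaultdict(list)
    for entity, ts, value in samples:
        hourly[(entity, ts.replace(minute=0))].append(value)

    rollup = {bucket: sum(vals) / len(vals) for bucket, vals in hourly.items()}
    # -> {("vm-42", 10:00): 33.0, ("vm-42", 11:00): 12.0}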

Here are the URLs for the paper, “VMware vCenter Server 5.1 Database Performance Improvements and Best Practices for Large-Scale Environments”:

http://www.vmware.com/resources/techresources/10302

http://www.vmware.com/files/pdf/techpaper/VMware-vCenter-DBPerfBestPractices.pdf


VXLAN Performance on vSphere 5.1

The VMware vSphere I/O performance team has published a paper that shows VXLAN performance on vSphere 5.1. Virtual extensible LAN (VXLAN) is a network encapsulation mechanism that enables virtual machines to be deployed on any physical host, regardless of the host’s network configuration.
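To see where the encapsulation overhead comes from, here is a short Python sketch that packs the 8-byte VXLAN header; the outer Ethernet, IP, and UDP headers that carry it on the physical network are omitted.

    # The VXLAN header: a flags byte with the I ("valid VNI") bit set,
    # 24 reserved bits, the 24-bit VXLAN Network Identifier (VNI), and
    # a final reserved byte -- 8 bytes in all.
    import struct

    def vxlan_header(vni):
        assert 0 <= vni < 2 ** 24, "the VNI is a 24-bit value"
        word0 = 0x08 << 24     # flags byte 0x08, then 24 reserved bits
        word1 = vni << 8       # 24-bit VNI, low byte reserved
        return struct.pack("!II", word0, word1)

    print(vxlan_header(5000).hex())   # -> 0800000000138800

The guest’s original Ethernet frame travels inside a UDP datagram immediately after this header, so each packet carries roughly 50 bytes of extra outer headers; that overhead is what makes the throughput and CPU comparisons in the paper interesting.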

The paper shows that VXLAN performs competitively compared to a configuration without VXLAN enabled. It describes the test results for three experiments: throughput for large and small message sizes, CPU utilization for large and small message sizes, and throughput and CPU utilization for 16 virtual machines with various message sizes. Results show that a virtual machine configured with VXLAN achieved networking performance similar to that of a virtual machine without VXLAN, both in terms of throughput and CPU cost. Additionally, vSphere 5.1 scales well as more virtual machines are added to the VXLAN network. Read the full paper here.