
Detailed Stats for vSphere Flash Read Cache

The whitepaper Performance of vFlash Read Cache in VMware vSphere 5.5 [http://www.vmware.com/files/pdf/techpaper/vfrc-perf-vsphere55.pdf] provides performance details for several different workloads.  It also covers how to tune and configure vFRC to obtain the best performance.  This blog article explains how to obtain the runtime statistics for vSphere Flash Read Cache used in the whitepaper, such as the cache hit rate, the latency of cached I/Os, and the average number of cache blocks evicted.

You can run esxcli to get some of these stats using the following commands:

~ # esxcli storage vflash cache list

This lists the identifiers of the caches that are currently in use, one per vFlash-enabled VMDK.  To retrieve the vFRC statistics for a particular vFlash-enabled VMDK, use the following command:

~ # esxcli storage vflash cache get -c <cache-identifier>

However, a few more advanced statistics, such as the amount of data cached at any point in time, can be obtained by directly accessing the VSI nodes.  The process is as follows:

The cache identifier can be obtained either from the esxcli command shown above or by using:

~ # cacheID=`vsish -e ls /vmkModules/vflash/module/vfc/cache/`
~ # vsish -e get /vmkModules/vflash/module/vfc/cache/${cacheID}stats

This displays an output similar to the following:

vFlash per cache instance statistics {
cacheBlockSize:8192
numBlocks:1270976
numBlocksCurrentlyCached:222255
numFailedPrimaryIOs:0
numFailedCacheIOs:0
avgNumBlocksOnCache:172494
read:vFlash per I/O type Statistics {
numIOs:168016
avgNumIOPs:61
maxNumIOPs:1969
avgNumKBs:42143
maxNumKBs:227891
avgLatencyUS:16201
maxLatencyUS:41070
numPrimaryIOs:11442
numCacheIOs:156574
avgCacheLatencyUS:17130
avgPrimaryLatencyUS:239961
cacheHitPercentage:94
}
write:vFlash per I/O type Statistics {
numIOs:102264
avgNumIOPs:307
maxNumIOPs:3982
avgNumKBs:10424
maxNumKBs:12106
avgLatencyUS:3248
maxLatencyUS:31798
numPrimaryIOs:102264
numCacheIOs:0
avgCacheLatencyUS:0
avgPrimaryLatencyUS:3248
cacheHitPercentage:0
}
rwTotal:vFlash per I/O type Statistics {
numIOs:270280
avgNumIOPs:88
maxNumIOPs:2027
avgNumKBs:52568
maxNumKBs:233584
avgLatencyUS:11300
maxLatencyUS:40029
numPrimaryIOs:113706
numCacheIOs:156574
avgCacheLatencyUS:17130
avgPrimaryLatencyUS:27068
cacheHitPercentage:58
}
flush:vFlash per operation type statistics {
lastOpTimeUS:0
numBlocksLastOp:0
nextOpTimeUS:0
numBlocksNextOp:0
avgNumBlocksPerOp:0
}
evict:vFlash per operation type statistics {
lastOpTimeUS:0
numBlocksLastOp:0
nextOpTimeUS:0
numBlocksNextOp:0
avgNumBlocksPerOp:0
}
}

This output contains all of the metrics discussed in the vSphere Flash Read Cache whitepaper.  This information can be used to size the cache and cache blocks in the most effective way.
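For quick analysis, the cache-wide counters in this dump can be turned into sizes with a few lines of Python. The sketch below is only an illustration and assumes the vsish output has been redirected to a file (the filename is made up); it uses the field names shown in the sample output above.

def parse_counters(dump):
    # Collect key:value pairs from the vsish dump; the cache-wide counters
    # used below each appear once, so keeping the first occurrence is safe.
    counters = {}
    for line in dump.splitlines():
        key, sep, value = line.strip().partition(":")
        if sep and value.isdigit():
            counters.setdefault(key, int(value))
    return counters

# e.g. vsish -e get /vmkModules/vflash/module/vfc/cache/${cacheID}stats > vfrc_stats.txt
with open("vfrc_stats.txt") as f:
    c = parse_counters(f.read())

cache_size_gib = c["numBlocks"] * c["cacheBlockSize"] / 2**30
cached_gib = c["numBlocksCurrentlyCached"] * c["cacheBlockSize"] / 2**30
print("cache size: %.1f GiB, currently cached: %.1f GiB (%.0f%% full)"
      % (cache_size_gib, cached_gib, 100.0 * cached_gib / cache_size_gib))

For the sample output above, this works out to a cache of roughly 9.7 GiB with about 1.7 GiB of data currently cached.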

Disclaimer: This interface (vsish) is not officially supported by VMware, so please use at your own risk.

First Certified SAP BW-EML Benchmark on Virtual HANA

The first certified SAP Business Warehouse-Enhanced Mixed Workload (BW-EML) standard application benchmark based on a virtual HANA database was recently published by HP.  We worked with HP to configure and run this benchmark using a virtual HANA database running on vSphere 5.5 in a monster VM of 64 vCPUs and almost 1TB of RAM.  The test was run with a total of 2 billion records and achieved a throughput of 111,850 ad-hoc navigation steps per hour.

The same hardware configuration was used by HP to publish a native-only benchmark with the same number of records. In that test, the result was 126,980 ad-hoc navigation steps per hour; the virtual HANA result was only about 12% lower than this native result.

Figure: BW-EML benchmark throughput, virtual vs. native HANA

Although the hardware setup was the same, this comparison between native and virtual performance has one wrinkle that gave the native system a slight advantage, estimated to be about 5%.

The estimated 5% advantage for the native system comes from the difference between cores and threads and the maximum number of vCPUs per VM.  In the case of the native test, the BW-EML workload was able to exercise all 120 hardware threads of the physical 60-core server.  The number of threads is twice the number of physical server cores because these processors use Intel Hyper-Threading technology.

In vSphere 5.5 (the current version), the maximum number of vCPUs in a single VM is 64. Each vCPU is mapped to a hardware thread when scheduled to run, which limits the number of hardware threads a single VM can use to 64. For this test, that meant only slightly more than half of the server's 120 hardware threads could be used by the HANA virtual machine: it was not able to directly benefit from Hyper-Threading, but it was able to use all 60 cores.

The benefit of Hyper-Threading can be as much as 20% to 30% for some applications, but in the case of the BW-EML benchmark, it is estimated to be about 5%.  This estimate was found by running the native BW-EML benchmark system with and without Hyper-Threading enabled.  Because the virtual machine was not able to use the Hyper-Threads, it is estimated that the native system had a 5% advantage due to its ability to use all 120 threads of the physical server.

In theory, the advantage for the native system could be reduced by either creating a bigger virtual machine or running the native system without Hyper-Threading.  If this were done, then the difference between native and virtual should be about 5% smaller and would mean that the difference between native and virtual could shrink to single digits (approximately 7%).

Additional details about the certified SAP BW-EML benchmark configurations used in the tests: SAP HANA 1.0 on HP DL580 Gen8, 4 processors with 60 cores / 120 threads using Intel Xeon E7-4880 v2 running at 2.5 GHz and 1TB of main memory (each processor has 15 cores / 30 threads).  The application servers were SAP NetWeaver 7.30 on HP BL680 G7, 4 processors with 40 cores / 80 threads using Intel Xeon E7-4870 running at 2.4 GHz and 1TB of main memory (each processor has 10 cores / 20 threads). The OS used for all servers was SuSE Enterprise Linux Server 11 SP2.  The certification number for the native test is 2014009 and the certification number for the virtual test is 2014021.

Reducing Power Consumption in the vSphere 5.5 Datacenter

Today’s virtualized datacenters consist of several servers connected to shared storage; this configuration has been necessary to enable the flexibility that virtualization provides while still allowing for high performance. However, the power consumption of this setup is a major concern because shared storage can consume as much as 2-3x the power of a single, mid-range server. In this blog, we look at the performance impact of replacing shared storage with local disks and PCIe flash storage in a vSphere 5.5 datacenter to save power.

We leverage two innovative vSphere features in this performance test:

  • Unified live migration, first introduced with vSphere 5.1, removes the shared storage requirement for vMotion and allows combining traditional vMotion and Storage vMotion into one operation. This combined live migration copies both the virtual machine’s memory and storage over the network to the destination vSphere host. This feature offers administrators significantly greater simplicity and flexibility in managing and moving virtual machines across their virtual infrastructures compared to the traditional vMotion and Storage vMotion migration solutions. More information about vMotion can be found in the VMware vSphere 5.1 vMotion Architecture, Performance, and Best Practices white paper.
  • vSphere 5.5 improves server power management by enabling processor C-states, in addition to the previously-used P-states, to improve power savings in the Balanced policy setting. More information about these improvements can be found in the Host Power Management in vSphere 5.5 white paper.

We measure the performance and power savings of these features when replacing shared storage with local disks and PCIe flash storage using a modified version of VMware VMmark 2.5. VMmark is a multi-host virtualization benchmark that uses varied application workloads, as well as common datacenter operations to model the demands of the datacenter. Each VMmark tile contains a set of VMs running diverse application workloads as a unit of load. For more details, see the VMmark 2.5 overview. The benchmark was modified to replace the traditional vMotion workload component with the new shared-nothing, unified live migration.

Testing Methodology

VMmark 2.5 was modified to convert the vMotion workload into a migration without shared storage. All other workloads were unchanged. This allowed a comparison of local, direct attached storage to a traditional Fibre Channel SAN. We measured the power consumption of each configuration using a pair of Yokogawa WT210 power meters, one attached to the servers and the other attached to the external storage.
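For illustration only, the snippet below sketches how a combined compute-and-storage live migration can be requested through the vSphere API with pyVmomi. This is not the VMmark harness code; the vCenter address, credentials, and object names are placeholders, and certificate handling and task monitoring are omitted for brevity.

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details; real scripts should also handle SSL certificates.
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local", pwd="password")
content = si.RetrieveContent()

def find_obj(vimtype, name):
    # Return the first inventory object of the given type with the given name.
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.DestroyView()

vm = find_obj(vim.VirtualMachine, "workload-vm-01")           # powered-on VM
dest_host = find_obj(vim.HostSystem, "esxi-02.example.com")   # destination host
dest_ds = find_obj(vim.Datastore, "esxi-02-local-ssd")        # destination local datastore

# A RelocateSpec that changes both the host and the datastore of a powered-on VM
# results in a unified live migration: memory and virtual disks move over the network.
spec = vim.vm.RelocateSpec()
spec.host = dest_host
spec.pool = dest_host.parent.resourcePool
spec.datastore = dest_ds

task = vm.RelocateVM_Task(spec=spec, priority=vim.VirtualMachine.MovePriority.defaultPriority)
# ... wait for the task to complete, then Disconnect(si)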

Configuration

  • Systems Under Test: 2x Dell PowerEdge R710 servers
  • CPUs (per server): 2x Intel Xeon X5670 @ 2.93 GHz
  • Memory (per server): 96 GiB
  • Hypervisor: VMware vSphere 5.5
  • Local Storage (per server): 1x 785GB Fusion-io ioDrive2, 2x 300GB 10K RPM SAS drives in RAID 0
  • SAN: 8Gb Fibre Channel, 30x 200GB SATA Flash drives, 30x 600GB 15K RPM SAS drives
  • Benchmarking software: VMware VMmark 2.5

All I/O-intensive virtual disks were stored on the Fusion-io devices for local storage tests or the SATA flash drives for the SAN tests.  This included the DVD Store database files, the mail server database, and the Olio database.  All remaining virtual machine data was stored on the local SAS drives for the local storage tests and the SAN SAS drives for the SAN tests.

Results
 
VMmark performance using shared-nothing, unified live migration backed by fast local storage showed only minor differences compared to the results with shared storage.  The largest variance was seen in the infrastructure operations, which was expected as the vMotion workload was modified to include a storage migration.  The chart below shows the scores normalized to the 3-tile SAN test results.

Figure: VMmark scores normalized to the 3-tile SAN configuration

When we add the power data to these results and compare the Performance Per Kilowatt (PPKW), we see a much different picture.  The local storage-based PPKW score is much higher than that of shared storage due to higher power efficiency.

Figure: Performance per kilowatt (PPKW) for the local storage and SAN configurations

The reason for this difference is the power consumption of each configuration.  The SAN consumes over 1000 watts, which is typical of this storage solution.  Replacing that power-hungry component with local storage greatly reduces vSphere datacenter power consumption while maintaining good performance.

Figure: Power consumption of the local storage and SAN configurations
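As a back-of-the-envelope illustration of the PPKW metric, the sketch below divides a VMmark score by the total measured power in kilowatts. The scores and wattages here are placeholders, not the measured values from this study.

def perf_per_kw(vmmark_score, server_watts, storage_watts=0.0):
    # VMmark score divided by total measured power in kilowatts.
    total_kw = (server_watts + storage_watts) / 1000.0
    return vmmark_score / total_kw

# Hypothetical numbers for illustration only.
san_config = perf_per_kw(vmmark_score=10.0, server_watts=600.0, storage_watts=1100.0)
local_config = perf_per_kw(vmmark_score=9.8, server_watts=650.0)

print("SAN:   %.2f score/kW" % san_config)
print("Local: %.2f score/kW" % local_config)
print("Local storage advantage: %.1fx" % (local_config / san_config))

Because the shared array adds a large, constant power draw, even a slightly lower VMmark score on local storage can translate into a substantially higher PPKW.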

This SAN should be able to support approximately 25 VMmark tiles (based on the storage capacity of the SSDs), roughly five times the load being supported by the two servers we had available for testing in our lab. However, it should be noted that these servers are two generations old. Current-generation two-socket servers with a comparable power usage can support 2-3x the number of tiles based on published VMmark results. This would imply that the SAN could support at most four current-generation servers. While an additional two servers will further amortize the power cost of the SAN, significant power savings would still be achieved with an all-local storage architecture.

This is not without a cost.  Removing shared storage reduces the functionality of the datacenter because a number of vSphere features, such as DRS and traditional vMotion, no longer function. The reduction in infrastructure performance without shared storage limits this approach to virtual machines with smaller disks, which can be moved between hosts fairly quickly without shared storage. Virtual machines with large disks would take much longer to move and are better suited to a shared storage environment.

We have shown that it is possible to significantly reduce datacenter power consumption without significantly reducing performance by replacing shared storage with local storage solutions.  Unified live migration enables the use of local storage without a significant infrastructure performance penalty while maintaining application performance comparable to traditional environments using shared storage for the server workloads represented in VMmark.  The resulting elimination of shared storage creates significant power savings and lower operations costs.

Virtual SAP HANA Achieves Production Level Performance

VMware CEO Pat Gelsinger announced production support for SAP HANA on VMware vSphere 5.5 at EMC World this week during his keynote. This is the end result of a very thorough joint testing project over the past year between VMware and SAP.

HANA is an in-memory platform (including database capabilities) from SAP that has enabled huge gains in performance for customers and has been a high priority for SAP over the past few years.  In order for HANA to be supported in a virtual machine on vSphere 5.5 for production workloads, we worked closely with SAP to enable, design, and measure in-depth performance tests.

In order to enable the testing and ongoing production support of SAP HANA on vSphere, two HANA appliance servers were ordered, shipped, and installed in SAP’s labs in Walldorf, Germany.  These systems are dedicated to running SAP HANA on vSphere onsite at SAP.  Each system is an Intel Xeon E7-8870 (Westmere-EX) based four-socket server with 1TB of RAM.  They are used for performance testing and also for ongoing support of HANA on vSphere.  Additionally, VMware has onsite support engineering to assist with the testing and support.

SAP designed an extensive performance test suite that used a large number of test scenarios to stress all functions and capabilities of HANA running on vSphere 5.5.  They included OLAP and OLTP with a wide range of data sizes and query functions. In all, over one thousand individual test cases were used in this comprehensive test suite.  These same tests were run on identical native HANA systems and the difference between native and virtual tests was used as the key performance indicator.

In addition, we also tested vSphere features including vMotion, DRS, and VMware HA with virtual machines running HANA.  These tests were done with the HANA virtual machine under heavy stress.

The test results have been extremely positive and are one of the key factors in the announcement of production support.  The difference between virtual and native HANA across all the performance tests was on average within a few percentage points.

The vMotion, DRS, and VMware HA tests were all completed without issues.  Even with the large memory sizes of HANA virtual machines, we were still able to successfully migrate them with vMotion while under load with no issues.

One of the results of the extensive testing is a best practices guide for HANA on vSphere 5.5. This document includes a performance guide for running HANA on vSphere 5.5 based on this extensive testing.  The document also includes information about how to size a virtual HANA instance and how VMware HA can be used in conjunction with HANA’s own replication technology for high availability.

Power Management and Performance in VMware vSphere 5.1 and 5.5

Power consumption is an important part of the datacenter cost strategy. Physical servers frequently offer a power management scheme that puts processors into low power states when not fully utilized, and VMware vSphere also offers power management techniques. A recent technical white paper describes the testing and results of two performance studies: The first shows how power management in VMware vSphere 5.5 in balanced mode (the default) performs 18% better than the physical host’s balanced mode power management setting. The second study compares vSphere 5.1 performance and power savings in two server models that have different generations of processors. Results show the newer servers have 120% greater performance and 24% improved energy efficiency over the previous generation.

For more information, please read the paper: Power Management and Performance in VMware vSphere 5.1 and 5.5.

VDI Performance Benchmarking on VMware Virtual SAN 5.5

In the previous blog series, we presented VDI performance benchmarking results with the VMware Virtual SAN public beta. We have now announced the general availability of VMware Virtual SAN 5.5, which is part of VMware vSphere 5.5 U1 GA, and of VMware Horizon View 5.3.1, which supports Virtual SAN 5.5. In this blog, we present the VDI performance benchmarking results with the Virtual SAN GA bits and highlight the CPU improvements and 16-node scaling results. With Virtual SAN 5.5 and the default policy, we could successfully run 1615 heavy VDI users (VDImark) out-of-the-box on a 16-node Virtual SAN cluster, about 5% more consolidation than with the Virtual SAN public beta.

Figure: Virtual SAN and Horizon View test bed block diagram

To simulate the VDI workload, which is typically CPU bound and sensitive to I/O, we use VMware View Planner 3.0.1. We run View Planner and consolidate as many heavy users as we can on a particular cluster configuration while meeting the quality of service (QoS) criteria, and we define the resulting score as VDImark. For the QoS criteria, View Planner operations are divided into three main groups: (1) Group A for interactive operations, (2) Group B for I/O operations, and (3) Group C for background operations. The score is determined separately for Group A and Group B user operations by calculating the 95th percentile latency of all the operations in each group. The default thresholds are 1.0 second for Group A and 6.0 seconds for Group B. Please refer to the user guide, and the run and reporting guides for more details. In addition to the response times of the operations, the scoring also takes into account compliance of the setup and configuration, among other factors.
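To make the scoring rule concrete, the sketch below checks made-up latency samples against the Group A and Group B thresholds described above, using a simple nearest-rank 95th percentile. It is an illustration only, not View Planner's actual scoring code.

def p95(latencies):
    # Nearest-rank 95th percentile of a list of latencies (seconds).
    ordered = sorted(latencies)
    index = max(0, int(round(0.95 * len(ordered))) - 1)
    return ordered[index]

group_a = [0.3, 0.4, 0.5, 0.6, 0.8]   # interactive operations, seconds (made up)
group_b = [2.1, 2.8, 3.5, 4.0, 5.2]   # I/O operations, seconds (made up)

for name, latencies, limit in [("Group A", group_a, 1.0), ("Group B", group_b, 6.0)]:
    value = p95(latencies)
    status = "PASS" if value <= limit else "FAIL"
    print("%s: 95th percentile = %.2fs (limit %.1fs) -> %s" % (name, value, limit, status))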

As discussed in the previous blog, we used the same experimental setup (shown below), where each Virtual SAN host has two disk groups and each disk group has one 200GB PCI-e solid-state drive (SSD) and six 300GB 15K RPM SAS disks. We use the default policy when provisioning the automated linked-clone pool with VMware Horizon View for all our experiments.

Figure: Virtual SAN 5.5 experimental setup

CPU Improvements in Virtual SAN 5.5

Several optimizations were made in Virtual SAN 5.5 compared to the previously available public beta version, one of the most prominent being a reduction in Virtual SAN's CPU usage. To highlight the CPU improvements, we compare the View Planner score on Virtual SAN 5.5 (vSphere 5.5 U1) with Virtual SAN public beta (vSphere 5.5).  On a 3-node cluster, the VDImark (the maximum number of desktop VMs that can run while passing the QoS criteria) was obtained for both Virtual SAN 5.5 and Virtual SAN public beta, and the results are shown below:

Figure: VDImark on a 3-node cluster, Virtual SAN 5.5 vs. public beta

The results show that with Virtual SAN 5.5, we can scale up to 305 VMs on a 3-node cluster, which is about 5% more consolidation when compared with Virtual SAN public beta. This clearly highlights the new CPU improvements in Virtual SAN 5.5 as a higher number of desktop VMs can be consolidated on each host with a similar user experience.

Linear Scaling in VDI Performance

In the next set of experiments, we progressively increase the number of nodes in the Virtual SAN cluster to see how well VDI performance scales. We collect the VDImark score at 3-node, 5-node, 8-node, and 16-node cluster sizes, and the results are shown in the chart below.

Figure: VDImark scaling from 3-node to 16-node Virtual SAN clusters

The chart illustrates that there is a linear scaling in the VDImark as we increase the number of nodes for the Virtual SAN cluster. This indicates good performance as the nodes are scaled up. As more nodes are added to the cluster, the number of heavy users that can be added to the workload increases proportionately. In Virtual SAN public beta, a workload of 95 heavy VDI users per host was achieved and now, due to CPU improvements in Virtual SAN 5.5, we are able to achieve 101 to 102 heavy VDI users per host. On a 16-node cluster, a VDImark of 1615 was achieved which accounts for about 101 heavy VDI users per node.
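The per-host figures quoted above follow directly from dividing the VDImark by the node count; the small sketch below shows that arithmetic for the two cluster sizes whose scores are given in this blog.

# VDImark scores reported in this blog for Virtual SAN 5.5.
vdimark_by_nodes = {3: 305, 16: 1615}
for nodes, score in vdimark_by_nodes.items():
    print("%2d nodes: %d users -> %.1f heavy users per host" % (nodes, score, score / nodes))
# Roughly 101-102 heavy users per host at either scale, compared with
# about 95 per host reported for the Virtual SAN public beta.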

To further illustrate the Group A and Group B response times, we show the average response time of individual operations for these runs for both Group A and Group B, as follows.

Figure: Group A average response times

As seen in the figure above, the average response times of the most interactive operations are less than one second, which is needed to provide a good end-user experience. Looking all the way up to 16 nodes, we don’t see much variance in the response times; they remain almost constant when scaling up. This clearly illustrates that, as we scale the number of VMs across larger Virtual SAN clusters, the user experience does not degrade.

Figure: Group B average response times

Group B is more sensitive to I/O and CPU usage than Group A, so its response times are more important. The above figure shows how VDI performance scales in Virtual SAN. It is evident from the chart that there is not much difference in the response times as the number of VMs is increased from 305 VMs on a 3-node cluster to 1615 VMs on a 16-node cluster. Hence, storage-sensitive VDI operations also scale well as we scale the Virtual SAN nodes from 3 to 16.

To summarize, the test results in this blog show:

  • 5% more VMs can be consolidated on a 3-node Virtual SAN cluster
  • When adding more nodes to the Virtual SAN cluster, the number of heavy users supported increases proportionately (linear scaling)
  • The response times of common user operations (such as opening and saving files, watching a video, and browsing the Web) remain fairly constant as more nodes with more VMs are added.

To see the previous blogs on VDI benchmarking with the Virtual SAN public beta, check the links below:

VDI Benchmarking Using View Planner on VMware Virtual SAN – Part 1
VDI Benchmarking Using View Planner on VMware Virtual SAN – Part 2
VDI Benchmarking Using View Planner on VMware Virtual SAN – Part 3

VDI Benchmarking Using View Planner on VMware Virtual SAN – Part 3

In part 1 and part 2 of the VDI/VSAN benchmarking blog series, we presented the VDI benchmark results on VSAN for 3-node, 5-node, 7-node, and 8-node cluster configurations. In this blog, we compare the VDI benchmarking performance of VSAN with an all flash storage array. The intent of this experiment is not to compare the maximum IOPS that you can achieve on these storage solutions; instead, we show how VSAN scales as we add more heavy VDI users. We found that VSAN can support a similar number of users as that of an all flash array even though VSAN is using host resources.

VDI workloads are typically CPU bound but sensitive to I/O, which makes View Planner a natural fit for this comparative study. We use VMware View Planner 3.0 for both VSAN and the all flash SAN and consolidate as many heavy users as we can on a particular cluster configuration while meeting the quality of service (QoS) criteria. Then, we find the difference in the number of users we can support before we run out of CPU, because I/O is not a bottleneck here. Although VSAN runs in the kernel and uses CPU on the host for its operation, we find that its CPU usage is quite minimal, and we see no more than a 5% difference in consolidation for a heavy-user run on VSAN compared to the all flash array.

As discussed in the previous blog, we used the same experimental setup, where each VSAN host has two disk groups and each disk group has one 200GB PCI-e solid-state drive (SSD) and six 300GB 15K RPM SAS disks. We built a 7-node and an 8-node cluster and ran View Planner to get the VDImark™ score for both VSAN and the all flash array. VDImark signifies the number of heavy users you can successfully run while meeting the QoS criteria for a system under test. The VDImark for both VSAN and the all flash array is shown in the following figure.

Figure: View Planner QoS (VDImark) for VSAN and the all flash array

From the above chart, we see that VSAN can consolidate 677 heavy users (VDImark) on the 7-node cluster and 767 heavy users on the 8-node cluster. When compared to the all flash array, we don’t see more than a 5% difference in user consolidation. To further illustrate the Group-A and Group-B response times, we show the average response time of individual operations for these runs for both Group-A and Group-B, as follows.

Group-A Response Times

As seen in the figure above for both VSAN and the all flash array, the average response times of the most interactive operations are less than one second, which is needed to provide a good end-user experience.  Similar to the user consolidation, the response time of Group-A operations in VSAN is similar to what we saw with the all flash array.

Group-B Response Times

Group-B operations are sensitive to both CPU and I/O, and their 95th percentile response time should be less than six seconds to meet the QoS criteria. From the above figure, we see that the average response time for most of the operations is within the threshold, and we see similar response times for VSAN when compared to the all flash array.

To see other parts of the VDI/VSAN benchmarking blog series, check the links below:
VDI Benchmarking Using View Planner on VMware Virtual SAN – Part 1
VDI Benchmarking Using View Planner on VMware Virtual SAN – Part 2
VDI Benchmarking Using View Planner on VMware Virtual SAN – Part 3

VDI Benchmarking Using View Planner on VMware Virtual SAN – Part 2

In part 1, we presented the VDI benchmark results on VSAN for 3-node and 7-node configurations. In this blog, we update the results for 5-node and 8-node VSAN configurations and show how VSAN scales for these configurations.

The View Planner benchmark was run again to find the VDImark for different numbers of nodes (5 and 8) in a VSAN cluster, as described in the previous blog, and the results are shown in the following figure.

Figure: View Planner QoS (VDImark) for 5-node and 8-node VSAN clusters

In the 5-node cluster, a VDImark score of 473 was achieved, and in the 8-node cluster, a VDImark score of 767 was achieved. These results are similar to the ones we saw earlier on the 3-node and 7-node clusters (about 95 VMs per host). So, the maximum number of VMs supported scales nicely as the number of nodes in the VSAN cluster is increased from 3 to 8.

To further illustrate the Group-A and Group-B response times, we show the average response time of individual operations for these runs for both Group-A and Group-B, as follows.

Group-A Response Times

As seen in the figure above, the average response times of the most interactive operations are less than one second, which is needed to provide a good end-user experience. If we look at the new results for the 5-node and 8-node VSAN clusters, we see that for most of the operations, the response time remains roughly the same across the different node configurations.

Group-B Response Times

Since Group-B is more sensitive to I/O and CPU usage, the above chart for Group-B operations is more important for seeing how View Planner scales. The chart shows that there is not much difference in the response times as the number of VMs was increased from 286 VMs on a 3-node cluster to 767 VMs on an 8-node cluster. Hence, storage-sensitive VDI operations also scale well as we scale the VSAN nodes from 3 to 8, and user experience expectations are met.

To see other parts of the VDI/VSAN benchmarking blog series, check the links below:
VDI Benchmarking Using View Planner on VMware Virtual SAN – Part 1
VDI Benchmarking Using View Planner on VMware Virtual SAN – Part 2
VDI Benchmarking Using View Planner on VMware Virtual SAN – Part 3

Each vSphere release introduces new vMotion functionality, increased reliability and significant performance improvements. vSphere 5.5 continues this trend by offering new enhancements to vMotion to support EMC VPLEX Metro, which enables shared data access across metro distances.

In this blog, we evaluate vMotion performance on a VMware vSphere 5.5 virtual infrastructure that was stretched across two geographically dispersed datacenters using EMC VPLEX Metro.

Test Configuration

The VPLEX Metro test bed consisted of two identical VPLEX clusters, each with the following hardware configuration:

• Dell R610 host, 8 cores, 48GB memory, Broadcom BCM5709 1GbE NIC
• A single engine (two directors) VPLEX Metro IP appliance
• FC storage switch
• VNX array, FC connectivity, VMFS 5 volume on a 15-disk RAID-5 LUN


Figure 1. Logical layout of the VPLEX Metro deployment

Figure 1 illustrates the deployment of the VPLEX Metro system used for vMotion testing. The figure shows two data centers, each with a vSphere host connected to a VPLEX Metro appliance. The VPLEX virtual volumes presented to the vSphere hosts in each data center are synchronous, distributed volumes that mirror data between the two VPLEX clusters using write-through caching. As a result, vMotion views the underlying storage as shared storage, or exactly equivalent to a SAN that both source and destination hosts have access to. Hence, vMotion in a Metro VPLEX environment is as easy as traditional vMotion that live migrates only the memory and device state of a virtual machine.

The two VPLEX Metro appliances in our test configuration used IP-based connectivity. The vMotion network between the two ESXi hosts used a physical network link distinct from the VPLEX network. The Round Trip Time (RTT) latency on both VPLEX and vMotion networks was 10 milliseconds.

Measuring vMotion Performance

The following metrics were used to understand the performance implications of vMotion:

• Migration Time: Total time taken for migration to complete
• Switch-over Time: Time during which the VM is quiesced to enable switchover from source to the destination host
• Guest Penalty: Performance impact on the applications running inside the VM during and after the migration

Test Results


Figure 2. VPLEX Metro vMotion performance in vSphere 5.1 and vSphere 5.5

Figure 2 compares VPLEX Metro vMotion performance results in vSphere 5.1 and vSphere 5.5 environments. The test scenario used an idle VM configured with 2 VCPUs and 2GB memory. The figure shows a minor difference in the total migration time between the two vSphere environments and a significant improvement in vMotion switch-over time in the vSphere 5.5 environment. The switch-over time reduced from about 1.1 seconds to about 0.6 second (a nearly 2x improvement), thanks to a number of performance enhancements that are included in the vSphere 5.5 release.

We also investigated the impact of VPLEX Metro live migration on Microsoft SQL Server online transaction processing (OLTP) performance using the open-source DVD Store workload. The test scenario used a Windows Server 2008 VM configured with 4 VCPUs, 8GB memory, and a SQL Server database size of 50GB.


Figure 3. VPLEX Metro vMotion impact on SQL Server Performance

Figure 3 plots the performance of a SQL Server virtual machine in orders processed per second at a given time—before, during, and after VPLEX Metro vMotion. As shown in the figure, the impact on SQL Server throughput was very minimal during vMotion. The SQL Server throughput on the destination host was around 310 orders per second, compared to the throughput of 350 orders per second on the source host. This throughput drop after vMotion is due to the VPLEX inter-cluster cache coherency interactions and is expected. For some time after the vMotion, the destination VPLEX cluster continued to send cache page queries to the source VPLEX cluster and this has some impact on performance. After all the metadata is fully migrated to the destination cluster, we observed the SQL Server throughput increase to 350 orders per second, the same level of throughput seen prior to vMotion.

These performance test results show the following:

  • Remarkable improvements in vSphere 5.5 towards reducing vMotion switch-over time during metro migrations (for example, a nearly 2x improvement over vSphere 5.1)
  • VMware vMotion in vSphere 5.5 paired with EMC VPLEX Metro can provide workload federation over a metro distance by enabling administrators to dynamically distribute and balance the workloads seamlessly across data centers

To find out more about the test configuration, performance results, and best practices to follow, see our recently published performance study.

SEsparse Shows Significant Improvements over VMFSsparse

Limited physical resources can make large-scale virtual infrastructure deployments challenging, and provisioning dedicated storage space to hundreds of virtual machines can become particularly expensive. To address this, VMware vSphere 5.5 provides two sparse storage techniques: VMFSsparse and SEsparse. Running multiple VMs using sparse delta-disks with a common parent virtual disk brings down the required amount of physical storage, making large-scale deployments manageable. SEsparse was introduced in VMware vSphere 5.1, and in vSphere 5.5 it became the default virtual disk snapshotting technique for VMDKs greater than 2TB. Various enhancements were made to SEsparse technology in the vSphere 5.5 release, which make SEsparse perform mostly on par with or better than the VMFSsparse format. In addition, dynamic space reclamation gives SEsparse a significant advantage over the VMFSsparse virtual disk format. This feature makes SEsparse the choice for VMware® Horizon View™ environments, where space reclamation is critical due to the large number of tenants sharing the underlying storage.


A recently published paper reports the results from a series of performance studies of SEsparse and VMFSsparse, using thin virtual disks as the baseline. The performance was evaluated with a comprehensive set of Iometer workloads along with workloads from two real-world application domains: Big Data analytics and Virtual Desktop Infrastructure (VDI). Overall, the performance of SEsparse is significantly better than that of VMFSsparse for random write workloads and mostly on par or better for the other analyzed workloads, depending on workload type.

Read the full performance study, “SEsparse in VMware vSphere 5.5.”