VMware pushes the envelope with vSphere 6.0 vMotion

vMotion in VMware vSphere 6.0 delivers breakthrough new capabilities that will offer customers a new level of flexibility and performance in moving virtual machines across their virtual infrastructures. Included with vSphere 6.0 vMotion are features – Long-distance migration, Cross-vCenter migration, Routed vMotion network – that enable seamless migrations across current management and distance boundaries. For the first time ever, VMs can be migrated across vCenter Servers separated by cross-continental distance with minimal performance impact. vMotion is fully integrated with all the latest vSphere 6 software-defined data center technologies including Virtual SAN (VSAN) and Virtual Volumes (VVOL). Additionally, the newly re-architected vMotion in vSphere 6.0 now enables extremely fast migrations at speeds exceeding 60 Gigabits per second.

In this blog, we present the latest vSphere 6.0 vMotion features as well as their performance results. We first evaluate vMotion performance across two geographically dispersed data centers connected by a network with 100ms round-trip time (RTT) latency. Following that, we demonstrate vMotion performance when migrating an extremely memory-hungry “Monster” VM.

Long Distance vMotion

vSphere 6.0 introduces a Long-distance vMotion feature that increases the round-trip latency limit for vMotion networks from 10 milliseconds to 150 milliseconds. Long distance mobility offers a variety of compelling new use cases including whole data center upgrades, disaster avoidance, government mandated disaster preparedness testing, and large scale distributed resource management to name a few. Below, we examine vMotion performance under varying network latencies up to 100ms.

Test Configuration

We set up a vSphere 6.0 test environment with the following specifications:

Hardware

  • Two HP ProLiant DL580 G7 servers (32-core Intel Xeon E7-8837 @ 2.67 GHz, 256 GB memory)
  • Storage: Two EMC VNX 5500 arrays, FC connectivity, VMFS 5 volume on a 15-disk RAID-5 LUN
  • Networking: Intel 10GbE 82599 NICs
  • Latency Injector: Maxwell-10GbE appliance to inject latency into the vMotion network

Software

  • VM config: 4 VCPUs, 8GB mem, 2 vmdks (30GB system disk, 20GB database disk)
  • Guest OS/Application: Windows Server 2012 / MS SQL Server 2012
  • Benchmark: DVDStore (DS2) using a database size of 12GB with 12,000,000 customers, 3 drivers without think-time

[Figures 1 and 2: Logical deployment of the long distance vMotion test-bed and the Maxwell-10GbE latency injection setup]

Figure 1 illustrates the logical deployment of the test-bed used for long distance vMotion testing. Long distance vMotion is supported both without a shared storage infrastructure and with shared storage solutions such as EMC VPLEX Geo, which enables shared data access across long distances. Our test-bed did not use shared storage, so the entire state of the VM was migrated, including its memory, storage, and CPU/device state. As shown in Figure 2, our test configuration deployed a Maxwell-10GbE network appliance to inject latency into the vMotion network.

Measuring vMotion Performance

The following metrics were used to understand the performance implications of vMotion:

  • Migration Time: Total time taken for migration to complete
  • Switch-over Time: Time during which the VM is quiesced to enable the switchover from the source to the destination host
  • Guest Penalty: Performance impact on the applications running inside the VM during and after the migration

Test Results

We investigated the impact of long distance vMotion on Microsoft SQL Server online transaction processing (OLTP) performance using the open-source DVD Store workload. The test scenario used a Windows Server 2012 VM configured with 4 VCPUs, 8GB memory, and a SQL Server database size of 12GB. Figure 3 shows the migration time and VM switch-over time when migrating an active SQL Server VM at different network round-trip latencies. In all the test scenarios, we used a load of 3 DS2 users with no think time that generated substantial load on the VM. The migration was initiated during the steady-state period of the benchmark when the CPU utilization (esxtop %USED counter) of the VM was close to 120%, and the average read IOPS and average write IOPS were about 200 and 150, respectively.
[Figure 3: Migration time and switch-over time at different network round-trip latencies]

Figure 3 shows that the impact of round-trip latency was minimal on both the duration of the migration and the switch-over time, thanks to the latency-aware optimizations in vSphere 6.0 vMotion. The difference in migration time among the test scenarios was in the noise range (<5%). The switch-over time increased marginally, from about 0.5 seconds in the 5ms test scenario to 0.78 seconds in the 100ms test scenario.

[Figure 4: SQL Server orders per second before, during, and after vMotion over a 100ms round-trip latency network]

Figure 4 plots the performance of the SQL Server virtual machine, in orders processed per second over time, before, during, and after vMotion on a 100ms round-trip latency network. In our tests, the DVD Store benchmark driver was configured to report performance data at a fine granularity of 1 second (default: 10 seconds). As shown in the figure, the impact on SQL Server throughput was minimal during vMotion. The only noticeable dip in performance occurred during the switch-over phase (0.78 seconds) from the source to the destination host. It took less than 5 seconds for SQL Server to return to its normal level of performance.
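
To make the guest penalty metric concrete, here is a minimal sketch (not part of our test harness) of how the dip and its span could be derived from a per-second throughput log such as the 1-second DVD Store driver output. The sample values and the 90%-of-steady-state threshold are illustrative assumptions.

```python
# Minimal sketch (not VMware tooling): estimate the guest penalty from a
# per-second orders/sec log. Sample data and the 90% threshold are assumptions.

def guest_penalty(samples, steady_state, threshold=0.9):
    """Return (seconds_below_threshold, span_of_the_dip_in_seconds).

    samples      -- list of (elapsed_second, orders_per_second) tuples
    steady_state -- steady-state orders/sec measured before the migration
    """
    low = [t for t, ops in samples if ops < threshold * steady_state]
    if not low:
        return 0, 0
    return len(low), max(low) - min(low) + 1

# Hypothetical 1-second samples around the switch-over phase.
samples = [(0, 310), (1, 305), (2, 40), (3, 180), (4, 290), (5, 308)]
print(guest_penalty(samples, steady_state=310))   # -> (2, 2) for this made-up trace
```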

Faster migration

Why are we interested in extreme performance? Today’s datacenters feature modern servers with many processing cores (up to 80), terabytes of memory, and high network bandwidth (10 and 40GbE NICs). VMware supports larger “monster” virtual machines that can scale up to 128 virtual CPUs and 4TB of RAM. Utilizing higher network bandwidth to complete migrations of these monster VMs faster enables high levels of mobility in private cloud deployments. Reducing the time to move a virtual machine also reduces the total network and CPU overhead of the migration.
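
As a rough illustration of why line rate matters for monster VMs, the sketch below computes a lower bound on the time needed just to copy a VM’s memory at different link speeds. It deliberately ignores pre-copy iterations, protocol overhead, and optimizations, so the numbers are only back-of-envelope estimates.

```python
# Back-of-envelope sketch (not a vMotion model): lower bound on the time to
# copy a monster VM's memory at different link speeds.

def min_copy_time_seconds(memory_gb, link_gbps):
    return memory_gb * 8 / link_gbps      # GB -> gigabits, divided by Gb/s

for gbps in (10, 20, 40, 64):
    print(f"500 GB of memory at {gbps} Gb/s: ~{min_copy_time_seconds(500, gbps):.0f} s")
# ~400 s at 10 Gb/s versus ~63 s at the 64 Gb/s observed with vSphere 6.0
```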

Test Config

  • Two Dell PowerEdge R920 servers (60-core Intel Xeon E7-4890 v2 @ 2.80GHz, 1TB memory)
  • Networking: Intel 10GbE 82599 NICs, Mellanox 40GbE MT27520 NIC
  • VM config: 12 VCPUs, 500GB mem
  • Guest OS: Red Hat Enterprise Linux Server 6.3

We configured each vSphere host with four Intel 10GbE ports and a single Mellanox 40GbE port, for a total of 80Gb/s of network connectivity between the two vSphere hosts. Each vSphere host was configured with five vSwitches: four vSwitches each had a unique 10GbE uplink port, and the fifth had the 40GbE uplink port. The MTU of the NICs was set to the default of 1500 bytes. We created one VMkernel adapter on each of the four vSwitches with a 10GbE uplink port and four VMkernel adapters on the vSwitch with the 40GbE uplink port. All eight VMkernel adapters were configured on the same subnet. We also enabled each VMkernel adapter for vMotion, which allowed vMotion traffic to use the full 80Gb/s of network connectivity.

Methodology

To demonstrate the extreme vMotion throughput performance, we simulated a very heavy memory usage footprint in the virtual machine. The memory-intensive program allocated 300GB memory inside the guest and touched a random byte in each memory page in an infinite loop. We migrated this virtual machine between the two vSphere hosts under different test scenarios: vMotion over 10Gb/s network, vMotion over 20Gb/s network, vMotion over 40Gb/s network and vMotion over 80Gb/s network. We used esxtop to monitor network throughput and CPU utilization on the source and destination hosts.
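
The in-guest memory-dirtying program itself is not published, but a minimal analogous sketch is shown below: it allocates a buffer and touches one random byte in every page in an endless loop. The 1 GiB size and 4 KiB page size are illustrative assumptions; the actual test used a 300GB allocation inside the guest.

```python
# Minimal sketch of a memory-dirtying workload like the one described above.
# The 1 GiB size and 4 KiB page size are illustrative; the test used 300 GB.

import random

PAGE = 4096
SIZE = 1 * 1024**3            # 1 GiB for illustration

buf = bytearray(SIZE)         # backing memory that will be dirtied

while True:                   # infinite loop, matching the test workload
    for page_start in range(0, SIZE, PAGE):
        offset = page_start + random.randrange(PAGE)
        buf[offset] = (buf[offset] + 1) & 0xFF   # dirty one byte per page
```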

Test Results

[Figure 5: Peak vMotion network bandwidth in vSphere 5.5 and vSphere 6.0 under different network deployment scenarios]

Figure 5 compares the peak network bandwidth observed in vSphere 5.5 and vSphere 6.0 under different network deployment scenarios. Let us first consider vSphere 5.5 vMotion throughput. Figure 5 shows that vSphere 5.5 vMotion reaches line rate in both the 10Gb/s and 20Gb/s test scenarios. When we increased the available vMotion network bandwidth beyond 20Gb/s, peak vMotion usage was limited to 18Gb/s in vSphere 5.5. This is because in vSphere 5.5, each vMotion is assigned two helper threads by default, and these threads do the bulk of the vMotion processing. Because the vMotion helper threads are CPU saturated, there is no performance gain from adding network bandwidth. When we increased the number of vMotion helper threads from 2 to 4 in the 40Gb/s test scenario, thereby removing the CPU bottleneck, the peak network bandwidth usage of vMotion in vSphere 5.5 increased to 32Gb/s. Tuning the helper threads beyond four hurt vMotion performance in the 80Gb/s test scenario, because vSphere 5.5 vMotion has locking issues that limit the gains from adding more helper threads. These are VM-specific locks that protect the VM’s memory.

The newly re-architected vMotion in vSphere 6.0 not only removes these lock contention issues but also obviates the need for any manual tuning. During the initial setup phase, vMotion dynamically creates the appropriate number of TCP/IP stream channels between the source and destination hosts based on the configured network ports and their bandwidth. It then instantiates one vMotion helper thread per stream channel, so no tuning is required. Figure 5 shows that vMotion reaches line rate in the 10Gb/s, 20Gb/s, and 40Gb/s scenarios, and uses a little over 64Gb/s of network throughput in the 80Gb/s scenario. This is an improvement of more than 3.5x over vSphere 5.5.
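
The sketch below is a conceptual illustration only, not vSphere code: it mirrors the idea of one helper worker per stream channel, with each worker processing its own slice of the memory image in parallel. The hashing stands in for “transmit this slice,” and the 256 MiB buffer and four streams are arbitrary illustrative values.

```python
# Conceptual sketch only: one worker per stream channel, each handling
# its own slice of the memory image. hashlib stands in for transmission.

import hashlib
from concurrent.futures import ThreadPoolExecutor

STREAMS = 4                               # one worker per stream channel
buf = bytes(256 * 1024**2)                # stand-in for the VM memory image
chunk = len(buf) // STREAMS
view = memoryview(buf)                    # avoid copying slices

def send_slice(i):
    """Checksum the i-th slice as a stand-in for sending it over its channel."""
    return hashlib.sha256(view[i * chunk:(i + 1) * chunk]).hexdigest()

with ThreadPoolExecutor(max_workers=STREAMS) as pool:
    digests = list(pool.map(send_slice, range(STREAMS)))
print(digests)
```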

[Figure 6: vMotion network throughput and CPU utilization in the vSphere 6.0 80Gb/s test scenario]
Figure 6 shows the network throughput and CPU utilization data in the vSphere 6.0 80Gb/s test scenario. During vMotion, the memory of the VM is copied from the source host to the destination host in an iterative fashion. In the first iteration, vMotion bandwidth usage is throttled by memory allocation on the destination host; the peak vMotion network bandwidth usage is about 28Gb/s during this phase. Subsequent iterations copy only the memory pages that were modified during the previous iteration. The number of pages transferred in these iterations is determined by how actively the guest accesses and modifies its memory pages. The more modified pages there are, the longer it takes to transfer all pages to the destination server, but on the flip side, it allows vMotion’s advanced performance optimizations to kick in and fully leverage the additional network and compute resources. That is evident in the third pre-copy iteration, when the peak measured bandwidth was about 64Gb/s and the peak CPU utilization (esxtop ‘PCPU Util%’ counter) on the destination host was about 40%.
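
For readers unfamiliar with iterative pre-copy, the toy model below shows the general shape of the process: each pass resends the memory dirtied during the previous pass until the remaining set is small enough to switch over. It is a simplified, illustrative model, not VMware’s algorithm, and the dirty rate, link speed, and stop threshold are made-up values.

```python
# Simplified pre-copy model (illustrative only, not VMware's algorithm).

def precopy_passes(memory_gb, dirty_gbps, link_gbps, stop_gb=0.5, max_passes=20):
    """Yield (pass_number, gigabytes_to_send, seconds_for_this_pass)."""
    to_send = memory_gb
    for i in range(1, max_passes + 1):
        if to_send <= stop_gb:
            break                                  # small enough to switch over
        seconds = to_send * 8 / link_gbps          # time to send this pass
        yield i, to_send, seconds
        to_send = min(memory_gb, dirty_gbps / 8 * seconds)   # dirtied meanwhile

for n, gb, secs in precopy_passes(300, dirty_gbps=20, link_gbps=64):
    print(f"pass {n}: send {gb:.1f} GB in {secs:.1f} s")
```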

Conclusions

The main results of this performance study are the following:

  • The dramatic 10x increase in supported round-trip time offered by long-distance vMotion now makes it possible to migrate workloads non-disruptively over long distances, such as New York to London.
  • The re-architected vMotion in vSphere 6.0 delivers remarkable Monster VM migration performance (up to a 3.5x improvement over vSphere 5.5) for large-scale private cloud deployments.


VMware Horizon 6 and Hardware Accelerated 3D Graphics

A recently published paper presents best practices and performance data for Horizon 6’s support for hardware accelerated 3D graphics. A while back, we published a paper on the same subject for VMware Horizon View 5.2. The new paper updates all the graph data for Horizon 6 and shows the improvements in performance. The View Planner 3.5 benchmark is used to simulate four workloads:

  • A light 3D workload, which simulates an office worker using such applications as Office, Acrobat, and Internet Explorer.
  • A light CAD workload, which adds the SOLIDWORKS CAD viewer to the light 3D workload. In this test, the CAD viewer is used to run two models: a sea scooter and a cross-section of a shaft.
  • A SOLIDWORKS CAD viewer workload, in which the sea scooter and shaft models are run without any other applications running on the test system.
  • A Solid Edge CAD viewer workload, in which the viewer is run on its own using a different model (a 3-to-1 reducer).

Read the results for VMware Horizon 6 and Hardware Accelerated 3D Graphics.

Virtual SAN and SAP IQ – a Perfect Match

A performance study shows that VMware vSphere 5.5 with Virtual SAN as the storage backend provides an excellent platform for virtualized deployments of SAP IQ Multiplex Servers.

We created four virtual machines with the RHEL 6.3 operating system, and these virtual machines made up the SAP IQ Multiplex Server, which used Virtual SAN as its storage backend. In order to measure performance, we looked at the distributed query processing (DQP) modes of SAP IQ. In DQP, work is performed by threads running on both leader and worker nodes, and intermediate results are transmitted between these nodes through a shared disk space, or over an inter-node network. In the paper, we refer to these modes as storage-transfer and network-transfer.

In a test consisting of concurrent streams of queries designed to emulate a multi-user scenario, we found that the read-heavy I/O profile of this workload takes full advantage of Virtual SAN’s flash acceleration layer. Data read from the magnetic disks in each disk group is cached in the SSD of that disk group. Since 70% of SSD capacity is reserved for the read cache, a significant amount of data is quickly placed in very low latency storage. Once the cache is warmed up, I/O requests are served from the read cache, leading to fast query response times. Add to this SAP IQ’s ability to use network resources to transfer intermediate results, and we get an additional bump in throughput, since we no longer have the overhead of writing intermediate, shared results to disk.
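
As a quick illustration of the 70% read-cache reservation mentioned above, the following sketch computes read-cache capacity for a hypothetical configuration; the 400GB SSD size and five disk groups are assumptions, not the configuration used in the study.

```python
# Sizing sketch based on the 70% read-cache reservation described above.
# The 400 GB SSD and five disk groups are illustrative assumptions.

SSD_GB_PER_DISK_GROUP = 400
DISK_GROUPS = 5
READ_CACHE_FRACTION = 0.70        # share of SSD capacity reserved for read cache

per_group = SSD_GB_PER_DISK_GROUP * READ_CACHE_FRACTION
print(f"Read cache per disk group: {per_group:.0f} GB")
print(f"Cluster-wide read cache:   {per_group * DISK_GROUPS:.0f} GB")
```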

Read more about Distributed Query Processing in SAP IQ on VMware vSphere and Virtual SAN.

Microsoft Exchange Server Shows Great Performance on VMware Virtual SAN

Email servers are a business-critical component of IT infrastructure, and Exchange Server is one of the most ubiquitous of them. As such, we wanted to see how we could leverage Virtual SAN to serve the storage needs of this application. We ran tests to see how Exchange Server would perform on Virtual SAN. We used five Virtual SAN servers, and each server hosted two virtual machines with the Exchange Server Mailbox and HUB roles. The first host had an additional virtual machine for the AD Server role. A client virtual machine on a separate host ran the load generator.

Benchmarks are an important part of performance testing. We used Exchange Load Generator to simulate users sending and receiving email through Exchange Server. We then measured the average and 95th-percentile Sendmail latency of these requests for three separate loads of 12,000 users, 16,000 users, and 20,000 users. This shows how Virtual SAN can accommodate the storage needs of additional users and scale out flexibly.
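
For readers who want to produce this kind of roll-up from their own load generator logs, here is a minimal sketch of computing the average and nearest-rank 95th-percentile latency; the sample values are made up.

```python
# Minimal sketch: average and nearest-rank 95th-percentile latency from
# per-request samples. The sample values below are made up.

import math
import statistics

def p95(values):
    """Nearest-rank 95th percentile."""
    ordered = sorted(values)
    return ordered[math.ceil(0.95 * len(ordered)) - 1]

latencies_ms = [120, 140, 95, 180, 210, 160, 130, 480, 150, 170]
print(f"average: {statistics.mean(latencies_ms):.0f} ms, "
      f"95th percentile: {p95(latencies_ms)} ms")
```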

The results are shown in the following figure. The industry-standard measure of good latency is anything below 500ms. As shown here, the Sendmail latency is well below 500ms for both the average and 95th-percentile.

[Figure: Average and 95th-percentile Sendmail latency for the 12,000, 16,000, and 20,000 user loads]

For more information, read the paper here.

First Certified SAP BW-EML Benchmark on Virtual HANA

The first certified SAP Business Warehouse-Enhanced Mixed Workload (BW-EML) standard application benchmark based on a virtual HANA database was recently published by HP.  We worked with HP to configure and run this benchmark using a virtual HANA database running on vSphere 5.5 in a monster VM of 64 vCPUs and almost 1TB of RAM.  The test was run with a total of 2 billion records and achieved a throughput of 111,850 ad-hoc navigation steps per hour.

The same hardware configuration was used by HP to publish a native-only benchmark with the same number of records. In that test, the result was 126,980 ad-hoc navigation steps per hour, which puts the virtual HANA result within about 12% of native throughput.

[Figure: Certified SAP BW-EML benchmark throughput, native versus virtual HANA]

Although the hardware setup was the same, this comparison between native and virtual performance has one wrinkle that gave the native system a slight advantage, estimated to be about 5%.

The estimated 5% advantage for the native system comes from the difference between cores and threads and from the maximum number of vCPUs per VM.  In the native test, the BW-EML workload was able to exercise all 120 hardware threads of the physical 60-core server.  The number of threads is twice the number of physical cores because these processors utilize Intel Hyper-Threading technology.

In vSphere 5.5 (the current version), the maximum number of vCPUs in a single VM is 64. Each vCPU is mapped to a hardware thread when scheduled to run, which limits a single VM to 64 hardware threads. For this test, that means only slightly more than half of the server’s 120 hardware threads could be used by the HANA virtual machine. The virtual machine was therefore not able to directly benefit from Hyper-Threading, but it was able to use all 60 cores.

The benefit of Hyper-Threading can be as much as 20% to 30% for some applications, but in the case of the BW-EML benchmark, it is estimated to be about 5%.  This estimate was found by running the native BW-EML benchmark system with and without Hyper-Threading enabled.  Because the virtual machine was not able to use the Hyper-Threads, it is estimated that the native system had a 5% advantage due to its ability to use all 120 threads of the physical server.

In theory, the advantage for the native system could be reduced by either creating a bigger virtual machine or running the native system without Hyper-Threading.  If this were done, then the difference between native and virtual should be about 5% smaller and would mean that the difference between native and virtual could shrink to single digits (approximately 7%).
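
The arithmetic behind the “approximately 7%” figure can be worked through directly from the published numbers and the estimated 5% Hyper-Threading benefit:

```python
# Worked version of the arithmetic above, using the published throughput
# numbers and the estimated 5% Hyper-Threading benefit.

native  = 126_980        # ad-hoc navigation steps/hour, native HANA
virtual = 111_850        # ad-hoc navigation steps/hour, virtual HANA
ht_gain = 0.05           # estimated Hyper-Threading benefit for BW-EML

raw_gap      = 1 - virtual / native        # ~12% below native
adjusted     = native / (1 + ht_gain)      # native result without the HT advantage
adjusted_gap = 1 - virtual / adjusted      # roughly 7% below the adjusted figure

print(f"raw gap: {raw_gap:.1%}, gap without the HT advantage: {adjusted_gap:.1%}")
```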

Additional details about the certified SAP BW-EML benchmark configurations used in the tests: SAP HANA 1.0 on HP DL580 Gen8, 4 processors with 60 cores / 120 threads using Intel Xeon E7-4880 v2 running at 2.5 GHz and 1TB of main memory (each processor has 15 cores / 30 threads).  The application servers were SAP NetWeaver 7.30 on HP BL680 G7, 4 processors with 40 cores / 80 threads using Intel Xeon E7-4870 running at 2.4 GHz and 1TB of main memory (each processor has 10 cores / 20 threads). The OS used for all servers was SUSE Linux Enterprise Server 11 SP2.  The certification number for the native test is 2014009 and the certification number for the virtual test is 2014021.

Virtual SAP HANA Achieves Production Level Performance

VMware CEO Pat Gelsinger announced production support for SAP HANA on VMware vSphere 5.5 at EMC World this week during his keynote. This is the end result of a very thorough joint testing project over the past year between VMware and SAP.

HANA is an in-memory platform (including database capabilities) from SAP that has enabled huge gains in performance for customers and has been a high priority for SAP over the past few years.  In order for HANA to be supported in a virtual machine on vSphere 5.5 for production workloads, we worked closely with SAP to enable, design, and measure in-depth performance tests.

In order to enable the testing and ongoing production support of SAP HANA on vSphere, two HANA appliance servers were ordered, shipped, and installed into SAP’s labs in Walldorf, Germany.  These systems are dedicated to running SAP HANA on vSphere onsite at SAP.  Each system is an Intel Xeon E7-8870 (Westmere-EX) based four-socket server with 1TB of RAM.  They are used for performance testing and also for ongoing support of HANA on vSphere.  Additionally, VMware has onsite support engineering to assist with the testing and support.

SAP designed an extensive performance test suite that used a large number of test scenarios to stress all functions and capabilities of HANA running on vSphere 5.5.  They included OLAP and OLTP with a wide range of data sizes and query functions. In all, over one thousand individual test cases were used in this comprehensive test suite.  These same tests were run on identical native HANA systems and the difference between native and virtual tests was used as the key performance indicator.

In addition, we also tested vSphere features including vMotion, DRS, and VMware HA with virtual machines running HANA.  These tests were done with the HANA virtual machine under heavy stress.

The test results have been extremely positive and are one of the key factors in the announcement of production support.  The difference between virtual and native HANA across all the performance tests was on average within a few percentage points.

The vMotion, DRS, and VMware HA tests were all completed without issues.  Even with the large memory sizes of HANA virtual machines, we were still able to successfully migrate them with vMotion while under load with no issues.

One of the results of the extensive testing is a best practices guide for HANA on vSphere 5.5. This document includes a performance guide for running HANA on vSphere 5.5 based on this extensive testing.  The document also includes information about how to size a virtual HANA instance and how VMware HA can be used in conjunction with HANA’s own replication technology for high availability.

SEsparse Shows Significant Improvements over VMFSsparse

Limited amounts of physical resources can make large-scale virtual infrastructure deployments challenging. Provisioning dedicated storage space to hundreds of virtual machines can become particularly expensive. To address this, VMware vSphere 5.5 provides two sparse storage techniques, VMFSsparse and SEsparse. Running multiple VMs that use sparse delta-disks with a common parent virtual disk brings down the required amount of physical storage, making large-scale deployments manageable. SEsparse was introduced in VMware vSphere 5.1, and in vSphere 5.5 it became the default virtual disk snapshotting technique for VMDKs greater than 2TB. Various enhancements were made to SEsparse in the vSphere 5.5 release, which make SEsparse perform mostly on par with or better than the VMFSsparse format. In addition, dynamic space reclamation gives SEsparse a significant advantage over the VMFSsparse virtual disk format. This feature makes SEsparse the choice for VMware® Horizon View™ environments, where space reclamation is critical due to the large number of tenants sharing the underlying storage.
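
The sharing idea behind sparse delta disks can be illustrated with a small conceptual sketch (this is not the SEsparse or VMFSsparse on-disk format): each VM’s delta disk records only the blocks that VM has written, while reads of untouched blocks fall through to the common parent disk.

```python
# Conceptual sketch only: not the SEsparse/VMFSsparse on-disk format.
# A delta disk stores just the blocks written by its VM; everything else
# is read from the shared parent, so many VMs share one parent cheaply.

class DeltaDisk:
    def __init__(self, parent_blocks):
        self.parent = parent_blocks      # shared, read-only parent disk
        self.delta = {}                  # block number -> data written by this VM

    def write(self, block, data):
        self.delta[block] = data         # only changed blocks consume new space

    def read(self, block):
        return self.delta.get(block, self.parent[block])

parent = {0: b"base-os", 1: b"apps"}     # illustrative parent disk contents
vm1, vm2 = DeltaDisk(parent), DeltaDisk(parent)
vm1.write(1, b"vm1-private-change")
print(vm1.read(1), vm2.read(1))          # b'vm1-private-change' b'apps'
```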


A recently published paper reports the results of a series of performance studies of SEsparse and VMFSsparse, using thin virtual disks as the baseline. Performance was evaluated using a comprehensive set of Iometer workloads along with workloads from two real-world application domains: Big Data analytics and Virtual Desktop Infrastructure (VDI). Overall, the performance of SEsparse is significantly better than that of VMFSsparse for random write workloads, and mostly on par or better for the other analyzed workloads, depending on workload type.

Read the full performance study, “SEsparse in VMware vSphere 5.5.”

VMware vFabric Postgres 9.2 Performance and Best Practices

VMware vFabric Postgres (vPostgres) 9.2 improves vertical scalability over the previous version by 300% for pgbench SELECT-only (a common read-only OLTP workload) and by 100% for pgbench (a common read/write OLTP workload). vPostgres 9.2 on vSphere 5.1 achieves equal-to-native vertical scalability on a 32-core machine.

Using out-of-the-box settings for both vPostgres and vSphere, virtual machine (VM)-based database consolidation performs on par with alternative approaches (such as consolidating on one vPostgres server instance, or on multiple vPostgres server instances within one operating system instance) in a baseline, memory-undercommitted situation for a standard OLTP workload (the dbt2 benchmark, an open-source fair implementation of TPC-C). As memory overcommitment escalates, VM-based consolidation becomes increasingly more robust, performing 200% better than the alternatives in a 55% memory-overcommitted situation.

By using unconventionally large database shared buffers (75% of memory rather than the conventional 25%), vPostgres can attain both better performance (12% better) and more consistent performance (70% less temporal variation).
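
As a small illustration of that sizing guidance, the sketch below computes the shared_buffers value for a hypothetical VM; the 8GB VM size is an assumption, and actual sizing should follow the whitepaper.

```python
# Illustration of the sizing guidance above: shared_buffers at 75% of VM
# memory versus the conventional 25%. The 8 GB VM size is an assumption.

def shared_buffers_mb(vm_memory_gb, fraction):
    return int(vm_memory_gb * 1024 * fraction)

vm_gb = 8
print(f"shared_buffers = {shared_buffers_mb(vm_gb, 0.75)}MB   # 75% of {vm_gb} GB")
print(f"shared_buffers = {shared_buffers_mb(vm_gb, 0.25)}MB   # conventional 25%")
```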

When using unconventionally large database shared buffers, the vPostgres database memory ballooning technique can further enhance the robustness of VM-based database consolidation: in a 55% memory-overcommitted situation, it increases the performance advantage of VM-based consolidation over the alternatives from 60% to 140%.

For more details, including the experimental methodology and references, please read the whitepaper of the same name.

Performance Best Practices for vSphere 5.5 is Available

We are pleased to announce the availability of Performance Best Practices for vSphere 5.5. This is a book designed to help system administrators obtain the best performance from vSphere 5.5 deployments.

The book addresses many of the new features in vSphere 5.5 from a performance perspective. These include:

  • vSphere Flash Read Cache, a new feature in vSphere 5.5 allowing flash storage resources on the ESXi host to be used for read caching of virtual machine I/O requests.
  • VMware Virtual SAN (VSAN), a new feature (in beta for vSphere 5.5) allowing storage resources attached directly to ESXi hosts to be used for distributed storage and accessed by multiple ESXi hosts.
  • The VMware vFabric Postgres database (vPostgres).

We’ve also updated and expanded on many of the topics in the book. These include:

  • Running storage latency and network latency sensitive applications
  • NUMA and Virtual NUMA (vNUMA)
  • Memory overcommit techniques
  • Large memory pages
  • Receive-side scaling (RSS), both in guests and on 10 Gigabit Ethernet cards
  • VMware vMotion, Storage vMotion, and Cross-host Storage vMotion
  • VMware Distributed Resource Scheduler (DRS) and Distributed Power Management (DPM)
  • VMware Single Sign-On Server

The book can be found here.

VMware Horizon View 5.2 Performance & Best Practices and A Performance Deep Dive on Hardware Accelerated 3D Graphics

VMware Horizon View 5.2 simplifies desktop and application management while increasing security and control, and it delivers a personalized, high-fidelity experience for end-users across sessions and devices. It enables higher availability and agility of desktop services, unmatched by traditional PCs, while reducing the total cost of desktop ownership. End-users can enjoy new levels of productivity and the freedom to access desktops from more devices and locations, while IT gains greater policy control.

Recently, we published two whitepapers that provide a deep dive into Horizon View 5.2 performance and its hardware accelerated 3D graphics (vSGA) feature. The links to these whitepapers are as follows:

* VMware Horizon View 5.2 Performance and Best Practices
* VMware Horizon View 5.2 and Hardware Accelerated 3D Graphics

The first whitepaper describes the new features in View 5.2, including access to View desktops through Horizon, space-efficient sparse (SEsparse) disks, hardware accelerated 3D graphics, and full support for Windows 8 desktops. View 5.2 performance improvements in PCoIP and View management are highlighted. In addition, this paper presents View 5.2 PCoIP performance results, Windows 8 and RDP 8 performance analysis, and a vSGA performance analysis, including how vSGA compares to the software renderer support introduced in View 5.1.

The second whitepaper goes in-depth on the support for hardware accelerated 3D graphics that debuted with VMware vSphere 5.1 and VMware Horizon View 5.2 and presents performance and consolidation results for a number of different workloads, ranging from knowledge workers using 3D desktops to performance-intensive CAD-based workloads. Because the intensity of a 3D workload will vary greatly from user to user and application to application, rather than highlighting specific case studies, we demonstrate how the solution efficiently scales for both light- and heavy-weight 3D workloads, until GPU or CPU resources are fully utilized. This paper also presents key best practices to extract peak performance from a 3D View 5.2 deployment.