
Tag Archives: Performance

Weathervane, a benchmarking tool for virtualized infrastructure and the cloud, is now open source.

Weathervane is a performance benchmarking tool developed at VMware.  It lets you assess the performance of your virtualized or cloud environment by driving a load against a realistic application and capturing relevant performance metrics.  You might use it to compare the performance characteristics of two different environments, or to understand the performance impact of some change in an existing environment.

Weathervane is very flexible, allowing you to configure almost every aspect of a test, and yet is easy to use thanks to tools that help prepare your test environment and a powerful run harness that automates almost every aspect of your performance tests.  You can typically go from a fresh start to running performance tests with a large multi-tier application in a single day.

Weathervane supports a number of advanced capabilities, such as deploying multiple independent application instances, deploying application services in containers, driving variable loads, and allowing run-time configuration changes for measuring elasticity-related performance metrics.

Weathervane has been used extensively within VMware, and is now open source and available on GitHub at https://github.com/vmware/weathervane.

The rest of this blog gives an overview of the primary features of Weathervane.

Weathervane Overview

Weathervane is an application-level benchmarking tool.  It allows you to place a controlled load on a computing environment by deploying a realistic application, and then simulating users interacting with the application. In the case of Weathervane, the application is a scalable Java web application which implements a real-time auction web site, and the environment can be anything from a single bare-metal server to a large virtualized cluster of servers or a public or private cloud.   You can use collected metrics to evaluate the performance of the environment or to investigate the effect of changes in the environment.  For example, you could use Weathervane to compare the performance of different cloud environments, or to evaluate the impact of changing storage technologies on application-level performance.

Weathervane consists of three main components and a number of supporting tools:

  • The Auction application to be deployed on the environment under test.
  • A workload driver that can drive a realistic and repeatable load against the application.
  • A run harness which automates the process of executing runs and collecting results, logs, and relevant performance data.
  • Supporting tools include scripts to set up an operating system instance with all of the software needed to run Weathervane, to create Docker images for the application services, and to load and prepare the application data needed for a run of the benchmark.

Figure 1 shows the logical layout of the main components of a Weathervane deployment. Additional background about the components of Weathervane can be found at https://blogs.vmware.com/performance/2015/03/introducing-weathervane-benchmark.html.

Figure 1 Weathervane Deployment


The design goal for Weathervane has been to provide flexibility so that you can adapt it to suit the needs of a wide range of performance evaluation tasks. You can customize almost every aspect of a Weathervane deployment. You can vary …

  • … the number of instances in each service tier. For example, there can be any number of load balancer, application server, or web server nodes.  This allows you to create very small or very large configurations as needed.
  • … the number of tiers. For example, it is possible to omit the load-balancer and/or web server tiers if only a small application deployment is needed.
  • … the implementation to be used for each service. For example, Weathervane currently supports PostgreSQL and MySQL as the transactional database, and Apache Httpd and Nginx as the web server.  Because Weathervane is open source, you can add additional implementations of a service type if desired.
  • … the tuning and configuration of the services. The most common performance and configuration tunings for each service tier can be set in the Weathervane configuration file.  The run harness will then apply the tunings to each service instance automatically when you perform a run of the benchmark.

The Weathervane run harness makes it easy for you to take advantage of this flexibility by managing the complexity involved in configuring and starting the application, running the workload, and collecting performance results.  In many cases, you can make complex changes in a Weathervane deployment with just a few simple changes in a configuration file.

Weathervane also provides advanced features that allow you to evaluate the performance impact of many important issues in large virtualized and cloud environments.

  • You can run the service instances of the Auction application directly on the OS, either in a virtual machine or on a bare-metal server, or deploy them in Docker containers. Weathervane comes with scripts to create Docker images for all of the application services.
  • You can run and drive load against multiple independent instances of the Auction application. This is useful if you want to investigate the interactions between multiple independent applications, or when it is necessary to drive loads larger than can be handled by a single application instance.  The configuration and load for each instance can be specified independently.
  • You can specify a user load for the application instances that varies over the course of run. This allows you to investigate performance issues related to bursty loads, cyclical usage patterns, and, as discussed next, the impact of application elasticity.
  • The Auction application supports changing the number of instances of some service tiers at run time to support application-level elasticity (https://en.wikipedia.org/wiki/Elasticity_(cloud_computing)) and the study of elasticity-related performance metrics. Weathervane currently includes a scheduled elasticity service, which allows you to specify changes in the application configuration over time.  When used in combination with time-varying loads, this enables the investigation of some elasticity-related performance issues.  Future implementations of the elasticity service will use real-time monitoring to make decisions about configuration changes.

Over the coming months, we will be publishing additional posts demonstrating the use of these features.

Future Direction

We intend to continue to grow and improve Weathervane’s applicability and ease of use.  We also plan to focus on improving its usefulness as a platform for examining the meaning of performance evaluation in the cloud.  As an open source project, we invite the wider performance community to not only use it, but to participate in extending it and shaping its future direction.  Enhancements to Weathervane may include adding new performance metrics, better metric reporting and real-time monitoring, support for additional services such as cloud-vendor-specific databases and load balancers, and even additional applications to be deployed on the environment under test.  Visit the Weathervane GitHub repository at https://github.com/vmware/weathervane for more information about getting involved.

The Weathervane team would like to thank the VMware Open-Source Program Office (https://blogs.vmware.com/opensource/), and to acknowledge VMware’s commitment to open-source software, for helping to make this release possible.

vCenter 6.5 Performance: what does 6x mean?

At the VMworld 2016 Barcelona keynote, CTO Ray O’Farrell proudly presented the performance improvements in vCenter 6.5. He showed the following slide:


Slide from Ray O’Farrell’s keynote at VMworld 2016 Barcelona, showing 2x improvement in scale from 6.0 to 6.5 and 6x improvement in throughput from 5.5 to 6.5.

As a senior performance engineer who focuses on vCenter, and as one of the presenters of VMworld Session INF8108 (listed in the top-right corner of the slide above), I have received a number of questions regarding the “6x” and “2x scale” labels in the slide above. This blog is an attempt to explain these numbers by describing (at a high level) the performance improvements for vCenter in 6.5. I will focus specifically on the vCenter Appliance in this post.

6x and 2x

Let’s start by explaining the “6x” and the “2x” from the keynote slide.

  1. 6x:  We measure performance in operations per second, where operations include powering on VMs, cloning VMs, VMotions, and so on. More details are presented below under “Benchmarking Details.” The “6x” refers to a sixfold increase in operations per second from vSphere 5.5 to 6.5:
    1. In 5.5, vCenter was capable of approximately 10 operations per second in our testbed.
    2. In 6.0, vCenter was capable of approximately 30 operations per second in our testbed.
    3. In 6.5, vCenter can now perform more than 60 operations per second in our testbed. With faster hardware, vCenter can achieve over 70 operations per second in our testbed.
  2. 2x: The 2x improvement refers to a change in the supported limits for vCenter. The number of hosts has doubled, and the number of VMs has more than doubled:
    1. The supported limits for a single instance of vCenter 6.0 are 1000 hosts, 15000 registered VMs, and 10000 powered-on VMs.
    2. The supported limits for a single instance of vCenter 6.5 are 2000 hosts, 35000 registered VMs, and 25000 powered-on VMs.

Not only are the supported limits higher in 6.5, but the resources required to support such a limit are dramatically reduced.

What does this mean to you as a customer?

The numbers above represent what we have measured in our labs. Clearly, configurations will vary from customer to customer, and observed improvements will differ. In this section, I will give some examples of what we have observed to illustrate the sorts of gains a customer may experience.

PowerOn VM

Before powering on a VM, DRS must collect some information and determine a host for the VM. In addition, both the vCenter server and the ESX host must exchange some information to confirm that the powerOn has succeeded and must record the latest configuration of the VM. By a series of optimizations in DRS related to choosing hosts, and by a large number of code optimizations to reduce CPU usage and reduce critical section time, we have seen improvements of up to 3x for individual powerOns in a DRS cluster. We give an example in the figure below, in which we show the powerOn latency (normalized to the vSphere 6.0 latency, lower is better).


Example powerOn latency for 6.0 vs. 6.5, normalized to 6.0. Lower is better. 6.5 outperforms 6.0. The gains are due primarily to improvements in DRS and general code optimizations.

The benefits are likely to be most prominent in large clusters (e.g., 64 hosts and 8,000 VMs in a cluster), although all cluster sizes will benefit from these optimizations.

Clone VM

Prior to cloning a VM, vCenter does a series of calls to check compatibility of the VM on the destination host, and it also validates the input parameters to the clone. The bulk of the latency for a clone is typically in the disk subsystem of the vSphere hosts. For our testing, we use small VMs (as described below) to allow us to focus on the vCenter portion of latency. In our tests, due to efficiency improvements in the compatibility checks and in the validation steps, we see up to 30% improvement in clone latency, as seen in the figure below, which depicts normalized clone latency for one of our tests.

Example clone VM latency for 6.0 vs. 6.5, normalized to 6.0. Lower is better. 6.5 outperforms 6.0. The gains are in part due to code improvements where we determine which VMs can run on which hosts.

These gains will be most pronounced when the inventory is large (several thousand VMs) or when the VMs to be cloned are small (i.e., < 16GB). For larger VMs, the latency to copy the VM over the network and the latency to write the VM to disk will dominate over the vCenter latency.

VMotion VM

For a VMotion of a large VM, the costs of pre-copying memory pages and then transferring dirty pages typically dominate. With small VMs (4GB or less), the costs imposed by vCenter are similar to those in the clone operation: checking compatibility of the VM with the new host, whether it be the datastore, the network, or the processor. In our tests, we see approximately 15% improvement in VMotion latency, as shown here:

Example VMotion latency for 6.0 vs. 6.5, normalized to 6.0. Lower is better. 6.5 is slightly better than 6.0. The gains are due in part to general code optimizations in the vCenter server.

As with clone, the bulk of these improvements is from a large number of code optimizations to improve CPU and memory efficiency in vCenter. Similar to clone, the improvements are most pronounced with large numbers of VMs or when the VMs are less than 4GB.

Reconfigure VM

Our reconfiguration operation changes the memory share settings for a VM. This requires a communication with the vSphere host followed by updates to the vCenter database to store new settings. While there have been improvements along each of these code paths, the overall latency is similar from 6.0 to 6.5, as shown in the figure below.


Example latency for VM reconfigure task for 6.0 vs. 6.5, normalized to 6.0. Lower is better. The performance is approximately the same from 6.0 to 6.5 (the difference is within experimental error).

Note that the slight increase in 6.5 is within the experimental error for our setup, so for this particular test, the reconfigure operation is basically the same from 6.0 to 6.5.

The previous data were for a sampling of operations, but our efficiency improvements should result in speedups for most operations, whether called through the UI or through our APIs.

Resource Usage and Restart Time of vCenter Server

In addition to the sorts of gains shown above, the improvements from 6.0 to 6.5 have also dramatically reduced the resource usage of the vCenter server. These improvements are described in more detail below, and we give one example here. For an environment in our lab consisting of a single vCenter server managing 64 hosts and 8,000 VMs, the overall vCenter server resource usage dropped from 27GB down to 14GB. The drop is primarily due to the removal of the Inventory Service and optimizations in the core vpxd process of vCenter (especially with respect to DRS).

In our labs, the optimizations described below have also reduced the restart time of vCenter (the time from when the machine hosting vCenter is booted until vCenter can accept API or UI requests). The impact depends on the extensions installed and the amount of data to be loaded at startup by the web client (in the case of accepting UI requests), but we have seen improvements greater than 3x in our labs, and anecdotal evidence from the field suggests larger improvements.

Brief Deep Dive into Improvements

The previous section has shown the types of improvements one might expect over different kinds of operations in vCenter. In this section, we briefly describe some of the code changes that resulted in these improvements.

 “Rocks” and “Pebbles”

The changes from 6.0 to 6.5 can be divided into large, architectural-type changes (so-called “Rocks” because of the large size of the changes) and a large number of smaller optimizations (so-called “Pebbles” because the changes themselves are smaller).

Rocks

There are three main “Rocks” that have led to performance improvements from 6.0 to 6.5:

  1. Removal of Inventory Service
  2. Significant optimizations to CPU and memory usage in DRS, specifically with respect to snapshotting inventory for compatibility checks and initial placement of VMs upon powerOn.
  3. Change from SLES11 (6.0) to PhotonOS (6.5).

Inventory Service. The Inventory Service was originally added to vCenter in the 5.0 release in order to provide a caching layer for inventory data. Clients to vCenter (like the web client) could then retrieve data from the Inventory Service instead of going to the vpxd process within vCenter. Second- and third-party solutions (e.g., vROps or other solutions) could store data in this Inventory Service so that the web client could easily retrieve such data. The Inventory Service was implemented in Java and was backed by an embedded database. While this approach has some benefits with respect to reducing load on vCenter, the cost of maintaining this cache was far higher than its benefits. In particular, in the largest supported setups of vCenter, the memory cost of this service was nearly 16GB, and could be even larger in customer deployments. Maintaining the embedded database also required significant disk IO (nearly doubling the overall IO in vCenter) and CPU. In 6.5, we have removed this Inventory Service and instead have employed a new design that efficiently retrieves data directly from vpxd. With the significant improvements to the vpxd process, this approach is much faster than using the Inventory Service. Moreover, it saves nearly 16GB from our largest setups. Finally, removing the Inventory Service also leads to faster restart times for the vCenter server, since the Inventory Service no longer has to synchronize its data with the core vpxd process of vCenter server before vCenter has finished starting up. In our test labs, the restart times (the time from reboot until vCenter can accept client requests) improved by up to 3x, from a few minutes down to around one minute.

DRS. Our performance results had suggested that DRS adds some overhead when computing initial placement and ongoing placement of VMs. When doing this computation, DRS needs to retrieve the current state of the inventory. A significant effort was undertaken in 6.5 to reduce this overhead. The sizes of the inventory snapshots were reduced, and the overhead of taking such a snapshot was dramatically reduced. One additional source of overhead is the compatibility checks required to determine if a VM is able to run on a given host. This code was dramatically simplified while still preserving the appropriate load-balancing capabilities of DRS.
The combination of simplifying DRS and removing Inventory Service resulted in significant resource usage reductions. To give a concrete example, in our labs, to support the maximum supported inventory of a 6.0 setup (1000 hosts and 15000 registered VMs) required approximately 27GB, while the same size inventory required only 14GB in 6.5.

PhotonOS. The final “Rock” that I will describe is the change from SLES11 to PhotonOS. PhotonOS uses a much more recent version of the Linux Kernel (4.4 vs. 3.0 for SLES11). With much newer libraries, and with a slimmer set of default modules installed in the base image, PhotonOS has proven to be a more efficient guest OS for the vCenter Server Appliance. In addition to these changes, we have also tuned some settings that have given us performance improvements in our labs (for example, changing some of the default NUMA parameters and ensuring that we are using the pre-emptive kernel).

Pebbles

The “Pebbles” are really an accumulation of thousands of smaller changes that together improve CPU usage, memory usage, and database utilization. Three examples of such “Pebbles” are as follows:

  1. Code optimizations
  2. Locking improvements
  3. Database improvements

Code optimizations. Some of the code optimizations above include low-level optimizations like replacing strings with integers or refactoring code to significantly reduce the number of mallocs and frees. The vast majority of cycles used by the vpxd process are typically spent in malloc or in string manipulations (for example, serializing data responses from hosts). By reducing these overheads, we have significantly reduced the CPU and memory resources used to manage our largest test setups.
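
To give a flavor of the first kind of change, here is an illustrative Python sketch (not actual vpxd code) of replacing repeated strings with small integer IDs so that hot paths avoid allocating, hashing, and comparing strings:

```python
# Illustrative sketch only: intern each distinct string once and hand out a
# stable integer ID, so hot paths store and compare integers instead of strings.

class StringTable:
    def __init__(self):
        self._ids = {}      # string -> id
        self._strings = []  # id -> string

    def intern(self, s):
        if s not in self._ids:
            self._ids[s] = len(self._strings)
            self._strings.append(s)
        return self._ids[s]

    def lookup(self, i):
        return self._strings[i]

table = StringTable()
POWERED_ON = table.intern("poweredOn")

# The hot path works with integers; the string is materialized only when needed.
vm_states = [table.intern(s) for s in ("poweredOn", "poweredOff", "poweredOn")]
print(sum(1 for st in vm_states if st == POWERED_ON))  # 2
print(table.lookup(POWERED_ON))                        # poweredOn
```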

Locking improvements. Some of the locking improvements include reducing the length of critical sections and also restructuring code to enable us to remove some coarse-grained locks. For example, we have isolated situations in which an operation may have required consistent state for a cluster, its hosts, and all of its VMs, and reduced the scope so that only VM-level or host-level locks are required. These optimizations require careful reasoning about the code, but ultimately significantly improve concurrency.  An additional set of improvements involved simplifying the locking primitives themselves so that they are faster to acquire and release. These sorts of changes also improve concurrency. Improving concurrency not only improves performance, but it better enables us to take advantage of newer hardware with more cores: without such improvements, software would be a bottleneck, and the extra cores would otherwise be idle.
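
Schematically, shrinking a critical section and narrowing lock granularity look like the following illustrative Python sketch (again, not actual vpxd code): the work that needs no shared state moves outside the lock, and only the narrowest lock protecting the mutated state is held.

```python
import threading

# Schematic illustration: shrink the critical section and replace a coarse,
# inventory-wide lock with a narrower per-VM lock.

inventory_lock = threading.Lock()
vm_locks = {"vm1": threading.Lock(), "vm2": threading.Lock()}
vm_settings = {"vm1": {}, "vm2": {}}

def validate(settings):
    # CPU-only work that touches no shared state.
    return {k: v for k, v in settings.items() if v is not None}

def reconfigure_coarse(vm, settings):
    with inventory_lock:                 # before: everything under one big lock
        validated = validate(settings)
        vm_settings[vm].update(validated)

def reconfigure_fine(vm, settings):
    validated = validate(settings)       # after: validation moved out of the critical section
    with vm_locks[vm]:                   # and only the per-VM state is locked
        vm_settings[vm].update(validated)

reconfigure_fine("vm1", {"memShares": 2000, "cpuShares": None})
print(vm_settings["vm1"])                # {'memShares': 2000}
```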

Database improvements. The vCenter server stores configuration and statistics data in the database. Any changes to the VM, host, or cluster configuration that occur as a result of an operation (for example, powering on a VM) must be persisted to the database. We have made an active effort to reduce the amount of data that must be stored in the database (for example, storing it on the host instead). By reducing this data, we reduce the network traffic between vCenter server and the hosts, because less data is transferred, and we also reduce disk traffic by the database.

A side benefit of using the vCenter server appliance is that the database (Postgres) is embedded in the appliance. As a result, the latency between the vpxd service and the database is minimized, resulting in performance improvements relative to using a remote database (as is typically used in vCenter Windows installations). This improvement can be 10% or more in environments with lots of operations being performed.

Benchmarking Details

Our benchmark results are based on our vcbench workload generator. A more complete description of vcbench is given in VMware vCenter Server Performance and Best Practices, but briefly, vcbench consists of a Java client that sends management operations to vCenter server. Operations include (but are not limited to) powering on VMs, cloning VMs, migrating VMs, VMotioning VMs, reconfiguring VMs, registering VMs, and snapshotting VMs. The Java client opens up tens to hundreds of concurrent sessions to the vCenter server and issues tasks on each of these sessions.

The performance of vcbench is typically given in terms of throughput, for example, operations per second. This number represents the number of management operations that vCenter can complete per second. To compute this value, we run vcbench for a specified amount of time (for example, several hours) and then measure how many operations have completed. We then divide by the runtime of the test. For example, 70 operations per second is 4200 operations per minute, or over 250,000 operations in an hour. We run anywhere from 32 concurrent sessions to 512 concurrent sessions connected to vCenter.
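
As a trivial illustration of that arithmetic (not the vcbench code itself), the throughput number can be computed from a run's operation count and duration:

```python
def ops_per_second(completed_ops, runtime_seconds):
    """vcbench-style throughput: completed management operations divided by
    the wall-clock runtime of the measurement interval."""
    return completed_ops / runtime_seconds

# Example: a 2-hour run that completed 504,000 management operations.
print(ops_per_second(504000, 2 * 3600))   # 70.0 operations per second
print(70 * 60)                            # 4,200 operations per minute
print(70 * 3600)                          # 252,000 operations per hour
```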

The throughput measured by vcbench is dependent on the types of operations in the workload mix. We have tried to model our workload mix based on the frequency of operations in customer setups. In such setups, often power operations and provisioning operations (e.g., cloning VMs) are prevalent.

Finally, the throughput measured by vcbench also depends on hardware and other vCenter settings. For example, in our “benchmarking” runs, we run with level 1 statistics. We also do performance measurements with higher statistics levels, but our baseline measurements use level 1. In addition, we use SSDs to ensure that the disk is not a bottleneck, and we also make sure to have sufficient CPU and memory to ensure that they are not resource-constrained. By removing hardware as a constraint, we are able to find and fix bottlenecks in our software. Our benchmarking runs also typically do not have extensions like vROps or NSX connected to vCenter. We do additional runs with these extensions installed so that we can understand their impact and provide guidance to customers, but they are not part of the base performance reports.

Conclusion

vCenter 6.5 can support 2x the inventory size as vCenter 6.0. Moreover, vCenter 6.5 provides dramatically higher throughput than 6.0, and can manage the same environment size with less CPU and memory. In this note, I have tried to give some details regarding the source of these improvements. The gains are due to a number of significant architectural improvements (like removing the Inventory Service caching layer) as well as a great deal of low-level code optimizations (for example, reducing memory allocations and shortening critical sections). I have also provided some details about our benchmarking methodology as well as the hardware and software configuration.

Acknowledgments

The vCenter improvements described in this blog are the results of thousands of person-hours from vCenter developers, performance engineers, and others throughout VMware. I am deeply grateful to them for making this happen.

SQL Server VM Performance with VMware vSphere 6.5

Achieving optimal SQL Server performance on vSphere has been a constant focus here at VMware; I’ve published past performance studies with vSphere 5.5 and 6.0 which showed excellent performance up to the maximum VM size supported at the time.

Since then, there have been quite a few changes!  While this study uses a similar test methodology, it features an updated hypervisor (vSphere 6.5), database engine (SQL Server 2016), OLTP benchmark (DVD Store 3), and CPUs (Intel Xeon v4 processors with 24 cores per socket, codenamed Broadwell-EX).

The new tests show large SQL Server databases continue to run extremely efficiently, achieving great performance on vSphere 6.5. Following our best practices was all that was necessary to achieve this scalability – which reminds me, don’t forget to check out Niran’s new SQL Server on vSphere best practices guide, which was also just updated.

In addition to performance, power consumption was measured on each ESXi host. This allowed for a comparison of Host Power Management (HPM) policies within vSphere, performance per watt of each host, and power draw under stress versus idle:

Generational SQL Server DB Host Power and Performance/watt

Additionally, this new study compares a virtual file-based disk (VMDK) on VMware’s native Virtual Machine File System (VMFS 5) to a physical Raw Device Mapping (RDM). I added this test for two reasons: first, it has been several years since they have been compared; and second, customer feedback from VMworld sessions indicates this is still a debate that comes up in IT shops, particularly with regard to deploying database workloads such as SQL Server and Oracle.

For more details and the test results, download the paper: Performance Characterization of Microsoft SQL Server on VMware vSphere 6.5

Performance of Storage I/O Control (SIOC) with SSD Datastores – vSphere 6.5

With Storage I/O Control (SIOC), vSphere 6.5 administrators can adjust the storage performance of VMs so that VMs with critical workloads will get the I/Os per second (IOPS) they need. Admins assign shares (the proportion of IOPS allocated to the VM), limits (the upper bound of VM IOPS), and reservations (the lower bound of VM IOPS) to the VMs whose IOPS need to be controlled.  After shares, limits, and reservations have been set, SIOC is automatically triggered to meet the desired policies for the VMs.
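
For illustration, the following pyVmomi sketch shows one way these per-disk settings might be applied through the vSphere API. The property names reflect my reading of the API, and the vCenter host, credentials, and VM name are placeholders; treat it as a sketch to verify against your environment rather than a finished script.

```python
from pyVim.connect import SmartConnectNoSSL
from pyVmomi import vim

# Hedged sketch: set SIOC shares, limit, and reservation on a VM's first
# virtual disk. Host name, credentials, and VM name below are placeholders.
si = SmartConnectNoSSL(host="vcenter.example.com",
                       user="administrator@vsphere.local", pwd="secret")

# Locate the VM by name using a container view.
view = si.content.viewManager.CreateContainerView(
    si.content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "critical-db-vm")
view.DestroyView()

disk = next(d for d in vm.config.hardware.device
            if isinstance(d, vim.vm.device.VirtualDisk))

alloc = vim.StorageResourceManager.IOAllocationInfo()
alloc.shares = vim.SharesInfo(level="custom", shares=2000)  # proportion of IOPS
alloc.limit = 5000                                          # upper bound (IOPS)
alloc.reservation = 1000                                    # lower bound (IOPS)
disk.storageIOAllocation = alloc

change = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.edit, device=disk)
vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))
```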

A recently published paper shows the performance of SIOC meets expectations and successfully controls the number of IOPS for VM workloads.


vCenter Server 6.5 High Availability Performance and Best Practices

High availability (aka HA) services are important in any platform, and VMware vCenter Server® is no exception. As the main administrative and management tool of vSphere, it is a critical element that requires HA. vCenter Server HA (aka VCHA) delivers protection against software and hardware failures with excellent performance for common customer scenarios, as shown in this paper.

Much work has gone into the high availability feature of VMware vCenter Server® 6.5 to ensure that this service and its operations minimally affect the performance of your vCenter Server and vSphere hosts. We thoroughly tested VCHA with a benchmark that simulates common vCenter Server activities in both regular and worst case scenarios. The result is solid data and a comprehensive performance characterization in terms of:

  • Performance of VCHA failover/recovery time objective (RTO): In case of a failure, vCenter Server HA (VCHA) provides failover/RTO such that users can continue with their work in less than 2 minutes through API clients and less than 4 minutes through UI clients. While failover/RTO depends on the vCenter Server configuration and the inventory size, in our tests it is within the target limit, which is 5 minutes.
  • Performance of enabling VCHA: We observed that enabling VCHA would take around 4 – 9 minutes depending on the vCenter Server configuration and the inventory size.
  • VCHA overhead: When VCHA is enabled, there is no significant impact for vCenter Server under typical load conditions. We observed a noticeable but small impact of VCHA when the vCenter Server was under extreme load; however, it is unlikely for customers to generate that much load on the vCenter Server for extended time periods.
  • Performance impact of vCenter Server statistics level: With an increasing statistics level, vCenter Server produces less throughput, as expected. When VCHA is enabled for various statistics levels, we observe a noticeable but small impact of 3% to 9% on throughput.
  • Performance impact of a private network: VCHA is designed to support LAN networks with up to 10 ms latency between VCHA nodes. However, this comes with a performance penalty. We study the performance impact of the private network in detail and provide further guidelines about how to configure VCHA for the best performance.
  • External Platform Services Controller (PSC) vs Embedded PSC: We study VCHA performance comparing these two deployment modes and observe a minimal difference between them.

Throughout the paper, our findings show that vCenter Server HA performs well under a variety of circumstances. In addition to the performance study results, the paper describes the VCHA architecture and includes some useful performance best practices for getting the most from VCHA.

For the full paper, see VMware vCenter Server High Availability Performance and Best Practices.

vSphere 6.5 DRS Performance – A new white-paper

VMware recently announced the general availability of vSphere 6.5. Among the many new features in this release are some DRS specific ones like predictive DRS, and network-aware DRS. In vSphere 6.5, DRS also comes with a host of performance improvements like the all-new VM initial placement and the faster and more effective maintenance mode operation.

If you want to learn more about them, we published a new white paper on the new features and performance improvements of DRS in vSphere 6.5. Here are some highlights from the paper:

Latency Sensitive VMs and vSphere DRS

Some applications are inherently highly latency sensitive and cannot afford long vMotion times. VMs running such applications are termed ‘latency sensitive’. These VMs consume resources very actively, so vMotion of such VMs is often a slow process, and they require special care during cluster load balancing due to their latency sensitivity.

You can tag a VM as latency sensitive by setting the VM option through the vSphere web client (VM → Edit Settings → VM Options → Advanced).

By default, the latency sensitivity value of a VM is set to ‘normal’. Changing it to ‘high’ makes the VM latency sensitive. There are other levels, ‘medium’ and ‘low’, which are experimental right now. Once the value is set to ‘high’, 100% of the VM’s configured memory must be reserved. It is also recommended to reserve 100% of its CPU. This white paper talks more about the VM latency sensitivity feature in vSphere.
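
The same setting can also be applied programmatically. The following hedged pyVmomi sketch shows one way this might look; the property names are based on my reading of the vSphere API, and vm is assumed to be an already-retrieved VirtualMachine object, so verify against your environment before use.

```python
from pyVmomi import vim

# Hedged sketch: mark a VM as latency sensitive and reserve 100% of its memory
# and CPU. 'vm' is assumed to be an already-retrieved vim.VirtualMachine object.
spec = vim.vm.ConfigSpec()
spec.latencySensitivity = vim.LatencySensitivity(level="high")
spec.memoryReservationLockedToMax = True   # reserve all of the VM's configured memory

# Reserve roughly 100% of the VM's CPU: vCPU count times the host's per-core clock (MHz).
cpu_mhz = vm.runtime.host.summary.hardware.cpuMhz
spec.cpuAllocation = vim.ResourceAllocationInfo(
    reservation=vm.config.hardware.numCPU * cpu_mhz)

task = vm.ReconfigVM_Task(spec=spec)
```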

DRS support

VMware vSphere DRS provides support for handling such special VMs. If a VM is part of a DRS cluster, tagging it as latency sensitive will create a VM-Host soft affinity rule. This will ensure that DRS will not move the VM unless it is absolutely necessary. For example, in scenarios where the cluster is over-utilized, all the soft rules will be dropped and VMs can be moved.

To showcase how this option works, we ran a simple experiment with a four-host DRS cluster running a latency sensitive VM (10.156.231.165:VMZero-Latency-Sensitive-1) on one of its hosts (10.156.231.165).


In this scenario, the CPU usage of host ‘10.156.231.165’ is higher than that of the other hosts, and the cluster load is not balanced. So DRS migrates VMs from the highly utilized host (10.156.231.165) to distribute the load.

Since the latency sensitive VM is a heavy consumer of resources, it is the best possible candidate to migrate, as moving it distributes the load in one shot. So DRS migrated the latency sensitive VM to a different host in order to distribute the load.


Then we put the cluster back into its original state and set the VM latency sensitivity value to ‘high’ using VM options (as mentioned earlier), along with 100% memory and CPU reservations. This time, due to the associated soft-affinity rule, DRS completely avoided the latency sensitive VM. It migrated other VMs from the same host to distribute the load.


Things to note:

  • 100% memory reservation for the latency sensitive VM is a must. Without the memory reservation, vMotion will fail; if the VM is powered off, it cannot be powered on until the reservation is set.
  • Since DRS uses a soft-affinity rule, sometimes the cluster might become imbalanced due to these VMs.
  • If multiple VMs are latency sensitive, spread them across hosts and then tag them as latency sensitive. This avoids over-utilization of hosts and results in better resource distribution.

Understanding vSphere DRS Performance – A White Paper

VMware vSphere Distributed Resource Scheduler (DRS) is responsible for the placement of virtual machines and the balancing of resources in a cluster. The key driver for DRS is VM/application happiness, and it achieves this through effective VM placement and efficient load balancing. We have a new white paper that explains how DRS works in basic scenarios and how it can be tuned to behave differently for specific scenarios.

The white paper talks about the factors that influence DRS decisions and provides some useful insights into different parameters that can be tuned in specific scenarios to make DRS more effective. It also explains how to monitor DRS to better understand its behavior.

It covers DRS behavior in specific scenarios with some case studies. Some of these studies are around

  •  VM Consumed vs. Active Memory – How it impacts DRS behavior.
  •  Impact of VM overrides on cluster balance.
  •  Prerequisite moves during initial placement.
  •  Using shares to prioritize cluster resources.

The paper provides knowledge about the factors that affect DRS behavior and helps you understand how DRS does what it does. This knowledge, along with monitoring and troubleshooting tips, including real case studies, will help you tune DRS clusters for optimum performance.

Machine Learning on VMware vSphere 6 with NVIDIA GPUs

by Uday Kurkure, Lan Vu, and Hari Sivaraman

Machine learning is an exciting area of technology that allows computers to learn without being explicitly programmed, that is, much in the way a person might learn. This technology is increasingly applied in many areas like health science, finance, and intelligent systems, among others.

In recent years, the emergence of deep learning and the enhancement of accelerators like GPUs have driven tremendous adoption of machine learning applications across broader and deeper aspects of our lives. Some application areas include facial recognition in images, medical diagnosis in MRIs, robotics, automobile safety, and text and speech recognition.

Machine learning workloads have also become a critical part of cloud computing. For cloud environments based on vSphere, you can even deploy a machine learning workload yourself using GPUs via VMware DirectPath I/O or vGPU technology.

GPUs reduce the time it takes for a machine learning or deep learning algorithm to learn (known as the training time) from hours to minutes. In a series of blogs, we will present the performance results of running machine learning benchmarks on VMware vSphere using NVIDIA GPUs.

Episode 1: Performance Results of Machine Learning with DirectPath I/O and NVIDIA GPUs

In this episode, we present the performance results of running machine learning benchmarks on VMware vSphere with NVIDIA GPUs in DirectPath I/O mode and in GRID virtual GPU (vGPU) mode.

Training Time Reduction from Hours to Minutes

Training time is the performance metric used in supervised machine learning: it is the amount of time a computer takes to learn how to solve the given problem. In supervised machine learning, the computer is given data in which the answer can be found, so supervised learning infers a model from the available, or labeled, training data.

Our first machine learning benchmark is a simple demo model in the TensorFlow library. The model classifies handwritten digits from the MNIST dataset. Each digit is a handwritten number that is centered within a consistently sized grayscale bitmap. The MNIST database of handwritten digits contains 60,000 training examples and has a test set of 10,000 examples.
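
For a concrete sense of the workload, here is a minimal TensorFlow sketch in the spirit of that MNIST demo (the classic softmax tutorial model, not necessarily the exact benchmark script we ran), with a simple timer around the training loop; TensorFlow places the work on a GPU automatically when one is available.

```python
import time

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data  # TF 1.x-era tutorial helper

# Load MNIST: 60,000 training images and 10,000 test images of handwritten digits.
mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

x = tf.placeholder(tf.float32, [None, 784])   # 28x28 grayscale images, flattened
y_ = tf.placeholder(tf.float32, [None, 10])   # one-hot digit labels
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
logits = tf.matmul(x, W) + b

loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=logits))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(loss)

with tf.Session() as sess:  # ops run on a GPU automatically if one is visible
    sess.run(tf.global_variables_initializer())
    start = time.time()
    for _ in range(1000):
        batch_xs, batch_ys = mnist.train.next_batch(100)
        sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
    print("training time: %.1f seconds" % (time.time() - start))
```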

First, we compare training times for the model using two different virtual machine configurations:

  • NVIDIA GRID Configuration (vg1c12m60GB): 1 vGPU, 12 vCPUs, 60GB memory, 96GB of SSD storage, CentOS 7.2
  • No GPU configuration (g0c12m60GB): No GPU, 12 vCPUs, 60GB memory, 96GB of SSD storage, CentOS 7.2

MNIST                                       vg1c12m60GB (1 vGPU)   g0c12m60GB (No GPU)
Normalized Training Time (w.r.t. vg1c12)    1.0                    10.06
CPU Utilization                             8%                     43%

The above table shows that the vGPU reduces the training time by a factor of 10, while CPU utilization drops by a factor of about 5. See the graphs below.

Figure: Normalized MNIST training time, 1 vGPU vs. no GPU

Figure: Host CPU utilization during MNIST training, 1 vGPU vs. no GPU

Scaling from One GPU to Four GPUs

This machine learning benchmark is made up of two components:

We use the metric of images per second (images/sec) to compare the different configurations as we scale from a single GPU to 4 GPUs. The metric of images/second denotes the number of images processed per second in training the model.
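
In other words, images/sec is simply the number of training images consumed divided by the elapsed training time; a trivial illustration:

```python
def images_per_second(num_batches, batch_size, elapsed_seconds):
    """Training throughput: total images consumed divided by wall-clock time."""
    return (num_batches * batch_size) / elapsed_seconds

# Example: 10,000 batches of 128 images processed in 1,280 seconds
# is a throughput of 1,000 images/sec.
print(images_per_second(10000, 128, 1280.0))
```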

Our host has two NVIDIA M60 cards. Each card has 2 GPUs. We present the performance results for scaling up from 1 GPU to 4 GPUs.

You can configure the GPUs in two modes:

  • DirectPath I/O passthrough: In this mode, the host can be configured to have 1 to 4 GPUs in a DirectPath I/O passthrough mode. A virtual machine running on the host will have access to 1 to 4 GPUs in passthrough mode.
  • GRID vGPU mode: For machine learning workloads, each VM should be configured with the highest profile vGPU. Since we have M60 GPUs, we configured VMs with vGPU type M60-8q. M60-8q implies one VM/GPU.
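
For reference, a vGPU profile such as M60-8q can also be attached to a VM through the vSphere API by adding a shared PCI device with a vGPU backing. The following hedged pyVmomi sketch shows the general shape; the class and property names are based on my reading of the API, and vm is assumed to be an already-located, powered-off virtual machine.

```python
from pyVmomi import vim

# Hedged sketch: attach an NVIDIA GRID vGPU profile (M60-8q) to a VM by adding
# a shared PCI device with a vGPU backing. 'vm' is assumed to be an
# already-located, powered-off vim.VirtualMachine object.
backing = vim.vm.device.VirtualPCIPassthrough.VgpuBackingInfo(vgpu="grid_m60-8q")
vgpu_dev = vim.vm.device.VirtualPCIPassthrough(key=-100, backing=backing)

change = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.add, device=vgpu_dev)

spec = vim.vm.ConfigSpec(
    deviceChange=[change],
    memoryReservationLockedToMax=True)  # vGPU VMs require a full memory reservation
task = vm.ReconfigVM_Task(spec=spec)
```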

DirectPath I/O

First we focus on DirectPath I/O passthrough mode as we scale from 1 GPU to 4 GPUs.

CIFAR-10                                             g1c48m60GB (1 GPU)   g2c48m60GB (2 GPUs)   g4c48m60GB (4 GPUs)
Normalized Images/sec in Thousands (w.r.t. 1 GPU)    1.0                  2.04                  3.74
CPU Utilization                                      25%                  44%                   71%

As the above table shows, the number of images processed per second improves almost linearly with the number of GPUs on the host. With 1 GPU as the baseline at a normalized 1,000 images/sec, we expect 2 GPUs to handle about double that, which the graph shows, and 4 GPUs process nearly 4,000 images/sec.

Figure: CIFAR-10 normalized images/sec as the number of GPUs increases

Host CPU utilization also increases linearly, as shown in the following graph.

Figure: Host CPU utilization as the number of GPUs increases

Single GPU DirectPath I/O vs GRID vGPU mode

Now, we present a comparison of performance results for DirectPath I/O and GRID vGPU modes.

Since each VM can have only one vGPU in GRID vGPU mode, we first compare the 1-GPU configuration in DirectPath I/O mode with the vGPU configuration.

 

MNIST (Lower Is Better)       g1c48m60GB (DirectPath I/O)   vg1c48m60GB (GRID vGPU)
Normalized Training Times     1.0                           1.05

CIFAR-10 (Higher Is Better)   g1c48m60GB (DirectPath I/O)   vg1c48m60GB (GRID vGPU)
Normalized Images/sec         1.0                           0.87

The above tables show that the one-GPU configurations in DirectPath I/O mode and GRID vGPU mode are very close in performance. We suggest you use GRID vGPU mode because it offers the benefits of virtualization.

Multi-GPU DirectPath I/O vs Multi-VM DirectPath I/O vs Multi-VMs in GRID vGPU mode

Now we move on to multi-GPU performance results for DirectPath I/O and GRID vGPU mode. In DirectPath I/O mode, a VM can be configured with all the GPUs on the host.  In our case, we configured the VM with 4 GPUs. In GRID vGPU mode, each VM can have at most 1 GPU. Therefore, we compare the results of 4 VMs running the same job against a single VM using 4 GPUs in DirectPath I/O mode.

CIFAR-10                                   g4c48m60GB (DirectPath I/O)   g1c12m16GB (DirectPath I/O, 4 VMs)   vg1c12m16GB (GRID vGPU, 4 VMs)
Normalized Images/sec (Higher Is Better)   1.0                           0.98                                 0.92
CPU Utilization                            71%                           68%                                  69%


The multi-GPU DirectPath I/O configuration performs better. If your workload requires low latency or a short training time, you should use multi-GPU DirectPath I/O mode. However, other virtual machines will not be able to use the GPUs on the host at the same time. If you can tolerate longer latencies or training times, we recommend using a 1-GPU configuration. GRID vGPU mode enables the benefits of virtualization: flexibility and elasticity.

Takeaways

  • GPUs bring the training times of machine learning algorithms down from hours to minutes.
  • You can use NVIDIA GPUs in two modes in the VMware vSphere environment for machine learning applications:
    • DirectPath I/O passthrough mode
    • GRID vGPU mode
  • You should use GRID vGPU mode with the highest vGPU profile. The highest vGPU profile implies 1 VM/GPU, thus giving the virtual machine full access to the entire GPU.
  • For a 1-GPU configuration, the performance of the machine learning applications in GRID vGPU mode is comparable to DirectPath I/O.
  • For the shortest training time, you should use a multi-GPU configuration in DirectPath I/O mode.
  • For running multiple machine learning jobs simultaneously, you should use GRID vGPU mode. This configuration offers a higher consolidation of virtual machines and leverages the flexibility and elasticity benefits of VMware virtualization.

Go to Machine Learning on vSphere 6 with Nvidia GPUs – Episode 2.

Configuration Details

Host Configuration

Model Dell PowerEdge R730
Processor Type Intel® Xeon® CPU E5-2680 v3 @ 2.50GHz
CPU Cores 24 CPUs, each @ 2.499GHz
Processor Sockets 2
Cores per Socket 12
Logical Processors 48
Hyperthreading Active
Memory 768GB
Storage Local SSD (1.5TB), Storage Arrays, Local Hard Disks
GPUs 2x M60 Tesla

Software Configuration

ESXi 6.0.0, build 3500742
Guest OS CentOS Linux release 7.2.1511 (Core)
CUDA Driver 7.5
CUDA Runtime 7.5

VM Configurations

VM             vCPUs      Memory   Storage        GPUs   Guest OS     Mode
g0xc12m60GB    12 vCPUs   60GB     1x96GB (SSD)   0      CentOS 7.2   No GPU
g1xc12m60GB    12 vCPUs   60GB     1x96GB (SSD)   1      CentOS 7.2   DirectPath I/O
g2xc48m60GB    48 vCPUs   60GB     1x96GB (SSD)   2      CentOS 7.2   DirectPath I/O
g4xc48m60GB    48 vCPUs   60GB     1x96GB (SSD)   4      CentOS 7.2   DirectPath I/O
vg1xc12m60GB   12 vCPUs   60GB     1x96GB (SSD)   1      CentOS 7.2   GRID vGPU
g1c12m16GB     12 vCPUs   16GB     1x96GB (SSD)   1      CentOS 7.2   DirectPath I/O
vg1c12m16GB    12 vCPUs   16GB     1x96GB (SSD)   1      CentOS 7.2   GRID vGPU

New White Paper: Best Practices for Optimizing Big Data Performance on vSphere 6

A new white paper is available showing how to best deploy and configure vSphere for Big Data applications such as Hadoop and Spark. Hardware, software, and vSphere configuration parameters are documented, as well as tuning parameters for the operating system, Hadoop, and Spark.

The best practices were tested on a Dell 12-server cluster, with Hadoop installed on vSphere as well as on bare metal. Workloads for both Hadoop (TeraSort and TestDFSIO) and Spark (Support Vector Machines and Logistic Regression) were run on the cluster. The virtualized cluster outperformed the bare metal cluster by 5-10% for all MapReduce and Spark workloads with the exception of one Spark workload, which ran at parity. All workloads showed excellent scaling from 5 to 10 worker servers and from smaller to larger dataset sizes.

Here are the results for the TeraSort suite:

TeraSort Suite Performance

And for Spark Support Vector Machines:

Spark Support Vector Machine Performance

Here are the best practices cited in this paper:

  • Reserve about 5-6% of total server memory for ESXi; use the remainder for the virtual machines.
  • Create 1 or more virtual machines per NUMA node.
  • Limit the number of disks per DataNode to maximize the utilization of each disk – 4 to 6 is a good starting point.
  • Use eager-zeroed thick VMDKs along with the ext4 filesystem inside the guest.
  • Use the VMware Paravirtual SCSI (pvscsi) adapter for disk controllers; use all 4 virtual SCSI controllers available in vSphere 6.0.
  • Use the vmxnet3 network driver; configure virtual switches with MTU=9000 for jumbo frames.
  • Configure the guest operating system for Hadoop performance including enabling jumbo IP frames, reducing swappiness, and disabling transparent hugepage compaction.
  • Place Hadoop master roles, ZooKeeper, and journal nodes on 3 virtual machines for optimum performance and to enable high availability.
  • Dedicate the worker nodes to run only the HDFS DataNode, YARN NodeManager, and Spark Executor roles.
  • Use the Hadoop rack awareness feature to place virtual machines belonging to the same physical host in the same rack for optimized HDFS block placement.
  • Run the Hive Metastore in a separate database.
  • Set the YARN cluster container memory and vcores to slightly overcommit both resources.
  • Adjust the task memory and vcore requirement to optimize the number of maps and reduces for each application.

All details are in the paper, Big Data Performance on vSphere 6: Best Practices for Optimizing Virtualized Big Data Applications.