
Performance of Storage I/O Control (SIOC) with SSD Datastores – vSphere 6.5

With Storage I/O Control (SIOC), vSphere 6.5 administrators can adjust the storage performance of VMs so that VMs with critical workloads will get the I/Os per second (IOPS) they need. Admins assign shares (the proportion of IOPS allocated to the VM), limits (the upper bound of VM IOPS), and reservations (the lower bound of VM IOPS) to the VMs whose IOPS need to be controlled.  After shares, limits, and reservations have been set, SIOC is automatically triggered to meet the desired policies for the VMs.
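
As a rough illustration of how these controls map onto the vSphere API, the sketch below uses pyvmomi to set custom shares, an IOPS limit, and an IOPS reservation on a single virtual disk. The vCenter address, credentials, VM name, disk label, and values are placeholders, and a real script would also need proper SSL handling and error checking.

# Minimal pyvmomi sketch: set shares, an IOPS limit, and an IOPS reservation on
# one virtual disk. Endpoint, credentials, VM name, and values are placeholders.
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com",           # hypothetical vCenter
                  user="administrator@vsphere.local",
                  pwd="********")                        # may need an SSL context
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder,
                                               [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "critical-db-vm")  # hypothetical VM

for dev in vm.config.hardware.device:
    if isinstance(dev, vim.vm.device.VirtualDisk) and dev.deviceInfo.label == "Hard disk 1":
        dev.storageIOAllocation = vim.StorageResourceManager.IOAllocationInfo(
            shares=vim.SharesInfo(level="custom", shares=2000),  # proportion of IOPS
            limit=5000,          # upper bound of VM IOPS
            reservation=1000)    # lower bound of VM IOPS
        disk_spec = vim.vm.device.VirtualDeviceSpec(
            operation=vim.vm.device.VirtualDeviceSpec.Operation.edit, device=dev)
        vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=[disk_spec]))

Disconnect(si)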

A recently published paper shows the performance of SIOC meets expectations and successfully controls the number of IOPS for VM workloads.

Virtual Machine vCPU and vNUMA Rightsizing – Rules of Thumb

Using virtualization, we have all enjoyed the flexibility to quickly create virtual machines with various virtual CPU (vCPU) configurations for a diverse set of workloads.  But as we virtualize larger and more demanding workloads, like databases, on top of the latest generations of processors with up to 24 cores, special care must be taken in vCPU and vNUMA configuration to ensure performance is optimized.
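
As a purely illustrative sketch of one common rule of thumb (keep a VM within a single physical NUMA node when its vCPU count allows), the snippet below computes a virtual-socket layout for a hypothetical host; the core counts and the heuristic itself are assumptions for illustration, not sizing guidance from this post.

# Minimal sketch: choose a virtual-socket / cores-per-socket layout so a VM fits
# inside one physical NUMA node when possible. Host values below are hypothetical.
import math

def suggest_vcpu_layout(vcpus, cores_per_numa_node):
    if vcpus <= cores_per_numa_node:
        # Fits in one NUMA node: present a single virtual socket.
        return {"virtual_sockets": 1, "cores_per_socket": vcpus}
    # Must span nodes: spread evenly across as few nodes as possible.
    nodes_needed = math.ceil(vcpus / cores_per_numa_node)
    cores_per_socket = math.ceil(vcpus / nodes_needed)
    return {"virtual_sockets": nodes_needed, "cores_per_socket": cores_per_socket}

# Example: a dual-socket host with 24 cores per socket (one NUMA node per socket).
print(suggest_vcpu_layout(16, 24))  # -> {'virtual_sockets': 1, 'cores_per_socket': 16}
print(suggest_vcpu_layout(32, 24))  # -> {'virtual_sockets': 2, 'cores_per_socket': 16}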

Machine Learning on vSphere 6 with Nvidia GPUs – Episode 2

by Hari Sivaraman, Uday Kurkure, and Lan Vu

In a previous blog [1], we looked at how machine learning workloads (MNIST and CIFAR-10) using TensorFlow, running in vSphere 6 VMs in an NVIDIA GRID configuration, cut training time from hours to minutes compared to the same system running without virtual GPUs.

Here, we extend our study to multiple workloads—3D CAD and machine learning—run at the same time versus run independently on the same vSphere server.

New Fling released – IOInsight

By Sankaran Sivathanu

VMware IOInsight is a tool to help people understand a VM’s storage I/O behavior. By understanding their VM’s I/O characteristics, customers can make better decisions about storage capacity planning and performance tuning. IOInsight ships as a virtual appliance that can be deployed in any vSphere environment and includes an intuitive web-based UI that allows users to choose VMDKs to monitor and view results.

Where does IOInsight help?

  • Customers may better tune and size their storage.
  • When contacting VMware Support about vSphere storage issues, including an IOInsight report can help Support better understand the issue and potentially lead to a faster resolution.
  • VMware Engineering can optimize products with a better understanding of various customers’ application behavior.

IOInsight captures I/O traces from ESXi and generates various aggregated metrics that represent the I/O behavior. The IOInsight report contains only these aggregated metrics and no sensitive information about the application itself. In addition to the built-in metrics computed by IOInsight, users can also write new analyzer plugins for IOInsight and visualize the results. A comprehensive SDK and development guide is included in the download bundle.
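
The plugin interface itself is defined by the SDK in the download bundle, so the snippet below is only a standalone illustration (with a made-up analyze() function and trace format, not the IOInsight API) of the kind of aggregated, non-sensitive metrics an analyzer might compute.

# Standalone illustration only: the real IOInsight analyzer-plugin interface is
# defined by the SDK. This sketch shows the kind of aggregated metrics such an
# analyzer might compute from a trace of (operation, io_size_bytes) records.
from collections import Counter

def analyze(trace):
    """trace: iterable of (op, size) tuples, e.g. ("read", 4096)."""
    ops = Counter(op for op, _ in trace)
    size_histogram = Counter(size for _, size in trace)
    total = sum(ops.values()) or 1
    return {
        "read_pct": 100.0 * ops["read"] / total,
        "write_pct": 100.0 * ops["write"] / total,
        "io_size_histogram": dict(size_histogram),
    }

sample = [("read", 4096), ("read", 4096), ("write", 65536), ("read", 8192)]
print(analyze(sample))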

The Fling works with vSphere 5.5 and later and can be downloaded at https://labs.vmware.com/flings/ioinsight.

vSphere 6.5 Encrypted vMotion Architecture and Performance

With the rise in popularity of hybrid cloud computing, where sensitive VM data leaves the traditional IT environment and traverses public networks, IT administrators and architects need a simple and secure way to protect critical VM data as it moves across clouds and over long distances.

The Encrypted vMotion feature available in VMware vSphere® 6.5 addresses this challenge by introducing a software approach that provides end-to-end encryption for vMotion network traffic. The feature encrypts all vMotion data inside the vmkernel using the widely adopted AES-GCM encryption standard, thereby providing data confidentiality, integrity, and authenticity even when vMotion traffic traverses untrusted network links.
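
The encryption itself happens inside the vmkernel, but the guarantees AES-GCM provides are easy to demonstrate in isolation. The sketch below uses the third-party Python cryptography package (an assumption for illustration, not something vSphere exposes) to show that GCM yields ciphertext plus an authentication tag, so any tampering is detected at decryption time.

# Illustration only: vSphere performs this inside the vmkernel. This sketch shows
# the properties AES-GCM provides: confidentiality (ciphertext) plus integrity
# and authenticity (tampering is detected).
import os
from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)
nonce = os.urandom(12)                                # 96-bit nonce, unique per message

plaintext = b"vMotion memory page contents"
ciphertext = aesgcm.encrypt(nonce, plaintext, None)   # ciphertext + auth tag

# Decryption both recovers and authenticates the data...
assert aesgcm.decrypt(nonce, ciphertext, None) == plaintext

# ...and any modification in transit is detected.
tampered = bytes([ciphertext[0] ^ 1]) + ciphertext[1:]
try:
    aesgcm.decrypt(nonce, tampered, None)
except InvalidTag:
    print("tampering detected")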

A new white paper, “VMware vSphere 6.5 Encrypted vMotion Architecture, Performance and Best Practices”, is now available. In that paper, we describe the vSphere 6.5 Encrypted vMotion architecture and provide a comprehensive look at the performance of live migrating virtual machines running typical Tier 1 applications using vSphere 6.5 Encrypted vMotion. Tests measure characteristics such as total migration time and application performance during live migration. In addition, we examine vSphere 6.5 Encrypted vMotion performance over a high-latency network, such as that in a long distance network. Finally, we describe several best practices to follow when using vSphere 6.5 Encrypted vMotion.

In this blog, we give a brief overview of vSphere 6.5 Encrypted vMotion technology, and some of the performance highlights from the paper.

vCenter Server 6.5 High Availability Performance and Best Practices

High availability (aka HA) services are important in any platform, and VMware vCenter Server® is no exception. As the main administrative and management tool of vSphere, it is a critical element that requires HA. vCenter Server HA (aka VCHA) delivers protection against software and hardware failures with excellent performance for common customer scenarios, as shown in this paper.

Much work has gone into the high availability feature of VMware vCenter Server® 6.5 to ensure that this service and its operations minimally affect the performance of your vCenter Server and vSphere hosts. We thoroughly tested VCHA with a benchmark that simulates common vCenter Server activities in both regular and worst case scenarios. The result is solid data and a comprehensive performance characterization in terms of:

  • Performance of VCHA failover/recovery time objective (RTO): In case of a failure, vCenter Server HA (VCHA) provides failover/RTO such that users can continue with their work in less than 2 minutes through API clients and less than 4 minutes through UI clients. While failover/RTO depends on the vCenter Server configuration and the inventory size, in our tests it is within the target limit, which is 5 minutes.
  • Performance of enabling VCHA: We observed that enabling VCHA would take around 4 – 9 minutes depending on the vCenter Server configuration and the inventory size.
  • VCHA overhead: When VCHA is enabled, there is no significant impact for vCenter Server under typical load conditions. We observed a noticeable but small impact of VCHA when the vCenter Server was under extreme load; however, it is unlikely for customers to generate that much load on the vCenter Server for extended time periods.
  • Performance impact of vCenter Server statistics level: With an increasing statistics level, vCenter Server produces less throughput, as expected. When VCHA is enabled for various statistics levels, we observe a noticeable but small impact of 3% to 9% on throughput.
  • Performance impact of a private network: VCHA is designed to support LAN networks with up to 10 ms latency between VCHA nodes. However, this comes with a performance penalty. We study the performance impact of the private network in detail and provide further guidelines about how to configure VCHA for the best performance.
  • External Platform Services Controller (PSC) vs Embedded PSC: We study VCHA performance comparing these two deployment modes and observe a minimal difference between them.

Throughout the paper, our findings show that vCenter Server HA performs well under a variety of circumstances. In addition to the performance study results, the paper describes the VCHA architecture and includes some useful performance best practices for getting the most from VCHA.

For the full paper, see VMware vCenter Server High Availability Performance and Best Practices.

vSphere 6.5 Update Manager Performance and Best Practices

vSphere Update Manager (VUM) is the patch management tool for VMware vSphere 6.5. IT administrators can use VUM to patch and upgrade ESXi hosts, VMware Tools, virtual hardware, and virtual appliances.

In the vSphere 6.5 release, VUM has been integrated into the vCenter Server appliance (VCSA) for the Linux platform. The integration eliminates remote data transfers between VUM and VCSA, and greatly simplifies the VUM deployment process. As a result, certain data-driven tasks achieve a considerable performance improvement over VUM for the Windows platform, as illustrated in the following figure:

[Figure: VUM operation performance, vCenter Server appliance vs. VUM for Windows]

A new paper presents the performance characteristics of VUM in vSphere 6.5. In particular, the paper describes the following topics:

  • VUM server deployment
  • VUM operations including scan host, scan VM, stage host, remediate host, and remediate VM
  • Remediation concurrency
  • Resource consumption
  • Running VUM operations with vCenter Server provisioning operations

The paper also offers a number of performance tips and best practices for using VUM during patch maintenance. For the full details, read vSphere Update Manager Performance and Best Practices.

Whitepaper on vSphere Virtual Machine Encryption Performance

vSphere 6.5 introduces a feature called vSphere VM encryption.  When this feature is enabled for a VM, vSphere protects the VM data by encrypting all its contents.  Encryption is done both for already existing data and for newly written data. Whenever the VM data is read, it is decrypted within ESXi before being served to the VM.  Because of this, vSphere VM encryption can have a performance impact on application I/O and the ESXi host CPU usage.

We have published a whitepaper, VMware vSphere Virtual Machine Encryption Performance, to quantify this performance impact.  We focus on synthetic I/O performance on VMs, as well as VM provisioning operations like clone, snapshot creation, and power on.  From analysis of our experiment results, we see that while VM encryption consumes more CPU resources for encryption and decryption, its impact on I/O performance is minimal when using enterprise-class SSD or VMware vSAN storage.  However, when using ultra-high performance storage like locally attached NVMe drives capable of handling up to 750,000 IOPS, the minor increase in per-I/O latency due to encryption or decryption adds up quickly to have an impact on IOPS.
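
As a back-of-the-envelope way to see why this happens, Little's Law (IOPS ≈ outstanding I/Os ÷ per-I/O latency) shows that a fixed per-I/O encryption cost is negligible at typical SSD latencies but becomes a sizable fraction of the much smaller latency of a local NVMe device. The numbers below are hypothetical and are not measurements from the paper.

# Back-of-the-envelope only (hypothetical numbers, not results from the paper):
# with a fixed number of outstanding I/Os, IOPS ~= outstanding / latency, so a
# small fixed crypto cost per I/O barely matters at SSD latencies but becomes
# visible on an ultra-low-latency NVMe device.
def iops(outstanding_ios, latency_us):
    return outstanding_ios * 1_000_000 / latency_us

def slowdown_pct(base_latency_us, crypto_overhead_us, outstanding_ios=32):
    base = iops(outstanding_ios, base_latency_us)
    enc = iops(outstanding_ios, base_latency_us + crypto_overhead_us)
    return 100.0 * (base - enc) / base

# Hypothetical: 10 us of encryption work per I/O.
print(f"SAS/SATA SSD @ 500 us: {slowdown_pct(500, 10):.1f}% fewer IOPS")  # ~2%
print(f"Local NVMe   @  40 us: {slowdown_pct(40, 10):.1f}% fewer IOPS")   # ~20%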

For more detailed information and data, please refer to the whitepaper.

vSphere 6.5 DRS Performance – A New White Paper

VMware recently announced the general availability of vSphere 6.5. Among the many new features in this release are some DRS-specific ones, such as predictive DRS and network-aware DRS. In vSphere 6.5, DRS also comes with a host of performance improvements, such as all-new VM initial placement and a faster, more effective maintenance mode operation.

To learn more, we have published a new white paper on the new features and performance improvements of DRS in vSphere 6.5. Here are some highlights from the paper:

[Figures: DRS performance highlights from the white paper]

Expandable Reservation for Resource Pools

One of the questions I am often asked about resource pools (RP) concerns 'expandable reservation': what is it, and why should I care about it? Although it sounds intuitive, it is easily misunderstood.

To put it simply, a resource pool with 'expandable reservation' can expand its reservation by requesting more resources from its parent.

The need to expand the reservation comes from an increase in the reservation demand of the pool's child objects (VMs or resource pools). If the parent resource pool is itself short of resources, it expands its own reservation by requesting resources from the grandparent.

Let us try to understand this with a simple example. Consider the following RP hierarchy. If RP-4 has to expand its reservation, it requests resources from its parent, RP-3; if RP-3 in turn has to expand its reservation, it eventually requests resources from the Root-RP.

[Figure: Example resource pool hierarchy with Root-RP, RP-3, and RP-4]
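
To make the flow concrete, here is a small Python sketch that models the hierarchy above; the pool names follow the example, but the capacity numbers and the simplified ResourcePool class are made up for illustration. When RP-4 cannot satisfy a reservation request from its own unreserved capacity, it expands by recursively asking RP-3, which may in turn ask the Root-RP.

# Simplified model of 'expandable reservation' (capacity numbers are made up).
# When a pool can't satisfy a child's reservation request from its own unreserved
# capacity, it expands by recursively requesting the shortfall from its parent.
class ResourcePool:
    def __init__(self, name, reservation, parent=None, expandable=True):
        self.name = name
        self.reservation = reservation   # e.g., MHz reserved at this pool
        self.used = 0                    # reservation already handed to children
        self.parent = parent
        self.expandable = expandable

    def reserve(self, amount):
        available = self.reservation - self.used
        shortfall = amount - available
        if shortfall > 0:
            if not (self.expandable and self.parent
                    and self.parent.reserve(shortfall)):
                return False               # reservation request fails
            self.reservation += shortfall  # expanded using the parent's resources
        self.used += amount
        return True

root = ResourcePool("Root-RP", reservation=10000)
rp3 = ResourcePool("RP-3", reservation=1000, parent=root)
rp4 = ResourcePool("RP-4", reservation=500, parent=rp3)

# A VM under RP-4 requests a 2000 MHz reservation: RP-4 expands via RP-3,
# and RP-3 in turn expands via the Root-RP.
print(rp4.reserve(2000))   # True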
