Tag Archives: storage drs

Storage DRS Performance Improvements in vSphere 6.7

Virtual machine (VM) provisioning operations such as create, clone, and relocate involve the placement of storage resources. Storage DRS (sometimes seen as “SDRS”) is the resource management component in vSphere responsible for optimal storage placement and load balancing recommendations in the datastore cluster.

A key contributor to VM provisioning times in Storage DRS-enabled environments is the time it takes (latency) to receive placement recommendations for the VM disks (VMDKs). This latency particularly comes into play when multiple VM provisioning requests are issued concurrently.
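
For context, the sketch below shows roughly what a placement-recommendation request looks like to a vSphere API client: it builds a StoragePlacementSpec for a clone, asks StorageResourceManager.RecommendDatastores for recommendations, and applies the first one. This is a minimal pyVmomi sketch; the host, credentials, inventory names, and the bare-bones spec are assumptions for illustration, not the workloads used in our tests.

# Minimal pyVmomi sketch (assumed host/credentials, a datastore cluster "pod-01",
# and a source VM "template-vm"); real provisioning workflows build richer specs.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
srm = content.storageResourceManager          # Storage DRS entry point

def find_obj(vimtype, name):
    # Simple inventory lookup by name.
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.DestroyView()

pod = find_obj(vim.StoragePod, "pod-01")                 # datastore cluster managed by Storage DRS
source_vm = find_obj(vim.VirtualMachine, "template-vm")  # assumed source VM

# Ask Storage DRS where to place the clone's VMDKs.
placement_spec = vim.storageDrs.StoragePlacementSpec(
    type="clone",
    vm=source_vm,
    folder=source_vm.parent,
    cloneName="sdrs-clone-01",
    cloneSpec=vim.vm.CloneSpec(location=vim.vm.RelocateSpec(), powerOn=False),
    podSelectionSpec=vim.storageDrs.PodSelectionSpec(storagePod=pod),
)
result = srm.RecommendDatastores(storageSpec=placement_spec)

# Applying a recommendation kicks off the actual clone task.
if result.recommendations:
    srm.ApplyStorageDrsRecommendation_Task(key=[result.recommendations[0].key])

Disconnect(si)

The time such a call takes to return recommendations is the latency discussed above, and it grows when many of these requests arrive at once.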

Several changes were made in vSphere 6.7 to improve the time to generate placement recommendations for provisioning operations. Specifically, the level of parallelism was increased for the case where there are no storage reservations for VMDKs, which yields significant improvements in recommendation times when provisioning requests are issued concurrently.

vRealize Automation suite users who use blueprints to deploy large numbers of VMs quickly will notice the improvement in provisioning times when no reservations are used.

Further performance optimizations were made to key steps of Storage DRS recommendation processing. These improved the time to generate recommendations even for standalone provisioning requests, with or without reservations.

Test Setup and Results

We ran several performance tests to measure the improvement in recommendation times between vSphere 6.5 and vSphere 6.7. These tests ran in our internal lab setup, which consisted of hundreds of VMs and a few thousand VMDKs. The VM operations were as follows (a sketch of the one-operation-per-thread pattern appears after the list):

  1. CreateVM – A single VM per thread is created.
  2. CloneVM – A single clone per thread is created.
  3. ReconfigureVM – A single VM per thread is reconfigured to add an additional VMDK.
  4. RelocateVM – A single VM per thread is relocated to a different datastore.
  5. DatastoreEnterMaintenance – Put a single datastore into maintenance mode. This is a non-concurrent operation.
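
The toy harness below illustrates the one-operation-per-thread pattern at varying concurrency levels. Here, provision_one is a hypothetical placeholder, not our actual test driver; in a real harness it would issue the corresponding vSphere call (for example, the recommendation flow sketched earlier), and the sleep is only a stand-in.

# Toy harness for the one-operation-per-thread pattern (illustrative only).
import time
from concurrent.futures import ThreadPoolExecutor

def provision_one(i):
    start = time.time()
    time.sleep(0.1)                    # stand-in for a real provisioning call
    return time.time() - start         # per-operation latency

for concurrency in (1, 8, 16, 32, 64):
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = list(pool.map(provision_one, range(concurrency)))
    avg = sum(latencies) / len(latencies)
    print("concurrency=%2d  average latency=%.3fs" % (concurrency, avg))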

Shown below are the relative improvements in recommendation times for VM operations at varying concurrencies. The y-axis is capped at 10 to allow better visualization of the relative average recommendation times.

The concurrent VM operations show an improvement of between 20x and 30x in vSphere 6.7 compared to vSphere 6.5.

Below we see the relative average time, across all runs, for the serial operations.

The Datastore Enter Maintenance operation shows an improvement of nearly 14x in vSphere 6.7 compared to vSphere 6.5.

With much faster Storage DRS recommendation times, we expect customers to be able to provision multiple VMs much faster to service their in-house demands. In particular, we expect VMware vRealize Automation suite users to benefit greatly from these improvements.

Performance Implications of Storage I/O Control-Enabled NFS Datastores

Storage I/O Control (SIOC) allows administrators to control the amount of access virtual machines have to the I/O queues on a shared datastore. With this feature, administrators can ensure that a virtual machine running a business-critical application has higher-priority access to the I/O queue than other virtual machines sharing the same datastore. In vSphere 4.1, SIOC was supported on VMFS-based datastores that used SAN storage over iSCSI and Fibre Channel. In vSphere 5, SIOC support has been extended to NFS-based datastores.
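
As a concrete example of the knob involved, the hedged pyVmomi sketch below raises the SIOC disk shares of one VM so it wins a larger fraction of the datastore's I/O queue under contention. The host, credentials, VM name, and share values are assumptions for illustration, not recommendations from the paper.

# Sketch (assumed host/credentials and VM name): give one VM's disks higher SIOC
# priority by editing their storageIOAllocation shares via ReconfigVM_Task.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="secret", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "oltp-db-vm")   # assumed business-critical VM
view.DestroyView()

device_changes = []
for dev in vm.config.hardware.device:
    if isinstance(dev, vim.vm.device.VirtualDisk):
        # Double the default disk shares (1000) so this VM gets a larger slice
        # of the I/O queue when the datastore is congested.
        dev.storageIOAllocation = vim.StorageResourceManager.IOAllocationInfo(
            shares=vim.SharesInfo(level="custom", shares=2000))
        device_changes.append(
            vim.vm.device.VirtualDeviceSpec(operation="edit", device=dev))

vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=device_changes))
Disconnect(si)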

Recent tests conducted in the VMware Performance Engineering lab studied the following aspects of SIOC:

  • The performance impact of SIOC: Fine-grained management of access to the I/O queues resulted in a 10% improvement in the response time of the workload used for the tests.
  • SIOC’s ability to isolate the performance of applications with a smaller request size: Some applications, such as web and media servers, use I/O patterns with a large request size (for example, 32K), while others, such as OLTP databases, issue smaller I/O requests of 8K or less. Test findings show that SIOC helped an OLTP database workload achieve higher performance when it shared the underlying datastore with a workload that used large-sized I/O requests.
  • The intelligent prioritization of I/O resources: SIOC monitors virtual machines’ usage of the I/O queue at the host and dynamically redistributes any unutilized queue slots to the virtual machines that need them. Tests show that this process happens consistently and reliably. A toy model of this redistribution appears below.
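
The redistribution in the last bullet can be pictured as a proportional-share allocation in which queue slots that one VM cannot use are handed to VMs that still have demand. The toy calculation below is only a mental model of that idea, with made-up VM names and share values; it is not the actual SIOC algorithm.

# Toy model of share-proportional queue-slot allocation with redistribution of
# unused slots (illustrative only; not the actual SIOC algorithm).
def allocate_slots(total_slots, vms):
    """vms: {name: {"shares": int, "demand": int}}; returns slots granted per VM."""
    alloc = {name: 0 for name in vms}
    remaining = total_slots
    active = {n for n in vms if vms[n]["demand"] > 0}   # VMs still wanting slots
    while remaining > 0 and active:
        share_sum = sum(vms[n]["shares"] for n in active)
        handed_out = 0
        for name in sorted(active):
            # Proportional entitlement of the not-yet-assigned slots, capped by demand.
            entitlement = max(1, remaining * vms[name]["shares"] // share_sum)
            take = min(entitlement, vms[name]["demand"] - alloc[name], remaining - handed_out)
            alloc[name] += take
            handed_out += take
        remaining -= handed_out
        active = {n for n in active if alloc[n] < vms[n]["demand"]}
        if handed_out == 0:
            break
    return alloc

# Three VMs share a 64-slot device queue; the idle VM's unused entitlement is
# redistributed to the two busy VMs in proportion to their shares.
vms = {
    "oltp-db": {"shares": 2000, "demand": 64},   # small-request OLTP workload
    "media":   {"shares": 1000, "demand": 64},   # large-request streaming workload
    "idle-vm": {"shares": 1000, "demand": 4},    # barely issuing I/O
}
print(allocate_slots(64, vms))   # -> {'oltp-db': 40, 'media': 20, 'idle-vm': 4}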

For the full paper, see Performance Implications of Storage I/O Control–Enabled NFS Datastores in VMware vSphere 5.