
How to correctly test the performance of Virtual SAN 6.2 deduplication feature

In VMware Virtual SAN 6.2, we introduced several features highly requested by customers, such as deduplication and compression. An overview of these features can be found in the blog post Virtual SAN 6.2 – Deduplication And Compression Deep Dive.

The deduplication feature adds the most benefit in an all-flash Virtual SAN environment: although SSDs are more expensive than spinning disks, the cost is amortized because more workloads can fit on the smaller SSDs. Therefore, our performance testing is performed on an all-flash Virtual SAN cluster with deduplication enabled.

When testing the performance of the deduplication feature for Virtual SAN, we observed the following:

  • Unexpected deduplication ratio
  • High device read latency in the capacity tier, even though the SSD is perfectly fine

In this blog, we discuss the reason behind these two issues and share our testing experience.

When we tested the performance of Virtual SAN, we decided to use two common tools, IOBlazer and Iometer:

  1. We used IOBlazer to populate the disks. We configured IOBlazer to run 100% large sequential writes. This was to make sure all the blocks were allocated before testing any read-related workload. Some people prefer to zero out all the blocks using the dd command, which has a similar effect.
  2. We then ran an Iometer workload. We set the read percentage, randomness, I/O size, number of outstanding I/Os, and so on.

We found, however, that there were two issues with the above procedure when testing the deduplication feature:

  • Iometer did not support configuring I/O content. In other words, we could not use Iometer to generate I/Os with various deduplication ratios in step 2.
  • We should not have populated the disks using IOBlazer or dd in step 1 because each utility pollutes the disks with random data or zeros, both of which yielded the wrong deduplication ratio for later tests.

To address these issues, we decided to use the Flexible I/O (FIO) benchmark to both populate the disks and run the tests. FIO allowed us to specify the deduplication ratio. By following these steps, we were able to successfully test the deduplication feature in Virtual SAN 6.2:

  1. Run FIO with 100% 4KB sequential write with the given deduplication and compression ratio. This will populate the disks with the desired deduplication and compression ratio.
  2. Run FIO with the specified read/write percentage, I/O size, randomness, number of outstanding I/Os, and deduplication and compression ratio.

Below is a sample configuration file for FIO. We modified the parameters for different tests.

[global]
ioengine=libaio; async I/O engine for Linux
direct=1
thread ; use thread rather than process
group_reporting
; Test name: 4K_rd70_rand100_dedup0_compr0
runtime=3600
time_based
readwrite=randrw
iodepth=8
rwmixread=70
blocksize=4096
randrepeat=0
blockalign=4096
buffer_compress_percentage=0
dedupe_percentage=0

[job 1]
filename=/dev/sdb
filesize=25G
[job 2]
filename=/dev/sdc
filesize=25G
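
For repeatable runs, the two phases can be scripted so that the preparation job always finishes before the measurement job starts. Below is a minimal sketch of such a driver; it assumes fio is installed and on the PATH, and the job file names (prepare.fio and measure.fio) are hypothetical placeholders for job files built from the template above.

# Minimal sketch: run the two-phase FIO procedure described above.
# Assumptions: fio is on the PATH; prepare.fio and measure.fio are
# hypothetical job files -- prepare.fio performs the 100% 4KB sequential
# write pass with the desired dedupe/compress ratios, and measure.fio runs
# the mixed read/write workload with the same ratios.
import subprocess

JOB_FILES = ["prepare.fio", "measure.fio"]  # step 1, then step 2

for job in JOB_FILES:
    # Store per-phase results as JSON next to the job file.
    subprocess.run(
        ["fio", "--output-format=json", "--output", job + ".json", job],
        check=True,
    )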

If steps 1 and 2 are not performed properly, the results can be unexpected. To illustrate this, we describe two issues we encountered as examples.

Issue #1: The SSD showed high read latency, but the SSD hardware had no issues

We observed a high device read latency issue with FIO micro-benchmarks. The high read latency occurred because we were issuing a large amount of concurrent I/O (outstanding I/O, also known as OIO) to the same Logical Block Address (LBA), or a small range of LBAs, on the SSD. This is more likely to happen with any type of deduplication solution, regardless of the storage vendor.

To resolve this issue, we first performed a test to learn the behavior of the SSD device. The results below show the read latency to one address with an increasing number of outstanding I/Os.

4KB read from the same LBA:
1 OIO Latency: 0.12 ms
16 OIOs Latency: 1.51 ms
32 OIOs Latency: 3.06 ms
64 OIOs Latency: 6.07 ms
128 OIOs Latency: 12.08 ms
256 OIOs Latency: 12.68 ms

When we issued multiple OIOs to a single 4KB block, those I/Os were serialized onto the single channel inside the SSD device that served that offset. In other words, we lost the benefit of the SSD’s internal parallelism (from multiple channels). The device latency rose as we increased the number of OIOs. A high OIO count to the same LBA (or a small range of LBAs) caused high device read latency.
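
A rough way to sanity-check this behavior is a simple serialization model: if every read to the same LBA is served by one channel, latency should grow roughly linearly with the number of OIOs. The sketch below compares such a model against the measurements above; the per-I/O service time is an assumed constant fitted to the low-OIO points, and the model stops matching at 256 OIOs, where the device appears to cap the queueing effect.

# Simplified model: reads to one LBA are served by a single channel, so
# latency ~= outstanding I/Os * per-I/O service time (no parallelism).
measured_ms = {1: 0.12, 16: 1.51, 32: 3.06, 64: 6.07, 128: 12.08, 256: 12.68}
service_time_ms = 0.095  # assumed per-I/O service time on that channel

for oio, observed in measured_ms.items():
    predicted = oio * service_time_ms
    print(f"{oio:>3} OIOs: predicted {predicted:6.2f} ms, observed {observed:6.2f} ms")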

In the extreme case where we prepared the disk by zeroing out all the blocks, all the data was deduplicated to one block. As a result, the subsequent read I/Os were issued to the same device address, which caused high device read latency, as discussed above.

Figures 1 and 2 (stats from the Virtual SAN Observer tool) show sample results from our test. (Even though the screenshots show HDDs, our test was on an all-flash Virtual SAN cluster; the HDDs in the graph actually mean SSDs used as capacity tier devices.) As can be seen, inside one disk group, one capacity tier SSD (naa.55cd2e404ba2ce71 in Figure 1 or naa.55cd2e404ba535b7 in Figure 2) consistently shows much higher read latency than the other capacity tier SSDs. This is because we zeroed out the data blocks before running the test. Later in the test, a large number of outstanding read I/Os were issued to a single address on that SSD.

Note: In Figures 1 and 2, where there are no units specified, the unit is milliseconds. Where an “m” is specified, the unit is microseconds. Where “k” is specified, the unit is thousands.

vsan-dedup-fig1

Figure 1. Sample test 1 showed the first capacity-tier SSD (“HDD”)  to have up to 3 milliseconds latency, which is much higher than the next two capacity-tier SSDs (also labelled “HDD”), which show just below 150 microseconds of latency.

vsan-dedup-fig2

Figure 2. Sample test 2 is similar to sample test 1. The bottom capacity-tier SSD (“HDD”) shows up to 6 milliseconds of latency, whereas the first two show slightly over 100 microseconds.

Issue #2: Deduplication ratio was not what we set

Because Virtual SAN distributes multiple virtual disks across its datastore, it is hard to determine the exact deduplication ratio of the data that the workload generated. In the FIO configuration file, we changed the dedupe_percentage to a desired value. However, in the testing system, there were a couple of factors that affected the actual deduplication ratio reported by Virtual SAN.

  • I/Os issued to other virtual disks (vmdk files) can contain duplicate data. In the FIO configuration file, if the randrepeat parameter is set to 1, FIO will use the same random seed for all the disks. Although the data pattern obeys the dedupe_percentage set by the user for each vmdk, there will be a large amount of duplicate data across vmdks. Because those vmdks are placed on the same Virtual SAN datastore, the datastore will see more duplicated data than specified.
  • The I/O size used when preparing disks will affect the deduplication ratio. Currently, Virtual SAN uses a 4KB chunk size as the unit to calculate the deduplication ratio. If the user prepares the disk with a non-4KB I/O size, Virtual SAN could see a different deduplication ratio. Likewise, if the I/O is not aligned to 4KB (the blockalign parameter), Virtual SAN could also observe a different deduplication ratio. (A simple sketch of this chunk-based calculation follows this list.)
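
The sketch below approximates a chunk-based deduplication ratio the way one might estimate it offline: hash every 4KB chunk and count how many are unique. It is an illustration of the effects described above, not Virtual SAN's actual accounting: a zero-filled preparation pass collapses to a single unique chunk, and reusing one random seed across two vmdks makes every chunk appear twice at the datastore level.

import hashlib
import os

CHUNK = 4096  # Virtual SAN calculates its ratio over 4KB chunks


def dedup_fraction(disks):
    """Fraction of 4KB chunks across all disks that deduplicate away."""
    total, unique = 0, set()
    for data in disks:
        for off in range(0, len(data), CHUNK):
            total += 1
            unique.add(hashlib.sha1(data[off:off + CHUNK]).digest())
    return 1 - len(unique) / total


# Zero-filled preparation: every chunk is identical, ratio approaches 100%.
zeros = bytes(CHUNK * 1024)
print(f"zero-filled disk:        {dedup_fraction([zeros]):.0%}")

# Same random pattern written to two vmdks (same seed): 0% within each disk,
# but ~50% across the shared datastore because each chunk appears twice.
pattern = os.urandom(CHUNK * 1024)
print(f"same seed on two vmdks:  {dedup_fraction([pattern, pattern]):.0%}")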

Figure 3 (below) shows a sample test in which we ran FIO with a 0% deduplication ratio. Due to the issues described above, Virtual SAN reports a deduplication ratio of about 80% (shown in blue).

vsan-dedup-fig3

Figure 3. The blue line shows a deduplication percentage of about 80%, even though we set deduplication to be 0%.

To avoid these problems, we suggest performance testers prepare the disk using a 4KB, 4KB-aligned I/O size and set randrepeat to 0 in order to get the desired deduplication ratio. Note that Virtual SAN can properly handle any type of I/O configuration. The purpose of this blog is to explain the possible discrepancy between the FIO-specified dedupe_percentage and the Virtual SAN-reported deduplication ratio if performance testers use different I/O configurations to evaluate the Virtual SAN datastore.

vsan-dedup-fig4

Figure 4. No blue line is shown, indicating the correct deduplication of 0%.

 

 

DRS Doctor is here to diagnose your DRS clusters

Mystery revealed: DRS for VMware vSphere is no longer a black box! DRS Doctor will tell you all you need to know about your DRS clusters.

Our latest fling, DRS Doctor, will monitor your DRS clusters for virtual machine and host resource usage data, DRS-recommended migrations, and the reason behind each migration. It also monitors all the cluster-related events, tasks, and cluster balance, and logs all this information into a plain text log file that anyone can read.

Read this blog for more information on how DRS Doctor can monitor and diagnose your clusters.

Download DRS Doctor from our flings site.

Virtual SAN 6.2 Performance with OLTP and VDI Workloads

Virtual SAN is a VMware storage solution that is tightly integrated with vSphere—making storage setup and maintenance in a vSphere virtualized environment fast and flexible. Virtual SAN 6.2 adds several features and improvements, including additional data integrity with software checksum, space efficiency features of RAID-5 and RAID-6, deduplication and compression, and an in-memory client read cache.

We ran several tests to compare the performance of Virtual SAN 6.1 and 6.2 to make sure they were on par with each other. In addition, we wanted to know how new feature performance compared to a 6.2 baseline with no new features enabled. The tests used benchmark workloads that simulate real-world activities in online stores and brokerage firms (online transaction processing, or OLTP) and in a virtual desktop infrastructure (VDI) environment. We published the test results in separate papers for the OLTP and VDI workloads.

One such test used a workload that simulated typical user actions in an online brokerage application. The following graphs show the virtual machine IOPS per host and the disk space usage savings for the Brokerage workload.

vm-iops-disk-saving

6.2 R5 (RAID-5 configured) maintains almost the same IOPS per host as 6.2 but brings down the space usage in the cluster from about 3200GiB to 2280GiB—a 29% space saving. 6.2 D, which has deduplication and compression enabled, brings IOPS down by 25% to about 30,000, but saves 88% of the disk space, taking only 373GiB on the disks. In the 6.2 R5+D case, where RAID-5 is used together with deduplication and compression, IOPS drops further by 11% to 25,000. However, the disk space saving is 92%, using only 261GiB of space in the cluster.
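
The space-saving percentages above follow directly from the reported cluster space usage; the short sketch below simply reproduces that arithmetic against the roughly 3200GiB baseline quoted in the text.

# Space savings relative to the 6.2 baseline of about 3200GiB used.
baseline_gib = 3200
used_gib = {"6.2 R5": 2280, "6.2 D": 373, "6.2 R5+D": 261}

for config, used in used_gib.items():
    saving = 1 - used / baseline_gib
    print(f"{config}: {saving:.0%} space saving")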

Please note that the substantial space saving is observed because a single Brokerage workload virtual machine contains duplicable and compressible data that can be reduced significantly by the deduplication and compression feature in Virtual SAN 6.2. (The actual space saving in your production environment will depend on the workload.)

Read more test results and find more information about test configuration in the published papers for OLTP and VDI workloads.

Peeking At The Future with Giant Monster Virtual Machines

Remember that cool project with VMware, HP Enterprise, and IBM where four super huge monster virtual machines (VMs) of 120 vCPUs each were all running at the same time on a single server with great performance? 

That was Project Capstone, and it was presented at VMworld San Francisco and VMworld Barcelona last fall as a spotlight session.  The follow-up whitepaper is now completed and published,  which means that there are lots of great technical details available with testing results and analysis. 

In addition to the four 120 vCPU VMs test, additional configurations were also run with eight 60 vCPU VMs and sixteen 30 vCPU VMs.  This shows that plenty of large VMs can be run on a single host with excellent performance when using a solution that supports tons of CPU capacity and cutting edge flash storage.

The whitepaper not only contains all of the test results from the original presentation, but also includes additional details about the performance of CPU affinity vs. PreferHT and under-provisioning. There is also a best practices section that is focused on running monster VMs.

 

Tutorial Session on Performance Debugging on VMware vSphere

Ever wondered what it takes to debug performance issues on a VMware stack? How do you figure out if the performance issue is in your virtual machine, or the network layer, or the storage layer, or the hypervisor layer?

Here’s a handy tutorial that showcases a systematic approach to troubleshooting performance on a VMware stack using tools like esxtop, vscsiStats, and net-stats. The tutorial also talks about some very useful optimizations and performance best practices.

Thanks to Ramprasad K. S. for putting together the slides based on his vast experience dealing with customer issues. Thanks also to Ramprasad and Sai Inabattini for presenting this at the CMG India 2nd Annual conference in Bangalore in November 2015, which was received very well.

Fault Tolerance Performance in vSphere 6

VMware has published a technical white paper about vSphere 6 Fault Tolerance architecture and performance. The paper describes which types of applications work best in virtual machines with vSphere FT enabled.

VMware vSphere Fault Tolerance (FT) provides continuous availability to virtual machines that require a high amount of uptime. If the virtual machine fails, another virtual machine is ready to take over the job.  vSphere achieves FT by maintaining primary and secondary virtual machines using a new technology named Fast Checkpointing. This technology is similar to Storage vMotion, which copies the virtual machine state (storage, memory, and networking) to the secondary ESXi host. Fast Checkpointing keeps the primary and secondary virtual machines in sync.

vSphere FT works with (and requires) vSphere HA—when an administrator enables FT, vSphere HA selects the secondary VM (admins can vMotion the VM to another server if needed). vSphere HA also creates a new secondary if the primary fails—the original secondary becomes the new primary, and vSphere HA selects an available virtual machine to use as the new secondary.

vSphere 6 FT supports applications with up to 4 vCPUs and 64GB memory on the ESXi host. The performance study shows results for various workloads run on virtual machines with 1, 2, and 4 vCPUs.

The workloads—which tax the virtual machine’s CPU, disk, and network—include:

  • Kernel compile – loads the CPU at 100%
  • Netperf – measures network throughput and latency
  • Iometer – characterizes the storage I/O of a Microsoft Windows virtual machine
  • Swingbench – drives an OLTP load on a virtual machine running Oracle 11g
  • DVD Store – drives an OLTP load on a virtual machine running Microsoft SQL Server 2012
  • A brokerage workload – simulates an OLTP load of a brokerage firm
  • vCenter Server workload – simulates actions performed in vCenter Server

Testing shows that vSphere FT can successfully protect a wide range of workloads, including CPU-bound workloads, I/O-bound workloads, servers, and complex database workloads; however, admins should not use vSphere FT to protect highly latency-sensitive applications like voice-over-IP (VoIP) or high-frequency trading (HFT).

For the results of these tests, read the paper. Also useful is the VMware Fault Tolerance FAQ.

Virtualizing Performance-Critical Database Applications in VMware vSphere 6.0

by Priti Mishra

Performance studies have previously shown, beyond doubt, that virtualized servers can run a variety of applications at performance levels near, or in some cases even above, those of software running natively (on bare metal). In a new white paper, we raise the bar higher and test “monster” vSphere virtual machines loaded with vCPUs and running the most taxing databases and transaction processing applications.

The benchmark workload, which we call Order-Entry, is based on an industry-standard online transaction processing (OLTP) benchmark called TPC-C. Both rigorous and demanding, the Order-Entry workload pushes virtual machine performance.

Note: The Order Entry benchmark is derived from the TPC-C workload, but is not compliant with the TPC-C specification, and its results are not comparable to TPC-C results.

The white paper quantifies the:

  • Performance differential between ESXi 6.0 and native
  • Performance differential between ESXi 6.0 and ESXi 5.1
  • Performance gains due to enhancements built into ESXi 6.0

Results from these experiments show that even the most demanding applications can be run, with excellent performance, in a virtualized environment with ESXi 6.0. For example, our test results show that ESXi 6.0 virtual machines run out of the box at 90% of the performance of native systems. In addition, a 64-vCPU, 475GB VM processes 59.5K DBMS transactions per second while issuing 155K IOPS, capabilities well above those of even high-end Oracle database installations. Even for applications that may require 64 or 128 vCPUs, the high-end performance boost of ESXi 6.0 over ESXi 5.1 makes ESXi 6.0 the best platform for virtualizing databases such as Oracle.

ESXi 6.0 Performance Relative to Native

With a 64-vCPU VM running on a 72-pCPU ESXi host, throughput was 90% of native throughput on the same hardware platform. Statistics which give an indication of the load placed on the system in the native and virtual machine configurations are summarized in Table 1.

Metric                                        Native              VM
Throughput in transactions per second         66.5K               59.5K
Average CPU utilization of 72 logical CPUs    84.7%               85.1%
Disk IOPS                                     173K                155K
Disk Megabytes/second                         929 MB/s            831 MB/s
Network packets/second                        71K/s receive       63K/s receive
                                              71K/s send          64K/s send
Network Megabytes/second                      15 MB/s receive     13 MB/s receive
                                              36 MB/s send        32 MB/s send

Table 1. Comparison of Native and Virtual Machine Benchmark Load Profiles
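
For readers who want the virtual-to-native ratios spelled out, the small sketch below computes them from the Table 1 numbers; it adds nothing new and simply restates the roughly 90%-of-native relationship per metric.

# VM-to-native ratios computed from the Table 1 load profile.
native = {"throughput (tps)": 66_500, "disk IOPS": 173_000, "disk MB/s": 929}
vm = {"throughput (tps)": 59_500, "disk IOPS": 155_000, "disk MB/s": 831}

for metric, native_value in native.items():
    ratio = vm[metric] / native_value
    print(f"{metric}: VM runs at {ratio:.1%} of native")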

 

The corresponding guest statistics in Table 2 provide another perspective on the resource-intensive nature of the workload. These common Linux performance metrics show that while the benchmark workload was heavy in terms of raw CPU demands, it also placed a heavy load on the operating system, interrupt handling, and the storage subsystem, areas that have traditionally been associated with high virtualization overheads.

 

Metric                        Amount
Interrupts per second         327K
Disk IOPS                     155K
Context switches per second   287K
Load average                  231

Table 2. Guest OS Statistics

ESXi 6.0 Performance Relative to ESXi 5.1

Experimental data comparing ESXi 6.0 with ESXi 5.1 (see Figures 1 and 2) show that high-end scale-up with ESXi 6.0 mirrors that of native systems.

fig1-dbapps-perf

Figure 1. Absolute throughput values

With ESXi 5.1, the Order-Entry benchmark throughput of a 64-vCPU VM on a 4-socket, 32 core/64 thread E7-4870 (Westmere) server was 70% of the throughput of the same server in native mode when both servers were running at 77% CPU utilization (the native server reached a maximum CPU utilization of 88% and throughput of 54.8 transactions per second).

fig2-dbapps-perf

Figure 2. Relative throughput ratios

vSphere has the capability to handle loads far larger than those demanded by most Oracle database applications in production. Support for monster VMs with up to 128 vCPUs, throughput that is 90% of native, and a significant performance boost over ESXi 5.1 make ESXi 6.0 an excellent platform for virtualizing very high-end Oracle databases.

For details regarding experiments and the performance enhancements in vSphere, please read the paper.

VMware vCloud Air Database Performance Scalability with SQL Server

Previous posts have shown vSphere can easily handle running Microsoft SQL Server on four-socket servers with large numbers of cores—with vSphere 5.5 on Westmere-EX and more recently with vSphere 6 on Ivy Bridge-EX.  We recently ran similar tests on vCloud Air to measure how these enterprise databases with mission critical performance requirements perform in a cloud environment. The tests show that SQL Server databases scale very well on vCloud Air with a variety of virtual machine (VM) counts and virtual CPU (vCPU) sizes.

The benchmark tests were run with vCloud Air using their Virtual Private Cloud (VPC) subscription-based service.  This is a very compelling hybrid cloud service that allows for an on-premises vSphere infrastructure to be expanded into the public cloud in a secure and scalable way. The underlying host hardware consisted of two 8-core CPUs for a total of 16 physical cores, which meant that the maximum number of vCPUs was 16 (although additional processors were available via Hyper-Threading, they were not utilized).

Windows Server 2012 R2 was the guest OS, and SQL Server 2012 Standard edition was the database engine used for all the VMs.  All databases were placed on an SSD Accelerated storage tier for maximum disk I/O performance.  The test configurations are summarized below:

# VMs, # vCPUs, Memory configurations tested

DVD Store 2.1 (an open-source OLTP database stress tool) was the workload used to stress the VMs.  The first experiment was to scale up the number of 4 vCPU VMs.  The graph below shows that as the number of VMs is increased from 1 to 4, the aggregate performance (measured in orders per minute, or OPM) increases correspondingly:
vCA_SQL_4vCPU
When the size of each VM was doubled from 4 to 8 virtual CPUs, the OPM also approximately doubles for the same number of VMs, as shown in the chart below.

vCA_SQL_8vCPU

This final chart includes a test run with one large 16 vCPU VM. As expected, the 16 vCPU performance was similar to the four 4 vCPU VM and two 8 vCPU VM test cases. The slight drop can be attributed to spanning multiple physical processors and thus multiple NUMA nodes within a single VM.

vCA_SQL_16vCPU

In summary, SQL Server was found to perform and scale extremely well running on vCloud Air with 4, 8, and 16 vCPU VMs.  In the future, look for more benchmarks in the cloud as it continues to evolve!

For more information on vCloud Air, check out the third-party studies from Principled Technologies that compare it to competitive offerings, namely Microsoft Azure and Amazon Web Services (AWS).

Scaling Performance for VAIO in vSphere 6.0 U1

by Chien-Chia Chen

vSphere APIs for I/O Filtering (VAIO) is a framework that enables third-party software developers to add data services, such as caching and replication, to vSphere. Figure 1 below shows the general architecture of VAIO. Once I/O filter libraries are installed for a virtual disk (VMDK), every I/O request generated from the guest operating system to the VMDK is first intercepted by the VAIO framework at the file device layer. The VAIO framework then hands over the I/O request to the user space I/O filter libraries, where a series of third-party data service operations can be performed against the I/O. After processing the I/O, the user space I/O filter libraries return the I/O to the VAIO framework, which continues the rest of the issuing path. Similarly, upon completion, the I/O is first processed by the user space I/O filter libraries before continuing its original completion path.

There have been questions around the overhead of the VAIO framework due to its extra user-to-kernel communication. In this blog post, we evaluate the performance of vSphere APIs for I/O Filtering using a null I/O filter and demonstrate how VAIO scales with respect to the number of virtual machines and outstanding I/Os (OIOs). The null I/O filter accepts each I/O request and immediately returns it.

fig1-iofilt-arch

Figure 1. vSphere APIs for I/O Filtering Architecture

System Configuration

The configuration of our systems is as follows:

  • One ESXi host
    • Machine: Dell R720 server running vSphere 6.0 Update 1
    • CPU: 16-core, 2-socket (32 hyper-threads) Intel® Xeon® E5-2665 @ 2.4 GHz
    • Memory: 128GB memory
    • Physical Disk: One Intel® S3700 400GB SATA SSD on LSI MegaRAID SAS controller
    • VM: Up to 32 link-cloned I/O Analyzer 1.6.2 VMs (SUSE Linux Enterprise 11 SP2; 1 virtual CPU (VCPU) and 1GB memory each). Each virtual machine has 1 PVSCSI controller hosting two 1GB VMDKs—one has no I/O filter and another has a null filter, both thin-provisioned.
  • Workload: Iometer 4K sequential read (4K-aligned) with various number of OIOs

Methodology

We conduct two sets of tests separately—one against VMDK without an I/O filter (referred to as “default”) and another against the null-filter VMDK (referred to as “iofilter”). In each set of tests, every virtual machine has one Iometer disk worker to generate 4K sequential read I/Os to the VMDK under test. We have a 2-minute warm-up time and measure I/Os per second (IOPS), normalized CPU cost, and read latency over the next 2-minute test duration. The latency is the median of the average read latencies reported by all Iometer workers.
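
The normalized CPU cost metric deserves a brief note. The snippet below shows one common way such a per-1K-IOPS normalization can be computed from host CPU utilization and measured IOPS; it is a sketch of the general idea with made-up example numbers, not the exact formula or data from our harness.

def cpu_cost_per_1k_iops(host_cpu_util_pct, num_pcpus, iops):
    """Assumed normalization: physical-core-equivalents used per 1,000 IOPS."""
    cores_used = host_cpu_util_pct / 100.0 * num_pcpus
    return cores_used / (iops / 1000.0)

# Hypothetical illustration only (these numbers are not from the tests above):
cost = cpu_cost_per_1k_iops(host_cpu_util_pct=25, num_pcpus=32, iops=80_000)
print(f"{cost:.2f} cores per 1K IOPS")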

Note that I/O sizes and access patterns do not affect the performance of VAIO since it does no additional data copying, maintains the original access patterns, and incurs no extra access to physical disks.

Results

VM Scaling

Figures 2 and 3 below show the IOPS, CPU cost per 1K IOPS, and latency with different numbers of virtual machines at 128 OIOs. Except for the single virtual machine test, the results show that VAIO achieves similar IOPS and similar latency compared to the default VMDK. However, VAIO introduces 10%-20% higher CPU overhead per 1K IOPS. The single virtual machine IOPS with iofilter is 80% higher than with the default VMDK. This is because, in the default case, the VCPU performs the majority of the synchronous I/O work, whereas in the iofilter case, VAIO contexts take over a big portion of the work and unblock the VCPU so it can generate more I/Os. With additional VCPUs and Iometer disk workers to mitigate the single-core bottleneck, the default VMDK is also able to drive over 70K IOPS.

fig2a-iofilt
Figure 2. IOPS and CPU Cost vs. Number of VMs (128 Outstanding I/Os)

fig3-iofilt

Figure 3. Iometer Read Latency vs. Number of VMs (128 Outstanding I/Os)

 

OIO Scaling

Figures 4 and 5 below show the IOPS, CPU cost per 1K IOPS, and latency with different numbers of OIOs at 16 virtual machines. A similar trend holds: VAIO achieves the same IOPS and the same latency as the default VMDK while incurring 10%-20% higher CPU overhead per 1K IOPS.

fig4-iofilt

Figure 4. Percent of a Core per 1 Thousand IOPS vs. Outstanding I/Os (16 VMs)

fig5-iofilt

Figure 5. Iometer Read Latency vs. Outstanding I/Os (16 VMs)

Conclusion

Based on our evaluation, VAIO achieves comparable throughput and latency performance at a cost of 10%-20% more CPU cycles. From our experience, when using the VAIO framework, we recommend the following general best practices:

  • Reduce CPU over-commitment. The VAIO framework introduces at least one additional context per VMDK with an active filter. Over-committing CPU can result in intensive CPU contention and thus much worse virtualization efficiency.
  • Avoid blocking when developing I/O filter libraries. Keep in mind that an I/O is blocked until the user space I/O filter finishes processing it, so additional processing time results in higher end-to-end latency.
  • Increase concurrency wisely when developing I/O filter libraries. The user space I/O filter can potentially serve I/Os from all VMDKs, so it is important to tune concurrency to avoid a single-core CPU bottleneck without introducing too many unnecessary active contexts, which would cause higher CPU contention.

 

Dynamic Host-Wide Performance Tuning in VMware vSphere 6.0

by Chien-Chia Chen

Introduction

The networking stack of vSphere is, by default, tuned to balance the tradeoffs between CPU cost and latency to provide good performance across a wide variety of applications. However, there are some cases where using a tunable provides better performance. An example is Web-farm workloads, or any circumstance where a high consolidation ratio (lots of VMs on a single ESXi host) is preferred over extremely low end-to-end latency. VMware vSphere 6.0 introduces the Dynamic Host-Wide Performance Tuning  feature (also known as dense mode), which provides a single configuration option to dynamically optimize individual ESXi hosts for high consolidation scenarios under certain use cases. Later in this blog, we define those use cases. Right now, we take a look at how dense mode works from an internal viewpoint.

Mitigating Virtualization Inefficiency under High Consolidation Scenarios

Figure 1 shows an example of the thread contexts within a high consolidation environment. In addition to the Virtual CPUs (each labeled VCPU) of the VMs, there are per-VM vmkernel threads (the device-emulation threads, labeled “Dev Emu” in the figure) and multiple vmkernel threads for each Physical NIC (PNIC) executing physical device virtualization code and virtual switching code. One major source of virtualization inefficiency is the frequent context switches among all these threads. While context switches occur due to a variety of reasons, the predominant networking-related reason is Virtual NIC (VNIC) Interrupt Coalescing, namely, how frequently the vmkernel interrupts the guest for new receive packets (or vice versa for transmit packets). More frequent interruptions are likely to result in lower per-packet latency while increasing virtualization overhead. At very high consolidation ratios, the overhead from increased interrupts hurts performance.

Dense mode uses two techniques to reduce the number of context switches:

  • The VNIC coalescing scheme will be changed to a less aggressive scheme called static coalescing.
    With static coalescing, a fixed number of requests are delivered in each batch of communication between the Virtual Machine Monitor (VMM) and vmkernel. This, in general, reduces the frequency of communication, thus fewer context switches, resulting in better virtualization efficiency.
  • The device emulation vmkernel thread wakeup opportunities are greatly reduced.
    The device-emulation threads now execute either periodically on a longer timer or when the corresponding VCPUs are halted. This optimization greatly reduces how often the device-emulation threads are woken up, which in turn lowers the frequency of context switches.

fig1-high-cons

Figure 1. High Consolidation Example

Enabling Dense Mode

Dense mode is disabled by default in vSphere 6.0. To enable it, change Net.NetTuneHostMode in the ESXi host’s Advanced System Settings (shown below in Figure 2) to dense.

fig2-dense-mode-ui

Figure 2. Enabling Dynamic Host-Wide Performance Tuning
“default” is disabled; “dense” is enabled

Once dense mode is enabled, the system periodically checks the load of the ESXi host (every 60 seconds by default) based on the following three thresholds:

  • Number of VMs ≥ number of PCPUs
  • Number of VCPUs ≥ 2 * number of PCPUs
  • Total PCPU utilization ≥ 50%

When the system load exceeds all of the above thresholds, these optimizations will be in effect for all regular VMs that carry default settings. When the system load drops below any of the thresholds, those optimizations will be automatically removed from all affected VMs such that the ESXi host performs identically to when dense mode is disabled.
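
As a concrete restatement of the load check described above, the sketch below evaluates the three thresholds for a host. It mirrors the published conditions (all three must hold for the optimizations to stay active) but is not the actual vmkernel implementation.

def dense_mode_optimizations_active(num_vms, num_vcpus, num_pcpus, pcpu_util_pct):
    """All three thresholds must hold for dense mode optimizations to apply."""
    return (num_vms >= num_pcpus
            and num_vcpus >= 2 * num_pcpus
            and pcpu_util_pct >= 50)

# Example: 48 single-VCPU VMs on a 40-PCPU host at 60% utilization do not
# trigger the optimizations because 48 VCPUs < 2 * 40 PCPUs.
print(dense_mode_optimizations_active(48, 48, 40, 60))  # False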

Applicable Workloads

Enabling dense mode can potentially impact performance negatively for some applications. So, before enabling it, carefully profile the applications to determine whether or not the workload will benefit from this feature. Generally speaking, the feature improves the VM consolidation ratio on an ESXi host running medium-network-throughput applications that have some latency tolerance and are CPU bound. A good use case is a Web-farm workload, which needs CPU to process Web requests while generating only a medium level of network traffic and tolerating a few milliseconds of end-to-end latency. In contrast, if the bottleneck is not the CPU, enabling this feature only hurts network latency because of the less frequent context switching. For example, the following workloads are NOT good use cases for the feature:

  • Throughput-intensive workload: Since network is the bottleneck, reducing the CPU cost would not necessarily improve network throughput.
  • Little or no network traffic: If there is too little network traffic, all the dense mode optimizations barely have any effect.
  • Latency-sensitive workload: When running latency-sensitive workloads, another set of optimizations is needed and is documented in the “Deploying Extremely Latency-Sensitive Applications in VMware vSphere 5.5” performance white paper.

Methodology

To evaluate this feature, we implement a lightweight Web benchmark, which has two lightweight clients and a large number of lightweight Web server VMs. The clients send HTTP requests to all Web servers at a given request rate, wait for responses, and report the response time. The request is for static content and it includes multiple text and JPEG files totaling around 100KB in size. The Web server has memory caching enabled and therefore serves all the content from memory. Two different request rates are used in the evaluation:

  1. Medium request rate: 25 requests per second per server
  2. High request rate: 50 requests per second per server

In both cases, the total packet rate on the ESXi host is around 400 Kilo-Packets/Second (KPPS) to 700 KPPS in each direction, where the receiving packet rate is slightly higher than the transmitting packet rate.

System Configuration

We configured our systems as follows:

  • One ESXi host (running Web server VMs)
    • Machine: HP DL580 G7 server running vSphere 6.0
    • CPU: Four 10-core Intel® Xeon® E7-4870 @ 2.4 GHz
    • Memory: 512 GB memory
    • Physical NIC: Two dual-port Intel X520 with a total of three active 10GbE ports
    • Virtual Switching: One virtual distributed switch (vDS) with three 10GbE uplinks using default teaming policy
    • VM: Red Hat Enterprise Linux Server 6.3 assigned one VCPU, 1GB memory, and one VMXNET3 VNIC
  • Two Clients (generating Web requests)
    • Machine: HP DL585 G7 server running Red Hat Enterprise Linux Server 6.3
    • CPU: Four 8-core AMD Opteron™ 6212 @ 2.6 GHz
    • Memory: 128 GB memory
    • Physical NIC: One dual-port Intel X520 with one active 10GbE port on each client

Results

Medium Request Rate

We first present the evaluation results for medium request rate workloads. Figures 3 and 4 below show the 95th-percentile response time and total host CPU utilization, respectively, as the number of VMs increases. For the 95th-percentile response time, we consider 100ms to be the preferred latency tolerance.

Figure 3 shows that at 100ms, default mode consolidates only about 470 Web server VMs, whereas dense mode consolidates more than 510 VMs, an improvement of over 10%. For CPU utilization, we consider 90% to be the desired maximum utilization.

fig3-med-95

Figure 3. Medium Request Rate 95-Percentile Response Time
(Latency Tolerance 100ms)

Figure 4 shows that at 90% utilization, default mode consolidates around 465 Web server VMs, whereas dense mode consolidates about 495 Web server VMs, which is still nearly a 10% improvement in consolidation ratio. We also notice that dense mode, in fact, also reduces response time. This is because the great reduction in context switching improves virtualization efficiency, which compensates for the increase in latency due to more aggressive batching.

fig4-med-90

Figure 4. Medium Request Rate Host Utilization
(Desired Maximum Utilization 90%)

High Request Rate

Figures 5 and 6 below show the 95th-percentile response time and total host CPU utilization, respectively, for a high request rate as the number of VMs increases. Because the request rate is doubled, we reduce the number of Web server VMs consolidated on the ESXi host. Figure 5 first shows that at 100ms response time, dense mode consolidates only about 5% more VMs than default mode (from ~280 VMs to ~290 VMs). However, if we look at the CPU utilization shown in Figure 6, at the 90% desired maximum load, dense mode still consolidates about 10% more VMs (from ~240 VMs to ~260 VMs). Considering both response time and utilization metrics, because there are fewer active contexts under the high request rate workload, the benefit of reducing context switches is less significant than in the medium request rate case.

fig5-high-95

Figure 5. High Request Rate 95-Percentile Response Time
(Latency Tolerance 100ms)

fig6-high-90

Figure 6. High Request Rate Host Utilization
(Desired Maximum Utilization at 90%)

Conclusion

We presented the Dynamic Host-Wide Performance Tuning feature, also known as dense mode. We showed that a Web-farm-like workload achieves up to a 10% higher consolidation ratio while still meeting a 100ms latency tolerance and 90% maximum host utilization. We emphasized that the improvements do not apply to every kind of application. Because of this, you should carefully profile your workloads before enabling dense mode.