

VDI Benchmarking with View Planner 3.0

Recently we announced the general availability of VMware View Planner 3.0 as a VDI benchmark. VMware View Planner is a tool designed to simulate a large-scale deployment of virtualized desktop systems. This is achieved by generating a workload representative of many user-initiated operations that take place in a typical VDI environment. The results allow us to study the effects on an entire virtualized infrastructure. The tool can be downloaded from http://www.vmware.com/products/desktop_virtualization/view-planner/overview.html

In this blog, we present a high-level overview of the View Planner benchmark and some of its use cases. Finally, we present a simple storage scaling use case using a flash memory storage array from Violin Memory, who has partnered with us during the validation of this new benchmark.

With version 3.0, View Planner can be run as a benchmark that helps VMware partners and customers precisely characterize and compare the software and hardware solutions in their VDI environments. Using View Planner’s comprehensive standard methodology, VDI architects can compare and contrast different layers of the VDI stack, including processor architectures; the results can be used to objectively show the performance improvement of a next-generation chipset over the current generation. In addition, various storage solutions such as hybrid, all-flash, and vSAN can be compared, as can different SAN configurations for a given hardware setup.

View Planner 3.0 provides a number of features, which include:

• Application-centric benchmarking of real-world workloads
• Unique and patented client-side performance measurement technology to better understand the end user experience
• High precision scoring methodology for repeatability
• Benchmark metrics to highlight infrastructure efficiencies: density, performance, and economics
• Support for latest VMware vSphere and Horizon View versions
• Better automation and stats reporting for ease of use and performance analysis
• Auto-generated PDF reports providing a summary of the run

View Planner Scoring
The View Planner score is represented as VDImark. This metric encapsulates the number of VDI users that can be run on a given system with application response time less than the set threshold. Hence, the scoring is based on several factors such as the response time of the operations, compliance of the setup and configurations, and so on.

For response time characterization, View Planner operations are divided into three main groups: (1) Group A for interactive operations, (2) Group B for I/O operations, and (3) Group C for background operations. The score is determined separately for Group A user operations and Group B user operations by calculating the 95th percentile latency of all the operations in a group. The default thresholds are 1.0 seconds for Group A and 6.0 seconds for Group B. Please refer to the user guide, and the run and reporting guides for more details.
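
To make the scoring mechanics concrete, below is a minimal sketch in Python (not View Planner's actual implementation) of a 95th percentile check against the Group A and Group B thresholds described above; the operation latencies are made-up sample values.

    import numpy as np

    # Default thresholds described above (seconds)
    GROUP_THRESHOLDS = {"A": 1.0, "B": 6.0}

    def group_p95(latencies_sec, group):
        """Return the 95th percentile latency and whether it meets the group's threshold."""
        p95 = float(np.percentile(latencies_sec, 95))
        return p95, p95 <= GROUP_THRESHOLDS[group]

    # Hypothetical latencies collected from all operations in each group during a run
    group_a_latencies = [0.31, 0.42, 0.55, 0.38, 0.61, 0.47, 0.72, 0.50]
    group_b_latencies = [2.1, 3.4, 2.8, 4.0, 3.1, 2.6, 4.4, 3.0]

    for group, samples in (("A", group_a_latencies), ("B", group_b_latencies)):
        p95, passed = group_p95(samples, group)
        print(f"Group {group}: 95th percentile = {p95:.2f}s -> {'PASS' if passed else 'FAIL'}")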

View Planner Benchmarking Use Cases
As mentioned earlier, the View Planner 3.0 benchmark can be used to benchmark different CPU architectures, hosts, and storage architectures. Using the tool, vendors and partners can scale the number of VMs on a specific processor architecture to find out how many VMs per core can be supported, and the same can be done for different server host systems. In the same way, a storage system can be benchmarked to see how many VMs a given storage configuration can support without a significant increase in I/O latency, and hence without degrading the user experience. The benchmark can also be used to study the impact of different configurations and optimizations in the different layers of both the software and hardware stacks. Next, we look at one such use case of View Planner: storage scaling, with View Planner VMs running on multiple hosts.
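
Before turning to that use case, here is a rough conceptual sketch of how such a density sweep might be scripted around the benchmark. The run_view_planner() function below is a hypothetical stand-in, not part of the View Planner tool; it simply simulates latencies growing with load so the loop can run end to end.

    # Hypothetical stand-in for launching a View Planner run at a given VM count
    # and returning the Group A and Group B 95th percentile latencies (seconds).
    def run_view_planner(num_vms):
        return 0.4 + num_vms / 2000.0, 2.5 + num_vms / 400.0

    def find_max_density(start_vms, step, threshold_a=1.0, threshold_b=6.0):
        """Raise the VM count until either group's latency exceeds its threshold."""
        best, vms = 0, start_vms
        while True:
            p95_a, p95_b = run_view_planner(vms)
            if p95_a > threshold_a or p95_b > threshold_b:
                return best  # the last VM count that stayed within QoS
            best, vms = vms, vms + step

    print(find_max_density(start_vms=100, step=20))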

Use Case Example: Storage Scaling
To illustrate one of the use cases of View Planner 3.0, we look at storage scaling. In this experiment, we scale the number of hosts (3, 5, and 6), with each host running about 90 to 100 VMs, and observe how the Violin storage array scales with the increasing IOPS requirement. We didn’t go beyond 6 hosts because of hardware availability. The experimental setup for this use case is shown below.

[Figure: experimental setup for the storage scaling use case]

The host running the desktop VMs has 16 Intel Xeon E5-2690 cores running at 2.9 GHz. The host has 256GB of physical RAM, which is more than sufficient to run 90-100 1GB Win7 VMs. The host is connected to the Violin storage array through a Fibre Channel host bus adapter (FC HBA).

View Planner QoS

We ran 285 VMs (3 hosts), 480 VMs (5 hosts), and 560 VMs (6 hosts), and collected the View Planner response times; the resulting QoS is shown in the following figure.

In all the runs, the bar chart shows that the Group A and Group B 95th percentile response times stay below their respective thresholds of 1 second and 6 seconds. We also don’t see much variation as the number of hosts increases: the Violin storage array easily copes with the larger number of desktop VMs and services their IOPS requirements even when the number of desktops is roughly doubled. To further illustrate the Group A and Group B results, we show the average response times of the individual operations for these three runs below.

Group A Response Times

As seen in the figure above, the average response times of the most interactive operations are less than one second, which is needed to provide a good end-user experience. Looking at all three runs, we don’t see much variance in the response times; they remained nearly constant as we scaled up.

Group B Response Times

Since Group B is composed of I/O operations, it provides good insight for storage-related experiments. In the bar chart shown above, we see that the latency of operations such as PPTx-Open, Word-Open, and Word-Save didn’t change much as we scaled from 285 VMs (3 hosts) to 560 VMs (6 hosts).

IOPS Requirements

The above chart shows the total IOPS seen by the Violin storage array while the benchmark was executing. (This doesn’t include the IOPS from any management operations such as boot storms, virus scans, and so on.) For the 560 VM run, the total IOPS from all the hosts climbs to about 10K and then tapers down to about 6K in the steady state, so the first iteration has a higher IOPS requirement than the steady state, as expected. We see similar behavior in the 285 VM and 480 VM runs: a peak in the first iteration followed by steady IOPS usage in the steady-state iterations.
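
As a rough illustration (with made-up numbers), the following snippet separates a first-iteration IOPS peak from a steady-state average in an array-level IOPS time series like the one charted above; the sample data and the "second half of the run" cutoff for steady state are assumptions for illustration only.

    # Hypothetical per-interval total IOPS samples across a run
    iops_samples = [4000, 7500, 10000, 9200, 8100, 7000, 6400, 6100, 5900, 6000]

    peak_iops = max(iops_samples)                   # first-iteration peak
    steady = iops_samples[len(iops_samples) // 2:]  # treat the second half as steady state
    steady_avg = sum(steady) / len(steady)

    print(f"Peak IOPS: {peak_iops}, steady-state average: {steady_avg:.0f}")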

While we have presented one simple use case of storage scaling in this blog, View Planner 3.0 can be used for many use cases (CPU scaling, processor architecture comparison, host configurations, and so on) as mentioned earlier.
If you have any questions or want to know more about View Planner, you can reach out to the team at viewplanner-info@vmware.com.

If you are attending VMworld this year, please check out our session on “View Planner 3.0 as a benchmark”. Here are the session details:
TEX5760 – View Planner 3.0 as a VDI Benchmark
Tuesday: 3:30 PM
Banit Agrawal & Rishi Bidarkar
