
App Volumes Performance and Scalability Testing – Preparing to Test

By Tristan Todd, Architect, End-User Computing, VMware

In advance of the next major release of VMware App Volumes, VMware is undertaking a performance and scalability reference architecture (RA) project. This project is aimed at helping customers and partners understand the App Volumes product in a production configuration, where technical planning helps deliver consistent performance and predictable scalability.

As part of this exciting project, we are doing something different. We will post a three-part blog series that offers a before, during, and after view into our reference architecture lab. In this first blog post, we are going to talk about the planned approach to testing and address some of the key areas of focus. We will also describe the test environment and test tools that we are using.

What Are We Testing?

App Volumes is a transformative solution that delivers applications to View virtual desktops in Horizon 6. Applications installed on multi-user AppStacks or user-specific writable volumes attach instantly to a desktop at login or at application launch. The user experience is as if the applications were natively installed on the desktop.

These are the major test hypotheses that we are examining:

  • Applications delivered by App Volumes provide a user experience similar to native apps, with only modest increases in vSphere and Horizon 6 VDI (View) host load.
  • App Volumes introduces new storage performance characteristics and new capacity consumption patterns.
  • Adjusting the number of users per AppStack and apps per AppStack can affect performance.
  • App Volumes shows linear performance scaling from 0 to 2000 active users.

We are also exploring the following topical questions as they apply to performance and scalability in production environments:

  • How do native apps perform compared to applications delivered by AppStacks with regard to user experience metrics?
  • Do vSphere hosts show the same resource loading with either natively installed applications or applications in App Volumes containers?
  • How do storage capacity consumption and I/O compare between native application installations and applications delivered by App Volumes?
  • What vSphere, App Volumes Manager, vCenter, and SQL resources are required for 2000 active users?
  • Should a single AppStack be limited to 500 users, 1000 users, or 2000 users? Is there an optimal number of users or a limit?
  • When does it make sense to bundle multiple apps into a single AppStack? When does it make sense to spread the number of apps over multiple AppStacks? Is there a performance difference?

Test Tools

We will use the excellent Login Virtual Session Indexer (Login VSI) tool for this project. Login VSI is the industry-standard benchmarking tool for measuring the performance and scalability of centralized desktop environments such as Virtual Desktop Infrastructure (VDI) and server-based computing (SBC). We are using Login VSI to generate a reproducible, real-world test case that simulates the execution of various applications, including Microsoft Internet Explorer, Adobe Flash video, and Microsoft Office applications, to determine how many virtual desktop sessions each configuration can support.

We will be testing natively installed applications on hosted desktops as compared to the following AppStack configurations:


We will use vRealize Operations for Horizon as the foundational tool for vSphere and View performance monitoring and reporting. In a testing environment, it is invaluable for both real-time and historical statistics. Its custom dashboards and performance-data export options help us gather and publish the most meaningful performance statistics.

Test Environment

Our test environment is built on the latest shipping versions of vSphere and Horizon 6. Our vSphere hosts with VMFS datastores will be deployed according to the layout shown in the following diagram:


The environment will be segregated according to best practices as described in the following diagram:


We are planning a basic Horizon 6.2 configuration similar to the following diagram:


We will be using the most common benchmark workload in Login VSI, the Knowledge Worker workload, with each desktop running 64-bit Windows 7 with 2 vCPUs and 2 GB of vRAM. During App Volumes testing, each desktop will have one writable volume and at least one AppStack attached. Finally, each desktop will be optimized with the VMware OS Optimization Tool.

We will be executing four major test phases, comparing View desktops with six natively installed applications to View desktops with the same six applications delivered by AppStacks in three different configurations. We will also compare results at 500, 1000, 1500, and 2000 desktops.
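One way to picture the resulting test plan is as a matrix of desktop configurations crossed with scale points. The sketch below is purely illustrative: the three AppStack layouts shown (one stack of six apps, two stacks of three, six stacks of one) are assumed examples, not the configurations VMware has committed to testing.

```python
# Illustrative sketch of the test matrix described above.
# The AppStack layout labels are assumptions for illustration only.
from itertools import product

# Four desktop configurations: native installs plus three assumed AppStack layouts.
configurations = [
    "native (6 apps installed in the base image)",
    "AppStacks: 1 stack x 6 apps",   # assumed layout
    "AppStacks: 2 stacks x 3 apps",  # assumed layout
    "AppStacks: 6 stacks x 1 app",   # assumed layout
]

# Scale points stated in the article.
desktop_counts = [500, 1000, 1500, 2000]

# Each (configuration, scale) pair corresponds to one Login VSI test run.
test_runs = list(product(configurations, desktop_counts))

for config, count in test_runs:
    print(f"{count:>5} desktops | {config}")

print(f"Total runs: {len(test_runs)}")
```

Enumerating the matrix this way makes the scope explicit: four configurations at four scale points yields sixteen benchmark runs to compare.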


This should give us enough performance data and insight to address our hypotheses and the areas of discovery described at the beginning of this article.


You should now have a better understanding of what we are going to test. The next article in this series will describe how the testing is progressing. The final article in this series will include our overall findings and a link to the reference architecture.