

New Large-Scale Horizon View Reference Architecture Paper!

By Tristan Todd, Architect, End-User Computing Technical Enablement, VMware

A team of engineers and architects from VMware, EMC, VCE, and Login VSI recently completed a reference architecture project focused on a large-scale deployment of VMware Horizon View 5.2. The team concentrated on real-world test scenarios, realistic user workloads, and practical infrastructure configurations. We deployed 7,000 virtual desktop users on Horizon View 5.2, running on the VCE Vblock Specialized System for Extreme Applications, composed of Cisco UCS server blades and EMC XtremIO flash storage. This combination delivered world-class operational performance, efficient use of storage, and ease of use, all at an attractive price point.

[Figure: Horizon View at-scale architecture (hosts and clones)]

The VMware Horizon View Large-Scale Reference Architecture paper presents our detailed findings and describes what we built and tested. Does this sound like another boring reference architecture paper? Not this one: we packed the important details up front and made the paper easy to navigate and consume. All system configurations, test parameters, and a detailed bill of materials are included.

Was this architecture simple to implement and manage?

Sizing, deployment, and vSphere integration were all straightforward when preparing the Horizon View environment. With the massive efficiency of data deduplication and high levels of storage IOPS, the platform handled the VDI workload comfortably.

We enjoyed the same ease of use and centralized management capabilities in the Cisco UCS and EMC XtremIO management platforms as with vSphere and Horizon View. It doesn’t take a lot of training or time with documentation to manage this type of environment.

What does high-performance storage really mean for VDI?

Storage IOPS and throughput are popular topics when discussing storage systems for VDI. In terms of user experience, however, storage latency is an even more important metric. The backend compute and storage platforms must be able to handle bursts in user work patterns (often random), periodic IO storms (login storms, application launch storms), and background activities (pool expansions, pool recomposes, desktop redeployments).

[Figure: VMware Horizon View resource usage at large scale]

Login VSI user-experience testing showed that the infrastructure could handle a variety of workload bursts, steady-state periods, and mixed workloads while still delivering average storage latency to the virtual desktops of less than 1 ms. This low latency translates to responsive desktops, snappy applications, and happy users.

What about desktop pool operations efficiency?

[Figure: Linked-clone recompose in VMware Horizon View at large scale]

Increasingly, VDI customers and users are sensitive to the speed and impact of desktop update operations. Desktops and desktop pools require regular OS and application updates, and these updates must not affect users' ability to get work done. For example, a large healthcare customer recently approached VMware and shared an SLA that applies to their VDI environment:

Non-functional requirement:

Hospital clinical desktops must be updated regularly with minimal disruption to healthcare services.

Functional requirement:

2000 bedside desktops in a single hospital must be recomposed each month during a single 4-hour scheduled maintenance window. During the recompose, there must be no measurable impact on background or neighboring systems.

This is a challenging SLA to meet with traditional backend infrastructure. Too often, hosts and storage systems cannot keep up with the workload bursts that accompany a large-scale recompose operation, and recompose performance suffers. However, we demonstrated that we could execute a 2000-seat recompose in 3 hours and 30 minutes with no impact on desktops in the same environment.
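As a quick sanity check on those numbers, the SLA and the observed result can be expressed as average recompose rates (a back-of-the-envelope sketch; the function name is ours, not from the paper):

```python
def recompose_rate(desktops, window_minutes):
    """Average desktops recomposed per minute to finish within the window."""
    return desktops / window_minutes

# SLA: 2,000 desktops inside a 4-hour (240-minute) maintenance window.
sla_rate = recompose_rate(2000, 240)       # ~8.3 desktops/minute required

# Observed: 2,000 desktops in 3 hours 30 minutes (210 minutes).
observed_rate = recompose_rate(2000, 210)  # ~9.5 desktops/minute achieved
```

In other words, the tested system sustained roughly 14 percent more recompose throughput than the SLA minimum, with half an hour of headroom left in the window.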

What about desktop costs?

Our testing was carried out on the pre-release version of the VCE Vblock Specialized System for Extreme Applications. The estimated cost of the tested solution is approximately $500 per desktop for all compute, network, and storage. This is a fantastic price point when you consider the level of performance that each desktop delivers to the user. Add to that the world-class engineering and support delivered by VCE. At this cost, you simply cannot find the same performance, scalability, and ease of administration on any other backend infrastructure.
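Scaled to the full test bed, the per-desktop estimate implies the following rough total (an illustrative calculation only; the paper quotes the per-desktop figure, not a solution total):

```python
cost_per_desktop = 500   # USD estimate: compute, network, and storage only
desktops = 7000          # seats deployed in the reference architecture

total_cost = cost_per_desktop * desktops
print(f"${total_cost:,}")  # $3,500,000 for the backend infrastructure
```

Note that this excludes endpoints and software/OS licensing, which the estimate does not cover.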

So, what made this testing different from previous testing efforts?

We had never challenged our technical marketing labs with testing of this size and scale. We tested 7,000 desktops on 2 EMC XtremIO X-Bricks, 52 Cisco UCS blades, and 8 EMC Isilon storage units. We married the best technologies from VMware, EMC, Cisco, VCE, and Login VSI.
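For a sense of density, the headline numbers work out roughly as follows (our own arithmetic on the figures above, not results reported in the paper):

```python
desktops = 7000
xtremio_bricks = 2
ucs_blades = 52

desktops_per_brick = desktops / xtremio_bricks  # 3,500 desktops per X-Brick
desktops_per_blade = desktops / ucs_blades      # ~135 desktops per UCS blade
```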

[Figure: Horizon View large-scale infrastructure]

Rather than place user data and persona data on the high-performance XtremIO storage, we redirected this data to EMC Isilon NAS storage. With this architecture, we saw great efficiency and storage performance, and reduced overall costs.

This testing was also more challenging than previous reference architectures because we exercised desktop pool operations while significant background workload was running. For example, during our 1000-seat linked-clone pool deployment and recompose tests, 5,000 active Login VSI sessions were running in the background.

For more information on these very exciting test results, see VMware Horizon View Large-Scale Reference Architecture.


Tristan Todd (VMware): John S – Thanks for your question. The cost covers all datacenter components (network, compute, storage) and is based on modest discount levels (that is, "street pricing"). The estimate does not include endpoints or any software/OS licensing costs.
