VMware Horizon

VMware EUC Portfolio: Demystifying VMware View Large Scale Designs

By John Dodge, technical marketing, VMware

VMware View Large Scale Design Series, Episode 1: Core View Infrastructure – the first in a series of posts to enable large scale VMware View designs and deployments. Future posts will look at Security Servers, View Composer, Connection Server component interaction, and more.

With the industry’s largest number of public reference deployments of more than 5,000 seats, VMware View is a proven solution for large-scale environments. That said, we never stop enhancing View to make the jobs of IT admins easier. With the launch of VMware View 5.1 we are making large-scale virtual desktop environments even easier to deploy and manage.

Allow me to introduce myself. My name is John Dodge, and I focus on Technical Enablement as a part of VMware’s End-User Computing Technical Marketing team. A critical part of my role is to ensure that our customers and partners know how to implement and use our products to solve their business problems and create new capabilities — critical activities to cementing IT as a strategic part of the business.

In this post I would like to introduce a topic that is near and dear to my heart: large-scale design for VMware View. In my various roles at VMware I have been fortunate to work with some of VMware’s largest End-User Computing customers. This includes many deployments with tens of thousands of desktops in production.


In VMware View 5.1, we have made several enhancements to help further scale large View deployments, including:

  • Increased scale in NFS attached clusters — now you can scale your VMware vSphere clusters up to 32 ESXi hosts
  • Reduce storage costs with View Storage Accelerator — combine VMware View 5.1 with VMware vSphere 5 and substantially optimize read performance using an in-memory cache of commonly read blocks — totally transparent to the guest OS
  • Standalone View Composer Server — VMware View Composer can now be installed on its own server, opening up several new capabilities

To begin understanding large scale VMware View designs, you need the basic building blocks found in all successful VMware View implementations. The three key building blocks are the View Pod, View Block, and Management Pod. These are logical objects, but they do have some tangible boundaries.

View Pod

I’ll start with the View Pod: a View Pod is a specific instantiation of a cluster of View Connection Servers that replicate an Active Directory Lightweight Directory Services (AD LDS, formerly ADAM) instance as well as a volatile desktop session map, which is replicated within the cluster over the Java Message Service (JMS) message bus. For reasons that will be explored in a future post, the recommended maximum number of Connection Servers in a cluster is seven, and VMware only supports LAN connectivity within the cluster (in other words, no stretching a View cluster over WAN links). We support up to 2,000 concurrent connections per Connection Server, up to a maximum of 10,000 desktops in a View Pod, though in future posts I’ll discuss why you might not want to scale to this maximum.

Figure 1, A View Pod consisting of a cluster of 7 Connection Servers


In large scale deployments, make sure your Connection Servers have at least 10GB of RAM and 4 vCPUs, and ensure the aggregate number of concurrent connections across the cluster does not exceed the supported maximum, leaving enough headroom to take Connection Servers offline for maintenance or to ride out unplanned downtime.
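
To make that headroom rule concrete, here is a minimal back-of-the-envelope sketch in Python. The 2,000-connections-per-server and 10,000-desktops-per-Pod figures come straight from the limits above; the N-1 maintenance reserve is simply one reasonable planning assumption, not a product requirement.

    # Rough View Pod capacity check (planning sketch, not a product calculator).
    MAX_CONNECTIONS_PER_CS = 2000   # supported concurrent connections per Connection Server
    POD_DESKTOP_CEILING = 10000     # supported maximum desktops per View Pod

    def usable_pod_capacity(connection_servers, maintenance_reserve=1):
        """Connections the Pod can carry with some servers held in reserve for maintenance."""
        active = max(connection_servers - maintenance_reserve, 0)
        return min(active * MAX_CONNECTIONS_PER_CS, POD_DESKTOP_CEILING)

    for n in range(3, 8):
        print(f"{n} Connection Servers -> plan for at most "
              f"{usable_pod_capacity(n):,} concurrent connections")

With seven Connection Servers and one held in reserve, the six remaining servers can carry 12,000 connections, so the 10,000-desktop Pod ceiling is the binding limit.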

View Block

VMware View is very tightly integrated with VMware vSphere, which incidentally is one of the key differentiators between VMware View and similar products on the market. VMware View provisions and controls desktops through VMware vCenter. In large scale View designs we recommend using multiple separate vCenter instances, and each vCenter instance demarcates a View Block. I’m including five View Blocks in this example to demonstrate how we could scale up to 10,000 desktops.

Figure 2, View Blocks


Why do we recommend multiple vCenter instances? I am glad you asked!

In large scale VMware View implementations there are bursts of provisioning and power operations that occur when initially deploying a desktop pool, when performing refit operations (refresh, recompose, or rebalance), or when doing advanced orchestration, such as powering off unused desktops overnight to reduce ESXi power consumption by leveraging vSphere Distributed Power Management. Including multiple vCenter instances means these operations can occur in parallel rather than queuing up serially against a single vCenter. This makes for faster deployments and refit operations.
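
To illustrate the point, here is a deliberately crude model of serial versus parallel operation queues. The per-operation time below is a made-up placeholder, not a measured figure; the shape of the result, not the exact numbers, is what matters.

    # Crude model: one vCenter works a single queue; multiple View Blocks work their own queues in parallel.
    PER_OPERATION_SECONDS = 9.0  # hypothetical average time per provisioning/power operation

    def serial_hours(desktops):
        """All operations queue against a single vCenter."""
        return desktops * PER_OPERATION_SECONDS / 3600

    def parallel_hours(desktops, view_blocks):
        """Desktops split evenly across View Blocks, each block working its own queue."""
        per_block = -(-desktops // view_blocks)  # ceiling division
        return per_block * PER_OPERATION_SECONDS / 3600

    desktops = 10000
    print(f"Single vCenter:   ~{serial_hours(desktops):.1f} hours")
    print(f"Five View Blocks: ~{parallel_hours(desktops, 5):.1f} hours")

Under these hypothetical numbers, five View Blocks turn a roughly 25-hour serial job into a roughly 5-hour parallel one.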

Tuning power and provisioning operations is important enough to managing large scale View deployments that it will get a blog post all its own.

Management Block

Next is the Management Block. The Management Block is a separate VMware vSphere cluster that contains the server infrastructure supporting the View infrastructure virtual machines: at a minimum, the Connection Servers, the View Block vCenter Servers, and the database servers those vCenter Servers need, and likely also View Security Servers and View Composer servers (now separate from the vCenter server in View 5.1!).

The Management Block is a best practice for large-scale implementations because server workloads tend to be relatively static compared to the highly volatile workloads of desktops. Separating these workloads ensures that they do not interfere with each other and impact user experience. In addition, vSphere design best practice dictates that you shouldn’t manage a vSphere cluster from a vCenter virtual machine running on an ESXi host that the same vCenter manages, hence the separate cluster. And you thought the plot of Inception was confusing! An illustration will hopefully make this clear.

Figure 3, Management Block


One thing I’m leaving out of this diagram is that you’ll need a separate vSphere cluster someplace to run the Management Block’s vCenter Server. This could be any cluster, such as a View Block.

Each View Block can be thought of as a resource boundary. VMware View desktop pools are confined to a vSphere cluster, and the desktops in each pool are confined to the storage visible to the ESXi hosts where those desktops are running. The storage can be dedicated to a View Block or shared across multiple View Blocks. The primary design constraint is performance (read/write IOPS); the secondary design constraint is disk footprint (the total disk space consumed by all desktops in all the pools in the View Block).
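
Here is a small sketch of how those two constraints bound a View Block’s desktop count. Every input value is a hypothetical placeholder you would replace with figures from your own desktop assessment and storage sizing.

    # View Block sizing against the two constraints above: IOPS and disk footprint.
    def max_desktops_by_iops(usable_array_iops, iops_per_desktop):
        return usable_array_iops // iops_per_desktop

    def max_desktops_by_capacity(usable_tb, gb_per_desktop):
        return int(usable_tb * 1024 // gb_per_desktop)

    # Hypothetical inputs -- replace with your own assessment data.
    usable_array_iops = 40000   # sustained IOPS the View Block's storage can deliver
    iops_per_desktop = 12       # steady-state IOPS per desktop
    usable_tb = 30.0            # usable capacity available to the View Block
    gb_per_desktop = 20.0       # per-desktop footprint (replica, delta disks, swap, etc.)

    limit = min(max_desktops_by_iops(usable_array_iops, iops_per_desktop),
                max_desktops_by_capacity(usable_tb, gb_per_desktop))
    print(f"This View Block's storage supports roughly {limit:,} desktops")

Whichever constraint produces the smaller number is the one that sizes the View Block.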

Figure 4, Example Large Scale View Logical Design 


Here my illustration sacrifices some accuracy in order to illustrate the concept simply. I am not depicting separate VI (vSphere) clusters within the View Block, but in large scale View designs we’d expect to see multiple vSphere clusters, and a number of ESXi hosts within each vSphere cluster. Starting with View 5.1, the maximum vSphere cluster size is dictated by storage type: when running desktops on VMFS datastores the vSphere cluster size is limited to 8 ESXi hosts per cluster; when exclusively using NFS datastores you can go up to 32 ESXi hosts per cluster. Designing for maximum virtual machine density is an important topic that will have a posting all its own.
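
If you want to sanity-check a planned design against that guidance, a few lines of Python will do it. The cluster names and host counts below are invented examples, not recommendations.

    # Check planned cluster sizes against the View 5.1 guidance stated above:
    # 32 hosts with NFS-only datastores, otherwise 8 hosts.
    def max_hosts_per_cluster(datastore_types):
        return 32 if set(datastore_types) == {"NFS"} else 8

    planned = [
        ("Block1-Cluster1", 8,  ["VMFS"]),
        ("Block1-Cluster2", 16, ["VMFS", "NFS"]),   # mixed storage: the VMFS limit applies
        ("Block2-Cluster1", 32, ["NFS"]),
    ]
    for name, hosts, datastores in planned:
        ok = hosts <= max_hosts_per_cluster(datastores)
        print(f"{name}: {hosts} hosts, {'within' if ok else 'exceeds'} the guidance")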

Scalability Guidelines for Desktop Pool Types

We have different types of desktop image management innovations; for simplicity’s sake I’ll refer to them as full clones and linked clones. Full clones are what you’d expect: a standalone virtual machine with a monolithic (and likely thin-provisioned) .vmdk file. Linked clones take advantage of our scalable virtual image (SVI) technology available when using VMware View Composer (for more information on how View Composer works I recommend you read our product documentation). Full clone desktop pools can be quite large, but we recommend that you constrain linked clone pools to a maximum of 1,000 desktops. This isn’t a hard limit; it’s an operational consideration.

In View, a recompose operation is typically (but not exclusively) applied to an entire pool. During a recompose, View deletes each desktop, including its linked clone delta disk, and creates a new one. Suffice it to say there is a lot of activity during recompose operations, including a lot of provisioning operations going through vCenter and a lot of disk I/O. The larger the pool, the longer it takes to recompose. To put some perspective on how long this takes, I have a customer that recomposes 3,000 desktops at least once per month, and it takes about seven and a half hours to complete. I will explore storage design in a separate post (don’t you love the shameless promotional plugs?), but for now suffice it to say that a key contributor to provisioning and recompose time is storage performance.
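
Scaling from that one customer data point gives a rough way to size a linked clone pool against your maintenance window. Your own recompose rate depends heavily on storage performance, so treat this strictly as a starting point for planning.

    # Estimate how many linked clone desktops fit in a recompose maintenance window,
    # extrapolating from the data point above: ~3,000 desktops in ~7.5 hours.
    DESKTOPS_PER_HOUR = 3000 / 7.5   # roughly 400 desktops per hour in that environment

    def pool_size_for_window(window_hours):
        """Desktops that could be recomposed within a given maintenance window."""
        return int(window_hours * DESKTOPS_PER_HOUR)

    for window in (2, 4, 6):
        print(f"{window}-hour window -> roughly {pool_size_for_window(window):,} desktops")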

Recap of key points

For those of you who, like me, enjoy an abundance of multitasking opportunities, here is a recap of the key points of this post:

Logical Design structure

  • A View Pod is a cluster of View Connection Servers
  • A View Block is a View desktop resource boundary with a vCenter server as a point of demarcation
  • A View Management Block is a separate vSphere cluster where View infrastructure virtual machines reside, which helps to separate relatively static server workloads from highly volatile desktop workloads

Connection Servers

  • Replication between Connection Servers across a WAN link is not supported, as JMS communication is highly impacted by network latency
  • The maximum supported number of desktops in a View Pod is 10,000
  • Each Connection Server can support 2,000 concurrent connections
  • Multiple vCenter servers provide parallelization benefits to power and provisioning operations in large View deployments, reducing provisioning and recompose times (at a minimum)


Storage and desktop pools

  • Storage systems can be shared between View Blocks, or can be dedicated to View Blocks to ensure ample performance
  • Desktop pools are constrained to a vSphere cluster
  • vSphere clusters hosting linked clone pools are limited to 8 ESXi hosts when using VMFS datastores and 32 ESXi hosts when using only NFS datastores
  • Limit the size of each linked clone desktop pool based on the length of your maintenance window for recomposing that pool

Wrapping up—for now

I know it may seem there are a lot of variables to balance in large scale View design, but like all meaningful and valuable activities (such as doing celebrity voice impersonations at parties), there is significant benefit to learning the design craft and understanding our best practices. There are more posts to come in this large scale design series, and you should now have a solid foundation in basic View infrastructure. Stay tuned, dear readers: there is much more to come!

Read more about the other products launched as part of our VMware End-User Computing portfolio announcement and register for the VMware End-User Computing Virtual Conference on May 3rd, 8:30am to hear more.

About the Author

John Dodge is a Director in Technical Marketing for the End User Computing Business Unit. John’s team is responsible for technical enablement of Enterprise customer accounts and SISO and channel partners around the globe. John is a well-reviewed and regular speaker at VMworld and Partner Exchange events and has recorded numerous videos and podcasts on View Best Practices, Troubleshooting, and large-scale design. When John isn’t developing new content to entertain and inform on a variety of technical subjects (why should deep technical material be boring?), he can usually be found helping customers free the greatest business value trapped in their IT infrastructure.