Monthly Archives: November 2012

Solved! VMFS File Locking in VMware View 5.1 – All You Wanted to Know

By Fred Schimscheimer, Sr. Technical Marketing, End User Computing 

Ever wonder whether VMware View could support more than eight hosts in an ESX cluster?

Wonder no more. VMware View 5.1 now supports up to 32 hosts in an ESX cluster, provided an NFS datastore is specified for linked-clone replica creation. Because more than eight hosts can now be used in an ESX cluster, VMware View 5.1 makes desktop consolidation more efficient than ever before.

I recently reviewed a technical white paper, VMFS File Locking and Its Impact in VMware View 5.1, which explains the locking mechanism and the number of hosts View can support in an ESX cluster. It is a good read for those of you who want to understand the technical side of things.

You are given a primer on the VMFS file-locking mechanism (a toy sketch of how these lock modes interact follows the list), including:

  • Exclusive Lock
  • Read-Only Lock
  • Multi-Writer Lock
  • The eight- versus 32-host limit in an ESX cluster
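
To build intuition for those lock modes, here is a minimal Python sketch. It is an illustrative toy model, not VMware's implementation: the mode names mirror the paper's terminology, and the compatibility rules encode only the basic idea that read-only locks can be shared among readers and multi-writer locks among writers.

    # Toy model of the three lock modes -- illustrative, not VMware's code.
    from enum import Enum

    class LockMode(Enum):
        EXCLUSIVE = "exclusive"        # one host may read and write
        READ_ONLY = "read-only"        # many hosts may read, none may write
        MULTI_WRITER = "multi-writer"  # several hosts may write (shared disks)

    # Which new requests are compatible with an already-held mode.
    COMPATIBLE = {
        LockMode.EXCLUSIVE: set(),                       # blocks everything
        LockMode.READ_ONLY: {LockMode.READ_ONLY},        # readers share
        LockMode.MULTI_WRITER: {LockMode.MULTI_WRITER},  # writers share
    }

    def can_acquire(held, requested):
        """Return True if a host may take `requested` given the held lock."""
        return held is None or requested in COMPATIBLE[held]

    # A linked-clone replica is opened read-only by every host in the cluster:
    assert can_acquire(LockMode.READ_ONLY, LockMode.READ_ONLY)
    assert not can_acquire(LockMode.EXCLUSIVE, LockMode.READ_ONLY)

On VMFS, the number of hosts that can share that read-only lock on a replica tops out at eight, which is where the cluster limit comes from.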

You’ll also discover that network-attached storage (such as NFS) is not affected by the eight-host limit, and that in View 5.1 you can select a cluster with more than eight hosts provided an NFS datastore is used. In addition, you’ll learn that NFS supports a protocol called Network Lock Manager (NLM), and that NLM uses an advisory locking scheme. It is this mechanism that allows View 5.1 to support up to 32 hosts.
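
To make "advisory" concrete, here is a minimal Python sketch of POSIX advisory locking, the style of locking NLM provides over NFS; the file path is a made-up example. A shared lock can be held by any number of readers at once, and nothing enforces it against processes that never ask for the lock, which is exactly what makes the scheme advisory.

    # Minimal sketch of advisory locking; the path is illustrative only.
    import fcntl

    with open("/mnt/nfs/replica.vmdk", "rb") as f:
        # Request a shared (read) advisory lock. Any number of readers may
        # hold it at once -- there is no fixed ceiling like VMFS's eight.
        fcntl.lockf(f, fcntl.LOCK_SH)
        header = f.read(4096)  # read while other hosts hold the same lock
        fcntl.lockf(f, fcntl.LOCK_UN)  # release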

Last but not least, this paper provides a few use cases and best practices to follow when using a large cluster (more than eight hosts with an NFS datastore) in VMware View 5.1. Take a look.

Pre-Defined and Certified Solutions with Cisco UCS Servers, NexentaVSA and VMware View

By: Alex Aizman, CTO & Co-Founder, Nexenta Systems

Desktop virtualization solutions are gaining traction with small and mid-sized organizations. But many of these IT organizations are extremely lean and don’t have spare resources to dedicate to VDI deployment, systems integration, and SAN management. To help simplify VDI rollouts, Cisco, VMware, and Nexenta have teamed up to develop a set of integrated solutions that are fully validated under the VMware Rapid Desktop Program.

These solutions are now available in the following configurations:

Entry-level: Cisco UCS C240 M3 with Nexenta VSA for VMware View™: This solution is based on a UCS C-Series rack-mount server with internal storage, running VMware View on VMware vSphere and using the Nexenta VSA for desktop deployment and configuration. With this solution you have the option to start small with just one C240 and grow your environment by adding more C240s to the cluster. Each C240 M3 server can handle roughly 175 to 200 virtual desktops, depending on workloads, desktop images, and usage patterns.

Fig 1. Deployment Option 1: Floating Desktops Using Cisco UCS C240 M3 Rack Mount Servers
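
As a back-of-the-envelope check on that scaling model, here is a small Python sketch. The 175-to-200 desktops-per-server range comes from the paragraph above; the desktop target you plug in is your own, and real sizing depends on workloads, images, and usage patterns.

    # Rough cluster sizing for the entry-level C240 M3 option.
    import math

    def servers_needed(desktops, per_server_low=175, per_server_high=200):
        """Return the plausible server-count range for a desktop target."""
        at_best = math.ceil(desktops / per_server_high)   # optimistic density
        at_worst = math.ceil(desktops / per_server_low)   # conservative
        return list(range(at_best, at_worst + 1))

    # Example: a 1,000-desktop rollout needs roughly 5 to 6 C240 M3 servers.
    print(servers_needed(1000))  # [5, 6]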

Scale-out: This solution runs the VMware View desktop workload on UCS B-Series blade servers, with storage provided by UCS C-Series rack-mount servers managed by NexentaVSA.

A combination of UCS B-Series blades (B230 M2 or B200 M3) and UCS C-Series rack servers (C240 M3) can pack more density and provide a shared-storage option for standing up dedicated desktops. NexentaVSA on the local blade hosts provides the performance required by the desktops, while the Nexenta storage in the rack servers provides the external storage to support backup, high availability, replication, and so on. The rack servers can be configured as a mirrored pair to maintain high availability should one of the C240s become unavailable.

Fig 2. Deployment Option 2: Dedicated Desktops Using Cisco UCS C240 M3 Rack Mount Servers

Expect to be able to stand up approximately 1,600 VMware View desktops on eight blade servers in a chassis with this configuration.
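
That works out to 1,600 / 8 = 200 desktops per blade, the top end of the per-server density quoted for the entry-level option.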

For more details, visit https://www.nexenta.com/corp/solutions/cisco-ucs-and-nexenta

Get More out of Your Storage with View Storage Accelerator

By Fred Schimscheimer, Sr. Technical Marketing, End User Computing

Are you tired of stressing out your servers and storage?  Do you want to reduce your IOPS?

If the answer is yes, you should take a look at View Storage Accelerator (also known as host caching) for View desktops. Content-Based Read Cache (CBRC) is part of vSphere 5.0 and has been integrated into VMware View 5.1. One common performance bottleneck is the read I/O that a virtual machine issues to the underlying storage to access data in its virtual machine image. View Storage Accelerator addresses this bottleneck by leveraging the CBRC feature in vSphere, a per-host, RAM-based read cache for View desktops. This considerably reduces the read I/O requests issued to the storage layer and also helps absorb boot storms.

Many large VDI deployments will see performance improvements during a boot storm with a large number of cloned virtual machines, especially when the boot storm is highly read-intensive and multiple virtual machines issue read requests for data blocks with identical content. More generally, View Storage Accelerator helps whenever disks contain the same content, irrespective of where that content comes from.
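
The key idea is that blocks are cached by a digest of their content rather than by their location, so identical blocks read by many clones all land on a single cache entry. Here is a conceptual Python sketch; it is an illustrative model, not VMware's CBRC implementation (in CBRC, digests come from a precomputed digest file per virtual disk).

    # Conceptual content-based read cache -- illustrative only.
    import hashlib

    class ContentBasedReadCache:
        def __init__(self):
            self._blocks = {}   # content digest -> block bytes
            self.hits = 0
            self.misses = 0

        def read(self, digest, fetch_block):
            """Return the block for `digest`, going to storage on a miss."""
            if digest in self._blocks:
                self.hits += 1                 # served from host RAM
            else:
                self.misses += 1               # one trip to the storage layer
                self._blocks[digest] = fetch_block()
            return self._blocks[digest]

    # 100 cloned desktops booting read the same OS block: 1 miss, 99 hits.
    cache = ContentBasedReadCache()
    os_block = b"\x00" * 4096
    digest = hashlib.sha1(os_block).hexdigest()
    for _ in range(100):
        cache.read(digest, lambda: os_block)
    print(cache.hits, cache.misses)  # 99 1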

Greg Pellegrino at Pivot3 did some early testing of View Storage Accelerator, and here is what he found. The chart below shows the read rate of the SSD tier. The desktops are Windows 7 with the recommended settings for View applied.

Figure 1:  Graph courtesy of Pivot3

The timeline shows:

  • Time 12-67: login storm
  • Time 170-232: media-player file reads from RAWC, pass 1
  • Time 340-405: media-player file reads from RAWC, pass 2

5.0 and 5.1 refer to the View version tested.

Reboot refers to a test run immediately after the desktops boot. No Reboot is a subsequent test run without rebooting the desktops.

CBRC has the most impact:

  • During the login storm, when logins immediately follow the desktop reboots; this corresponds to Windows loading executables to transition into a user runtime state.
  • During reads of the large media files from disk: the first desktop to read these files causes them to be cached in CBRC, and the other desktops then get cache hits.

To take advantage of this functionality you’ll need VMware View 5.1 and the latest vSphere service pack. View Storage Accelerator is configured through the View Administrator console, and it is just a one-step setting when configuring a pool. View Storage Accelerator is supported for any vCenter-managed View desktops, including manual desktops, automated full-clone desktops, and automated linked-clone desktops.
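
View handles the host-side setup when you enable the pool setting, but if you are curious about the knob it manages, the ESXi advanced option involved is CBRC.Enable. Here is a hedged pyVmomi sketch that merely reads that option on each host; the connection details are placeholders, and since View manages the setting for you, treat this as read-only inspection.

    # Read the CBRC.Enable advanced option on every host (read-only sketch).
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com", user="administrator",
                      pwd="secret")  # placeholder credentials; add SSL
                                     # handling as your environment requires
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        for host in view.view:
            opt = host.configManager.advancedOption.QueryOptions("CBRC.Enable")
            print(host.name, "CBRC.Enable =", opt[0].value)
        view.Destroy()
    finally:
        Disconnect(si)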

To learn more about View Storage Accelerator, see this white paper: View Storage Accelerator in VMware View 5.1