

View Storage Accelerator – In Practice

By Narasimha Krishnakumar, Staff Product Manager, End-User Computing

This post details how to use View Storage Accelerator. Read my previous post for an overview of View Storage Accelerator: VMware EUC Portfolio: Optimizing Storage with View Storage Accelerator

Let's get right into it. VMware View 5.1 exposes the View Storage Accelerator feature at pool creation time. As shown in Figure 1 below, desktop administrators can choose to use host caching through the Advanced Storage Options configuration screen.


Figure 1. View Administrator UI screen for configuring View Storage Accelerator (OS Disks)


You can also configure the following parameters when you choose to use host caching:

Disk type for host caching – OS disk or OS disk and user persistent disk for linked clone desktops 

Cache regeneration time – This determines when the ESXi server regenerates the cache. Additional details of the cache implementation are discussed in the sections below.

When you choose to use host caching, the host cache size is configured to a default of 1024MB. This default can be changed to any size between 100MB and 2048MB, depending on the needs of the virtual desktop environment. Figure 2 below shows the host cache settings on the server. Although View administrators can configure a different cache size on each ESXi server, VMware's best practice recommendation is to use the same cache size on every ESXi server.

Figure 2: View Server configuration for View Storage Accelerator

A cache regeneration policy needs to be configured when you enable host caching. The cache regeneration policy determines when the cache is regenerated. In the example below (Figure 3), the policy has been defined to run on all days of the week with a blackout period between 8AM and 5PM. The cache regeneration operation can consume compute resources (CPU) on the ESXi server and may affect desktop performance if it runs while desktops are being actively used. Desktop administrators should define this policy based on the requirements of their environment; it is highly recommended that regeneration be scheduled outside business hours, when desktops are not active.

Figure 3. View Administrator UI screen for scheduling cache regeneration
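The blackout logic amounts to a simple wall-clock check. The sketch below is purely illustrative (the function name and parameters are hypothetical, not a View API); it models the example policy above, where regeneration is blocked between 8AM and 5PM every day:

```python
from datetime import time

def regeneration_allowed(now, blackout_start=time(8, 0), blackout_end=time(17, 0)):
    """Return True if cache regeneration may run at the given wall-clock time.

    Models the example blackout policy (8AM-5PM, all days of the week).
    Illustrative only -- not part of the View Administrator API.
    """
    return not (blackout_start <= now < blackout_end)
```

With this policy, a regeneration scheduled for 6:30AM would proceed, while one falling at noon would be deferred until the blackout period ends.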

If we look at what is going on inside of View Storage Accelerator, there are two main components: 

In-memory cache – This is the area in ESXi memory that caches the common blocks. This cache is shared by all the VMs on the ESXi server.

Digest – This is an on-disk structure, stored as a file, that is created for every disk enabled for View Storage Accelerator.

We need to consider three different phases to explain the internal functions of View Storage Accelerator:

Setup Phase – When creating a View pool, a desktop administrator chooses whether to enable View Storage Accelerator. After enabling it, they select a disk type: either OS disk, or OS disk and user disk, for linked clones; for full clones, the OS disk and user disk are both covered. Depending on the disks selected, the feature works as follows:

  • For linked clone VMs, if OS disk is selected, an on-disk digest is created for all the replica disks in the environment.
  • For linked clone VMs, if OS disk and user disk are selected, a digest is created for the replica disks as well as the user disks.
  • For full clone VMs, a digest is created for each VM.

The digest contains hash values for all the blocks in the vmdk; the content of a block determines the value of its hash entry. At pool creation time, View Storage Accelerator constructs the digest by walking through the entire vmdk and filling in an entry for each block.
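Conceptually, building the digest is a sequential scan of the disk image, hashing one block at a time. The sketch below is a simplified stand-in (the block size, hash algorithm, and digest layout here are assumptions for illustration; the real on-disk format is internal to ESXi):

```python
import hashlib

BLOCK_SIZE = 4096  # assumed block size for illustration; the real value is internal to ESXi

def build_digest(vmdk_path):
    """Walk an entire disk image and record a content hash per block.

    Returns one entry per block: its content hash plus a validity flag.
    A simplified stand-in for the on-disk digest file View Storage
    Accelerator creates at pool creation time.
    """
    digest = []
    with open(vmdk_path, "rb") as disk:
        while True:
            block = disk.read(BLOCK_SIZE)
            if not block:
                break
            digest.append({"hash": hashlib.sha1(block).hexdigest(), "valid": True})
    return digest
```

Because the entry is derived from block content rather than block address, two identical blocks anywhere on the disk produce the same hash, which is what lets the in-memory cache hold a single copy of a block shared by many desktops.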

Operational Phase – During the operational phase, the in memory cache and digest are both accessed. Two scenarios can occur during the operational phase:

  • When a user desktop reads a block, the ESXi host first looks up the block's hash value in the digest, based on the block address. If the entry is valid and the block is in memory, the block is returned from cache; if the entry is valid but the block is not in memory, it is read from disk, loaded into the cache, and returned. If the entry is marked invalid, the block is read from disk and returned without caching.
  • When a user desktop writes a block that is in cache, the entry for the block in the digest is invalidated.
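The two scenarios above can be sketched as a toy model. Note that this is a hypothetical illustration only, not VMware's implementation: the class and field names are invented, and the cache is keyed by content hash so identical blocks from different desktops share one cached copy.

```python
class ContentCache:
    """Toy model of the per-host in-memory cache plus a per-disk digest.

    Illustrative sketch only -- names and data structures are assumptions,
    not VMware's implementation.
    """

    def __init__(self, digest, disk):
        self.digest = digest   # one {"hash": str, "valid": bool} entry per block address
        self.disk = disk       # block address -> block contents (stands in for the datastore)
        self.cache = {}        # content hash -> block contents, shared by all VMs on the host

    def read(self, addr):
        entry = self.digest[addr]
        if entry["valid"]:
            if entry["hash"] in self.cache:
                return self.cache[entry["hash"]]   # hit: common block served from memory
            block = self.disk[addr]                # miss: read from disk, then cache it
            self.cache[entry["hash"]] = block
            return block
        return self.disk[addr]                     # invalid entry: bypass the cache entirely

    def write(self, addr, block):
        self.disk[addr] = block
        self.digest[addr]["valid"] = False         # hash is stale until the digest regenerates
```

In this model, after one desktop reads a common block, a second desktop reading an identical block at a different address is served from memory, while a written block is always fetched from disk until regeneration restores its digest entry.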

Regeneration Phase – When a user writes to any of the common blocks cached in memory, the corresponding entries in the digest are marked invalid. The digest has to be regenerated when many of the common blocks have changed. Regeneration is triggered only at the times defined by desktop administrators in the cache regeneration policy settings; it is not automatic.

Some things to consider…

View Storage Accelerator is an easy-to-use feature. The only precaution users need to take is when configuring the cache regeneration policy: VMware's best practice recommendation is to schedule cache regeneration during periods of low user activity. The following are a few considerations when planning to use the View Storage Accelerator feature:

  • The feature is not supported with View Composer APIs for Array Integration, which is a Tech Preview feature of View 5.1.
  • The feature is not supported for use with desktops with the Local Mode feature turned on.
  • Use of View Storage Accelerator is not supported when View Replica Tiering is enabled.
  • View Storage Accelerator requires VMware vSphere 5.0.

And some common questions …

Why is the Cache size limited to 2GB?

In internal tests, we found that users achieve maximum benefit with cache sizes up to 2GB. We are constantly looking for input from our customer base to help us determine whether raising the cache size limit would further enhance these benefits.

Is the configured cache size the actual usable cache size? Are there any memory overheads?

Yes, the configured cache size is the actual size that is used for caching common blocks of data. A very small amount of memory overhead is associated with storing the metadata for the cache and is maintained separately in memory.

Does View Storage Accelerator provide Write Caching Capabilities?

At this time, View Storage Accelerator is meant to address read-intensive peak workloads, which are characteristic of VDI deployments, as well as workloads that access common blocks on disk; it does not cache writes.

I have a local SSD drive which offers a high read IOPS density. Will I get any benefit if I use View Storage Accelerator?

View Storage Accelerator caches common blocks in memory and addresses peak read events such as boot storms, as well as workloads that access common blocks of data. An SSD-class storage device on the server provides the high-density IOPS required for the desktops, as well as the capacity to store desktop user data. Although SSD is a faster tier that can provide rapid access to data, it does not provide content-based caching. View Storage Accelerator caches data that is common to all the VMs on an ESXi host, and the cache size is fixed irrespective of the number of VMs, whereas SSDs need to scale in capacity as VMs are added. The performance benefits of View Storage Accelerator are additive to the benefits provided by SSD-class storage.

I use a shared storage array that has a read cache on board with the CPU. Will there be a benefit to using View Storage Accelerator?

Typically, a storage array sits behind a storage network, and applications need to access the array's read cache over that network, which incurs additional latency. View Storage Accelerator reduces overall latency and network bandwidth by caching the common blocks directly on the ESXi server. In addition, most shared storage arrays do not have a content-based cache on board, and the array cache is subject to eviction by many different workloads. Where storage arrays do implement caching capabilities, whether as an additional software license or a combination of hardware and software, View Storage Accelerator complements that functionality and reduces network bandwidth.

Is there a risk of data corruption if I use View Storage Accelerator?

View Storage Accelerator does not cause data corruption. It caches common blocks in ESXi server memory and serves them to desktop applications that request them. The ESXi server ensures that once a block is modified, it is not returned from the in-memory cache until the digest has been regenerated and the content hash for the block is up to date.

I am a VMware partner who needs to test View Storage Accelerator and make best practice recommendations of its use with my products. Is there a command to force regeneration of the cache?

At present, the View Storage Accelerator cache can be regenerated only through the definition of a cache regeneration policy.

 

 

2 thoughts on “View Storage Accelerator – In Practice”

  1. Craig Stewart

    Just want to clarify my understanding of the operational phase. Does the statement below indicate that the ESXi host has to go to the on disk digest to determine whether it should be reading from memory or from disk? Does that not introduce a latency in itself as regardless whether the read is served from memory or disk you are still having to go to the shared storage to make that decision?
    “When a user desktop reads a block, it first looks up the hash value of the block based on the block address. If the block is valid and is in memory it is returned to the user, otherwise it is read from disk, loaded into memory and returned to the user. If the block is marked invalid, it reads the block from disk and returns it to the user.”
    I’m also wondering how you age data out of the cache. You discuss a situation where you have a read miss: data isn’t in the cache, so it is read from disk and cached in memory for future reads. How are you dealing with aging data out of the cache to make way for new data coming in when these read misses occur?
    Love the technology, makes complete sense to me, as always I just want to fully understand the internal workings of it :-)

  2. Narasimha Krishnakumar

    @Craig,
    View Storage Accelerator maintains metadata in memory for each digest file. This ensures that you do not have to go to disk to read a block of data that is already loaded in the cache. Data in the cache ages out when applications/users write to common blocks of data that have been cached. The digest has to be regenerated (the Regeneration Phase described above) when a lot of common blocks have been invalidated due to writes on those blocks. Typically, in VDI environments, there is a lot of common data that is often read from (for example, boot images are usually identical and shared among several desktops) and rarely written to. If the environment consists of a lot of common user data (for example, several desktop users accessing the same PowerPoint presentation or the same Office document and making changes to their local copies), the blocks that have been written to will be marked as invalid, and a regenerate operation will help in that case. Hope this helps.
