By Narasimha Krishnakumar, Staff Product Manager, End-User Computing
We are excited to announce a new feature in VMware View 5.1: VMware View Storage Accelerator. View Storage Accelerator (formerly known as Content Based Read Cache) optimizes storage load and improves performance by caching common blocks of virtual desktop images as they are read, helping to reduce overall TCO in VMware View deployments.
Figure 1: Performance improvements during a boot storm on a single vSphere server running Windows 7
Digging deeper, View Storage Accelerator is an in-memory (ESXi host memory) cache of common blocks. It applies to stateless (non-persistent) as well as stateful (persistent) desktops and is completely transparent to the guest virtual machine/desktop. It does not require any special storage array technology, and it provides additional performance benefits when used in conjunction with storage array technologies.
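To illustrate the idea of a content-based read cache, here is a minimal sketch in Python. The class, names, and eviction behavior are all hypothetical simplifications for illustration only; they do not reflect the actual View Storage Accelerator implementation. The key point it demonstrates is that blocks identical across desktop images (such as shared OS files) are keyed by a digest of their content, so repeated reads of a common block are served from memory rather than the shared storage array:

```python
import hashlib

class ContentBasedReadCache:
    """Illustrative content-addressed block read cache (hypothetical).

    Identical blocks across desktop images are stored once, keyed by a
    digest of their content, so a read of a common block can be served
    from host memory instead of the shared storage array.
    """

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.cache = {}          # content digest -> block data
        self.hits = 0
        self.misses = 0

    def read(self, digest, fetch_from_array):
        """Return the block for `digest`, caching it on a miss."""
        if digest in self.cache:
            self.hits += 1
            return self.cache[digest]
        self.misses += 1
        data = fetch_from_array(digest)   # expensive trip to the array
        if len(self.cache) < self.capacity:
            self.cache[digest] = data
        return data

def block_digest(data):
    """Content hash used as the cache key."""
    return hashlib.sha256(data).hexdigest()
```

With this sketch, if two desktops on the same host read the same OS block, the storage array is touched only once; the second read is a cache hit in host memory.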
Benefits of using View Storage Accelerator include:
Cost Savings – Storage sizing and storage costs are two critical challenges that customers face when deploying VDI, and the two are closely related. VDI workloads are characterized by a pattern of peak workloads and steady-state workloads. The peaks are usually due to events such as boot storms and login storms, which are read-intensive workloads. The variance between these peak events and steady-state workloads is very high, and sizing a storage system for peak workloads makes the system cost prohibitive.
On the flip side, sizing storage for steady state causes user dissatisfaction due to inconsistent system behavior. The View Storage Accelerator feature helps alleviate both challenges by letting users set aside a small amount of ESXi host memory to absorb peak read I/O in VDI workloads. Since the cache can handle these peak read workloads, IT pros can size the shared storage array for steady-state workloads rather than for peak workloads. This reduces the cost of the VDI deployment by decreasing the amount of storage that customers need to buy to address peak workloads.
Improved End User Experience – Since most of the common blocks are cached in main memory, desktop users and their applications avoid having to fetch these blocks from the shared storage array. This improves the overall end-user experience. There are at least two examples of improved user experience:
Fast boot times – If the OS used by all the desktops on an ESX server is the same, most of the blocks of the OS disk will be cached in main memory, and boot times are faster than booting from a centralized shared storage array.
Application Performance – If several users load the same application (for example, MS Word) on an ESX server, the common blocks corresponding to the application are cached once, and all users are served directly from the cache.
Performance Improvements – During peak events such as boot storms and login storms (read-intensive events), performance, measured as the net reduction in IOPS to a centralized shared storage array, improves significantly. Our internal tests indicate that during a multi-host boot storm event, the IOPS to the shared storage array are reduced by about 65%.
Network bandwidth reduction – Since most of the common blocks are cached in ESX memory, overall network bandwidth consumption is reduced both during peak events and in steady state. Because users do not have to access the storage array over the network, the number of packets transmitted over the storage network is considerably reduced. This can enable customers to opt for a lower-bandwidth switch, further reducing the cost of deploying VDI. Figure 1 and Figure 2 below show the reduction in IOPS and bandwidth during boot storm events in the VDI environment.
Stay tuned for next week's post, in which I will explain how to use View Storage Accelerator in practice.
Read more about the other products launched as part of our VMware End-User Computing portfolio announcement, and register for the VMware End-User Computing Virtual Conference on May 3rd at 8:30am to hear more.