By TJ Vatsa, VMware EUC Consultant
INTRODUCTION

I am writing this blog to share my thoughts and experiences when it comes to architecting enterprise virtual desktop infrastructure (VDI) solutions. While some schools of thought hold that a one-size-fits-all approach provides a low-cost, modular deployment strategy, I take a different view: “one size may fit, or better yet, align with one specific use case”. In my opinion this approach leads to a repeatable, predictable design strategy and methodology that can be applied to any use case from any industry vertical. That strategy and methodology are what I’ll attempt to articulate in the next few paragraphs and in subsequent blogs in this series.
Having worked with many customers across different industry domains, namely Healthcare, Financial and Insurance Services, Manufacturing, and others, I’ve noticed that one aspect of VDI is the most crucial element in determining whether a deployment succeeds or struggles: “Storage”, boon or bane. If you’ve collected your share of scars implementing VDI, like I have, then you know what I’m talking about.
With this introductory background, let’s cut to the chase. The key VDI challenges that I’ve come across most often are the following:
- 1. CAPACITY – Oversized but underperforming storage platforms that are costly to own, whether funded from capital on hand or from the amount budgeted for the fiscal year.
Given the current trend of storage capacity becoming cheaper for certain tiers (though tier 1 remains somewhat expensive), customers are often tempted to opt for high-capacity storage arrays. During the VDI storage sizing effort, this approach creates the perception that there is, and will be, sufficient storage capacity to house the requirements of the current as well as the future VDI user population. While the perception may be true, it only guarantees storage capacity, not the performance that users expect in terms of VDI response time.
- 2. PERFORMANCE – Cluttered user segmentation resulting from the “one size fits all” assumption.
The performance capability of a storage array is commonly measured by the total number of IOPS (Input/Output Operations Per Second), say “Z”, that the array is capable of supporting. [Note: From a VDI perspective, we are interested in the frontend (i.e., logical) IOPS of the storage array.]
From a VDI deployment perspective, the next logical step is to determine the IOPS requirement per desktop, say “X”, and multiply it by the total number of VDI users, say “Y”. The obvious conclusion for Storage Architects, IT Managers, IT Directors, and other stakeholders is that as long as (X * Y) <= Z, the storage array will be capable of meeting the expected performance service-level agreement (SLA).
The biggest pitfall in this calculation is the assumption that the IOPS per desktop “X” is the same across all user categories (aka user communities/segments) and across use cases from different lines of business (LOBs) within an enterprise. This is the challenging “one size fits all” approach. The resulting outcome is either an undersized or an oversized storage array design, depending upon whether “X” is taken from a peak or a valley on the IOPS graph (see the first sketch following this list). In either scenario it will be a costly proposition:
- a) Oversized Storage Array
Upfront costly investment (should “X” represent peak IOPS), since not all VDI users will require such high IOPS.
- b) Undersized Storage Array
Delayed but additional investment (should “X” represent valley IOPS), because you would need to augment storage performance at a later date to cater to your power users who demand higher IOPS.
- 3. OPERATIONS – Performance blues during patching operations.
Another challenging aspect that I’ve experienced with the storage sizing effort is that the teams involved end up overlooking storage storms. These storms cause operational blues during patch updates, anti-virus (AV) updates, and mass boot operations (boot storms).
Assuming that you’ve deployed a desktop assessment/monitoring application to measure the IOPS on a per desktop basis, there are at least these two important categories of IOPS that you should be aware of:
- a) Steady State IOPS
These are the IOPS metrics that the desktop assessment/monitoring application reports during normal day-to-day desktop operations. Let us say this is represented by the measure “S”.
- b) Peak State IOPS
These are the IOPS metrics that the desktop assessment/monitoring application reports during storage storms. I have seen this metric average around three times the steady state, and in certain cases go beyond that. For instance, if the steady-state IOPS per desktop is “S”, the peak-state IOPS, say “P”, can reach and in certain cases exceed “3S”. For the remainder of this example, assume P = 3S (both points are illustrated in the sketches following this list).
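To make the arithmetic behind challenge 2 concrete, here is a minimal sizing sketch in Python. All of the numbers, segment names, and the array’s IOPS rating are hypothetical illustrations of my own, not measurements from any real assessment; the point is only to contrast the naive (X * Y) <= Z check with a per-segment calculation.

```python
# Hypothetical sizing sketch: naive "one size fits all" check vs. a per-segment check.
# All figures below are illustrative assumptions, not measured values.

ARRAY_FRONTEND_IOPS = 40_000  # "Z": total frontend (logical) IOPS the array can deliver

# Naive approach: a single averaged IOPS figure "X" applied to every desktop "Y".
X_PER_DESKTOP = 10     # assumed average IOPS per desktop
Y_TOTAL_USERS = 4_000  # total VDI users

naive_required = X_PER_DESKTOP * Y_TOTAL_USERS
print(f"Naive required IOPS: {naive_required:,} "
      f"({'fits within' if naive_required <= ARRAY_FRONTEND_IOPS else 'exceeds'} the array)")

# Segmented approach: each user community gets its own measured IOPS profile.
# (segment name, number of users, steady-state IOPS per desktop)
segments = [
    ("task workers",      2_500,  6),
    ("knowledge workers", 1_200, 12),
    ("power users",         300, 45),
]

segmented_required = sum(users * iops for _, users, iops in segments)
print(f"Segmented required IOPS: {segmented_required:,} "
      f"({'fits within' if segmented_required <= ARRAY_FRONTEND_IOPS else 'exceeds'} the array)")
```

With the single averaged “X” the design appears to fit the array exactly, while the per-segment view of the very same population exceeds it; flip the numbers around and the same blind spot produces an oversized, overpriced array instead.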
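Similarly, for challenge 3, a second minimal sketch (again with purely hypothetical numbers) applies the peak-state multiplier from the example above (P = 3S) to show how a design that looks comfortable at steady state can fall well short during a boot, patch, or AV storm.

```python
# Hypothetical steady-state vs. peak-state check. All figures are illustrative assumptions.

ARRAY_FRONTEND_IOPS = 40_000  # "Z": frontend (logical) IOPS the array can deliver
STEADY_IOPS_PER_DESKTOP = 10  # "S": steady-state IOPS per desktop
PEAK_MULTIPLIER = 3           # P = 3S, per the example above; your assessment data may differ
TOTAL_USERS = 4_000           # "Y": total VDI users

steady_required = STEADY_IOPS_PER_DESKTOP * TOTAL_USERS
peak_required = steady_required * PEAK_MULTIPLIER  # demand during a boot/patch/AV storm

for label, required in (("Steady state", steady_required), ("Peak state", peak_required)):
    verdict = "within" if required <= ARRAY_FRONTEND_IOPS else "beyond"
    print(f"{label}: {required:,} IOPS ({verdict} the array's {ARRAY_FRONTEND_IOPS:,})")
```

The steady-state demand just fits, but the peak-state demand is three times larger than what the array can deliver; that gap is exactly what the plan and design phase needs to account for.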
For those of you who are already considering these aspects during your VDI storage plan and design phase, hats off to you. For others, I’d highly recommend keeping these aspects in mind while you are planning and designing storage requirements for your VDI deployment.
In my next blog (Part II – Storage Boon or Bane, VMware View Storage Design Strategy & Methodology), I’ll be sharing with you my experiences on how to overcome these challenges with tried and tested design approaches for a scalable and predictable VMware View VDI deployment.
Until then, Go VMware!
TJ Vatsa has worked at VMware for the past 3 years and has over 19 years of expertise in the IT industry, mainly focusing on enterprise architecture. He has extensive experience in professional services consulting, cloud computing, VDI infrastructure, SOA architecture planning and implementation, functional/solution architecture, and technical project management related to enterprise application development, content management, and data warehousing technologies.