
Monthly Archives: April 2013

The Proof is in the Impact

Today’s challenging business environment is shaped by the convergence of many changes. In this new business paradigm, IT executives are faced with determining how best to direct their staff, how to redesign IT processes, and how to use technology to grow their businesses and/or fundamentally shift their business models. Anticipating and staying abreast of these challenges requires thought leadership and deep technical capabilities.

In this video, Michael Hubbard, Sr. Director of Accelerate and Services Sales for the Americas, discusses the value of this blog as a place to glean best practices and insights from our consulting experts on virtualization, end user computing, cloud computing, and more. He also shares a customer success story in which VMware delivered an impactful, always-on point-of-care solution for a major hospital.

Check back soon for more stories, best practices and insights.

Part II: Storage Boon or Bane – VMware View Storage Design Strategy & Methodology

By TJ Vatsa, VMware EUC Consultant

INTRODUCTION

Welcome to Part II of the VMware View Storage Design Strategy and Methodology blog. This post continues Part I, which can be found here. In the last post, I listed some of the most prevalent challenges that impede a predictable VMware View storage design strategy. In this post, I will articulate some of the successful storage design approaches employed by the VMware End User Computing (EUC) Consulting practice to overcome those challenges.

I’d like to reemphasize that storage is crucial to a successful VDI deployment. If a VDI project falls prey to the challenges listed in Part I, storage will certainly seem a “bane.” But if you follow the recommended design strategy below, you may be surprised to find storage a “boon” for a scalable and predictable VDI deployment.

With that in mind, let’s dive in. Some successful storage design approaches I’ve encountered are the following:

1. PERFORMANCE VERSUS CAPACITY
Recommendation: “First performance, then capacity.”

Oftentimes, capacity seems more attractive than performance. But is it really so? Let’s walk through an example.

a) Let’s say vendor “A” is selling you a storage appliance, “Appliance A,” with a total capacity of 10TB, delivered by 10 SATA drives of 1TB capacity each.

b) On “Appliance A,” let’s say that each SATA drive delivers approximately 80 IOPS. So, for 10 drives, the total delivered by the appliance is 800 IOPS (10 drives * 80 IOPS).

c) Now let’s say that vendor “B” is selling you a storage appliance, “Appliance B,” that also has a total capacity of 10TB, but delivered by 20 SATA drives of 0.5TB capacity each. [Note: “Appliance B” may be more expensive, as there are more drives than in “Appliance A.”]

d) For “Appliance B,” assuming that the SATA drive specifications are the same as those of “Appliance A,” you should expect 1600 IOPS (20 drives * 80 IOPS).

It’s mathematically clear that “Appliance B” delivers twice as many IOPS as “Appliance A,” and more storage IOPS invariably turn out to be a boon for a VDI deployment. Another important point to consider is that employing a higher tier of storage also ensures higher IOPS availability. Case in point: replacing the SATA drives in the example above with SAS drives will certainly provide higher IOPS, and SSD drives, while expensive, will provide higher IOPS still. A quick back-of-the-envelope version of this comparison is sketched below.

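To make the comparison concrete, here is a minimal back-of-the-envelope sketch. The ~80 IOPS per SATA drive figure is the assumption from the example above, and RAID write penalties are ignored for simplicity:

```python
# Back-of-the-envelope IOPS comparison of the two hypothetical appliances.
# Assumes ~80 IOPS per SATA spindle (as above) and ignores RAID penalties.
def appliance_iops(drive_count: int, iops_per_drive: int = 80) -> int:
    """Raw front-end IOPS an appliance's spindles can deliver."""
    return drive_count * iops_per_drive

print(f"Appliance A (10 x 1TB SATA):   {appliance_iops(10)} IOPS")  # 800
print(f"Appliance B (20 x 0.5TB SATA): {appliance_iops(20)} IOPS")  # 1600
```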
 

2. USER SEGMENTATION
Recommendation: Intelligent user segmentation that does not assume a “one size fits all” approach.

As explained in Part I, taking a generic per-user IOPS figure, say “X,” and multiplying it by the total number of VDI users in an organization, say “Y,” may result in an oversized or undersized storage array design. This approach may prove costly, either up front or at a later date.

The recommended design approach is to intelligently categorize users’ IOPS as small, medium, or high based on the load a given category of users generates across the organization. In common industry nomenclature for VDI users:

a)     Task Workers: associated with small IOPS.
b)     Knowledge Workers: associated with medium IOPS.
c)     Power Users: associated with high IOPS.

With these guidelines in mind, let me walk you through an example. Let’s say that Customer A’s Silicon Valley campus location has 1000 VDI users, and assume the user split is:

a)     15% Task Workers with an average of 7 IOPS each
b)     70% Knowledge Workers with an average of 15 IOPS each
c)     15% Power Users with an average of 30 IOPS each

The resulting calculation of total estimated IOPS required will look similar to Table 1 below.

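Since the arithmetic behind Table 1 follows directly from the assumptions above, here is a minimal sketch of the same calculation, folding in the 30% growth/buffer figure noted in the key takeaways below:

```python
# Sketch of the Table 1 arithmetic: per-segment IOPS for 1,000 users,
# plus the assumed 30% capacity-growth/buffer percentage.
segments = {
    "Task Workers":      {"share": 0.15, "avg_iops": 7},
    "Knowledge Workers": {"share": 0.70, "avg_iops": 15},
    "Power Users":       {"share": 0.15, "avg_iops": 30},
}
total_users = 1000
growth_buffer = 0.30  # assumed 30%; varies by customer and industry domain

steady_state_iops = sum(
    total_users * seg["share"] * seg["avg_iops"] for seg in segments.values()
)
total_estimated_iops = steady_state_iops * (1 + growth_buffer)

print(f"Steady-state IOPS: {steady_state_iops:,.0f}")    # 16,050
print(f"With 30% buffer:   {total_estimated_iops:,.0f}") # 20,865
```
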
Key Takeaways:

1. It is highly recommended to consult with the customer, and to make use of a desktop assessment tool, to determine the user distribution (split) as well as the average IOPS per user segment.
2. Estimated capacity growth and the buffer percentage are assumed to be 30%. This may vary for your customer based on the industry domain and other factors.
3. This approach makes the IOPS calculation more predictable, because it is based on user segmentation specific to a given customer’s desktop usage.
4. You can apply this strategy to customers from Healthcare, Financial, Insurance Services, Manufacturing, and other industry domains.
3. OPERATIONS
Recommendation: “Include operational IOPS related to storage storms.”

It is highly recommended to proactively account for the IOPS generated by storage storms. Any lapse can result in a severely degraded VDI user experience during these storms: patch storms, boot storms, and anti-virus (AV) storms.

Assuming that a desktop assessment tool is employed for the analysis, it is recommended to analyze the percentage of users targeted during each of the storm operations listed above.

For instance, if the desktop operations team pushes OS/application/AV patches in batches of 20% of the total user community, and the estimated IOPS during patching is, say, three times the steady-state IOPS (explained in Part I), it is prudent to add another attribute for operational IOPS to Table 1 above. One way to fold this into the calculation is sketched below.

A similar strategy should also be employed to account for boot and log-off storms.

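Continuing the example above, here is a hedged sketch of that patch-storm adjustment. The 20% batch size and the 3x multiplier are the assumptions just stated; the model simply assumes the batch being patched runs at three times steady state while the remaining users stay at steady state:

```python
# Patch-storm adjustment on top of the steady-state figure computed earlier.
steady_state_iops = 16050  # from the segmentation sketch above
batch_share = 0.20         # 20% of users patched per batch (assumed above)
storm_multiplier = 3       # peak is roughly 3x steady state, per Part I

storm_iops = steady_state_iops * ((1 - batch_share) + batch_share * storm_multiplier)
print(f"Estimated IOPS during a patch storm: {storm_iops:,.0f}")  # 22,470
```
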
I hope you find this information useful as you work on your VDI architecture design and deployment strategy.

Until next time. Go VMware!

TJ Vatsa has worked at VMware for the past three years and brings over 19 years of experience in the IT industry, focusing mainly on enterprise architecture. He has extensive experience in professional services consulting, cloud computing, VDI infrastructure, SOA architecture planning and implementation, functional/solution architecture, and technical project management related to enterprise application development, content management, and data warehousing technologies.

 

Part I: Storage Boon or Bane – VMware View Storage Design Strategy & Methodology

By TJ Vatsa, VMware EUC Consultant

INTRODUCTION

I am writing this blog to share my thoughts and experiences architecting enterprise virtual desktop infrastructure (VDI) solutions. While some schools of thought hold that a “one size fits all” approach provides a low-cost, modular deployment strategy, I take a different perspective: one size may fit, or better yet align with, one specific use case. This approach, in my opinion, leads to a repeatable, predictable design strategy and methodology that can be applied to any use case from any industry vertical. That strategy and methodology is what I’ll attempt to articulate in the next few paragraphs and in the subsequent posts in this series.

Having worked with many customers across different industry domains, namely Healthcare, Financial, Insurance Services, Manufacturing, and others, I’ve noticed one element of VDI that is the most crucial to either a successful or a challenging deployment: storage, boon or bane. If you’ve earned your share of scars implementing VDI like I have, then you know what I’m talking about.

With this introductory background, let’s cut to the chase. The most prevalent VDI challenges that I’ve come across are the following:

1. CAPACITY – Oversized but underperforming storage platforms that are very costly to own, given the capital on hand or the amount budgeted for the fiscal year.

Given the current trend of storage capacity becoming cheaper for certain tiers (though still somewhat expensive for tier 1), customers are often tempted to opt for high-capacity storage arrays. During the VDI storage sizing effort, this approach creates the perception that there is, and will be, sufficient storage capacity to house the VDI storage requirements of the current and future VDI user population. The perception may be true, but it only guarantees storage capacity, not the performance that users expect in terms of VDI response time.

2. PERFORMANCE – Cluttered user segmentation that assumes a “one size fits all” approach.

The performance capability of a storage array is commonly measured by the total IOPS (Input/Output Operations Per Second), say “Z,” that the array is capable of supporting. [Note: from a VDI perspective, we are interested in the front-end (logical) IOPS of the storage array.]

From a VDI deployment perspective, the next logical step is to determine the IOPS requirement per desktop, say “X,” and multiply it by the total number of VDI users, say “Y.” The obvious conclusion for storage architects, IT managers, IT directors, and other stakeholders is that as long as (X * Y) <= Z, the storage array will be capable of supporting the expected performance service-level agreement (SLA).

The biggest pitfall in this calculation is the assumption that the IOPS per desktop, “X,” is the same across all user categories (aka user communities or segments) and across use cases in different lines of business (LOBs) within an enterprise. This is the troublesome “one size fits all” approach. The resulting outcome is either an undersized or an oversized storage array design, depending on whether “X” represents the peak or the valley of the IOPS graph. Either scenario is a costly proposition (a quick sketch after the list below illustrates the skew):

a) Oversized storage array
Upfront, costly investment (should “X” represent peak IOPS), since not all VDI users will require such high IOPS.

b) Undersized storage array
Delayed but additional investment (should “X” represent valley IOPS), because you will need to augment storage performance at a later date to cater to your power users, who demand higher IOPS.

 

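To make the skew concrete, here is a toy version of the naive check with hypothetical numbers: Y = 1000 users, Z = 20,000 front-end IOPS, and “X” taken first from the valley and then from the peak of the IOPS graph:

```python
# Naive "one size fits all" sizing check: does X * Y fit within the array's Z?
# All figures are hypothetical, for illustration only.
users = 1000        # Y: total VDI users
array_iops = 20000  # Z: front-end IOPS the array supports

for label, x in [("valley X = 7 IOPS", 7), ("peak X = 30 IOPS", 30)]:
    required = x * users  # X * Y
    if required <= array_iops:
        verdict = "fits, but risks undersizing for real mixed workloads"
    else:
        verdict = "does not fit, driving purchase of an oversized array"
    print(f"{label}: need {required:,} IOPS -> {verdict}")
```
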
3. OPERATIONS – Performance blues during patching operations.

 

Another challenging aspect I’ve experienced with storage sizing efforts is that the teams involved end up overlooking storage storms. These storms cause operational blues during patch updates and anti-virus (AV) updates, as well as during boot operations, which produce boot storms.

Assuming that you’ve deployed a desktop assessment/monitoring application to measure IOPS on a per-desktop basis, there are at least two important categories of IOPS that you should be aware of:

a) Steady-State IOPS
These are the IOPS metrics that the desktop assessment/monitoring application reports during normal day-to-day desktop operations. Let us say this is represented by a measure “S.”

 

b) Peak-State IOPS
These are the IOPS metrics that the desktop assessment/monitoring application reports during storage storms. I have seen this metric averaging at least three times the steady state. For instance, if the steady-state IOPS per desktop is “S,” the peak-state IOPS, say “P,” can be up to, and in certain cases beyond, “3S.” Based on the preceding example, P = 3S.

For those of you who are already considering these aspects during your VDI storage plan and design phase, hats off to you. For everyone else, I highly recommend keeping these aspects in mind while you plan and design the storage requirements for your VDI deployment.

In my next blog (Part II: Storage Boon or Bane – VMware View Storage Design Strategy & Methodology), I’ll share my experiences overcoming these challenges with tried and tested design approaches for a scalable and predictable VMware View VDI deployment.

Until then, Go VMware!

TJ Vatsa has worked at VMware for the past three years and brings over 19 years of experience in the IT industry, focusing mainly on enterprise architecture. He has extensive experience in professional services consulting, cloud computing, VDI infrastructure, SOA architecture planning and implementation, functional/solution architecture, and technical project management related to enterprise application development, content management, and data warehousing technologies.

 

Plan the Work, Work the Plan – Pragmatic Advice for End User Computing Success

By Justin Venezia, VMware EUC Consultant

Many folks who are implementing end user computing solutions (VDI, application virtualization, etc.) ask the same age-old question: “How can I be successful?” Well, the answer is quite simple – plan the work, work the plan. Jumping in with both feet and going straight into the build and deployment of VDI, for example, may help you meet a tactical need and get you to a few hundred desktops. However, when you try to scale that environment, introducing other use cases and elements into the solution, that’s when folks start hitting roadblocks or brick walls. Taking a little extra time up front and doing some careful planning will help keep your end user computing services deployment moving forward. Here’s a 30,000-foot view of the major planning elements and lifecycle one should consider for success:

  • Establish an End User Computing Strategy – It’s important to ensure your plan aligns with the company’s business and IT-related strategic plans. It’s also critical to identify realistic business and technical objectives, along with any current challenges or pain points that end user computing solutions can resolve. A good starting point is to transform your end user computing strategy into an “EUC as a Service” platform: modular in nature so other products and solutions can be easily integrated, with clearly defined service offerings and classes of service, and scalable (both up and out) for the enterprise.
  • Assess your environment – Understanding the footprint and usage patterns of your desktops, applications, and data, as well as how users actually use the desktop day to day, is critical for properly designing your solution and for identifying and confirming business requirements. It also helps you identify potential risks and constraints right up front.
  • Proof of Concept – PoCs should be conducted against defined and measurable success criteria. Basically, the PoC is the “does it work” phase. Of course, this should be done in a controlled testing environment; PoCs should not be rolled out into production.
  • Plan & Design – Design your solution based on strategy and requirements, not around product capabilities or features. Also, be forward-thinking and strategic when working through the plan and design; tactical designs require lots of updates and are just that – tactically done to serve a specific purpose. Finally, leave no stone unturned during your design – be thorough and keep an open mind. Integration of third-party products, or changes in the way you do business, may be necessary to align with your end user computing strategy and vision and with your business and technical requirements.
  • Operational Readiness & Preparation – This is a phase many people overlook, and one I refer to as the “how do you keep the engine running” phase. End user computing is a different technical and operational paradigm. It requires most people to adopt new policies and procedures (or adapt and adjust old ones) to effectively maintain and manage their EUC solution. You can have a new car that runs great, but if you can’t maintain it, it will eventually break down – and always at the wrong time. Take the time to review and adjust your user on-boarding and operational policies, procedures, and resources to build an effective support model and truly achieve the operational benefits of EUC solutions.
  • Build & Validation – This phase is where you build out the solution and conduct cursory functional testing of your design. Make sure you build it into your plan and design blueprint and have testing plans that align with solution requirements. It is also important to be thorough and test all aspects of your design (network, storage, functions/features, etc.). Based off of your findings, you may have to adjust your design.
  • Scalability & Functional Testing – Another phase that’s typically overlooked. The assessment phase provides some insight into how your solution might scale and what density numbers you could possibly achieve; the scalability test is the proof in the pudding. It will help validate not only that your solution will scale, but also that the dependent infrastructure can scale with it. It also paints a data-backed picture of how your solution can scale out (for capacity-planning purposes) and helps flush out any misconfigurations or problems before the pilot phase of the project.
  • Pilot – Just like the PoC, the pilot phase is where you actually have a controlled user population testing the solution in production. Ensure you get user feedback on end user experience – this is one of the most important measurements of success. Also, be proactive – monitor the performance of the desktop and provide users with an easy way to get support and provide constructive feedback.
  • Production & Support – If you’ve followed the phases and high-level guidance outlined above, you’re most likely well on your way to EUC success.

Good luck, and remember – plan the work, work the plan!

Justin Venezia has worked at VMware for three years as an architect within VMware’s End User Computing (EUC) Global Professional Services Engineering team. He has deep expertise in EUC strategy development and deployment of large-scale end user computing solutions.

 

Four Commonly Missed and Easy to Implement Best Practices

By Nathan Smith, VMware EUC Consultant

I want to highlight a few best practices in a View deployment that are often overlooked but easily corrected. My highlights are based on our practice’s collective experience with one of the services offered by EUC Professional Services, the Desktop Virtualization Health Check. These are normally undertaken after the environment has been up and running for a while, or ahead of a significant expansion. Among other things, the Health Check compares your current environment against both vSphere and View best practices; in total, we check over 150 of them. Some are straightforward and hopefully done already – for example, using separate vSphere environments for the desktop and infrastructure components. Some are a little more esoteric, like reviewing the congestion threshold for Storage I/O Control (SIOC).

1. Configure a vCenter user and role with appropriate permissions. It is often tempting during a deployment to use an account that you know isn’t going to run into permissions issues, which frequently ends up being a full vCenter administrator. While this approach will work, it is not recommended to grant more permissions than necessary. Correcting this is straightforward: set up a new vCenter role with the privileges defined in the View Administrators Guide (be sure to add the Composer and Local Mode privileges if you are using those features) and assign permissions at the vCenter level for a new user with this role. In View Administrator, modify the account used to connect to vCenter under View Configuration -> Servers -> vCenter Servers -> Edit. A scripted version of the role creation is sketched below.

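If you script against vCenter, the role creation itself can be automated. Below is a hypothetical pyVmomi sketch; the host, credentials, role name, and privilege IDs are placeholders, and the authoritative privilege list should always come from the View Administrators Guide:

```python
from pyVim.connect import SmartConnect, Disconnect

# Hypothetical sketch: create a least-privilege vCenter role for View.
# The privilege IDs below are an illustrative subset only; use the full
# list from the View Administrators Guide (plus Composer/Local Mode
# privileges if you are using those features).
ILLUSTRATIVE_PRIVILEGES = [
    "Datastore.AllocateSpace",
    "VirtualMachine.Inventory.CreateFromExisting",
    "VirtualMachine.Interact.PowerOn",
    "VirtualMachine.Interact.PowerOff",
]

si = SmartConnect(host="vcenter.example.com",  # placeholder vCenter host
                  user="administrator", pwd="...")
auth_mgr = si.RetrieveContent().authorizationManager
role_id = auth_mgr.AddAuthorizationRole(name="View Manager",
                                        privIds=ILLUSTRATIVE_PRIVILEGES)
print(f"Created role id {role_id}; now assign it at the vCenter level.")
Disconnect(si)
```
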
2. The next two considerations both fall into the same category: virtual hardware configuration. The first is the virtual network adapter type, which should be VMXNET3 for both Windows XP and Windows 7. The second is to verify that the disk controller is an LSI Logic controller: LSI Logic Parallel or SAS for Windows XP, and LSI Logic SAS for Windows 7. Simon Long does a great job of summarizing the reasoning here; a quick way to audit an existing pool is sketched below.

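Checking this by hand across a large pool is tedious. As a minimal sketch, assuming you have exported your desktop VM inventory to a hypothetical CSV with columns name, os, nic_type, and scsi_type, a few lines of Python can flag the outliers:

```python
import csv

# Toy audit of desktop VMs against the virtual-hardware guidance above.
# Assumes a hypothetical export "vm_inventory.csv" with the columns:
# name, os, nic_type, scsi_type.
EXPECTED_NIC = "VMXNET3"
EXPECTED_SCSI = {
    "Windows XP": {"LSI Logic Parallel", "LSI Logic SAS"},
    "Windows 7":  {"LSI Logic SAS"},
}

with open("vm_inventory.csv", newline="") as f:
    for vm in csv.DictReader(f):
        issues = []
        if vm["nic_type"] != EXPECTED_NIC:
            issues.append(f"NIC is {vm['nic_type']}, expected {EXPECTED_NIC}")
        if vm["scsi_type"] not in EXPECTED_SCSI.get(vm["os"], set()):
            issues.append(f"controller {vm['scsi_type']} not recommended for {vm['os']}")
        if issues:
            print(f"{vm['name']}: " + "; ".join(issues))
```
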
3. Appropriately size the Connection Server and its JVM heap. The RAM recommendation for Connection Servers supporting over 50 desktops is 10GB on Windows Server 2008 R2 and 6GB on Windows Server 2003. If you increase memory on a 2008 R2 installation, you will need to reinstall Connection Server to reset the JVM heap size; on a 2003 server, you can follow this section of the Administrators Guide. Also consider whether you have increased memory in the past, for example when moving from pilot to production. Note that as of View 5.1, Windows Server 2003 is no longer a supported operating system for Connection Servers.

4. The last area I want to highlight is OS optimization. There are two technical papers available, one for Windows XP and one for Windows 7, that take you step by step through the best practices for optimizing the OS. These perhaps don’t fall into the “easy to implement” category, as they are a little more time consuming, but they are essential to a successful deployment.

Good luck with your deployment and don’t hesitate to contact your EUC Professional Services lead with questions.

Nathan Smith joined VMware in 2012, bringing with him over 15 years of IT experience. He works in the EUC Professional Services practice, focusing on VMware View deployments.