Author Archives: VMware Professional Services

Join our VMware PS Consultants, TAMs & Solution Architects at VMworld 2013!

Ready to defy convention? Ted Ohr, Senior Director of Americas Service Delivery, personally invites you to join industry leaders and colleagues at the 10th annual US VMworld event in San Francisco, August 25—29. Gain the tools you need to simplify your operations and transform conventional practices into business advantages. Details at http://www.vmworld.com

It All Starts Here: Internal Implementation of Horizon Workspace at VMware

By Jim Zhang, VMware Professional Services Consultant

VMware has had a dogfooding tradition since former CEO Paul Maritz instilled the practice of having VMware IT deploy VMware products internally for production use. As a VMware employee, I can understand some of the criticism of this practice, but I firmly believe it helps us build and deliver a solid, high-quality product to the market.

Prior to the release of VMware's Horizon Suite, VMware IT rolled Horizon Workspace out to employees in the production environment. It's very exciting! I can now use my iPhone and iPad to access my company files without being tied to my desk. It is also easy to share folders and files with colleagues, expanding our ability to collaborate and to track file versions. Additionally, with Workspace I can access internal applications without further authentication after I log in to the Horizon portal. Even my entitled virtual desktops are there!

While Mason and Ted discuss the IT challenges of mobile computing in this blog, we at VMware understand those challenges because "we eat our own dogfood." In this post I'd like to share some key sizing concepts for each of the Horizon components and note the sizes VMware IT used to deploy Horizon Workspace for its 13,000+ employees.

Horizon Workspace is a vApp that, by default, consists of five virtual machines (VMs):

Let's go through each VM and see how to size it in each case (a rough sizing sketch in Python follows the list):

1.  Configurator VA (virtual appliance): This is the first virtual appliance to be deployed. It is used to configure the vApp from a single point and deploy and configure the rest of the vApp. The Configurator VA is also used to add or remove other Horizon Workspace virtual appliances. There can only be one Configurator VA per vApp.

  • 1x Configurator VA is used: 2 vCPU, 2G Memory

2.  Connector VA:  Enterprise deployments require more than one Connector VA to support different authentication methods, such as RSA SecurID and Kerberos SSO. To provide high availability when deploying more than one Connector VA, you must front-end the Connector VAs with a load balancer. Each Connector VA can support up to 30,000 users. Specific use cases, such as Kerberos, ThinApp integration, and View integration, require the Connector VA to be joined to the Windows domain.

  • 6x Connector VA is used. 2 vCPU, 4G Memory

3.  Gateway VA: The Gateway VA is the single namespace for all Horizon Workspace interaction. For high availability, place multiple Gateway VAs behind a load balancer. Horizon Workspace requires one Gateway VA for every two Data VAs, or one Gateway VA for every 2,000 users.

  • 4x Gateway VA is used: 2 vCPU, 8G Memory

4.  Management VA (also known as the Service VA): Enterprise deployments require two or more Service VAs. Each Service VA can handle up to 100,000 users.

  • 2x Service VA is used: 2 vCPU, 6G Memory (1 for HA)

5.  Data VA: Each Data VA can support up to 1,000 users. At least three Data VAs are required; the first Data VA is a master data node, and the others are user data nodes. Each user data node requires its own dedicated volume. In proof-of-concept or small-scale pilot scenarios, you can use a Virtual Machine Disk (VMDK). For production, you must use NFS.

  • 11x Data VA is used: 6 vCPU, 32G Memory

6.  Database: Workspace supports only Postgres. For enterprise deployments, the best practice is to use an external Postgres database.

  • 2x Postgres Server is used: 4 vCPU, 4G Memory (1 for replication)

7.  MS Office Preview Server: requires Windows 7 Enterprise or Windows Server 2008 R2 Standard; MS Office 2010 Professional (64-bit); an admin account with permissions to create local accounts; and UAC disabled. It performs real-time conversion of documents.

  • 3x MS Office Preview Server is used: 4 vCPU, 4G Memory
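As a rough illustration (referenced above), here is a minimal Python sketch that mechanically applies the per-VA limits quoted in this list to an arbitrary user count. The function, the doubling of Connector VAs for HA, and the one-Connector-per-authentication-method assumption are mine for illustration, not VMware IT's sizing method; treat it as a starting point rather than official sizing guidance.

```python
import math

def size_horizon_workspace(users, auth_methods=1, ha=True):
    """Rough Horizon Workspace vApp sizing from the per-VA limits quoted
    in this post. Illustrative only; validate any real design against the
    official sizing guides."""
    sizing = {}

    # Exactly one Configurator VA per vApp.
    sizing["configurator_va"] = 1

    # Data VAs: up to 1,000 users each, minimum of three
    # (one master data node plus user data nodes).
    data_vas = max(3, math.ceil(users / 1000))
    sizing["data_va"] = data_vas

    # Gateway VAs: one per two Data VAs, or one per 2,000 users,
    # whichever is larger; assume one extra behind the load balancer for HA.
    gateways = max(math.ceil(data_vas / 2), math.ceil(users / 2000))
    sizing["gateway_va"] = gateways + (1 if ha else 0)

    # Connector VAs: each supports up to 30,000 users, but assume at least
    # one per authentication method (SecurID, Kerberos SSO, ...), doubled
    # behind a load balancer when HA is required.
    connectors = max(math.ceil(users / 30000), auth_methods)
    sizing["connector_va"] = connectors * (2 if ha else 1)

    # Service (Management) VAs: up to 100,000 users each,
    # two or more for an enterprise deployment.
    sizing["service_va"] = max(2 if ha else 1, math.ceil(users / 100000))

    return sizing

# Example: a 13,000-user deployment in the same ballpark as VMware IT's.
print(size_horizon_workspace(13000, auth_methods=3))
```

Note that the output will not exactly match the counts VMware IT deployed above; production designs adjust for specific use cases, peak load, and operational preferences.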

 

If you want to learn more about real-world deployment experience and best practices for deploying the Horizon Suite, please contact your local VMware Professional Services team. They have the breadth of experience and technical ability to help you achieve your project goals, from planning and design to implementation and maintenance. Also, be on the lookout for upcoming Horizon reference guides from VMware. Good luck!

Jim Zhang joined VMware in November 2007 as a quality engineering manager for VMware View.  In 2011, he moved to Professional Services as a consultant and solution architect.  Jim has extensive experience in desktop virtualization and workspace solution design and delivery.

How Virtualizing Your Desktops Can Help You Protect Sensitive Data

By Jeremy Wheeler, VMware Professional Services Consultant

As Ted and Mason mentioned in their video post last week, today’s IT staff faces many challenges involving security, cost, risk, and governance. I’d like to address one particular challenge associated with those: how to manage data.

Let's consider a heavily regulated industry like health care. In a typical healthcare setting, if disaster strikes, hospitals risk losing extremely sensitive patient data, whether virtual or physical. In addition to implementing disaster recovery processes and large tape backups, IT staff must always ensure patient data doesn't fall into the wrong hands.

This is further complicated by today's trend toward workers using various devices, such as mobile phones and tablets, to perform daily job functions instead of doing everything on a single device. Employees need to be able to use the mobile device of their choice while still being able to securely access their work applications and documents.

VMware knows IT has plenty to worry about besides physical endpoint devices, so it provides tools to centralize data in the data center. When virtualized desktops are managed from the data center, rather than at the endpoints, IT departments can deliver consistent desktop performance, achieve the agility they desire, and reduce costs at the same time, thanks to single-image management with linked-clone technology.

For on-the-move users like healthcare professionals, VMware has solutions such as “follow-me desktop,” which provides physicians with rapid access to their workspace on kiosks across the hospital. Providing users with a single point of entry to their applications and documents is not only more convenient for the user, it’s also easier for IT to manage.

With VMware's AlwaysOn Point of Care architecture, VMware View pools are balanced between multiple sites, providing continuous uptime even if a major disaster takes out a data center. This works through a combination of load balancers, such as F5, and provisioning half the resources per pool.

When deploying VMware AlwaysOn Point of Care, companies typically run into challenges with the dynamics required to deploy the solution, especially around communities versus use cases. For instance, check out the chart below, which illustrates three user communities in the hospital setting:

 

For a successful VDI deployment, it is critical to define two categories: communities and use cases. Communities are defined at a high level, followed by use cases. When determining use cases, it's best to categorize them as power users, knowledge workers, task workers, and kiosk users, similar to what my co-worker TJ Vatsa outlined in his blog.

Once the communities and use cases have been identified, the next step is to size the VDI environment based on those use cases. In clinical use cases, nursing units may need access to applications that doctors won't need, or vice versa. Every application uses guest-level resources that, in turn, consume host resources. One way to offload these resources is VMware's ThinApp technology. The resources involved in deploying a VDI environment consist of compute, networking, storage, and security.

Parent images, sometimes called “Gold Images,” are typically created per use case. If the ER nurses don’t need specific applications installed on their virtual desktop, but physicians do, IT can use two different images.

Application streaming with VMware's ThinApp technology is a great way to save resources from a storage and performance perspective. Administrators can update a single application across an entire infrastructure with no impact on end users. A key element I've found when deploying Horizon View and ThinApp is the "health check": whenever you stream anything across the wire, you need to know how much bandwidth it uses.

Recently, I did some work for a large hospital that decided it wanted all of its applications streamed. After further investigation, I discovered there had been no assessment of the network before making this decision. ThinApp streaming is a great technology, but some key items need to be considered before deciding to stream. To start, I typically use Wireshark and watch packets while launching an application. The packet volume of the first launch determines the initial VMware ThinApp cache size; the packet volume of the second launch is the pre-cached ThinApp package size. Once these sizes are established, multiply them by the user count to determine the bandwidth needed.
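As a rough illustration of that sizing math, here is a small Python sketch; the launch sizes, user count, and five-minute launch window are hypothetical numbers you would replace with your own Wireshark measurements.

```python
def estimate_streaming_bandwidth(first_launch_mb, second_launch_mb,
                                 concurrent_users, launch_window_s=300):
    """Back-of-the-envelope ThinApp streaming bandwidth estimate based on
    the Wireshark measurements described above. All inputs are assumptions;
    measure your own packages and login patterns."""
    # First launch pulls the full initial cache across the wire.
    first_launch_total_mb = first_launch_mb * concurrent_users
    # Subsequent launches only pull the (smaller) pre-cached amount.
    steady_state_total_mb = second_launch_mb * concurrent_users

    def to_mbps(mb):
        return (mb * 8) / launch_window_s  # MB spread over the window -> Mbps

    return {
        "first_launch_peak_mbps": round(to_mbps(first_launch_total_mb), 1),
        "steady_state_peak_mbps": round(to_mbps(steady_state_total_mb), 1),
    }

# Example: a 120 MB first launch, 15 MB cached launch, and 200 clinicians
# logging in over a 5-minute shift change (all hypothetical figures).
print(estimate_streaming_bandwidth(120, 15, 200))
```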

Please reference this article for more information on the breakdown of use cases: http://pubs.vmware.com/view-51/index.jsp?topic=%2Fcom.vmware.view.planning.doc%2FGUID-DA16011C-6128-44FC-97DF-0E4FB66A0309.html

For an example of a healthcare case study using VMware technology, view Michael Hubbard’s video blog.

Sizing environments for these types of solutions can be tricky, and proper planning is critical. When building a project plan for VDI, it's necessary to consider disaster recovery both within a cluster and between multiple sites. With VMware Horizon View and ThinApp, any organization has the option to provide continuous uptime, and VMware Professional Services for End-User Computing is an ideal partner for this kind of project planning.


Jeremy Wheeler has extensive experience with VMware products and solutions. He has been in the IT field for 19 years and focuses on VMware View and the AlwaysOn Point of Care healthcare solution.

 

Staying Ahead in the Boom of the Mobile Workforce

Today’s IT department is inundated by new devices, new applications and new ways to work. It used to be that IT defined, provided and supported the device or endpoint; they defined the refresh or upgrade cycle; they assessed, procured and installed all the applications. Users had very little influence or input into what they used at work. Today, that’s all changed.

In this 2-part video blog, Ted Ohr, Sr. Director of Professional Services, and Mason Uyeda, Sr. Director of Technical Marketing and Enablement, discuss the incredible explosion around end-user computing and the mobile workforce, the challenges that IT faces, and what VMware is doing about it.

In this new landscape, users have choice and multiple devices, and IT has multiple ways to approach the trade-offs of control, agility, and cost. In Part 2, Ted and Mason highlight VMware's solution space for the customer: providing users access to the data and applications they need to get the job done.

With over 18 years of technology experience, Ted Ohr is the Senior Director of Americas Service Delivery, which includes Software Defined Data Center, Mobility, Project Management and Technical Account Management. In addition to driving services revenue growth in Latin America, he is also responsible for leading all aspects of service delivery, thought leadership and best practices for VMware’s Professional Services business for both North and Latin America, helping to ensure customer success and satisfaction.
Mason Uyeda joined VMware in November 2007 and leads technical and solution marketing for VMware’s end-user computing business, bringing more than 18 years of experience in strategy, product marketing, and product management. He is responsible for the development and marketing of solutions that feature such end-user computing technologies as desktop virtualization and workspace aggregation.

 

Don’t Leave Security Off the Table

By Bill Mansfield, VMware Professional Services Consultant

I find myself discussing non-technical issues with a large majority of my enterprise customers: brokering a truce between operational organizations that have evolved in their own silos and don't play well with others. In the early days of virtualization, it was difficult to get three key parties in the same room in large shops to hash out architectural requirements and operational processes. Networking, storage, and virtualization teams were typically at odds with each other for any number of reasons, and getting everyone to play nice was difficult. These days, it's primarily security that's left out of the room. A large government customer recently told me flat out, "We don't care about security," implying that it was another department's responsibility. Indeed, the SecOps (Security Operations) and security engineering teams had never been brought into a virtualization meeting in the seven years virtualization had been in house.

This segregation of the security team, whether intentional or not, causes serious problems during a security incident. Typically, SecOps has a view only into the core network infrastructure and some agent-based sensors that may or may not make it onto the VMs being investigated. Network sensors typically exist only at the edges of the network, and occasionally at the core in larger shops, and VM-to-VM traffic may or may not transit the physical network at any given time. For a long time, the ability to watch virtual switches was simply not available, and security teams got used to that. These days, all the traditional methods of monitoring and incident investigation are readily available within vSphere. The vSphere 5.1 Distributed Virtual Switch can produce NetFlow data for consumption by any number of tools, and RSPAN and ERSPAN can provide full remote network monitoring or recording. Inter-VM traffic is no longer invisible to security tools. Security teams just need to be involved and to hook their existing toolsets into the software-defined data center. There is no need to reinvent the wheel: sure, we can enhance capabilities, but first we need to get the security teams to the table and let them use the tools they already have.

So what are some typical questions from Security Operations about the software-defined data center? Some I can answer; some are still works in progress. All of them deserve their own write-ups.

How do we monitor the network?

  • Port mirroring has been around for a while, and NetFlow, RSPAN, and ERSPAN capabilities now let us work with a great number of industry-standard tools.

How do we securely log events?

  • SIEM integration is fairly straightforward via syslog or direct pulls from the relevant vSphere databases (a minimal syslog sketch follows below).
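As a quick illustration of how little plumbing that integration needs, here is a minimal Python sketch of a UDP syslog listener you might use to verify that ESXi hosts and vCenter are actually forwarding events before wiring them into the production SIEM collector. The port number and keyword list are assumptions for the sketch, not part of any VMware product.

```python
import socketserver

class SyslogHandler(socketserver.BaseRequestHandler):
    """Minimal UDP syslog receiver: a sanity check that hosts are
    forwarding events, not a replacement for a real SIEM collector."""
    def handle(self):
        # For UDP servers, self.request is (data, socket).
        data = self.request[0].decode("utf-8", errors="replace").strip()
        source_ip = self.client_address[0]
        # Flag the events a security team typically cares about first
        # (keywords are illustrative; tune them to your environment).
        keywords = ("permission", "login", "vim.event", "firewall")
        tag = "ALERT" if any(k in data.lower() for k in keywords) else "INFO"
        print(f"[{tag}] {source_ip}: {data}")

if __name__ == "__main__":
    # Port 514 requires root; 5140 is an unprivileged stand-in for testing.
    with socketserver.UDPServer(("0.0.0.0", 5140), SyslogHandler) as server:
        server.serve_forever()
```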

Where do we put IDS/IPS?

  • Leave the traditional edge monitoring in place, and enhance it with solutions inside the vSphere stack.
  • vSphere accommodates traditional agent-based IPS as well as a good number of agentless solutions via EPSec and NetX API integration.  Most of the major vendors have some amount of integration.

Can you accommodate segregation of duties?

  • vSphere and vCNS (vShield Manager) both provide role-based segregation and audit capabilities.

Can you audit against policy?

  • This is a big topic. We can audit host profiles and admin activity in vCenter, and we can audit almost anything, at all levels of the stack, with vCenter Configuration Manager.
  • We can baseline the network traffic of the enterprise with vADP (Application Dependency Planner, not to be confused with our backup API) and periodically check for deltas with vADP to find anomalous traffic.

What tools work with VMware to assist with forensics and incident management?

  • Again, this is another big topic. Guests are just data, and a VM doesn't know when a snapshot has been taken. I've worked with EnCase, CAINE, BackTrack, and other tools to examine things raw. Procedurally it's fairly simple: dd the files off the datastore and run them through one of the usual tools, and/or run the tool against copies of the VMDKs in question (a hashing sketch for those copies follows this list).
  • On the network side, tie ERSPAN to Wireshark and use traditional methodology. If you're feeling clever, you can look at live memory by recording a vMotion.
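One low-tech habit worth adding to that workflow: hash the copied VMDKs before and after analysis so you can show that the working copy was not altered. Here is a minimal sketch of that step under stated assumptions (file paths are hypothetical); it is general evidence-handling hygiene, not legal guidance.

```python
import hashlib
import json
import sys
from datetime import datetime, timezone
from pathlib import Path

def hash_evidence(path, chunk_size=1024 * 1024):
    """Compute a SHA-256 digest of a copied VMDK (or any evidence file)
    in chunks, plus basic metadata for the evidence log."""
    sha256 = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            sha256.update(chunk)
    return {
        "file": str(Path(path).resolve()),
        "sha256": sha256.hexdigest(),
        "size_bytes": Path(path).stat().st_size,
        "hashed_at_utc": datetime.now(timezone.utc).isoformat(),
    }

if __name__ == "__main__":
    # Usage (hypothetical file name): python hash_evidence.py suspect-flat.vmdk
    print(json.dumps(hash_evidence(sys.argv[1]), indent=2))
```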

How does legal chain of custody work for forensics on a VM?

  • I'm not a lawyer, and I'm not a certified forensic examiner, so I've always had someone from a firm that specializes in forensics, like Foundstone, with me to handle the paperwork.

Is this a comprehensive list? Not at all; it's just the beginning. The first step is getting Security to the table and getting them actively participating in design and operational decisions. With higher and higher consolidation ratios, it becomes more important than ever to instrument the virtual infrastructure. For larger organizations, tools like EMC NetWitness can provide insight into all aspects of the software-defined data center, and SIEM engines like ArcSight can correlate events and provide an enterprise-wide threat dashboard. For smaller organizations, there are plenty of open source tools available.

Security professionals, where are you seeing resistance while trying to do your jobs in the software-defined data center? What requirements are you finding most challenging to address? Let us know in the comments below!

Bill Mansfield has worked as a Senior Security Consultant at VMware for the past 6 years. He has extensive knowledge on transitioning traditional security tools into the virtual world.

 

Virtualize SAP – Risky or Not?

By Girish Manmadkar, VMware Professional Services Consultant

In years past, some IT managers were not ready to talk about virtualizing SAP, for both technical and political reasons. The picture is very different today, in part because of the increased emphasis on IT as a strategic function and the move toward the Software-Defined Data Center (SDDC).

Virtualization and the road to the SDDC extend the cost and operational benefits of server virtualization to all data center infrastructure: network, security, storage, and management. For example, peak workloads, such as running consolidated financial reports, are handled much more effectively thanks to streamlined provisioning. Integrating systems after company acquisitions is more easily managed thanks to the flexibility of virtualized platforms. And finally, customers are leveraging their virtualized SAP environments to add capabilities such as enhanced disaster recovery/business continuity or chargeback systems.

Many customers have been realizing virtualization benefits ever since they moved their SAP production workloads to the VMware platform. As IT budgets continue to shrink, the imperative to lower operating costs becomes more urgent, and virtualization can make a real difference. Server consolidation through virtualization translates directly into lower costs for power, cooling, and space, and boosts the organization's "green" profile in the bargain.

Organizations Benefit from Virtualizing SAP

The main requirement for any IT manager supporting an SAP environment is to ensure high availability: even a few minutes of downtime can cost real money, not to mention generate angry phone calls from executive management and frustrated users. VMware virtualization takes advantage of SAP's high-availability features to keep the SAP software running without interruption, and it helps keep those phone lines quiet.

Greenfield SAP deployments are a great way to build the environment right from the ground up using a building-block approach. You will quickly start seeing the flexibility, scalability, and availability benefits of the newly built environment on VMware.

Upgrades come in two scenarios:

  • A. SAP hardware refresh cycle
  • B. An SAP Application and/or database upgrade

Upgrades are part of every SAP landscape, and they can be complex, long-term efforts. Most of my customers who run SAP upgrades on their standard physical environment spend many hours or even days on provisioning, and that is if they have the hardware available at their disposal. In a virtual environment, however, provisioning can be executed in minutes, as can deprovisioning to reclaim resources back into the resource pool, which makes the upgrade process that much more streamlined and efficient. During an SAP upgrade, a very time- and cost-sensitive project, it is very important to provide the required resources to the development team in a timely manner.

Time to Move

Let's say you've decided to virtualize your SAP environment; now the question is timing. I have seen many customers take an SAP upgrade and/or a platform or hardware refresh as the opportunity to move to a virtual platform.

A planned SAP upgrade can be a good time to move. I have seen customers cash in on a planned move to SAP NetWeaver and other add-ons to virtualize their entire SAP landscape, saving more than half of their capital expenses.

A hardware refresh is also a great time to move. Many customers take advantage of the change in hardware to consider a migration to virtualization at the same time. Combining the hardware refresh and virtualization projects minimizes disruption and lets staff training for the new hardware and software be combined.

SAP Requirements: Security, Compliance and Disaster Recovery

Challenges like compliance and security policies often require substantial infrastructure changes that can highlight the inherent inflexibility of a traditional hardware platform and persuade top management to invest in infrastructure. Many customers have successfully implemented VMware solutions to ensure the security and compliance of their SAP environment so that they can realize the benefits of virtualization.

Disaster Recovery
A Business Continuity plan is imperative for many of our SAP customers. Disasters – a natural or man-made disaster severely impacts operation which impacts the bottom line. Which of course, is the reason why executives often order a review of the company’s disaster recovery/business continuity plans. VMware understands this importance and the risk which is addressed by VMware Site Recovery Manager product.

So is virtualizing the platform for your SAP environment too risky? All IT projects have risk; the question is whether it is riskier to pass up the benefits of virtualization. In my opinion, virtualizing SAP is not too risky, not if you follow the advice and methodology offered by my colleagues David Gallant (Business as Usual with Tier 1 Business Critical Applications? – Not!) and Eiad Al-Aqqad (Knowing Your Applications is Key to Successful Data Center Transformation). I ask you: if you haven't already virtualized your SAP environment, why not explore it now? There have been so many advances in technology and alliances that you can't ignore it any longer.

Girish Manmadkar is a veteran VMware SAP virtualization architect with extensive knowledge and hands-on experience with various SAP and VMware products, including various databases. He focuses on SAP migrations, architecture design, and implementation, including disaster recovery.

 

Knowing Your Applications is Key to Successful Data Center Transformation

By Eiad Al-Aqqad, VMware Professional Services Consultant

This decade has offered more data center transformation options than most IT professionals have been able to keep up with. Virtualization dramatically changed the way things were traditionally done in the data center; having the largest data center is no longer something to brag about, as it might be a symbol of inefficiency. Next, the cloud computing storm hit the data center, and while IT professionals were still digesting it, the software-defined data center concept evolved. While each of these transformations has offered great advantages to adopters, each had its own challenges, and quite frankly, planning was not optional for a successful implementation.

Planning is critical for data center transformation, and it does not stop at infrastructure planning; it extends to understanding your applications.

Most organizations are good at the infrastructure portion of planning but have difficulty planning their applications for transformation. I've witnessed many transformation efforts where the customer team had a hard time answering these simple questions:

  1. What are your application priorities?
  2. What are your applications' RPOs/RTOs, and how are you planning to achieve them?
  3. What are the security requirements of each of your applications?
  4. What does your application dependency map look like?

It is critical to know your applications well before starting any transformation effort, and the four questions above are a good start. While the first three can normally be answered by collecting bits and pieces from the right SMEs and business units, application dependency is more challenging and is what I want to focus on in this article. For more thoughts on workload classification, please check out my colleague David's post:  Business as Usual with Tier 1 Business Critical Applications? – Not!

Application dependency mapping has proved more challenging for many reasons, including:

  1. Application dependencies aren't static and can change on a daily basis.
  2. Most organizations have inherited legacy applications with very little documentation.
  3. Current change management systems, while helping to document changes, still lag when it comes to documenting application dependencies.
  4. Application dependencies are always filled with unexpected surprises that no one wants to admit, like a critical application depending on a database running on a PC hidden under a developer's desk.

While application dependency planning without the right tools can be challenging, the point is that thorough planning and investigation are required before any data center transformation for a successful end game. Tools definitely help, but even more important is making sure you ask yourself the questions above as the very first step.
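As a toy illustration of what a dependency-mapping exercise produces, here is a short Python sketch that folds observed connection records into a per-application dependency map. The application names, ports, and the "database under a developer's desk" entry are hypothetical; tools like ADP do this at scale from real traffic data rather than hand-entered tuples.

```python
from collections import defaultdict

def build_dependency_map(connections):
    """Fold observed (source_app, destination_app, port) tuples -- e.g.
    exported from netstat, NetFlow, or a discovery tool -- into a simple
    per-application dependency map."""
    dependencies = defaultdict(set)
    for src_app, dst_app, port in connections:
        if src_app != dst_app:
            dependencies[src_app].add((dst_app, port))
    return dependencies

# Hypothetical observations, including the undocumented dependency on a
# database running on a PC under a developer's desk.
observed = [
    ("web-frontend", "erp-app", 8000),
    ("erp-app", "erp-db", 1433),
    ("erp-app", "dev-pc-db", 5432),   # the unpleasant surprise
    ("erp-app", "erp-db", 1433),      # duplicates collapse automatically
]

for app, deps in build_dependency_map(observed).items():
    print(app, "->", sorted(deps))
```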

The good news is the availability of tools and services that help automate the process of creating an accurate application dependency map of your environment. ADM and the Virtualization Assessment service offered by VMware (which includes the use of Capacity Planner and Application Dependency Planner (ADP)) can be quite handy for creating an application dependency map for the applications in your environment. For more information about ADP, please visit:  My VMware Application Dependency Planner Post

Eiad Al-Aqqad is a Senior Consultant in the SDDC Professional Services practice. He has been an active consultant using VMware technologies since 2006. He is a VMware Certified Design Expert (VCDX #89), as well as an expert in VMware vCloud, vSphere, and SRM.

 

Business as Usual with Tier 1 Business Critical Applications? – Not!

By David Gallant, VMware Professional Services Consultant

OK, so you've decided to virtualize your Tier 1 business critical applications. Awesome, that's great news. The daunting question is, "Where do you start?" As a VMware PS consultant, I see customers go through this process every day; some get us involved when that question comes up, others much later. I can say with certainty that earlier is always better than later. Tier 1 application design and architecture is hardly ever business as usual, but it had better be business as usual when you finish!

So, where do you start?

Without a doubt, I always start with something I call "workload classification." It's the phase where the virtualization architect or administrator works with the application teams to understand three aspects of the enterprise application architecture:

  1. Application dependency planning (Enterprise Architecture)
  2. Understanding the performance profile
  3. Defining the security profile

We will explore these tenets more deeply in upcoming blogs this month, so I'll start by talking about the core classification work.

When classifying workloads for virtualization, the first instinct is to collect as much data as possible. That would be a mistake. Instead, think about the four components we measure for vSphere: compute (CPU and memory), storage, and network. I recommend collecting data only on these areas to start, as it makes the data much simpler to gather and analyze.

For CPU, memory, storage, and network, collect the following:

  • CPU: collect utilization by percentage and by MHz at the server level, the instance/process level, and the database level when measuring databases.
  • Memory: collect utilization by percentage and in bytes (KB, MB, or GB).
  • Storage: collect IOPS, throughput, storage consumed, and growth rate.
  • Network: collect utilization by percentage, keeping in mind that you need to know the link speeds of the source and target to match them up. If you want to go deep, use a tool like Wireshark to measure individual applications.

Collect the data for a period of time, typically four weeks. I like to use quarter-end periods when possible so I can see trends in larger data spikes; fiscal year end is the best time, especially when trying to classify finance applications like SAP ECC. Also think about the data collection interval, that is, how frequently you grab a data point. I typically use one-minute intervals for most workloads, but a smaller interval may be necessary for a high-performance, low-latency application. If you use a smaller interval, reduce the collection period to limit the amount of data you'll have to analyze, and instead consider two or more collection periods.

Once you have your data, analyze it against the target hosts' specifications to determine how many hosts are required and some initial placement strategies. Remember that vSphere DRS will help with final placement and keep the load balanced, so think of this as a theoretical exercise to help architect and design the environment.
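As a back-of-the-envelope illustration of that analysis, here is a minimal Python sketch that turns per-workload peaks into a host count. The host specifications, the workload figures, and the 20% headroom reservation are all assumptions for the example, not a substitute for a real capacity study.

```python
import math

def estimate_hosts(workloads, host_cpu_mhz, host_mem_gb, headroom=0.8):
    """Rough host-count estimate from workload-classification peaks.
    'workloads' is a list of dicts with peak cpu_mhz and mem_gb values;
    'headroom' keeps 20% of each host free for HA and spikes (assumed)."""
    total_cpu = sum(w["cpu_mhz"] for w in workloads)
    total_mem = sum(w["mem_gb"] for w in workloads)

    hosts_for_cpu = math.ceil(total_cpu / (host_cpu_mhz * headroom))
    hosts_for_mem = math.ceil(total_mem / (host_mem_gb * headroom))
    # The constrained resource (CPU or memory) drives the host count.
    return max(hosts_for_cpu, hosts_for_mem)

# Hypothetical example: three application servers and one database VM
# against a host with 2 x 10 cores at 2.6 GHz and 256 GB RAM.
sample = [
    {"cpu_mhz": 18000, "mem_gb": 96},
    {"cpu_mhz": 12000, "mem_gb": 64},
    {"cpu_mhz": 12000, "mem_gb": 64},
    {"cpu_mhz": 22000, "mem_gb": 128},
]
print(estimate_hosts(sample, host_cpu_mhz=2 * 10 * 2600, host_mem_gb=256))
```

DRS still owns final placement, as noted above; the point of the exercise is simply to know whether you are buying four hosts or forty.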

After the workload classification study is complete, I always compare my results to an application dependency plan; the two studies together provide an excellent basis for a migration or re-platforming study. Another piece of the puzzle is defining the security profile of the target environment, comparing and contrasting the existing security profile with the future state. There are tremendous advantages to implementing proper security in the vSphere environment, which we will describe in a future blog this month.

I'll leave you with some final thoughts on workload classification. If done ahead of time, this process will not only guide the design of the future environment, it will probably help define a new, optimized way to deliver your business critical applications. You will probably find business-level design flaws in your current environment that, when corrected, will let you more easily manage, maintain, optimize, and scale up and/or out in the new environment.

If you’re thinking of virtualizing your business critical applications and you’re not sure where to start, contact your account team and get us involved today.

David Gallant has worked at VMware for over two years and has more than 20 years of experience in the IT industry. He specializes in virtualizing SAP, Microsoft SQL Server, and Oracle (non-RAC).

 

The Proof is in the Impact

Today’s challenging business environment is a convergence of many changes. In this new business paradigm, IT executives are faced with determining how to best direct their staff, how to redesign IT processes, and how to use technology to grow businesses and/or fundamentally shift business models. Anticipating and staying abreast of these challenges requires thought leadership and seamless technical capabilities.

In this video, Michael Hubbard, Sr. Director of Accelerate and Services Sales for the Americas, discusses the value of gleaning best practices and insights from our consulting experts on virtualization, end-user computing, cloud computing, and more on this blog. He also shares a customer success story in which VMware delivered an impactful AlwaysOn Point of Care solution for a major hospital.

Check back soon for more stories, best practices and insights.

Part II: Storage Boon or Bane – VMware View Storage Design Strategy & Methodology

By TJ Vatsa, VMware EUC Consultant

INTRODUCTION

Welcome to Part II of the VMware View Storage Design Strategy and Methodology blog. This post continues Part I, which can be found here. In the previous post, I listed some of the most prevalent challenges that impede a predictable VMware View storage design strategy. In this post, I will describe some of the successful storage design approaches the VMware End User Computing (EUC) Consulting practice uses to overcome those challenges.

I'd like to reemphasize that storage is crucial to a successful VDI deployment. If the VDI project falls prey to the challenges listed in Part I, storage will certainly seem to be a "bane." But if the design strategy recommended below is followed, you may be surprised to find VDI storage being a "boon" for a scalable and predictable VDI deployment.

With that in mind, let’s dive in. Some successful storage design approaches I’ve encountered are the following:

1.  PERFORMANCE Versus CAPACITY
Recommendation: "First performance, then capacity."

Often, capacity seems more attractive than performance. But is it really? Let's walk through an example.

  • a) Let's say vendor "A" is selling you a storage appliance, "Appliance A," with a total capacity of 10 TB delivered by 10 SATA drives of 1 TB each.
  • b) On "Appliance A," each SATA drive delivers approximately 80 IOPS, so the total delivered by the appliance is 800 IOPS (10 drives x 80 IOPS).
  • c) Now let's say vendor "B" is selling you a storage appliance, "Appliance B," that also has a total capacity of 10 TB, but delivered by 20 SATA drives of 0.5 TB each. (Note: "Appliance B" may be more expensive, as there are more drives than in "Appliance A.")
  • d) For "Appliance B," assuming the SATA drive specifications are the same as those of "Appliance A," you should expect 1,600 IOPS (20 drives x 80 IOPS).

It's mathematically clear that "Appliance B" delivers twice the IOPS of "Appliance A," and more storage IOPS invariably turns out to be a boon for a VDI deployment. Another important point: employing a higher storage tier also ensures higher IOPS. Case in point, replacing the SATA drives in the example above with SAS drives will certainly provide higher IOPS, and SSD drives, while expensive, will provide higher IOPS still.

 

2.  USER SEGMENTATION
Recommendation: Intelligent user segmentation that does not assume a "one size fits all" approach.

As explained in Part I, taking a generic per-user IOPS figure, say "X," and multiplying it by the total number of VDI users in an organization, say "Y," may result in an oversized or undersized storage array design. This approach may prove costly, either up front or at a later date.

The recommended design approach is to intelligently categorize users' IOPS as small, medium, or high based on the load a given category of users generates across the organization. The common industry nomenclature for VDI users is:

a)     Task Workers: associated with small IOPS.
b)     Knowledge Workers: associated with medium IOPS.
c)     Power Users: associated with high IOPS.

With these guidelines in mind, let me walk you through an example. Let's say that Customer A's Silicon Valley campus location has 1,000 VDI users, with the following user split:

a)     15% Task Workers with an average of 7 IOPS each
b)     70% Knowledge Workers with an average of 15 IOPS each
c)     15% Power Users with an average of 30 IOPS each

The resulting calculation of total estimated IOPS required will look similar to Table 1 below.
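As a quick check on that arithmetic, here is a minimal Python sketch that reproduces a Table 1 style calculation using the split, per-user IOPS, and 30% growth/buffer figures quoted in this example; the function and the numbers are illustrative, and your own split should come from an assessment tool.

```python
def total_vdi_iops(user_count, segments, growth_buffer=0.30):
    """Sum per-segment steady-state IOPS, then apply a capacity/growth
    buffer. 'segments' maps a label to (share_of_users, iops_per_user)."""
    steady_state = sum(
        user_count * share * iops for share, iops in segments.values()
    )
    return steady_state, steady_state * (1 + growth_buffer)

# 1,000 users at the Silicon Valley campus, split as described above.
segments = {
    "task_workers":      (0.15, 7),    # 150 users x 7 IOPS
    "knowledge_workers": (0.70, 15),   # 700 users x 15 IOPS
    "power_users":       (0.15, 30),   # 150 users x 30 IOPS
}
steady, with_buffer = total_vdi_iops(1000, segments)
print(steady, with_buffer)   # 16050.0 steady state, ~20865.0 with buffer
```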

Key Takeaways:

      1. It is highly recommended to consult with the customer and to make use of a desktop assessment tool to determine the user distribution (split) as well as the average IOPS per user segment.
      2. Estimated capacity growth and buffer percentage is assumed here to be 30%. This may vary for your customer based on industry domain and other factors.
      3. This approach to IOPS calculation is more predictable because it is based on user segmentation specific to a given customer's desktop usage.
      4. You can apply this strategy to customers in healthcare, financial services, insurance, manufacturing, and other industry domains.
3.  OPERATIONS
Recommendation: "Include operational IOPS related to storage storms."

It is highly recommended to proactively account for IOPS related to storage storms. Any lapse can result in a severely painful VDI user experience during a patch storm, boot storm, or anti-virus (AV) storm.

Assuming that a desktop assessment tool is used for the analysis, it is recommended to analyze the user percentage targeted during each of the storm operations listed above.

For instance, if the desktop operations team pushes OS, application, or AV patches in batches of 20% of the total user community, and the estimated IOPS during the patch window is, say, three times the steady-state IOPS (explained in Part I), it is prudent to add another attribute for operational IOPS to Table 1 above.

A similar strategy should also be employed to account for boot and log-off storms.
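As a rough illustration of that operational attribute, here is a small sketch along the same lines; the 20% batch size and the roughly 3x steady-state multiplier are the figures assumed in the example above, and both should be replaced with numbers from your own assessment.

```python
def storm_iops(total_users, steady_state_iops_per_user,
               batch_fraction=0.20, storm_multiplier=3):
    """Extra IOPS to budget while a patch/AV batch runs: the fraction of
    users in the batch, at a multiple of their steady-state IOPS."""
    users_in_storm = total_users * batch_fraction
    return users_in_storm * steady_state_iops_per_user * storm_multiplier

# Example: 1,000 users averaging ~16 IOPS at steady state (assumed).
print(storm_iops(1000, 16))   # -> 9600 additional IOPS during the batch
```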

I hope you will find this information handy and useful during your VDI architecture design and deployment strategy.

Until next time. Go VMware!

TJ Vatsa has worked at VMware for the past three years and has over 19 years of expertise in the IT industry, mainly focusing on enterprise architecture. He has extensive experience in professional services consulting, cloud computing, VDI infrastructure, SOA architecture planning and implementation, functional/solution architecture, and technical project management related to enterprise application development, content management, and data warehousing technologies.