
Tag Archives: VDI

Control-Alt-Delete in the World of VDI

by Mike Erb

When I was still working as an Escalation Engineer for VMware® Global Support, there was a time-honored tradition among the Broomfield center's EUC support group: If you left your computer unlocked and walked out of eyesight, you'd always come back to a surprise. The HR folks would probably be unhappy at such unauthorized use, but a quick flip of the screen with Ctrl-Alt-Up and a dash back to your desk, leaving their display inverted and the surrounding engineers glancing over for the inevitable reaction, was worth the risk.


Composite USB Devices Step by Step

By Jeremy Wheeler

Users have a love/hate relationship with VDI: they love the ability to access apps and information from any device, at any time, but they hate the usual trade-offs in performance and convenience. If you’re using VMware Horizon View, you’ve already overcome a huge acceptance hurdle, by providing a consistently great experience for knowledge workers, mobile workers and even 3D developers across devices, locations, media and connections.

But sometimes, peripherals don't behave as expected in a VDI environment, which can lead to user frustration. For example, when someone wants to use a Microsoft LifeCam Cinema camera, they naturally expect to just plug it into a USB port and have it auto-connect to their VDI session. But if anyone in your organization has tried to do this, you already know that's not the case. Fortunately, there is an easy workaround to fix the problem.

Download the white paper for the VMware-tested fix to this common problem.

 


Jeremy Wheeler is an experienced Consulting Architect for VMware's Professional Services Organization, End User Computing, specializing in the VMware Horizon Suite product line and vRealize products such as vROps and Log Insight. Jeremy has over 18 years of experience in the IT industry. In addition to his past experience, Jeremy has a passion for technology and thrives on educating customers. Jeremy has 7 years of hands-on virtualization experience deploying full life-cycle solutions using VMware, Citrix, and Hyper-V. Jeremy also has 16 years of experience in computer programming in various languages ranging from basic scripting to C, C++, PERL, .NET, SQL, and PowerShell.

Jeremy Wheeler has received acclaim from several clients for his in-depth and varied technical experience and exceptional hands-on customer satisfaction skills. In February 2013, Jeremy received VMware's Spotlight award for his outstanding persistence and dedication to customers, and he was nominated again in October 2013.

So You Virtualized Your Desktop Environment. Now what?

By Mike Marx

Most of my customers start with a low-risk user group consisting of a large number of users with identical application requirements. This is the common scenario when starting out on the virtual desktop infrastructure (VDI) journey and ‘testing the waters.’ With proper design efforts, initial implementations are highly successful.

I spend the majority of my consulting effort working with customers helping them create their initial VDI design. Designs can be simple or complicated, but they all utilize a common technical approach for success: understanding user requirements, and calculating infrastructure sizing. But I’m not blogging about technical calculations or infrastructure sizing. Instead I would like to address a VDI design challenge customers face as they expand their VDI design: user application assignments.

While resource requirements are simple to assess, calculate and scale, application delivery becomes increasingly challenging as more users are added to the design. VDI administrators struggle to manage increasing numbers of desktop users – each having unique application requirements.

Applications are easy to add to a large static group of user desktops using linked-clones. But when unique user groups are introduced, and application requirements change, administrators are confronted with the challenge of maintaining a large number of small desktop pools – or impacting large groups of users in order to change an application assignment.

So how do we design an effective stateless desktop and maintain application diversity amongst unique user groups? VMware Horizon AppVolumes is the answer.

Using AppVolumes, VDI designs become simple to understand and implement. Once applications are effectively removed from the VDI desktop, VDI administrators are left with a simple stateless desktop. But users aren’t productive with an empty desktop operating system; they need applications – and lots of them.

Without going into deep technical detail (there are excellent blogs on this topic already) AppVolumes captures the application files, folders and registry components, and encapsulates them into a transportable virtual disk called an AppStack. As the user logs on to a stateless desktop, the assigned AppStack(s) will automatically attach and merge the user’s applications with the desktop virtual machine.

Now users are presented with a stateless desktop that is uniquely assembled with all of their applications. AppVolumes’ attached applications interact with other applications— and the operating system—as if they were natively installed, so the user experience is seamless.

Now that applications are no longer an impediment to VDI designs, VDI administrators are able to support large groups of users and application requirements using the same stateless desktop pool. By following the KISS principle: “Keep It Simply Stateless,” AppVolumes will open the door to new design possibilities and wider adoption by users and IT administrators.


Mike Marx is a Consulting Architect with the End User Computing group at VMware. He has been an active consultant using VMware technologies since 2005. His certifications include VCAP-DTD, VCP-DT, VCA-WM, VCA-DT, and VCP2-5, and he is an expert in VMware View, ThinApp, vSphere, and SRM.

VDI Current Capacity Details


By Anand Vaneswaran

In my previous post, I provided instructions on constructing a high-level "at-a-glance" VDI dashboard in vRealize Operations for Horizon, one that would aid in troubleshooting scenarios. In the second post of this three-part blog series, I will talk about constructing a custom dashboard that takes a holistic view of the vSphere HA clusters that run my VDI workloads, in an effort to understand current capacity. The ultimate objective is not only to understand current capacity, but also to identify trends that help me forecast future capacity. In this example, I'm going to gather information on the following:

  • Total number of running hosts
  • Total number of running VMs
  • VM-LUN densities
  • Usable RAM capacity (in a N+1 cluster configuration)
  • vCPU to pCPU density (in a N+1 cluster configuration)
  • Total disk space used, as a percentage

You can either follow my lead and recreate this dashboard step-by-step, or simply use this as a guide and create a dashboard of your own for the capacity metrics you care about most. In my environment, I have five (5) clusters consisting of full-clone VDI machines and three (3) clusters consisting of linked-clone VDI machines. I have decided to incorporate eight (8) "Generic Scoreboard" widgets in a two-column custom dashboard. I'm going to populate each of these "Generic Scoreboard" widgets with the relevant stats described above.

anand_vdi_1

Once my widgets have been imported, I will rearrange my dashboard so that the left side of the screen shows the full-clone clusters and the right side shows the linked-clone clusters. As part of this exercise, I determined that I needed to create super metrics to calculate the following:

  • VM-LUN densities
  • Usable RAM capacity (in a N+1 cluster configuration)
  • vCPU to pCPU density (in a N+1 cluster configuration)
  • Total disk space used, as a percentage

With that being said, let’s begin! The first super metric I will create will be called SM – Cluster LUN Density. I’m going to design my super metric with the following formula:

sum(This Resource:Deployed|Count Distinct VM)/sum(This Resource:Summary|Total Number of Datastores)

anand_vdi_2

In this super metric I will find out how many VMs reside in my datastores on average. The objective is to make sure I'm abiding by the recommended configuration maximum for the number of virtual machines residing on a single VMFS volume.
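As a quick illustration of what that density number means, here is a minimal sketch in Python; the VM and datastore counts, and the maximum I check against, are made-up values for the example, not vRealize Operations output:

# Rough sketch of the VM-to-LUN density check (illustrative numbers only)
deployed_vms = 480        # Deployed|Count Distinct VM for the cluster
datastores = 20           # Summary|Total Number of Datastores for the cluster
max_vms_per_lun = 64      # whatever maximum your design standardizes on

density = deployed_vms / datastores
print(density, "VMs per datastore on average")            # 24.0
if density > max_vms_per_lun:
    print("Average density exceeds the configuration maximum")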

The next super metric I will create is called SM – Cluster N+1 RAM Usable. I want to calculate the usable RAM in a cluster in an N+1 configuration. The formula is as follows:

(((sum(This Resource:Memory|Usable Memory (KB))/sum(This Resource:Summary|Number of Running Hosts))*.80)*(sum(This Resource:Summary|Number of Running Hosts)-1))/1048576

anand_vdi_3

Okay, so clearly there is a lot going on in this formula. Allow me to try to break it down and explain what is happening under the hood. I’m calculating this stat for an entire cluster. So what I will do is take the usable memory metric (installed) under the Cluster Compute Resource Kind. Then I will divide that number by the total number of running hosts to give me the average usable memory per host. But hang on, there are two caveats here that I need to take into consideration if I want an accurate representation of the true overall usage in my environment:

1) I don't want my hosts running at more than 80 percent capacity when it comes to RAM utilization; I always want to leave a little buffer. So my utilization factor will be 80 percent, or .8.

2) I always want to account for the failure of a single host (in some environments, you might want to factor in the failure of two hosts) in my cluster design so that compute capabilities for running VMs are not compromised in the event of a host failure. I'll want to incorporate this N+1 cluster configuration design in my formula.

So, I will take my overall usable, or installed, memory (in KB) for the cluster, divide that by the number of running hosts in the cluster, then multiply that result by the .8 utilization factor to arrive at a number (let's call it x); this is the amount of real usable memory I have per host. Next, I'm going to take x and multiply it by the total number of hosts minus 1, which will give me y. This takes into account my N+1 configuration. Finally, I'm going to take y, still in KB, and divide it by 1,048,576 (1024 × 1024) to convert it to GB and get my final result, z.
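To make the x, y, and z steps concrete, here is a quick Python sketch using illustrative numbers of my own (an eight-host cluster with 256 GB per host); it mirrors the super metric, it is not output from it:

# Sketch of the SM - Cluster N+1 RAM Usable math (illustrative numbers only)
usable_memory_kb = 8 * 256 * 1024 * 1024   # cluster usable memory in KB (8 hosts x 256 GB)
running_hosts = 8

per_host_kb = usable_memory_kb / running_hosts   # average usable memory per host
x = per_host_kb * 0.80                           # apply the 80 percent utilization factor
y = x * (running_hosts - 1)                      # reserve one host for N+1
z = y / (1024 * 1024)                            # convert KB to GB
print(round(z, 1), "GB of real usable RAM")      # 1433.6 GB in this example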

The next super metric I will create is called SM – Cluster N+1 vCPU to Core Ratio. The formula is as follows:

sum(This Resource:Summary|Number of vCPUs on Powered On VMs)/((sum(This Resource:CPU Usage|Provisioned CPU Cores)/sum(This Resource:Summary|Total Number of Hosts))*(sum(This Resource:Summary|Total Number of Hosts)-1))
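This one mirrors the N+1 idea from the RAM calculation: take the average number of physical cores per host, assume one host is unavailable, and divide the remaining cores into the vCPU count of the powered-on VMs. Here is a quick Python sketch with illustrative numbers of my own:

# Sketch of the SM - Cluster N+1 vCPU to Core Ratio math (illustrative numbers only)
vcpus_powered_on = 1600    # Summary|Number of vCPUs on Powered On VMs
provisioned_cores = 160    # CPU Usage|Provisioned CPU Cores for the cluster
total_hosts = 8            # Summary|Total Number of Hosts

cores_per_host = provisioned_cores / total_hosts      # 20 cores per host
n_plus_1_cores = cores_per_host * (total_hosts - 1)   # 140 cores with one host reserved
ratio = vcpus_powered_on / n_plus_1_cores
print(round(ratio, 2), "vCPUs per physical core")      # 11.43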

anand_vdi_4

anand_vdi_5

The final super metric covers the last item on my list: total disk space used as a percentage. The formula is fairly self-explanatory: I'm taking the total space used for the datastore cluster and dividing that by the total capacity of that datastore cluster. This gives me a number greater than 0 and less than 1, so I multiply it by 100 to produce a percentage.

Once I have the super metrics I need, I will attach them to a package called SM – Cluster SuperMetrics.

anand_vdi_6

The next step would be to tie this package to current Cluster resources as well as Cluster resources that will be discovered in the future. Navigate to Environment > Environment Overview > Resource Kinds > Cluster Compute Resource. Shift-select the resources you want to edit, and click on Edit Resource.

anand_vdi_7

Click the checkbox to enable "Super Metric Package," and from the drop-down select SM – Cluster SuperMetrics.

anand_vdi_8

To ensure that this SuperMetric package is automatically attached to future Clusters that are discovered, navigate to Environment > Configuration > Resource Kind Defaults. Click on Cluster Compute Resource, and on the right pane select SM – Cluster SuperMetrics as the Super Metric Package.

anand_vdi_9

Now that we have created our super metrics and attached the super metric package to the appropriate resources, we are ready to begin editing our "Generic Scoreboard" widgets. I will show how to edit two widgets (one for a full-clone cluster and one for a linked-clone cluster) with the appropriate data and show their output. We will then replicate the same procedure to make sure we hit every unique full-clone and linked-clone cluster. Here is an example of what the widget for a full-clone cluster should look like:

anand_vdi_10

And here’s an example of what a widget for a linked-clone cluster should look like:

anand_vdi_11

Once we replicate the same process and account for all of our clusters, our end-state dashboard should resemble something like this:

anand_vdi_12

And we are done. A few takeaways from this lesson:

  • We delved into the concept of super metrics in this tutorial. Super metrics give you the ability to manipulate metrics and display exactly the data you want. In our examples we created some fairly involved formulas, but a very simple example of where a super metric is useful is memory: vRealize Operations Manager displays memory metrics in KB, so a one-line super metric such as sum(This Resource:Memory|Usable Memory (KB))/1048576 is all it takes to display the value in GB.
  • Obviously, every environment is configured differently and therefore behaves differently, so you will want to tailor the dashboards and widgets according to your environment needs, but at the very least the above examples can be a good starting point to build your own widgets/dashboards.

In my next tutorial, I will walk through the steps for creating a high-level "at-a-glance" VDI dashboard that your operations command center team can monitor. In most organizations, IT issues are categorized by severity and assigned to the appropriate parties by a central team that runs point on issue resolution, coordinating with different departments. What happens when a Severity 1 issue afflicts your VDI environment? How are these folks supposed to know what to look for before placing that phone call to you? This upcoming dashboard will make it very easy. Stay tuned!


Anand Vaneswaran is a Senior Technology Consultant with the End User Computing group at VMware. He is an expert in VMware Horizon (with View), VMware ThinApp, VMware vCenter Operations Manager, VMware vCenter Operations Manager for Horizon, and VMware Horizon Workspace. Outside of technology, his hobbies include filmmaking, sports, and traveling.

How-to: Create a vCOPS for View At-A-Glance High-Level VDI Dashboard

By Anand Vaneswaran

VDI environments are complex because there are so many moving parts. As a result, there is a real need for architects, admins, managers, or operations professionals to see a high-level breakdown of the most important stats—stats that are especially important when we receive that escalated phone call about an issue that could potentially affect a large number of users.

In this first post of a three-part blog series, I'll provide details about a high-level VDI custom dashboard in vCenter Operations Manager for View (renamed vCenter Operations Manager for Horizon when Horizon 6.0 was released). I'll also assume you're all well versed in VDI.

To start, some of the stats or information I deeply care about in my test environment are as follows:


  1. Viewing the number of tunneled connections that are coming in through my security servers.
  2. Viewing the overall health of my connection servers.
  3. Keeping tabs on the resources (CPU, RAM, Disk) of my most critical VDI servers (Connection and security servers, vCenter server, View Composer, etc.).
  4. Monitoring resources (CPU and RAM) on my ESXi hosts running VDI workloads. (I will go one step further and break it down into hosts for my full clone pools, and linked clone pools.)
  5. Finally, looking at my LUNs and keeping tabs on a number of metrics, most importantly VM-to-LUN densities.

When compiled together, the information listed above comprises the end-state dashboard I want to achieve. The dashboard will have two Generic Scoreboard widgets on either side to depict the number of user connections through my security servers and the workload percentage of my connection servers. In addition, two Health-Workload scoreboard widgets on either side will depict the health of the security and connection servers. The dashboard is set up so that when you click a particular object in a Generic Scoreboard widget, the corresponding Health-Workload widget is automatically populated with the health of that object.

Finally, I want four Heat Map widgets: one to provide information about critical server resources, two to give me updates on ESXi host resources, and one to give me details about VM-to-LUN densities. I chose to populate my dashboard with an assortment of these built-in Generic Scoreboard, Health-Workload, and Heat Map widgets because I find that these types of widgets provide the most efficient means of graphically conveying the state of an environment: in essence, a point-in-time snapshot of the environment.

Now, if you’re ready to build, get detailed, step-by-step instructions for creating the dashboard.


Anand Vaneswaran is a senior technology consultant with the End User Computing group at VMware. He is an expert in VMware Horizon (with View), VMware ThinApp, VMware vCenter Operations Manager, VMware vCenter Operations Manager for Horizon, and VMware Horizon Workspace. Outside of technology, his hobbies include filmmaking, sports, and traveling.

End User Computing 101: Network and Security

By TJ Vatsa, Principal Architect, VMware Professional Services


In my first post on the topic of End User Computing (EUC), I provided a few digestible tidbits around infrastructure, desktop and server power, and storage. In this post, we’ll go a bit further into the infrastructure components that affect user experience and how users interact with the VDI infrastructure. We’ll cover network and security, devices, converged appliances, and desktop as a service.

Let’s look a bit more closely at network and security first.

Network and Security

To ensure an acceptable VDI user experience, monitor the network's bandwidth, latency, and jitter. This means performing an appropriate network assessment by deploying monitoring tools to first establish a baseline. Once that's completed, you'll need to monitor network resources against those baselines. As with any network, high latency can negatively affect performance, though some components are more sensitive to it than others.

When deploying Horizon View desktops using the PC-over-IP (PCoIP) remote display protocol in a WAN environment, consider the Quality of Service (QoS) aspect. Ensure that the round-trip network latency is less than 250 ms. And know that PCoIP is a real-time protocol, so it operates just like VoIP, IPTV, and other UDP-based streaming protocols.

To make sure that PCoIP is properly delivered, it needs to be tagged in QoS so that it can compete fairly across the network with other real-time protocols. To achieve this objective, PCoIP must be prioritized above other non-critical and latency-tolerant protocols (for example, file transfers and print jobs). Failure to tag PCoIP properly in a congested network environment leads to PCoIP packet loss and a poor user experience, as PCoIP adapts down in response. For instance, tag and classify PCoIP as interactive real-time traffic. (Classify PCoIP just below VoIP, but above all other TCP-based traffic.)

To optimize network bandwidth, ensure that you have a full-duplex end-to-end network link. Consider segmenting PCoIP traffic via an IP Quality of Service (QoS) Differentiated Services Code Point (DSCP), a layer 2 Class of Service (CoS), or a virtual LAN (VLAN). When using a VPN, ensure that UDP traffic is supported.

Enterprise security for corporate virtual desktops is of paramount importance for a successful rollout of VDI infrastructure. It is highly recommended that an enterprise-scale, policy-based security management solution be used to define and enforce security policies within the enterprise.

Based on typical customer requirements, secure access to the VDI infrastructure is provisioned via the following user access modes:

  1. LAN Users: VDI users accessing virtual desktop infrastructure via the corporate LAN network.
  2. VPN Users: VDI users accessing corporate virtual desktop infrastructure via the VPN tunnel.
  3. Public Network Users: VDI users accessing virtual desktop infrastructure via the public network.

Use Case: VDI User Secure Access Modes

Enforcing authentication and authorization policies is a domain by itself, and is influenced by industry verticals. For instance, many hospitals prefer "tap-'n'-go" solutions to authenticate and authorize their clinical staff to access devices and Electronic Medical Record (EMR) applications. The regulatory compliance perspective should not be ignored either when it comes to industry verticals, such as HIPAA for the healthcare industry and PCI for the financial industry.

Note: The scenario depicted below is that of a typical public network user.

Infrastructure scenario

Horizon View infrastructure can be easily optimized to support any combination of secure VDI user access modes.

Devices

Based on the security policies and regulatory compliance standards prevalent within the enterprise, I highly recommend doing a thorough assessment of end-user devices and endpoints. You'll want to categorize your users based on desktop communities that support one or more types of endpoints. The VMware Horizon View client supports a variety of endpoints, including desktops, laptops, thin clients, zero clients, mobile devices, and tablets, across iOS, Android, Mac OS X, Linux, Windows, and HTML Access, just to name a few.

Converged Appliances

The converged appliances industry is rapidly and effectively maturing as more and more customers prefer converged appliances because they enable faster infrastructure deployment times. From an EUC infrastructure perspective, it's important to evaluate the converged appliance solutions available for your business scenarios.

Vendors are providing, and will continue to provide, customized and optimized solutions for EUC and business continuity and disaster recovery (BCDR) as x-in-a-box offerings, in which the required infrastructure components, hardware, and software have been validated and optimized for specific business scenarios.

Desktop as a Service (DaaS)

Some customers worry about EUC datacenter planning, infrastructure procurement, and deployment.

DaaS scenario

Look to hosted desktop services, such as Horizon DaaS, to address business requirements and use cases that revolve around development, testing, seasonal bursts, and even BCDR. DaaS can even provide a more economical alternative to traditional datacenter deployment. For instance, DaaS reduces your up-front costs and lowers your desktop TCO with predictable cloud economics that enable you to move from CapEx to OpEx in a predictable way.

Plus, users can access Windows desktops and applications from the cloud on any device, including tablets, smartphones, laptops, PCs, thin clients, and zero clients. DaaS solutions like Horizon DaaS desktops can also be tailored to meet the simplest or most demanding workloads, from call center software to CAD and 3D graphics packages.

In these first two posts, we’ve gotten a good handle on infrastructure, devices, and security. In my next post, I’ll cover mobility and BYOD along with applications and image management, and weave it all together with EUC project methodology.


TJ has worked at VMware for the past four years, with over 20 years of experience in the IT industry. At VMware TJ has focused on enterprise architecture and applied his extensive experience to Cloud Computing, Virtual Desktop Infrastructure, SOA planning and implementation, functional/solution architecture, enterprise data services and technical project management.

TJ holds a Bachelor of Engineering degree in Electronics and Communications from Delhi University and has attained multiple industry and professional certifications in enterprise architecture and technology platforms. TJ is a speaker and a panelist at industry conferences such as VMworld, VMware’s PEX (Partner Exchange) and BEAworld. His passion is the real-life application of technology to drive successful user experiences and business outcomes.

End User Computing 101 and Tips for Successful Deployments

By TJ Vatsa, Principal Architect, VMware Professional Services

The topic of End User Computing (EUC) is heating up. This is not only because our industry considers it a dynamic domain for tremendous innovation today, but also because it sees great potential for the future and is investing heavily in the space.

In this three-part blog series, I'll break the vast EUC landscape into digestible tidbits that focus on infrastructure, mobility and BYOD, and applications and image management, and I'll discuss typical EUC project scenarios and methodology.

My goal is to provide insight into the things you should consider for your own EUC deployment.

EUC Landscape

First Things First: Infrastructure

As soon as someone mentions EUC, the first thing that comes to mind is Virtual Desktop Infrastructure (VDI). The very fact that VDI is deployed in the datacenter, away from individual desktops, means that you must plan the underlying infrastructure in a systematic and thorough way.

At a minimum, this means allocating key infrastructure resources: compute, storage, network, and security. It is also imperative to deploy infrastructure resource assessment tools to establish a baseline for each of these infrastructure components.

Desktop and Server Power

Assuming that a baseline has been established for the compute resources in terms of CPU, clock speed, and memory requirements per desktop, it is important to choose a server configuration with the right processor, clock speed, and physical memory. In turn, this drives the correct consolidation ratio of virtual desktops per core and, ultimately, for the entire server.

Give careful attention to different use cases where specific workloads require different combinations of CPU, clock speed, and memory. You must ensure that you also plan for growth and seasonal/occasional bursts seen in those workloads historically.
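To illustrate how those per-desktop baselines and the growth buffer translate into a consolidation ratio, here is a back-of-the-envelope sketch in Python; every figure in it is an assumption for the example, not a recommendation:

# Back-of-the-envelope desktops-per-host estimate (all figures are example assumptions)
host_cores = 20            # physical cores per server
host_ghz_per_core = 2.6    # clock speed per core
host_ram_gb = 256          # physical memory per server

desktop_avg_ghz = 0.4      # baseline average CPU demand per desktop
desktop_ram_gb = 2         # baseline memory per desktop
growth_buffer = 0.2        # headroom reserved for growth and seasonal bursts

cpu_bound = (host_cores * host_ghz_per_core * (1 - growth_buffer)) / desktop_avg_ghz
ram_bound = (host_ram_gb * (1 - growth_buffer)) / desktop_ram_gb
desktops_per_host = int(min(cpu_bound, ram_bound))   # whichever resource runs out first
print(desktops_per_host, "desktops per host")        # 102 in this example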

For a typical Horizon View deployment, there are two categories of VMs (virtual machines) recommended for deployment inside the data center: one for management purposes and another for desktop purposes. Management VMs are mainly servers (connection brokers, databases, etc.) whereas the desktop VMs are the actual virtual desktops.

For a production deployment, VMware recommends creating two separate cluster types–Management Cluster(s) and Desktop Cluster(s)–to avoid any race conditions that might arise as a result of, say, competing workloads or operational maintenance.

Storage: Key to VDI Success

Having worked with many customers across many different industry verticals (healthcare, financial, entertainment services, and manufacturing) I’ve noticed that there’s one critical success factor in common: storage.

For more information about VDI storage and detailed insight into what is important for a successful VDI deployment, read these two blog posts:

Part I: Storage Boon or Bane – VMware View Storage Design Strategy & Methodology
Part II: Storage Boon or Bane – VMware View Storage Design Strategy & Methodology

In my next post, I’ll cover the remaining considerations around a successful VDI deployment, including network and security, converged appliances, and desktop as a service. Stay tuned!


TJ has worked at VMware for the past four years, with over 20 years of experience in the IT industry. At VMware TJ has focused on enterprise architecture and applied his extensive experience to Cloud Computing, Virtual Desktop Infrastructure, SOA planning and implementation, functional/solution architecture, enterprise data services and technical project management.

TJ holds a Bachelor of Engineering degree in Electronics and Communications from Delhi University and has attained multiple industry and professional certifications in enterprise architecture and technology platforms. TJ is a speaker and a panelist at industry conferences such as VMworld, VMware’s PEX (Partner Exchange) and BEAworld. His passion is the real-life application of technology to drive successful user experiences and business outcomes.

Slowing Down for Strategy Speeds Up the Move to Mobile

By Gary Osborne, Senior Solutions Product Manager – End User Computing

Today’s workers are more reliant on—and demanding of—mobility than ever before. They need personalized desktops that follow them from work to home. They need to connect from multiple devices through rich application interfaces. The challenge for IT organizations is that bring-your-own-device (BYOD) initiatives are often wrapped in, and encumbered by, tactical issues—perpetually pushing strategic discussion to the back burner.

Working hard, but standing still

By focusing on a tactical approach, many IT organizations find themselves on the BYOD treadmill—they get a lot of exercise but never really get anywhere!  Developing an overarching strategy before setting out on the journey provides much needed guidance and positioning along the way. This isn’t a step-by-step plan, but rather a clear vision of the business challenges being addressed and the value being delivered back to the organization. This vision, including direction, a clear definition of phased success, and defined checkpoints along the way, should be articulated and understood throughout the organization.

Getting your organization to buy into the importance of an overarching strategy can be a tough sell, especially if near-term goals are looming. But it will pay off many times over. According to a recent study by IBM, "Those IT organizations that treat mobile as both a high priority and a strategic issue are much more likely to experience the benefits that mobile can bring to an organization." The July report, Putting Mobile First: Best Practices of Mobile Technology Leaders, reveals a strong correlation between mobile success and establishing a strategic mobile vision, along with external help to implement it.

Take the time – but not too much

Those IT organizations that achieve measurable success with their VDI and BYOD initiatives found the right balance between too little time developing a sound strategy and the all-too-common "analysis paralysis" of taking too much time. We have worked with customers that have found that balance in part by keeping a clear focus on the business value that BYOD solutions can provide, and an eye toward what they need to achieve and deliver to the business to declare success.

Jumping straight to tactical activities and placing orders for "guestimated" infrastructure without knowing the strategy that will support it are two of the most common pitfalls I see leading to failed or stalled BYOD initiatives. By focusing on the value mobility can deliver to the business rather than getting bogged down in the technical details, a strategic exercise can be completed swiftly and deliberately, keeping pace with the speed of change in today's mobility landscape.


Gary Osborne is an IT industry veteran and is part of the VMware Global Professional Services engineering team responsible for the End User Computing Services Portfolio. Prior to his current role, he provided field leadership for the VMware End User Computing Professional Services practice for the Americas.

It All Starts Here: Internal Implementation of Horizon Workspace at VMware

By Jim Zhang, VMware Professional Services Consultant

VMware has had a dogfooding tradition since former CEO Paul Maritz instilled the practice of having VMware IT deploy VMware products internally for production use. As a VMware employee, I can understand some criticism of this practice, but I firmly believe it helps build and deliver a solid, quality product to the market.

Prior to the release of VMware's Horizon Suite, VMware IT provided Horizon Workspace to its employees in the production environment. It's very exciting! Right now, I can use my iPhone and iPad to access my company files without being tied to my desk. It is also very easy to share a folder and files with other colleagues, expanding our ability to collaborate and track various file versions. Additionally, with Workspace, I can access internal applications without further authentication after I log in to the Horizon portal. Even my entitled virtual desktops are there!

While Mason and Ted discuss the IT challenges of mobile computing in this blog, we at VMware understand these challenges because "we eat our own dogfood." In this post I'd like to share some key sizing concepts for each of the Horizon Workspace components and the sizes VMware IT used to deploy Horizon Workspace for its 13,000+ employees.

Horizon Workspace is a vApp that has five virtual machines (VMs) by default:

Let's go through each VM and see how to size it in each case:

1.  Configurator VA (virtual appliance): This is the first virtual appliance to be deployed. It is used to configure the vApp from a single point and deploy and configure the rest of the vApp. The Configurator VA is also used to add or remove other Horizon Workspace virtual appliances. There can only be one Configurator VA per vApp.

  • 1x Configurator VA is used. 2vCPU, 2G Memory

2.  Connector VA:  Enterprise deployments require more than one Connector VA to support different authentication methods, such as RSA SecurID and Kerberos SSO. To provide high availability when deploying more than one Connector VA, you must front-end the Connector VAs with a load balancer. Each Connector VA can support up to 30,000 users. Specific use cases, such as Kerberos, ThinApp integration, and View integration, require the Connector VA to be joined to the Windows domain.

  • 6x Connector VA is used. 2 vCPU, 4G Memory

3.  Gateway VA: The Gateway VA is the single namespace for all Horizon Workspace interaction. For high availability, place multiple Gateway VAs behind a load balancer. Horizon Workspace requires one Gateway VA for every two Data VAs, or one Gateway VA for every 2,000 users.

  • 4x Gateway VA is used: 2 vCPU, 8G Memory

4.  Management VA (also known as the Service VA): Enterprise deployments require two or more Service VAs. Each Service VA can handle up to 100,000 users.

  • 2x Service VA is used: 2vCPU, 6G Memory (1 for HA)

5.  Data VA: Each Data VA can support up to 1,000 users. At least three Data VAs are required. The first Data VA is a master data node, and the others are user data nodes. Each user data node requires its own dedicated volume. In proof of concept or small-scale pilot scenarios, you can use a Virtual Machine Disk (VMDK). For production, you must use NFS.

  • 11x Data VA is used: 6 vCPU, 32G Memory

6.  Database: Workspace only supports Postgres. For enterprise deployments, the best practice is to use an external Postgres database.

  • 2x Postgres Server is used: 4 vCPU, 4G Memory (1 for replication)

7.  MS Office Preview Server: Requires Windows 7 Enterprise or Windows Server 2008 R2 Standard; MS Office 2010 Professional (64-bit); an admin account with permissions to create local accounts; and UAC disabled. This server performs real-time conversion of documents for preview.

  • 3x MS Office Preview Server: 4vCPU, 4G Memory
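Tallying the counts and per-VM sizes listed above (my own arithmetic, not a figure published by VMware IT), the overall footprint of this internal deployment works out roughly as follows:

# Rough tally of the deployment above: (count, vCPU each, GB RAM each)
nodes = {
    "Configurator VA": (1, 2, 2),
    "Connector VA": (6, 2, 4),
    "Gateway VA": (4, 2, 8),
    "Service VA": (2, 2, 6),
    "Data VA": (11, 6, 32),
    "Postgres server": (2, 4, 4),
    "MS Office Preview Server": (3, 4, 4),
}
total_vcpu = sum(count * vcpu for count, vcpu, _ in nodes.values())
total_ram = sum(count * ram for count, _, ram in nodes.values())
print(total_vcpu, "vCPUs and", total_ram, "GB of RAM")   # 112 vCPUs and 442 GB of RAM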

 

If you want to learn more about the real deployment experience and best practices for deploying the Horizon Suite, please contact your local VMware Professional Services team. They have the breadth of experience and technical ability to help you achieve your project goals, from planning and design to implementation and maintenance. Also, be on the lookout for upcoming Horizon reference guides being released from VMware soon. Good luck!

Jim Zhang joined VMware in November 2007 as a quality engineering manager for VMware View.  In 2011, he moved to Professional Services as consultant and solution architect.  Jim has extensive experience in desktop virtualization and workspace solution design and delivery.

Part II: Storage Boon or Bane – VMware View Storage Design Strategy & Methodology

By TJ Vatsa, VMware EUC Consultant

Introduction

Welcome to Part II of the VMware View Storage Design Strategy and Methodology blog, a continuation of Part I, which can be found here. In the last post, I listed some of the most prevalent challenges that impede a predictable VMware View storage design strategy. In this post, I will articulate some of the successful storage design approaches employed by the VMware End User Computing (EUC) Consulting practice to overcome those challenges.

I'd like to reemphasize that storage is crucial to a successful VDI deployment. If a VDI project falls prey to the challenges listed in Part I, storage will certainly seem to be a "bane." But if the recommended design strategy below is followed, you may be surprised to find that storage becomes a "boon" for a scalable and predictable VDI deployment.

With that in mind, let’s dive in. Some successful storage design approaches I’ve encountered are the following:

1. PERFORMANCE versus CAPACITY. Recommendation: "First performance and then capacity."

Oftentimes, capacity seems more attractive than performance. But is it really? Let's walk through an example.

a) Let's say vendor "A" is selling you a storage appliance, "Appliance A," that has a total capacity of 10 TB, delivered by 10 SATA drives of 1 TB each.

b) On "Appliance A," let's say that each SATA drive delivers approximately 80 IOPS. So, for 10 drives, the total delivered by the appliance is 800 IOPS (10 drives * 80 IOPS).

c) Now let's say that vendor "B" is selling you a storage appliance, "Appliance B," that also has a total capacity of 10 TB, but it is delivered by 20 SATA drives of 0.5 TB each. [Note: "Appliance B" may be more expensive, as there are more drives compared to "Appliance A."]

d) Now for "Appliance B," assuming that the SATA drive specifications are the same as those of "Appliance A," you should expect 1,600 IOPS (20 drives * 80 IOPS).

It's mathematically clear: "Appliance B" will deliver twice as many IOPS as "Appliance A." More storage IOPS invariably turns out to be a boon for a VDI deployment. Another important point to consider is that employing higher-tier storage also ensures higher IOPS availability. Case in point: replacing the SATA drives in the example above with SAS drives will certainly provide higher IOPS, and SSD drives, while expensive, will provide higher still.

 

2. USER SEGMENTATION. Recommendation: Intelligent user segmentation that does not assume a "one size fits all" approach.

As explained in Part I, taking a generic per-user IOPS figure, say "X," and multiplying it by the total number of VDI users in an organization, say "Y," may result in an oversized or undersized storage array design. This approach may prove costly, either up front or at a later date.

The recommended design approach is to intelligently categorize user IOPS as "small," "medium," or "high," based on the load a given category of users generates across the organization. The common industry nomenclature for VDI users is:

a)     Task Workers: associated with small IOPS.
b)     Knowledge Workers: associated with medium IOPS.
c)     Power Users: associated with high IOPS.

With these guidelines in mind, let me walk you through an example. Let’s say that Customer A’s Silicon Valley campus location has 1000 VDI users. Assuming that the user % split is:

a)     15% Task Workers with an average of 7 IOPS each
b)     70% Knowledge Workers with an average of 15 IOPS each
c)     15% Power Users with an average of 30 IOPS each

The resulting calculation of total estimated IOPS required will look similar to Table 1 below.
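Since the arithmetic behind that table is simple, here is a quick Python sketch of it using the percentages and per-user IOPS above; the 30% growth/buffer figure is the assumption noted in the key takeaways below:

# Estimated IOPS for the 1,000-user example (steady-state averages per user segment)
total_users = 1000
segments = {
    "Task Workers": (0.15, 7),        # (share of users, average IOPS per user)
    "Knowledge Workers": (0.70, 15),
    "Power Users": (0.15, 30),
}
steady_state_iops = sum(total_users * share * iops for share, iops in segments.values())
buffered_iops = steady_state_iops * 1.30    # add the 30% growth/buffer assumption
print(steady_state_iops, "steady-state IOPS")     # 16050.0
print(buffered_iops, "IOPS including the buffer") # 20865.0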

Key Takeaways:

      1. It is highly recommended to discuss this with the customer and to make use of a desktop assessment tool to determine the user % distribution (split) as well as the average IOPS per user segment.
      2. Estimated capacity growth and the buffer percentage is assumed to be 30%. This may vary for your customer based on the industry domain and other factors.
      3. This approach to IOPS calculation is more predictable based on user segmentation specific to a given customer’s desktop usage.
      4. You can apply this strategy to customers from Healthcare, Financial, Insurance Services, Manufacturing and other industry domains.
3. OPERATIONS. Recommendation: "Include operational IOPS related to storage storms."

It is highly recommended to proactively account for the IOPS related to storage storms. Any lapse can result in a severely painful VDI user experience during patch, boot, and anti-virus (AV) storms.

Assuming that a desktop assessment tool is employed to do the analysis, it is recommended to analyze the user % split targeted during each of the storm operations listed above.

For instance, if the desktop operations team pushes OS/application/AV patches in batches of 20% of the total user community, and the estimated IOPS during the storm is, say, three times the steady-state IOPS (explained in Part I), it is prudent to add another attribute for operational IOPS to Table 1 above.
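One simple way to model that extra load (this is just one interpretation of the 20 percent batch and the 3x multiplier, using the steady-state total from the earlier 1,000-user example):

# Rough operational-IOPS attribute for a patch storm (illustrative interpretation)
steady_state_iops = 16050   # from the 1,000-user segmentation example above
batch_share = 0.20          # patches pushed to 20% of the user community at a time
storm_multiplier = 3        # storm IOPS assumed at roughly 3x steady state

operational_iops = steady_state_iops * batch_share * storm_multiplier
print(operational_iops, "IOPS to budget for the batch being patched")   # 9630.0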

A similar strategy should be employed to account for boot and log-off storms.

I hope you will find this information handy and useful during your VDI architecture design and deployment strategy.

Until next time. Go VMware!

TJ Vatsa has worked at VMware for the past 3 years with over 19 years of expertise in the IT industry, mainly focusing on the enterprise architecture. He has extensive experience in professional services consulting, Cloud Computing, VDI infrastructure, SOA architecture planning, implementation, functional/solution architecture, and technical project management related to enterprise application development, content management and data warehousing technologies.