
Category Archives: End User Computing

VMware App Volumes Backup Utility Fling: Introduction

First published on VMware’s End-User Computing blog

By Dale Carter, Chris Halstead and Stéphane Asselin

In December 2014, VMware released VMware App Volumes, and since then it has gained many new features and a devoted following. Organizations use App Volumes not only in VMware environments, but also in many Citrix environments.

However, there has been one big request from our App Volumes users. Every time I talk to people about App Volumes, they ask how to back up their AppStacks and writable volumes. Normal virtual-machine backup tools cannot back up App Volumes AppStacks and writable volumes because these VMDKs are not part of the vCenter inventory unless they are connected to a user’s virtual machine (VM). As I talked to other people within VMware, I found this question coming up more and more, so I started to think about how we could help.

Last summer during an internal conference, Travis Wood, Senior Solutions Architect at VMware, and I were throwing around a few ideas of how to address this request, and we came up with the idea of an App Volumes backup tool.

Because I do not have any programming skills, I started talking with Chris Halstead, End-User-Computing Architect at VMware, about the idea for this tool. Chris was instantly excited and agreed that this would be a great solution. Chris and I also enlisted Stéphane Asselin, Senior End-User-Computing Architect, to help with creating and testing the tool.

Over the last couple of months, Chris, Stéphane, and I have been working on the tool, and today we are happy to announce that the App Volumes Backup Utility has been released as a VMware Fling for everyone to download.

Use Case and Benefits

The issue with backing up App Volumes AppStacks and writable volumes is that these VMDK files do not show up in the vCenter inventory unless they are currently in use and connected to a user’s virtual desktop. The standard backup tools do not see the VMDKs on the datastore if they are not in the vCenter inventory, and you do not want to back up these files while users are connected to their desktops.

The use case for this tool was to provide a way to make your backup tools see the AppStack and writable-volume VMDKs when they are not connected to a user’s virtual desktop. We also did not want to create other virtual machines that would require an OS; we wanted to keep the footprint and resources to a minimum, and the cost down.

The benefits of using the App Volumes Backup Utility are:

  • It connects AppStacks and writable volumes to a VM that is never in use and that also does not have an OS installed.
  • The solution is quick and uses very few resources. The only resource that the tool does use is a 1 MB storage footprint for each temporary backup VM you create.
  • The tool can be used in conjunction with any standard software that backs up your current virtual infrastructure.

How Does the Tool Work?

[Screenshot: App Volumes Backup Utility]

In the App Volumes Backup Utility, we made it easy for your existing backup solution to see and back up all of the AppStacks and writable volumes. This is accomplished in a fairly straightforward way. Using the tool, you connect to both your App Volumes Manager and vCenter. Then, using the tool, you create a backup VM. This VM is only a shell, has no OS installed, and has a very small footprint of just 1 MB.

Note: This VM will never be powered on.

After the backup VM is created, you select which AppStacks and writable volumes you want to back up, and you attach them to the backup VM using the App Volumes Backup Utility.

After the AppStacks and writable volumes are attached, you can use your standard backup solution to back up the backup VM, including the attached VMDK files. After the backup is complete, open the tool and detach the AppStacks and writable volumes from the backup VM, and delete the backup VM.

For more details on how to use the tool, see the VMware App Volumes Backup Utility Fling: Instructions.

Download the App Volumes Backup Utility Fling, and feel free to give Chris Halstead, Stéphane Asselin, and me your feedback. You can comment on the Fling site or below this blog post, or find our details on this blog site and connect with us.


Dale Carter is a Senior Solutions Architect and member of the CTO Ambassadors. Dale focuses on the End User Computing space, where he has become a subject matter expert in a number of VMware products. Dale has more than 20 years’ experience working in IT, having started his career in Northern England before moving to Spain and finally the USA. Dale currently holds a number of certifications, including VCP-DV, VCP-DT, VCAP-DTD and VCAP-DTA. For more blog posts from Dale, visit his website at http://vdelboysview.com.


Chris Halstead is an EUC Architect on the End User Computing Technical Marketing & Enablement team. He has over 20 years’ experience in the End User Computing space. Chris’ experience ranges from managing a global desktop environment for a Fortune 500 company, to managing and providing EUC professional services at a VMware partner, and most recently serving as an End User Computing SE for VMware. Chris has written four other VMware Flings and many detailed blog articles (http://chrisdhalstead.net), has been a VMware vExpert since 2012, and is active on Twitter at @chrisdhalstead.


Stéphane Asselin, with his twenty years of experience in IT, is a Senior Consultant for the Global Center of Excellence (CoE) in the End-User Computing business unit at VMware. In his most recent role, he had national responsibility for EUC in Canada: planning, designing, and implementing virtual infrastructure solutions and all the processes involved. At VMware, Stéphane has worked on EUC pre-sales activities, internal IP, and product development, and has served as technical specialist lead on beta programs. He has also worked as a subject matter expert for Project Octopus, Horizon, View, vCOps, and ThinApp. Previously, he was a Senior Systems Engineer with CA, where he worked on enterprise monitoring pre-sales activities as a technical specialist.

In his current role in the Global Center of Excellence at VMware, he’s one of the resources developing presentation materials and technical documentation for training and knowledge transfer to customers and peer systems engineers. Visit myeuc.net for more information.

Composite USB Devices Step by Step

By Jeremy Wheeler

Users have a love/hate relationship with VDI: they love the ability to access apps and information from any device, at any time, but they hate the usual trade-offs in performance and convenience. If you’re using VMware Horizon View, you’ve already overcome a huge acceptance hurdle, by providing a consistently great experience for knowledge workers, mobile workers and even 3D developers across devices, locations, media and connections.

But sometimes, peripherals don’t behave as expected in a VDI environment, which can lead to user frustration. For example, when someone wants to use a Microsoft LifeCam Cinema camera, they naturally expect to just plug it into a USB port and have it auto-connect to their VDI session. But if anyone in your organization has tried to do this, you already know that’s not the case. Fortunately, there is an easy workaround to fix the problem.

Download the white paper for the VMware-tested fix to this common problem.

 


Jeremy Wheeler is an experienced Consulting Architect for VMware’s Professional Services Organization, End-User Computing, specializing in the VMware Horizon Suite product line and vRealize products such as vROps and Log Insight. Jeremy has over 18 years of experience in the IT industry. In addition to his past experience, Jeremy has a passion for technology and thrives on educating customers. Jeremy has 7 years of hands-on virtualization experience deploying full-lifecycle solutions using VMware, Citrix, and Hyper-V. Jeremy also has 16 years of experience in computer programming in various languages ranging from basic scripting to C, C++, Perl, .NET, SQL, and PowerShell.

Jeremy Wheeler has received acclaim from several clients for his in-depth and varied technical experience and exceptional hands-on customer satisfaction skills. In February 2013, Jeremy received VMware’s Spotlight award for his outstanding persistence and dedication to customers, and he was nominated again in October 2013.

EUC Design Series: Horizon 7 Strategy for Desktop Evolution to IoT Revolution

By TJ Vatsa

Introduction

Mobility and end-user computing (EUC) are evolving at a very rapid pace. With the recent announcements made by VMware around Horizon 7, it becomes all the more important to recalibrate and remap the emerging innovation trends to your existing enterprise EUC and application rationalization strategies. For business and IT leaders, burning questions emerge:

  • “What are these EUC innovations leading to, and why should it matter to my organization?”
  • “What is the end-user desktop in the EUC realm evolving into, and are these innovations a precursor to an IoT (Internet of Things) revolution?”
  • “What outcomes might we expect if we were to adopt these innovations in our organizations?”
  • “How do we need to restructure our existing EUC/mobility team to fully leverage the mobility evolution?”

Now there are enough questions to get your creative juices flowing! Let’s dive right in.

The What

Desktop virtualization revolutionized how end-user desktops, with their applications and data, were securely managed within the guard rails of a data center. These were essentially Generation 1 (Gen1) desktops: persistent (AKA full-clone) desktops within a virtual machine (VM) container. While the benefit was mainly secure encapsulation within the data center, the downside was cumbersome provisioning with a bloated storage footprint. For instance, with a 50 GB base image and 100 users, each with a persistent desktop, you would be looking at 5,000 GB—or 5 TB—of storage. In an enterprise with thousands of users who have unique operating system and application requirements, the infrastructure capital expenditures (CAPEX) and the associated operational expenditures (OPEX) would be through the roof.

The preceding scenario was addressed by Generation 2 (Gen2) virtual desktops, which were classified as non-persistent (AKA linked-clone) desktops. Gen2 desktops relied on a parent base image (AKA a replica); the resulting linked clones referenced this replica for all read operations and used delta disks to store any individual writes. These desktops benefited from faster, automated provisioning through a Composer server, which generated linked clones referencing the base replica image. This resulted in a significant reduction in the storage footprint and faster desktop provisioning times, which in turn reduced the CAPEX and OPEX incurred with Gen1 desktops. However, desktop boot-up times were still not fully resolved, because they depend on the storage media being used: boot-up was faster with flash storage and comparatively slower with spinning media. The OPEX associated with application management was also not fully resolved, despite the application virtualization technologies offered by various vendors; administrators still had to manage multiple patches for desktop images and applications.
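
To put rough numbers on that improvement (the delta-disk size here is purely illustrative, not a measured figure): with the same 100 users and 50 GB base image, a linked-clone pool stores one 50 GB replica plus a per-desktop delta disk. If each delta grows to about 5 GB, the total is roughly 50 GB + (100 x 5 GB) = 550 GB, compared with 5 TB for full clones.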

The panacea offered by the new Horizon 7 has accelerated the virtual desktop evolution to Generation3 (Gen3) desktops. Evolution to Gen3 results in just-in-time desktops and application stack delivery. This means you only have to patch the desktop once, clone it with its running state, and dynamically attach the application stack using VMware’s App Volumes. Gen3 virtual desktops from VMware have the benefits of Gen2 desktops, but without the operational overhead, resulting in reduced CAPEX and OPEX. Here is an infographic detailing the evolution:

[Infographic: evolution of the cloned desktop VM from Gen1 to Gen3]

Gen3 desktops pave the way for a Generation 4+ (Gen4+) mobility platform that brings VMware’s Enterprise Mobility Management (EMM) platform and the EUC platform together in Workspace ONE, capable of tapping into all of the possibilities of mobility-enabled IoT solutions. The potential of these solutions spans various vertical industries—healthcare, financial, retail, education, manufacturing, government and consumer packaged goods—creating an IoT revolution in the days to come.

The Why

The innovations listed in the preceding section have the potential to transform an enterprise’s business, IT, and financial outcomes. These outcomes are best quantified by the resulting CAPEX and OPEX reductions. The reduction in these expenditures not only fosters business agility, as in accelerated M&A, but also enhances an organization’s workforce efficiency. The proof is in the pudding. Here is a sample snapshot of the outcomes from a healthcare customer:

[Diagram: sample outcomes from a healthcare customer]

The How

While the mobility evolution and its leap to an IoT revolution are imminent, with the promise of the anticipated outcomes mentioned earlier, the question still lingers: how do you align the roles within your organization to ride the wave of mobility transformation?

Here is a sample representation of the recommended roles for an enterprise mobility center of excellence (COE):

[Diagram: recommended roles for an enterprise mobility center of excellence]

Here is the description of field recommendations in terms of mandatory and recommended roles for an enterprise EUC/mobility transformation:

[Diagram: proposed organizational roles for an enterprise EUC/mobility transformation]

Conclusion

Given the rate at which enterprise mobility is evolving towards IoT, it is only a matter of time before every facet of our lives, from our work to our home environments, is fully transformed by this tectonic, mobility-driven IoT transformation. VMware’s mobility product portfolio, in combination with VMware’s experienced Professional Services Organization (PSO), can help you move your enterprise onward in this revolutionary journey. VMware is ever-ready to be your trusted partner in this “DARE” endeavor. Until next time, go VMware!


TJ Vatsa is a principal architect and member of CTO Ambassadors at VMware representing the Professional Services organization. He has worked at VMware for more than five years and has more than 20 years of experience in the IT industry. During this time he has focused on enterprise architecture and applied his extensive experience in professional services and R&D to cloud computing, VDI infrastructure, SOA architecture planning and implementation, functional/solution architecture, enterprise data services and technical project management.

User Environment Manager: Persona Management and Profile Unity to UEM

By Jeremy Wheeler

User Environment Management is the concept of managing a user’s persona across devices and locations. Using dynamic contextual policy control, VMware User Environment Manager gives IT a comprehensive profile management tool that supports physical, virtual, and cloud-hosted desktops and applications.  These policies deliver a consistent experience that adapts to the end-user’s needs. Regardless of how delivery is performed, end-users can access their desktops and applications with personalized and consistent settings across devices. UEM is focused entirely on the context of the user, and not the device the user is working on.

Have a look at this User Environment Manager Migrations technical document for step-by-step instructions on preparation and migration of Persona Management to UEM, and preparation, configuration and  migration of Profile Unity to UEM.


Jeremy Wheeler, Consulting Architect with the VMware End-User Computing Professional Services team, created this paper.

VMware would like to acknowledge the following people for their contributions to this document:

  • Devon Cassidy, Technical Support Engineer End User Computing, Global Tech Lead, VMware
  • Pim van de Vis, Technical, IT Infrastructure Architect, VMware

Copying App Volumes AppStacks Between Environments That Use vSAN Storage

By Jeffrey Davidson

You cannot copy App Volumes AppStacks (VMDK files) with Secure Copy (SCP) from one environment to another when VSAN storage is in use. This is not an App Volumes problem; it is related to the way VSAN stores VMDK data.

Clients will often want to create AppStacks in a test environment, then copy those AppStacks to a production environment, and finally import them into App Volumes. In situations where any of those environments use VSAN storage, you will not be able to copy (SCP) AppStacks (specifically VMDK files) between environments.

In this blog entry I will discuss a workaround to this issue, using an example in which the client has two VSAN environments (DEV and PROD), and needs to copy VMDK files between them.

The VMDK files created by App Volumes are nothing special; they traditionally consist of two files.

What we normally identify as <filename>.vmdk is a type of header/metadata file, meaning it holds only information about the geometry of the virtual disk and, as such, references a separate file that contains the actual data.

The referenced file is often called a “flat” file; it contains the actual data of the VMDK. You can identify it by its naming pattern, <filename>-flat.vmdk.

On traditional block level storage these two files are normally stored together in the same folder, as shown in the example screenshot below.

[Screenshot: AppStack descriptor and flat VMDK files stored in the same datastore folder]

But VSAN storage is different. If you look at the contents of the “header” file, you see something similar to the screenshot below. The highlighted section is normally a reference to a “flat” file (for example, Adobe_Acrobat_Reader-flat.vmdk), but when VSAN is the storage platform, it instead references a VSAN device.

[Screenshot: AppStack descriptor file on VSAN referencing a VSAN device]
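
To illustrate the difference, here is a simplified, hypothetical excerpt of the extent line from a descriptor file; the sector count and the object UUID are placeholder values, and real descriptors contain additional fields. On traditional block storage, the extent line points at the flat file:

RW 41943040 VMFS "<filename>-flat.vmdk"

On VSAN storage, it points at a VSAN object instead:

RW 41943040 VMFS "vsan://<VSAN object UUID>"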

VSAN storage employs object-level storage, which is different from traditional block-level storage. The VSAN objects are managed through a storage policy which, for example, can allow for greater redundancy for some virtual machines over others. Because the reference in the VMDK file points to a VSAN DOM object, it cannot be copied through traditional means (SCP).

To work around this issue you will need traditional block-level storage which acts as a “middle man” to allow the SCP copy of VMDK files between environments. You will also need SSH access enabled on one host in each environment.

The first step is to clone the VMDK you wish to copy from the VSAN volume to the traditional storage volume. Once it is on traditional storage, you can copy (SCP) the two VMDK files directly to a host in a different VSAN environment. After you have copied (SCP) the VMDK files to the destination VSAN environment, perform a clone action to re-integrate the VMDK with VSAN as a storage object, so it can be protected properly by VSAN.

The diagram below is an overview of the process to copy AppStack (VMDK) files between VSAN environments.

[Diagram: process for copying AppStack (VMDK) files between VSAN environments]

The example below shows the commands required to copy an App Volumes AppStack (VMDK) between environments that use VSAN storage. Before executing these commands, create a staging area in each environment where AppStacks can be held temporarily before being copied between hosts and re-integrated into the destination’s VSAN storage.

For example:

In the source environment, create the folder <path to block level storage>/AppVolumes_Staging

In the destination environment, create the folder <path to cloud volumes root folder>/cloudvolumes/staging

Step 1:

SSH into the host where the AppStack currently resides.

Execute the following command to clone the AppStack to block-level storage. Note that after you execute this command there are two files on the block-level storage. One is the header file, and the other is the “flat” file, which was previously integrated with VSAN as a storage object.

vmkfstools -d thin -i <VSAN path to App Stack>/cloudvolumes/apps/<filename>.vmdk <path to block level storage>/AppVolumes_Staging/<filename>.vmdk

Example:

vmkfstools -d thin -i /vmfs/volumes/vsan:4a65d9cbe47d44af-80f530e9e2b98ac5/76f05055-98b3-07ab-ef94-002590fd9036/apps/<filename>.vmdk /vmfs/volumes/54e5e55d-97561a60-50de-002590fd9036/AppVolumes_Staging/<filename>.vmdk


Step 2:

Execute the following commands to copy (SCP) an AppStack from one environment to another.

scp <path to vmdk clone on block level storage>/<filename>.vmdk root@<esxi mgt IP>:<path to staging folder>/<filename>.vmdk

scp <path to vmdk “flat” file clone on block level storage>/<filename>-flat.vmdk root@<esxi mgt IP>:<path to staging folder>/<filename>-flat.vmdk

Example:

scp /vmfs/volumes/54e5e55d-97561a60-50de-002590fd9036/AppVolumes_Staging/<filename>.vmdk root@10.10.10.10:/vmfs/volumes/vsan:265d91daeb2841db-82d3d8026326af8e/6efbac55-f2f7-f86a-033f-0cc47a59dc1c/Staging/<filename>.vmdk

scp /vmfs/volumes/54e5e55d-97561a60-50de-002590fd9036/AppVolumes_Staging/<filename>-flat.vmdk root@10.10.10.10:/vmfs/volumes/vsan:265d91daeb2841db-82d3d8026326af8e/6efbac55-f2f7-f86a-033f-0cc47a59dc1c/Staging/<filename>-flat.vmdk


Step 3:

Run the commands below to delete the AppStack from the staging folder on the source environment.

rm <path to staging folder>/<filename>.vmdk

rm <path to staging folder>/<filename>-flat.vmdk

Example:

rm /vmfs/volumes/54e5e55d-97561a60-50de-002590fd9036/AppVolumes_Staging/<filename>.vmdk

rm /vmfs/volumes/54e5e55d-97561a60-50de-002590fd9036/AppVolumes_Staging/<filename>-flat.vmdk


Step 4:

SSH into the host where the AppStack has been copied to. In this example the host IP address is 10.10.10.10.

Run the command below to clone the copied AppStack from the staging folder to the App Volumes “apps” folder, and re-integrate the VMDK into VSAN as a storage object.

vmkfstools -d thin -i <path to staging folder>/<filename>.vmdk <path to cloud volumes root folder>/cloudvolumes/apps/<filename>.vmdk

Example:

vmkfstools -d thin -i /vmfs/volumes/vsan:265d91daeb2841db-82d3d8026326af8e/6efbac55-f2f7-f86a-033f-0cc47a59dc1c/Staging/<filename>.vmdk /vmfs/volumes/vsan:265d91daeb2841db-82d3d8026326af8e/6efbac55-f2f7-f86a-033f-0cc47a59dc1c/apps/<filename>.vmdk


Step 5:

Run the commands below to delete the AppStack from the staging folder on the destination environment.

rm <path to staging folder>/<filename>.vmdk

rm <path to staging folder>/<filename>-flat.vmdk

Example:

rm /vmfs/volumes/vsan:265d91daeb2841db-82d3d8026326af8e/6efbac55-f2f7-f86a-033f-0cc47a59dc1c/Staging/<filename>.vmdk

rm /vmfs/volumes/vsan:265d91daeb2841db-82d3d8026326af8e/6efbac55-f2f7-f86a-033f-0cc47a59dc1c/Staging/<filename>-flat.vmdk

After completing these steps, you will have successfully copied a VMDK from one VSAN storage platform to another.

App Volumes also creates a “metadata” file during the creation of an AppStack, as shown in the screenshot below.

[Screenshot: AppStack metadata file created by App Volumes]

The “metadata” file is a plain text file and should be copied to the destination environment so the AppStack (VMDK) can be imported into the destination App Volumes instance. Because it is a text file, it can be copied (SCP) directly, without the cloning process and block-level storage described in Steps 1–5 above.
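
For example, following the same pattern as Step 2 (the metadata filename below is a placeholder; use whatever name App Volumes created alongside your AppStack):

scp <path to cloud volumes root folder>/cloudvolumes/apps/<metadata filename> root@<esxi mgt IP>:<path to cloud volumes root folder>/cloudvolumes/apps/<metadata filename>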


Jeffrey Davidson, Senior Consultant, VMware EUC. Jeffrey has over 15 years of IT experience and joined VMware in 2014. He is also a VCP5-DCV and VCP5-DT. He is a strong advocate of virtualization technologies, focusing on operational readiness with customers.

Simple VDI Load Testing with View Planner

By Jack McMichael, Solutions Consultant

In the last few years it seems the number of customers asking for assistance in re-evaluating their “VDI 1.0” infrastructure is increasing at a faster rate than ever. It makes sense when you consider that in the rush to achieve datacenter consolidation many administrators were under pressure to just “make it happen.” Many of those administrators and architects didn’t have time to design their virtual desktop infrastructure (VDI) solution to scale and accommodate things our customers and users have grown accustomed to using every day, such as YouTube HD, Skype, and other resource-intensive applications.

Last year, VMware released its very popular internal tool, View Planner, to the general public for free. While it has flown under the radar for a lot of customers, it can be an invaluable tool for judging where your VDI solution stands and identifying where the stress cracks in your VDI infrastructure may be forming—or are already wide open.

The View Planner appliance is simple to install and fairly straightforward to set up for local tests. It’s capable of local-only load testing, as well as passive/remote connections with the VMware Horizon® Client.

Deploying View Planner

After deploying an Open Virtual Appliance in VMware vSphere®, configure your View Planner integrations in the Config tab of the administrator page. The AD and View integrations are optional, but can be used if you wish View Planner to deploy desktops and/or create and delete users.

Note that for best results, I recommend using IP addresses instead of hostnames. Create a service account for your credentials, and give it administrator privileges in both AD and in VMware vCenter™.

In this screenshot, you can see all three connectors configured. You can use the Test buttons to ensure the configuration works, but click Save first.

[Screenshot: View Planner Config tab with the AD, vCenter, and View connectors configured]

Environment Preparation

For my simple test, I created a linked-clone pool with the name VPDesktop-{n:fixed=3} in VMware Horizon View™. In the master image for this pool, I added the View Planner Desktop Agent, which you can download from the Packages tab of the View Planner portal.

Make sure you reboot your desktop before creating your snapshot. Once you reboot, you will likely see the desktop auto-login. If so, run the View Planner Agent as seen in this screenshot.

[Screenshot: View Planner Agent running on the desktop]

 

Configuring Run Profiles

There are three test modes available: Local, Passive, and Remote. Typically, Local mode is used for load testing, since it doesn’t require actual Horizon Client connections, but it has the disadvantage of not replicating the performance impact of PCoIP. Passive mode adds PCoIP connections that are shared among client servers hosting more than one client connection at a time. Remote mode creates a 1:1 relationship between clients and desktops, thus creating the most overall resource impact.

To configure a Run Profile for a simple load test, I recommend using Local mode, as it doesn’t require the Horizon Client and is easy to set up. Simply add the workload profile you want to run to the Run Profile by clicking Add Group, and then click Save. You can add multiple workload profiles if you want, but for a simple test only one is required.

The most important thing to remember is that desktop names (and client names, if you choose Passive or Remote) are case-sensitive. In this example, the prefix VPDesktop- is valid for VPDesktop-001, but not for vpdesktop-001 or VPDesktop001.

[Screenshot: Run Profile configuration]

Running a Test

Simply click the Run button to start a test. If you run into trouble, View Planner will show you right away; by clicking the link on the appropriate box, you’ll see the exact error or success message.

[Screenshot: View Planner test run status]

 

Once completed, you can view the results in the Per Stats column; they will look something like the example below.

[Screenshot: View Planner test results in the Per Stats column]

Summary

Overall, I found the View Planner tool to be great for simple and quick tests of a VDI environment. It shows you where resource contention exists, or singles out how an app may be creating resource gaps in your VMware ESXi™ hosts. The free downloadable version includes several standard templates that cover a variety of normal user application workloads. If you require more flexibility in your tests, a paid VMware Professional Services engagement offers a more feature-rich version to create customizable workload profiles and other goodies. Contact VMware Professional Services or a VMware Partner for an on-site evaluation.

 


Jack McMichael is a Solutions Consultant for the VMware Professional Services Engineering Global Technical and Professional Services team. Follow him on Twitter @jackwmc4 !

So You Virtualized Your Desktop Environment. Now what?

By Mike Marx

Most of my customers start with a low-risk user group consisting of a large number of users with identical application requirements. This is the common scenario when starting out on the virtual desktop infrastructure (VDI) journey and ‘testing the waters.’ With proper design efforts, initial implementations are highly successful.

I spend the majority of my consulting effort working with customers helping them create their initial VDI design. Designs can be simple or complicated, but they all utilize a common technical approach for success: understanding user requirements, and calculating infrastructure sizing. But I’m not blogging about technical calculations or infrastructure sizing. Instead I would like to address a VDI design challenge customers face as they expand their VDI design: user application assignments.

While resource requirements are simple to assess, calculate and scale, application delivery becomes increasingly challenging as more users are added to the design. VDI administrators struggle to manage increasing numbers of desktop users – each having unique application requirements.

Applications are easy to add to a large static group of user desktops using linked-clones. But when unique user groups are introduced, and application requirements change, administrators are confronted with the challenge of maintaining a large number of small desktop pools – or impacting large groups of users in order to change an application assignment.

So how do we design an effective stateless desktop and maintain application diversity among unique user groups? VMware App Volumes is the answer.

Using App Volumes, VDI designs become simple to understand and implement. Once applications are effectively removed from the VDI desktop, VDI administrators are left with a simple stateless desktop. But users aren’t productive with an empty desktop operating system; they need applications – and lots of them.

Without going into deep technical detail (there are excellent blogs on this topic already), App Volumes captures the application files, folders, and registry components, and encapsulates them into a transportable virtual disk called an AppStack. As the user logs on to a stateless desktop, the assigned AppStack(s) automatically attach and merge the user’s applications with the desktop virtual machine.

Now users are presented with a stateless desktop that is uniquely assembled with all of their applications. Applications attached by App Volumes interact with other applications—and the operating system—as if they were natively installed, so the user experience is seamless.

Now that applications are no longer an impediment to VDI designs, VDI administrators can support large groups of users and application requirements using the same stateless desktop pool. By following the KISS principle, “Keep It Simply Stateless,” App Volumes will open the door to new design possibilities and wider adoption by users and IT administrators.


Mike Marx is a Consulting Architect with the End User Computing group at VMware. He has been an active consultant using VMware technologies since 2005. His certifications include VCAP-DTD, VCP-DT, VCA-WM, VCA-DT, and VCP2-5, and he is an expert in VMware View, ThinApp, vSphere, and SRM.

Managing VMware NSX Edge and Manager Certificates

By Spas Kaloferov

di·ver·si·ty

“Diversity” was the first word that came to my mind when I joined VMware. I noticed the wide variety of methods and processes used to replace certificates on the different VMware appliance products. For example, with VMware vRealize™ Orchestrator™ users must follow a manual process to replace the certificate, with VMware vRealize™ Automation™ administrators have a graphical user interface (GUI) option, and with VMware NSX Manager™ there is yet another, completely different GUI option to request and change the product’s certificate.

 

Figure 1. SSL Certificates tab on the VMware NSX Manager™


This variety of certificate replacement methods and techniques is understandable, as these VMware products are the result of different acquisitions. Although each product is great in its own unique way, the lack of a common, smooth, and user-friendly certificate replacement methodology has always filled administrators and consultants with anxiety.

This anxiety often leads to certificate configuration issues among the majority of VMware family members, partners, and end users. As a member of this family—and also of the majority—I recently felt this anxiety when I had to replace my VMware NSX Manager and NSX Edge™ certificates.

pas·sion

I must say that up to the point where I had to replace these certificates, I had a pretty awesome experience installing and configuring VMware NSX Manager, and had even developed advanced services like network load balancing. But I hit a minor roadblock with the certificates, and my passion to kick down any roadblock until it turns to dust wasn’t going to leave me alone.

ex·e·cu·tion

I got in touch with some of my awesome colleagues and NSX experts to get me back on the good experience track with NSX. As expected, they did (not that I ever doubted them). Soon I was exploring the advanced VMware NSX Manager capabilities at full power, like SSL VPN-Plus, where I again had to configure a certificate for my perimeter gateway Edge device.

Figure 2. Server Settings tab of the SSL VPN-Plus settings on the VMware NSX Edge™


This time I wasn’t anxious because I now had the certificate replacement process under control.

cus·to·mer

As our customers are core to our mission, we want to empower them by freeing them from certificate replacement challenges so they can spend their time and energy on more pressing technological issues. To help empower other passionate enthusiasts, and help keep them on the good experience track of NSX, I’ve decided to describe the certificate replacement processes I’ve been using and share them in a blog post to make them available to everyone.

com·mu·ni·ty

We are all connected. We approach each other with open minds and humble hearts. We serve by dedicating our time, talent, and energy – creating a thriving community together. Please visit Managing NSX Edge and Manager Certificates to learn more about the certificate replacement process.


Spas Kaloferov is an acting Solutions Architect member of Professional Services Engineering (PSE) for the Software-Defined Datacenter (SDDC) – a part of the Global Technical & Professional Solutions (GTPS) team. Prior to VMware, Kaloferov focused on cloud computing solutions.

Analyzing Virtual Desktop Login Time

By Gourav Bhardwaj with Matt Larson

Often when performing health checks, a discussion arises about login time and what constitutes it. This article covers some of the common ways to look at login time and its underlying components. You can look at login time using vCOps for View or a third-party user experience monitoring solution; in this example, login time is demonstrated using Stratusphere™ UX. Experienced system administrators can also use this process to troubleshoot slow login times.

 

 

Review Virtual Desktop login times using Stratusphere UX™

  1. First, ensure you are in the Stratusphere UX interface.
  2. On the Inspector tab, choose Machine Diagnostic Summary, and then click Go.
  3. In the Date Range drop-down menu, select Last 24 Hours.
  4. In the results list, sort by Login Delay.
  5. Click the down-arrow next to the name of the machine, then click Drill-down to see the machine inspection history.
  6. Click the down-arrow next to the hour that contains the slow login time, then click Drill-down to see the inspection report details.

A lot of information is provided, including the username of the user experiencing the issue, as well as information about running processes. One important piece of information for finding what may be causing the slow logins is the CPU System Time(s) field. The graphic below shows VMWVvpsvc running long. This metric indicates login slowness resulting from the profile being copied from the profile location by VMware’s Persona Management, which may be the result of a file server that is local to the user but not local to the View environment.

[Screenshot: process details showing VMWVvpsvc CPU System Time]

This information is helpful: it shows that VMWVvpsvc was running for 94 seconds. We can assume this is mostly during login, but that only accounts for 94 seconds of a 351-second login delay. Clearly, more information is necessary. While turning to logs can be helpful (such as the Persona Management log, the system and application event logs, and the various View and PCoIP logs), they can be time-consuming to review, and the information they provide is often insufficient.

Using the Windows Performance Toolkit
The Windows Performance Toolkit is a set of tools provided in the Windows SDKs for both Windows 7 and Windows 8. It consists of two high-level toolsets: one to gather information and one to analyze it. Once users and systems with slow login times have been identified, the toolsets provided with the Windows Performance Toolkit can be used to further ascertain exactly what is causing the slow logins.

Installation
This section details the installation process to get the tools on the system that is experiencing slow login times. This process assumes the use of the Windows 7 SDK. Below are the steps:

  1. Remove Visual C# 2010 – this may or may not be necessary. If the C# version of the vSphere Client is installed on the workstation, then that existing installation of Visual C# 2010 will need to be removed. Not to worry, the SDK puts C# back on there, and there is no impact to the vSphere client or other applications that may use Visual C# 2010.
  2. Install the Windows 7 SDK (downloadable from Microsoft). Launch the winsdk_web.exe file, ensure that at least the Windows Performance Toolkit is selected, and then click Next. Once the installation has completed, move on to the next step. Note: In order to analyze Windows crash dumps (AKA BSODs), I keep the Debugging Tools for Windows installed as well.
  3. Install .NET 4.0 (also downloadable from Microsoft). Again, this depends upon whether or not it is already installed on the workstation in question.

This completes the installation. You can verify it by confirming that the program group exists on the Start Menu, or by navigating to the installation directory, which defaults to C:\Program Files\Microsoft Windows Performance Toolkit, and confirming the existence of xbootmgr.exe and xperf.exe, as seen in the images below.

[Screenshots: Windows Performance Toolkit program group and installation directory]
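
If you prefer to check from a command prompt, something along these lines should confirm the toolkit is in place (the path assumes the default installation directory):

cd "C:\Program Files\Microsoft Windows Performance Toolkit"
dir xperf.exe xbootmgr.exe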

Using XPERF
The process to use XPERF to gather information regarding slow logins is as follows:

  1. Enable fast user switching in the registry or GPO.
  2. Create a local user account named Test, and add it to the local administrators group. (Using an administrative user that is not the problematic user will also work.)
  3. From the console of the problematic workstation, log in as the user with administrative privileges.
  4. Launch a command line with elevated privileges, and navigate to C:\Program Files\Microsoft Windows Performance Toolkit.
  5. Launch the XPERF command:
    1. XPERF Command: xperf -on base+latency+dispatcher+NetworkTrace+Registry+FileIO -stackWalk CSwitch+ReadyThread+ThreadCreate+Profile -BufferSize 128 -start UserTrace -on "Microsoft-Windows-Shell-Core+Microsoft-Windows-Wininit+Microsoft-Windows-Folder Redirection+Microsoft-Windows-User Profiles Service+Microsoft-Windows-GroupPolicy+Microsoft-Windows-Winlogon+Microsoft-Windows-Security-Kerberos+Microsoft-Windows-User Profiles General+e5ba83f6-07d0-46b1-8bc7-7e669a1d31dc+63b530f8-29c9-4880-a5b4-b8179096e7b8+2f07e2ee-15db-40f1-90ef-9d7ba282188a" -BufferSize 1024 -MinBuffers 64 -MaxBuffers 128 -MaxFile 1024
  6. Using fast user switching, switch users, and login as the problematic user.
    1. Once the login has completed, stop the trace using the following command:
      xperf -stop UserTrace -d merged.etl
  7. Gather the merged.etl trace file for analysis.

Using XBOOTMGR
In some cases, it may not be possible to switch users using fast user switching. In many cases, it may be easier to have the user run XBOOTMGR. This tool, when run, reboots the system and tracks both the startup time and the login time. The analysis ends after a set period of time. Gather an XBOOTMGR analysis by performing the following:

  1. Launch a command line with elevated privileges, and navigate to C:\Program Files\Microsoft Windows Performance Toolkit.
  2. Run the following command:
    1. XBOOTMGR Command: xbootmgr -trace boot -traceflags base+latency+dispatcher -stackwalk profile+cswitch+readythread -notraceflagsinfilename -postbootdelay 120
  3. The system will prompt that it is being rebooted. Allow the reboot to occur.
  4. When the VM is started, have the user connect to the View desktop using the View client.
  5. When the user logs in, XBOOTMGR will present the user with a countdown of 120 seconds. Allow XBOOTMGR to collect data.
  6. Once complete, gather the *.etl trace file for analysis. It may take some time to merge the file.

Analysis
The trace file has been created, and now it is time to analyze the results. The analysis toolset available in the Windows 7 Performance Toolkit is slightly different from the one available in the Windows 8 Performance Toolkit.

Performance Analyzer from Windows 7 Performance Toolkit

Open the trace with Performance Analyzer (from the Windows 7 Performance Toolkit).
[Screenshot: Windows Performance Analyzer]
The graph below shows the processes running during the Winlogon Init phase. It is easy to see that VMWVvpsvc runs for approximately two minutes.
[Screenshot: Winlogon Init process graph showing VMWVvpsvc]

By right-clicking on the graph, you can overlay graphs from other categories. The graph below shows the Winlogon process along with overlay graphs for Boot Phases and CPU Usage. This is helpful for seeing in which boot phase each process is running; additionally, the CPU graph shows whether a process is running long because it has maxed out the available CPU capacity.
[Screenshot: Winlogon graph with Boot Phases and CPU Usage overlays]

These overlays can be tweaked by selecting the CheckPoints box in the top right corner of the graph.

[Screenshot: CheckPoints dialog]
Windows Performance Analyzer from Windows 8 Performance Toolkit

Open the trace with Windows Performance Analyzer (from the Windows 8 Performance Toolkit). The icon is shown below:

[Screenshot: Windows Performance Analyzer icon]

[Screenshot: the trace opened in Windows Performance Analyzer]

When looking at the same trace file as before, the graphs show that VMWVvpsvc was running for over 2 minutes. Moving the user files closer (from a network perspective) to the View desktop will help reduce the login time.

References
http://social.technet.microsoft.com/wiki/contents/articles/10128.tools-for-troubleshooting-slow-boots-and-slow-logons-sbsl.aspx

http://www.liquidwarelabs.com/products/stratusphere-ux


Gourav Bhardwaj is a VMware consulting architect who has created virtualized infrastructure designs across various verticals. He has assisted IT organizations of various Fortune 500 and Fortune 1000 companies, by creating designs and providing implementation oversight. His experience includes system architecture, analysis, solution design and implementation.

Matt Larson is an experienced, independent VMware consultant working in design, implementation and operation of VMware technologies. His interests lie in enterprise architecture related to datacenter and end user computing.

EUC Datacenter Design Series — EVO:RAIL VDI Scalability Reference

By TJ Vatsa with Fred Schimscheimer and Todd Dayton

End User Computing (EUC) has come of age and is continuing to mature by leaps and bounds. Customers no longer consider virtual desktop infrastructure (VDI) a tactical project; they look at EUC holistically, as an enterprise solution that accelerates EUC transformation. You can refer to the EUC Design 101 series (Part 1, Part 2, and Part 3) or a consolidated perspective (EUC Enterprise Solution). Having collaborated with my colleagues Fred Schimscheimer and Todd Dayton (bios below) during the last few weeks, I intend to share the game-changing revolution that VMware’s hyper-converged infrastructure solution is bringing to the EUC domain.

The Challenge
People familiar with VDI are well aware that a scalable production deployment requires systematic and thorough planning of the infrastructure—namely compute, storage, and networking. This can be a daunting task for customers who are chasing tight deadlines or do not have the infrastructure or people resources available. We have seen this perpetual challenge for many of our customers across different industry domains, including healthcare, financial services, insurance, manufacturing, and others.

The Panacea
During the last few years, hyper-converged appliances have been taking the industry by storm. By design, these systems follow a modular, building-block approach that scales out horizontally and is very quick to deploy. From the EUC infrastructure perspective, it has become necessary to acknowledge the efficiency of hyper-converged appliances. While other vendors offer hyper-converged infrastructure that runs on VMware’s vSphere hypervisor, VMware’s own foray into this domain, EVO:RAIL, was released for general availability at VMworld 2014 in San Francisco in September.

EVO:RAIL has been optimized for VMware’s vSphere and Virtual SAN technology, combining compute, storage, and networking resources in a simple, integrated deployment, configuration, and management solution. EVO:RAIL is the next-generation EUC building block for the Software-Defined Data Center (SDDC).

Numbers Don’t Lie
During the last few months, our teams have been diligently testing and scaling EVO:RAIL for a variety of use cases, such as EUC, Business Continuity and Disaster Recovery (BCDR), and X-in-a-box. The next few paragraphs focus on our findings for Horizon 6 View desktop scalability.

You may have lots of questions by now. So let’s take them one by one!

Q: What did the hardware configuration look like?
A: The test bed hardware infrastructure configuration was as follows:

EVO:RAIL Appliance

  • 4 x nodes
  • Each node
    • 2 x Intel E5-2620 @ 2.1 GHz
    • 192GB memory (12 x 16GB)
    • 3 x Hitachi SAS 10K 1.2TB MD
    • 1 x 400GB Intel S3700 SSD

Q: What did the software configuration look like?
A: The test bed View software configuration was as follows:

  • vSphere 5.5 + VSAN
  • Horizon View 6.0 (H6)

Table 1: Horizon 6 Configuration

[Table: Horizon 6 configuration]
Note: vCSA = vCenter Server Appliance

Q: What did the VDI image configuration look like?
A: The test bed image configuration was as follows:

Table 2: Desktop Image Configuration

[Table: desktop image configuration]

Q: What types of View desktops did we test?
A: Horizon View 6, linked clone virtual desktops with floating assignments.

Q: What Horizon 6 configurations did we test?
A: The following configurations were tested using Reference Architecture Workload Code (RAWC):

Table 3: Load Test Configurations

[Table: load test configurations]

These configurations are pictorially represented in the following schematics:

[Diagram: separate management cluster and desktop cluster]

 

Figure 1: Configurations #1a/#1b

The figure above represents EVO:RAIL appliances with separate Horizon 6 Management and Desktop clusters.

[Diagram: VDI-in-a-box, with management and desktop clusters in one appliance]

Figure 2: Configuration #2

The figure above represents the EVO:RAIL appliance with both Horizon 6 Management and Desktop clusters in the same appliance. It also illustrates an N+1 configuration to support one node failure within the EVO:RAIL appliance.

Q: What did the results look like?
A: The following results were obtained after the configurations were stress tested using RAWC.

[Graphs: RAWC and Virtual SAN Observer results for Configurations #1a, #1b, and #2]

Results Summary
The table below summarizes the different test configurations and the tested consolidation ratios, that is, the number of virtual desktops per EVO:RAIL appliance.

Table 4: Test Configuration Findings

[Table: test configuration findings]

We hope you will find this information useful and motivating. We look forward to you bravely adopting and implementing a VDI-in-a-box solution using VMware’s EVO:RAIL hyper-converged appliance in your Software-Defined Data Center (SDDC).

Until next time, Go VMware!


Author

TJ Vatsa is a Principal Architect and CTO Ambassador at VMware, representing the Professional Services organization. TJ has been working at VMware since 2010 and has over 20 years of experience in the IT industry. At VMware, TJ has focused on enterprise architecture and applied his extensive experience to cloud computing, virtual desktop infrastructure, SOA planning and implementation, functional/solution architecture, enterprise data services and technical project management. Catch TJ on Twitter, Facebook or LinkedIn.

Contributors

Fred Schimscheimer has worked at VMware since 2007 and is currently a Staff Engineer in the EUC Office of the CTO. In his role, he helps with prototyping and validating advanced development projects, as well as doing product evaluations for potential acquisitions. He is the architect and author of RAWC, VMware’s first reference architecture workload simulator.

 

Todd Dayton joined VMware in 2005 as the first field “Desktop Specialist,” working on ACE (a precursor to VDI). In his current role as a Principal Systems Engineer and CTO Ambassador, he continues to evangelize End User Computing (EUC) initiatives and opportunities for VMware’s customers.