
EUC Design Series: Application Rationalization and Workspace Management

By TJ Vatsa

Introduction

Over the last few years, End User Computing (EUC) and the associated workspace mobility space have emerged as transformational enterprise initiatives. Today's workforce expects anytime, anywhere access to their applications, be it enterprise applications or user-installed applications (UIA), and everything in between. These expectations create new opportunities, as well as new challenges, for the existing processes followed by enterprise and application architects. So what are the different facets of these challenges that architects need to be aware of while analyzing and defining an enterprise application strategy? Let's dive right in.

The What

Application rationalization is the process of strategizing an available set of corporate applications along the key perspectives of business priority, packaging, delivery, security, management and consumption to achieve a defined business outcome. The tangible artifact of an application rationalization process is a leaner collection of one or more application catalogs. An application catalog is a logical grouping of application taxonomies based on a user's roles and responsibilities within an organization, as well as within the enterprise. For instance, a user belonging to the finance department will have access to a department-specific catalog housing financial applications, as well as access to a corporate catalog housing all corporate-issued applications. A user from the IT department, on the other hand, will not need access to key financial applications used by the finance department, but will have access to an IT-specific application catalog that may include applications like infrastructure monitoring. With end-user mobility/computing pervading every aspect of workforce productivity within the enterprise, organizations intend to leverage their existing investments in various application delivery platforms, including those from VMware, Citrix, Microsoft and other vendors. The application rationalization process is an enabler of application governance, management and operations, leading to minimal application sprawl within the enterprise.

The Why

Traditionally, managing legacy applications has been a time-consuming and complex process from the perspective of application packaging, provisioning and monitoring. Delivery of such applications has been equally, if not more, complex. Add to that the constraints of application conflicts when it comes to supporting different devices and integrating with other applications. Consider, for instance, the requirement that all mission-critical applications integrate with the authentication process of an Identity Management (IDM) platform as part of the security directive coming from the Chief Information Security Officer's (CISO) office.

So, first things first, we need to ask ourselves some of these key questions:

  • What are these applications, and what are the business priorities of these applications?
  • Do all these applications need to adhere to security directives and regulatory compliance directives such as HIPAA, PCI, etc., and if so, how soon?
  • Have the non-adherence risks been assessed, and what are the exceptions?
  • How do we package, provision, deliver, access, maintain, monitor and finally retire these applications?

In practice, this means it is very important to keep the available application catalog(s) lean, particularly when they have grown bulky over time due to inefficient Application Lifecycle Management (ALM) processes, mergers and acquisitions, emerging business priorities and other factors outside the control of enterprise, application and IT architects and leaders. Furthermore, the application portfolio(s) reflected in these collective catalogs need to be agile enough to support the ever-changing innovations in the areas of end-user mobility/computing, hybrid cloud, and emerging Internet of Things-aware applications.

The How

A pragmatic approach to application rationalization relies on a strong foundation of people, processes and technology platforms. It is recommended to start by identifying some of the key application classifications along the lines of Mission Critical (MC), Business Critical (BC) and User Critical (UC) applications, and map these classifications to your user segmentation along the lines of key roles and responsibilities within and across the organizations. An existing organizational level RACI (Responsible, Accountable, Consulted, and Informed) matrix may come in very handy as part of this process. The information in the table below reflects a sample of how this could be accomplished.

[Table: sample mapping of Mission Critical / Business Critical / User Critical application classifications to user segments]

While the people and the processes parts may take multiple iterations, once these applications have been rationalized and the key stakeholders have been identified, we need to define an enterprise Application Management Architecture (AMA) to mature the EUC initiatives within an enterprise. The schematic below lists key components that help develop a mature Application Management Architecture.

App Management 1

What this means is that the AMA needs to address the following capabilities as illustrated in the schematic above:

  • Application packaging and isolation. For instance, whether the applications are natively installed in the base image or whether they are virtualized.
  • A unified application provisioning launch-pad for virtual, Web, Citrix XenApp, RDSH and SaaS applications.
  • Real-time application delivery for just-in-time desktops that would abstract the desktop guest operating system (GOS) from the end-user applications.
  • Unified authentication and application entitlement policy platform that supports Single Sign-on (SSO) and acts as a policy enforcement point (PEP) and a policy decision point (PDP).
  • Application maintenance capability that enables flexible patch management.
  • Application monitoring functionality that provides in-guest metrics for application performance monitoring.
  • Most importantly, supporting EUC mobility by interoperating with virtual, hybrid cloud and mobile platforms.

Conclusion

Now let's tie it all together. VMware's End User Computing (EUC) Workspace Environment Management (WEM) Solution combines VMware's EUC product portfolio with VMware's experienced Professional Services Organization (PSO). This platform accelerates application rationalization initiatives by additionally providing application isolation, real-time application delivery and monitoring for Citrix and VMware environments. It facilitates comprehensive governance of end-user management with dynamic policy configuration so you can deliver a personalized environment to virtual, physical and cloud-hosted environments across devices. It is a fast-track approach to success for application rationalization initiatives within your enterprise, where not only the technology but also the people and processes are given high priority. For additional information please visit VMware.

 

Find out more about Application Rationalization from the perspectives of an Enterprise EUC strategy and BCDR (Business Continuity and Disaster Recovery) by attending the following sessions at VMworld 2015, San Francisco.


TJ Vatsa is a Principal Architect and member of CTO Ambassadors at VMware representing the Professional Services organization. He has worked at VMware for the past 5+ years with more than 20 years of experience in the IT industry. During this time he has focused on enterprise architecture and applied his extensive experience in professional services and R&D to Cloud Computing, VDI infrastructure, SOA architecture planning and implementation, functional/solution architecture, enterprise data services and technical project management.

TJ holds a Bachelor of Engineering (BE) degree in Electronics and Communications from Delhi University, India, and has attained industry and professional certifications in enterprise architecture and technology platforms. He has also been a speaker and a panelist at industry conferences such as VMworld, VMware’s PEX (Partner Exchange), Briforum and BEAworld. He is an avid blogger who likes to write on real-life application of technology that drives successful business outcomes.

VMware Certificate Authority – My Favorite New Feature of vSphere 6.0 – Part 1 – The History

By Jonathan McDonald

Anyone who knows me knows I have a particular passion (however misplaced that may be…) for the VMware Certificate Story. This multi-part blog series will discuss a bit of background on the certificate story, what you need to know about the architectural design of it in an environment and some new features you may not know about.

Let me start off by saying this passion started several years ago when I was in Global Support Services, and I realized that too few people had an understanding of certificates. I am not even talking about certificates in the context of VMware, but in general. This was compounded when we released vSphere 5.1, because strict certificate checking was enabled.

A Bit of History

Although certificates were used for securing communication prior to vSphere 5.1, they were self-signed and there was no verification performed to ensure the certificate was valid. Therefore, for example, the certificate could be expired, or be used for multiple services at the same time (such as for the vCenter Server service and the vCenter Inventory service). This is obviously not a good practice, but it nevertheless was allowed.

When vCenter Single Sign-On was released with vSphere 5.1, it enforced strict certificate checking. This included not only the certificate uniqueness, but such information as the validity period of the certificates as well. Therefore, if any of the components were not using a unique and valid certificate, they would not be accepted when registering the different services as solutions in Single Sign-On. This would turn out to be a pretty large issue as upgrades would fail with very little detail as to why.

That being said, if all services in vCenter 5.1 and 5.5 have their certificates replaced, seven unique certificates are required:

  • vCenter Single Sign-On
  • vCenter Inventory Service
  • vCenter Server
  • vSphere Web Client
  • vSphere Web Client Log Browser
  • vCenter Update Manager
  • vCenter Orchestrator

The process to change the certificates is not straightforward and caused a significant amount of trouble for customers and Global Support Services alike. This is when we raised it as a concern internally and helped to get a short-, medium- and long-term plan in place to make it easier to replace certificates when required. The plan was as follows:

  • Short term – We ensured the KB articles relating to certificate replacement were accurate and easy to follow.
  • Medium term – We helped in the development of the SSL Certificate Automation Tool, which dramatically reduced the number of steps and made it fairly easy to replace the certificates.
  • Long term – We forced focus on the issue so a solution could be built into the product.

Prior to moving from VMware Support to Professional Services Engineering we had released the tool and the larger plan was in place. The following are two blog posts I did for the tool:

http://blogs.vmware.com/kb/2013/04/introducing-the-vcenter-certificate-automation-tool-1-0.html

http://blogs.vmware.com/kb/2013/05/ssl-certificate-automation-tool-version-1-0-1.html

With vSphere 6.0 the larger long-term solution is finally coming to fruition with the introduction of the VMware Certificate Authority. It solves many of the problems that were seen.

Introduction to the VMware Certificate Authority

With vSphere 6.0, the base product installs an internal certificate authority (CA) called the VMware Certificate Authority. This is a part of the Platform Services Controller installation and has changed the architecture significantly for the better. No longer are the default certificates self-signed; rather, they are issued and signed by the VMware Certificate Authority.

This works in one of two ways:

  • VMware Certificate Authority acts as the root certificate authority. This is the default configuration and allows for an out-of-the-box configuration that is fully signed. All the clients need to do is to trust the root certificate and the communication is fully trusted.
  • VMware Certificate Authority acts as an Intermediate CA, integrating into an existing CA infrastructure in the environment. This allows for certificates to be issued that are already trusted throughout the environment.

In each of these two modes, it acts in the same way, granting certificates not only to the solutions connected to the management infrastructure, but to ESXi hosts as well. This occurs when the solution or host is added to vCenter Server. By default, communication is secure and trusted, and therefore everything on the management network that was previously difficult to secure is trusted.
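If you are curious which root certificate VMCA is currently issuing from, the certool utility installed alongside the Platform Services Controller can print it. A minimal sketch, assuming the default path on the vCenter Server Appliance (treat the exact location and options as assumptions for your build):

/usr/lib/vmware-vmca/bin/certool --getrootca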

Introduction to the VMware Endpoint Certificate Store

In addition to the certificate authority itself, vSphere 6 certificates are now managed and stored in a "wallet." This wallet is called the VMware Endpoint Certificate Store (VECS). The benefit here is that certificates and private keys are no longer stored on disk in various locations; they are centrally managed in VECS on every vSphere node. This allows for a greatly simplified configuration for the environment, because you no longer need to update trusts when certificates are replaced; VECS does that automatically.

VECS is installed on every Platform Services Controller installation, including both embedded and external configurations.

JMcDonald Certificate Authority 1

The following different stores for certificates are used:

  • The Machine Certificates store contains the Machine SSL Certificate and private key, which is used for the Reverse Proxy, discussed next.
  • The Root CA Certificates store contains trusted root certificates and revocation lists, from any VMware Certificate Authority in the environment, or the third-party certificate authority being used. Solutions use this store to verify certificates.
  • The Solution User Certificates store contains the certificates and private keys for any solutions such as vCenter and the vSphere Web Client.

A single location for all certificates is a welcome change to the previous versions.
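If you want to poke around in VECS yourself, the vecs-cli utility that ships with the Platform Services Controller can list the stores and the entries inside them. A minimal sketch, assuming the default install path on the vCenter Server Appliance (on Windows the tool lives under the vCenter Server installation directory instead):

/usr/lib/vmware-vmafd/bin/vecs-cli store list

/usr/lib/vmware-vmafd/bin/vecs-cli entry list --store MACHINE_SSL_CERT --text

The store names returned (MACHINE_SSL_CERT, TRUSTED_ROOTS and one store per solution user) map directly to the groupings described above.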

The Reverse Proxy – (Machine SSL Certificate)

Finally, before we get into the recommended architectures, the Reverse Proxy is the last thing I want to introduce. The change here addresses one of the biggest problems seen in previous versions of vCenter: there are so many different services installed that require SSL communication. To be honest, the real challenge is not the number of services, but rather trying to get signed certificates for all of them from the SSL administrator for the same host.

To combat this, solution users were consolidated in vCenter 6.0 to four: machine, vpxd, vpxd-extension and vsphere-webclient. In addition, where possible, many of the various listening ports on the vCenter Server have been replaced with a single Reverse Web Proxy for communication. The Reverse Web Proxy uses the newly created Machine SSL Certificate to secure communication, and all communication to the different services is routed through the reverse proxy to the appropriate service based on the type of request. This can be seen in the figure below.

JMcDonald Certificate Authority 2

It is still possible to change the certificates of the solution users behind it; however, these are only used internally and do not necessarily need to be changed. More on this in the next part of this series.
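To see the solution user accounts that were created for your node, the dir-cli utility can list them. This is a sketch assuming the appliance default path and that you authenticate as the SSO administrator; each entry typically appears with a machine-specific suffix appended to the names above:

/usr/lib/vmware-vmafd/bin/dir-cli service list --login administrator@vsphere.local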

With all of this background detail out of the way, I think it is a good place to pause. Part 2 of this article will discuss the architecture decision points and some configuration details. On another note, I am actually going to be off to VMworld very soon and will be manning the VMware booth, as well as speaking at several sessions! It is unlikely I will get part 2 done before then, but if you have any questions look for me (downstairs at the Solution Exchange, in the VMware Booth, at the Technology Advisor station), and we can talk more on this and other topics with the other members of my team!


Jonathan McDonald is a Technical Solutions Architect for the Professional Services Engineering team. He currently specializes in developing architecture designs for core Virtualization and Software-Defined Storage, as well as providing best practices for upgrading and health checks for vSphere environments.

Managing Horizon Gold Images Across Multi-Site Deployments

By Dale Carter

One of the challenges when deploying VMware Horizon across multiple sites or data centers is how to keep your Gold/Master images in sync and how to get them from one site to another.

In this blog I will show you how you can utilize the new Content Library that is part of vSphere 6 to help manage this challenge.

There is a caveat to using the content library – it does not currently manage VM Snapshots. This blog will also show how you can work around this caveat to make the solution work for your deployments.

The following steps will show you how to create a Content Library and then use the Content Library to move your Gold/Master images between sites.

Create Your Content Library

  1. Connect to the vCenter Web Client on your home site
  2. From the home menu select Content Libraries

DCarter Gold Images 1

  3. Click Create new content library

DCarter Gold Images 2

  4. Give the library a Name, select the vCenter Server and click Next

DCarter Gold Images 3

  5. Select Local content library and check the box for Publish content library externally then click Next

DCarter Gold Images 4

  6. Select the datastore you want to save the content library in and click Next

DCarter Gold Images 5

  7. Click Finish
  8. Right-click the new Home library and click Edit Settings

DCarter Gold Images 6

  9. Click Copy Link and then OK

DCarter Gold Images 7

  10. Now connect to the web client of the remote vCenter
  11. From the home menu select Content Libraries
  12. Click Create new content library
  13. Give the library a Name, select the vCenter Server and click Next

DCarter Gold Images 8

  14. Select Subscribed content library, then paste the link you copied from the first library into the box and click Next

DCarter Gold Images 9

  15. Select the datastore to save the content library and click Next
  16. Click Finish

The Content Libraries are now created at each site and are ready to have content published to the library.

The next steps are to publish the Gold/Master image to the home library and then deploy that image in the remote data center.

Publishing the Gold/Master Image

The following steps will show you how to publish the Gold/Master image with the latest Snapshot to the content library.

  1. Connect to the vCenter Web Client on your home site
  2. Under VMs and Templates right-click the Gold/Master image and click Clone – Clone to Template in Library

DCarter Gold Images 10

  3. Give the new template a name, select the correct Library and click Next

DCarter Gold Images 11

The template will now be published to the Content Library and then synced to the remote library. You can speed up the sync by connecting to the remote library, clicking Actions and Synchronize Library.

DCarter Gold Images 12

Publish Template to Remote Site

The following steps will show you how to deploy the new Gold/Master image with the latest Snapshot to the remote site from the content library.

  1. Connect to the vCenter Web Client on your remote site
  2. From the home menu select Content Libraries
  3. Select the Library and click Related Objects

DCarter Gold Images 13

  4. Right-click the correct template and click New VM from This Template

DCarter Gold Images 14

  5. Confirm the name of the new VM and the location and click Next

DCarter Gold Images 15

  6. Select the correct resource and click Next

DCarter Gold Images 16

  7. Confirm and click Next
  8. Select the disk format and the datastore location and click Next

DCarter Gold Images 17

  9. Select the required Network to deploy the VM to and click Next

DCarter Gold Images 18

  10. Click Finish

The VM will now be deployed to the remote data center. However, there is one last step required before you can use Horizon to deploy new desktops: create a Snapshot for View Composer to use.

  1. Right-click the newly created VM and click Snapshots – Take Snapshot

DCarter Gold Images 19

  2. Give the Snapshot a name and click OK

DCarter Gold Images 20

 

The VM is now ready to be used by Horizon to deploy desktops with the latest Gold/Master image.


Dale Carter is a CTO Ambassador and VMware Senior Solutions Architect specializing in the EUC space, and has worked in IT for more than 20 years. He is also a VCP4-DT, VCP5-DT, VCAP-DTD, and VCAP-DTA.

Real World Sessions for Citrix Architects at VMworld San Francisco

By Nick Jeffries

With a week to go until VMworld US, I thought it would be a good idea to highlight some amazing sessions we have in store for those of you with an interest in server-based computing. Whether you need to roll up your sleeves and start migrating users off your XenApp 5.0 server soon but don't know where to start, or you're a VMware vSphere expert who needs tips on ensuring your architecture follows best practices for VMware Horizon 6 hosted apps, there will be something for everyone.

The first session is EUC5126 – Citrix Migration to VMware Horizon: How to Do It and What You Need to Know on Tuesday, September 1, at 1 PM.

We’ll cover everything you need to know for a seamless migration from Citrix XenApp and XenDesktop onto Horizon 6, including where to start and what to assess up front, the tools available to speed things up, and pitfalls to avoid to make sure you are successful. We’ll also run through a comparison of the various Citrix components in relation to Horizon 6 – great for those of you with years of Citrix experience to get a better understanding of how things line up.

Another great server-based computing-related session is EUC5516 – Delivering the Next Generation of Hosted Applications on Tuesday, September 1, at 5:30 PM.

For this panel discussion we have lined up a group of experts with decades of real-world EUC experience between them, which should make for a lively and interesting discussion. The panel will discuss how technologies such as UEM, App Volumes, ThinApp and solutions from F5 can be used to deliver the next generation of hosted applications and virtual desktops, whether you are deploying VMware Horizon or simply enhancing your existing XenApp infrastructure.

And if you want to explore things further or discuss your Server-Based Computing project in greater detail, come and have a chat with the team at the Professional Services Demo Station between 12–4 PM Wednesday.

See you at VMworld 2015 next week!


Nick has over 17 years of experience architecting and delivering successful enterprise-class solutions. He remains at the forefront of emerging EUC and virtualization technologies and how they can best be used to solve real business problems. Currently Nick is part of the global EUC Professional Services Engineering team, developing new and emerging solutions and services for VMware.

Cloning AppStacks and Modifying Scripts

By Jeremy Wheeler

Recently, while working onsite with a client, I discovered they needed local Windows accounts created upon AppStack attachment, as required by their application. The customer didn't want to go through the process of recreating the AppStack to achieve this. I was able to solve this problem by injecting scripts into the VMDK of the AppStack. These scripts are called at the time a volume is dynamically attached, or at various points during system startup and logon. They are executed in order, and only if present in the AppStack or Writable Volume; if a batch file is not present in the volume, it is skipped.

All batch files live in the root of the AppStack or Writable Volume. They are only accessible on a system without the App Volumes Agent installed.

For example, if you assign a volume to a Windows system and there is a user logged in, you would see the following steps—taken automatically—in chronological order:

  • prestartup.bat runs under Windows SYSTEM. If the volume is attached from boot, this will run when SVSERVICE starts.
  • startup.bat runs under Windows SYSTEM. If the volume is attached from boot, this will run when SVSERVICE starts.
  • shellstart.bat runs under Windows USER. If the volume is attached before the user logs in, this is called just before the Windows shell launches.
  • startup_postsvc.bat runs under Windows SYSTEM. This will only occur if there are services or drivers on the AppStack or Writable Volume.
  • logon_postsvc.bat runs under Windows USER. This will only occur if there are services or drivers on the AppStack or Writable Volume.
  • allvolattached.bat runs under Windows USER. If multiple volumes are attached at the same time (i.e., during user logon), then this is called only once.

These scripts may contain any scriptable actions and are used to customize Windows desktop and application actions at various points in time during the system startup and user login processes. This is to ensure AppStack and Writable Volume data will function appropriately and provide the user with the best possible experience.

These scripts are case sensitive and should be utilized and/or modified with caution.
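To tie this back to the scenario that prompted this post, the sketch below shows what a minimal startup.bat dropped into the root of the AppStack might look like for creating a local Windows account when the volume attaches. The account name and password are placeholders for illustration only, not values from the customer environment:

@echo off
REM startup.bat - runs under the SYSTEM account when the AppStack is attached
REM Create the local account the application expects, if it does not already exist
net user AppSvcUser P@ssw0rd1 /add
REM Add the new account to the local Users group
net localgroup Users AppSvcUser /add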

Batch File Details:

Optional wait times for each batch file may be configured on the agent machine. Wait times are defined in seconds, and all settings are stored as REG_DWORD registry entries under the following Windows registry path.

HKLM\SYSTEM\CurrentControlSet\services\svservice\Parameters

Registry keys may also be created on the agent machine using the command line interface.

Example:

reg.exe add HKLM\SYSTEM\CurrentControlSet\services\svservice\Parameters /v KeyValue /t REG_DWORD /d 60
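For example, to have the agent wait up to 60 seconds for startup.bat to finish, you would set the WaitStartup value described in the list below:

reg.exe add HKLM\SYSTEM\CurrentControlSet\services\svservice\Parameters /v WaitStartup /t REG_DWORD /d 60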

End User System Batch Files

The following is a list of each batch file used on the end-user system.

  • prestartup.bat – Launched under Windows SYSTEM when a volume is dynamically attached or during system startup prior to virtualization being activated.
    Optional wait time key: WaitPrestartup (default do not wait).
  • startup.bat – Launched under the Windows SYSTEM when a volume is dynamically attached or during system startup. (Right after the volume is virtualized)
    Optional wait time key: WaitStartup (default do not wait).
  • startup_postsvc.bat – Launched under the Windows SYSTEM after services have been started on the volume. This is only called when there are services on the volume that need to be started.
    Optional wait time key: WaitStartupPostSvc (default do not wait).
  • logon.bat – Launched under the Windows USER at logon and before Windows Explorer starts.
    Optional wait time key: WaitLogon (default wait until it finishes).
  • logon_postsvc.bat – Launched under the Windows USER after services have been started. This is only called when there are services on the volume that need to be started.
    Optional wait time key: WaitLogonPostsvc (default do not wait).
  • shellstart.bat – Launched under the Windows USER when a volume is dynamically attached or when Windows Explorer starts.
    Optional wait time key: WaitShellstart (default do not wait).
  • allvolattached.bat – Launched after all volumes have been processed (so if user has 3 AppStacks, this will be called after all 3 have loaded).
    Optional wait time key: WaitAllvolattached (default do not wait).
  • shellstop.bat – Launched under the Windows USER when Windows session logoff is initiated, but before Windows Explorer is terminated.
    Optional wait time key: WaitShellstop (default do not wait).
  • logoff.bat – Launched under the Windows USER during Windows session logoff when Windows Explorer has terminated, but before the volume has disconnected.
    Optional wait time key: WaitLogoff (default do not wait).
  • shutdown_presvc.bat – Launched under the Windows SYSTEM when the computer is being shut down before services have been stopped.
    Optional wait time key: WaitShutdownPresvc (default do not wait).
  • shutdown.bat – Launched under the Windows SYSTEM when the computer is being shut down after services have been stopped.
    Optional wait time key: WaitShutdown (default do not wait).

Provisioning System Batch Files

The following is a list of each batch file used on the provisioning system.

post_prov.bat – Launched at the end of provisioning to conduct any one-time steps that should be performed at the end of provisioning. Invoked at the point of clicking the provisioning complete pop-up while the volume is still virtualized.
Optional wait time key: WaitPostProv (default wait forever).

The steps needed to perform such an operation are outlined here.

App Volumes Update Method

**** Initial preparation

JWheeler Cloning AppStacks 1

  1. Select the source AppStack and click ‘Update.’
  2. Give the new AppStack a name, select the appropriate storage, append the path with the new AppStack name, and enter a description if needed.
  3. Select ‘Create’ and ‘Wait for completion’ or ‘Perform in the background.’
  4. Select ‘Update.’
  5. Once ‘Update’ is selected you will need to wait until the AppStack is cloned. Once completed refresh your App Volumes Manager interface.
  6. The new AppStack you created should be present and show the status of ‘Un-provisioned.’

**** Provision Updated AppStack

JWheeler Cloning AppStacks 2

  1. From the App Volumes Manager interface, select ‘AppStacks’.
  2. Select your newly created AppStack (the one you just modified).
  3. Select ‘Provision.’
  4. Enter the name of the provisioning virtual machine. The provisioning machine is typically a clean virtual machine with patches and limited applications installed.
  5. Select the provisioning virtual machine.
  6. Select ‘Provision.’
  7. Select ‘Start Provisioning.’
  8. Once the AppStack is attached to your provisioning machine open the console to that virtual machine.
  9. You will be greeted with a dialog box that says you’re now in provisioning mode.
  10. Select Explorer and change the view to show hidden files/folders.
  11. Navigate to “C:\SnapVolumesTemp\MountPoints.”

Note: Under MountPoints you will discover links. If you go into each link you will find a set of files such as batch scripts (startup.bat, etc.) You can make your changes at this point.

  12. Once you complete your changes, re-hide hidden files/folders.
  13. Select 'OK' on the App Volumes dialog to finish the capture process.
  14. Select 'Yes' to the installation complete dialog.
  15. Select 'OK' to the next dialog box, which will reboot the virtual machine.
  16. Once the provisioning machine has rebooted, log in to complete the process.
  17. Select 'OK' at the 'Provisioning successful' dialog box.

**** Editing an AppStack VMDK outside the Update option

JWheeler Cloning AppStacks 3

  1. Select a virtual machine that does NOT have App Volumes Agent installed.
  2. Edit the settings of the virtual machine and add a drive. (Edit Settings > Add… > Hard Disk > Use an existing virtual disk)
  3. Navigate through the storage tree to your newly created AppStack and select the VMDK (e.g., \cloudvolumes\apps\<your new app>\<your new app>.vmdk)
  4. Select ‘OK’ on the virtual machine settings interface to commit changes.
  5. You should now see a new drive letter representing the new AppStack VMDK. Proceed to make any customizations you need.
  6. Once finished, edit the settings again of the virtual machine (you can do this step with the virtual machine powered-on or off).
  7. Select the newly added hard disk (the new AppStack VMDK you added).
  8. Select the button ‘Remove.’
  9. Select the button ‘Remove from virtual machine.’
  10. Select ‘OK’ to commit changes to the virtual machine.

Note: If you receive an error message that the VMDK is in shared-mode you can do one of two options to resolve this.

  • Select ‘Rescan’ in the App Volumes Manager portal > Volumes > AppStacks tab.
  • Delete the .metadata file in the folder where the VMDK resides on the datastore. This option is typically needed if you clone the AppStack from the datastore side and don't use the Update method as outlined above.

Your AppStack is now ready to test.


Jeremy Wheeler is an experienced senior consultant and architect for VMware's Professional Services Organization, End-User Computing, specializing in the VMware Horizon Suite product line and vRealize products such as vROps and Log Insight. Jeremy has over 18 years of experience in the IT industry. In addition to his past experience, Jeremy has a passion for technology and thrives on educating customers. Jeremy has 7 years of hands-on virtualization experience deploying full life-cycle solutions using VMware, Citrix, and Hyper-V. Jeremy also has 16 years of experience in computer programming in various languages ranging from basic scripting to C, C++, PERL, .NET, SQL, and PowerShell.

Jeremy Wheeler has received acclaim from several clients for his in-depth and varied technical experience and exceptional hands-on customer satisfaction skills. In February 2013, Jeremy received VMware's Spotlight award for his outstanding persistence and dedication to customers, and he was nominated again in October 2013.

VMworld Preview: Just Because You COULD, Doesn’t Mean You SHOULD – VMware vSphere 6.0 Architecture

By Jonathan McDonald

Have you noticed the ever-changing VMware vSphere architecture with the introduction of new services and technologies? If you answered yes, you should already know that the architectural configuration details in an environment become instrumental to the Software-Defined Data Center (SDDC). The foundation of the SDDC starts with vSphere; if architected correctly, it will be much more than just a platform for the environment.

At VMworld in San Francisco I will discuss lessons learned from our VMware Professional Services team. This discussion will bring real-world experience to light so that common issues can be addressed prior to the deployment of the solution, rather than after the fact.

Here is an example of what we will dive into: There are different architectures for the Platform Services Controller, from an embedded node to a maximum-sized configuration, as shown in the figure below.

JMcDonald VMworld Blog

To be able to use Enhanced Linked Mode, however, it is important to understand the correct and supported architectures so that a design can be configured in a supported manner. This will ensure that the chances of a failure are minimized from the beginning.

To learn more, attend my session, INF4712, on Wednesday, September 2, 8:00–9:00 AM, or on Thursday, September 3, at 1:30 PM.


Jonathan McDonald is a Technical Solutions Architect for the Professional Services Engineering team. He currently specializes in developing architecture designs for core Virtualization and Software-Defined Storage, as well as providing best practices for upgrading and health checks for vSphere environments.

VMworld 2015 US Sneak Peek: Successful Virtual SAN Evaluation/Proof-Of-Concepts

By Julienne Pham

This is the first in a series of previews of our VMware Professional Services speaking sessions at VMworld 2015 US starting August 30th in San Francisco. Our Technical Architects will present “deep-dive” sessions in their areas of expertise. Read these previews, register early and enjoy.

Sneak Peek of Session STO4572: Successful Virtual SAN Evaluation/Proof-Of-Concepts

This is an update to last year’s Virtual SAN proof-of-concept (POC) talk. A lot has changed in the last year, and the idea of this session is to fill you in on all the potential “gotchas” you might encounter when trying to evaluate VMware Virtual SAN.

Cormac Hogan, Corporate Storage Architect, and Julienne Pham, Technical Solution Architect of VMware, will cover everything you need to know, including how to conduct various failure scenarios, and get the best performance. Thinking about deploying Virtual SAN? Then this session is for you.

This session will share key tips on how to conduct a successful Virtual SAN proof-of-concept. It will show you how to correctly set up a Virtual SAN environment (hosts, storage and networking), verify it is operating correctly, and then test the full range of Virtual SAN functionality. This session will also highlight how to verify that VM Storage Policies are working correctly, as they are an integral part of SDS and Virtual SAN.

We will also discuss how Virtual SAN handles failures, and how to test whether it is handling events correctly. In addition, the session will cover numerous monitoring tools that can be used during a POC, such as the Ruby vSphere Console, the VSAN Observer web-based analysis tool, and the new Virtual SAN Health Service plug-in. After attending this session, you will be empowered to implement your own Virtual SAN POC.

 Learn more at VMworld 2015 US in San Francisco

STO4572: Successful Virtual SAN Evaluation/Proof-Of-Concepts

Monday, Aug. 31, 8:30 – 9:30 AM

 Speakers:

Cormac Hogan, Corporate Storage Architect, VMware

Julienne Pham, Technical Solution Architect, VMware


Julienne Pham is a Technical Solution Architect for the Professional Services Engineering team. She specialises in SRM and core storage, and her focus is on VIO and the BCDR space.

Copying App Volumes AppStacks Between Environments That Use vSAN Storage

By Jeffrey Davidson

There is an issue with copying App Volumes AppStacks (VMDK files) using Secure Copy (SCP) from one environment to another when VSAN storage is in use. This is not an App Volumes problem; it is related to the way VSAN stores VMDK data.

Clients will often want to create AppStacks in a test environment, then copy those AppStacks to a production environment, and finally import them into App Volumes. In situations where any of those environments use VSAN storage, you will not be able to copy (SCP) AppStacks (specifically VMDK files) between environments.

In this blog entry I will discuss a workaround to this issue, using an example in which the client has two VSAN environments (DEV and PROD), and needs to copy VMDK files between them.

The VMDK files created by App Volumes are nothing special and traditionally consist of two files.

What we normally identify as <filename>.vmdk is a type of header/metadata file, meaning it only holds information regarding the geometry of the virtual disk and, as such, references a file that contains the actual data.

The file referenced is often called a "flat" file; this file contains the actual data of the VMDK. We can identify this file by its naming pattern of <filename>-flat.vmdk.

On traditional block level storage these two files are normally stored together in the same folder, as shown in the example screenshot below.

JDavidson1

But VSAN storage is different; if you look at the contents of the "header" file you see something similar to the screenshot below. In this screenshot, the section in red is normally a reference to a "flat" file (example: Adobe_Acrobat_Reader-flat.vmdk). In the case where VSAN is the storage platform, we see something different: the screenshot below shows a reference to a VSAN device.

JDavidson2

VSAN storage employs object-level storage, which is different from traditional block-level storage. The VSAN objects are managed through a storage policy which, for example, can allow for greater redundancy for some virtual machines over others. Because the reference in the VMDK file points to a VSAN DOM object, it cannot be copied through traditional means (SCP).

To work around this issue you will need traditional block-level storage which acts as a “middle man” to allow the SCP copy of VMDK files between environments. You will also need SSH access enabled on one host in each environment.

The first step is to clone the VMDK you wish to copy from the VSAN volume to the traditional storage volume. Once on traditional storage you will be able to copy (SCP) the two VMDK files directly to a host in a different VSAN environment. After you have copied (SCP) the VMDK files to a destination VSAN environment, you will need to perform a clone action to re-integrate the VMDK with VSAN as a storage object, so it can be protected properly with VSAN.

The diagram below is an overview of the process to copy AppStack (VMDK) files between VSAN environments.

JDavidson3

The example below shows the commands required to copy an App Volumes AppStack (VMDK) between environments that use VSAN storage. Before executing these commands, you should create a staging area in each environment where AppStacks can be copied temporarily before being moved between hosts and re-integrated into the destination's VSAN storage.

For example:

In the source environment, create the folder <path to block level storage>/AppVolumes_Staging

In the destination environment, create the folder <path to cloud volumes root folder>/cloudvolumes/staging

Step 1:

SSH into the host where the AppStack currently resides.

Execute the following command to clone the AppStack to block-level storage. Note that after you execute this command there are two files on the block-level storage. One is the header file, and the other is the “flat” file, which was previously integrated with VSAN as a storage object.

vmkfstools -d thin -i <VSAN path to App Stack>/cloudvolumes/apps/<filename>.vmdk <path to block level storage>/AppVolumes_Staging/<filename>.vmdk

Example:

vmkfstools -d thin -i /vmfs/volumes/vsan:4a65d9cbe47d44af-80f530e9e2b98ac5/76f05055-98b3-07ab-ef94-002590fd9036/apps/<filename>.vmdk /vmfs/volumes/54e5e55d-97561a60-50de-002590fd9036/AppVolumes_Staging/<filename>.vmdk


Step 2:

Execute the following commands to copy (SCP) an AppStack from one environment to another.

scp <path to vmdk clone on block level storage>/<filename>.vmdk root@<esxi mgt IP>:<path to staging folder>/<filename>.vmdk

scp <path to vmdk “flat” file clone on block level storage>/<filename>-flat.vmdk root@<esxi mgt IP>:<path to staging folder>/<filename>-flat.vmdk

Example:

scp /vmfs/volumes/54e5e55d-97561a60-50de-002590fd9036/AppVolumes_Staging/<filename>.vmdk root@10.10.10.10:/vmfs/volumes/vsan:265d91daeb2841db-82d3d8026326af8e/6efbac55-f2f7-f86a-033f-0cc47a59dc1c/Staging/<filename>.vmdk

scp /vmfs/volumes/54e5e55d-97561a60-50de-002590fd9036/AppVolumes_Staging/<filename>-flat.vmdk root@10.10.10.10:/vmfs/volumes/vsan:265d91daeb2841db-82d3d8026326af8e/6efbac55-f2f7-f86a-033f-0cc47a59dc1c/Staging/<filename>-flat.vmdk


Step 3:

Run the commands below to delete the AppStack from the staging folder on the source environment.

rm <path to staging folder>/<filename>.vmdk

rm <path to staging folder>/<filename>-flat.vmdk

Example:

rm /vmfs/volumes/54e5e55d-97561a60-50de-002590fd9036/AppVolumes_Staging/<filename>.vmdk

rm /vmfs/volumes/54e5e55d-97561a60-50de-002590fd9036/AppVolumes_Staging/<filename>-flat.vmdk


Step 4:

SSH into the host where the AppStack has been copied to. In this example the host IP address is 10.10.10.10.

Run the command below to clone the copied AppStack from the staging folder to the App Volumes “apps” folder, and re-integrate the VMDK into VSAN as a storage object.

vmkfstools -d thin -i <path to staging folder>/<filename>.vmdk <path to cloud volumes root folder>/cloudvolumes/apps/<filename>.vmdk

Example:

vmkfstools -d thin -i /vmfs/volumes/vsan:265d91daeb2841db-82d3d8026326af8e/6efbac55-f2f7-f86a-033f-0cc47a59dc1c/Staging/<filename>.vmdk /vmfs/volumes/vsan:265d91daeb2841db-82d3d8026326af8e/6efbac55-f2f7-f86a-033f-0cc47a59dc1c/apps/<filename>.vmdk


Step 5:

Run the commands below to delete the AppStack from the staging folder on the destination environment.

rm <path to staging folder>/<filename>.vmdk

rm <path to staging folder>/<filename>-flat.vmdk

Example:

rm /vmfs/volumes/vsan:265d91daeb2841db-82d3d8026326af8e/6efbac55-f2f7-f86a-033f-0cc47a59dc1c/Staging/<filename>.vmdk

rm /vmfs/volumes/vsan:265d91daeb2841db-82d3d8026326af8e/6efbac55-f2f7-f86a-033f-0cc47a59dc1c/Staging/<filename>-flat.vmdk

After completing these steps, you will have successfully copied a VMDK from one VSAN storage platform to another.

App Volumes also creates a “metadata” file during the creation of an AppStack, as shown in the screenshot below.

JDavidson4

The "metadata" file is a text file and should be copied to the destination environment so the AppStack (VMDK) can be imported into the destination App Volumes instance. Because it is a plain text file, it can be copied (SCP) directly, without the cloning process and block-level storage described in steps 1–5 above.
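As a sketch, and assuming the metadata file sits next to the AppStack in the apps folder with a <filename>.vmdk.metadata name (check the exact name in your datastore browser), the copy looks like the earlier SCP commands:

scp /vmfs/volumes/vsan:4a65d9cbe47d44af-80f530e9e2b98ac5/76f05055-98b3-07ab-ef94-002590fd9036/apps/<filename>.vmdk.metadata root@10.10.10.10:/vmfs/volumes/vsan:265d91daeb2841db-82d3d8026326af8e/6efbac55-f2f7-f86a-033f-0cc47a59dc1c/apps/<filename>.vmdk.metadata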


Jeffrey Davidson, Senior Consultant, VMware EUC. Jeffrey has over 15 years of IT experience and joined VMware in 2014. He is also a VCP5-DCV and VCP5-DT. He is a strong advocate of virtualization technologies, focusing on operational readiness with customers.

EUC Professional Services Engineering (PSE) and VMworld

By Dale Carter

VMworld in San Francisco is approaching very quickly. It’s a must-attend event for VMware customers, but there is a lot to take in, so I thought I would take a few minutes to highlight some key activities led by my team of End User Computing (EUC) consultants and architects that you won’t want to miss.

Our organization is called Professional Services Engineering (PSE) and is part of the Global Technical and Professional Services Organization. As VMware’s EUC subject matter experts, our team works with some of our largest EUC customers worldwide. From our experiences with these large organizations, our team is responsible for creating VMware’s EUC methodologies, which are then leveraged by our global EUC professional services organization.

VMworld Sessions Delivered by the PSE Team:

EUC4630 – Managing Users: A Deep Dive into VMware User Environment Manager

Managing end-user profiles can be challenging, and often the bane of a desktop administrator’s existence. To the rescue comes VMware’s User Environment Manager. In this session, attendees will be provided with a deep dive into UEM, including an architectural overview, available settings and configurations, and user environment management options. The session will also outline UEM deployment considerations and best practices, as well as discuss how to integrate UEM into a Horizon 6 environment. Attendees will even learn how UEM can be used to manage physical desktops.

EUC5516 – Delivering the Next Generation of Hosted Applications

VMware continues to innovate and evolve our EUC products with the introduction of Hosted Applications with Horizon 6, VMware UEM, App Volumes and Workspace. Join our experienced experts for a panel discussion on how VMware technologies can be used to support your existing Server Based Computing (SBC) infrastructure or move away from it altogether onto a platform that addresses what people want, not just what a published application needs.

EUC4437 – Horizon View Troubleshooting – Looking Under the Hood

Attend one of the most popular EUC sessions from previous VMworlds! Learn from VMware’s best field troubleshooters on how to identify common issues and key problem domains within VMware Horizon View.

EUC4509 – Architecting Horizon for VSAN, the VCDX Way – VMware on VMware

VMware Horizon is a proven desktop virtualization solution that has been deployed around the world. Balancing the performance and cost of a storage solution for Horizon can be difficult and affects the overall return on investment. VMware Virtual SAN has provided architects with a new weapon in the battle for desktop virtualization. VSAN allows architects to design a low-cost, high-performance hybrid solution of solid-state and spinning disks, or go all-flash for ultimate desktop performance. Learn from two Double VCDXs on how to go about architecting your Horizon on VSAN solution to ensure it will provide the levels of performance your users need, with management simplicity that will keep your administrators happy and a cost that will ensure your project will be a success.

EUC5126 – Citrix Migration to VMware Horizon: How to Do It and What You Need to Know

Are you planning a migration from Citrix XenApp or XenDesktop to VMware Horizon? Or simply interested in learning how to do it? This is the session for you! Come hear from the architects of VMware’s Citrix migration strategies and services as they break down different approaches to migration using real-world case studies. We will dive deep into how to evaluate the state of the Citrix environment, assess system requirements, design the Horizon infrastructure, and then plan and perform the migration. By the end of the session you will know all the best practices, tips, tricks and tools available to make sure your migration from Citrix to VMware Horizon is a complete success!

VMworld Booth in the Solutions Exchange

We can also be found at the Professional Services demo station in the VMware booth Wednesday from 12–4 PM. Come by with your EUC questions or just discuss any EUC solutions you are looking to implement in your organization. I will be there along with my colleague Nick Jeffries.

VMworld Hands On Labs

Finally, my colleague Jack McMichaels and I will both be working in the VMworld Hands On Labs this year. The Hands On Labs are a great way to come and try all of the VMware technologies. If you have never attended a Hands On Lab at VMworld then I would highly encourage you to come and give them a go. They are a great way to learn if you have an hour or two to spare in your agenda.

See you in San Francisco!


Dale Carter is a CTO Ambassador and VMware Senior Solutions Architect specializing in the EUC space, and has worked in IT for more than 20 years. He is also a VCP4-DT, VCP5-DT, VCAP-DTD, and VCAP-DTA.

VMware User Environment Manager and Application Profile Settings

By Jeffrey Davidson

There has been a lot of focus on VMware User Environment Manager (UEM, formerly Immidio) in recent months, following its acquisition and release as a VMware product.

In this blog entry I will walk through the process of capturing Google Chrome settings and incorporating that configuration into UEM.

UEM can be deployed with configuration items for common applications like Microsoft Office, which saves a lot of work. For applications not included in UEM, you can use the UEM Application Profiler to capture specific configuration settings. Deploying UEM is generally not a time-consuming task, though it does require some thought and planning. A majority of your time will be spent configuring UEM, specifically, applications you wish to add to your UEM environment.

Windows applications generally store configuration information in the registry and/or in files on the computer. The UEM Application Profiler "watches" an application and captures the locations where its settings are stored; this process is referred to as "application profiling." The UEM Application Profiler can also capture specific settings, which can then be stored in UEM and enforced at logon. Today we will focus on capturing, or "profiling," a new application and bringing that configuration into UEM.

There are a few things I’d recommend you do if you plan to profile applications.

  1. Install at least one desktop with the applications you wish to profile. We will refer to these systems as the UEM capture systems. In larger environments you may wish to deploy additional capture systems.
  2. Install the UEM Application Profiler on the capture systems. It is important to note that you cannot install the UEM client on the same system as the UEM Application Profiler. The UEM Application Profiler installation files are found in the "Optional Components" folder of the VMware-UEM-x.x.x.zip file.

JDavidson UEM 1

  3. Take a snapshot of the capture systems in case you need to roll back.

We are now ready to begin profiling applications. Start by launching the UEM Application Profiler from the Start Menu on your capture system. You will see a blank "Flex Config File"; this is the file that will contain the application configuration once profiling is complete.

JDavidson UEM 2

It is helpful to have an understanding of the application before capture begins. I recommend researching where an application saves its configuration data before starting the profiling process; it will be time well spent.

In the case of Google Chrome, we know the application stores much of its configuration and settings in files in the user profile (C:\Users\username\AppData\Local\Google).

UEM Application Profiler has built-in registry and file system exclusions that prevent the Application Profiler from capturing data unrelated to the application being profiled. In order to successfully profile an application's behavior, you may need to modify these exclusion paths if the application saves configuration data in one of the excluded locations.

In the case of Google Chrome, we know the application saves data in the local AppData folder, so we remove that exclusion to let the UEM Application Profiler capture Chrome's behavior in this location.

This is done by selecting “Manage Exclusions” above “File System” and removing the <LocalAppData>\* line as shown in the screenshot below.

JDavidson UEM 3

To begin the profile process, click "Start Session," navigate to the location of Google Chrome, and click "OK." In order to profile an application, the UEM Application Profiler must launch the application.

JDavidson UEM 4

It is generally sufficient to modify a few common settings, unless there is a specific configuration or behavior you need to capture; in the case of Google Chrome, a few common changes will do. Once you've made these changes, close Google Chrome and choose "Stop Analysis" in the Analyzing Application dialog box.

JDavidson 5

After the profile process is complete you will see that the previously blank Flex Config File contains configuration data that can be saved and integrated into your UEM implementation. In some cases it may be necessary to edit the Flex Config File in order to remove any unwanted configuration paths. The image below shows the correct Flex Config for Google Chrome.

JDavidson UEM 6
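For reference, a minimal Flex config for Chrome usually ends up looking something like the sketch below. The section names follow the standard UEM config file format, but treat the exact paths as assumptions and keep whatever the Profiler actually captured in your environment; the cache exclusion is a common optional addition to keep the profile archive small:

[IncludeFolderTrees]
<LocalAppData>\Google\Chrome\User Data\Default

[ExcludeFolderTrees]
<LocalAppData>\Google\Chrome\User Data\Default\Cache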

To save the Flex Configuration click the save button and choose “Save Config File,” then navigate to the UNC path of your UEM config share.

I like to create a separate folder for each application to keep the folder structure clean. In this case it would look something like this:

\\UEMserver\uemconfig\general\Applications\Google Chrome\Google Chrome.ini

I recommend saving the new configuration to a UEM test environment first. The settings can be validated and changed, if necessary, before moving to a production UEM environment.

JDavidson UEM 7

This saves the configuration from the profile process to the UEM environment. The next time you open or refresh the UEM Management Console application list, you will see Google Chrome listed as an application.

JDavidson UEM 8

UEM users who log in after the new configuration has been added will have their Google Chrome settings persist across sessions.

The goal and benefit of UEM is capturing application-specific settings and maintaining that application experience across heterogeneous desktop environments without conflicts.


Jeffrey Davidson, Senior Consultant, VMware EUC. Jeffrey has over 15 years of IT experience and joined VMware in 2014. He is also a VCP5-DCV and VCP5-DT. He is a strong advocate of virtualization technologies, focusing on operational readiness with customers.