
Monthly Archives: July 2015

Copying App Volumes AppStacks Between Environments That Use vSAN Storage

By Jeffrey Davidson

There is an issue copying App Volumes AppStacks (VMDK files) with Secure Copy (SCP) from one environment to another when VSAN storage is used. This is not an App Volumes problem; it is related to the way VSAN stores VMDK data.

Clients will often want to create AppStacks in a test environment, then copy those AppStacks to a production environment, and finally import them into App Volumes. In situations where any of those environments use VSAN storage, you will not be able to copy (SCP) AppStacks (specifically VMDK files) between environments.

In this blog entry I will discuss a workaround to this issue, using an example in which the client has two VSAN environments (DEV and PROD), and needs to copy VMDK files between them.

The VMDK files created by App Volumes are nothing special; they traditionally consist of two files.

What we normally identify as <filename>.vmdk is a header/metadata file, meaning it only holds information about the geometry of the virtual disk and, as such, references a file that contains the actual data.

The referenced file is often called a “flat” file; it contains the actual data of the VMDK. We can identify it by its naming pattern, <filename>-flat.vmdk.

On traditional block level storage these two files are normally stored together in the same folder, as shown in the example screenshot below.

JDavidson1

But VSAN storage is different; if you look at the contents of the “header” file you see something similar to the screenshot below. The section in red is normally a reference to a “flat” file (for example, Adobe_Acrobat_Reader-flat.vmdk). When VSAN is the storage platform, however, we see something different: a reference to a VSAN device.

JDavidson2

VSAN storage employs object-level storage, which is different from traditional block-level storage. The VSAN objects are managed through a storage policy which, for example, can allow for greater redundancy for some virtual machines over others. Because the reference in the VMDK file points to a VSAN DOM object, it cannot be copied through traditional means (SCP).

To work around this issue you will need traditional block-level storage which acts as a “middle man” to allow the SCP copy of VMDK files between environments. You will also need SSH access enabled on one host in each environment.
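If SSH is not already running, it can be enabled from the vSphere Client, or directly from the host console. A minimal sketch using ESXi’s vim-cmd utility (assuming direct console access to the host):

```
vim-cmd hostsvc/enable_ssh   # allow the SSH service to run on the host
vim-cmd hostsvc/start_ssh    # start the SSH service immediately
```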

The first step is to clone the VMDK you wish to copy from the VSAN volume to the traditional storage volume. Once on traditional storage you will be able to copy (SCP) the two VMDK files directly to a host in a different VSAN environment. After you have copied (SCP) the VMDK files to a destination VSAN environment, you will need to perform a clone action to re-integrate the VMDK with VSAN as a storage object, so it can be protected properly with VSAN.
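In command form, the three phases of this workaround look like this (angle brackets mark placeholders, as in the detailed steps later in this post):

```
# Phase 1 - on a source host: clone the AppStack out of VSAN onto block-level storage
vmkfstools -d thin -i <vsan path>/<filename>.vmdk <block storage path>/<filename>.vmdk

# Phase 2 - copy both resulting files to a host in the destination environment
scp <block storage path>/<filename>.vmdk root@<destination host>:<staging path>/<filename>.vmdk
scp <block storage path>/<filename>-flat.vmdk root@<destination host>:<staging path>/<filename>-flat.vmdk

# Phase 3 - on the destination host: clone back into VSAN as a storage object
vmkfstools -d thin -i <staging path>/<filename>.vmdk <vsan path>/<filename>.vmdk
```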

The diagram below is an overview of the process to copy AppStack (VMDK) files between VSAN environments.

JDavidson3

The example below shows the commands required to copy an App Volumes AppStack (VMDK) between environments that use VSAN storage. Before executing these commands, you should create a staging area in each environment where AppStacks can be held temporarily before being copied between hosts and re-integrated into the destination's VSAN storage.

For example:

In the source environment, create the folder <path to block level storage>/AppVolumes_Staging

In the destination environment, create the folder <path to cloud volumes root folder>/cloudvolumes/staging

Step 1:

SSH into the host where the AppStack currently resides.

Execute the following command to clone the AppStack to block-level storage. Note that after you execute this command there are two files on the block-level storage. One is the header file, and the other is the “flat” file, which was previously integrated with VSAN as a storage object.

vmkfstools -d thin -i <VSAN path to App Stack>/cloudvolumes/apps/<filename>.vmdk <path to block level storage>/AppVolumes_Staging/<filename>.vmdk

Example:

vmkfstools -d thin -i /vmfs/volumes/vsan:4a65d9cbe47d44af-80f530e9e2b98ac5/76f05055-98b3-07ab-ef94-002590fd9036/apps/<filename>.vmdk /vmfs/volumes/54e5e55d-97561a60-50de-002590fd9036/AppVolumes_Staging/<filename>.vmdk


Step 2:

Execute the following commands to copy (SCP) an AppStack from one environment to another.

scp <path to vmdk clone on block level storage>/<filename>.vmdk root@<esxi mgt IP>:<path to staging folder>/<filename>.vmdk

scp <path to vmdk “flat” file clone on block level storage>/<filename>-flat.vmdk root@<esxi mgt IP>:<path to staging folder>/<filename>-flat.vmdk

Example:

scp /vmfs/volumes/54e5e55d-97561a60-50de-002590fd9036/AppVolumes_Staging/<filename>.vmdk root@10.10.10.10:/vmfs/volumes/vsan:265d91daeb2841db-82d3d8026326af8e/6efbac55-f2f7-f86a-033f-0cc47a59dc1c/Staging/<filename>.vmdk

scp /vmfs/volumes/54e5e55d-97561a60-50de-002590fd9036/AppVolumes_Staging/<filename>-flat.vmdk root@10.10.10.10:/vmfs/volumes/vsan:265d91daeb2841db-82d3d8026326af8e/6efbac55-f2f7-f86a-033f-0cc47a59dc1c/Staging/<filename>-flat.vmdk


Step 3:

Run the commands below to delete the AppStack from the staging folder on the source environment.

rm <path to staging folder>/<filename>.vmdk

rm <path to staging folder>/<filename>-flat.vmdk

Example:

rm /vmfs/volumes/54e5e55d-97561a60-50de-002590fd9036/AppVolumes_Staging/<filename>.vmdk

rm /vmfs/volumes/54e5e55d-97561a60-50de-002590fd9036/AppVolumes_Staging/<filename>-flat.vmdk


Step 4:

SSH into the host to which the AppStack was copied. In this example the host IP address is 10.10.10.10.

Run the command below to clone the copied AppStack from the staging folder to the App Volumes “apps” folder, and re-integrate the VMDK into VSAN as a storage object.

vmkfstools -d thin -i <path to staging folder>/<filename>.vmdk <path to cloud volumes root folder>/cloudvolumes/apps/<filename>.vmdk

Example:

vmkfstools -d thin -i /vmfs/volumes/vsan:265d91daeb2841db-82d3d8026326af8e/6efbac55-f2f7-f86a-033f-0cc47a59dc1c/Staging/<filename>.vmdk /vmfs/volumes/vsan:265d91daeb2841db-82d3d8026326af8e/6efbac55-f2f7-f86a-033f-0cc47a59dc1c/apps/<filename>.vmdk


Step 5:

Run the commands below to delete the AppStack from the staging folder on the destination environment.

rm <path to staging folder>/<filename>.vmdk

rm <path to staging folder>/<filename>-flat.vmdk

Example:

rm /vmfs/volumes/vsan:265d91daeb2841db-82d3d8026326af8e/6efbac55-f2f7-f86a-033f-0cc47a59dc1c/Staging/<filename>.vmdk

rm /vmfs/volumes/vsan:265d91daeb2841db-82d3d8026326af8e/6efbac55-f2f7-f86a-033f-0cc47a59dc1c/Staging/<filename>-flat.vmdk
After completing these steps, you will have successfully copied a VMDK from one VSAN storage platform to another.

App Volumes also creates a “metadata” file during the creation of an AppStack, as shown in the screenshot below.

JDavidson4

The “metadata” file is a plain-text file and should be copied to the destination environment so the AppStack (VMDK) can be imported into the destination App Volumes instance. Because it is a text file, it can be copied (SCP) directly, without the cloning process and block-level storage described in steps 1–5 above.
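For example, assuming the usual <filename>.metadata naming alongside the VMDK, a single SCP is enough (the IP address and path placeholders follow the examples above):

```
scp <path to cloud volumes root folder>/cloudvolumes/apps/<filename>.metadata root@10.10.10.10:<path to cloud volumes root folder>/cloudvolumes/apps/<filename>.metadata
```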


Jeffrey Davidson, Senior Consultant, VMware EUC. Jeffrey has over 15 years of IT experience and joined VMware in 2014. He is also a VCP5-DCV and VCP5-DT. He is a strong advocate of virtualization technologies, focusing on operational readiness with customers.

EUC Professional Services Engineering (PSE) and VMworld

By Dale Carter

VMworld in San Francisco is approaching very quickly. It’s a must-attend event for VMware customers, but there is a lot to take in, so I thought I would take a few minutes to highlight some key activities led by my team of End User Computing (EUC) consultants and architects that you won’t want to miss.

Our organization is called Professional Services Engineering (PSE) and is part of the Global Technical and Professional Services Organization. As VMware’s EUC subject matter experts, our team works with some of our largest EUC customers worldwide. From our experiences with these large organizations, our team is responsible for creating VMware’s EUC methodologies, which are then leveraged by our global EUC professional services organization.

VMworld Sessions Delivered by the PSE Team:

EUC4630 – Managing Users: A Deep Dive into VMware User Environment Manager

Managing end-user profiles can be challenging, and often the bane of a desktop administrator’s existence. To the rescue comes VMware’s User Environment Manager. In this session, attendees will be provided with a deep dive into UEM, including an architectural overview, available settings and configurations, and user environment management options. The session will also outline UEM deployment considerations and best practices, as well as discuss how to integrate UEM into a Horizon 6 environment. Attendees will even learn how UEM can be used to manage physical desktops.

EUC5516 – Delivering the Next Generation of Hosted Applications

VMware continues to innovate and evolve our EUC products with the introduction of Hosted Applications with Horizon 6, VMware UEM, App Volumes and Workspace. Join our experienced experts for a panel discussion on how VMware technologies can be used to support your existing Server Based Computing (SBC) infrastructure, or to move away from it altogether onto a platform that addresses what people want, not just what a published application needs.

EUC4437 – Horizon View Troubleshooting – Looking Under the Hood

Attend one of the most popular EUC sessions from previous VMworlds! Learn from VMware’s best field troubleshooters on how to identify common issues and key problem domains within VMware Horizon View.

EUC4509 – Architecting Horizon for VSAN, the VCDX Way – VMware on VMware

VMware Horizon is a proven desktop virtualization solution that has been deployed around the world. Balancing the performance and cost of a storage solution for Horizon can be difficult and affects the overall return on investment. VMware Virtual SAN has provided architects with a new weapon in the battle for desktop virtualization. VSAN allows architects to design a low-cost, high-performance hybrid solution of solid-state and spinning disks, or go all-flash for ultimate desktop performance. Learn from two Double VCDXs on how to go about architecting your Horizon on VSAN solution to ensure it will provide the levels of performance your users need, with management simplicity that will keep your administrators happy and a cost that will ensure your project will be a success.

EUC5126 – Citrix Migration to VMware Horizon: How to Do It and What You Need to Know

Are you planning a migration from Citrix XenApp or XenDesktop to VMware Horizon? Or simply interested in learning how to do it? This is the session for you! Come hear from the architects of VMware’s Citrix migration strategies and services as they break down different approaches to migration using real-world case studies. We will dive deep into how to evaluate the state of the Citrix environment, assess system requirements, design the Horizon infrastructure, and then plan and perform the migration. By the end of the session you will know all the best practices, tips, tricks and tools available to make sure your migration from Citrix to VMware Horizon is a complete success!

VMworld Booth in the Solutions Exchange

We can also be found at the Professional Services demo station in the VMware booth Wednesday from 12–4 PM. Come by with your EUC questions, or just to discuss any EUC solutions you are looking to implement in your organization. I will be there along with my colleague Nick Jeffries.

VMworld Hands On Labs

Finally, my colleague Jack McMichaels and I will both be working in the VMworld Hands On Labs this year. The Hands On Labs are a great way to try all of the VMware technologies, and a great way to learn if you have an hour or two to spare in your agenda. If you have never attended a Hands On Lab at VMworld, I would highly encourage you to come and give them a go.

See you in San Francisco!


Dale is a Senior Solutions Architect and member of the CTO Ambassadors. Dale focuses on the End User Computing space, where he has become a subject matter expert in a number of VMware products. Dale has more than 20 years of experience working in IT, having started his career in Northern England before moving to Spain and finally to the USA. He currently holds a number of certifications, including VCP-DV, VCP-DT, VCAP-DTD and VCAP-DTA.

For updates you can follow Dale on Twitter @vDelboy.

VMware User Environment Manager and Application Profile Settings

By Jeffrey Davidson

There has been a lot of focus on VMware User Environment Manager (UEM, formerly Immidio) in recent months since its acquisition and release under the VMware name.

In this blog entry I will walk through the process of capturing Google Chrome settings and incorporating that configuration into UEM.

UEM can be deployed with configuration items for common applications like Microsoft Office, which saves a lot of work. For applications not included in UEM, you can use the UEM Application Profiler to capture specific configuration settings. Deploying UEM is generally not a time-consuming task, though it does require some thought and planning. A majority of your time will be spent configuring UEM, specifically the applications you wish to add to your UEM environment.

Windows applications generally store configuration information in the registry and/or in files on the computer. The UEM Application Profiler “watches” an application and captures the locations where its settings are stored. This process is referred to as “application profiling,” which is essentially the process of understanding where an application stores its settings. The UEM Application Profiler can also capture specific settings, which can then be stored in UEM and enforced at logon. Today we will focus on capturing, or “profiling,” a new application and bringing that configuration into UEM.

There are a few things I’d recommend you do if you plan to profile applications.

  1. Install at least one desktop with the applications you wish to profile. We will refer to these systems as the UEM capture systems. In larger environments you may wish to deploy additional capture systems.
  2. Install the UEM Application Profiler on the capture systems. It is important to note that you cannot install the UEM client on the same system as the UEM Application Profiler. The UEM Application Profiler installation files are found in the “Optional Components” folder of the VMware-UEM-x.x.x.zip file.
  3. Take a snapshot of the capture systems in case you need to roll back.

JDavidson UEM 1

We are now ready to begin profiling applications. Begin by launching the UEM Application Profiler from the Start Menu on your capture system. You will see a blank “Flex Config File”; this is the file that will contain the application configuration once the “application profiling” is complete.

JDavidson UEM 2

It is helpful to have an understanding of the application before capture begins. I recommend researching where an application saves its configuration data before starting the profiling process; it will be time well spent.

In the case of Google Chrome, we know the application stores much of its configuration and settings in files in the user profile (C:\Users\username\AppData\Local\Google).

UEM Application Profiler has built-in registry and file system exclusions that prevent it from capturing data unrelated to the application being profiled. If an application saves configuration data in one of these excluded locations, you will need to modify the exclusions in order to successfully profile the application's behavior.

In the case of Google Chrome, we know the application saves data in the local AppData folder; so we remove the exclusion so UEM Application Profiler will profile Chrome’s behavior in this location.

This is done by selecting “Manage Exclusions” above “File System” and removing the <LocalAppData>\* line as shown in the screenshot below.

JDavidson UEM 3

To begin the profile process, click “Start Session” and navigate to the location of Google Chrome and click “OK.” In order to profile an application, UEM Application Profiler must launch the application.

JDavidson UEM 4

It is generally sufficient to modify a few common settings, unless there is a specific configuration or behavior you need to capture. In the case of Google Chrome, making a few common changes is sufficient. Once you have made these changes, close Google Chrome and choose “Stop Analysis” in the Analyzing Application dialog box.

JDavidson 5

After the profile process is complete you will see that the previously blank Flex Config File contains configuration data that can be saved and integrated into your UEM implementation. In some cases it may be necessary to edit the Flex Config File in order to remove any unwanted configuration paths. The image below shows the correct Flex Config for Google Chrome.

JDavidson UEM 6
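As a rough illustration, the resulting Flex config for Chrome might contain entries along these lines (the section names come from UEM's INI-based Flex config format; the exact paths captured will depend on your profiling session, and excluding the cache folder is an optional refinement):

```
[IncludeFolderTrees]
<LocalAppData>\Google\Chrome\User Data\Default

[ExcludeFolderTrees]
<LocalAppData>\Google\Chrome\User Data\Default\Cache
```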

To save the Flex Configuration click the save button and choose “Save Config File,” then navigate to the UNC path of your UEM config share.

I like to create a separate folder for each application to keep the folder structure clean. In this case it would look something like this:

\\UEMserver\uemconfig\general\Applications\Google Chrome\Google Chrome.ini

I recommend saving the new configuration to a UEM test environment first. The settings can be validated and changed, if necessary, before moving to a production UEM environment.

JDavidson UEM 7

This saves the configuration from the profile process to the UEM environment. The next time you open or refresh the UEM Management Console application list, you will see Google Chrome listed as an application.

JDavidson UEM 8

UEM users who log in after the new configuration has been added will have their Google Chrome settings persist across sessions.

The goal and benefit of UEM is capturing application-specific settings and maintaining that application experience across heterogeneous desktop environments without conflicts.


Jeffrey Davidson, Senior Consultant, VMware EUC. Jeffrey has over 15 years of IT experience and joined VMware in 2014. He is also a VCP5-DCV and VCP5-DT. He is a strong advocate of virtualization technologies, focusing on operational readiness with customers.

Geo-Location Based Traffic Management with F5 BIG-IP for VMware Products (PoC)

By Spas Kaloferov

The increasingly global nature of content and migration of multimedia content distribution from typical broadcast channels to the Internet make Geo-Location a requirement for enforcing access restrictions. It also provides the basis for traditional performance-enhancing and disaster recovery solutions.

Also of rising importance is cloud computing, which introduces new challenges to IT in terms of global load balancing configurations. Hybrid architectures that attempt to seamlessly use public and private cloud implementations for scalability, disaster recovery and availability purposes can leverage accurate Geo-Location data to enable a broader spectrum of functionality and options.

Geo-Location improves the performance and availability of your applications by intelligently directing users to the closest or best-performing server running that application, whether it be physical, virtual or in a cloud environment.

VMware vRealize Automation Center (vRA) is one of the products in this Proof of Concept (PoC) for which use cases for load balancing and Geo-Location traffic management will be presented. The PoC can also serve as a test environment for any other product that supports F5 BIG-IP Local Traffic Manager (LTM) and F5 BIG-IP Global Traffic Manager (GTM). After completing this PoC you should have the lab environment you need, and feel comfortable enough to set up more advanced configurations on your own, according to your business needs and functional requirements.

One typical scenario involving Geo-Location-based traffic management is redirecting traffic based on the source of the DNS query.
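In other words, the same DNS name resolves differently depending on where the query originates. As an illustration (the addresses below are invented):

```
# Queried from a Los Angeles client, the GTM answers with the LA virtual server:
dig +short geoapp.f5.vmware.com
10.10.1.100

# The same query from a New York client returns the NY virtual server:
dig +short geoapp.f5.vmware.com
10.20.1.100
```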

Consider a software development company that is planning to implement vRealize Automation Center to provide private cloud access to its employees where they can develop and test their applications. Later in this article I sometimes refer to the globally available vRA private cloud application as GeoApp. Our GeoApp must provide access to the company’s private cloud infrastructure from multiple cities across the globe.

The company has data centers in two locations: Los Angeles (LA) and New York (NY). Each data center will host instance(s) of the GeoApp (vRealize Automation Center). Development (DEV) and Quality Engineering (QE) teams from both locations will access the GeoApp and use it to develop and test their homegrown software products.

Use Case 1

The company has made design decisions and is planning to implement the following to lay down the foundations for their private cloud infrastructure:

  • Deploy two GeoApp instances using vRealize Automation Center minimal setup in the LA data center for use by Los Angeles employees.
  • Deploy two GeoApp instances using vRealize Automation Center minimal setup in the NY data center for use by New York employees.

The company has identified the following requirements for their GeoApp implementation:

  • The GeoApp must be accessible to all employees, regardless of whether they are in the Los Angeles or New York data center, under the single common URL geoapp.f5.vmware.com.
  • To ensure the employees get a responsive experience from the GeoApp (vRA) private cloud portal website, the company requires that LA employees be redirected to the Los Angeles data center and NY employees to the New York data center.
  • The workload of the teams must be distributed across their dedicated local GeoApp (vRA) instances.

This is roughly represented by the diagram below:

SKaloferov vRA 1

  • In case of a failure of a GeoApp instance, the traffic should be load balanced between available instances in the local data center.

This is roughly represented by the diagram below:

SKaloferov vRA 2

Use Case 2 

The company has made design decisions and is planning to implement the following to lay down the foundations for their private cloud infrastructure:

  • Deploy one GeoApp instance using VMware vRealize Automation Center (vRA) distributed setup in the Los Angeles data center for use by the LA employees. In this case the GeoApp can be seen as a three-tier application, containing two GeoApp nodes in each tier.
  • Deploy one GeoApp instance using VMware vRealize Automation Center (vRA) distributed setup in the New York data center for use by the NY employees. In this case the GeoApp can be seen as a three-tier application, containing two GeoApp nodes in each tier.

The company has identified the following requirements for their GeoApp implementation:

  • The GeoApp must be accessible to all employees, regardless of whether they are in the Los Angeles or the New York data center, under a single common URL, geoapp-uc2.f5.vmware.com.
  • To ensure that the employees get a responsive experience from the GeoApp (vRA) private cloud portal website, the company requires that the Los Angeles employees be redirected to the Los Angeles data center and the New York employees to the New York data center.
  • The workload must be distributed across the Tier nodes of the local GeoApp (vRA) instance.

This is roughly represented by the diagram below:

SKaloferov vRA 3

  • In case of failure of a single Tier Node in a given GeoApp Tier, the workload should be forwarded to the remaining Tier Node in the local data center.

This is roughly represented by the diagram below:

SKaloferov vRA 4

  • In case of failure of all Tier Nodes in a given GeoApp Tier, the workload of all tiers should be forwarded to the GeoApp instance in the remote data center.

This is roughly represented by the diagram below:

SKaloferov vRA 5

Satisfying these requirements involves the implementation of two computing techniques:

  • Load balancing
  • Geo-Location-based traffic management

There are other software and hardware products that provide load balancing and/or Geo-Location capabilities, but we will be focusing on two of them to accomplish our goal:

  • For load balancing: F5 BIG-IP Local Traffic Manager (LTM)
  • For Geo-Location: F5 BIG-IP Global Traffic Manager (GTM)

Based on which deployment method you choose and what functional requirements you have, you will then need to configure the following aspects of the F5 BIG-IP devices that will manage your traffic:

  • F5 BIG-IP LTM Pool
  • F5 BIG-IP LTM Pool Load Balancing Method
  • F5 BIG-IP LTM Virtual Servers
  • F5 BIG-IP GTM Pool
  • F5 BIG-IP GTM Pool Load Balancing Method (Preferred, Alternate, Fallback)
  • F5 BIG-IP GTM Wide IP Pool
  • F5 BIG-IP GTM Wide IP Pool Load Balancing Method
  • F5 BIG-IP GTM Distributed Applications Dependency Level
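A rough tmsh sketch of some of these objects is shown below. The object names, addresses, and BIG-IP server names are invented for illustration, and exact syntax varies by BIG-IP version, so treat this as an outline rather than a recipe and consult the F5 documentation for your release:

```
# LTM, per data center: a pool of local GeoApp nodes behind a virtual server
tmsh create ltm pool geoapp_la_pool members add { 10.10.1.11:443 10.10.1.12:443 } monitor https
tmsh create ltm virtual geoapp_la_vs destination 10.10.1.100:443 pool geoapp_la_pool

# GTM: one pool per data center, combined under a single wide IP that uses
# topology (Geo-Location) records to steer clients to their local data center
tmsh create gtm pool geoapp_la_gtm_pool members add { bigip-la:geoapp_la_vs }
tmsh create gtm pool geoapp_ny_gtm_pool members add { bigip-ny:geoapp_ny_vs }
tmsh create gtm wideip geoapp.f5.vmware.com pools add { geoapp_la_gtm_pool geoapp_ny_gtm_pool } pool-lb-mode topology
```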

Implementing Use Case 1 (UC1) with GTM and LTM is roughly represented by the diagram below:

SKaloferov vRA 6

Implementing Use Case 2 (UC2) with GTM and LTM is roughly represented by the diagram below:

SKaloferov vRA 7

To learn more about how to achieve Geo-Location-based traffic management using F5 BIG-IP Local Traffic Manager (LTM) and F5 BIG-IP Global Traffic Manager (GTM), please visit Geo-Location Based Traffic Management with F5 BIG-IP for VMware Products (PoC).


Spas Kaloferov is an acting Solutions Architect member of Professional Services Engineering (PSE) for the Software-Defined Datacenter (SDDC), part of the Global Technical & Professional Solutions (GTPS) team. Prior to VMware, Kaloferov focused on cloud computing solutions.