Home > Blogs > VMware Consulting Blog

Copying App Volumes AppStacks Between Environments That Use vSAN Storage

By Jeffrey Davidson

There is an issue with copying App Volumes AppStacks (VMDK files) via Secure Copy (SCP) from one environment to another when VSAN storage is in use. This is not an App Volumes problem; it is related to the way VSAN stores VMDK data.

Clients will often want to create AppStacks in a test environment, then copy those AppStacks to a production environment, and finally import them into App Volumes. In situations where any of those environments use VSAN storage, you will not be able to copy (SCP) AppStacks (specifically VMDK files) between environments.

In this blog entry I will discuss a workaround to this issue, using an example in which the client has two VSAN environments (DEV and PROD), and needs to copy VMDK files between them.

The VMDK files created by App Volumes are nothing special and traditionally consist of two files.

What we normally identify as <filename>.vmdk is a header/metadata file: it holds only information about the geometry of the virtual disk and, as such, references a separate file that contains the actual data.

The referenced file is often called a “flat” file; it contains the actual data of the VMDK. We can identify it by its naming pattern: <filename>-flat.vmdk

On traditional block level storage these two files are normally stored together in the same folder, as shown in the example screenshot below.

JDavidson1

But VSAN storage is different. If you look at the contents of the “header” file, you see something similar to the screenshot below. The section in red would normally be a reference to a “flat” file (for example: Adobe_Acrobat_Reader-flat.vmdk), but when VSAN is the storage platform we see something different: a reference to a VSAN device.

JDavidson2

VSAN storage employs object-level storage, which is different from traditional block-level storage. The VSAN objects are managed through a storage policy which, for example, can allow for greater redundancy for some virtual machines over others. Because the reference in the VMDK file points to a VSAN DOM object, it cannot be copied through traditional means (SCP).
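To make the difference concrete, here is a sketch of what the descriptor file looks like in each case. The geometry values and the vSAN object UUID below are invented for illustration; your descriptor's contents will differ.

```
# Disk DescriptorFile
version=1
CID=fffffffe
parentCID=ffffffff
createType="vmfs"

# Extent description -- on block storage this line would read:
#   RW 41943040 VMFS "Adobe_Acrobat_Reader-flat.vmdk"
# On VSAN it instead references a DOM object:
RW 41943040 VMFS "vsan://52e1a5f0-8f3d-9e2b-44c1-0025909a1c3e"
```

Because that `vsan://` extent has no meaning outside the source cluster, copying the two files with SCP alone produces a VMDK the destination cannot open.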

To work around this issue you will need traditional block-level storage which acts as a “middle man” to allow the SCP copy of VMDK files between environments. You will also need SSH access enabled on one host in each environment.

The first step is to clone the VMDK you wish to copy from the VSAN volume to the traditional storage volume. Once on traditional storage you will be able to copy (SCP) the two VMDK files directly to a host in a different VSAN environment. After you have copied (SCP) the VMDK files to a destination VSAN environment, you will need to perform a clone action to re-integrate the VMDK with VSAN as a storage object, so it can be protected properly with VSAN.

The diagram below is an overview of the process to copy AppStack (VMDK) files between VSAN environments.

JDavidson3

The example below shows the commands required to copy an App Volumes AppStack (VMDK) between environments that use VSAN storage. Before executing these commands, create a staging area in each environment where AppStacks can be placed temporarily before being copied between hosts and re-integrated into the destination's VSAN storage.

For example:

In the source environment, create the folder <path to block level storage>/AppVolumes_Staging

In the destination environment, create the folder <path to cloud volumes root folder>/cloudvolumes/staging

Step 1:

SSH into the host where the AppStack currently resides.

Execute the following command to clone the AppStack to block-level storage. Note that after you execute this command there are two files on the block-level storage. One is the header file, and the other is the “flat” file, which was previously integrated with VSAN as a storage object.

vmkfstools -d thin -i <VSAN path to App Stack>/cloudvolumes/apps/<filename>.vmdk <path to block level storage>/AppVolumes_Staging/<filename>.vmdk

Example:

vmkfstools -d thin -i /vmfs/volumes/vsan:4a65d9cbe47d44af-80f530e9e2b98ac5/76f05055-98b3-07ab-ef94-002590fd9036/apps/<filename>.vmdk /vmfs/volumes/54e5e55d-97561a60-50de-002590fd9036/AppVolumes_Staging/<filename>.vmdk


Step 2:

Execute the following commands to copy (SCP) an AppStack from one environment to another.

scp <path to vmdk clone on block level storage>/<filename>.vmdk root@<esxi mgt IP>:<path to staging folder>/<filename>.vmdk

scp <path to vmdk “flat” file clone on block level storage>/<filename>-flat.vmdk root@<esxi mgt IP>:<path to staging folder>/<filename>-flat.vmdk

Example:

scp /vmfs/volumes/54e5e55d-97561a60-50de-002590fd9036/AppVolumes_Staging/<filename>.vmdk root@10.10.10.10:/vmfs/volumes/vsan:265d91daeb2841db-82d3d8026326af8e/6efbac55-f2f7-f86a-033f-0cc47a59dc1c/Staging/<filename>.vmdk

scp /vmfs/volumes/54e5e55d-97561a60-50de-002590fd9036/AppVolumes_Staging/<filename>-flat.vmdk root@10.10.10.10:/vmfs/volumes/vsan:265d91daeb2841db-82d3d8026326af8e/6efbac55-f2f7-f86a-033f-0cc47a59dc1c/Staging/<filename>-flat.vmdk


Step 3:

Run the commands below to delete the AppStack from the staging folder on the source environment.

rm <path to staging folder>/<filename>.vmdk

rm <path to staging folder>/<filename>-flat.vmdk

Example:

rm /vmfs/volumes/54e5e55d-97561a60-50de-002590fd9036/AppVolumes_Staging/<filename>.vmdk

rm /vmfs/volumes/54e5e55d-97561a60-50de-002590fd9036/AppVolumes_Staging/<filename>-flat.vmdk


Step 4:

SSH into the destination host to which the AppStack was copied. In this example the host IP address is 10.10.10.10.

Run the command below to clone the copied AppStack from the staging folder to the App Volumes “apps” folder, and re-integrate the VMDK into VSAN as a storage object.

vmkfstools -d thin -i <path to staging folder>/<filename>.vmdk <path to cloud volumes root folder>/cloudvolumes/apps/<filename>.vmdk

Example:

vmkfstools -d thin -i /vmfs/volumes/vsan:265d91daeb2841db-82d3d8026326af8e/6efbac55-f2f7-f86a-033f-0cc47a59dc1c/Staging/<filename>.vmdk /vmfs/volumes/vsan:265d91daeb2841db-82d3d8026326af8e/6efbac55-f2f7-f86a-033f-0cc47a59dc1c/apps/<filename>.vmdk


Step 5:

Run the commands below to delete the AppStack from the staging folder on the destination environment.

rm <path to staging folder>/<filename>.vmdk

rm <path to staging folder>/<filename>-flat.vmdk

Example:

rm /vmfs/volumes/vsan:265d91daeb2841db-82d3d8026326af8e/6efbac55-f2f7-f86a-033f-0cc47a59dc1c/Staging/<filename>.vmdk

rm /vmfs/volumes/vsan:265d91daeb2841db-82d3d8026326af8e/6efbac55-f2f7-f86a-033f-0cc47a59dc1c/Staging/<filename>-flat.vmdk
After completing these steps, you will have successfully copied a VMDK from one VSAN storage platform to another.
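The steps above can be condensed into a single sketch, shown below for the source-side portion (Steps 1 through 3). Every datastore UUID, path, filename, and the destination IP is a placeholder you must substitute for your own environments, and the helper prints each command rather than executing it until you opt in.

```shell
#!/bin/sh
# Consolidated sketch of Steps 1-3, run on the source ESXi host.
# All UUIDs, paths, and the destination IP are placeholders.
SRC_APPS="/vmfs/volumes/vsan:SRC-UUID/NS-UUID/apps"
STAGING="/vmfs/volumes/BLOCK-UUID/AppVolumes_Staging"
DEST="root@10.10.10.10:/vmfs/volumes/vsan:DST-UUID/NS-UUID/Staging"
VMDK="MyAppStack"

DRY_RUN=${DRY_RUN:-1}                      # set DRY_RUN=0 to really execute
run() { echo "+ $*"; [ "$DRY_RUN" = "1" ] || "$@"; }

# Step 1: clone out of VSAN onto block storage (creates header + flat file)
run vmkfstools -d thin -i "$SRC_APPS/$VMDK.vmdk" "$STAGING/$VMDK.vmdk"
# Step 2: SCP both files to the destination host's staging folder
run scp "$STAGING/$VMDK.vmdk"      "$DEST/$VMDK.vmdk"
run scp "$STAGING/$VMDK-flat.vmdk" "$DEST/$VMDK-flat.vmdk"
# Step 3: clean up the source staging folder
run rm "$STAGING/$VMDK.vmdk" "$STAGING/$VMDK-flat.vmdk"
# Steps 4-5 then run on the destination host: another vmkfstools clone
# into .../cloudvolumes/apps, followed by deleting the staged copies.
```

Because the helper only prints the commands by default, you can review the exact operations against your own paths before running them for real.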

App Volumes also creates a “metadata” file during the creation of an AppStack, as shown in the screenshot below.

JDavidson4

The “metadata” file is a plain-text file and should be copied to the destination environment so the AppStack (VMDK) can be imported into the destination App Volumes instance. Because it is plain text, it can be copied (SCP) directly, without the cloning process and block-level storage described in steps 1–5 above.
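As a sketch, the metadata copy is a single SCP. The paths, IP address, and filename below are illustrative placeholders, not values from a real environment, and the command is printed rather than run so you can verify it first.

```shell
# Copying the AppStack metadata file directly between environments.
# All paths, the IP, and the filename are illustrative placeholders.
SRC="/vmfs/volumes/BLOCK-UUID/cloudvolumes/apps/MyAppStack.metadata"
DST="root@10.10.10.10:/vmfs/volumes/vsan:DST-UUID/NS-UUID/apps/"
echo scp "$SRC" "$DST"    # drop the leading 'echo' to perform the copy
```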


Jeffrey Davidson, Senior Consultant, VMware EUC. Jeffrey has over 15 years of IT experience and joined VMware in 2014. He is also a VCP5-DCV and VCP5-DT. He is a strong advocate of virtualization technologies, focusing on operational readiness with customers.

EUC Professional Services Engineering (PSE) and VMworld

By Dale Carter

VMworld in San Francisco is approaching very quickly. It’s a must-attend event for VMware customers, but there is a lot to take in, so I thought I would take a few minutes to highlight some key activities led by my team of End User Computing (EUC) consultants and architects that you won’t want to miss.

Our organization is called Professional Services Engineering (PSE) and is part of the Global Technical and Professional Services Organization. As VMware’s EUC subject matter experts, our team works with some of our largest EUC customers worldwide. From our experiences with these large organizations, our team is responsible for creating VMware’s EUC methodologies, which are then leveraged by our global EUC professional services organization.

VMworld Sessions Delivered by the PSE Team:

EUC4630 - Managing Users: A Deep Dive into VMware User Environment Manager

Managing end-user profiles can be challenging, and often the bane of a desktop administrator’s existence. To the rescue comes VMware’s User Environment Manager. In this session, attendees will be provided with a deep dive into UEM, including an architectural overview, available settings and configurations, and user environment management options. The session will also outline UEM deployment considerations and best practices, as well as discuss how to integrate UEM into a Horizon 6 environment. Attendees will even learn how UEM can be used to manage physical desktops.

EUC5516 - Delivering the Next Generation of Hosted Applications

VMware continues to innovate and evolve our EUC products with the introduction of Hosted Applications with Horizon 6, VMware UEM, App Volumes and Workspace. Join our experienced experts for a panel discussion on how VMware technologies can be used to support your existing Server Based Computing (SBC) infrastructure, or move away from it altogether onto a platform that addresses what people want, not just what a published application needs.

EUC4437 - Horizon View Troubleshooting - Looking Under the Hood

Attend one of the most popular EUC sessions from previous VMworlds! Learn from VMware's best field troubleshooters how to identify common issues and key problem domains within VMware Horizon View.

EUC4509 - Architecting Horizon for VSAN, the VCDX Way - VMware on VMware

VMware Horizon is a proven desktop virtualization solution that has been deployed around the world. Balancing the performance and cost of a storage solution for Horizon can be difficult and affects the overall return on investment. VMware Virtual SAN has provided architects with a new weapon in the battle for desktop virtualization. VSAN allows architects to design a low-cost, high-performance hybrid solution of solid-state and spinning disks, or go all-flash for ultimate desktop performance. Learn from two Double VCDXs on how to go about architecting your Horizon on VSAN solution to ensure it will provide the levels of performance your users need, with management simplicity that will keep your administrators happy and a cost that will ensure your project will be a success.

EUC5126 - Citrix Migration to VMware Horizon: How to Do It and What You Need to Know

Are you planning a migration from Citrix XenApp or XenDesktop to VMware Horizon? Or simply interested in learning how to do it? This is the session for you! Come hear from the architects of VMware's Citrix migration strategies and services as they break down different approaches to migration using real-world case studies. We will dive deep into how to evaluate the state of the Citrix environment, assess system requirements, design the Horizon infrastructure, and then plan and perform the migration. By the end of the session you will know all the best practices, tips, tricks and tools available to make sure your migration from Citrix to VMware Horizon is a complete success!

VMworld Booth in the Solutions Exchange

We can also be found at the Professional Services demo station in the VMware booth Wednesday from 12–4 PM. Come by with your EUC questions or just discuss any EUC solutions you are looking to implement in your organization. I will be there along with my colleague Nick Jeffries.

VMworld Hands On Labs

Finally, my colleague Jack McMichaels and I will both be working in the VMworld Hands On Labs this year. The Hands On Labs are a great way to come and try all of the VMware technologies. If you have never attended a Hands On Lab at VMworld then I would highly encourage you to come and give them a go. They are a great way to learn if you have an hour or two to spare in your agenda.

See you in San Francisco!


Dale Carter is a CTO Ambassador and VMware Senior Solutions Architect specializing in the EUC space, and has worked in IT for more than 20 years. He is also a VCP4-DT, VCP5-DT, VCAP-DTD, and VCAP-DTA.

VMware User Environment Manager and Application Profile Settings

JeffSmallBy Jeffrey Davidson

There has been a lot of focus on VMware UEM (formerly Immidio) in recent months since its acquisition and release as VMware User Environment Manager (UEM).

In this blog entry I will walk through the process of capturing Google Chrome settings and incorporating that configuration into UEM.

UEM can be deployed with configuration items for common applications like Microsoft Office, which saves a lot of work. For applications not included in UEM, you can use the UEM Application Profiler to capture specific configuration settings. Deploying UEM is generally not a time-consuming task, though it does require some thought and planning. The majority of your time will be spent configuring UEM, specifically the applications you wish to add to your UEM environment.

Windows applications generally store configuration information in the registry and/or files on the computer. The UEM Application Profiler “watches” an application and captures the locations where its settings are stored. This process is referred to as “application profiling,” which is simply the process of discovering where an application stores its settings. The UEM Application Profiler can also capture specific settings, which can then be stored in UEM and enforced at logon. Today we will focus on capturing, or “profiling,” a new application and bringing that configuration into UEM.

There are a few things I’d recommend you do if you plan to profile applications.

  1. Install at least one desktop with the applications you wish to profile. We will refer to these systems as the UEM capture systems. In larger environments you may wish to deploy additional capture systems.
  2. Install the UEM Application Profiler on the capture systems. It is important to note that you cannot install the UEM client on the same system as the UEM Application Profiler. The UEM Application Profiler installation files are found in the “Optional Components” folder of the VMware-UEM-x.x.x.zip file.
  3. Take a snapshot of the capture systems in case you need to roll back.

We are now ready to begin profiling applications. Begin by launching the UEM Application Profiler from the Start Menu on your capture system. You will see a blank “Flex Config File”; this file will contain the application configuration once “application profiling” is complete.

JDavidson UEM 2

It is helpful to have an understanding of the application before capture begins. I recommend researching where an application saves its configuration data before starting the profiling process; it will be time well spent.

In the case of Google Chrome, we know the application stores much of its configuration and settings in files in the user profile (C:\Users\username\AppData\Local\Google).

UEM Application Profiler has built-in registry and file system exclusions that prevent it from capturing data unrelated to the application being profiled. To successfully profile an application's behavior, you may need to modify these exclusion paths if the application saves configuration data in one of the excluded locations.

In the case of Google Chrome, we know the application saves data in the local AppData folder, so we remove the exclusion to let the UEM Application Profiler capture Chrome's behavior in this location.

This is done by selecting “Manage Exclusions” above “File System” and removing the <LocalAppData>\* line as shown in the screenshot below.

JDavidson UEM 3

To begin the profile process, click “Start Session” and navigate to the location of Google Chrome and click “OK.” In order to profile an application, UEM Application Profiler must launch the application.

JDavidson UEM 4

It is generally sufficient to modify a few common settings, unless there is a specific configuration or behavior you need to capture. In the case of Google Chrome, making a few common changes is sufficient. Once you've made these changes, close Google Chrome and choose “Stop Analysis” in the Analyzing Application dialog box.

JDavidson 5

After the profile process is complete you will see that the previously blank Flex Config File contains configuration data that can be saved and integrated into your UEM implementation. In some cases it may be necessary to edit the Flex Config File in order to remove any unwanted configuration paths. The image below shows the correct Flex Config for Google Chrome.

JDavidson UEM 6
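As a rough illustration of what such a Flex Config file contains: the section names and paths below are a sketch based on common UEM config conventions, not the Profiler's exact output, so verify them against what your Application Profiler actually generated.

```ini
; Illustrative Flex Config sketch for Google Chrome -- verify against
; the file the Application Profiler generated in your environment.
[IncludeRegistryTrees]
HKCU\Software\Google\Chrome

[IncludeFolderTrees]
<LocalAppData>\Google\Chrome\User Data\Default
```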

To save the Flex Configuration click the save button and choose “Save Config File,” then navigate to the UNC path of your UEM config share.

I like to create a separate folder for each application to keep the folder structure clean. In this case it would look something like this:

\\UEMserver\uemconfig\general\Applications\Google Chrome\Google Chrome.ini

I recommend saving the new configuration to a UEM test environment first. The settings can be validated and changed, if necessary, before moving to a production UEM environment.

JDavidson UEM 7

This saves the configuration from the profile process to the UEM environment. The next time you open or refresh the UEM Management Console application list, you will see Google Chrome listed as an application.

JDavidson UEM 8

UEM users who log in after the new configuration has been added will have their Google Chrome settings persist across sessions.

The goal and benefit of UEM is capturing application-specific settings and maintaining that application experience across heterogeneous desktop environments without conflicts.


Jeffrey Davidson, Senior Consultant, VMware EUC. Jeffrey has over 15 years of IT experience and joined VMware in 2014. He is also a VCP5-DCV and VCP5-DT. He is a strong advocate of virtualization technologies, focusing on operational readiness with customers.

Geo-Location Based Traffic Management with F5 BIG-IP for VMware Products (PoC)

By Spas Kaloferov

The increasingly global nature of content and migration of multimedia content distribution from typical broadcast channels to the Internet make Geo-Location a requirement for enforcing access restrictions. It also provides the basis for traditional performance-enhancing and disaster recovery solutions.

Also of rising importance is cloud computing, which introduces new challenges to IT in terms of global load balancing configurations. Hybrid architectures that attempt to seamlessly use public and private cloud implementations for scalability, disaster recovery and availability purposes can leverage accurate Geo-Location data to enable a broader spectrum of functionality and options.

Geo-Location improves the performance and availability of your applications by intelligently directing users to the closest or best-performing server running that application, whether it be physical, virtual or in a cloud environment.

VMware vRealize Automation Center (vRA) is one of the products in this Proof of Concept (PoC), and use cases for load balancing and Geo-Location traffic management will be presented for it. This PoC can also serve as a test environment for any other product that supports F5 BIG-IP Local Traffic Manager (LTM) and F5 BIG-IP Global Traffic Manager (GTM). After completing this PoC you should have the lab environment you need, and feel comfortable enough to set up more advanced configurations on your own, according to your business needs and functional requirements.

One typical scenario involving Geo-Location-based traffic management is redirecting traffic based on the source of the DNS query.

Consider a software development company that is planning to implement vRealize Automation Center to provide private cloud access to its employees where they can develop and test their applications. Later in this article I sometimes refer to the globally available vRA private cloud application as GeoApp. Our GeoApp must provide access to the company’s private cloud infrastructure from multiple cities across the globe.

The company has data centers in two locations: Los Angeles (LA) and New York (NY). Each data center will host instance(s) of the GeoApp (vRealize Automation Center). Development (DEV) and Quality Engineering (QE) teams from both locations will access the GeoApp and use it to develop and test their homegrown software products.

Use Case 1

The company has made design decisions and is planning to implement the following to lay down the foundations for their private cloud infrastructure:

  • Deploy two GeoApp instances using vRealize Automation Center minimal setup in the LA data center for use by Los Angeles employees.
  • Deploy two GeoApp instances using vRealize Automation Center minimal setup in the NY data center for use by New York employees.

The company has identified the following requirements for their GeoApp implementation:

  • The GeoApp must be accessible to all employees, regardless of whether they are in the Los Angeles or New York data center, under the single common URL geoapp.f5.vmware.com.
  • To ensure the employees get a responsive experience from the GeoApp (vRA) private cloud portal website, the company requires that LA employees be redirected to the Los Angeles data center and NY employees be redirected to the New York data center.
  • The workload of the teams must be distributed across their dedicated local GeoApp (vRA) instances.

This is roughly represented by the diagram below:

SKaloferov vRA 1

  • In case of a failure of a GeoApp instance, the traffic should be load balanced between available instances in the local data center.

This is roughly represented by the diagram below:

SKaloferov vRA 2

Use Case 2 

The company has made design decisions and is planning to implement the following to lay down the foundations for their private cloud infrastructure:

  • Deploy one GeoApp instance using VMware vRealize Automation Center (vRA) distributed setup in the Los Angeles datacenter for use by the LA employees. In this case the GeoApp can be seen as a three-tier application, with two GeoApp nodes in each tier.
  • Deploy one GeoApp instance using VMware vRealize Automation Center (vRA) distributed setup in the New York datacenter for use by the NY employees. In this case the GeoApp can be seen as a three-tier application, with two GeoApp nodes in each tier.

The company has identified the following requirements for their GeoApp implementation:

  • The GeoApp must be accessible to all the employees, regardless of whether they are in the Los Angeles or the New York datacenter, under a single common URL, geoapp-uc2.f5.vmware.com.
  • To ensure that the employees get a responsive experience from the GeoApp (vRA) private cloud portal website, the company requires that the Los Angeles employees be redirected to the Los Angeles datacenter and the New York employees be redirected to the New York datacenter.
  • The workload must be distributed across the Tier nodes of the local GeoApp (vRA) instance.

This is roughly represented by the diagram below:

SKaloferov vRA 3

  • In case of failure of a single Tier Node in a given GeoApp Tier, the workload should be forwarded to the remaining Tier Node in the local datacenter.

This is roughly represented by the diagram below:

SKaloferov vRA 4

  • In case of failure of all Tier Nodes in a given GeoApp Tier, the workload of all tiers should be forwarded to the GeoApp instance in the remote datacenter.

This is roughly represented by the diagram below:

SKaloferov vRA 5

Satisfying these requirements involves the implementation of two computing techniques:

  • Load balancing
  • Geo-Location-based traffic management

There are other software and hardware products that provide load balancing and/or Geo-Location capabilities, but we will be focusing on two of them to accomplish our goal:

  • For load balancing: F5 BIG-IP Local Traffic Manager (LTM)
  • For Geo-Location: F5 BIG-IP Global Traffic Manager (GTM)
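The steering requirement from Use Case 1 can be pictured as a small decision function: GTM answers the DNS query for geoapp.f5.vmware.com with the LTM virtual-server address nearest the querying client. The subnets and IP addresses below are invented for the sketch; a real GTM deployment uses its geolocation database, topology records, and health monitors rather than a hard-coded table.

```shell
# Toy model of GTM's geo-steering decision for geoapp.f5.vmware.com.
# Client subnets and virtual-server IPs are invented for illustration.
resolve_geoapp() {
  case "$1" in
    10.10.*) echo "192.0.2.10"    ;;  # LA clients -> LA LTM virtual server
    10.20.*) echo "198.51.100.10" ;;  # NY clients -> NY LTM virtual server
    *)       echo "192.0.2.10"    ;;  # everyone else falls back to LA
  esac
}

resolve_geoapp 10.20.5.9    # prints 198.51.100.10, the NY address
```

The fallback branch mirrors the availability requirement: when a client cannot be matched to a local site (or a site is down), the query is still answered with a working address.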

Based on which deployment method you choose and what functional requirements you have, you will then configure the following aspects of the F5 BIG-IP devices that will manage your traffic:

  • F5 BIG-IP LTM Pool
  • F5 BIG-IP LTM Pool Load Balancing Method
  • F5 BIG-IP LTM Virtual Servers
  • F5 BIG-IP GTM Pool
  • F5 BIG-IP GTM Pool Load Balancing Method (Preferred, Alternate, Fallback)
  • F5 BIG-IP GTM Wide IP Pool
  • F5 BIG-IP GTM Wide IP Pool Load Balancing Method
  • F5 BIG-IP GTM Distributed Applications Dependency Level

Implementing Use Case 1 (UC1) with GTM and LTM is roughly represented by the diagram below:

SKaloferov vRA 6

Implementing Use Case 2 (UC2) with GTM and LTM is roughly represented by the diagram below:

SKaloferov vRA 7

To learn more about how to achieve the goal of Geo-Location-based traffic management using F5 BIG-IP Local Traffic Manager (LTM) and F5 BIG-IP Global Traffic Manager (GTM), please visit Geo-Location Based Traffic Management with F5 BIG-IP for VMware Products (PoC).


Spas Kaloferov is an acting Solutions Architect member of Professional Services Engineering (PSE) for the Software-Defined Datacenter (SDDC) – a part of the Global Technical & Professional Solutions (GTPS) team. Prior to VMware, Kaloferov focused on cloud computing solutions.

"Network Functions Virtualization (NFV) for Dummies" Blog Series - Part 1

What is it and what are the touch points between telecommunications and IT Enterprise computing?

By Gary Hamilton

In the first of a multi-part blog series, Gary Hamilton, Senior Cloud Solution Architect with VMware Professional Services, describes his experience with NFV and with helping telecommunications customers transform their technology platforms. In this first blog he describes the differences between telco IT platforms and enterprise IT platforms, and how the network functions virtualization approach is changing the industry.

Read more on NFV here: http://blogs.vmware.com/telco/nfv-dummies-blog-series-1/


Gary Hamilton is a Senior Cloud Management Solutions Architect at VMware and has worked in various IT industry roles since 1985, including support, services and solution architecture; spanning hardware, networking and software. Additionally, Gary is ITIL Service Manager certified and a published author. Before joining VMware, he worked for IBM for over 15 years, spending most of his time in the service management arena, with the last five years being fully immersed in cloud technology. He has designed cloud solutions across Europe, the Middle East and the US, and has led the implementation of first of a kind (FOAK) solutions. Follow Gary on Twitter @hamilgar.

Complex Apps

By Jeremy Wheeler

From time to time we all come across that extremely complicated application an organization needs packaged, and of course it has a lot of moving parts. In this blog entry I will walk through a proven process that has worked successfully, unlike the typical packaging style where, if you make a mistake, you are back at square one. An important concept to keep in mind in this blog is the “disposable” virtual machine: an App Volumes provisioning virtual machine that will eventually become contaminated and that you will not be able to revert to a clean state using a snapshot.

Note: Not utilizing a 'disposable' provisioning machine will place your normal provisioning machine at risk. The very end of this process involves removing ALL snapshots from the virtual machine.

JWheeler Complex Apps Stage 1

 

1. Prepare a 'disposable' provisioning machine. This virtual machine will lose all its snapshots once you finish this process, so it's best not to use your typical provisioning machine.

2. Point the App Volumes Manager to the Provisioning virtual machine to start the provisioning process.

JWheeler Complex App Stage 2

3. Install any prerequisite applications such as Java, etc.

4. Power down the Provisioning virtual machine and take a snapshot, using it as a bookmark in case you need to go back. The snapshot will capture all the virtual machine's elements, including the attached App Volumes VMDK file, as long as you are still in provisioning mode when you power down the virtual machine.

5. Power on the virtual machine and continue installing any core applications, or your target application. One step my application required was an installation of SQL Express with an imported database, so I installed SQL Express during this step.

6. Power down the Provisioning machine once SQL is cleanly installed, and create another snapshot.

JWheeler Complex Apps Stage 3

7. Power on the provisioning virtual machine, and create any custom databases, accounts, etc.

8. Power down the virtual machine once you have completed all your installs and are ready to complete the App Volumes capture process.

9. Edit the virtual machine’s snapshots (VM > Snapshot > Snapshot Manager) and then 'Remove All Snapshots.'

10. Once the virtual machine’s snapshots have all been removed, you need to consolidate the redo logs. (VM > Snapshot > Consolidate)

11. Once consolidation has completed, power on the virtual machine.

12. Select 'OK' on the App Volumes dialog box to complete the provisioning process and let the virtual machine reboot.

13. Log in to the virtual machine; you should see a message that provisioning has finished successfully. Select 'OK.'

14. Provisioning is now complete and the VMDK should successfully detach from the virtual machine.

Once you complete these steps I recommend a lot of testing to validate the application is performing as expected.


Jeremy Wheeler is an experienced senior consultant and architect for VMware's Professional Services Organization, End-User Computing, specializing in the VMware Horizon Suite product line and vRealize products such as vROps and Log Insight Manager. Jeremy has over 18 years of experience in the IT industry. In addition to his past experience, Jeremy has a passion for technology and thrives on educating customers. Jeremy has 7 years of hands-on virtualization experience deploying full life-cycle solutions using VMware, Citrix, and Hyper-V. Jeremy also has 16 years of experience in computer programming in various languages ranging from basic scripting to C, C++, PERL, .NET, SQL, and PowerShell.

Jeremy Wheeler has received acclaim from several clients for his in-depth and varied technical experience and exceptional hands-on customer satisfaction skills. In February 2013, Jeremy received VMware's Spotlight award for his outstanding persistence and dedication to customers, and he was nominated again in October 2013.

Microsoft Office Options with App Volumes

Jeremy WheelerBy Jeremy Wheeler

Working with various customers, I've discovered challenges when it comes to placing Microsoft Office into AppStacks. VMware has a few supported models out there, and they do work well. But keep a few things in mind when dealing with Office:

  1. Only Office 2010 and 2013 are supported.
  2. Office core bits can be presented only once to the endpoint.

JWheeler AppStacks 1

 

In the diagram above:

1: Office core bits installed into AppStack

2: Office + Microsoft Project, Office icons hidden (Optional: Hide core office icons from start menu)

3: Office + Visio, Office icons hidden (Optional: Hide core office icons from start menu)

4: Office + Project + Visio (Optional: Hide core office icons from start menu)

5: Office + Project + Visio

JWheeler AppStacks 2

6: Base Gold Image installed with Office core bits, and one of three AppStacks that contain:

A) Visio

B) Project

C) Visio + Project
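
The "present the core bits only once" constraint behind all of these models can be sketched as a quick validation check. This is illustrative Python only, a hypothetical model rather than anything App Volumes ships:

```python
# Hypothetical sketch: verify that the Office core bits reach a desktop
# exactly once, whether they come from the base image or from an AppStack.

def validate_assignment(base_image_has_office, appstacks):
    """appstacks is a list of component sets, e.g. {"office_core", "visio"}."""
    core_count = int(base_image_has_office)
    core_count += sum(1 for stack in appstacks if "office_core" in stack)
    return core_count == 1  # the core bits must be presented exactly once

# Models 1-5: core bits live in an AppStack, add-ons layered on top
print(validate_assignment(False, [{"office_core"}, {"project"}]))   # True
# Model 6: core bits in the gold image plus a Visio + Project AppStack
print(validate_assignment(True, [{"visio", "project"}]))            # True
# Invalid: core bits both in the image and in an AppStack
print(validate_assignment(True, [{"office_core", "visio"}]))        # False
```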

Note: If you hide or delete core Office icons from the Start menu (such as Word, Excel, etc.) and you only present Project and/or Visio, don't simply delete the 'Office Tools' folder. You can clean up some of the icons in that folder, but if you delete the folder itself, nothing will show in the Start menu.
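
If you script that cleanup in the master image, the sketch below shows the idea: remove individual shortcuts by name, but leave the 'Office Tools' folder itself alone. The Start menu path and shortcut names are assumptions to adapt to your own image:

```python
from pathlib import Path

# Hypothetical Start menu location; adjust for your image and Office version.
START_MENU = Path(r"C:\ProgramData\Microsoft\Windows\Start Menu\Programs\Microsoft Office")

# Shortcuts to hide when only Project and/or Visio are presented (assumed names).
HIDE = {"Word 2013.lnk", "Excel 2013.lnk", "PowerPoint 2013.lnk", "Outlook 2013.lnk"}

def clean_office_icons(start_menu: Path) -> list:
    """Delete selected shortcut files, but never folders such as 'Office Tools'."""
    removed = []
    for item in start_menu.iterdir():
        if item.is_file() and item.name in HIDE:
            item.unlink()
            removed.append(item.name)
        # Directories are intentionally left alone, so 'Office Tools' survives.
    return removed
```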



VMware App Volumes Multi-vCenter and Multi-Site Deployments

Dale-Carter-150x150By Dale Carter

With the release of VMware App Volumes 2.9 comes one of the most requested features so far: multi-vCenter support. With multi-vCenter support it is now possible to manage virtual desktops and AppStacks from multiple vCenter instances within the same App Volumes Manager.

The following graphic shows how this works:

DCarter App Volumes

With this new feature, App Volumes can now support the Horizon pod and block architecture with just one App Volumes Manager, or a cluster of managers.

Now that multiple vCenter instances are supported, I started to wonder whether this new capability could be leveraged across multiple sites to help support multi-site deployments.

After speaking with the App Volumes product manager, I am happy to confirm that yes, you can use this new feature to support multi-site deployments – as long as you are using a supported SQL database.

The architecture for this type of deployment would look like this:

DCarter App Volumes 2

 

I would recommend that App Volumes Managers at each site be clustered. Read the following blog to learn how to cluster App Volumes Managers: http://blogs.vmware.com/consulting/2015/02/vmware-appvolumes-f5.html

Although 2.9 is just a point release, multi-vCenter support is one of the biggest features added to App Volumes so far.

To add a second―or more―vCenter instance to App Volumes, follow these simple steps:

  1. Log in to the App Volumes Manager.
  2. Select Configuration, then Machine Manager, and then click Add Machine Manager.
    DCarter App Volumes 3
  3. Enter the vCenter information and click Save.
    DCarter App Volumes 4
  4. Follow these steps for each vCenter instance you want to add.
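
In principle, the same configuration can be scripted against the App Volumes Manager's REST interface. The endpoint path and payload fields in this sketch are assumptions for illustration, not a documented contract; verify them against your App Volumes version before relying on them:

```python
# Illustrative only: the endpoint path and payload fields below are
# assumptions, not a documented App Volumes API contract.

def build_machine_manager_request(manager_host, vcenter_host, username):
    """Build (but do not send) a request to register one vCenter instance."""
    return {
        "method": "POST",
        "url": f"https://{manager_host}/cv_api/machine_managers",  # assumed path
        "json": {
            "type": "vCenter",
            "host": vcenter_host,
            "username": username,
            # The password would be supplied securely at runtime, never hard-coded.
        },
    }

# Repeat for each vCenter instance you want to add (step 4 above).
for vc in ("vcenter01.corp.local", "vcenter02.corp.local"):
    req = build_machine_manager_request("avmanager.corp.local", vc, "svc-appvol")
    print(req["method"], req["url"], req["json"]["host"])
```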

Dale Carter, a CTO Ambassador and VMware Senior Solutions Architect specializing in the EUC space, has worked in IT for more than 20 years. He holds the VCP4-DT, VCP5-DT, VCAP-DTD, and VCAP-DTA certifications.

VIO – A Closer Look from a vSphere Administrator’s View

Julienne_PhamBy Julienne Pham

VMware Integrated OpenStack (VIO) was released in April, and for the DevOps team the cloud APIs do not change from one vendor to another. So how does VIO impact me as a vSphere administrator?

This article will go through how VIO can be managed through the vSphere Web Client.

After deploying the VIO services cluster, here is what I noticed:

One Single Pane of Glass

Yes, the initial OVA deployment registers the VMware Integrated OpenStack services with vCenter Server, so it is no surprise to see the OpenStack logo in the vSphere Web Client.

I find it very practical to be able to configure all the services through the same vSphere Web Client.

JPham vSphere Web Client

 

Backup and Restore Configuration File

I particularly like this feature. If you are like me, constantly multitasking, it is a real time-saver when you have to redeploy the OpenStack services or troubleshoot a deployment issue.

The installation requirements can be found in this article: http://blogs.vmware.com/openstack/vmware-integrated-openstack-first-look/

It is quite an exhaustive list, and it can save a lot of time, especially when you have to fill in the IP ranges.

JPham vSphere Web Client 2

Managing the VIO OpenStack Management Cluster

Depending on business demand, you can scale your Nova compute and storage configuration up or down from within the VMware Integrated OpenStack web interface.

You can:

  • Get a summary of the OpenStack services cluster
  • Add new Nova compute resources
  • Add new Nova storage
  • Add new Glance storage resources
  • Patch your OpenStack cluster when needed

JPham vSphere Web Client 3

 

High Availability (HA) and Distributed Resource Scheduler (DRS) Rules

VMware has designed the OpenStack architecture for high availability, but does it integrate with vSphere HA or DRS? In this example, during the OpenStack services cluster deployment, DRS anti-affinity rules are created to ensure that redundant OpenStack service virtual machines are not hosted on the same VMware ESXi host.
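
The intent of those rules can be modeled as a simple anti-affinity check: no two virtual machines covered by the same rule may land on the same host. This sketch is illustrative Python with hypothetical VM and host names, not PowerCLI or the vSphere API:

```python
from collections import defaultdict

def violates_anti_affinity(placements, rule_vms):
    """placements maps VM name -> ESXi host; rule_vms is the set of VMs in one rule.

    Returns a list of VM groups that break the rule by sharing a host.
    """
    hosts = defaultdict(list)
    for vm in rule_vms:
        hosts[placements[vm]].append(vm)
    # A violation is any host carrying two or more VMs from the same rule.
    return [vms for vms in hosts.values() if len(vms) > 1]

placements = {"vio-db-0": "esx01", "vio-db-1": "esx02", "vio-lb-0": "esx01"}
print(violates_anti_affinity(placements, {"vio-db-0", "vio-db-1"}))  # [] -> rule satisfied
```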

JPham vSphere Web Client 4

 

Once configured, the things you need to remember are:

  1. DevOps teams will provision virtual machine workloads daily. Make sure you have controls, monitoring, and alarms set on the vSphere infrastructure to prevent any massive production disruption – for example, a virtual machine provisioning script that keeps running in a loop and might impact the full vSphere environment.
  2. Maintenance. For application awareness, shut down the cluster through the VMware Integrated OpenStack web plugin, not from the vSphere web client virtual machine inventory, as the latter has no awareness of application dependencies.
  3. VMware provides a validated VIO architecture. If you customize the OpenStack services virtual machines manually, I suggest backing up those virtual machines and calling support to validate the changes – with any future upgrade, your configuration might be overwritten and not persist.
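
For point 1, a control can be as simple as a sliding-window rate limit on provisioning requests. The sketch below is illustrative Python with hypothetical thresholds, meant only to show how a script stuck in a loop could be flagged:

```python
import time
from collections import deque

class ProvisioningGuard:
    """Flag a caller that requests more than max_requests VMs per window seconds."""

    def __init__(self, max_requests=20, window=300):
        self.max_requests = max_requests  # hypothetical threshold
        self.window = window              # seconds
        self.timestamps = deque()

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Drop requests that have aged out of the sliding window.
        while self.timestamps and now - self.timestamps[0] > self.window:
            self.timestamps.popleft()
        if len(self.timestamps) >= self.max_requests:
            return False  # likely a runaway loop: raise an alarm, don't provision
        self.timestamps.append(now)
        return True
```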

Julienne Pham is a Technical Solution Architect for the Professional Services Engineering team. She specializes in SRM and core storage, with a focus on VIO and the BCDR space.

Simple VDI Load Testing with View Planner

Jack McMichaelBy Jack McMichael, Solutions Consultant

In the last few years it seems the number of customers asking for assistance in re-evaluating their “VDI 1.0” infrastructure is increasing at a faster rate than ever. It makes sense when you consider that in the rush to achieve datacenter consolidation many administrators were under pressure to just “make it happen.” Many of those administrators and architects didn’t have time to design their virtual desktop infrastructure (VDI) solution to scale and accommodate things our customers and users have grown accustomed to using every day, such as YouTube HD, Skype, and other resource-intensive applications.

Last year, VMware released its very popular internal tool, View Planner, to the general public for free. While it has flown under the radar for many customers, it can be an invaluable tool for judging where your VDI solution stands and identifying where the stress cracks in your VDI infrastructure may be forming, or are already wide open.

The View Planner appliance is simple to install and fairly straightforward to set up for local tests. It’s capable of local-only load testing, as well as passive/remote connections with the VMware Horizon® Client.

Deploying View Planner

After deploying the View Planner Open Virtual Appliance (OVA) in VMware vSphere®, configure your View Planner integrations on the Config tab of the administrator page. The AD and View integrations are optional, but can be used if you want View Planner to deploy desktops and/or create and delete users.

Note that for best results, I recommend using IP addresses instead of hostnames. Create a service account for your credentials, and give it administrator privileges in both AD and VMware vCenter™.

In this screenshot, you can see all three connectors configured. You can use the Test buttons to ensure the configuration works, but click Save first.

JMcMichael 1

Environment Preparation

For my simple test, I created a linked-clone pool named VPDesktop-{n:fixed=3} in VMware Horizon View™. In the master image for this pool, I installed the View Planner desktop agent, which you can download from the Packages tab of the View Planner portal.

Make sure you reboot your desktop before creating your snapshot. Once you reboot, you will likely see the desktop auto-login. If so, run the View Planner Agent as seen in this screenshot.

JMcMichael 2

 

Configuring Run Profiles

There are three test modes available: Local, Passive, and Remote. Typically, Local mode is used for load testing since it doesn't require actual Horizon Client connections, but it has the disadvantage of not replicating the performance impact of PCoIP. Passive mode adds PCoIP connections that are shared among client machines, each hosting more than one client connection at a time. Remote mode creates a 1:1 relationship between clients and desktops, producing the greatest overall resource impact.

To configure a Run Profile for a simple load test, I recommend using Local as it doesn’t require the use of the Horizon Client, and is also easy to set up. Simply add the Workload profile you want to run into the Run Profile by clicking Add Group, and click Save to save the Run Profile. You can add multiple workload profiles if you desire, but for a simple test only one is required.

The most important thing to remember is that desktop names (and client names if you choose Passive or Remote) are case-sensitive. In this example, VPDesktop- is valid for VPDesktop-001, but not vpdesktop-001 or VPDesktop001.
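
Before starting a run, it can be worth checking your desktop names against the pool's naming convention with a quick case-sensitive pattern test. This sketch assumes the VPDesktop-{n:fixed=3} scheme above:

```python
import re

# View Planner treats name prefixes as case-sensitive, so use an exact,
# case-sensitive pattern for the pool naming scheme VPDesktop-{n:fixed=3}.
PATTERN = re.compile(r"^VPDesktop-\d{3}$")

def matches_pool(name):
    return bool(PATTERN.match(name))

for name in ("VPDesktop-001", "vpdesktop-001", "VPDesktop001"):
    print(name, matches_pool(name))  # only the first prints True
```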

JMcMichael 3

Running a Test

Simply click the Run button to start a test. If you run into trouble, View Planner will show you right away; by clicking the link on the appropriate box, you’ll see the exact error or success message.

JMcMichael 4

 

Once completed, you can view the results in the Per Stats column; they will look something like the example below.

JMcMichael 5

Summary

Overall, I found the View Planner tool to be great for simple and quick tests of a VDI environment. It shows you where resource contention exists, or singles out how an app may be creating resource gaps in your VMware ESXi™ hosts. The free downloadable version includes several standard templates that cover a variety of normal user application workloads. If you require more flexibility in your tests, a paid VMware Professional Services engagement offers a more feature-rich version to create customizable workload profiles and other goodies. Contact VMware Professional Services or a VMware Partner for an on-site evaluation.

 


Jack McMichael is a Solutions Consultant for the VMware Professional Services Engineering Global Technical and Professional Services team. Follow him on Twitter @jackwmc4 !