
Tag Archives: Dale Carter

Link VMware Horizon Deployments Together with Cloud Pod Architecture

By Dale Carter

VMware has just made life easier for VMware Horizon administrators. With the release of VMware Horizon 6.1, VMware has added a popular feature from the Horizon 6 release to the web interface. Using Cloud Pod Architecture you can now link a number of Horizon deployments together to create a larger global pool, and these pools can span two different locations.

Cloud Pod Architecture in Horizon 6 was sometimes difficult to configure because you had to use a command-line interface on the connection brokers. Now, with Horizon 6.1, you can configure and manage Cloud Pod Architecture via the Web Admin Portal, which greatly improves the feature.

When you deploy Cloud Pod Architecture with Horizon 6.1 you can:

  • Enable Horizon deployments across multiple data centers
  • Replicate new data layers across Horizon connection servers
  • Support a single namespace for end-users with a global URL
  • Assign and manage desktops and users with the Global Entitlement layer

The significant benefits you gain include:

  • The ability to scale Horizon deployments to multiple data centers with up to 10,000 sessions
  • Horizon deployment support for active/active and disaster recovery use cases
  • Support for geo-roaming users

This illustration shows how two Horizon deployments—one in Chicago and another in London—are linked together.

DCarter View 6.1

To configure Cloud Pod Architecture to support a global namespace, you first:

  • Set up at least two Horizon Connection Servers, one at each site, each with its own desktop pools
  • Test them to ensure they work properly, including assigning users (or test users) to the environments

Following this initial step you create global pools, configure the local pools within the global pools, and finally set up user entitlements; all of this can be done from any Horizon Connection Server.

For more detailed information, and for a complete walk-through on setting up the Cloud Pod Architecture feature, read the white paper “Cloud Pod Architecture with VMware Horizon 6.1”.


Dale is a Senior Solutions Architect and a member of the CTO Ambassadors. Dale focuses on the End-User Computing space, where he has become a subject matter expert in a number of VMware products. He has more than 20 years of experience in IT, having started his career in Northern England before moving to Spain and finally to the USA. Dale currently holds a number of certifications, including VCP-DV, VCP-DT, VCAP-DTD and VCAP-DTA.

For updates you can follow Dale on Twitter @vDelboy

VMware App Volumes™ with F5’s Local Traffic Manager

By Dale Carter, Senior Solutions Architect, End User Computing & Justin Venezia, Senior Solutions Architect, F5 Networks

App Volumes™—a result of VMware’s recent acquisition of Cloud Volumes—provides an alternative, just-in-time method for integrating and delivering applications to virtualized desktop- and Remote Desktop Services (RDS)-based computing environments. With this real-time application delivery system, applications are delivered by attaching virtual disks (VMDKs) to the virtual machine (VM) without modifying the VM or the applications themselves. Applications can be scaled out with superior performance, at lower costs, and without compromising the end-user experience.

For this blog post, I have collaborated with Justin Venezia, one of my good friends and a former colleague now working at F5 Networks. Justin and I will discuss ways to build resiliency and scalability into the App Volumes architecture using F5’s Local Traffic Manager (LTM).

App Volumes Nitty-Gritty

Let’s start out with the basics. Harry Labana’s blog post gives a great overview of how App Volumes works and what it does. The following picture depicts a common App Volumes conceptual architecture:

HLabana AppVolumes

 

Basically, App Volumes does a “real time” attachment of applications (read-only and writable) to virtual desktops and RDS hosts using VMDKs. When the App Volumes Agent checks in with the manager, the App Volumes Manager (the brains of App Volumes) will attach the necessary VMDKs to the virtual machines through a connection with a paired vCenter. The App Volumes Agent manages the redirection of file system calls to AppStacks (read-only VMDKs of applications) or Writable Volumes (a user-specific writable VMDK). Through the Web-based App Volumes Manager console, IT administrators can dynamically provision, manage, or revoke application access. Applications can even be dynamically delivered while users are logged into the RDS session or virtual desktop.

The App Volumes Manager is a critical component for administration and Agent communications. By using F5’s LTM capabilities, we can intelligently monitor the health of each App Volumes Manager server, balance and optimize the communications for the App Volume Agents, and build a level of resiliency for maximum system uptime.

Who is Talking with What?

As with any application, there’s always some back-and-forth chatter on the network. Besides administrator-initiated actions against the App Volumes Manager using a web browser, there are four other events that generate traffic through F5’s BIG-IP; these four events are very short, quick communications. There aren’t any persistent or long-term connections kept between the App Volumes Agent and Manager.

When an IT administrator assigns an application to a desktop/user that is already powered on and logged in, the App Volumes Manager talks directly with vCenter and attaches the VMDK. The Agent then handles the rest of the integration of the VMDK into the virtual machine; the Agent never communicates with the App Volumes Manager during this process.

Configuring Load Balancing with App Volume Managers

Setting up the load balancing for App Volumes Manager servers is pretty straightforward. Before we walk through the load-balancing configuration, we’ll assume your F5 is already set up on your internal network and has the proper licensing for LTM.

Also, it’s important to ensure the App Volumes Agents will be able to communicate with the BIG-IP’s virtual IP address/FQDN assigned to App Volumes Manager; take the time to check routing and access to/from the Agents and BIG-IP.

Since the App Volumes Manager works with both HTTP and HTTPS, we’ll show you how to load balance App Volumes using SSL termination. We’ll be doing SSL bridging: SSL from the client terminates at the F5, is decrypted, then re-encrypted and sent on to the App Volumes Manager server. This method allows the F5 to use advanced features—such as iRules and OneConnect—while maintaining a secure, end-to-end connection.

Click here to get a step-by-step guide on integrating App Volumes Manager servers with F5’s LTM. Here are some prerequisites you’ll need to consider before you start:

  • Determine what the FQDN will be and what virtual IP address will be used.
  • Add the FQDN and virtual IP into your company’s DNS.
  • Create and/or import the certificate that will be used; this blog post does not cover creating, importing, and chaining certificates.

The certificate should contain the FQDN that we will use for load balancing. We can actually leave the default certificates on the App Volumes Manager servers; BIG-IP will handle all the SSL translations, even with self-signed certificates created on the App Volumes servers. A standard 2,048-bit web server certificate (with private key) will work well with the BIG-IP; just make sure you import and chain the Root and Intermediate Certificates with the Web Server Certificate.

Once you’re done running through the instructions, you’ll have some load-balanced App Volumes Manager servers!

Again, BIG thanks to Justin Venezia from the F5 team – you can read more about Justin Venezia and his work here.



Justin Venezia is a Senior Solutions Architect for F5 Networks

Upgrading VMware Horizon View with Zero Downtime

By Dale Carter, Senior Solutions Architect, End-User Computing

Over the last few years of working with VMware Horizon View and doing many upgrades, the two biggest issues I would hear from customers when planning an upgrade were: “Why do we have to have so much downtime?” and “With seven connection brokers, why do we have to take them all down at once?”

These questions and issues came up when I was speaking to Engineering about the upgrade process and making it smoother for the customer.

I was told that, in fact, this was not the case: you do not have to take all connection brokers down during the upgrade process. You can upgrade one connection broker at a time while the other servers are happily running.

This has been changed in View 6, and the upgrade documentation now reflects it. You can find the document here.

In this blog I will show you how to upgrade a cluster of connection servers with zero downtime. For this post I will be upgrading my View 5.3 servers to View 6.0.1.

Here are the steps needed to upgrade a View pod with zero downtime:

  1. Follow all prerequisites in the upgrade document referenced above, including completing all backups and snapshots.
  2. In the load balancer managing the View servers, disable the server that is going to be upgraded from the load balanced pool.
  3. Log in to the admin console.
  4. Disable the connection server you are going to upgrade. From the View Configuration menu select Server, then select Connection Servers and highlight the correct server. Finally, click Disable.
    DCarter 1
  5. Click OK. The View server will now be disabled.
    DCarter 2
  6. Log in to the View connection server and launch the executable. For this example I will launch VMware-viewconnectionserver-x86_64-6.0.1-2088845.exe. Note that you do not need to stop any services at this point.
  7. Click Next.
    D Carter 3
  8. Accept the license agreement, and click Next.
  9. Click Install.
    DCarter 4
  10. Once the process is done click Finish.
    D Carter 5
  11. Now back in the Admin Console enable the connection server by clicking Enable. Also notice the new version has been installed.
    D Carter 6
  12. In the load balancer managing the View servers, enable the server that has been upgraded in the load balanced pool.
  13. Repeat steps 2–12 to upgrade all of your View servers.
    D Carter 7
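The rolling-upgrade loop above can be sketched as follows. This is a conceptual model only, with hypothetical stand-ins for the load balancer pool and the installer, so the zero-downtime invariant can be checked: at any moment, at most one Connection Server is out of the pool.

```python
class Server:
    def __init__(self, name):
        self.name = name
        self.version = "5.3"

    def run_installer(self):
        # steps 6-10: run the new Connection Server installer in place
        self.version = "6.0.1"

class Pool:
    """Hypothetical load-balanced pool that tracks worst-case availability."""
    def __init__(self, servers):
        self.active = set(servers)
        self.min_active = len(self.active)

    def disable(self, server):
        self.active.discard(server)
        self.min_active = min(self.min_active, len(self.active))

    def enable(self, server):
        self.active.add(server)

def rolling_upgrade(servers, pool):
    for server in servers:
        pool.disable(server)    # step 2: pull the server from the pool
        server.run_installer()  # steps 3-11: disable, install, re-enable
        pool.enable(server)     # step 12: return the server to the pool
```

With three servers, the pool never drops below two active members during the upgrade, which is exactly why users see no downtime.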

Security Servers

If one of the connection servers is paired with a security server then there are a couple of additional steps to cover.

The following steps will need to be done to upgrade a connection server that is paired with a security server.

  1. In the load balancer managing the View Security servers, disable the server that is going to be upgraded from the load balanced pool.
  2. Follow all prerequisites in the upgrade document referenced above, including disabling IPsec rules for the security server and taking snapshots.
  3. Prepare the security server to be upgraded. From the View Configuration menu select Server, then select Security Servers. Highlight the correct server, click More Commands, and then click Prepare for Upgrade or Reinstall.
    D Carter 8
  4. Click OK.
  5. Upgrade the paired Connection Server as outlined in steps 2–12 above.
  6. Log in to the View Security server and launch the executable. For this example I will launch VMware-viewconnectionserver-x86_64-6.0.1-2088845.exe.
  7. Click Next.
    D Carter 9
  8. Accept the License agreement and click Next.
  9. Confirm the paired Connection server and click Next.
  10. Enter the pairing password and click Next.
  11. Confirm the configuration and click Next.
  12. Click Install.
  13. In the load balancer managing the View Security servers, enable the server that has been upgraded in the load balanced pool.


App Volumes AppStacks vs. Writable Volumes

By Dale Carter, Senior Solutions Architect, End-User Computing

With the release of VMware App Volumes I wanted to take the time to explain the difference between AppStacks and Writable Volumes, and how the two need to be designed as you start to deploy App Volumes.

The graphic below shows the traditional way to manage your Windows desktop, as well as the way things have changed with App Volumes and the introduction of “Just-in-time” apps.

DCarter AppVolumes v Writable Volumes 1

 

So what are the differences between AppStacks and Writable Volumes?

AppStacks

An AppStack is a virtual disk that contains one or more applications that can be assigned to a user as a read-only disk. A user can have one or many AppStacks assigned to them depending on how the IT administrator manages the applications.

When designing for AppStacks it should be noted that an AppStack is deployed in a one-to-many configuration. This means that at any one time an AppStack could be connected to one or hundreds of users.

DCarter AppVolumes v Writable Volumes 2

 

When designing storage for an AppStack it should also be noted that App Volumes does not change the IOPS required for an application, but it does consolidate those IOPS onto a single virtual disk. So, like any other virtual desktop technology, it is critical to know your applications and their requirements; it is recommended to do an application assessment before moving to a large-scale deployment. Lakeside Software and Liquidware Labs both publish software for doing application assessments.

For example, if you know that on average the applications being moved to an AppStack use 10 IOPS, and that the AppStack has 100 users connected to it, you will require 1,000 IOPS on average (IOPS per user x number of users) to support that AppStack. You can see why it is key to design your storage correctly for AppStacks.
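The back-of-the-envelope math above can be sketched as a pair of helpers: total IOPS for an AppStack is average IOPS per user times connected users, and copies of the AppStack can then be spread across LUNs to stay within a per-datastore budget. All figures here are illustrative, not measured values.

```python
import math

def appstack_iops(avg_iops_per_user, users):
    """Total average IOPS an AppStack must sustain."""
    return avg_iops_per_user * users

def appstack_copies(total_iops, lun_iops_budget):
    """How many AppStack copies to place on separate LUNs so no single
    datastore exceeds its IOPS budget (budget is a hypothetical input)."""
    return math.ceil(total_iops / lun_iops_budget)
```

Using the numbers from the text, `appstack_iops(10, 100)` gives the 1,000 IOPS figure; if a LUN can comfortably serve only 400 IOPS, you would need three copies of the AppStack.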

In large-scale deployments it is recommended to create copies of AppStacks, place them across storage LUNs, and assign a subset of users to each AppStack for best performance.

DCarter AppVolumes v Writable Volumes 3

 

Writable Volumes

Like AppStacks, a Writable Volume is also a virtual disk, but unlike an AppStack, a Writable Volume is configured in a one-to-one configuration: each user has their own assigned Writable Volume.

DCarter AppVolumes v Writable Volumes 4

 

When an IT administrator assigns a Writable Volume to a user, the first thing the IT administrator will need to decide is what type of data the user will be able to store in the Writable Volume. There are three choices:

  • User Profile Data Only
  • User Installed Applications Only
  • Both Profile Data and User Installed Applications

It should be noted that App Volumes is not a profile management tool, but it can be used alongside any currently used user-environment management tool.

When designing for Writable Volumes, the storage requirement will be different than it is when designing for AppStacks. Where an AppStack requires only read I/O, a Writable Volume requires both read and write I/O. The IOPS for a Writable Volume will also vary per user depending on the individual user and how they use their data, as well as on the type of data the IT administrator allows the user to store in their Writable Volume.

IT administrators should monitor their users and how they access their Writable Volume; this will help them manage how many Writable Volumes can be configured on a single storage LUN.
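One rough way to turn that monitoring data into a placement bound is to divide a LUN's IOPS budget by the measured average read+write IOPS per user. Both inputs here are hypothetical and should come from your own measurements, not from this sketch.

```python
def writable_volumes_per_lun(lun_iops_budget, avg_user_iops):
    """Upper bound on Writable Volumes per LUN, given a per-LUN IOPS
    budget and an observed average read+write IOPS figure per user."""
    return lun_iops_budget // avg_user_iops
```

For instance, a LUN budgeted at 5,000 IOPS with users averaging 25 IOPS each could host on the order of 200 Writable Volumes, before accounting for peaks.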

Hopefully this blog helps describe the differences between AppStacks and Writable Volumes, and the differences that should be taken into consideration when designing for each.

I would like to thank Stephane Asselin for his input on this blog.



App Volumes AppStack Creation

By Dale Carter, Senior Solutions Architect, End-User Computing

VMware App Volumes provide just-in-time application delivery to virtualized desktop environments. With this real-time application delivery system, applications are delivered to virtual desktops through VMDK virtual disks, without modifying the VM or applications themselves. Applications can be scaled out with superior performance, at lower costs, and without compromising end-user experience.

In this blog post I will show you how easy it is to create a VMware App Volumes AppStack, and how that AppStack can then be easily deployed to hundreds of users.

When configuring App Volumes with VMware Horizon View, an App Volumes AppStack is a read-only VMDK file that is added to a user’s virtual machine. The App Volumes Agent then merges the two or more VMDK files so the Microsoft Windows operating system sees them as just one drive. This way the applications look to the Windows OS as if they are natively installed, not on a separate disk.
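The merge described above can be pictured as a union of file-system layers. This is a conceptual illustration only, not the actual Agent implementation: the base OS disk is overlaid by each attached read-only AppStack, and the result is the single drive Windows sees.

```python
def merged_view(base_disk, appstacks):
    """Model the single drive Windows sees: the base OS disk plus the
    files contributed by each attached AppStack layer."""
    view = dict(base_disk)
    for stack in appstacks:
        view.update(stack)  # each AppStack's files appear in the one drive
    return view
```

An OS disk containing `notepad.exe` plus an Office AppStack containing `word.exe` yields one merged drive with both, which is why the applications appear natively installed.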

To create an App Volumes AppStack follow these simple steps.

  1. Log in to the App Volumes Manager Web interface.
  2. Click Volumes.
    DCarter Volumes
  3. Click Create AppStack.
    DCarter AppStack
  4. Give the AppStack a name. Choose the storage location and give it a description (optional). Then click Create.
    DCarter Create AppStack
  5. Choose to either Perform in the background or Wait for completion and click Create.
    DCarter Create
  6. vCenter will now create a new VMDK for the AppStack to use.
  7. Once vCenter finishes creating the VMDK the AppStack will show up as Un-provisioned. Click the + sign.
    DCarter
  8. Click Provision.
    DCarter Provision
  9. Search for the desktop that will be used to install the software. Select the Desktop and click Provision.
    DCarter Provision AppStack
  10. Click Start Provisioning.
    DCarter Start Provisioning
  11. vCenter will now attach the VMDK to the desktop.
  12. Open the desktop that will be used for provisioning the new software. You will see the following message. DO NOT click OK yet; you will click OK after the software install.
    DCarter Provisioning Mode
  13. Install the software on the desktop. This can be just one application or a number of applications. If reboots are required between installs, that is OK; App Volumes will remember where you are after the install.
  14. Once all of the software has been installed click OK.
    DCarter Install
  15. Click Yes to confirm and reboot.
    DCarter Reboot
  16. Click OK.
    DCarter 2
  17. The desktop will now reboot. After the reboot you must log back in to the desktop.
  18. After you log in you must click OK. This will reconfigure the VMDK on the desktop.
    DCarter Provisioning Successful
  19. You can now connect to the App Volumes Manager Web interface and see that the AppStack is ready to be assigned.
    DCarter App Volumes Manager

Once you have created the AppStack you can assign the AppStack to an Active Directory object. This could be a user, computer or user group.

To assign an AppStack to a user, computer or user group, follow these simple steps.

  1. Log in to the App Volumes Manager Web interface.
  2. Click Volumes.
    DCarter Volumes Dashboard
  3. Click the + sign by the AppStack you want to assign.
  4. Click Assign.
    DCarter Assign
  5. Search for the Active Directory object. Select the user, computer, OU or user group to assign the AppStack to. Click Assign.
    DCarter Assign Dashboard
  6. Choose whether to assign the AppStack at the next login or immediately, and click Assign.
    DCarter Active Director
  7. The users will now have the AppStack assigned to them and will be able to launch the applications as they would any normal application.
    DCarter AppStack Assign

By following these simple steps you will be able to quickly create an AppStack and simply deploy that AppStack to your users.



Horizon View: RDS PCoIP Design Tips

By Dale Carter, Consulting Architect, End-User Computing

With the release of VMware Horizon View comes the ability to configure not only virtual desktops but also virtual applications hosted on Windows RDS servers.

In this post, I will cover a couple of things that you should take into consideration when configuring virtual applications and how they might affect the sizing of your View Cluster and the number of connection servers you will need.

There are many different papers and posts on how to configure RDS servers themselves, so I will not be touching on that in this post. I want to discuss the effects of how the PCoIP connections connect to RDS servers and what you should look out for.

Scenario 1
The following diagram shows my first configuration. This includes a virtual desktop cluster and a single RDS farm. RDS Farm A in this example is hosting five applications: Word, Excel, PowerPoint, Visio and Lync.

Virtual Desktop Scenario 1

In this scenario, if a user launches a virtual desktop and then an application, the user would be using a maximum of two PCoIP connections through the Horizon View infrastructure. It’s important to know that when configuring RDS with just one farm, if a user then launches a second application—or all five applications—they will all launch over the same PCoIP connection. This means that all five applications for that user would be running on the same RDS host. In this scenario, you need to make sure that each of your RDS hosts can handle all users opening all applications.

The Horizon View connection servers do load balance a user’s connection when the user first connects to an RDS host. Users will always be sent to the RDS host with the fewest connections; however, once they are connected they will always go back to the same RDS host until they completely disconnect from all applications.

In this scenario, if you have 300 users and they all launch Word, each RDS server will have 100 connections all running Word. It is also possible in this scenario that Servers A and B will only be running 100 instances of Word, whereas Server C could be running 100 instances spread across all five of the different applications. This is why it is critical that the RDS servers are configured correctly.
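The brokering behavior described here can be sketched as a least-connections picker with sticky sessions. This is a hypothetical model of the behavior, not View's actual implementation: new users go to the host with the fewest connections, and an already-connected user stays pinned to their host until they fully disconnect.

```python
class RdsFarm:
    def __init__(self, hosts):
        self.connections = {h: set() for h in hosts}
        self.pinned = {}  # user -> host they are stuck to

    def launch(self, user):
        if user in self.pinned:            # sticky: reuse the existing host
            return self.pinned[user]
        # least connections: pick the host with the fewest sessions
        host = min(self.connections, key=lambda h: len(self.connections[h]))
        self.connections[host].add(user)
        self.pinned[user] = host
        return host

    def disconnect(self, user):
        host = self.pinned.pop(user, None)
        if host is not None:
            self.connections[host].discard(user)
```

Running 300 users through a three-host farm spreads them 100/100/100, and a user's second application launch lands on the same host as their first, matching the scenario above.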

Scenario 2
In the second configuration, I split the applications across RDS farms. The following diagram shows two RDS farms. The first, Farm A, is hosting Word, Excel and PowerPoint. The second, Farm B, is hosting Visio and Lync.

Virtual Desktop Scenario 2

 

Now in this scenario, if a user launches a virtual desktop and then the applications Word and Visio, we have managed to lighten the load on the RDS servers. By separating the applications into different RDS farms, we now know that each RDS server is not going to get as much load when a user opens these applications. However, instead of using two PCoIP connections, the user is now using three.
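The connection counts in the two scenarios follow a simple rule: one PCoIP connection for the virtual desktop plus one per distinct RDS farm hosting the user's open applications. A minimal sketch, with farm layouts mirroring the diagrams:

```python
def pcoip_sessions(open_apps, app_to_farm, has_desktop=True):
    """One PCoIP connection for the desktop, plus one per distinct
    RDS farm serving the user's open applications."""
    farms = {app_to_farm[app] for app in open_apps}
    return (1 if has_desktop else 0) + len(farms)
```

With all five applications on Farm A (Scenario 1), Word plus Visio costs two connections; with Visio and Lync moved to Farm B (Scenario 2), the same pair costs three.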

Conclusion
Given this information, it becomes more important than ever to know your users’ environment and the applications the users are using. When deploying Horizon View into your environment and taking advantage of the new hosted application functionality you need to ask yourself the following questions:

  • How many applications will be installed on each RDS host?
  • What is the hardware configuration of the RDS host?
  • How many RDS farms will be required?
  • How many PCoIP sessions will each user require?

For larger environments, the question might be: Will one or more View deployments be required? As the environments get larger, it might be a better design to have one View deployment for desktop connections and a separate deployment for hosted applications. In this scenario, VMware Workspace can become the central location for users to access all of their desktops and applications. With VMware Workspace 2.0, it is now possible to configure more than one View environment, giving you the option of multiple View environments all accessible from the one Workspace front end.



Horizon Workspace Tips: Increased Performance for the Internal Database

By Dale Carter, Consulting Architect, End User Computing

During my time deploying VMware Horizon Workspace 1.5 at a very large corporation with a very large Active Directory (AD) infrastructure, I noticed that the internal Horizon database would have performance issues when syncing with AD.

After discussing the issues with VMware Engineering we found a number of ways to improve performance of the database during these times. Below I’ve outlined the changes I made to Horizon Workspace to increase performance for the internal database.

I should note that the VMware best practice for production environments is to use an external database. However, in some deployments customers still prefer to use the internal database; for instance, for a pilot deployment.

Service-va sizing

It is very important to size this VM correctly; this is where the database sits, and this VM will be doing most of the work, so it must not be undersized. The following is a recommended size for the service-va, but you should monitor this VM and change it as needed.

  • 6 vCPUs
  • 16 GB of RAM

Database Audit Queue

If you have a very large user population, you will need to increase the audit queue size to handle the deluge of messages generated by entitling a large volume of users to an application at once. VMware recommends that the queue be at least three times the number of users. Make this change to the database with the following SQL:

  1. Log in to the console on the service-va as root.
  2. Stop the Horizon Frontend service:

service horizon-frontend stop

  3. Start psql as the horizon user (you will be prompted for a password):

psql -d "saas" -U horizon

  4. Increase the audit queue size:

INSERT INTO "GlobalConfigParameters" ("strKey", "idEncryptionMethod", "strData")
VALUES ('maxAuditsInQueueBeforeDropping', '3', '125000');

  5. Exit psql:

\q

  6. Start the Horizon Frontend service:

service horizon-frontend start
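A quick sanity check on the sizing rule above: the queue should be at least three times the number of entitled users, and 125000 is the value used in the INSERT statement, so it covers a little over 41,000 users. A minimal sketch:

```python
def min_audit_queue(users):
    """VMware's rule of thumb: queue at least 3x the number of users."""
    return 3 * users

def queue_is_large_enough(queue_size, users):
    return queue_size >= min_audit_queue(users)
```

For example, 40,000 users need a queue of at least 120,000 messages, which the 125000 setting satisfies; at 45,000 users it would no longer be sufficient.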

Adding Indexes to the Database

A number of indexes can be added to the internal database to improve performance when dealing with a large number of users.

The following commands can be run on the service-va to add these indexes:

  1. Log in to the console on the service-va as root.
  2. Stop the Horizon Frontend service:

service horizon-frontend stop

  3. Start psql as the horizon user (you will be prompted for a password):

psql -d "saas" -U horizon

  4. Create an index on the UserEntitlement table:

CREATE INDEX userentitlement_resourceuuid
ON "UserEntitlement"
USING btree
("resourceUuid" COLLATE pg_catalog."default");

  5. Create a second index:

CREATE INDEX userentitlement_userid
ON "UserEntitlement"
USING btree
("userId");

  6. Exit psql:

\q

  7. Start the Horizon Frontend service:

service horizon-frontend start

I would also like to point out that these performance issues have been fixed in the upcoming Horizon 1.8 release. For now, though, I hope this helps. Feel free to leave any questions in the comments of this post.

