Home > Blogs > VMware Consulting Blog

vSphere Datacenter Design – vCenter Architecture Changes in vSphere 6.0 – Part 2

By Jonathan McDonald

In Part 1 we discussed the different deployment modes for vCenter and Enhanced Linked Mode. In Part 2 we finish the discussion by addressing mixed platforms, high availability, and recommended deployment configurations for vCenter.

Mixed Platforms

Prior to vSphere 6.0, there was no interoperability between vCenter Server for Windows and the Linux-based vCenter Server Appliance. Once a platform was chosen, switching to the other platform required a full reinstall. The vCenter Server Appliance was also limited in features and functionality.

With vSphere 6.0, the two platforms are functionally equivalent, and all features are available in either deployment mode. With Enhanced Linked Mode, the two are also interchangeable, which allows you to mix vCenter Server for Windows and vCenter Server Appliance instances in the same environment.

The following is an example of a mixed platform environment:

[Figure: Example of a mixed platform environment]

This mixed platform flexibility has never before been possible with the vCenter platform.

As with any environment, the way it is configured is based on the size of the environment (including expected growth) and the need for high availability. These factors will generally dictate the best configuration for the Platform Services Controller (PSC).

High Availability

Providing high availability protection to the Platform Services Controller adds an additional level of overhead to the configuration. When using an embedded Platform Services Controller, protection is provided in the same way that vCenter is protected, as it is all a part of the same system.

Availability of vCenter is critical due to the number of solutions requiring continuous connectivity, as well as to ensure the environment can be managed at all times. Whether it is a standalone vCenter Server, or embedded with the Platform Services Controller, it should run in a highly available configuration to avoid extended periods of downtime.

Several methods can be used to provide higher availability for the vCenter Server system. The decision depends on how much downtime can be tolerated, whether failover must be automated, and whether budget is available for additional software components.

The following table lists methods available for protecting the vCenter Server system and the vCenter Server Appliance when running in embedded mode.

Redundancy method                                                  Protects vCenter Server system?   Protects vCenter Server Appliance?
Automated protection using vSphere HA                              Yes                               Yes
Manual configuration and manual failover (e.g., a cold standby)    Yes                               Yes
Automated protection using Microsoft Cluster Service (MSCS)        Yes                               No

If high availability is required for an external Platform Services Controller, protection is provided by adding a secondary backup Platform Services Controller, and placing them both behind a load balancer.

The load balancer must support multiple TCP port balancing, HTTPS load balancing, and sticky sessions. VMware has tested several load balancers, including F5 and NetScaler, but does not directly support these products; see the vendor documentation for configuration details for whichever load balancer you use.
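If you want a quick sanity check after standing up the load balancer, a short script can confirm that each Platform Services Controller node and the load-balanced address answer on HTTPS. The following is a minimal Python sketch under assumed host names; it only verifies TCP/TLS reachability on port 443, not actual PSC service health, and it is not a VMware-provided tool.

import socket
import ssl

# Placeholder host names: replace with your PSC nodes and the load balancer VIP.
ENDPOINTS = ["psc01.example.local", "psc02.example.local", "psc-vip.example.local"]
PORT = 443

def https_reachable(host, port=PORT, timeout=5):
    # Return True if a TLS session can be established with the endpoint.
    context = ssl.create_default_context()
    # Certificates are often VMCA- or self-signed, so skip verification for this probe.
    context.check_hostname = False
    context.verify_mode = ssl.CERT_NONE
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with context.wrap_socket(sock, server_hostname=host):
                return True
    except OSError:
        return False

if __name__ == "__main__":
    for host in ENDPOINTS:
        print(host, "OK" if https_reachable(host) else "UNREACHABLE")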

Here is an example of this configuration using a primary and a backup node.

[Figure: External Platform Services Controllers (primary and backup) behind a load balancer]

With vCenter 6.0, connectivity to the Platform Services Controller is stateful, and the load balancer is used only for its failover capability. Active-active connectivity to both nodes at the same time is therefore not recommended, as it risks corrupting the data replicated between the nodes.

Note: Although it is possible to have more than one backup node, it is normally a waste of resources and adds a level of complexity to the configuration for little gain. Unless there is an expectation that more than a single node could fail at the same time, there is very little benefit to configuring a tertiary backup node.

Scalability Limitations

Before deciding on a configuration for vCenter, review the following scalability limits for the different configurations, because they can affect the final design (a small validation sketch follows the table).

Scalability limit                                                    Maximum
Platform Services Controllers per domain                             8
PSCs per vSphere site, behind a single load balancer                 4
Objects within a vSphere domain (users, groups, solution users)      1,000,000
VMware solutions connected to a single PSC                           4
VMware products/solutions per vSphere domain                         10
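If you keep design reviews in code, the documented maximums above can be captured in a small helper that flags a proposed topology exceeding them. This is an illustrative Python sketch only; the limit values come from the table above, and the dictionary keys are assumptions of the sketch, not a VMware API.

# Documented vSphere 6.0 maximums, taken from the table above.
LIMITS = {
    "pscs_per_domain": 8,
    "pscs_per_site_behind_one_lb": 4,
    "objects_per_domain": 1_000_000,
    "solutions_per_psc": 4,
    "solutions_per_domain": 10,
}

def check_design(design):
    # design is a plain dict keyed like LIMITS (an assumption of this sketch).
    warnings = []
    for key, maximum in LIMITS.items():
        value = design.get(key, 0)
        if value > maximum:
            warnings.append("%s: planned %s exceeds maximum %s" % (key, value, maximum))
    return warnings

# Example: 12 solutions in one domain and 5 PSCs behind a single load balancer.
print(check_design({"solutions_per_domain": 12, "pscs_per_site_behind_one_lb": 5}))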

Deployment Recommendations

Now that you understand the basic configuration details for vCenter and the Platform Services Controller, you can put it all together in an architecture design. The choice of a deployment architecture can be a complex task depending on the size of the environment.

The following are some recommendations for deployment. Note that VMware recommends virtualizing all vCenter components so that you gain the benefits of vSphere features such as VMware HA. These recommendations therefore assume virtualized systems; physical systems need to be protected by other appropriate means.

  • For sites that will not use Enhanced Linked Mode, use an embedded Platform Services Controller.
    • This provides simplicity in the environment, including a single pane-of-glass view of all servers while at the same time reducing the administrative overhead of configuring the environment for availability.
    • High availability is provided by VMware HA. The failure domain is limited to a single vCenter Server, as there is no dependency on external component connectivity to the Platform Services Controller.
  • For sites that will use Enhanced Linked Mode use external Platform Service Controllers.
    • This configuration uses external Platform Services Controllers and load balancers (recommended for high availability). The number of controllers depends on the size of the environment (see the sizing sketch after this list):
      • If there are two to four VMware solutions – Only a single Platform Services Controller is needed if the configuration is not designed for high availability; two Platform Services Controllers are required for high availability behind a single load balancer.
      • If there are four to eight VMware solutions – Two Platform Services Controllers must be linked together if the configuration is not designed for high availability; four are required for a high-availability configuration behind two load balancers (two behind each load balancer).
      • If there are eight to ten VMware solutions – Three Platform Services Controllers should be linked together if the configuration is not designed for high availability; six are required for high availability, configured behind three load balancers (two behind each load balancer).
    • High availability is provided by having multiple Platform Services Controllers and a load balancer to provide failure protection. In addition to this, all components are still protected by VMware HA. This will limit the failure implications of having a single Platform Services Controller, assuming they are running on different ESXi hosts.
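The sizing rules above reduce to a small lookup, which can be handy for sanity-checking a plan. The following Python sketch encodes only the ranges listed above (two to ten solutions) and is a rough planning aid, not a VMware sizing tool.

def psc_sizing(num_solutions, highly_available):
    # Returns (psc_count, load_balancer_count) per the recommendations above (2-10 solutions).
    if num_solutions <= 4:
        pscs = 1
    elif num_solutions <= 8:
        pscs = 2
    elif num_solutions <= 10:
        pscs = 3
    else:
        raise ValueError("More than 10 solutions exceeds the per-domain maximum")
    load_balancers = 0
    if highly_available:
        pscs *= 2                   # each PSC gets a backup partner
        load_balancers = pscs // 2  # pairs sit behind one load balancer each
    return pscs, load_balancers

print(psc_sizing(6, highly_available=True))   # -> (4, 2)
print(psc_sizing(3, highly_available=False))  # -> (1, 0)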

Hopefully these deployment recommendations will dramatically simplify the process of choosing a design for vCenter and the Platform Services Controller.

This concludes this blog series. I hope this information has been useful and that it demystifies the new vCenter architecture.

 


Jonathan McDonald is a Technical Solutions Architect for the Professional Services Engineering team. He currently specializes in developing architecture designs for core Virtualization, and Software-Defined Storage, as well as providing best practices for upgrading and health checks for vSphere environments.

vSphere Datacenter Design – vCenter Architecture Changes in vSphere 6.0 – Part 1

By Jonathan McDonald

As a member of VMware Global Technology and Professional Services, I get the privilege of working with products prior to their release. This not only familiarizes me with new changes, but also allows me to question—and figure out—how a new product will change the architecture in a datacenter.

Recently, I have been working on exactly that with vCenter 6.0 because of all the upcoming changes in the new release. One of my favorite things about vSphere 6.0 is the simplification of vCenter and associated services. Previously, each individual major service (vCenter, Single Sign-On, Inventory Service, the vSphere Web Client, Auto Deploy, etc.) was installed individually. This added complexity and uncertainty in determining the best way to architect the environment.

With the release of vSphere 6.0, vCenter Server installation and configuration has been dramatically simplified. The installation of vCenter now consists of only two components that provide all services for the virtual datacenter:

  • Platform Services Controller – This provides infrastructure services for the datacenter. The Platform Services Controller contains these services:
    • vCenter Single Sign-On
    • License Service
    • Lookup Service
    • VMware Directory Service
    • VMware Certificate Authority
  • vCenter Services – The vCenter Server group of services provides the remainder of the vCenter Server functionality, which includes:
    • vCenter Server
    • vSphere Web Client
    • vCenter Inventory Service
    • vSphere Auto Deploy
    • vSphere ESXi Dump Collector
    • vSphere Syslog Collector (Microsoft Windows)/VMware Syslog Service (Appliance)

So, when deploying vSphere 6.0 you need to understand the implications of these changes to properly architect the environment, whether it is a fresh installation, or an upgrade. This is a dramatic change from previous releases, and one that is going to be a source of many discussions.

To help prevent confusion, my colleagues in VMware Global Support, VMware Engineering, and I have developed guidance on supported architectures and deployment modes. This two-part blog series will discuss how to properly architect and deploy vCenter 6.0.

vCenter Deployment Modes

There are two basic architectures that can be used when deploying vSphere 6.0:

  • vCenter Server with an Embedded Platform Services Controller – This mode installs all services on the same virtual machine or physical server as vCenter Server. The configuration looks like this:

[Figure: vCenter Server with an embedded Platform Services Controller]

This is ideal for small environments, or if simplicity and reduced resource utilization are key factors for the environment.

  • vCenter Server with an External Platform Services Controller – This mode installs the platform services on a system that is separate from where vCenter services are installed. Installing the platform services is a prerequisite for installing vCenter. The configuration looks as follows:

[Figure: vCenter Server with an external Platform Services Controller]

 

This is ideal for larger environments, where there are multiple vCenter servers, but you want a single pane-of-glass for the site.

Choosing your architecture is critical, because once the model is chosen, it is difficult to change, and configuration limits could inhibit the scalability of the environment.

Enhanced Linked Mode

As a result of these architectural changes, Platform Services Controllers can be linked together. This enables a single pane-of-glass view of any vCenter server that has been configured to use the Platform Services Controller domain. This feature is called Enhanced Linked Mode and is a replacement for Linked Mode, which was a construct that could only be used with vCenter for Windows. The recommended configuration when using Enhanced Linked Mode is to use an external platform services controller.

Note: Although using embedded Platform Services Controllers and enabling Enhanced Linked Mode can technically be done, it is not a recommended configuration. See List of Recommended topologies for vSphere 6.0 (2108548) for further details.

The following are some recommended options for how—and how not—to configure Enhanced Linked Mode.

  • Enhanced Linked Mode with an External Platform Services Controller with No High Availability (Recommended)

In this case the Platform Services Controller is configured on a separate virtual machine, and then the vCenter servers are joined to that domain, providing the Enhanced Linked Mode functionality. The configuration would look this way:

[Figure: Enhanced Linked Mode with an external Platform Services Controller, no high availability]

 

There are benefits and drawbacks to this approach. The benefits include:

  • Fewer resources consumed by the combined services
  • More vCenter instances are allowed
  • Single pane-of-glass management of the environment

The drawbacks include:

  • Network connectivity loss between vCenter and the Platform Services Controller can cause outages of services
  • More Windows licenses are required (if on a Windows Server)
  • More virtual machines to manage
  • An outage of the Platform Services Controller will cause an outage for all vCenter Servers connected to it; high availability is not included in this design.

  • Enhanced Linked Mode with an External Platform Services Controller with High Availability (Recommended)

In this case the Platform Services Controllers are configured on separate virtual machines and configured behind a load balancer; this provides high availability to the configuration. The vCenter servers are then joined to that domain using the shared Load Balancer IP address, which provides the Enhanced Linked Mode functionality, but is resilient to failures. This configuration looks like the following:

[Figure: Enhanced Linked Mode with external Platform Services Controllers behind a load balancer]

There are benefits and drawbacks to this approach. The benefits include:

  • Fewer resources are consumed by the combined services
  • More vCenter instances are allowed
  • The Platform Services Controller configuration is highly available

The drawbacks include:

  • More Windows licenses are required (if on a Windows Server)
  • More virtual machines to manage

  • Enhanced Linked Mode with Embedded Platform Services Controllers (Not Recommended)

In this case vCenter is installed as an embedded configuration on the first server. Subsequent installations are configured in embedded mode, but joined to an existing Single Sign-On domain.

Linking embedded Platform Services Controllers is possible, but is not a recommended configuration. It is preferred to have an external configuration for the Platform Services Controller.

The configuration looks like this:

[Figure: Enhanced Linked Mode with embedded Platform Services Controllers]

 

  • Combination Deployments (Not Recommended)

In this case there is a combination of embedded and external Platform Services Controller architectures.

Linking an embedded Platform Services Controller and an external Platform Services Controller is possible, but again, this is not a recommended configuration. It is preferred to have an external configuration for the Platform Services Controller.

Here is an example of one such scenario:

[Figure: Combination of embedded and external Platform Services Controllers]

  • Enhanced Linked Mode Using Only an Embedded Platform Services Controller (Not Recommended)

In this case there is an embedded Platform Services Controller with vCenter Server linked to an external standalone vCenter Server.

Linking a second vCenter Server to an existing embedded vCenter Server and Platform Services Controller is possible, but this is not a recommended configuration. It is preferred to have an external configuration for the Platform Services Controller.

Here is an example of this scenario:

[Figure: Embedded Platform Services Controller linked to a standalone vCenter Server]

 

Stay tuned for Part 2 of this blog post where we will discuss the different platforms for vCenter, high availability and different deployment recommendations.


Jonathan McDonald is a Technical Solutions Architect for the Professional Services Engineering team. He currently specializes in developing architecture designs for core Virtualization, and Software-Defined Storage, as well as providing best practices for upgrading and health checks for vSphere environments.

Link VMware Horizon Deployments Together with Cloud Pod Architecture

By Dale Carter

VMware has just made life easier for VMware Horizon administrators. With the release of VMware Horizon 6.1, VMware has brought Cloud Pod Architecture—a popular feature introduced in the Horizon 6 release—into the web interface. Using Cloud Pod Architecture you can link a number of Horizon deployments together to create a larger global pool, and these pools can span two different locations.

Cloud Pod Architecture in Horizon 6 was sometimes difficult to configure because you had to use a command-line interface on the connection brokers. Now, with Horizon 6.1, you can configure and manage Cloud Pod Architecture through the Web Admin Portal, which greatly improves the feature.

When you deploy Cloud Pod Architecture with Horizon 6.1 you can:

  • Enable Horizon deployments across multiple data centers
  • Replicate new data layers across Horizon connection servers
  • Support a single namespace for end-users with a global URL
  • Assign and manage desktops and users with the Global Entitlement layer

The significant benefits you gain include:

  • The ability to scale Horizon deployments to multiple data centers with up to 10,000 sessions
  • Horizon deployment support for active/active and disaster recovery use cases
  • Support for geo-roaming users

This illustration shows how two Horizon deployments—one in Chicago and another in London—are linked together.

[Figure: Two Horizon deployments, in Chicago and London, linked with Cloud Pod Architecture]

To configure Cloud Pod Architecture for supporting a global name space you first:

  • Set up at least two Horizon Connection Servers – one at each site; each server would have desktop pools
  • Test them to ensure they work properly, including assigning users (or test users) to the environments

Following this initial step you create global pools, then configure local pools with global pools, and finally set up user entitlements, which can be done from any Horizon Connection Server.

For more detailed information, and for a complete walk-through on setting up your Cloud Pod Architecture feature, read the white paper “Cloud Pod Architecture with VMware Horizon 6.1.”


Dale Carter, a VMware Senior Solutions Architect specializing in the EUC space, has worked in IT for more than 20 years. He is also a VCP4-DT, VCP5-DT, VCAP-DTD, and VCAP-DTA.

What’s New? In a word “Everything”!

Lots of version releases this week.  Great work to all teams involved!

Product                                              Version   Release date
VMware vSphere 6.x (ESXi, vCenter)                   6.0       12-Mar-15
VMware vRealize Operations for Horizon 6.x           6.1.0     12-Mar-15
VMware vRealize Automation 6.x                       6.2.1     12-Mar-15
VMware vRealize Orchestrator 6.x                     6.0.1     12-Mar-15
vRealize Business Advanced/Enterprise 8.x            8.2.1     12-Mar-15
vRealize Business Standard 6.x                       6.1.0     12-Mar-15
vRealize Business Standard 6.x for vSphere           6.1       12-Mar-15
vRealize Code Stream 1.x                             1.1.0     12-Mar-15
vRealize Infrastructure Navigator 5.x                5.8.4     12-Mar-15
vRealize Operations Manager 5.x                      5.8.5     12-Mar-15
VMware vCloud Networking and Security                5.5.4     12-Mar-15
VMware vCenter Site Recovery Manager 6.x             6.0       12-Mar-15
VMware Virtual SAN 6.x                               6.0       12-Mar-15
VMware vSphere Data Protection 6.x                   6.0       12-Mar-15
VMware vSphere Replication 6.x                       6.0       12-Mar-15
VMware Integrated OpenStack                          1.0       12-Mar-15
VMware View 6.x                                      6.1       12-Mar-15
VMware Horizon Client for Windows 3.x                3.3       12-Mar-15
VMware Workspace Portal 2.x                          2.1.1     12-Mar-15
VMware App Volumes 2.x                               2.6       12-Mar-15
VMware vSphere PowerCLI 6.x                          6.0       12-Mar-15
vRealize Orchestrator Active Directory plug-in       2.0.0     12-Mar-15
vRealize Orchestrator vRealize Automation plug-in    6.2.1     12-Mar-15

 

Thanks to “Amy C” for the compiled list!

Top 5 Tips When Considering a VMware Virtual SAN Storage Solution

By Mark Moretti

Is a software-defined storage platform right for you? How do you approach evaluating a virtualized storage environment? What are the key considerations to keep in mind? What are VMware customers doing? And, what are the experts recommending?

We recently asked our VMware Professional Services consultants these questions. We asked them to provide us with some of the key tips they provide to their best customers. What did we get? A short list of “tips” that identify how to approach the consideration process for a VMware Virtual SAN solution. It’s not a trivial decision. Your IT decisions are not made in a vacuum. You have existing compute and storage infrastructure, so you need to know what the impact of your decisions will be—in advance.

Read this short list of recommendations, share it with your staff and engage in a conversation on transforming your storage infrastructure.


Read now: Top 5 Tips When Considering VMware’s Virtual SAN Storage Solution


Mark Moretti is a Senior Services Marketing Manager for VMware.

Use Horizon View to Access Virtual Desktops Remotely – Without a VPN

 

By Eric Monjoin and Xavier Montaron

VMware Horizon View enables you to access a virtual desktop from anywhere, anytime. You can work remotely from your office or from a cybercafé, or anywhere else as long as there is a network connection to connect you to Horizon View infrastructure. It’s an ideal solution – but external connections can be risky.

So, how do you protect and secure your data? How do you authorize only some users—or groups of users—to connect from an external network without establishing a VPN connection?

You can achieve this by relying on an external solution like F5 Networks’ BIG-IP Access Policy Manager (APM), which can perform pre-authentication checks on endpoints based on criteria such as user rights, desktop compliance, up-to-date antivirus, and more. Or, you can simply use the built-in capabilities of Horizon View, which is perfect if you are a small or medium company with a limited budget.

There are two ways to achieve this with Horizon View:

  •  Pool tagging
  •  Two-factor authentication

Pool Tagging

Pool tagging consists of setting one or more tags on each View Connection Server (see Figure 1) and restricting desktop pools using those tags to specific brokers (see Figure 2).


Figure 1. View Connection Server tagging

In the following example, a tag “EXTERNAL” has been created for brokers that are paired with a View Security Server and dedicated to external connections, while a tag “INTERNAL” has been created for brokers dedicated to internal connections only. Only desktop pools assigned the “EXTERNAL” tag will be available, and will appear in the desktop pool list, while connected to a broker used for external connections.


Figure 2. Desktop pools tagging

As shown in Table 1, if you fail to restrict a pool with a tag, that pool will be available on all View Connection Servers. So, as soon as you start using tags, you have to use tags for all of your desktop pools.

Connection Server tag(s)      Desktop pool restricted tag(s)    Pool appears in desktop pool list
EXTERNAL                      EXTERNAL                          YES
EXTERNAL                      INTERNAL                          NO
INTERNAL                      EXTERNAL                          NO
INTERNAL                      INTERNAL                          YES
INTERNAL or EXTERNAL          INTERNAL and EXTERNAL             YES
INTERNAL or EXTERNAL          None                              YES

Table 1. Tag relationships between View Connection Servers and desktop pools

Keep in mind that when using tags, it is implied that the administrator has created specific pools for external connections, and specific pools for internal connections.
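The visibility rule captured in Table 1 is simple enough to model in a few lines, which can help when reviewing a tagging scheme on paper before applying it. The sketch below is plain Python, not a View API: a pool with no restricted tags is visible on every broker, otherwise it is visible only on brokers that share at least one tag with it.

def pool_visible(broker_tags, pool_restricted_tags):
    # Table 1 rule: visible if the pool is unrestricted, or if broker and pool share a tag.
    if not pool_restricted_tags:
        return True
    return bool(set(broker_tags) & set(pool_restricted_tags))

# The scenarios from Table 1.
print(pool_visible({"EXTERNAL"}, {"EXTERNAL"}))              # YES
print(pool_visible({"EXTERNAL"}, {"INTERNAL"}))              # NO
print(pool_visible({"INTERNAL"}, {"EXTERNAL"}))              # NO
print(pool_visible({"INTERNAL"}, {"INTERNAL"}))              # YES
print(pool_visible({"INTERNAL"}, {"INTERNAL", "EXTERNAL"}))  # YES
print(pool_visible({"EXTERNAL"}, set()))                     # YES (no restriction)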

 

Two-Factor Authentication

The other method when using Horizon View is two-factor authentication. This requires two separate methods of authentication to increase security.

The mechanism is simple: you first authenticate using a one-time password (OTP) passcode, as seen in Figure 3. These passcodes are regenerated approximately every 45 seconds, depending on the solution provider. If the provided credentials are valid, a second login screen appears (see Figure 4), where you enter the Active Directory login and password used for single sign-on to the hosted virtual desktop.


Figure 3. OTP login screen


Figure 4. Domain login screen

 

The advantages with this solution are:

  • Enhanced security – You need to have the OTP passcode (the user’s token) and must know the user’s Active Directory login and password.
  • Simplicity – There is no need to create two separate desktop pools, one for external connections and another for internal connections.
  • Selectivity – You can distribute tokens only to employees who require external access.

The most commonly and widely implemented solution is RSA SecurID from EMC (see below), but you can also use any solution that is RADIUS-compliant.

For more detailed information you can read the white paper “How to Set Up 2-Factor Authentication in Horizon View with Google Authenticator.” It describes how to set up FreeRADIUS and Google Authenticator to secure external connections and to authorize only specific users or groups of users to connect to Horizon View. This solution was successfully implemented at no cost at the City Hall in Drancy, France, by its chief information officer, Xavier Montaron.
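For readers curious what the Google Authenticator side of such a setup actually validates, the passcodes are standard time-based one-time passwords (TOTP). The following is a minimal illustration using the third-party pyotp library; it demonstrates only TOTP generation and verification, not the FreeRADIUS or Horizon View integration covered in the white paper, and the secret is generated on the fly purely as an example.

import pyotp

# Each user gets their own base32 secret, typically provisioned to Google
# Authenticator via a QR code; one is generated here purely as an example.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)  # 30-second time step by default

code = totp.now()
print("Current passcode:", code)

# Verification is what the RADIUS backend performs when View forwards the passcode.
print("Valid right now:", totp.verify(code))       # True
print("Wrong or stale code:", totp.verify("000000"))  # False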

 

Sources:

F5 BIG-IP Access Policy Manager 

http://www.f5.com/pdf/white-papers/f5-vmware-view-wp.pdf

https://support.f5.com/content/kb/en-us/products/big-ip_apm/manuals/product/apm-vmware-integration-implementations-11-4-0/_jcr_content/pdfAttach/download/file.res/BIG-IP_Access_Policy_Manager__VMware_Horizon_View_Integration_Implementations.pdf

RSA SecurID

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2003455

https://gallery.emc.com/servlet/JiveServlet/download/1971-24-4990/VMware_Horizon_View_52_AM8.0.pdf

 

 


Eric Monjoin joined VMware France in 2009 as a PSO Senior Consultant after spending 15 years at IBM as a Certified IT Specialist. Passionate about new challenges and technology, Eric has been a key leader in the VMware EUC practice in France. Recently, Eric moved to the VMware Professional Services Engineering organization as a Technical Solutions Architect. Eric is certified VCP6-DT, VCAP-DTA and VCAP-DTD, and was awarded vExpert for the fourth consecutive year.


Xavier Montaron holds a Master’s degree in Computer Science from the EPITECH school and has a strong developer background. He joined the Town Hall of Drancy in December 2007 in the CIO organization and has been its CIO since 2010. The Town Hall of Drancy has been a long-time IT innovator and user of VMware technology, both for infrastructure servers and for VDI, where all desktops have been fully virtualized with Horizon View since 2011. The Town Hall of Drancy recently decided to outsource all server and VDI infrastructure, which is now hosted by OVH, a global leader in Internet hosting based in France.

VMware App Volumes™ with F5’s Local Traffic Manager

By Dale Carter, Senior Solutions Architect, End User Computing, and Justin Venezia, Senior Solutions Architect, F5 Networks

App Volumes™—a result of VMware’s recent acquisition of Cloud Volumes—provides an alternative, just-in-time method for integrating and delivering applications to virtualized desktop- and Remote Desktop Services (RDS)-based computing environments. With this real-time application delivery system, applications are delivered by attaching virtual disks (VMDKs) to the virtual machine (VM) without modifying the VM – or the applications themselves. Applications can be scaled out with superior performance, at lower costs, and without compromising the end-user experience.

For this blog post, I have collaborated with Justin Venezia, one of my good friends and a former colleague now working at F5 Networks. Justin and I will discuss ways to build resiliency and scalability into the App Volumes architecture using F5’s Local Traffic Manager (LTM).

App Volumes Nitty-Gritty

Let’s start out with the basics. Harry Labana’s blog post gives a great overview of how App Volumes works and what it does. The following picture depicts a common App Volumes conceptual architecture:

[Figure: Common App Volumes conceptual architecture]

 

Basically, App Volumes does a “real time” attachment of applications (read-only and writable) to virtual desktops and RDS hosts using VMDKs. When the App Volumes Agent checks in with the manager, the App Volumes Manager (the brains of App Volumes) attaches the necessary VMDKs to the virtual machines through a connection with a paired vCenter. The App Volumes Agent manages the redirection of file system calls to AppStacks (read-only VMDKs of applications) or Writable Volumes (a user-specific writable VMDK). Through the Web-based App Volumes Manager console, IT administrators can dynamically provision, manage, or revoke application access. Applications can even be dynamically delivered while users are logged into the RDS session or virtual desktop.
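To make the idea of attaching a VMDK in real time concrete, the sketch below shows the underlying vSphere operation that such a workflow relies on: hot-adding an existing VMDK to a VM through the vCenter API using pyVmomi. This is a generic illustration with placeholder connection details, VM name, and datastore path; it is not App Volumes code, and the real product performs considerably more orchestration than this.

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def attach_existing_vmdk(vm, vmdk_path):
    # Hot-add an existing VMDK (e.g. '[datastore1] appstacks/office.vmdk') to a VM.
    controller = next(d for d in vm.config.hardware.device
                      if isinstance(d, vim.vm.device.VirtualSCSIController))
    disk = vim.vm.device.VirtualDisk()
    disk.backing = vim.vm.device.VirtualDisk.FlatVer2BackingInfo()
    disk.backing.fileName = vmdk_path
    disk.backing.diskMode = 'independent_persistent'
    disk.controllerKey = controller.key
    disk.unitNumber = 1  # assumes SCSI unit 1 is free; a real tool would pick a free slot

    spec = vim.vm.device.VirtualDeviceSpec()
    spec.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    spec.device = disk
    return vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[spec]))

if __name__ == "__main__":
    # Placeholder connection details and object names.
    si = SmartConnect(host="vcenter.example.local",
                      user="administrator@vsphere.local",
                      pwd="password",
                      sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "win7-desktop-01")
    attach_existing_vmdk(vm, "[datastore1] appstacks/office.vmdk")
    Disconnect(si)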

The App Volumes Manager is a critical component for administration and Agent communications. By using F5’s LTM capabilities, we can intelligently monitor the health of each App Volumes Manager server, balance and optimize the communications for the App Volumes Agents, and build a level of resiliency for maximum system uptime.

Who is Talking with What?

As with any application, there’s always some back-and-forth chatter on the network. Besides administrator-initiated actions against the App Volumes Manager using a web browser, there are four other events that generate traffic through the F5 BIG-IP; these are very short, quick communications. There are no persistent or long-term connections kept between the App Volumes Agent and Manager.

When an IT administrator assigns an application to a desktop/user that is already powered on and logged in, the App Volumes Manager talks directly with vCenter and attaches the VMDK. The Agent then handles the rest of the integration of the VMDK into the virtual machine; the Agent does not communicate with the App Volumes Manager during this process.

Configuring Load Balancing with App Volume Managers

Setting up the load balancing for App Volumes Manager servers is pretty straightforward. Before we walk through the load-balancing configuration, we’ll assume your F5 is already set up on your internal network and has the proper licensing for LTM.

Also, it’s important to ensure the App Volumes Agents will be able to communicate with the BIG-IP virtual IP address/FQDN assigned to the App Volumes Manager; take the time to check routing and access to/from the Agents and the BIG-IP.
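A quick way to verify that routing and name resolution are in place before pointing Agents at the VIP is to probe the FQDN from a representative desktop or RDS host. The sketch below is generic Python with a placeholder FQDN; it checks DNS resolution and basic HTTP/HTTPS port reachability only, and is not an F5 or App Volumes utility.

import socket

# Placeholder: the FQDN assigned to the App Volumes Manager virtual server on the BIG-IP.
VIP_FQDN = "appvolumes.example.local"

def check_vip(fqdn, ports=(80, 443), timeout=5):
    try:
        address = socket.gethostbyname(fqdn)
    except socket.gaierror:
        print("DNS lookup failed for %s" % fqdn)
        return
    print("%s resolves to %s" % (fqdn, address))
    for port in ports:
        try:
            with socket.create_connection((address, port), timeout=timeout):
                print("  port %d: reachable" % port)
        except OSError as exc:
            print("  port %d: NOT reachable (%s)" % (port, exc))

if __name__ == "__main__":
    check_vip(VIP_FQDN)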

Since the App Volumes Manager works with both HTTP and HTTPS, we’ll show you how to load balance App Volumes using SSL Termination. We’ll be doing SSL Bridging: SSL from the client to the F5 → it is decrypted → it is re-encrypted and sent to the App Volumes Manager server. This method will allow the F5 to use advanced features—such as iRules and OneConnect—while maintaining a secure, end-to-end connection.

Click here to get a step-by-step guide on integrating App Volumes Manager servers with F5’s LTM. Here are some prerequisites you’ll need to consider before you start:

  • Determine what the FQDN will be and what virtual IP address will be used.
  • Add the FQDN and virtual IP into your company’s DNS.
  • Create and/or import the certificate that will be used; this blog post does not cover creating, importing, and chaining certificates.

The certificate should contain the FQDN that will be used for load balancing. You can leave the default certificates on the App Volumes Manager servers; the BIG-IP handles all the SSL translation, even with self-signed certificates created on the App Volumes servers. A standard 2,048-bit web server certificate (with private key) will work well with the BIG-IP; just make sure you import and chain the root and intermediate certificates with the web server certificate.

Once you’re done running through the instructions, you’ll have some load-balanced App Volumes Manager servers!

Again, BIG thanks to Justin Venezia from the F5 team – you can read more about Justin Venezia and his work here.


Dale Carter, a VMware Senior Solutions Architect specializing in the EUC space, has worked in IT for more than 20 years. He is also a VCP4-DT, VCP5-DT, VCAP-DTD, and VCAP-DTA.

Justin Venezia is a Senior Solutions Architect for F5 Networks

Upgrading VMware Horizon View with Zero Downtime

By Dale Carter, Senior Solutions Architect, End-User Computing

Over the last few years of working with VMware Horizon View and performing many upgrades, the two biggest issues I would hear from customers when planning an upgrade were: “Why do we have to have so much downtime?” and “With seven connection brokers, why do we have to take them all down at once?”

These questions and issues came up when I was speaking to Engineering about the upgrade process and making it smoother for the customer.

I was told that, in fact, this was not the case, and you did not have to take all connection brokers down during the upgrade process; you can upgrade one connection broker at a time while the other servers are happily running.

This has been changed in View 6, and the upgrade documentation now reflects it. You can find the document here.

In this blog I will show you how to upgrade a cluster of connection servers with zero downtime. For this post I will be upgrading my View 5.3 servers to View 6.0.1.

Here are the steps needed to upgrade a View pod with zero downtime:

  1. Follow all prerequisites in the upgrade document referenced above, including completing all backups and snapshots.
  2. In the load balancer managing the View servers, disable the server that is going to be upgraded from the load balanced pool.
  3. Log in to the admin console.
  4. Disable the connection server you are going to upgrade. From the View Configuration menu select Server, then select Connection Servers and highlight the correct server. Finally, click Disable.
  5. Click OK. The View server will now be disabled.
  6. Log in to the View connection server and launch the executable. For this example I will launch VMware-viewconnectionserver-x86_64-6.0.1-2088845.exe. NOTE: We did not disable any services at this point.
  7. Click Next.
  8. Accept the license agreement, and click Next.
  9. Click Install.
  10. Once the process is done click Finish.
  11. Back in the Admin Console, enable the connection server by clicking Enable. Notice that the new version is now shown as installed.
  12. In the load balancer managing the View servers, enable the server that has been upgraded in the load balanced pool.
  13. Repeat steps 2–12 to upgrade the rest of your View servers, one at a time (the sketch after this list shows the overall loop).
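The per-server flow above is a classic rolling-upgrade loop. The sketch below expresses that loop in Python purely as a planning aid: the helper functions are illustrative stubs standing in for actions you perform in the load balancer, the View Admin console, and the installer, not real APIs.

# Hypothetical rolling-upgrade loop mirroring steps 2-12 above.
CONNECTION_SERVERS = ["view-cs-01", "view-cs-02", "view-cs-03"]

def disable_in_load_balancer(server):
    print("[LB]   disabling %s in the load-balanced pool" % server)

def set_connection_server_enabled(server, enabled):
    state = "enabling" if enabled else "disabling"
    print("[View] %s connection server %s in the Admin console" % (state, server))

def run_installer(server):
    print("[%s] running VMware-viewconnectionserver-x86_64-6.0.1-2088845.exe" % server)

def enable_in_load_balancer(server):
    print("[LB]   re-enabling %s in the load-balanced pool" % server)

for server in CONNECTION_SERVERS:
    disable_in_load_balancer(server)               # step 2
    set_connection_server_enabled(server, False)   # steps 3-5
    run_installer(server)                          # steps 6-10
    set_connection_server_enabled(server, True)    # step 11
    enable_in_load_balancer(server)                # step 12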

Security Servers

If one of the connection servers is paired with a security server then there are a couple of additional steps to cover.

The following steps will need to be done to upgrade a connection server that is paired with a security server.

  1. In the load balancer managing the View Security servers, disable the server that is going to be upgraded from the load balanced pool.
  2. Follow all prerequisites in the upgrade document referenced above, including disabling IPsec rules for the security server and taking snapshots.
  3. Prepare the security server to be upgraded. From the View Configuration menu select Server, then select Security Servers. Highlight the correct server, click More Commands, and then click Prepare for Upgrade or Reinstall.
  4. Click OK.
  5. Upgrade the paired Connection Server as outlined in steps 2–12.
  6. Log in to the View Security server and launch the executable. For this example I will launch VMware-viewconnectionserver-x86_64-6.0.1-2088845.exe.
  7. Click Next.
  8. Accept the License agreement and click Next.
  9. Confirm the paired Connection server and click Next.
  10. Enter the pairing password and click Next.
  11. Confirm the configuration and click Next.
  12. Click Install.
  13. In the load balancer managing the View Security servers, enable the server that has been upgraded in the load balanced pool.

Dale Carter, a VMware Senior Solutions Architect specializing in the EUC space, has worked in IT for more than 20 years. He is also a VCP4-DT, VCP5-DT, VCAP-DTD, and VCAP-DTA.

Understanding View Disposable Disks

By Travis Wood, VCDX-97

When VMware introduced linked clones in View 4.5, a new type of disk was included: the disposable disk. The purpose of this disk is to redirect certain volatile files away from the OS disk to help reduce linked-clone growth. I have read a lot of designs that utilize disposable disks, but it has become clear that there is a lot of confusion and misunderstanding about what they do and exactly how they function. This confusion is highlighted in a View white paper called View Storage Considerations, which describes disposable disks as:

Utilizing the disposable disk allows you to redirect transient paging and temporary file operations to a VMDK hosted on an alternate datastore. When the virtual machine is powered off, these disposable disks are deleted.

The three elements from this paragraph I want to demystify are:

  1. What is redirected to the disposable disk?
  2. Where are disposable disks hosted?
  3. When are disposable disks deleted/refreshed?

What is redirected?

By default there are three elements that are redirected to the disposable disk. The first is the Windows swap file: View Composer redirects the swap file from C: to the disposable disk. It is recommended to set the swap file to a specific size to make capacity planning easier.

 


 

The other elements that are redirected are the system environment variables TMP and TEMP. By default, the user TEMP and TMP environment variables are NOT redirected. However, it is highly recommended to remove the user TEMP and TMP environment variables; if this is done, Windows will use the system variables instead, and the user’s temporary files will then be redirected to the disposable disk.

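When auditing a gold image against this recommendation, you can report whether per-user TEMP/TMP variables still exist and where the system ones point. The following is a small, read-only Python sketch using the standard winreg module on Windows; it only prints the values and changes nothing.

import winreg

def read_value(root, subkey, name):
    # Return a registry value, or None if the key/value does not exist.
    try:
        with winreg.OpenKey(root, subkey) as key:
            value, _ = winreg.QueryValueEx(key, name)
            return value
    except FileNotFoundError:
        return None

SYSTEM_ENV = r"SYSTEM\CurrentControlSet\Control\Session Manager\Environment"
USER_ENV = r"Environment"

for name in ("TEMP", "TMP"):
    print("System %s: %s" % (name, read_value(winreg.HKEY_LOCAL_MACHINE, SYSTEM_ENV, name)))
    user_value = read_value(winreg.HKEY_CURRENT_USER, USER_ENV, name)
    if user_value is None:
        print("User %s: not defined (the system variable, and the redirection, will apply)" % name)
    else:
        print("User %s: %s  <- consider removing per the recommendation above" % (name, user_value))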

 

 

Where is the disposable disk stored?

There is a common misconception that, like the user data disk, the disposable disk can be redirected to a different storage tier. This is not the case: the disposable disk is always stored with the OS disk. In later versions of View you can choose the drive letter for the disposable disk within the GUI to avoid conflicts with mapped drives, but this setting and the size are the only customizations you can make to the disposable disk.

When is the disposable disk refreshed?

This is the question that tends to cause the most confusion. Many people I have spoken to have said that it is refreshed when the user logs off, whilst others say it is on reboot. The disposable disk is actually refreshed only when View powers off the VM. User-initiated shutdowns and reboots, as well as power actions within vCenter, do not affect the disposable disk. The following actions will cause the disposable disk to be refreshed:

  • Rebalance
  • Refresh
  • Recompose
  • VM powered off due to the Pool Power Policy set to “Always Powered Off”

This is important to understand: if the pool power policy is set to any of the other options (Powered On, Do Nothing, or Suspend), your disposable disks are not being refreshed automatically.

What does all this mean?

Understanding disposable disks and how they function will enable you to design your environment appropriately. The View storage reclamation feature introduced in View 5.2 uses an SE Sparse disk for the OS disk, which allows View to shrink OS disks when files are deleted from within the guest OS. However, only the OS disk is created as an SE Sparse disk; user data disks and disposable disks are created as standard VMDKs. The key difference between that feature and disposable disks is that storage reclamation relies on files being deleted from within the guest operating system, whereas the disposable disk is deleted, along with all the files it contains, when View powers off the VM. It is also important to note that SE Sparse disks are currently not supported on VSAN.

If you choose to use disposable disks in your design, then depending on your power cycle you may want to add an operational task for administrators to periodically change the pool power policy within a maintenance window so the disposable disks are refreshed. This is particularly important for the use case of persistent desktops, which have long refresh/recompose cycles.


Travis Wood is a VMware Senior Solutions Architect

MomentumSI Brings New DevOps and Cloud Professional Services to VMware

By now, it is common knowledge that VMware has evolved beyond server virtualization and is a leading Private Cloud, Hybrid Cloud, and End-User Computing provider. To enable the transformational business outcomes that these technologies support, we have continued to invest in building the best Professional Services team in the industry.

I am excited to share that in Q4 2014, VMware acquired MomentumSI, a leading IT consultancy that expands our capabilities to help our customers transform their IT processes and infrastructures into strategic advantage.

MomentumSI is a pure-play Professional Services business that served many of the same Fortune 500 companies that VMware does today. The company focused on four key solution areas:

  • Building DevOps capabilities for customers, leveraging technologies such as Docker, Puppet, Chef, Jenkins, Salt and Ansible
  • Architecting and implementing OpenStack Private Clouds
  • Enabling Hybrid Cloud solutions, with an emphasis on AWS and vCloud Air
  • Modernizing applications for cloud environments

The MomentumSI team has joined the Americas Professional Services Organization (PSO).  Together, the combined practice will assist our clients in achieving business results through IT transformation.

So with that, we welcome the MomentumSI team to the VMware family and look forward to expanding the value that we can deliver to our customers.

For more information on the services MomentumSI is bringing to VMware, please visit http://page.momentumsi.com/vmware.

Bret