
Tag Archives: IT Architecture

How to Add a Linux Machine as PowerShell Host in vRO

By Spas Kaloferov

Introduction

In this article we will look into the alpha version of Microsoft Windows PowerShell v6 for both Linux and Microsoft Windows. We will show how to execute PowerShell commands between Linux, Windows, and VMware vRealize Orchestrator (vRO):

  • Linux to Windows
  • Windows to Linux
  • Linux to Linux
  • vRO to Linux

We will also show how to add a Linux PowerShell (PSHost) in vRO.

Currently, the alpha version of PowerShell v6 does not support the PSCredential object, so we cannot use the Invoke-Command cmdlet to programmatically pass credentials and execute commands from vRO, through a Linux PSHost, to other Linux or Windows machines. Likewise, we cannot execute from vRO, through a Windows PSHost, to Linux machines.

To see how we used the Invoke-Command method to do this, see my blog Using CredSSP with the vCO PowerShell Plugin (SKKB1002).

In addition to not supporting the PSCredential object, the alpha version doesn’t support WinRM. WinRM is Microsoft’s implementation of the WS-Management protocol, a standard Simple Object Access Protocol (SOAP)-based, firewall-friendly protocol that enables hardware and operating systems from different vendors to interoperate. Therefore, when adding a Linux machine as a PowerShell host in vRO, we will be using SSH instead of WinRM as the protocol of choice.

The PowerShell v6 RTM version is expected to support WinRM, so we will be able to add the Linux PSHost with WinRM, and not SSH.

So, let’s get started.

Continue reading

The Anatomy of an Instant Clone

By Travis Wood

If you’ve used Horizon View over the last few years, then you most likely have come across linked clones. Linked clones use a parent image, called a “replica,” that serves read requests to multiple virtual machines (VMs), and the writes in each desktop are captured on their own delta disk. Replicas can also be used to change desktop update methodologies; instead of updating every desktop, you can update the parent image and recompose the rest of the desktops.

Horizon 7 has introduced a new method of provisioning with Instant Clones. Instant Clones are similar to linked clones in that all desktops read from a replica disk and write to their own disk, but Instant Clone takes it one step further by doing the same thing with memory. Instant Clones utilize a new feature of vSphere 6 where desktop VMs are forked (that is, Instant Clones are created) off a running VM—instead of cloning a powered-off VM—which provides savings for provisioning, updates, and memory utilization.

Golden Image

With Instant Clones you start with your golden image, in a way that is similar to linked clones. The golden image is the VM you install the operating system on, then join to the domain, and install user applications on; you follow the same OS optimization procedures you would use for linked clones.

When you’re done, release its IP address, shut it down, and create a snapshot. Now you are ready to create your Instant Clone desktop pool. This VM should have VMware Tools installed, along with the Horizon Agent with the Instant Clone module. It is NOT possible to have the Instant Clone and Composer modules co-installed, so you will always need different snapshots if using Instant Clones and linked clones from the same golden image. Reservations can be set on the golden image and they will be copied to the Instant Clones, reducing the size of the VSwap file. It is important to note that the golden image must be on storage that’s accessible to the host you are creating your Instant Clone desktop pool on.

Template

When you create your pool, Horizon will create a template. A template is a linked clone from your golden image, created on the same datastore as the golden image. It will have the name cp-template, and will be in the folder ClonePrepInternalTemplateFolder. Template disk usage is quite small, about 60 MB. There will be an initial power-on after the template is created, but it will then shut off.

TWood_Horizon Template

Replica

Next, Horizon will create a replica, which is the same as a Linked Clone replica. It is a thin-provisioned, full clone of the template VM. This will serve as the common read disk for all of your Instant Clones, so it can be tiered onto appropriate storage through the Horizon Administrator console, the same way it is done with Linked Clones. Of course, if you are using VSAN, there is only one datastore, so tiering is done automatically. Horizon will also create a CBRC Digest file for the replica. The replica will be called cp-replica-GUID and will be in the folder ClonePrepReplicaVmFolder. The disk usage of the replica will depend on how big your golden image is, but remember, it’s thin provisioned and not powered on, so there is no VSwap file.

TWood_Horizon Replica

Parent

Horizon will now create the final copy of the original VM, called a parent, which will be used to fork the running VMs. The parent is created on every host in the cluster; remember, we are forking running VMs here, so every host needs to have a running VM. These will be placed on the same datastore as the desktop VMs, with one per host per datastore. Because these are powered on, they have a VSwap file the size of the allocated vMEM. In addition, there will be a small delta disk to capture the writes from booting the parent VM, plus the VMX overhead VSwap file; these, and the sum of the other disks, are relatively small, at about 500 MB. These will be placed in ClonePrepReplicaVmFolder.

TWood_Horizon Parent

Something you’ll notice with the parent VM is that it will use 100% of its allocated memory, causing a vCenter alarm.

TWood_vCenter Alarm

TWood_Virtual Machine Error

Instant Clones

OK! At this point, we are finally ready to fork! Horizon will create the Instant Clones based on the provisioning settings, which can be upfront or on-demand. Instant Clones will have a VSwap file equal to the size of the vMEM, minus any reservations set on the golden image, plus a differencing disk.

The amount of growth for the differencing disk will depend on how much is written to the local VM during the user’s session, but it is deleted on logout. When running View Planner tests, this can grow to about 500 MB, which is the same as when using View Planner for Linked Clones. The provisioning of Instant Clones will be fast! You’ll see much lower resource utilization of your vCenter Server and less IO on your disk subsystem because there is no boot storm from the VMs powering on.
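The sizing rules above can be expressed as a quick back-of-the-envelope calculation. This is only a sketch: the 0.5 GB default reflects the roughly 500 MB differencing-disk growth seen with View Planner, and the 4 GB/1 GB desktop values are hypothetical:

```python
def instant_clone_storage_gb(vmem_gb, reservation_gb, diff_disk_gb=0.5):
    """Approximate per-clone storage: the VSwap file equals allocated vMEM
    minus any memory reservation set on the golden image, plus a
    differencing disk that grows during the session (deleted on logout)."""
    vswap_gb = max(vmem_gb - reservation_gb, 0)
    return vswap_gb + diff_disk_gb

# Hypothetical desktop: 4 GB vMEM with a 1 GB reservation
print(instant_clone_storage_gb(4, 1))  # 3.5
```

Setting a reservation on the golden image directly shrinks the per-clone VSwap term, which is why the article recommends it.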

TWood_vCenter Server

Conclusion

Instant Clones are a great new feature in Horizon 7 that take the concept of Linked Clones one step further. They bring the advantages of:

  • Reducing boot storms
  • Decreasing provisioning times
  • Decreasing change windows
  • Bringing savings to storage utilization

Instant Clones introduce a number of new objects: replicas, parents, and templates. It is important to understand not only how these are structured, but also their interrelationships, in order to plan your environment accordingly.


Travis is a Principal Architect in the Global Technology & Professional Services team, specializing in End User Computing.  He is also a member of the CTO Ambassadors program which connects the global field with R&D and engineering.

VMware Validated Design for SDDC 2.0 – Now Available

By Jonathan McDonald

Recently I have been involved in a rather cool project inside VMware, aimed at validating and integrating all the different VMware products. The most interesting customer cases I see are related to this work because oftentimes products work independently without issue—but together can create unique problems.

To be honest, it is really difficult to solve some of the problems when integrating many products together. Whether we are talking about integrating a ticketing system, building a custom dashboard for vRealize Operations Manager, or even building a validation/integration plan for Virtual SAN to add to existing processes, there is always the question, “What would the experts recommend?”

The goal of this project is to provide a reference design for our products, called a VMware Validated Design. The design is a construct that:

  • Is built by expert architects who have many years of experience with the products as well as the integrations
  • Allows repeatable deployment of the end solution, which has been tested to scale
  • Integrates with the development cycle, so if there is an issue with the integration and scale testing, it can be identified quickly and fixed by the developers before the products are released

All in all, this has been an amazing project that I’ve been excited to work on, and I am happy to be able to finally talk about it publicly!

Introducing the VMware Validated Design for SDDC 2.0

The first of these designs—under development for some time—is the VMware Validated Design for SDDC (Software-Defined Data Center). The first release was not available to the public and only internal to VMware, but on July 21, 2016, version 2.0 was released and is now available to everyone! This design builds not only the foundation for a solid SDDC infrastructure platform using VMware vSphere, Virtual SAN, and VMware NSX, but it builds on that foundation using the vRealize product suite (vRealize Operations Manager, vRealize Log Insight, vRealize Orchestrator, and vRealize Automation).

The outcome of the VMware Validated Design for SDDC is a system that enables an IT organization to automate the provisioning of common, repeatable requests and to respond to business needs with more agility and predictability. Traditionally, this has been referred to as Infrastructure-as-a-Service (IaaS); however, the VMware Validated Design for SDDC extends the typical IaaS solution to include a broader and more complete IT solution.

The architecture is based on a number of layers and modules, which allows interchangeable components to be part of the end solution or outcome, such as the SDDC. If a particular component design does not fit the business or technical requirements for whatever reason, it should be able to be swapped out for another similar component. The VMware Validated Design for SDDC is one way of putting an architecture together that has been rigorously tested to ensure stability, scalability, and compatibility. Ultimately, however, the system is designed to ensure the desired outcome will be achieved.

The conceptual design is shown in the following diagram:

JMCDonald_VVD Conceptual Design

As you can see, the design brings a lot more than just implementation details. It includes many common “day two” operational tasks such as management and monitoring functions, business continuity, and security.

To simplify such a complex design, it has been broken up into:

  • A high-level Architecture Design
  • A Detailed Design with all the design decisions included
  • Implementation guidance

Let’s take an in-depth look.

Continue reading

BCDR: Some Things to Consider When Upgrading Your VMware Disaster Recovery Solution

By Julienne Pham

Once upon a time, you protected your VMs with VMware Site Recovery Manager, and now you are wondering how to upgrade your DR solution with minimum impact on the environment. Is it as seamless as you think?

During my days in Global Support and working on customer Business Continuity/Disaster Recovery (BCDR) projects, I found it intriguing how vSphere components can put barriers in an upgrade path. Indeed, one of the first things I learned was that timing and the update sequence of my DR infrastructure was crucial to keep everything running, and with as little disruption as possible.

If we look more closely, this is a typical VMware Site Recovery Manager setup:

JPham_SRM 6x

And in a pyramid model, we have something like this:

JPham_SRM Pyramid

Example of a protected site

So, where do we start our upgrade?

Upgrade and maintain the foundation

You begin with the hardware, then the vSphere version you are upgrading to. You’ll see a lot of new features available, along with bug fixes, so your hardware and firmware might need some adjustments to support new features and enhancements. At a minimum, it is important to check the compatibility of the hardware and software you are upgrading to.

In a DR scenario, it is important to check storage replication compliance

This is where you ensure your data replicates according to your RPO.

If you are using vSphere Replication or Storage Array Replication, you should check the upgrade path and the dependency with vSphere and SRM.

  • As an example, VR cannot be upgraded directly from 5.8 to 6.1
  • You might need to update the Storage Replication Adaptor too.
  • You can probably find other examples of things that won’t work, or find work-arounds you’ll need.
  • You can find some useful information in the VMware Compatibility Guide

Architecture change

If you are looking to upgrade from vSphere 5.5 to 6.1, for example, you should check if you need to migrate from a simple SSO install to an external one for more flexibility, as you might not be able to change this later. As VMware SRM is dependent on the health of vCenter, you might be better off looking into upgrading this component first as a prerequisite.

Before you start you might want to check out the informative blog, “vSphere Datacenter Design – vCenter Architecture Changes in vSphere 6.0 – Part 1.”

The sites are interdependent

Once the foundation path is planned out, you have to think about how to minimize business impact.

Remember that if your protected site workload is down, you can always trigger a DR scenario, so it is in your best interest to keep the secondary site management layer fully functional and to upgrade VMware SRM and vCenter as a last step.

VMware upgrade path compatibility

Some might assume that you can upgrade from one version to another without compatibility issues coming up. Well, to avoid surprises, I recommend looking into our compatibility matrix and validating the different product version upgrade paths.

For example, the upgrade of SRM 5.8 to 6.1 is not supported. So, what are the implications for vCenter and SRM compatibility during the upgrade?

JPham_Upgrade Path Sequence
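To make the point concrete, here is a minimal sketch of an upgrade-path check. The only facts taken from this article are that SRM 5.8 to 6.1 is not a supported direct upgrade and that VR cannot go directly from 5.8 to 6.1; every other entry would have to come from the VMware compatibility matrix, and the intermediate version shown is hypothetical:

```python
# Illustrative direct-upgrade matrix; real entries must come from the
# VMware compatibility matrix. Only the two False entries are from the text.
SUPPORTED_DIRECT = {
    ("SRM", "5.8", "6.1"): False,  # direct upgrade not supported
    ("VR", "5.8", "6.1"): False,   # VR cannot be upgraded directly
    ("SRM", "5.8", "6.0"): True,   # hypothetical intermediate hop
}

def direct_upgrade_ok(product, src, dst):
    """Return True only if the matrix explicitly allows a direct upgrade."""
    return SUPPORTED_DIRECT.get((product, src, dst), False)

print(direct_upgrade_ok("SRM", "5.8", "6.1"))  # False: plan an intermediate hop
```

Treating "unknown" as unsupported is the safe default here: anything not explicitly validated should be planned as a multi-hop upgrade.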

Back up, back up, back up

The standard consideration is to run backups before every upgrade. A VM snapshot might not be enough in certain situations if you are in different upgrade stages at different sites. You need to carefully plan and synchronise all the different database instances for VMware Site Recovery Manager and vCenter at both sites, and eventually the vSphere Replication databases as well.

I hope this addresses some of the common questions and concerns that might come up when you are thinking of upgrading SRM. Planning and timing are key for a successful upgrade. Many components are interdependent, and you need to consider them carefully to avoid an asynchronous environment with little control over outcomes. Good luck!


Julienne Pham is a Technical Solution Architect for the Professional Services Engineering team. She is specialised in SRM and core storage. Her focus is on VIO and the BCDR space.

Troubleshooting Tips: Orchestrator PowerShell Plug-in

By Spas Kaloferov

Background and General Considerations

In this post we will take a look at some common issues one might experience when using the VMware vRealize Orchestrator (vRO) PowerShell Plug-In, especially when using the HTTPS protocol or Kerberos authentication for the PowerShell Host (PSHost).

Most use cases require that the PowerShell script run with some kind of administrator-level permissions in the target system that vRO integrates with. Here are some of them:

  • Add, modify, or remove DNS records for virtual machines.
  • Register IP address for a virtual machine in an IP management system.
  • Create, modify, or remove a user account mailbox.
  • Execute remote PowerShell commands against multiple Microsoft Windows operating systems in the environment.
  • Run a PowerShell script (.ps1) file from within a PowerShell script file from vRO.
  • Access mapped network drives from vRO.
  • Interact with Windows operating systems that have User Access Control (UAC) enabled.
  • Execute PowerCLI commands.
  • Integrate with Azure.

When you add a PowerShell Host, you must specify a user account. That account will be used to execute all PowerShell scripts from vRO. In most use cases, like the ones above, that account must be an administrator account in the corresponding target system the script interacts with. In most cases, this is a domain-level account.

In order to successfully add the PowerShell Host with that account, and use that account when executing scripts from vRO, some prerequisites need to be met. In addition, the use cases mentioned require the PowerShell Host to be prepared for credential delegation (AKA Credential Security Service Provider [CredSSP], double-hop authentication, or multi-hop authentication).

To satisfy the above use cases when adding a PowerShell Host in vRO, the high-level requirements are:

  • Port: 5986
  • PowerShell remote host type: WinRM
  • Transport protocol: HTTPS (recommended)
  • Authentication: Kerberos
  • User name: <Administrator_user_name>

The low-level requirements are:

  • PSHost: Configure WinRM and user token delegation
  • PSHost: Configure Windows service principal names (SPNs) for WinRM
  • PSHost: Import a CA-signed server certificate containing the Client Authentication and Server Authentication Enhanced Key Usage properties
  • PSHost: Configure Windows Credential Delegation using the Credential Security Service Provider (CredSSP) module
  • vRO: Edit the Kerberos Domain Realm (krb5.conf) on the vCO Appliance (Optional/Scenario specific)
  • vRO: Add the PS Host as HTTPS host with Kerberos authentication
  • vRO: Use the Invoke-Command cmdlet in your PowerShell code

Troubleshooting Issues when Adding a PSHost

To resolve most common issues when adding a PSHost for use with HTTPS transport protocol and Kerberos authentication, follow these steps:

  1. Prepare the Windows PSHost.

For more information on all the configurations needed on the PSHost, visit my blog, “Using CredSSP with the vCO PowerShell Plug-in.”

  2. After preparing the PSHost, test it to make sure it accepts the execution of remote PowerShell commands.

Start by testing simple commands. I like to use the $env:computername PowerShell command that returns the hostname of the PSHost. You can use the winrs command in Windows for the test. Here’s an example of the syntax:

winrs -r:https://lan1dc1.vmware.com:5986 -u:vmware\administrator -p:VMware1! powershell.exe $env:computername

 

Continue by testing a command that requires credential delegation. I like to use a simple command, like dir \\<Server_FQDN>\<sharename>, that accesses a share residing on a computer other than the PSHost itself. Here’s an example of the syntax:

winrs -r:https://lan1dc1.vmware.com:5986 -ad -u:vmware\administrator -p:VMware1! powershell.exe dir \\lan1dm1.vmware.com\share


Note: Make sure to specify the -ad command-line switch.
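When repeating these winrs tests against several hosts, it can help to generate the command line rather than retype it. The sketch below (Python, purely illustrative) reproduces the two example commands above; the host, credentials, and share are the article’s example values:

```python
def winrs_cmd(host, user, password, ps_command, delegate=False):
    """Build a winrs command line; -ad enables credential delegation."""
    parts = ["winrs", f"-r:https://{host}:5986"]
    if delegate:
        parts.append("-ad")  # required for commands that hop to another machine
    parts += [f"-u:{user}", f"-p:{password}", "powershell.exe", ps_command]
    return " ".join(parts)

# The delegation test from the article, rebuilt from its parts
print(winrs_cmd("lan1dc1.vmware.com", r"vmware\administrator", "VMware1!",
                r"dir \\lan1dm1.vmware.com\share", delegate=True))
```

Forgetting the `delegate` flag reproduces the most common mistake: the simple hostname test passes, but the share access fails with an access-denied error.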

  3. Prepare vRO so it can handle Kerberos authentication. You need this in order to use a domain-level account when adding the PSHost.

For more information about the Kerberos configuration on vRO for single domain, visit my blog, “Using CredSSP with the vCO PowerShell Plugin.”

If you are planning to add multiple PSHosts and are using domain-level accounts for each PSHost that are from different domains (e.g., vmware.com and support.vmware.com) you need to take this into consideration when preparing vRO for Kerberos authentication.

For more information about the Kerberos configuration on vRO for multiple domains, visit my blog, “How to add PowerShell hosts from multiple domains with Kerberos authentication to the same vRO.”

If you make a mistake in the configuration, you might see the following error when adding the PSHost:

Cannot locate default realm (Dynamic Script Module name : addPowerShellHost#12)
item: 'Add a PowerShell host/item8', state: 'failed', business state: 'Error', exception: 'InternalError: java.net.ConnectException: Connection refused (Workflow:Import a certificate from URL with certificate alias / Validate (item1)#5)'
workflow: 'Add a PowerShell host'

 

If this is the case, go back and re-validate the configurations.

  4. If the error persists, make sure the krb5.conf file is correctly formatted.

For more information about common formatting mistakes, visit my blog, “Wrong encoding or formatting of Linux configuration files can cause problems in VMware Appliances.”

  5. Make sure you use the following parameters when adding the PSHost:
    • Port: 5986
    • PowerShell remote host type: WinRM
    • Transport protocol: HTTPS (recommended)
    • Authentication: Kerberos
    • User name: <Administrator_user_name>

Note: In order to add the PSHost, the user must be a local administrator on the PSHost.

  6. If you still cannot add the host, make sure your VMware appliance can authenticate successfully using Kerberos against the domains you’ve configured. To do this you can use the ldapsearch command and test Kerberos connectivity to the domain.

Here is an example of the syntax:

vco-a-01:/opt/vmware/bin # ldapsearch -h lan1dc1.vmware.com -D "CN=Administrator,CN=Users,DC=vmware,DC=com" -w VMware1! -b "" -s base "objectclass=*"

  7. If your authentication problems continue, most likely there is a general authentication problem that might not be directly connected to the vRO appliance, such as:
    • A network related issue
    • Blocked firewall ports
    • DNS resolution problems
    • Unresponsive domain controllers

Troubleshooting Issues when Executing Scripts

Once you’ve successfully added the PSHost, it’s time to test PowerShell execution from the vRO.

To resolve the most common issues when executing PowerShell scripts from vRO, follow these steps:

  1. While in vRO, go to the Inventory tab and make sure you don’t see the word “unusable” in front of the PSHost name. If you do, remove the PSHost and add it to vRO again.
  2. Use the Invoke an external script workflow that is shipped with vRO to test PowerShell execution commands. Again, start with a simple command, like $env:computername.

Then, proceed with a command that requires credential delegation. Again, as before, you can use a command like dir \\<Server_FQDN>\<sharename>.

Note: The Invoke an external script workflow doesn’t support credential delegation, so a slight workaround is needed to achieve this functionality. You need to wrap the command you want to execute in an Invoke-Command command.

For more information on how to achieve credential delegation from vRO, visit my blog, “Using CredSSP with the vCO PowerShell Plug-in.”
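That wrapping step can be sketched as a string builder for the PowerShell snippet (expressed here in Python for illustration; the $cred variable and the exact parameter choices are assumptions, and the full CredSSP setup is in the linked blog):

```python
def wrap_for_credssp(computer, inner_command):
    """Wrap a PowerShell command in Invoke-Command so it runs with CredSSP
    credential delegation on the target computer. Illustrative only:
    building and passing the $cred PSCredential is covered separately."""
    return (f"Invoke-Command -ComputerName {computer} "
            f"-Authentication CredSSP -Credential $cred "
            f"-ScriptBlock {{ {inner_command} }}")

print(wrap_for_credssp("lan1dc1.vmware.com", r"dir \\lan1dm1.vmware.com\share"))
```

The inner command (here the share listing from the earlier test) stays unchanged; only the Invoke-Command envelope around it carries the delegated credentials.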

If you try to execute a command that requires credential delegation without using a workaround, you will receive an error similar to the following:

PowerShellInvocationError: Errors found while executing script <script>: Access is denied


SKaloferov_Power Shell Error

  3. Use the SilentlyContinue PowerShell error action preference to suppress output from “noisy” commands. Such commands are those that generate some kind of non-standard output, like:
    • A progress bar showing the progress of the command execution
    • Hashes and other similar content

Finally, avoid using code in your commands or scripts that might generate popup messages, open other windows, or open other graphical user interfaces.


Spas Kaloferov is an acting Solutions Architect member of Professional Services Engineering (PSE) for the Software-Defined Datacenter (SDDC) – a part of the Global Technical & Professional Solutions (GTPS) team. Prior to VMware, Kaloferov focused on cloud computing solutions.

How to Configure HA LDAP Server with the vRO Active Directory Plug-in Using F5 BIG-IP

By Spas Kaloferov

In this post we will demonstrate how to configure a highly available (HA) LDAP server to use with the VMware vRealize Orchestrator Server (vRO) Active Directory Plug-in. We will accomplish this task using F5 BIG-IP, which can also be used to achieve LDAP load balancing.

The Problem

The Configure Active Directory Server workflow, part of the vRO Active Directory Plug-in, allows you to configure a single Active Directory (AD) host via IP or URL. For example:

SKaloferov_Configure Active Directory

Q: What if we want to connect to multiple AD domain controller (DC) servers to achieve high availability?
A: One way is to create additional DNS records for those servers with the same name, and use that name when running the workflow to add the AD server. DNS will return any of the given AD servers, based on round robin.
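A minimal sketch of why this matters: plain DNS round robin cycles through the records with no notion of health, so a domain controller that is down is still handed out (the hostnames below are hypothetical):

```python
import itertools

# Hypothetical DC records behind a single DNS name; round robin ignores health.
dc_records = ["dc1.example.com", "dc2.example.com"]
rotation = itertools.cycle(dc_records)

down = {"dc2.example.com"}  # simulate dc2 being unreachable
returned = [next(rotation) for _ in range(4)]
print(returned)
print(any(host in down for host in returned))  # True: the dead DC is still served
```

This is exactly the gap the F5 BIG-IP health monitor closes in the next section: unhealthy pool members are taken out of rotation instead of being returned to vRO.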

Q: Will this prevent me from hitting a DC server that is down or unreachable?
A: No, health checks are not performed to determine if a server is down.

Q: How can I implement a health checking mechanism to determine if a given active directory domain controller server is down, so that this is not returned to vRO?
A: By using F5 BIG-IP Virtual Server configured for LDAP request.

Q: How can I configure that in F5?
A: This is covered in the next chapter.

The Solution

We can configure an F5 BIG-IP device to listen for and satisfy LDAP requests in the same way we configured it for vIDM in an earlier post.

To learn more on how to configure F5 BIG-IP Virtual Server to listen for and satisfy LDAP requests, visit the “How to set vIDM (SSO) LDAP Site-Affinity for vRA“ blog, and read the Method 2: Using F5 BIG-IP chapter.

In this case we will use the same F5 BIG-IP Virtual Server (VS) we created for the vIDM server:

  1. Log in to vRO and navigate to the Workflows tab.
  2. Navigate to Library > Microsoft > Active Directory > Configuration and start the Configure Active Directory Server workflow.
  3. In the Active Directory Host IP/URL field provide the FQDN of the VS you created.
  4. Fill in the rest of the input parameters as per your AD requirements.
  5. Click Submit.

SKaloferov_Active Directory Server

Go to the Inventory tab; you should see that the LDAP server has been added, and you should be able to expand and explore the inventory objects coming from that plug-in.

SKaloferov_LDAP

Now, in my case, I have two LDAP servers lying behind the virtual server.

SKaloferov_F5 Standalone

I will shut the first one down and see if vRO will continue to work as expected.

SKaloferov_F5 Standalone Network Map

Right-click the LDAP server and select Reload.

SKaloferov_LDAP Reload

Expand again and explore the LDAP server inventory. Since there is still one LDAP server that can satisfy requests, it should work.

Now let’s check to see what happens if we simulate a failure of all the LDAP servers.

SKaloferov_LDAP Pool

Right-click the LDAP server and select Reload.

You should see an error because there are no LDAP servers available to satisfy queries.

SKaloferov_Plugin Error

Additional resources

My dear friend Oliver Leach wrote a blog post on a similar/related topic. Make sure to check it out at: “vRealize Orchestrator – connecting to more than one domain using the Active Directory plugin.”


Spas Kaloferov is an acting Solutions Architect member of Professional Services Engineering (PSE) for the Software-Defined Datacenter (SDDC) – a part of the Global Technical & Professional Solutions (GTPS) team. Prior to VMware, Kaloferov focused on cloud computing solutions.

EUC Design Series: Horizon 7 Strategy for Desktop Evolution to IoT Revolution

By TJ Vatsa

Introduction

Mobility and end-user computing (EUC) are evolving at a very rapid pace. With the recent announcements made by VMware around Horizon 7, it becomes all the more important to recalibrate and remap the emerging innovation trends to your existing enterprise EUC and application rationalization strategies. For business and IT leaders, burning questions emerge:

  • “What are these EUC innovations leading to, and why should it matter to my organization?”
  • “What is the end-user desktop in the EUC realm evolving into, and are these innovations a precursor to an IoT (Internet of Things) revolution?”
  • “What outcomes might we expect if we were to adopt these innovations in our organizations?”
  • “How do we need to restructure our existing EUC/mobility team to fully leverage the mobility evolution?”

Now there are enough questions to get your creative juices flowing! Let’s dive right in.

The What

Desktop virtualization revolutionized how end-user desktops with their applications and data were securely managed within the guard rails of a secure data center. These were essentially Generation1 (Gen1) desktops that were persistent (AKA full clone) desktops within a virtual machine (VM) container. While the benefit was mainly secure encapsulation within a data center, the downside was cumbersome provisioning with a bloated storage footprint. For instance, if you had one persistent desktop with a 50 GB base image and 100 users, you would be looking at 5,000 GB—or 5 TB—of storage. In an enterprise where we have thousands of users with unique operating system and application requirements, the infrastructure capital expenditures (CAPEX) and the associated operational expenditures (OPEX) would be through the roof.

The preceding scenario was solved by the Generation2 (Gen2) virtual desktops, which were classified as non-persistent (AKA linked clone) desktops. Gen2 desktops relied on a parent base image (AKA a replica); the resulting linked clones referenced this replica for all read operations and had delta disks to store any individual writes. These desktops benefited from faster provisioning automation using a Composer server, which generated linked clones referencing a base replica image. This resulted in a significant reduction in the storage footprint and faster desktop provisioning times, which in turn reduced the CAPEX and OPEX levels incurred with Gen1 desktops. However, the downside of desktop boot-up times was still not fully resolved, because boot-up depends on the storage media being used: boot-up times were faster with flash storage and comparatively slower with spinning media. The OPEX associated with application management was also not fully resolved, despite application virtualization technologies offered by various vendors; it still required management of multiple patches for desktop images and applications.
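The storage arithmetic behind the Gen1/Gen2 comparison can be sketched as follows; the 50 GB image and 100 users are the article’s example, while the 2 GB per-desktop delta is a hypothetical figure:

```python
def full_clone_gb(base_image_gb, users):
    """Gen1: every persistent desktop carries a full copy of the base image."""
    return base_image_gb * users

def linked_clone_gb(base_image_gb, users, delta_gb):
    """Gen2: one replica serves all reads; each desktop stores only its delta disk."""
    return base_image_gb + users * delta_gb

print(full_clone_gb(50, 100))       # 5000 GB, i.e., the 5 TB from the article
print(linked_clone_gb(50, 100, 2))  # 250 GB with a hypothetical 2 GB delta
```

Even with a generous delta-disk allowance, the linked-clone footprint is a small fraction of the full-clone footprint, which is the CAPEX reduction the text describes.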

The panacea offered by the new Horizon 7 has accelerated the virtual desktop evolution to Generation3 (Gen3) desktops. Evolution to Gen3 results in just-in-time desktops and application stack delivery. This means you only have to patch the desktop once, clone it with its running state, and dynamically attach the application stack using VMware’s App Volumes. Gen3 virtual desktops from VMware have the benefits of Gen2 desktops, but without the operational overhead, resulting in reduced CAPEX and OPEX. Here is an infographic detailing the evolution:

TVatsa_Clone Desktop VM

Gen3 desktops pave the way for a Generation4+ (Gen4+) mobility platform that combines VMware’s Enterprise Mobility Management (EMM) platform and the EUC platform into Workspace ONE, capable of tapping into all of the possibilities of mobility-enabled IoT solutions. The potential of these solutions can be tapped across various vertical industries (healthcare, financial, retail, education, manufacturing, government, and consumer packaged goods), creating an IoT revolution in the days to come.

The Why

The innovations listed in the preceding section have the potential of transforming an enterprise’s business, IT and financial outcomes. The metrics to quantify these outcomes are best measured in the resulting CAPEX and OPEX reductions. The reduction in these expenditures not only fosters business agility as in accelerated M&A, but also enhances an organization’s workforce efficiency. The proof is in the pudding. Here is a sample snapshot of the outcomes from a healthcare customer:

TVatsa_Healthcare Customer Diagram

The How

While the mobility evolution and its leap to an IoT revolution is imminent with the promise of anticipated outcomes as mentioned earlier, the question still lingers: How do you align the roles within your organization to ride the wave of mobility transformation?

Here is a sample representation of the recommended roles for an enterprise mobility center of excellence (COE):

TVatsa_COE

Here is the description of field recommendations in terms of mandatory and recommended roles for an enterprise EUC/mobility transformation:

TVatsa_Proposed Org Roles

Conclusion

Given the rate at which enterprise mobility is evolving towards IoT, it is only a matter of time before every facet of our lives, from our work to home environments, is fully transformed by this tectonic, mobility-driven IoT transformation. VMware’s mobility product portfolio, in combination with VMware’s experienced Professional Services Organization (PSO), can help you transform your enterprise onward in this revolutionary journey. VMware is ever-ready to be your trusted partner in this “DARE” endeavor. Until next time, go VMware!


TJ Vatsa is a principal architect and member of CTO Ambassadors at VMware representing the Professional Services organization. He has worked at VMware for more than five years and has more than 20 years of experience in the IT industry. During this time he has focused on enterprise architecture and applied his extensive experience in professional services and R&D to cloud computing, VDI infrastructure, SOA architecture planning and implementation, functional/solution architecture, enterprise data services and technical project management.

Hybrid Cloud and Hybrid Cloud Manager

Michael_FrancisBy Michael Francis

Disclaimer: This blog is not a technical deep dive on Hybrid Cloud Manager; it talks to the components of the product and the design decisions around the product. It assumes the reader has knowledge of the product and its architecture.

Recently, I have been involved with the design and deployment of Hybrid Cloud Manager for some customers. It has been a very interesting exercise to work through the design and the broader implications.

Let’s start with a brief overview of Hybrid Cloud Manager. Hybrid Cloud Manager is composed of a set of virtual appliances that reside both on-premises and in vCloud Air. The product is divided into a management plane, control plane, and data plane.

  • The management plane is instantiated by a plugin in the vSphere Web Client.
  • The control plane is instantiated by the Hybrid Cloud Manager virtual appliance.
  • The data plane is instantiated by a number of virtual appliances – the Cloud Gateway, the Layer 2 Concentrator, and the WAN Optimization appliance.

The diagram below illustrates these components and their relationships to each other on-premises and the components in vCloud Air.

MFrancis_Logical Architecture Hybrid Cloud Manager

Figure 1 – Logical Architecture Hybrid Cloud Manager

The Hybrid Cloud Manager provides virtual machine migration capability, which is built on two functions: virtual machine replication[1] and Layer 2 network extension. The combination of these functions provides an organization with the ability to migrate workloads without the logistical and technical issues traditionally associated with migrations to a public cloud; specifically, the outage time to copy on-premises virtual machines to a public cloud, and virtual machine re-addressing.

During a recent engagement that involved the use of Hybrid Cloud Manager, it became very obvious that even though this functionality simplifies the migration, it does not diminish the importance of the planning and design effort prior to any migration exercises. Let me explain.

Importance of Plan and Design

When discussing a plan, I am really discussing the importance of a discovery exercise that deeply analyses on-premises virtual workloads. This is critical: because Hybrid Cloud Manager creates such a seamless extension of the on-premises environment, we need to understand:

  • Which workloads will be migrated
  • Which networks the workloads reside on
  • What compute isolation requirements exist
  • How and where network access control is instantiated on-premises

Modification of a virtual network topology in the public cloud can be a disruptive operation, just as it is in the data center. Introducing the ability to stretch Layer 2 network segments into the public cloud, and migrating out of a data center into the public cloud, increases the number of networks and the complexity of their topology in the public cloud. The more planning that can be done early, the less likely it is that disruptive changes to services will be needed later.

One of the constraints in the solution revolves around stretching Layer 2 network segments: an on-premises Layer 2 network segment can be ‘stretched’ to only one virtual data center in vCloud Air. So which workloads exist on a given network segment has implications for which vCloud Air virtual data center will host them. This obviously influences the creation of virtual data centers in vCloud Air, and the principles defined in the design that govern when additional virtual data centers are stood up, compared with growing an existing virtual data center.

Ideally, an assessment of on-premises workloads would be performed prior to any hybrid cloud design effort. This assessment would be used to size the subsequent vCloud Air virtual data centers; it would also discover the workload resource isolation requirements that drive workload separation into multiple virtual data centers. For instance, a requirement to separate test/development workloads from production workloads with a ‘hard’ partition would drive the virtual data center design.

During this discovery we would also identify which workloads reside on which networks, and which networks require ‘stretching’ into vCloud Air. This would surface any issues we may face due to the constraint that we can only stretch a Layer 2 segment into one virtual data center.[2] This assessment really forms the ‘planning’ effort in this discussion.
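To make the one-segment-to-one-virtual-data-center constraint concrete, here is a minimal, hypothetical sketch in Python (the workload names, segment names, and vDC names are all invented) that flags any network segment whose workloads are planned for more than one target virtual data center:

```python
from collections import defaultdict

# Hypothetical discovery data: each workload with its network segment
# and the vCloud Air virtual data center (vDC) planned to host it.
workloads = [
    {"name": "app01",  "segment": "VLAN-100", "target_vdc": "vdc-prod"},
    {"name": "app02",  "segment": "VLAN-100", "target_vdc": "vdc-prod"},
    {"name": "test01", "segment": "VLAN-200", "target_vdc": "vdc-test"},
    {"name": "app03",  "segment": "VLAN-200", "target_vdc": "vdc-prod"},
]

# Group the planned target vDCs by network segment.
segment_vdcs = defaultdict(set)
for wl in workloads:
    segment_vdcs[wl["segment"]].add(wl["target_vdc"])

# A stretched Layer 2 segment can extend to only ONE virtual data center,
# so any segment mapped to more than one vDC is a planning conflict.
conflicts = {seg: vdcs for seg, vdcs in segment_vdcs.items() if len(vdcs) > 1}
for seg, vdcs in sorted(conflicts.items()):
    print(f"Conflict: {seg} would need stretching to {sorted(vdcs)}")
```

In this invented data set, VLAN-200 surfaces as a conflict: either its workloads must converge on one virtual data center, or the segment design must change before migration.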

Design Effort

The design effort involves designs for both vCloud Air and Hybrid Cloud Manager. I believe the network design of vCloud Air is a critical element. We need to determine:

  • Whether to use dynamic or static routing
  • Subnet design and its relationship to route summarization
  • Routing paths to the Internet
  • Estimated throughput required for any virtual routing devices
  • Other virtual network services
  • Whether to use the egress optimization functionality of Hybrid Cloud Manager
  • And finally, where security network access points are required
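As a small illustration of the subnet design and route summarization point above, Python’s standard ipaddress module can check whether a set of planned subnets collapses cleanly into a single summary route (the addressing below is made up for the example):

```python
import ipaddress

# Hypothetical subnets planned for workloads in one vCloud Air virtual
# data center (made-up addressing -- substitute your own plan).
subnets = [
    ipaddress.ip_network("10.10.0.0/24"),
    ipaddress.ip_network("10.10.1.0/24"),
    ipaddress.ip_network("10.10.2.0/24"),
    ipaddress.ip_network("10.10.3.0/24"),
]

# Contiguous, aligned subnets collapse into a single summary route that
# can be advertised from the virtual data center; a fragmented plan
# would yield several routes instead.
summary = list(ipaddress.collapse_addresses(subnets))
print(summary)  # -> [IPv4Network('10.10.0.0/22')]
```

If the collapsed result is more than one network, the subnet plan will not summarize into a single advertisement, which is worth knowing before routing is configured.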

The other aspect is the design of the virtual compute containers, such as virtual data centers in vCloud Air. The design for vCloud Air should define the expected virtual data center design over the lifecycle of the solution: the compute resources assigned to each virtual data center initially, and over the lifecycle as anticipated growth is factored in. As use grows, the throughput requirements on the networking components in vCloud Air will increase, so the design should articulate guidance on when the virtual routing devices will need to be increased in size.

The vCloud Air platform is an extension of the on-premises infrastructure. It is a fundamental expectation that operations teams have visibility into the health of the infrastructure, and that capacity planning of infrastructure is taking place. Similarly, there is a requirement to ensure that the vCloud Air platform and associated services are healthy and capacity managed. We should be able to answer the question, “Are my virtual data center routing devices of the right size, and is their throughput sufficient for the needs of the workloads hosted in vCloud Air?” Ideally we should have a management platform that treats vCloud Air as an extension to our on-premises infrastructure.

This topic could go much deeper, and there are many other considerations as well, such as, “Should I place some management components in vCloud Air,” or, “Should I have a virtual data center in vCloud Air specifically assigned to host these management components?”

I believe today many people take an Agile approach to their deployment of public cloud services, such as networking and virtual compute containers. But if you are implementing a hybrid interface such as that offered by Hybrid Cloud Manager, there is real benefit in taking a longer-term view of the design of vCloud Air services, to minimise the risk of painting ourselves into a corner in the future.

Some Thoughts on Hybrid Cloud Manager Best Practices

Before wrapping up this blog, I wanted to provide some thoughts on some of the design decisions regarding Hybrid Cloud Manager.

In a recent engagement we considered best practices for placement of appliances, and we came up with the following design decisions.

MFrancis_Design Decision 1

MFrancis_Design Decision 2

MFrancis_Design Decision 3

Key Takeaways

The following are the key takeaways from this discussion:

  • As Hybrid Cloud Manager provides a much more seamless extension of the on-premises data center, deeper thought and consideration needs to be put into the design of the vCloud Air public cloud services.
  • To effectively design vCloud Air services for Hybrid Cloud requires a deep understanding of the on-premises workloads, and how they will leverage the hybrid cloud extension.
  • Network design, and the operational changes involved in ongoing network access control, need to be considered.
  • Management and monitoring of the vCloud Air services, as an extension of the data center, need to be included in the scope of a Hybrid Cloud solution.

[1] Leverages the underlying functionality of vSphere Replication; but isn’t a full vSphere Replication architecture.

[2] This constraint could be overcome; however, the solution would require configurations that would make other elements of the design sub-optimal; for example, disabling the use of egress optimization.


Michael Francis is a Principal Systems Engineer at VMware, based in Brisbane.

Configuring NSX SSL VPN-Plus

Spas_KaloferovBy Spas Kaloferov

One of the worst things you can do is buy a great product like VMware NSX and not use its vast number of functionalities. If you are one of those people and want to “do better”, then this article is for you. We will take a look at how to configure the SSL VPN-Plus functionality in VMware NSX. With SSL VPN-Plus, remote users can connect securely to private networks behind an NSX Edge gateway, and thereby access servers and applications in those private networks.

Consider a software development company that has made a design decision to extend its existing network infrastructure and allow remote users access to some segments of its internal network. To accomplish this, the company will utilize its existing VMware NSX network infrastructure platform to create a Virtual Private Network (VPN).

The company has identified the following requirements for their VPN implementation:

  • The VPN solution should utilize an SSL certificate for communication encryption and be usable with a standard Web browser.
  • The VPN solution should use Windows Active Directory (AD) as the identity source to authenticate users.
  • Only users within a given AD organizational unit (OU) should be granted access to the VPN.
  • Users should authenticate to the VPN with their User Principal Names (UPNs).
  • Only users whose accounts have specific characteristics, such as an Employee ID associated with the account, should be able to authenticate to the VPN.

If you have followed one of my previous articles Managing VMware NSX Edge and Manager Certificates, you have already made the first step towards configuring SSL VPN-Plus.

Configuring SSL VPN-Plus is a straightforward process, but fine-tuning its configuration to meet your needs can sometimes be a bit tricky, especially when configuring Active Directory for authentication. We will look into a couple of examples of how to use the Login Attribute Name and Search Filter parameters to filter, at a fine grain, the users who should be granted VPN access.
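As an illustration (the OU, domain, and attribute values below are hypothetical; adjust them to your environment), the requirements listed earlier could translate into roughly the following Authentication Server settings: userPrincipalName as the Login Attribute Name so users sign in with their UPNs, a Search Base scoped to the permitted OU, and a Search Filter that admits only user accounts with an employeeID populated:

```
Login Attribute Name : userPrincipalName
Search Base          : OU=VPNUsers,DC=corp,DC=example,DC=com
Search Filter        : (&(objectClass=user)(employeeID=*))
```

In standard LDAP filter syntax, the `&` ANDs the clauses together, and `employeeID=*` matches only accounts where that attribute is present, which satisfies the Employee ID requirement.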

Edit Authentication Server tab on VMware NSX Edge:

SKaloferov Edit Authentication Server

Please visit Configuring NSX SSL VPN-Plus to learn more about the configuration.


Spas Kaloferov is an acting Solutions Architect member of Professional Services Engineering (PSE) for the Software-Defined Datacenter (SDDC) – a part of the Global Technical & Professional Solutions (GTPS) team. Prior to VMware, Kaloferov focused on cloud computing solutions.

“Network Functions Virtualization (NFV) for Dummies” Blog Series – Part 2

Common Use Cases for the Transition to NFV

Gary HamiltonBy Gary Hamilton

In my previous blog posting I discussed the question – What is NFV? This blog post looks at the network functions that will be delivered as virtual network functions (VNFs), instead of as hardware appliances. SDx Central offers a nice article that helps with some of these definitions.

In very simple terms (remember, this is a blog series for IT people, not network experts), a network function is a network capability that provides application and service support, and can be delivered as a composite whole, like an application. SDx Central does a good job in the aforementioned article of grouping these network functions into umbrella, or macro, use cases. By leveraging these macro use cases, I can provide a layman’s description of what they attempt to achieve, and the service support they deliver. The macro use cases I will focus on are “virtual customer edge” and “virtual core and aggregation”, because these are the two generally being tackled first from an NFV perspective.

Use Case – Connecting a remote office (using vCPE)

In layman terms, the SDx Central “customer edge” use case focuses on how to connect a remote office, or remote branch, to a central data centre network, and extending the central data centre’s network services into that remote office. In order to deliver this connectivity, a CPE (customer premises equipment) device is used. Generally, the types of device used would be a router, switch, gateway, firewall, etc., (these are all CPEs) providing anything from Layer 2 QoS (quality of service) services to Layer 7 intrusion detection. The Layer 2 and Layer 7 references are from the OSI Model. vCPE (virtual customer premises equipment) is the virtual CPE, delivered using the NFV paradigm.

The following diagram was taken from the ETSI Use Case document (GS NFV 001 v1.1.1 2013-10), which refers to the vCPE device as vE-CPE. (I’ll discuss the significance of ETSI in a future blog post.)

This diagram illustrates how vCPEs are used to connect the remote branch offices to the central data centre. It also illustrates that it is OK to mix non-virtualised CPEs with vCPEs in an infrastructure. Just as in the enterprise IT cloud world, the data and applications leveraging the virtual services are not aware – and do not care – whether these services are virtual or physical. The only thing that matters is whether the non-functional requirements (NFRs) of the application are effectively met. Those NFRs include requirements like performance and availability.

This particular use case has two forms, or variants –

  • Remote vCPE (or customer-premise deployed)
  • Centralised vCPE (deployed within the data centre)

The diagram below shows examples of both variants, where vCPEs are deployed in branch offices, as well as centrally. The nature of the application being supported, and its NFRs, would normally dictate placement requirements. A satellite/cable TV set-top box is a consumer example of a “customer-premise deployed” CPE.

GHamilton Customer Premise Deployed CPE

Use Case – Virtualising the mobile core network (using vIMS)

The SDx Central “Virtual core and aggregation” use cases are focused on the mobile core network (Evolved Packet Core – EPC) and IP Multimedia Subsystem (IMS). In layman terms, this is about the transportation of packets across a mobile operator’s network. This is focused on mobile telephony.

IMS is an architectural network framework for the delivery of telecommunications services using IP (internet protocol). When IMS was conceived in the 1990s by the 3rd Generation Partnership Project (3GPP), it was intended to provide an easy way for the worldwide deployment of telecoms networks that would interface with the existing public switched telephone network (PSTN), thereby providing flexibility, expandability, and the easy on-boarding of new services from any vendor. It was also hoped that IMS would provide a standard for the delivery of voice and multimedia services. This vision has fallen short in reality.

IMS is a standalone system, designed to act as a service layer for applications. Inherent in its design, IMS provides an abstraction layer between the application and the underlying transport layer, as shown in the following diagram of the 3GPP/TISPAN IMS architecture overview.

An example of an application based on IMS is VoLTE, which stands for “Voice over LTE” and carries voice calls over the 4G LTE wireless network. (Over-the-top services such as Skype and Apple’s FaceTime, by contrast, deliver voice and video over IP without using the operator’s IMS.)

GHamilton 3GPP and TISPAN

Use Case – Virtualising the mobile core network (using vEPC)

While IMS is about supporting applications by providing application server functions, like session management and media control, EPC is about the core network, transporting voice, data and SMS as packets.

EPC (Evolved Packet Core) is another initiative from 3GPP for the evolution of the core network architecture for LTE (Long-Term Evolution – 4G). The 3GPP website provides a very good explanation of its evolution, and description of LTE here.

In summary, EPC is a packet-only network for data, voice and SMS, using IP. The following diagram shows the evolution of the core network, and the supporting services.

  • GSM (2G) relied on circuit-switching networks (the aforementioned PSTN)
  • GPRS and UMTS (3G) are based on a dual-domain network concept, where:
    • Voice and SMS still utilise a circuit-switching network
    • But data uses a packet-switched network
  • EPS (4G) is fully dependent on a packet-switching network, using IP.

GHamilton Voice SMS Data

Within an EPS service, EPC provides the gateway services and user management functions as shown in the following diagram. In this simple architecture:

  • A mobile phone or tablet (user equipment – UE) is connected to an EPC over an LTE network (a radio network) via an eNodeB (a radio tower base station).
  • A Serving GW transports IP data traffic (the user plane) between the UE and external network.
  • The PDN GW is the interface between the EPC and external network, for example, the Internet and/or an IMS network, and allocates IP addresses to the UEs. PDN stands for Packet Data Network.
    • In a VoLTE architecture, a PCRF (Policy and Charging Rule Function) component works with the PDN GW, providing real-time authorisation of users, and setting up the sessions in the IMS network.
  • The HSS (Home Subscriber Server) is a database with user-related and subscriber-related information. It supports user authentication and authorisation, as well as call and session setup.
  • The MME (Mobility Management Entity) is responsible for mobility and security management on the control plane. It is also responsible for the tracking of the UE in idle-mode.

GHamilton EPC E-UTRAN

In summary, EPC, IMS and CPE are all network functions that deliver key capabilities that we take for granted in our world today. EPC and IMS support the mobile services that have become a part of our daily lives, and frankly, we probably would not know what to do without them. CPE supports the network interconnectivity that is a part of the modern business world. These are all delivered using very specialised hardware appliances. The NFV movement is focused on delivering these services using software running on virtual machines, running on a cloud, instead of using hardware appliances.

There are huge benefits to this movement.

  • It will be far less expensive to utilise shared, commodity infrastructure for all services, versus expensive, specialised appliances that cannot be shared.
  • Operational costs are far less expensive because the skills to support the infrastructure are readily available in the market.
  • It costs far less to bring a new service on-board, because it entails deploying some software in VMs, versus the acquisition and deployment of specialised hardware appliances.
  • It costs far less to fail. If a new service does not attract the expected clientele, the R&D and deployment costs of that service will be far less with NFV than in the traditional model.
  • It will be significantly faster to bring new services to market. Writing and testing new software is much faster than building and testing new hardware.
  • The cost of new virtual network functions (VNFs) will be lower, because the barrier to entry is far lower: it is now about developing software rather than building a hardware appliance. We see evidence of this already, with many new players entering the Network Equipment Provider (NEP) market (the suppliers of the VNFs), creating more competition and driving down prices.

All sounds great. But, we have to be honest, there are serious questions to be answered/addressed –

  • Can the VNFs deliver the same level of service as the hardware appliances?
  • Can the Telco operators successfully transform their current operating models to support this new NFV paradigm?
  • Can a cloud meet the non-functional requirements (NFRs) of the VNFs?
  • Are the tools within the cloud fit for purpose for Telco grade workloads and services?
  • Are there enough standards to support the NFV movement?

All great questions that I will try to answer in future blogs. The European Telecommunications Standard Institute (ETSI), an independent, not-for-profit organisation that develops standards via consensus of their members, has been working on the answers to some of these questions. Others are being addressed by cloud vendors, like VMware.


Gary Hamilton is a Senior Cloud Management Solutions Architect at VMware and has worked in various IT industry roles since 1985, including support, services and solution architecture; spanning hardware, networking and software. Additionally, Gary is ITIL Service Manager certified and a published author. Before joining VMware, he worked for IBM for over 15 years, spending most of his time in the service management arena, with the last five years being fully immersed in cloud technology. He has designed cloud solutions across Europe, the Middle East and the US, and has led the implementation of first of a kind (FOAK) solutions. Follow Gary on Twitter @hamilgar.