
Monthly Archives: February 2016

Configuring NSX SSL VPN-Plus

By Spas Kaloferov

One of the worst things you can do is buy a great product like VMware NSX and leave its vast functionality unused. If you are one of those people and want to do better, then this article is for you. We will take a look at how to configure the SSL VPN-Plus functionality in VMware NSX. With SSL VPN-Plus, remote users can connect securely to private networks behind an NSX Edge gateway and access the servers and applications in those networks.

Consider a software development company that has made a design decision to extend its existing network infrastructure and allow remote users access to some segments of its internal network. To accomplish this, the company will use its existing VMware NSX platform to create a Virtual Private Network (VPN).

The company has identified the following requirements for their VPN implementation:

  • The VPN solution should use an SSL certificate to encrypt communication and work with a standard Web browser.
  • The VPN solution should use Windows Active Directory (AD) as the identity source to authenticate users.
  • Only users within a given AD organizational unit (OU) should be granted access to the VPN.
  • Users should authenticate to the VPN with their User Principal Names (UPNs).
  • Only users whose accounts have specific characteristics, such as an Employee ID associated with the account, should be able to authenticate to the VPN.

If you have followed one of my previous articles, Managing VMware NSX Edge and Manager Certificates, you have already taken the first step towards configuring SSL VPN-Plus.

Configuring SSL VPN-Plus is a straightforward process, but fine-tuning its configuration to meet your needs can sometimes be a bit tricky, especially when configuring Active Directory for authentication. We will look at a couple of examples of how to use the Login Attribute Name and Search Filter parameters to filter, at a fine-grained level, the users who should be granted VPN access.
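
Before diving into the UI, it may help to see how those two parameters map onto a standard LDAP query. Below is a minimal sketch using the third-party Python ldap3 library, with a hypothetical domain controller, service account, and OU. It illustrates the LDAP semantics the requirements above imply (OU scoping via the search base, UPN as the login attribute, and an employeeID presence test in the filter), not the NSX API itself.

    # Minimal sketch, assuming the third-party "ldap3" library and hypothetical
    # AD names. The search base scopes VPN access to one OU, userPrincipalName
    # plays the role of the Login Attribute Name, and the filter only admits
    # user objects that carry an employeeID attribute.
    from ldap3 import Server, Connection, SUBTREE

    server = Server("dc01.corp.example.com")               # hypothetical DC
    conn = Connection(server, user="CORP\\svc-vpn",        # hypothetical account
                      password="secret", auto_bind=True)

    search_base = "OU=VPNUsers,DC=corp,DC=example,DC=com"  # the permitted OU
    search_filter = ("(&(objectClass=user)(employeeID=*)"
                     "(userPrincipalName=jdoe@corp.example.com))")

    conn.search(search_base, search_filter, search_scope=SUBTREE,
                attributes=["userPrincipalName", "employeeID"])
    print(conn.entries)  # an empty result means this user would be denied

In the NSX Edge UI, the equivalent configuration would be a search base pointing at the permitted OU, userPrincipalName as the Login Attribute Name, and (&(objectClass=user)(employeeID=*)) as the Search Filter; the user's UPN is then supplied at login time.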

Edit Authentication Server tab on VMware NSX Edge:


Please visit Configuring NSX SSL VPN-Plus to learn more about the configuration.


Spas Kaloferov is an acting Solutions Architect in Professional Services Engineering (PSE) for the Software-Defined Datacenter (SDDC), part of the Global Technical & Professional Solutions (GTPS) team. Prior to VMware, Kaloferov focused on cloud computing solutions.

“Network Functions Virtualization (NFV) for Dummies” Blog Series – Part 2

Common Use Cases for the Transition to NFV

By Gary Hamilton

In my previous blog post I discussed the question: What is NFV? This post looks at the network functions that will be delivered as virtual network functions (VNFs) instead of as hardware appliances. SDx Central offers a nice article that helps with some of these definitions.

In very simple terms (remember, this is a blog series for IT people, not network experts), a network function is a network capability that provides application and service support and can be delivered as a composite whole, like an application. The aforementioned SDx Central article does a good job of grouping these network functions into umbrella, or macro, use cases. Leveraging those macro use cases, I can give a layman's description of what each one is attempting to achieve and the service support it delivers. I will focus on "virtual customer edge" and "virtual core and aggregation", because these are the two use cases generally being tackled first from an NFV perspective.

Use Case – Connecting a remote office (using vCPE)

In layman's terms, the SDx Central "customer edge" use case focuses on connecting a remote office, or remote branch, to a central data centre network and extending the central data centre's network services into that remote office. To deliver this connectivity, a CPE (customer premises equipment) device is used. The types of device used would generally be routers, switches, gateways, firewalls, and so on (these are all CPEs), providing anything from Layer 2 QoS (quality of service) services to Layer 7 intrusion detection; the Layer 2 and Layer 7 references are from the OSI model. vCPE (virtual customer premises equipment) is the virtual CPE, delivered using the NFV paradigm.

The following diagram is taken from the ETSI use case document (GS NFV 001 v1.1.1, 2013-10), which refers to the vCPE device as vE-CPE. (I'll discuss the significance of ETSI in a future blog post.)

This diagram illustrates how vCPEs are used to connect the remote branch offices to the central data centre. It also illustrates that it is OK to mix non-virtualised CPEs with vCPEs in an infrastructure. Just as in the enterprise IT cloud world, the data and applications leveraging the virtual services are not aware – and do not care – whether these services are virtual or physical. The only thing that matters is whether the non-functional requirements (NFRs) of the application are effectively met. Those NFRs include requirements like performance and availability.

This particular use case has two forms, or variants:

  • Remote vCPE (or customer-premise deployed)
  • Centralised vCPE (deployed within the data centre)

The diagram below shows examples of both variants, where vCPEs are deployed in branch offices, as well as centrally. The nature of the application being supported, and its NFRs, would normally dictate placement requirements. A satellite/cable TV set-top box is a consumer example of a “customer-premise deployed” CPE.

[Figure: Remote (customer-premise deployed) and centralised vCPE variants]

Use Case – Virtualising the mobile core network (using vIMS)

The SDx Central "virtual core and aggregation" use cases focus on the mobile core network (Evolved Packet Core, or EPC) and the IP Multimedia Subsystem (IMS). In layman's terms, this is about the transportation of packets across a mobile operator's network; the focus is mobile telephony.

IMS is an architectural network framework for the delivery of telecommunications services using IP (internet protocol). When IMS was conceived in the 1990s by the 3rd Generation Partnership Project (3GPP), it was intended to provide an easy way for the worldwide deployment of telecoms networks that would interface with the existing public switched telephone network (PSTN), thereby providing flexibility, expandability, and the easy on-boarding of new services from any vendor. It was also hoped that IMS would provide a standard for the delivery of voice and multimedia services. This vision has fallen short in reality.

IMS is a standalone system, designed to act as a service layer for applications. Inherent in its design, IMS provides an abstraction layer between the application and the underlying transport layer, as shown in the following diagram of the 3GPP/TISPAN IMS architecture overview.

An example of an application based on IMS is VoLTE, "Voice over LTE", which delivers carrier voice calls over the 4G LTE packet network. (Over-the-top services such as Skype and Apple's FaceTime, by contrast, deliver voice over IP without using the operator's IMS.)

[Figure: 3GPP/TISPAN IMS architecture overview]

Use Case – Virtualising the mobile core network (using vEPC)

While IMS is about supporting applications by providing application server functions, like session management and media control, EPC is about the core network, transporting voice, data and SMS as packets.

EPC (Evolved Packet Core) is another 3GPP initiative, for the evolution of the core network architecture for LTE (Long-Term Evolution, 4G). The 3GPP website provides a very good explanation of its evolution and a description of LTE here.

In summary, EPC is a packet-only network for data, voice and SMS, using IP. The following diagram shows the evolution of the core network, and the supporting services.

  • GSM (2G) relied on circuit-switching networks (the aforementioned PSTN)
  • GPRS and UMTS (3G) are based on a dual-domain network concept, where:
    • Voice and SMS still utilise a circuit-switching network
    • But data uses a packet-switched network
  • EPS (4G) is fully dependent on a packet-switching network, using IP.

[Figure: Voice, SMS and data across the 2G, 3G and 4G core networks]

Within an EPS service, EPC provides the gateway services and user management functions as shown in the following diagram. In this simple architecture:

  • A mobile phone or tablet (user equipment – UE) is connected to an EPC over an LTE network (a radio network) via an eNodeB (a radio tower base station).
  • A Serving GW transports IP data traffic (the user plane) between the UE and external network.
  • The PDN GW is the interface between the EPC and external networks, for example the Internet and/or an IMS network, and allocates IP addresses to the UEs. PDN stands for Packet Data Network.
    • In a VoLTE architecture, a PCRF (Policy and Charging Rule Function) component works with the PDN GW, providing real-time authorisation of users, and setting up the sessions in the IMS network.
  • The HSS (Home Subscriber Server) is a database with user-related and subscriber-related information. It supports user authentication and authorisation, as well as call and session setup.
  • The MME (Mobility Management Entity) is responsible for mobility and security management on the control plane. It is also responsible for tracking the UE in idle mode. (A short code sketch after the diagram below recaps these roles.)

[Figure: EPC and E-UTRAN architecture]
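
For the IT-minded reader, here is a deliberately simplified Python sketch of the attach flow just described. Every class and value is a toy stand-in for illustration only; a real EPC involves far more signalling, and the Serving GW is reduced to a comment.

    # Toy model of the EPC roles described above; all names are illustrative.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class UE:                      # user equipment: a phone or tablet
        imsi: str                  # subscriber identity presented at attach
        ip: Optional[str] = None   # allocated by the PDN GW on success

    class HSS:                     # subscriber database used for authentication
        def __init__(self, subscribers):
            self.subscribers = set(subscribers)
        def authenticate(self, ue):
            return ue.imsi in self.subscribers

    class PDNGW:                   # interface to external networks; hands out IPs
        def __init__(self):
            self.next_host = 1
        def allocate_ip(self, ue):
            ue.ip = f"10.0.0.{self.next_host}"
            self.next_host += 1

    class MME:                     # control plane: authorises and tracks the UE
        def __init__(self, hss, pdn_gw):
            self.hss, self.pdn_gw = hss, pdn_gw
        def attach(self, ue):
            if not self.hss.authenticate(ue):   # HSS check first
                return False
            # In a real EPC the user plane is set up through the Serving GW;
            # here we jump straight to IP allocation at the PDN GW.
            self.pdn_gw.allocate_ip(ue)
            return True

    mme = MME(HSS({"001010123456789"}), PDNGW())
    phone = UE(imsi="001010123456789")
    print(mme.attach(phone), phone.ip)          # True 10.0.0.1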

In summary, EPC, IMS and CPE are all network functions that deliver key capabilities we take for granted today. EPC and IMS support the mobile services that have become part of our daily lives; frankly, we probably would not know what to do without them. CPE supports the network interconnectivity that is part of the modern business world. Traditionally, these have all been delivered using very specialised hardware appliances. The NFV movement is focused on delivering these services as software running in virtual machines on a cloud, instead of as hardware appliances.

There are huge benefits to this movement.

  • It will be far less expensive to utilise shared, commodity infrastructure for all services, versus expensive, specialised appliances that cannot be shared.
  • Operational costs are far less expensive because the skills to support the infrastructure are readily available in the market.
  • It costs far less to bring a new service on-board, because it entails deploying some software in VMs, versus the acquisition and deployment of specialised hardware appliances.
  • It costs far less to fail. If a new service does not attract the expected clientele, the R&D and deployment costs of that service will be far less with NFV than in the traditional model.
  • It will be significantly faster to bring new services to market. Writing and testing new software is much faster than building and testing new hardware.
  • The cost of the new virtual network functions (VNFs) will be lower because the barrier to entry is far lower: it is now about developing software rather than building a hardware appliance. We already see evidence of this, with many new players entering the Network Equipment Provider (NEP) market (the suppliers of the VNFs), creating more competition and driving down prices.

It all sounds great, but we have to be honest: there are serious questions still to be answered:

  • Can the VNFs deliver the same level of service as the hardware appliances?
  • Can the Telco operators successfully transform their current operating models to support this new NFV paradigm?
  • Can a cloud meet the non-functional requirements (NFRs) of the VNFs?
  • Are the tools within the cloud fit for purpose for Telco grade workloads and services?
  • Are there enough standards to support the NFV movement?

All great questions that I will try to answer in future blog posts. The European Telecommunications Standards Institute (ETSI), an independent, not-for-profit organisation that develops standards via consensus of its members, has been working on answers to some of these questions. Others are being addressed by cloud vendors, like VMware.


Gary Hamilton is a Senior Cloud Management Solutions Architect at VMware and has worked in various IT industry roles since 1985, including support, services and solution architecture; spanning hardware, networking and software. Additionally, Gary is ITIL Service Manager certified and a published author. Before joining VMware, he worked for IBM for over 15 years, spending most of his time in the service management arena, with the last five years being fully immersed in cloud technology. He has designed cloud solutions across Europe, the Middle East and the US, and has led the implementation of first of a kind (FOAK) solutions. Follow Gary on Twitter @hamilgar.

Planning a DRP Solution for VMware Mirage Infrastructure

By Eric Monjoin

When I first heard about VMware Mirage in 2012, when Wanova was acquired by VMware and Mirage was integrated into our portfolio, it was seen mostly as a backup solution for desktops, or as a tool for migrating from Windows XP to Windows 7. With an extension, it could also easily migrate a physical desktop to a virtual one. So, most of the time when we had to design a Mirage solution, the question of DRP or HA came up as, "Why back up a backup solution?" Mirage was not seen as a strategic tool.

This has changed, and VMware Mirage is now fully integrated into the end-user computing (EUC) portfolio as a tool to manage user desktops across different use cases. Of course, we still have the backup and migration use cases, but more and more customers are also using it to ensure desktops conform to IT rules and policies, which makes a reliable Mirage infrastructure all the more important. In this post we'll describe how to design a reliable infrastructure, or at least give the key points for different scenarios.

Let’s first have a look at the different components of a Mirage infrastructure:


Figure 1 – Basic VMware Mirage Components

  1. Microsoft SQL Database—The MS SQL database contains all the configuration and settings of the Mirage infrastructure. This component is critical: if the database fails, all Mirage transactions stop, and so do the Mirage services (the Mirage Management Server service and the Mirage Server service).
  2. SMB Shared Volumes—These can be hosted on any combination of NAS and Windows servers; desktop files, apps, base layers, and USMT files are all stored on these volumes (except small files and metadata).
  3. Mirage Management Server—This is used to manage the Mirage infrastructure, but it also acts as a MongoDB server instance in Mirage v5.4 and later. If it fails, administration is not possible until a new one is installed, and there is no way to recover desktops, since the small files stored in the MongoDB are no longer available.
  4. Mirage Server—This is what Mirage clients connect to. Typically, many Mirage servers are installed and placed behind load-balancers to provide redundancy and scalability.
  5. Web Management—A standard Web server service can be used to manage Mirage using a Web interface instead of the Mirage Management Console. The installation is quite simple and does not require extra configuration, but note that data is not stored on the Web management server.
  6. File Portal—Similar to Web management above, it is a standard Web server service used by end users to retrieve their files using a Web interface, and again, data is not stored on the file portal server.
  7. Mirage Gateway—This is used by end users to connect to Mirage infrastructure from an external network.

Now, let’s take a look at the different components of VMware Mirage and see which components can be easily configured for a reliable and redundant solution:

  • Mirage Management Server—This is straightforward, and actually mandatory: because of MongoDB, we need to install at least one more management server, and the MongoDB instances will synchronize automatically. The last step is to use a VIP on a load-balancer to connect to, routing traffic to any available management server. The maximum number of Mirage management servers is seven, a MongoDB restriction. Keep in mind that more than two members can reduce performance, since each write to the database must wait for acknowledgement from all members (see the short sketch after this list). The recommended number of management servers is two.
  • Mirage Server—By default we install at least two Mirage servers: one per 1,000 to 1,500 centralized virtual desktops (CVDs), depending on the hardware configuration, plus one for redundancy, with load-balancers routing client traffic to any available Mirage server.
  • Web Management and File Portal—Since these are just Web applications installed on Microsoft IIS servers, we can deploy them on two or more Web servers and use load-balancers to provide the required redundancy.
  • Mirage Gateway—This is an appliance, and the approach is the same as for the previous components: we simply deploy one or more additional appliances and configure load-balancers in front of them. As with the Mirage server, there is a limit on the number of connections per Mirage Gateway, so do not exceed one appliance per 3,000 endpoints, and add one for resiliency.
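
To illustrate the write-acknowledgement point above: in MongoDB terms, waiting for all replica-set members corresponds to a write concern of w equal to the member count. Here is a minimal sketch using the Python pymongo driver with hypothetical host and collection names; it shows the general MongoDB mechanism, not how Mirage itself is configured.

    # Minimal sketch, assuming pymongo and hypothetical hosts; not Mirage's
    # actual configuration. With two members, the write below blocks until
    # both have acknowledged it, so every extra member adds write latency.
    from pymongo import MongoClient, WriteConcern

    client = MongoClient("mongodb://mgmt1:27017,mgmt2:27017/?replicaSet=mirage")
    files = client.mirage.get_collection(
        "small_files", write_concern=WriteConcern(w=2))  # wait for all 2 members

    files.insert_one({"cvd": 42, "name": "desktop.ini", "data": b"..."})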

Note: Most components sit behind a load-balancer to get the best performance and prevent issues like frequent disconnections, so it is recommended to set the load-balancer to support the following (a small sizing sketch follows the list):

  • Two TCP connections per endpoint, and up to 40,000 TCP connections for each Mirage cluster
  • Change MSS in FastL4 protocol (F5) from 1460 to 900
  • Increase timeout from five minutes to six hours
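
Pulling the sizing rules above together, here is a back-of-the-envelope helper. The numbers come straight from this post (two TCP connections per endpoint up to 40,000 per cluster, roughly 1,000 to 1,500 CVDs per Mirage server, 3,000 endpoints per gateway, plus one of each for redundancy); the function itself is just an illustrative sketch.

    # Back-of-the-envelope sizing for a Mirage cluster, using the figures
    # quoted in this post. Purely illustrative; validate against the official
    # sizing guidance for your Mirage version and hardware.
    def size_mirage_cluster(endpoints, cvds_per_server=1000):
        servers = -(-endpoints // cvds_per_server) + 1  # ceil, +1 for redundancy
        gateways = -(-endpoints // 3000) + 1            # 3,000 per gateway, +1
        tcp_connections = min(endpoints * 2, 40000)     # 2 per endpoint, 40k cap
        return {"mirage_servers": servers,
                "mirage_gateways": gateways,
                "lb_tcp_connections": tcp_connections}

    print(size_mirage_cluster(5000))
    # {'mirage_servers': 6, 'mirage_gateways': 3, 'lb_tcp_connections': 10000}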

Basically, all the Mirage components can be deployed redundantly without much trouble, but they rely on two external components that work jointly and are both key: the Microsoft SQL database and the SMB shared volumes. This means we have to pay special attention to which scenario is preferred:

  • Simple backup
  • Database continuity
  • Or full disaster recovery

The level of effort required is not the same and depends on the RPO/RTO required.

So let's have a look at the different scenarios available:

  1. Backup and Restore—This solution consists of backing up and restoring both the Microsoft SQL database and the storage volumes in case a major issue occurs on either component. It is relatively simple to implement and looks inexpensive as well. It can be a fit if the required RPO/RTO is not aggressive: you have a few hours to restore the service, and data centralized since the last backup does not need to be restored by hand, since re-centralizing it from the endpoints is automatic and quick. Remember, even if you lose your Mirage storage, all the data is still available on the end users' desktops; it will just take time to centralize it again. However, this is not an appropriate scenario for large infrastructures with thousands of CVDs, as it can take months to re-centralize all the desktops. If you use this solution, make sure the Microsoft SQL database and the SMB volumes are backed up at the same point in time. Basically, this means stopping the Mirage services and the MongoDB services, backing up the database using SQL Manager, taking a snapshot of the storage volumes, and backing up the MongoDB files. In case of failure, stop Mirage (if it has not already stopped by itself), restore the last database backup, and revert to the latest snapshot on the storage side. Keep in mind you must follow this sequence: first stop all Mirage services, and then the MongoDB services (a hypothetical orchestration sketch follows this list).
  2. Protect the Microsoft SQL Database—Some customers are more focused on keeping the database intact, and this implies Microsoft SQL clustering. However, VMware Mirage does not use ODBC connections, so it is not aware that it must move to a different Microsoft SQL instance if the main one fails. The solution is Microsoft SQL AlwaysOn technology, a combination of Microsoft SQL clustering and Microsoft failover clustering. It provides synchronization of "non-shared" volumes among nodes, plus a virtual IP and virtual network name that move to the remaining node in case of disaster, or during a maintenance period.
  3. Full Disaster Recovery/Multisite Scenario—This last scenario is for customers who require full disaster recovery between two data centers with stringent RPO/RTO requirements. All components are duplicated in each data center, with load-balancers routing traffic to a Mirage management server, Mirage server, or Web management/file portal IIS server. This implies using the second scenario to provide Microsoft SQL high availability, and also performing synchronous replication between two storage nodes. Be aware that synchronous replication can significantly affect storage controller performance. While this is the most expensive scenario, since it requires extra licenses, it is the fastest way to recover from a disaster. An intermediate option could be to have two Mirage management servers (one per data center), but to shut down the Mirage services and replicate the SQL database and storage volumes over the weekend.
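
As a purely hypothetical illustration of the stop order in scenario one, here is a short Python sketch. The Windows service names are placeholders (Mirage's real service names may differ), and the backup and snapshot steps are reduced to comments.

    # Hypothetical orchestration of the scenario-one backup sequence.
    # Service names are placeholders, not documented Mirage service names.
    import subprocess

    MIRAGE_SERVICES = ["MirageManagementServer", "MirageServer"]  # placeholders
    MONGO_SERVICE = "MongoDB"                                     # placeholder

    def stop(service):
        # "sc stop" asks the Windows Service Control Manager to stop a service.
        subprocess.run(["sc", "stop", service], check=True)

    # 1. Stop all Mirage services first, then the MongoDB service (order matters).
    for svc in MIRAGE_SERVICES:
        stop(svc)
    stop(MONGO_SERVICE)

    # 2. Back up the SQL database, snapshot the SMB volumes, and back up the
    #    MongoDB files here, so all three stay consistent in time, then restart
    #    the services in the reverse order.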


Figure 2 – Example Multi-Site Mirage Infrastructure

For scenarios two and three, the installation and configuration of Microsoft SQL AlwaysOn in a VMware Mirage infrastructure is explained further in the white paper.


Eric Monjoin joined VMware France in 2009 as a PSO Senior Consultant after spending 15 years at IBM as a Certified IT Specialist. Passionate about new challenges and technology, Eric has been a key leader in the VMware EUC practice in France. Recently, Eric moved to the VMware Professional Services Engineering organization as a Technical Solutions Architect. Eric is certified VCP6-DT, VCAP-DTA and VCAP-DTD, and was awarded vExpert for the fourth consecutive year.