Nicolas Vermandé (VCDX#055) is the practice lead for Private Cloud & Infrastructure at Kelway, a VMware partner. Nicolas covers the Software-Defined Data Center on his blog, www.my-sddc.com.

This series of posts describes a specific use case for VMware NSX in the context of Disaster Recovery. The goal is to demonstrate NSX's routing and programmability capabilities through a lab scenario. This first part presents the NSX components and details the use case. The second part will show how to deploy the lab, and the third part will deal with the APIs and show how to use Python to execute REST API calls to recreate the required NSX components at the recovery site.

Introduction

When considering a dual-datacenter strategy with VM recovery in mind, one important decision is whether to adopt an active/active or active/standby model. The former is generally much more complex to manage because it requires double the work in terms of procedures, testing and change controls. In addition, capacity management becomes challenging, as you need to provision enough physical resources to be able to run all workloads at either site. On top of that, stretched VLANs are sometimes deployed across datacenters so that recovered VMs can keep their IP addresses. This is generally very costly if you want to leverage a proper L2 extension technology, such as Cisco OTV.

Alternatively, in an SDDC environment, you can leverage VMware NSX to efficiently manage the connectivity and network changes required in the event of a full site failover. NSX gives you the ability to maintain the same IP address scheme for all your workloads by leveraging APIs, with little effort. Or, with more granularity, you could even move a single subnet as part of a specific recovery plan. NSX makes this possible by providing:

  • An overlay network that decouples the backend VM network from the physical network. NSX-V uses VXLAN, with each ESXi host acting as a VTEP.
  • Programmability through RESTful APIs that allow you to provision Logical Switches and modify Logical Router configurations in seconds (a short sketch follows this list).
  • Dynamic routing protocols (OSPF, IS-IS, BGP) that advertise VM subnets to your enterprise network, making them accessible to end users and other applications (North-South or East-West traffic).
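
To make the programmability point concrete, here is a minimal sketch of what provisioning a Logical Switch looks like against the NSX-V REST API, using Python and the requests library. The NSX Manager address, credentials and transport zone (scope) ID are lab placeholders, and the exact XML fields should be treated as indicative rather than authoritative:

```python
# Minimal sketch: create a Logical Switch via the NSX-V REST API.
# The NSX Manager address, credentials and transport zone (scope) ID
# below are lab placeholders.
import requests

NSX_MGR = "https://192.168.110.42"
AUTH = ("admin", "VMware1!")     # placeholder credentials
SCOPE_ID = "vdnscope-1"          # transport zone ID (placeholder)

ls_spec = """
<virtualWireCreateSpec>
  <name>DR_LS1</name>
  <description>Logical Switch created via REST</description>
  <tenantId>dr-demo</tenantId>
  <controlPlaneMode>UNICAST_MODE</controlPlaneMode>
</virtualWireCreateSpec>
"""

resp = requests.post(
    f"{NSX_MGR}/api/2.0/vdn/scopes/{SCOPE_ID}/virtualwires",
    data=ls_spec,
    headers={"Content-Type": "application/xml"},
    auth=AUTH,
    verify=False,  # lab only: the NSX Manager uses a self-signed certificate
)
resp.raise_for_status()
print("New Logical Switch ID:", resp.text.strip())  # e.g. virtualwire-20
```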

NSX Components

As many NSX introduction blog posts can be found on the web (like here or here), I’m not gonna spend much time on this topic. NSX components are:

  • NSX Manager: it’s the single point of configuration and the REST API (and UI) interface. It is provided as a VM appliance and is actually the only appliance you have to install manually. There is a 1:1 mapping between the vCenter Server and the NSX Manager. The manager is responsible for deploying NSX Controllers, NSX Edge Gateways and Logical Router Control VMs. It also installs the Distributed Routing and firewall kernel modules on the ESXi hosts, as well as the User World Agent (UWA). The NSX configuration is accessible through vCenter once you’ve installed the NSX plugin.
  • NSX Controller: it provides a control plane to distribute VXLAN Logical Routing and Switching network information to the ESXi hosts. It also enables ARP suppression to reduce flooding. It is typically implemented as a 3-node cluster and maintains the MAC, ARP and VTEP tables. Finally, it is responsible for installing routes on each ESXi host.
  • Logical Switch (LS): it acts as the L2 domain boundary for VMs, identified by a VXLAN ID (VNI) and associated with a specific subnet. Its vCenter representation is a distributed Portgroup with specific capabilities.
  • Distributed Logical Router (DLR): it’s the distributed L3 first hop for VM traffic. As its name suggests, it’s completely distributed. You can think of it as an anycast gateway, where each ESXi host is a node sharing a single virtual IP and virtual MAC address. The data-path routing process runs within each ESXi host in vmkernel space and enables East-West traffic optimisation, avoiding the well-known hair-pinning effect when VMs want to talk to their default gateway.
  • Logical Router Control VM: it provides the DLR with a control plane and can be deployed as a redundant pair of VM appliances, in an active/standby fashion. It supports both OSPF and BGP as dynamic routing protocols. The Control VM receives its initial configuration from the NSX Manager.
  • Edge Services Gateway (ESG): it provides network perimeter services to the virtual environment. It is intended for North-South communication, i.e. between the physical and the virtual network, or at the edge of your tenant. It is NOT distributed, meaning that its placement is critical. It can run in HA mode, where the appliances are deployed in an active/standby fashion. The HA mechanism doesn’t rely on VMware HA (as some people at Cisco seem to think), but with a minimum of common sense, you’re gonna create a DRS anti-affinity rule to separate the active and standby VMs. Depending on specific requirements, the edge gateway can be deployed in several sizes:
    • Compact (1 vCPU – 512MB RAM)
    • Large (2 vCPUs – 1GB RAM)
    • Quad-Large (4 vCPUs – 1GB RAM)
    • X-Large (6 vCPUs – 8GB RAM).

Available services include: Firewall, NAT, DHCP, Routing, Load-Balancing, Site-to-Site VPN, SSL VPN and L2VPN.

  • Distributed Firewall (DFW): it enables distributed security capabilities at the VM NIC level, acting as an East-West L2-L4 stateful firewall. The module is present on each ESXi host as a kernel module and therefore removes any form of bottleneck. If you need more bandwidth, just add a new host! It also includes the Service Composer feature, which allows you to create specific services by integrating additional 3rd-party capabilities with the firewall, such as endpoint services (e.g. Antivirus, Data Security) and deep packet inspection (Palo Alto). I have to say that this feature is one of the most compelling to me! (A small sketch right after this list shows how the DFW configuration can be pulled over the API.)
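
As a quick illustration of how the DFW lends itself to automation, the following sketch pulls the current DFW ruleset from the NSX Manager, which is also one way to keep security configuration in sync between sites. The manager address and credentials are lab placeholders:

```python
# Sketch: export the current DFW configuration from the NSX Manager.
# The manager address and credentials below are lab placeholders.
import requests

NSX_MGR = "https://192.168.110.42"
AUTH = ("admin", "VMware1!")     # placeholder credentials

resp = requests.get(
    f"{NSX_MGR}/api/4.0/firewall/globalroot-0/config",
    auth=AUTH,
    verify=False,  # lab only: self-signed certificate
)
resp.raise_for_status()

# The XML body contains the firewall sections and rules; save it so it can
# be compared with (or replayed at) the recovery site.
with open("dfw-config.xml", "w") as f:
    f.write(resp.text)
print(f"DFW configuration exported ({len(resp.text)} bytes)")
```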

The following picture shows how those components fit together:

[Image: togetherNSX – how the NSX components fit together]

Basic Understanding

To understand NSX concepts, it’s useful to map vSphere network components to NSX components:

In a traditional vSphere environment, a VM wishing to communicate with the outside world first hits a virtual port on the virtual switch. This virtual port is part of a Portgroup, which is basically a group of virtual network ports tagged with a specific VLAN ID. In the NSX world, when a VM is part of a Logical Switch, it hits a virtual port belonging to a Portgroup specifically created by the NSX Manager. This Portgroup is created on every host member of the VDS, like a traditional distributed Portgroup. The difference, however, is that all egress frames hitting this Portgroup are forwarded inside a VXLAN tunnel, tagged with a specific external VLAN ID to transport the VXLAN frames on the physical network.

The role of the Logical Router is to connect two or more Logical Switches together, enabling routing between the corresponding subnets (you can assume 1 LS = 1 subnet). It also advertises prefixes to (and learns routes from) its neighbor(s) if a dynamic routing protocol has been activated. Alternatively, you can configure static routes.

As an example, the following diagram shows the DLR establishing an adjacency with the ESG, which is also running a dynamic routing protocol and advertises the VM subnets to the physical world. The ESG has its internal interface connected to a VXLAN and its uplink connected to a VLAN. As a result, the physical network can learn about the virtual network, and vice versa.

[Image: example – DLR and ESG routing adjacency]
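
If you want to verify what the DLR has been configured to advertise, its routing configuration (static routes, OSPF, BGP and redistribution settings) can be read back from the NSX Manager. Here is a minimal sketch; the edge ID, manager address and credentials are lab placeholders:

```python
# Sketch: read back the routing configuration of a DLR (or an ESG).
# The edge ID, manager address and credentials below are lab placeholders.
import requests

NSX_MGR = "https://192.168.110.42"
AUTH = ("admin", "VMware1!")   # placeholder credentials
EDGE_ID = "edge-1"             # the DLR's edge ID, as listed by GET /api/4.0/edges

resp = requests.get(
    f"{NSX_MGR}/api/4.0/edges/{EDGE_ID}/routing/config",
    auth=AUTH,
    verify=False,  # lab only: self-signed certificate
)
resp.raise_for_status()
print(resp.text)  # XML describing OSPF/BGP settings, redistribution and static routes
```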

Lab Architecture

Now that you’ve had a basic introduction to NSX principles, I can detail my scenario. In my lab environment, I’ve simulated the following architecture:

[Image: globalNSX – dual-datacenter lab architecture]

I didn’t actually deploy two sets of controllers and two managers linked to two different vCenter Servers in separate physical datacenters. Instead, I’ve created logical containers called “Transport Zones” to make both virtual datacenters completely independent from a data-plane standpoint. The goal here is to demonstrate how to integrate virtual network operations into an orchestrated Disaster Recovery plan with NSX. The only requirement is the ability to run a script as part of your DR procedures. This may ultimately be achieved by VMware Site Recovery Manager or another orchestration tool.

This architecture represents a traditional dual-datacenter environment connected over an L3 IP cloud. In a standard network environment, it basically means that you have to change the VMs’ IP addresses upon recovery. (There are alternatives, such as host routes, RHI and NAT, but these solutions come at a certain complexity cost.)

The main goal of the scenario is to show how to provide a flexible, orchestrated Disaster Recovery solution without having to change the VMs’ IP addresses. Let’s see how we can achieve this with NSX. The order of operations would be (a rough Python sketch of steps 3 and 4 follows the list):

  1. Disconnect LS1 and LS2 in DC1.
  2. Create new LS in DC2: DR_LS1 and DR_LS2 (or pre-create them without connecting them to the upstream DLR).
  3. Add two new interfaces to DLR2 in DC2, with the same IP addresses as previously used by DLR1 to connect LS1 and LS2. In this way, we don’t have to change the default gateway of the recovered VMs.
  4. Connect those interfaces to the corresponding LS.
  5. Recover VMs in DC2.
  6. Connect VMs to the appropriate LS.
  7. Boot VMs and test connectivity.
  8. Check route updates on the physical network.
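
To give a feel for what part three will cover, here is a rough sketch of steps 3 and 4: adding an interface to DLR2 that reuses the gateway IP previously owned by DLR1, and connecting it to DR_LS1. It assumes DR_LS1 has already been created (as in the earlier Logical Switch sketch) and returned the ID virtualwire-20; the manager address, credentials, edge ID, addressing and XML payload are all placeholders and should be treated as indicative:

```python
# Rough sketch of steps 3 and 4: give DLR2 an interface carrying the gateway
# IP previously owned by DLR1 and connect it to the recreated Logical Switch.
# All IDs, addresses and credentials below are lab placeholders.
import requests

NSX_MGR = "https://192.168.210.42"   # DC2 NSX Manager (placeholder)
AUTH = ("admin", "VMware1!")         # placeholder credentials
DLR2_ID = "edge-2"                   # DLR2 edge ID (placeholder)
DR_LS1_ID = "virtualwire-20"         # ID returned when DR_LS1 was created

if_spec = f"""
<interfaces>
  <interface>
    <name>DR_LS1_gw</name>
    <type>internal</type>
    <connectedToId>{DR_LS1_ID}</connectedToId>
    <isConnected>true</isConnected>
    <addressGroups>
      <addressGroup>
        <primaryAddress>172.16.10.1</primaryAddress>
        <subnetMask>255.255.255.0</subnetMask>
      </addressGroup>
    </addressGroups>
  </interface>
</interfaces>
"""

resp = requests.post(
    f"{NSX_MGR}/api/4.0/edges/{DLR2_ID}/interfaces/?action=patch",
    data=if_spec,
    headers={"Content-Type": "application/xml"},
    auth=AUTH,
    verify=False,  # lab only: self-signed certificate
)
resp.raise_for_status()
print("DR_LS1 interface added to DLR2")
```

The same call, with a second <interface> element, covers DR_LS2; the remaining steps (recovering, reconnecting and booting the VMs) belong to the DR orchestration tool rather than to NSX itself.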

Note: I’m assuming here that the security device configurations are synchronized between datacenters.

[Image: NSX]

Because OSPF is running within the virtual network on both DLR1 and DLR2, routing updates will be sent up to the IP cloud to reflect that the DR_LS1 and DR_LS2 subnets are now reachable through DC2. In the same way, because LS1 and LS2 have been disconnected from DLR1, the corresponding routes will be withdrawn to reflect that the LS1 and LS2 subnets are no longer reachable in DC1. Magic??!! No, just awesome technology :-)

The next post will focus on how to deploy this lab environment.

Nicolas