
By Trey Tyler, Sr. Solutions Strategist

Taking a VLAN and extending its broadcast domain across two sites goes by many names: Data Center Interconnect (DCI), Data Center Extension (DCE), Extended Layer 2 Network, Stretched Layer 2 Network, Stretched VLAN, Extended VLAN, Stretched Deploy, Layer 2 VPN.  With all these aliases, it’s no wonder people are confused about how any of it works, what the implications of routing different networks between sites are, or why you should consider stretching networks in the first place.

 

Stretching a network allows VMs to talk over the same broadcast domain even when they exist at different physical locations, removing the need to re-architect your network topology.  Additionally, it allows VMs to retain their IP and MAC addresses when you vMotion them between these locations, which can be very useful from a licensing perspective for some software applications.  These capabilities create the hybrid ‘feel’ of your datacenters by allowing you to grow or shrink applications at either site without having to touch the networking.

 

There are multiple ways to implement a stretched layer 2 network between datacenters, ranging from hardware-based solutions such as Cisco’s OTV on the Nexus 7000 to options with VMware’s NSX platform like the one Tom Fojta expertly explains here. The drawback is that you may lack control over the physical equipment your cloud provider utilizes, or not have the level of access required to leverage these options.  And even if you’ve found a way to stretch these networks, you now need to worry about potential loops, managing flow affinity to prevent traffic drops, and duplicate ARP responses.

 

VMware has developed a solution to these issues with the combination of vCloud Air and Hybrid Cloud Manager (HCM). By leveraging HCM in a vCloud Air environment you can stretch networks to the cloud regardless of the physical equipment in use, and without vCenter or NSX access.  HCM can also tackle the routing complexities introduced when stretching multiple networks between two sites.

 

Stretched Layer 2 Data Path:
HCM creates stretched layer 2 networks with help from the Layer 2 Concentrator (L2C), one of its fleet VMs. An L2C can stretch existing VLAN or VXLAN port groups from the on-premises data center to vCloud Air by trunking the networks on either side and creating a secure end-to-end tunnel to protect and pass VM network traffic.

 

During deployment of an L2C, a new port group is created on the Distributed Virtual Switch (vDS) and the L2C is attached to this port group.  The L2C’s connection to the vDS is trunked for the VLAN being stretched, then enabled as a “Sink Port”.  The Sink Port feature allows traffic destined for MAC addresses the vDS does not have an entry for to be seen by the port it’s enabled on.  The L2C can then listen for and receive traffic for VMs that are located within the vCloud Air environment.
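
As a rough sketch of the Sink Port behavior (the class and names below are invented for illustration; they are not an HCM or vSphere API): a vDS normally delivers frames only to MAC addresses it has an entry for, while a sink-enabled port also receives the unknown-unicast traffic, letting the L2C pick up frames destined for cloud side VMs.

```python
# Toy model of vDS unknown-unicast handling with a sink port.
# Class and method names are invented for illustration only.

class SinkSwitch:
    def __init__(self):
        self.mac_table = {}      # MAC -> port for locally known VMs
        self.sink_ports = set()  # ports that receive unknown-unicast traffic

    def learn(self, mac, port):
        self.mac_table[mac] = port

    def enable_sink(self, port):
        self.sink_ports.add(port)

    def forward(self, dst_mac):
        """Return the set of ports a frame for dst_mac is delivered to."""
        if dst_mac in self.mac_table:
            return {self.mac_table[dst_mac]}   # known local VM
        return set(self.sink_ports)            # unknown: only sink ports see it

vds = SinkSwitch()
vds.learn("00:50:56:aa:aa:aa", "vm-port-1")    # on-premises VM
vds.enable_sink("l2c-port")                    # the L2C's trunked sink port

print(vds.forward("00:50:56:aa:aa:aa"))  # {'vm-port-1'}
print(vds.forward("00:50:56:bb:bb:bb"))  # {'l2c-port'} - cloud side VM, tunneled by the L2C
```

In this model the vDS never needs MAC entries for cloud side VMs: any frame it cannot deliver locally falls through to the sink port and is carried across the tunnel.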

 

Layer 2 Extension

 

Stretched L2 Traffic Flows:
When stretching a Layer 2 network to vCloud Air, attached machines will rely on the local datacenter’s edge router for all routing actions as well as for firewall protection.  This allows you to manage access controls and routing behavior for cloud VMs through the on-premises interface. By retaining the local datacenter’s edge router as the default gateway during a long distance vMotion to vCloud Air, all existing network connections will continue working after the successful relocation.

 

Keeping the local router as our primary gateway does come with a new set of challenges. Cloud side VMs communicating with separate cloud side networks must first travel to the on-premises router, then return to the destination network in the cloud, as shown in the graphic below.  This is sometimes referred to as tromboning, since the data path resembles the bending pipes of a trombone.  When network traffic is forced to trombone, you can expect elevated latency between VMs as well as inefficient utilization of the connection between sites.
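
A back-of-the-envelope calculation shows why tromboning hurts: each direction of a VM-to-VM exchange crosses the inter-site link twice, so the round trip pays the inter-site latency four times. The latency figures below are assumed purely for illustration:

```python
# Back-of-the-envelope latency cost of tromboning (illustrative numbers only).
intersite_ms = 10.0   # one-way latency between cloud and on-premises (assumed)
local_ms = 0.5        # one-way latency within the cloud site (assumed)

# Direct path: VM A -> cloud edge -> VM B, and the reply back.
direct_rtt = 2 * (local_ms + local_ms)

# Tromboned path: each direction crosses the inter-site link twice
# (out to the on-premises router, then back to the cloud).
trombone_rtt = 2 * (intersite_ms + intersite_ms + local_ms + local_ms)

print(f"direct RTT:    {direct_rtt:.1f} ms")    # 2.0 ms
print(f"tromboned RTT: {trombone_rtt:.1f} ms")  # 42.0 ms
```

With even a modest 10 ms inter-site latency, two VMs sitting in the same cloud rack see a round trip more than twenty times slower than a direct path, and every one of those bytes also consumes inter-site bandwidth twice.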

 


Tromboning traffic flow between two cloud side VMs on different networks

 

Why Can’t I Just Use The Cloud Side Edge?
If you allow VMs on different stretched networks to communicate with one another directly through their local (cloud side) edge, you can resolve the tromboning issue, though this comes at a cost.

 

  • The network paths taken in either direction become asymmetrical. When stateful firewalls see asymmetric traffic flows, the firewall on either end will terminate flows for which it holds no state.
  • VMs that are vMotioned to the cloud will drop existing connections when the vMotion completes, since their default gateway changes from the local datacenter to the cloud.
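
The first bullet can be sketched with a toy stateful firewall (invented names, illustration only): each firewall permits return traffic only for flows it saw leave, so a reply that comes back through the other site's firewall finds no matching state and is dropped.

```python
# Minimal sketch of why asymmetric paths break stateful firewalls.
# A stateful firewall only permits return traffic for flows it saw outbound.

class StatefulFirewall:
    def __init__(self):
        self.flows = set()

    def outbound(self, src, dst):
        self.flows.add((src, dst))   # record the flow state
        return True

    def inbound(self, src, dst):
        # Return traffic is allowed only if the reverse flow was seen outbound.
        return (dst, src) in self.flows

onprem_fw = StatefulFirewall()
cloud_fw = StatefulFirewall()

# Symmetric: request and reply both traverse the on-premises firewall.
onprem_fw.outbound("vm-a", "vm-b")
print(onprem_fw.inbound("vm-b", "vm-a"))  # True - state exists, reply passes

# Asymmetric: request goes via on-premises, reply returns via the cloud edge.
onprem_fw.outbound("vm-a", "vm-c")
print(cloud_fw.inbound("vm-c", "vm-a"))   # False - no state, flow is dropped
```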

 

Luckily, we have a solution to these issues with a feature called “Proximity Routing.”

 


Asymmetrical traffic flow between two VMs on different networks

 

Proximity Routing to the Rescue:
Proximity Routing was introduced to resolve the symptoms caused by network tromboning, avoid asymmetrical routing, and prevent the dropped connections during vMotion mentioned earlier.

 

When this feature is enabled, VMs on a stretched network will utilize the edge router closest to them.  To prevent traffic from reaching the wrong gateway, the Layer 2 Concentrators filter ARP requests for the gateway address.
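
Conceptually, the gateway-ARP filter reduces to a simple predicate (a sketch with invented names; the real L2C logic is internal to HCM): ARP requests for the gateway address are answered by the local edge and must not cross the tunnel, while ordinary VM-to-VM ARP still does.

```python
# Sketch of the L2C's gateway-ARP filtering (invented names, illustration only).
# With Proximity Routing, each site answers ARP for the gateway address locally,
# so the L2C drops gateway ARP requests instead of tunneling them between sites.

GATEWAY_IP = "192.168.10.1"   # assumed gateway address for the stretched network

def should_tunnel_arp(target_ip, gateway_ip=GATEWAY_IP):
    """Return False for ARP requests the L2C filters out of the tunnel."""
    return target_ip != gateway_ip

print(should_tunnel_arp("192.168.10.1"))   # False - answered by the local edge
print(should_tunnel_arp("192.168.10.25"))  # True  - normal VM-to-VM ARP crosses
```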

 

Additionally, the cloud side router is now aware of all cloud side VMs on stretched networks.  All other IPs on these stretched networks are assumed to reside on-premises.

 

A /32 route is created for each cloud side VM on a stretched network.  These routes are distributed to the on-premises edge through BGP, as shown in the graphic below.  The /32 routes take precedence over less specific routes, forcing traffic to follow the same path on its return trip.
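
The precedence rule here is ordinary longest-prefix matching, which can be demonstrated with Python's standard ipaddress module (the addresses and next-hop names are illustrative):

```python
# Longest-prefix-match demo: a /32 host route wins over the network's /24.
# Uses only Python's stdlib ipaddress module; addresses are illustrative.
import ipaddress

routes = {
    ipaddress.ip_network("192.168.10.0/24"): "on-premises edge",  # stretched network default
    ipaddress.ip_network("192.168.10.50/32"): "cloud edge",       # BGP-advertised host route
}

def next_hop(dst):
    """Pick the most specific matching route for a destination address."""
    ip = ipaddress.ip_address(dst)
    matches = [net for net in routes if ip in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return routes[best]

print(next_hop("192.168.10.50"))  # cloud edge       - migrated VM, /32 route
print(next_hop("192.168.10.20"))  # on-premises edge - VM still on-premises
```

Because the on-premises edge holds the same /32 routes via BGP, traffic it originates toward a cloud side VM follows the specific route back to the cloud edge, keeping both directions of the flow on the same path.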

 


The /32 route distribution and symmetrical traffic flow between two VMs on different networks

 

To prevent dropped traffic after a vMotion to the cloud, migrated VMs will continue to trombone to the on-premises data center until the VM has been rebooted in the cloud.  Once rebooted, the VM will use the cloud side router as its default gateway.

 

HCM, Solving Your Hybrid Cloud Woes:

With Proximity Routing in play, your hybrid applications will communicate with other VMs over the most efficient paths, allowing you to realize a true hybrid cloud scenario.  HCM takes care of avoiding the inefficiencies of tromboning traffic, maintaining symmetrical flow affinity, and preventing duplicate ARP responses.  Since the solution stands on its own without any special physical equipment, and setup takes only a few minutes (Deployment and VLAN Stretch), testing a proof of concept is now simple.

 

Proximity Routing and Layer 2 extension also play a huge part in data center migration to the cloud by enabling vMotion capabilities and improving network performance for hybrid applications. I’ll talk about how these two come into play in detail during an upcoming webcast planned for Thursday, April 27th.  To attend this tech talk, sign up here!

 

For more information on vCloud Air and Hybrid Cloud Manager, please visit vcloud.vmware.com.