VMware Cloud Provider

Edge Cluster Design and Migration Considerations for VMware vCloud Director 9.7 (Part 3 of 3)

By Daniel Paluszek and Abhinav Mishra

Part 1 – Introduction to Edge Clusters in VMware vCloud Director 9.7 (Part 1 of 3)

Part 2 – Setting up Edge Clusters in VMware vCloud Director 9.7 (Part 2 of 3)

In this post, we will be reviewing some of the considerations when using Edge Clusters inside of vCloud Director 9.7.

Edge Cluster Traffic Flow and Broadcast Traffic

In the diagram above, we can see how traffic traverses from a North/South perspective. Tenant traffic routes over the VXLAN transport network (the overlay VLAN) from the Compute Clusters to its destination tenant Edge, which resides on the Edge Cluster. From there, the Edge egresses the traffic to the northbound network.

From a broadcast traffic perspective, only the “DV-Switch Edge” receives northbound broadcast traffic, which minimizes the sprawl of this traffic into the workload environment (Compute Clusters).

Edge Cluster Considerations

There are a few considerations when using Edge Cluster services inside of vCD 9.7.

  1. General Considerations
    1. This feature is currently supported only for NSX-V ESGs.
    2. API support starts with vCD 9.7 (API version 32) and is not available for earlier versions. Note that the Edge Cluster configuration is driven through the vCD Cloud/Open API (see the API sketch after this list).
    3. This is intended only for oVDC Edges. Internal edges (such as vApp Edges) and other router types, such as Universal Distributed Logical Routers (UDLRs), still go through standard placement.
    4. This solution replaces the metadata configuration that was introduced in vCD 9.0. As stated before, once an Edge Cluster has been configured, it takes precedence over any metadata configuration inside of the pVDC.
  2. Network Pools
    1. Before adding the Edge Cluster in vCD, the provider must add the Edge Cluster to the Transport Zone in NSX (if utilizing VXLAN). The provider must then sync the VXLAN network pool so that vCD stays in sync with NSX from an overlay perspective.
    2. VXLAN Network Pool – 
      1. The Edge Cluster must be part of the VXLAN network pool and backing transport zone.
      2. If the Edge Cluster is not part of the transport zone, it cannot be assigned to an oVDC that uses that VXLAN network pool.
    3. VLAN Network Pool – 
      1. If a VLAN network pool is desired, the Edge Cluster and VLAN network pool must be on the same distributed vSwitch (DVS).
    4. External Networks
      1. When Edge Clusters are utilized, one can configure a dedicated distributed vSwitch with discrete External Networks.
      2. When deploying an org VDC Edge and utilizing Edge Clusters, the provider should select the explicit External Network attached to the Edge Cluster.
      3. If the Org VDC has an Edge Cluster assigned to it, the UI will show the External Networks that are correlated to that Edge Cluster.
    5. Primary and Secondary Edge Cluster
      1. After an Edge Cluster has been instantiated, it can be assigned to an oVDC as a Primary or Secondary Cluster.
      2. This makes it possible to load balance tenants between Edge Clusters while also achieving a higher level of availability.
      3. Right now, there is a network pool consideration in vCD: while the Primary and Secondary Edge Clusters can use different resource pools and different storage profiles, they must share the same DV-Switch. In essence, both the active and standby Edge nodes must connect to the same External Network; vCD cannot place the active Edge node on one External Network and the standby node on another when HA is enabled. (See the Network Profile sketch after this list for how the Primary and Secondary Edge Clusters are assigned to an oVDC.)
    6. Northbound (WAN/External Network) Connectivity
      1. By utilizing a dedicated Edge Cluster for vCD, the provider can discretely control the span of Layer 2 (L2) broadcast traffic for northbound VLAN(s) throughout the entire cloud management platform.
      2. In this design, the compute/payload environment only carries workload traffic over the overlay transport zone VLAN (NSX VTEP). Therefore, the Compute Cluster(s) do not receive any northbound broadcast traffic, which minimizes potential flooding.
      3. Moreover, from a security perspective, the dedicated Edge Cluster is our dedicated demarcation to the outside network – tenant workloads must traverse through the Edge Cluster to egress to the outside world.
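
To make item 1.2 above concrete, here is a minimal Python sketch of a provider session against the vCD CloudAPI at API version 32, followed by an Edge Cluster listing. The host name and credentials are placeholders, and the /cloudapi/1.0.0/edgeClusters path is our assumption of the endpoint name; verify it against the vCD 9.7 OpenAPI documentation for your build.

```python
import requests

VCD_HOST = "https://vcd.example.com"   # placeholder provider endpoint
API_VERSION = "32.0"                   # vCD 9.7 corresponds to API version 32

# Log in as a provider; the CloudAPI returns a bearer token in the
# X-VMWARE-VCLOUD-ACCESS-TOKEN response header.
login = requests.post(
    f"{VCD_HOST}/cloudapi/1.0.0/sessions/provider",
    auth=("administrator@system", "change-me"),          # placeholder credentials
    headers={"Accept": f"application/json;version={API_VERSION}"},
)
login.raise_for_status()
token = login.headers["X-VMWARE-VCLOUD-ACCESS-TOKEN"]

headers = {
    "Accept": f"application/json;version={API_VERSION}",
    "Authorization": f"Bearer {token}",
}

# List the Edge Clusters registered in vCD (assumed endpoint path).
edge_clusters = requests.get(
    f"{VCD_HOST}/cloudapi/1.0.0/edgeClusters", headers=headers
).json()

for cluster in edge_clusters.get("values", []):
    print(cluster["id"], cluster["name"])
```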
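
For item 2.5, the Primary and Secondary Edge Clusters are set on the org VDC's Network Profile. A minimal sketch of that call, continuing with the headers from the previous example, might look like the following. The networkProfile path and the primaryEdgeCluster / secondaryEdgeCluster field names are assumptions to confirm against the API schema, and the URNs are placeholders.

```python
# Placeholder org VDC and Edge Cluster URNs (substitute real identifiers).
OVDC_URN = "urn:vcloud:vdc:00000000-0000-0000-0000-000000000000"
PRIMARY = {"id": "urn:vcloud:edgeCluster:1111", "name": "edge-cluster-01"}
SECONDARY = {"id": "urn:vcloud:edgeCluster:2222", "name": "edge-cluster-02"}

profile_url = f"{VCD_HOST}/cloudapi/1.0.0/vdcs/{OVDC_URN}/networkProfile"

# Fetch the current Network Profile, set the Edge Clusters, and write it back.
profile = requests.get(profile_url, headers=headers).json()
profile["primaryEdgeCluster"] = PRIMARY       # assumed field name
profile["secondaryEdgeCluster"] = SECONDARY   # assumed field name

updated = requests.put(profile_url, headers=headers, json=profile)
updated.raise_for_status()   # the update is tracked as an asynchronous task
```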


Migrating from Pre-9.7 Edge Placement to 9.7 Edge Cluster

There are two approaches to migrating from a metadata placement configuration to the new Edge Cluster configuration. In this section, we will review both supported options.

  1. Migration to new Edge Cluster and redeployment of tenant Edge 
    1. In this scenario, the provider would like to migrate all edges to a newly created vSphere cluster that will be instantiated as the vCD Edge Cluster.
    2. As discussed in the step-by-step configuration section, the provider would instantiate the new Edge Cluster(s) and then apply the Primary and Secondary Edge Clusters to the explicit organization VDC.
    3. When the tenant Edge is redeployed, vCD will place it into the assigned Edge Cluster (see the redeploy sketch after this list).
    4. Note that this requires a minimal amount of downtime, as the Edge node is recreated on the destination resource construct.
  2. Application of Edge Cluster in relation to existing metadata placement
    1. In this scenario, the provider can utilize the existing Edge location and apply the new Edge Cluster configuration.
    2. If the existing storage policy and system vDC resource pool are reused, redeployment of the tenant Edges can be avoided. Note that the original metadata solution used the default storage policy of the Org VDC.
    3. The provider would need to establish the new Edge Cluster(s) inside of vCD while ensuring that the existing storage policy and system vDC resource pool are used.
    4. Once the Edge Cluster has been applied to the Network Profile of all associated org VDCs, the provider can safely remove the metadata configuration from the provider VDC (see the metadata cleanup sketch after this list).
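
For the first migration option, the tenant Edge redeployment can be triggered through the legacy vCD API once the Edge Cluster assignment is in place. A rough sketch follows, reusing the token and API version from the earlier CloudAPI example; the redeploy action path is our assumption and EDGE_ID is a placeholder, so confirm both before using it. If the bearer token is not accepted by the legacy endpoint in your build, use the x-vcloud-authorization session header instead.

```python
# Placeholder edge gateway identifier for the tenant Edge to be redeployed.
EDGE_ID = "00000000-0000-0000-0000-000000000000"

xml_headers = {
    "Accept": f"application/*+xml;version={API_VERSION}",
    "Authorization": f"Bearer {token}",
}

# Redeploy the tenant Edge; vCD places it into the assigned Edge Cluster.
redeploy = requests.post(
    f"{VCD_HOST}/api/admin/edgeGateway/{EDGE_ID}/action/redeploy",  # assumed path
    headers=xml_headers,
)
redeploy.raise_for_status()   # returns a task to monitor for completion
```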
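
For the second migration option, once every associated org VDC has an Edge Cluster in its Network Profile, the legacy placement metadata can be removed from the provider VDC. The sketch below reuses the xml_headers from the previous sketch and is illustrative only: both the metadata key name and the URL pattern are assumptions, so list the pVDC's metadata first and confirm the exact entry before deleting anything.

```python
# Placeholder provider VDC identifier and a hypothetical metadata key name.
PVDC_ID = "00000000-0000-0000-0000-000000000000"
LEGACY_KEY = "placement.resourcepool.edge"   # hypothetical; confirm the real key first

metadata_url = f"{VCD_HOST}/api/admin/providervdc/{PVDC_ID}/metadata"

# Inspect the existing metadata entries on the provider VDC...
print(requests.get(metadata_url, headers=xml_headers).text)

# ...then remove the legacy Edge placement entry (returns a task).
requests.delete(f"{metadata_url}/{LEGACY_KEY}", headers=xml_headers).raise_for_status()
```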

Videos

In the following two videos, Abhinav and Daniel review the Edge Cluster considerations and walk through configuring them for a provider and an organization.

FAQ

  1. Does this work with NSX-T in vCD 9.7?
    1. Today, this only works with NSX-V.
  2. Do I have to utilize a dedicated Edge Cluster in vCloud Director?
    1. Edge Clusters are optional but highly recommended for large vCD deployments. As discussed above, they minimize the span of broadcast traffic while providing a higher level of availability for tenant Edges.
  3. Is this supported for vCD 9.5?
    1. This is only supported inside of vCD 9.7 and beyond when using version 32 of the API.
  4. What is the interoperability between Edge Cluster and Cross-VDC Networking?
    1. Cross-VDC Networking will work with Edge Clusters.
    2. The UDLR Control VM and Universal Routers are independent of the Edge Cluster solution. This is configured by establishing the fault domain on the vCenter object inside of the vCD Provider UI.
  5. How many Edge Clusters can one have in a vCD instance?
    1. Currently, we have tested up to ten (10) Edge Clusters inside of a vCD instance.
    2. However, an org VDC can only have two (2) – a Primary and Secondary – configured for placement inside of its VDC Network Profile.
  6. What happens when I have an Edge Cluster configured via the Network Profile along with the metadata placement configured at the provider VDC level?
    1. Edge Cluster configuration takes precedence and will deploy the Tenant Edge to the specified Edge Cluster(s).
  7. What about Internal Edges for vApps – do we use Edge Clusters?
    1. At this time, internal vApp Edges do not support Edge Clusters.
    2. Standard placement takes place for vApp Edges.

Summary

In summary, vCD 9.7 provides enhanced availability and control for Edge placement. This is a great addition that allows a provider to discretely optimize traffic and provide a higher level of availability for virtual network services. We hope this was informative and provides clarity on the rationale behind Edge Clusters.