Each datacenter is unique and is designed to serve specific business needs. Depending on those needs, you could have a small or a large ESXi/KVM footprint. NSX-T Data Center can be leveraged to provide networking and security benefits regardless of the size of your datacenter. This blog focuses on a critical infrastructure component of NSX-T Data Center: the NSX-T Edge node. As discussed in my previous blogs, Edge nodes host the centralized components of a logical router and provide centralized services like N-S routing, NAT, DHCP, load balancing, VPN, etc. To consume these services, traffic from compute nodes must go to the Edge node.

These NSX-T Edge nodes can be hosted in a dedicated Edge cluster or in a collapsed Management and Edge cluster, as discussed in the NSX-T Reference Design Guide. NSX-T Edge nodes can also be hosted in a compute cluster in small datacenter topologies, making it a Collapsed Compute and Edge Cluster design. Please refer to the NSX-T Reference Design Guide to understand the pros and cons of using a dedicated cluster vs. a shared cluster.

In this blog, I will cover the various deployment options for the NSX-T Edge node in VM form factor. Let’s start with the simplest deployment option.

Dedicated Management and Edge Cluster OR Collapsed Management and Edge Cluster

Installation of the Edge node VM remains the same in both cases. You don’t need to install NSX-T VIBs on the host where the NSX-T Management components and Edge nodes are hosted. Simply put, the NSX-T Management components and Edge node VMs are deployed on the host leveraging VSS/VDS port groups, and all the physical NICs (pNICs) on that host are owned by the VSS or VDS. Use of a VDS is highly encouraged due to the support and flexibility benefits it provides over the VSS. This is the simplest and recommended configuration.

Now, what if you want to bring the centralized services (hosted on the Edge node) closer to the workloads, or don’t want this traffic to leave the rack unless it’s N-S traffic? This brings us to the next design choice, i.e., the Collapsed Compute and Edge cluster.

Collapsed Compute and Edge cluster

Moving these centralized services closer to your compute workloads has bandwidth advantages, hence the ask for a collapsed Compute and Edge cluster. This deployment option also serves customers with small datacenter topologies well, where it’s hard to dedicate hardware to different clusters.

An NSX-T compute host always has the NSX-T bits installed and is always configured as a transport node. This implies that an N-VDS is installed on the host and that the N-VDS owns one or more of the host's physical NICs. NSX-T Edge deployment on a compute host can be further categorized based on whether the Edge VM is deployed on a VSS/VDS or on the N-VDS. These are the deployment options:

  • NSX-T Edge VM deployment leveraging VSS/VDS portgroups on a compute host. 
  • NSX-T Edge VM deployment leveraging N-VDS VLAN-backed logical switches on a compute host. 

Before we get into details of deployment options, let’s make sure that we understand the basics of N-VDS.  

What is N-VDS?

N-VDS is the next-generation virtual distributed switch installed by NSX-T Manager on transport nodes such as ESXi hosts, KVM hosts, and Edge nodes. Its job is to forward traffic between components running on the transport node (e.g., between virtual machines) or between internal components and the physical network. Just like a VSS or VDS, an N-VDS owns one or more physical NICs; however, it cannot share a physical NIC with a VSS, a VDS, or any other N-VDS.
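The pNIC ownership rule above (a pNIC belongs to exactly one virtual switch, whether VSS, VDS, or N-VDS) can be sketched as a simple validation. This is illustrative Python only, not an NSX-T or vSphere API; the switch and pNIC names are hypothetical.

```python
# Illustrative sketch: a pNIC may be owned by exactly one virtual switch
# (VSS, VDS, or N-VDS). Switch and pNIC names are hypothetical.

def validate_pnic_ownership(switches):
    """switches: dict mapping switch name -> list of pNICs it owns.
    Raises ValueError if any pNIC is claimed by more than one switch."""
    owner = {}
    for switch, pnics in switches.items():
        for pnic in pnics:
            if pnic in owner:
                raise ValueError(
                    f"{pnic} claimed by both {owner[pnic]} and {switch}")
            owner[pnic] = switch
    return owner

# A 4-pNIC host split between a VDS and an N-VDS is a valid layout:
ok = validate_pnic_ownership({"vds0": ["P0", "P1"], "nvds0": ["P2", "P3"]})
```

Passing the same pNIC to two switches (e.g., `{"vds0": ["P0"], "nvds0": ["P0"]}`) raises an error, mirroring the constraint described above.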


Please see the detailed CPU/NIC compatibility matrix before getting started. Let’s start with the simplest install and see what this topology looks like.

NSX-T Edge VM deployed on a host with no NSX-T bits installed (Dedicated Edge Cluster or Collapsed Management and Edge Cluster)

An Edge node VM in NSX-T has a total of 4 vNICs: one vNIC is dedicated to management traffic and the rest of the interfaces are assigned to the DPDK fast path. These fast-path interfaces are used for sending overlay traffic and uplink traffic towards the top-of-rack (TOR) switches. Whether you choose one or more fast-path interfaces for overlay or uplink traffic depends on the design (explained in the NSX-T Reference Design Guide). Typically, customers choose one fast-path interface for overlay traffic and two fast-path interfaces for uplink traffic. If you choose to use only 3 vNICs of the Edge VM, make sure that the 4th vNIC is disconnected.
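The vNIC layout above can be sketched as a small planning function: vNIC0 is always management, the next interfaces go to the DPDK fast path, and any unused vNIC must be disconnected. This is an illustrative sketch of the layout rule, not an NSX-T API; the role names are mine.

```python
# Illustrative sketch of the Edge VM vNIC layout: vNIC0 is management,
# vNIC1-3 are DPDK fast-path interfaces, and unused vNICs stay disconnected.
# Role names are hypothetical, not an NSX-T API.

def plan_edge_vnics(num_fastpath_in_use):
    """Return a role for each of the Edge VM's 4 vNICs."""
    if not 1 <= num_fastpath_in_use <= 3:
        raise ValueError("an Edge VM uses 1 to 3 fast-path vNICs")
    roles = ["management"]
    roles += ["fastpath"] * num_fastpath_in_use
    roles += ["disconnected"] * (3 - num_fastpath_in_use)
    return roles

# Typical design: one overlay + two uplink fast-path interfaces:
print(plan_edge_vnics(3))  # ['management', 'fastpath', 'fastpath', 'fastpath']
# Using only 3 vNICs total leaves the 4th disconnected:
print(plan_edge_vnics(2))  # ['management', 'fastpath', 'fastpath', 'disconnected']
```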

The following diagram shows an NSX-T Edge node VM hosted on an ESXi host, leveraging VSS/VDS port groups configured for different VLANs. NSX-T bits are not installed on this ESXi hypervisor; hence, you don’t see an N-VDS here. Some important points to note in the following diagram:

  • VSS/VDS has dedicated pNICs, P0 and P1. VSS/VDS port groups are used for Edge VM deployment. All the VMKernel interfaces like Management, vMotion, Storage etc. use VSS/VDS as well. 
  • No VLAN tagging is done in the Edge uplink profile, as I am leveraging VLAN tags at the portgroup level.
  • Four VLAN-backed portgroups have been defined. Portgroups Edge-Mgmt-PG and Edge-Transport-PG are configured with a failover order teaming policy, with pNIC P0 as active and pNIC P1 as standby.  
  • Portgroups Edge-Uplink1-PG and Edge-Uplink2-PG have only one pNIC each, pNIC P0 and P1 respectively. Routing adjacency will be established between the Edge VM and the TORs leveraging these portgroups. The teaming policy used here ensures that routing protocol traffic destined for a specific TOR exits the hypervisor on the appropriate uplink interface.  
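The portgroup/teaming layout in the bullets above can be sketched as data. This is a minimal illustrative model (the dict shape is mine, not a vSphere or NSX-T API); the portgroup and pNIC names come from the diagram.

```python
# Illustrative model of the four VLAN-backed portgroups described above.
# The dict shape is hypothetical; names follow the diagram.
portgroups = {
    "Edge-Mgmt-PG":      {"teaming": "failover-order", "active": ["P0"], "standby": ["P1"]},
    "Edge-Transport-PG": {"teaming": "failover-order", "active": ["P0"], "standby": ["P1"]},
    # Each uplink portgroup is pinned to a single pNIC, so routing-protocol
    # traffic toward a given TOR always exits on the matching uplink:
    "Edge-Uplink1-PG":   {"teaming": "failover-order", "active": ["P0"], "standby": []},
    "Edge-Uplink2-PG":   {"teaming": "failover-order", "active": ["P1"], "standby": []},
}

def exit_pnic(pg_name):
    """pNIC a frame leaves on while the active pNIC is healthy."""
    return portgroups[pg_name]["active"][0]

print(exit_pnic("Edge-Uplink1-PG"))  # P0 -> one TOR
print(exit_pnic("Edge-Uplink2-PG"))  # P1 -> the other TOR
```

The design choice here is deterministic exit: because each uplink portgroup has exactly one active pNIC and no standby, BGP/routing traffic to a specific TOR cannot drift to the wrong physical link.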

Figure 1: NSX-T Edge VM deployed on VSS or VDS portgroups

The following video shows the NSX-T Edge installation on VSS or VDS portgroups. In this video, I talk about the uplink profile, the different teaming policies at the portgroup level, why I chose those policies, and the Edge node configuration as a transport node.

Now, let’s look at the Collapsed Compute and Edge Cluster topology. 

NSX-T Edge VM deployed on a compute host using VSS/VDS port groups in Collapsed Compute and Edge Cluster

In a Collapsed Compute and Edge Cluster topology, the compute host is prepared for NSX-T, which implies that the host has an N-VDS installed and is also configured with a tunnel endpoint (TEP). This deployment option is NOT recommended on a host with only two pNICs. The reason is simple: this host now has two virtual switches, a VSS/VDS and an N-VDS, each consuming one pNIC, so there is no redundancy for either virtual switch. A host with 4 or more pNICs is recommended for this deployment option. The NSX-T Edge VM deployment and configuration don’t change from option 1, as the Edge VM still leverages the VSS/VDS portgroups.
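The pNIC-count recommendation above can be expressed as a small check: with two virtual switches on the host (VSS/VDS plus N-VDS), each needs a pair of pNICs to stay redundant. This is an illustrative sketch of that sizing rule, not an NSX-T validation tool.

```python
# Illustrative check of the pNIC sizing rule above: an NSX-T-prepared compute
# host running an Edge VM on VSS/VDS portgroups has two virtual switches,
# and each switch needs two pNICs for redundancy.

def collapsed_cluster_recommendation(total_pnics):
    virtual_switches = 2  # VSS/VDS plus N-VDS
    needed = 2 * virtual_switches
    if total_pnics < needed:
        return (f"not recommended: with {total_pnics} pNICs at least one "
                "virtual switch has no pNIC redundancy")
    return "ok: each virtual switch can own a redundant pair of pNICs"

print(collapsed_cluster_recommendation(2))  # the 2-pNIC case called out above
print(collapsed_cluster_recommendation(4))  # the recommended minimum
```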

The following diagram shows an NSX-T Edge node VM installed on an ESXi host, leveraging VSS port groups and an N-VDS. Some important points to note in the following diagram:

  • VSS/VDS has dedicated pNICs, P0 and P1. VSS/VDS port groups are used for Edge VM deployment. All the VMKernel interfaces like Management, vMotion, Storage etc. use VSS/VDS as well. 
  • N-VDS has dedicated pNICs, P2 and P3.  
  • The compute host is a transport node and has a TEP (tunnel endpoint) IP. The compute host will use pNICs P2 and/or P3 (depending on the teaming policy) to send overlay traffic to other compute hosts (E-W traffic) and to Edge nodes (N-S traffic, or to consume centralized services). 
  • The Edge node is also a transport node and has a TEP IP, just like the compute host. The Edge node sends and receives overlay traffic from compute hosts on pNIC P0 or P1 (depending on the teaming policy) via the Edge-Transport-PG. The same pNICs are leveraged to send N-S traffic via the Edge-Uplink PGs. 

Figure 2: NSX-T Edge VM deployed on VSS or VDS portgroups in a Collapsed Compute and Edge Cluster

Let’s take a look at the third deployment option. 

NSX-T Edge VM deployed on a compute host using N-VDS Logical switches in Collapsed Compute and Edge Cluster

In this deployment option, a 2-pNIC compute host is used and the Edge VM is deployed on the N-VDS. Since the N-VDS consumes both pNICs in this case, we must move all the VMkernel interfaces from the VSS or VDS to the N-VDS. Some important points to note in the following diagram:

  • The NSX-T Edge VM leverages VLAN-backed logical switches (Mgmt, Transport, and Uplink) defined on the N-VDS.  
  • All the logical switches (overlay- and VLAN-backed) use the same teaming policy, as the N-VDS allows only one teaming policy by default, unless you use the VLAN pinning feature introduced in the NSX-T Data Center 2.2 release. 
  • The compute TEP IP and the Edge TEP IP must be in different VLANs; hence, you need an additional VLAN in the underlay. N-S traffic from compute workloads is encapsulated in GENEVE and sent to the Edge node with the compute TEP as the source IP and the Edge TEP as the destination IP. Since these TEPs must sit in different VLANs/subnets, this traffic must be routed via the TOR. 
  • In the diagram, only one VMkernel interface is shown moved to the N-VDS. However, if you have more VMkernel interfaces, like vMotion, Storage, etc., you need to move all of them to the N-VDS as well.  
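The TEP VLAN rule in the bullets above can be sketched as a simple check: when the Edge VM sits on the same N-VDS as the compute TEP, the two TEPs must be in different VLANs/subnets so GENEVE traffic between them is routed via the TOR. This is illustrative Python with hypothetical VLAN numbers, not an NSX-T validation.

```python
# Illustrative check of the TEP VLAN rule above: with the Edge VM on the
# N-VDS, the compute TEP and the Edge TEP must live in different VLANs,
# which forces the GENEVE traffic between them through the TOR (routed).
# VLAN numbers are hypothetical.

def tep_path(compute_tep_vlan, edge_tep_vlan):
    if compute_tep_vlan == edge_tep_vlan:
        raise ValueError("compute TEP and Edge TEP must be in different "
                         "VLANs when the Edge VM is deployed on the N-VDS")
    return "routed via TOR (inter-VLAN)"

print(tep_path(compute_tep_vlan=60, edge_tep_vlan=70))
```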

Figure 3: NSX-T Edge VM deployed on N-VDS in a Collapsed Compute and Edge Cluster

The following video covers the Edge VM installation on the N-VDS.

Finally, you could use the VLAN pinning feature to pin VLAN traffic from different VLAN-backed logical switches to a specific uplink interface. For instance, traffic from a VLAN-backed logical switch Edge-uplink-LS could use just Uplink1, while another VLAN-backed logical switch could use Uplink2.
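VLAN pinning can be pictured as a simple mapping from logical switch to uplink, with unpinned switches falling back to the single default teaming policy. This is an illustrative sketch only; the logical switch and uplink names are hypothetical, and the real feature is configured in NSX-T, not through this code.

```python
# Illustrative sketch of VLAN pinning: pinned VLAN-backed logical switches
# get a dedicated uplink; everything else follows the default teaming policy.
# Logical switch and uplink names are hypothetical.
pinning = {
    "Edge-Uplink1-LS": "Uplink1",
    "Edge-Uplink2-LS": "Uplink2",
}

def uplink_for(logical_switch, default="Uplink1"):
    """Return the pinned uplink if configured, else the default uplink."""
    return pinning.get(logical_switch, default)

print(uplink_for("Edge-Uplink2-LS"))    # pinned -> Uplink2
print(uplink_for("Edge-Transport-LS"))  # unpinned -> default teaming uplink
```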


Your decision to choose a dedicated or a collapsed cluster for the NSX-T Data Center Edge VM will mostly depend on factors such as workload type, compliance, scale, and performance requirements. Whichever you choose, NSX-T Data Center provides complete flexibility in Edge deployment for both designs.

Check out the NSX-T Reference design guide to see our recommendations.