
NSX Edge Bridge and Promiscuous Mode: Avoiding a Common Error

This blog documents a common configuration error encountered when deploying an NSX Edge virtual machine (VM) running a bridge and attaching it to a Distributed Virtual Port Group (DVPG) in promiscuous mode.

Correct Configuration of the NSX Edge Bridge

The NSX Edge bridge extends an NSX segment to a VLAN. The design is straightforward: the Edge where the bridge is instantiated includes:

  • A Tunnel End Point (TEP) connecting to the NSX segment on one side.
  • A VLAN interface, provided by a virtual NIC (VNIC) attached to a DVPG on the host.

Traffic received on the NSX segment is forwarded to the VLAN interface, and vice versa.
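
For reference, the bridge itself is typically instantiated by attaching an edge bridge profile to the segment. Below is a minimal sketch using the NSX Policy API from Python; the manager address, credentials, bridge profile path, segment ID, and VLAN ID are all placeholder assumptions for illustration, not values from this setup.

# Sketch: extend an existing segment to VLAN 100 by attaching an edge bridge
# profile. All names, paths, and credentials below are placeholders.
import requests

NSX = "https://nsx-manager.example.com"
AUTH = ("admin", "********")
SEGMENT_ID = "web-segment"

body = {
    "bridge_profiles": [{
        # Path of a pre-created edge bridge profile (placeholder ID).
        "bridge_profile_path": "/infra/sites/default/enforcement-points/default/edge-bridge-profiles/bridge-profile-1",
        "vlan_ids": ["100"],
    }]
}

# PATCH merges the bridge attachment into the existing segment definition.
resp = requests.patch(f"{NSX}/policy/api/v1/infra/segments/{SEGMENT_ID}",
                      json=body, auth=AUTH, verify=False)  # lab only: no TLS verification
resp.raise_for_status()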

The Edge VM injects traffic into the VLAN on behalf of the multiple VMs in the NSX segment. In the example shown in the above diagram, the Edge VM VNIC will:

  • Transmit traffic into the DVPG with source MAC addresses M1 and M2, which are not the MAC address of the Edge VM VNIC. Therefore, the DVPG must be configured to accept “forged transmits.”
  • Receive traffic with destination MAC addresses M1 and M2. By default, a VNIC only receives unicast traffic targeted at its own MAC address, so MAC learning or promiscuous mode must be enabled on the DVPG for the VNIC to receive traffic destined to other MAC addresses (a configuration sketch follows this list).
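
If you are scripting this, here is a minimal pyVmomi sketch of the DVPG side. The vCenter address, credentials, and portgroup name ("Bridge-PG") are placeholder assumptions; the sketch overrides the security policy in the portgroup's default port configuration.

# Sketch: allow forged transmits (and promiscuous mode, if you go that route)
# on the DVPG backing the Edge VLAN interface. Placeholder names throughout.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only; validate certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
content = si.RetrieveContent()

# Locate the DVPG the Edge VNIC is attached to.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.dvs.DistributedVirtualPortgroup], True)
dvpg = next(pg for pg in view.view if pg.name == "Bridge-PG")

# Override the security policy in the DVPG's default port configuration.
sec = vim.dvs.VmwareDistributedVirtualSwitch.SecurityPolicy(inherited=False)
sec.forgedTransmits = vim.BoolPolicy(inherited=False, value=True)
sec.allowPromiscuous = vim.BoolPolicy(inherited=False, value=True)  # omit if using MAC learning
port_cfg = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
port_cfg.securityPolicy = sec
spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
    configVersion=dvpg.config.configVersion, defaultPortConfig=port_cfg)
dvpg.ReconfigureDVPortgroup_Task(spec)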

The preferred approach is to configure MAC learning on the DVPG. However, if you choose to use promiscuous mode, VMware documentation advises that you must run the following ESXCLI command on the host where the Edge VM resides:

esxcli system settings advanced set -o /Net/ReversePathFwdCheckPromisc -i 1

You must also ensure that the Edge VM cannot relocate via vMotion to an ESXi host that lacks this setting. Failure to do so may result in the connectivity issues detailed below.
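
One way to take the vMotion risk off the table is to apply the setting to every host the Edge VM could land on. Here is a minimal pyVmomi sketch, assuming the same placeholder vCenter as above; the esxcli path /Net/ReversePathFwdCheckPromisc corresponds to the advanced option key Net.ReversePathFwdCheckPromisc.

# Sketch: set Net.ReversePathFwdCheckPromisc=1 on every host, so that no
# vMotion destination is left unconfigured. Placeholder credentials.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    # Depending on the option's definition, the value may need to be a long.
    host.configManager.advancedOption.UpdateOptions(changedValue=[
        vim.option.OptionValue(key="Net.ReversePathFwdCheckPromisc", value=1)])
    print(f"{host.name}: ReversePathFwdCheckPromisc set to 1")

You can double-check any individual host with esxcli system settings advanced list -o /Net/ReversePathFwdCheckPromisc.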

Promiscuous Mode and Flooded Traffic

An unintended side effect of promiscuous mode is the duplication of traffic on hosts with multiple uplinks connected to the same Layer 2 network, a common configuration for vSphere redundancy.

Consider the following scenario:

  • A frame is broadcast within the physical infrastructure and received by both ESXi uplinks, resulting in two copies of the same frame arriving at the DVPG.
  • The vSphere Distributed Switch (VDS) performs a forwarding check that ensures a VM receives only one copy of the flooded traffic, based on the DVPG’s teaming policy.

However, if promiscuous mode is enabled without the ESXCLI command mentioned at the top of this document, the VDS does not apply the forwarding check. The frame from the standby uplink is thus also delivered to the VM’s VNIC. Consequently, the VM receives duplicate frames, as represented below:

Promiscuous mode is generally not recommended in production due to its negative impact on performance. VMs connected to a DVPG in promiscuous mode will needlessly receive all traffic directed to the DVPG and discard most of it. Worse, they will also receive duplicate traffic, as seen in this example.
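
To find out whether this can bite you in your own environment, you can audit which DVPGs currently allow promiscuous mode. A minimal pyVmomi sketch, again with a placeholder vCenter and credentials:

# Sketch: list every DVPG whose default port configuration allows promiscuous
# mode; these are the portgroups whose hosts need the setting discussed above.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.dvs.DistributedVirtualPortgroup], True)
for pg in view.view:
    cfg = pg.config.defaultPortConfig
    if isinstance(cfg, vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy):
        sec = cfg.securityPolicy
        if sec and sec.allowPromiscuous and sec.allowPromiscuous.value:
            print(f"{pg.name}: promiscuous mode allowed")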

Promiscuous Mode and Bridging

The problems caused by promiscuous mode become more severe when the VM is an NSX Edge performing bridging. Consider the following scenario:

  • VM1 on ESXi1 sends traffic that is flooded across a segment extended to a VLAN by an Edge bridge on ESXi2. The packet arrives at the bridge’s overlay interface and is forwarded to the VLAN uplink.
  • The DVPG, operating in promiscuous mode, forwards the traffic on one of its uplinks, say uplink2. Since uplink1 and uplink2 are Layer 2 adjacent, the packet returns to ESXi2 via uplink1.
  • In a non-promiscuous DVPG, this packet would not be forwarded back to the Edge VM’s VNIC thanks to the VDS forwarding check. However, with promiscuous mode enabled, the packet is forwarded again, leading to the bridge re-flooding the packet into the NSX segment, as represented below:

This behavior causes incorrect MAC address learning on the NSX segment.

  • As you can see in the above diagram, the packet initially sourced by VM1’s VNIC is injected by the bridge back into the segment. When they receive this traffic, the hosts update their MAC address table for the segment: MAC address M1 is now learned as coming from the bridge port connected to the segment on ESXi2.
  • When VM2 attempts to send unicast traffic to VM1, this bogus MAC address table entry for the MAC address of VM1 causes the traffic to be forwarded to the physical network via the bridge, where it is dropped, as depicted below:

This results in a very confusing network problem: traffic between VM2 and VM1 is not supposed to go through the bridge at all, yet it is impacted as soon as the bridge is enabled.

Best Practices

If two bridges are configured on the same segment and each introduces a half-loop due to promiscuous mode misconfiguration, you risk creating a full bridging loop, which can severely disrupt network performance. Yes, yes, I’ve used the “L-word” in a document talking about bridging, sorry about that… But all those troubles can be easily dodged:

  • Avoid promiscuous mode (or at least configure it properly).
  • Instead, configure MAC learning on the DVPG to which the bridges are attached (see the sketch after this list).
    Benefits of MAC learning:
    • Eliminates the risk of traffic loops through the uplinks.
    • Simplifies configuration; there is no need to manually configure the ESXCLI command on every host.
    • The VDS will intelligently learn the MAC addresses of the VMs, ensuring only relevant traffic is forwarded to the Edge VNIC. This significantly improves performance compared to promiscuous mode, which forwards all traffic indiscriminately to the Edge VNIC.
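
To close, here is a minimal pyVmomi sketch enabling MAC learning on the bridge DVPG. MAC learning is configured through the MacManagementPolicy introduced with vSphere 6.7 (VDS 6.6.0 or later); the vCenter address, credentials, portgroup name, and MAC limit below are placeholder assumptions.

# Sketch: enable MAC learning (with forged transmits, promiscuous mode off)
# on the DVPG backing the Edge bridge. Placeholder names throughout.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="********", sslContext=ctx)
content = si.RetrieveContent()

view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.dvs.DistributedVirtualPortgroup], True)
dvpg = next(pg for pg in view.view if pg.name == "Bridge-PG")

mac_learning = vim.dvs.VmwareDistributedVirtualSwitch.MacLearningPolicy(
    inherited=False, enabled=True, allowUnicastFlooding=True,
    limit=4096, limitPolicy="drop")  # stop learning new MACs once the table is full
mac_mgmt = vim.dvs.VmwareDistributedVirtualSwitch.MacManagementPolicy(
    inherited=False, forgedTransmits=True, allowPromiscuous=False,
    macLearningPolicy=mac_learning)
port_cfg = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
port_cfg.macManagementPolicy = mac_mgmt
spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
    configVersion=dvpg.config.configVersion, defaultPortConfig=port_cfg)
dvpg.ReconfigureDVPortgroup_Task(spec)

Unicast flooding is deliberately left enabled here: the bridge VNIC must still receive unknown-unicast frames for MAC addresses the VDS has not learned yet.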