
Q&A from Our Recent NSX Webinar

A few weeks ago we offered a free webinar called Yes, You Really Need VMware NSX, based on the very popular session that VMware Certified Instructor (VCI) John Kreuger delivered at VMworld US 2015.

As always when we present on VMware NSX, there were lots of good questions from the audience and we wanted to share a few with you while you check out the recording.

Do you plan to support multicast any time soon to handle unknown unicast?
Currently, the control plane supports three replication modes for the broadcast, unknown unicast, and multicast (BUM) traffic generated by guest VMs. In Unicast mode, the source VTEP sends a unicast copy of the frame to every other VTEP in the Transport Zone. In Multicast mode, the VTEP sends the frame to a multicast group associated with the VNI. Hybrid mode sends a copy of the frame to a multicast address for VTEPs on the local subnet while sending an additional unicast copy to a VTEP proxy on each remote VTEP subnet.
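
As a rough illustration, the sketch below shows how a logical switch's replication mode might be set through the NSX Manager REST API. The endpoint path, XML fields, mode names, and all hostnames and IDs here are assumptions based on the NSX-v API rather than anything covered in the webinar, so verify them against the API guide for your release.

```python
# Minimal sketch: create a logical switch with a chosen replication mode via the
# NSX Manager REST API. The endpoint, XML schema, and mode name (HYBRID_MODE)
# are assumptions -- confirm them in the NSX for vSphere API guide before use.
import requests

NSX_MANAGER = "https://nsxmgr.example.com"   # hypothetical NSX Manager address
SCOPE_ID = "vdnscope-1"                      # hypothetical transport zone ID

payload = """<virtualWireCreateSpec>
  <name>web-tier-ls</name>
  <description>Logical switch for the web tier</description>
  <tenantId>default</tenantId>
  <controlPlaneMode>HYBRID_MODE</controlPlaneMode>
</virtualWireCreateSpec>"""

resp = requests.post(
    f"{NSX_MANAGER}/api/2.0/vdn/scopes/{SCOPE_ID}/virtualwires",
    data=payload,
    headers={"Content-Type": "application/xml"},
    auth=("admin", "password"),   # replace with real credentials
    verify=False,                 # lab only; use proper certificates in production
)
resp.raise_for_status()
print("Created logical switch:", resp.text)  # response contains the new virtualwire ID
```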

What routing protocols are supported on the distributed logical router?
The DLR supports OSPF and BGP.
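
For context, the snippet below sketches how BGP might be enabled on a DLR (which is managed as an edge appliance) through the NSX Manager REST API. The edge ID, endpoint path, XML element names, and addresses are assumptions drawn from the NSX-v API rather than from the webinar; check them against the official documentation.

```python
# Minimal sketch: enable BGP on a DLR through the NSX Manager edge routing API.
# The endpoint path and XML element names are assumptions -- confirm them in the
# NSX for vSphere API guide for your release.
import requests

NSX_MANAGER = "https://nsxmgr.example.com"  # hypothetical NSX Manager address
DLR_EDGE_ID = "edge-1"                      # hypothetical DLR edge ID

bgp_config = """<bgp>
  <enabled>true</enabled>
  <localAS>65001</localAS>
  <bgpNeighbours>
    <bgpNeighbour>
      <ipAddress>192.168.10.1</ipAddress>
      <remoteAS>65000</remoteAS>
    </bgpNeighbour>
  </bgpNeighbours>
</bgp>"""

resp = requests.put(
    f"{NSX_MANAGER}/api/4.0/edges/{DLR_EDGE_ID}/routing/config/bgp",
    data=bgp_config,
    headers={"Content-Type": "application/xml"},
    auth=("admin", "password"),  # replace with real credentials
    verify=False,                # lab only
)
resp.raise_for_status()
print("BGP configuration applied to", DLR_EDGE_ID)
```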

How does NSX work with a Horizon environment?
Please see this solution brief for more information.

Does the main ESG for N/S traffic need jumbo frames enabled for uplink/internal interfaces?
No. The ESG is simply a virtual machine attached to multiple networks, and jumbo frames are not required for any virtual machine connectivity. The jumbo frames requirement applies only to the VTEPs that carry the VXLAN traffic between ESXi hosts.
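
To make the VTEP requirement concrete, the small calculation below shows why a larger MTU is commonly recommended on the VXLAN transport network: the encapsulation adds roughly 50 bytes of outer headers on top of a standard 1500-byte guest frame. The figures are illustrative, not taken from the webinar.

```python
# Rough illustration of why VTEP interfaces need a larger MTU: VXLAN wraps each
# guest frame in outer Ethernet, IP, UDP, and VXLAN headers, so the transport
# network must carry frames bigger than the guest's 1500-byte MTU.
GUEST_MTU = 1500          # standard MTU inside the VM; unchanged by NSX

OUTER_ETHERNET = 14       # outer Ethernet header (18 if an 802.1Q tag is present)
OUTER_IP = 20             # outer IPv4 header
OUTER_UDP = 8             # outer UDP header
VXLAN_HEADER = 8          # VXLAN header carrying the VNI

overhead = OUTER_ETHERNET + OUTER_IP + OUTER_UDP + VXLAN_HEADER
required_transport_mtu = GUEST_MTU + overhead

print(f"VXLAN overhead: {overhead} bytes")                  # 50 bytes
print(f"Minimum transport MTU: {required_transport_mtu}")   # 1550; 1600 is the usual recommendation
```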

Within NSX/VM hypervisor virtualization, can we work directly with subnets and not even identify any VXLAN or VLAN as part of the VM image?
Yes. Just as it has been for years, the networking handled by the hypervisor is transparent to the virtual machine. The virtual machine has never concerned itself with VLAN membership, and the same holds true for VXLAN.

Do you have more information about remote site VPN?
Please look here for more information.

Were any tests made regarding performance/delay/bandwidth when comparing “non NSX” vs NSX (VXLAN)?
We found that VXLAN has a minimal impact on bandwidth and latency, allowing for line-rate transmission. We generally see a 2-5% ESXi host CPU overhead associated with the encapsulation/decapsulation process.