The NSX-T 2.5 release, announced at VMworld 2019 by SVP Umesh Mahajan, marks a milestone for NSX-T. 2019 has been a year of phenomenal growth for VMware NSX-T, with wide adoption by enterprises across several verticals. In 2019 we introduced two ground-breaking releases, NSX-T 2.4 and NSX-T 2.5. With these two releases, NSX-T has become an enterprise-ready system and the de-facto enterprise software-defined networking (SDN) platform of choice.
To support our customers in their network and security virtualization journey, we introduced the NSX-T design guide with the NSX-T 2.0 release, providing guidance on how customers should design their data centers with NSX-T.
Today, we are excited to announce the next version of the NSX-T design guide, based on the generally available NSX-T 2.5 release. It is a foundational overhaul of the design guidance and leading best practices. There have been numerous L2-L7 feature additions and platform enhancements since NSX-T 2.0. This design guide covers the functional aspects of these enhancements and provides design guidance for them.
What readers can expect in the new NSX-T Design Guide:
- Packet walks
- Detailed explanations of several key features: switching, routing, bridging, load balancing, firewall, etc.
- Clear recommendations on NSX-T design for your data center based on your application needs, throughput, performance, convergence, etc.
- A performance chapter at the end, requested by many loyal customers, that dispels the myths about NSX-T performance
What’s New in the NSX-T Design Guide:
Let’s dive deeper into what’s new in the NSX-T design guide.
Platform Enhancements
Let’s start with an architectural change that merges the NSX-T Manager and NSX-T Controller into a single unified appliance. This change introduces redundancy for the NSX-T management plane by consolidating the management, policy, and controller functions into a three-VM cluster, while keeping each function logically separate.
Design Enhancements – Small or Mid Size to Cloud-Scale Data Centers
NSX-T offers an extensible and flexible architecture that’s built to scale. So, whether you are a small data center with 4 ESXi hosts or a large enterprise data center with 1,000+ hosts and massive scale requirements, NSX-T can be leveraged to provide networking and security benefits.
Your data center should be resilient enough to tolerate runtime failures and scalable enough to accommodate growth. More often than not, the NSX-T design discussion around this topic quickly turns into a discussion of the number of hosts needed to run the NSX-T management/controller and edge components. While the answer depends on factors like cost, throughput, convergence requirements, scale, and growth, NSX-T does not impose any restrictions on the placement of NSX-T management and edge components. We have several production deployments with NSX-T deployed in a 4-node ESXi cluster, with the NSX-T Manager and edge VMs sitting right next to the compute workloads.
Having said that, the question remains: when should you dedicate separate hosts to running management, compute, and edge components versus using a shared cluster for all three? This design guide discusses the rationale for choosing one design over the other and provides clear guidance around the following topologies:
- Dedicated cluster(s) for Management, Edge and Compute
- Shared Management and Edge cluster with dedicated Compute clusters
- Shared cluster for Management, Edge and Compute
- 2 pNIC host vs. 4 pNIC host design
The design guide also covers topology and considerations for VxRail and/or VCF (VMware Cloud Foundation) stack integration with NSX-T.
We discussed these design choices in VMworld sessions Next-Generation Reference Design with NSX-T: Part 1 and Part 2.
Business requirements, challenges and priorities aren’t the same for a SMB data center design vs. a large cloud-scale data center. So, the design considerations for these data centers are different as well. We also discussed NSX-T design in another VMworld session fine-tuned for small to mid-size data center customers.
Resilient, Optimized and Simplified Edge Node
The edge node is a critical component of the overall NSX-T architecture, as it provides centralized services and connectivity to the physical fabric. North-south throughput and convergence play a key role in choosing the right edge node for your data center. The NSX-T design guide covers these design choices in depth. In making these recommendations, we wanted to ensure a design that is resilient, optimal, and consistent for both the Edge VM and the bare metal edge, and one that addresses all the use cases.
Design Considerations
Resilient
Let’s discuss each of these design considerations, starting with resiliency. NSX-T 2.5 introduces a new enhancement called failure domains. This feature complements high availability: it protects a service against a rack failure, while built-in high availability protects against failures such as host failure, NIC failure, ToR failure, etc. It can also provide protection against host failure when multiple edge VMs are hosted on the same host.
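As an illustration, failure domains are managed through the NSX-T Manager API. The sketch below builds one failure-domain body per rack, so that an edge cluster spanning two racks never places both members of an HA pair in the same domain. This is a minimal sketch: the field names loosely follow the NSX-T Manager API, but treat them as assumptions to be checked against your NSX-T version rather than a verified reference.

```python
import json

def failure_domain_payload(name, prefer_active=True):
    """Build the request body for creating an edge failure domain.
    Field names follow the shape of the NSX-T Manager API; treat them
    as illustrative rather than authoritative."""
    return {
        "display_name": name,
        # When True, edges in this domain are preferred for hosting
        # the active instance of a service.
        "preferred_active_edge_services": prefer_active,
    }

# One failure domain per rack; prefer rack-1 for active services.
racks = [failure_domain_payload(f"rack-{i}", prefer_active=(i == 1))
         for i in (1, 2)]
print(json.dumps(racks, indent=2))
```

Each edge node would then be tagged with the failure domain of the rack it lives in, letting NSX-T spread active/standby pairs across racks automatically.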
Optimal and Deterministic
Moving on to the next design consideration for the edge: an optimal and deterministic design that not only provides symmetric bandwidth for both overlay and north-south traffic but also maximizes throughput. This version of the design guide introduces a simpler way to configure edge connectivity, referred to as the “Single N-VDS Design”.
To achieve this “Single N-VDS Design”, we have leveraged the following two key features:
- Multi-TEP support on Edge – This feature load balances overlay traffic from the edge across multiple tunnel endpoints (TEPs), each using a separate uplink and hence a different physical NIC.
- Named teaming policy – North-south traffic going towards the physical ToRs can now be pinned to a specific uplink or physical NIC. This allows users to run a simple, deterministic routing topology rather than navigating the slew of issues that arise from routing over MLAG or similar technologies.
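The two features above come together in a single uplink profile. The sketch below shows the shape such a profile might take: a default load-balance-source teaming over two uplinks for multi-TEP overlay traffic, plus two named teamings that each pin a north-south VLAN to one uplink (and hence one ToR). The object structure loosely mirrors the NSX-T uplink profile API, but the field names and values here are illustrative assumptions, not a verified API reference.

```python
import json

def single_nvds_uplink_profile(name, tep_vlan=200):
    """Sketch of an uplink profile enabling the single N-VDS edge design.
    Shape loosely follows NSX-T uplink host switch profiles; illustrative only."""
    return {
        "display_name": name,
        # Default teaming: both uplinks active, source-based load balancing,
        # so each TEP uses a different uplink (multi-TEP).
        "teaming": {
            "policy": "LOADBALANCE_SRCID",
            "active_list": [
                {"uplink_name": "uplink-1", "uplink_type": "PNIC"},
                {"uplink_name": "uplink-2", "uplink_type": "PNIC"},
            ],
        },
        # Named teamings: pin each north-south VLAN segment to one uplink,
        # giving a deterministic path to each ToR without MLAG.
        "named_teamings": [
            {"name": "to-tor-a", "policy": "FAILOVER_ORDER",
             "active_list": [{"uplink_name": "uplink-1", "uplink_type": "PNIC"}]},
            {"name": "to-tor-b", "policy": "FAILOVER_ORDER",
             "active_list": [{"uplink_name": "uplink-2", "uplink_type": "PNIC"}]},
        ],
        "transport_vlan": tep_vlan,  # VLAN carrying TEP (overlay) traffic
    }

profile = single_nvds_uplink_profile("edge-single-nvds")
print(json.dumps(profile, indent=2))
```

With this one profile, overlay traffic is spread across both physical NICs while each BGP peering VLAN takes a known, fixed uplink.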
Simple and Consistent
Providing a consistent design for the bare metal edge and the VM form factor edge, deployed on any vSwitch (VSS, VDS, or N-VDS), has been a special focus. The following diagram shows a bare metal edge node and a VM edge node, both leveraging multi-TEP and named teaming policy and thereby using a single N-VDS for both overlay and north-south traffic.
Scalable and Future-Proof
A future-proof and scalable design addresses more use cases without changes to the Edge node configuration. With the single N-VDS design shown above, you can add service interfaces at will without changing the port groups the Edge VM is connected to. A separate N-VDS can be dedicated to bridging use cases on the same Edge node (VM or bare metal).
Routing and Bridging Enhancements – Enterprise to Service Provider
Bridging plays an important role in providing layer 2 connectivity between virtualized and non-virtualized environments, or between overlay and traditional VLAN workloads. This design guide introduces the NSX-T Bridge, a service that can be instantiated on an NSX-T Edge. The key benefit of using bridging on Data Plane Development Kit (DPDK) enabled edges is high-throughput, scalable traffic forwarding. Bridging design choices with both bare metal edges and edge VMs are discussed.
The NSX-T 2.4 release introduced IPv6 routing support in single-tier and multi-tier topologies, with dual-stack support on all interfaces. MP-BGP with support for both IPv4 and IPv6 address families, along with BGP route influencing knobs, IPv6 route redistribution, filtering, etc. were the key features supported in that release. With the NSX-T 2.5 release, support for duplicate address detection and SLAAC has been added. We are also glad to announce that NSX-T 2.5 has obtained the IPv6 Ready Logo from the IPv6 Forum.
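To make the MP-BGP point concrete, the sketch below builds a Tier-0 BGP neighbor body with both address families enabled over a single session. The field names loosely follow the shape of NSX-T's BGP neighbor configuration, but they are assumptions for illustration, not a verified API reference.

```python
def bgp_neighbor(address, remote_as):
    """Illustrative Tier-0 BGP neighbor body with both address families
    enabled. Field names are assumptions modeled on NSX-T's BGP neighbor
    configuration, not a verified reference."""
    return {
        "neighbor_address": address,
        "remote_as_num": str(remote_as),
        # MP-BGP: carry both IPv4 and IPv6 unicast routes on this session;
        # per-family route filters can be attached here.
        "route_filtering": [
            {"address_family": "IPV4", "enabled": True},
            {"address_family": "IPV6", "enabled": True},
        ],
    }

# Dual-stack peering towards the physical fabric.
peers = [
    bgp_neighbor("192.168.240.1", 65001),
    bgp_neighbor("2001:db8::1", 65001),
]
```

One session per transport (IPv4 and IPv6) with both address families negotiated keeps the peering configuration symmetric across the stack.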
To learn more, visit the IPv6 Ready program website.
Other layer 3 enhancements that are discussed in this revision of the NSX-T design guide:
- Inter-SR Routing
- Support for back-to-back Tier-0 topologies
Load Balancing
This revision of the design guide covers NSX-T load balancing capabilities and their technical implementation. Load balancer deployment modes, such as in-line and one-arm load balancing, are discussed in detail.
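A key difference between the two modes is the return path: in one-arm mode the load balancer is not in the routing path of the servers, so it must source-NAT client traffic to force return traffic back through itself. The sketch below illustrates this with a one-arm pool and virtual server; the object shapes and paths loosely follow NSX-T load balancer objects but are illustrative assumptions, not a verified API reference.

```python
def one_arm_lb_config(vip, member_ips):
    """Illustrative one-arm load balancer configuration.
    Shapes loosely follow NSX-T LB pool / virtual server objects;
    field names and paths here are assumptions for illustration."""
    pool = {
        "display_name": "web-pool",
        "members": [{"ip_address": ip, "port": "80"} for ip in member_ips],
        # One-arm mode: automap SNAT so servers reply to the LB address,
        # keeping return traffic on the path through the load balancer.
        "snat_translation": {"type": "LBSnatAutoMap"},
    }
    virtual_server = {
        "display_name": "web-vip",
        "ip_address": vip,
        "ports": ["80"],
        "pool_path": "/infra/lb-pools/web-pool",  # illustrative path
    }
    return pool, virtual_server

pool, vs = one_arm_lb_config("10.1.1.100", ["10.1.2.11", "10.1.2.12"])
```

In in-line mode, by contrast, the load balancer already sits on the traffic path (typically on a Tier-1 gateway), so SNAT is not required and client source IPs are preserved.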
Security Enhancements
One of the main additions to this revision of the design guide is a practical approach to getting started with NSX-T security: building micro-segmentation policies in phases. Distributed firewall for VLAN-backed workloads is a very common use case, allowing a customer to enhance the security posture of existing applications without changing the network design. The NSX-T design guide also covers deployment options for the distributed firewall for both overlay and VLAN workloads.
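The phased approach can be sketched as an ordered rule set: broad infrastructure rules first, coarse environment separation next, per-application allow-list rules after that, and a default deny last. The sketch below models this; the group paths and service names are hypothetical examples, and the rule shape only loosely follows NSX-T distributed firewall objects.

```python
def dfw_rule(name, sources, destinations, services, action="ALLOW"):
    """One distributed firewall rule. Group paths and service names
    below are hypothetical; the shape loosely follows NSX-T DFW rules."""
    return {
        "display_name": name,
        "source_groups": sources,
        "destination_groups": destinations,
        "services": services,
        "action": action,
    }

GROUPS = "/infra/domains/default/groups"  # illustrative group path prefix

policy_rules = [
    # Phase 1: common infrastructure services, applied broadly.
    dfw_rule("allow-dns", ["ANY"], [f"{GROUPS}/infra-dns"], ["DNS"]),
    # Phase 2: coarse environment separation (e.g. block dev -> prod).
    dfw_rule("block-dev-to-prod", [f"{GROUPS}/dev"], [f"{GROUPS}/prod"],
             ["ANY"], action="DROP"),
    # Phase 3: per-application allow-list.
    dfw_rule("web-to-app", [f"{GROUPS}/web"], [f"{GROUPS}/app"], ["HTTPS"]),
    # Final step: default deny, added only once the phases above are proven.
    dfw_rule("default-deny", ["ANY"], ["ANY"], ["ANY"], action="DROP"),
]
```

The point of the phasing is that the default deny is switched on last, after monitoring confirms the allow-list rules cover all legitimate flows.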
Other security enhancements that are discussed in this revision of the NSX-T design guide:
- Layer 7 APP-ID based firewall policy for both NSX-T distributed and gateway firewalls
- Service insertion capability for both the distributed and gateway firewalls, providing advanced firewall services such as IPS/IDS through integration with the partner ecosystem
Performance
Last but not least, this revision of the design guide includes a section dedicated to performance. It focuses on performance-related considerations for traffic flows both within the NSX-T domain (east-west traffic) and into and out of the NSX-T domain (north-south traffic). It provides guidance on which features to look for when choosing NICs for both compute and edge nodes, and on how to confirm what is supported for any given NIC. While the focus of this section is typical data center workloads, it also includes references to other resources for telco-type workloads.
Happy Reading!
This design guide is a collaborative effort of the Networking and Security Business Unit Technical Product Management team and we encourage our readers to send NSX-T Design feedback to [email protected]. We thank early adopters of NSX-T who have provided valuable feedback, including internal VMware teams.