VMware is pleased to announce that VMware NSX-T Data Center 2.2.0 was released on June 5, 2018!

 

With this release we have introduced a number of excellent new features for on-premises deployments, as well as the ability to manage Microsoft Azure-based workloads as part of the NSX Cloud product. VMware NSX-T Data Center has also been updated to provide networking and security infrastructure for VMware Cloud on AWS.

Here is a list of highlighted features that may be of most interest to customers. Note that this is not a complete list of new features; please see the release notes for full details.

Management of Workloads in Microsoft Azure

 

One of the most interesting new features of NSX-T Data Center 2.2 is the enablement of NSX Cloud, which manages networking and security for applications running natively in public clouds, now including Microsoft Azure. This feature enables a true hybrid cloud with management of network security in a single view. Jonathan Morin covers this feature in detail in the following blog post, so rather than repeating all the details here, we highly recommend you review it: https://blogs.vmware.com/networkvirtualization/2018/06/nsx-cloud-a-new-and-improved-model-for-end-to-end-networking-and-security.html/

Enhanced Data Path Mode in N-VDS

 

The NSX-T N-VDS now supports a high-performance mode called Enhanced Data Path when used in conjunction with vSphere 6.7. In this mode, NSX-T provides a hypervisor host-based virtual switch that delivers 3 to 5 times the performance of the VSS/VDS switches that ship by default with vSphere, with superior performance for both small and large packet sizes. This new capability is very popular in the NFV market, where telecommunications operators want a highly performant virtual switch without sacrificing the benefits of virtualization such as vMotion and Predictive DRS. The Enhanced Data Path mode implements key DPDK techniques such as poll mode drivers, flow cache, and optimized packet copy. Specific benefits of the ENS mode include:

  • Ease of Configuration: Easy allocation of compute resources to the N-VDS for data-plane intensive workloads.
  • N-VDS Load Balancer: NSX-T has a built-in switch load balancer that automatically aligns critical elements such as VNF processing cores, lcore handling, the DPDK PMD, and the NIC on the same NUMA node.
  • Full vSphere Support: The underlying N-VDS supports key vSphere functionality like HA, vMotion and DRS.
  • Linear Scale: Performance scales linearly as newer, higher-capacity network interface cards are adopted by the industry. The N-VDS provides the flexibility to add additional lcores for PMD operation and exhibits a linear traffic increase with each additional lcore.
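To give a rough feel for the flow cache technique mentioned above: instead of running the full forwarding lookup for every packet, the switch keys on the flow and reuses the cached decision, so only the first packet of a flow pays the slow-path cost. The sketch below is purely illustrative of the idea, with made-up flow-key fields and actions; it is not the actual N-VDS/DPDK implementation.

```python
# Illustrative flow cache sketch: the first packet of a flow triggers a
# full (expensive) lookup; subsequent packets reuse the cached action.

def full_lookup(flow_key):
    """Stand-in for the expensive slow-path forwarding decision."""
    src, dst, proto = flow_key
    return f"forward:{dst}"

class FlowCache:
    def __init__(self):
        self.cache = {}
        self.misses = 0

    def lookup(self, flow_key):
        action = self.cache.get(flow_key)
        if action is None:          # slow path: first packet of the flow
            self.misses += 1
            action = full_lookup(flow_key)
            self.cache[flow_key] = action
        return action               # fast path for every later packet

fc = FlowCache()
key = ("10.0.0.1", "10.0.0.2", "tcp")
for _ in range(1000):
    fc.lookup(key)
print(fc.misses)  # 1 -- only the first packet of the flow missed
```

The same amortization idea is what lets a poll-mode, DPDK-style data path sustain high packet rates for long-lived flows.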

For more details on this excellent new feature please see the blog by Samuel Kommu here: https://blogs.vmware.com/networkvirtualization/2018/06/myth-busted-who-says-software-based-networking-performance-does-not-match-physical-networking.html/

Improved Controller Cluster Deployment Experience

 

NSX-T now supports automatic deployment of a controller cluster onto a vCenter-defined compute cluster. The NSX admin can configure one or more compute managers (i.e. vCenter servers) to discover vSphere compute clusters of hosts. Once these compute clusters are discovered, controllers can be deployed into a highly available group via the NSX GUI and API. This is the same workflow that was introduced in the previous release for virtual machine-based NSX Edge nodes. The combination of automatic controller and Edge virtual machine deployments significantly improves the deployment of NSX-T in a vSphere environment. KVM customers can continue to deploy controllers and bare-metal Edges manually, or automate the process with their own tools (e.g. Ansible for controllers and PXE for Edges).

Guest VLAN Tagging

 

NSX-T now supports extending VLAN tags to guest VMs attached to NSX-T VLAN-backed logical switches, allowing the NSX admin to “wire” a guest virtual machine to multiple VLANs at the same time. With this feature you can configure one or more VLAN tags on an NSX-T VLAN-backed logical switch, and guests attached to the switch will receive packets tagged with the specified VLANs. One example use of this feature is to wire a single virtual NIC on an Ubuntu machine to multiple VLAN sub-interfaces: one for management and the other for application traffic.
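Inside the guest, standard Linux VLAN sub-interfaces demultiplex on the 802.1Q tag carried in each frame (on Ubuntu, something along the lines of `ip link add link ens160 name ens160.100 type vlan id 100`, where the interface name is just an example). As a small sketch of what the guest actually sees on the wire, here is how the VLAN ID is extracted from a tagged Ethernet frame:

```python
import struct

# Sketch: identifying the 802.1Q VLAN of a tagged Ethernet frame.
# TPID 0x8100 at the EtherType position marks a tagged frame; the low
# 12 bits of the following TCI field carry the VLAN ID.

def vlan_id(frame: bytes):
    """Return the 802.1Q VLAN ID of an Ethernet frame, or None if untagged."""
    ethertype = struct.unpack("!H", frame[12:14])[0]
    if ethertype != 0x8100:       # no 802.1Q tag present
        return None
    tci = struct.unpack("!H", frame[14:16])[0]
    return tci & 0x0FFF           # VLAN ID is the low 12 bits of the TCI

# Toy tagged frame: zeroed dst/src MACs, TPID 0x8100, TCI for VLAN 100,
# then the inner EtherType (IPv4).
frame = bytes(6) + bytes(6) + struct.pack("!HH", 0x8100, 100) + struct.pack("!H", 0x0800)
print(vlan_id(frame))  # 100
```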

Load Balancing Enhancements

 

One key feature of network load balancing is the ability to load balance the secure HTTPS protocol and terminate secure sessions on the NSX-T load balancer. In this release, VMware offers the ability to do both with the built-in NSX-T load balancing feature. In addition, there are new real-time statistics graphs associated with the virtual IP address of the load balancer, including the following:

  • Concurrent Connections
  • New Connection Rate
  • Throughput
  • HTTP Request Rate

NSX-T 2.2 also provides finer access log granularity, allowing configuration on both a per-load-balancer and a per-virtual-server basis.

Finally, this release has a number of other miscellaneous enhancements including the following:

  • WebSocket application support (“enhanced” HTTP protocol)
  • Ability to define, per virtual server, a second pool (a “sorry” pool) to use in case all members of the first pool are down
  • New load balancing rules for “match cookie value” and “match value case insensitive”
  • Support for multiple L4 port ranges
  • Support for the “Weighted Least Connection” load balancing algorithm
  • Slow start is enabled automatically for the following load balancing algorithms, allowing a new server added to an existing pool to take on new connections gradually:
    • Least Connection
    • Weighted Least Connection
  • POST requests can now be limited in size. This setting is via API only.
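The idea behind the “Weighted Least Connection” algorithm mentioned above can be sketched in a few lines: pick the pool member whose active-connection count is lowest relative to its configured weight, so heavier-weighted members absorb proportionally more connections. This is a minimal illustration of the algorithm in general, not the NSX-T implementation, and the member names and numbers are made up.

```python
# Sketch of weighted least-connection selection: choose the member with
# the smallest active/weight ratio. (Illustrative only.)

def pick_member(members):
    """members: list of dicts with 'name', 'weight', and 'active' connections."""
    return min(members, key=lambda m: m["active"] / m["weight"])

pool = [
    {"name": "web1", "weight": 1, "active": 10},  # ratio 10.0
    {"name": "web2", "weight": 3, "active": 12},  # ratio 4.0 -> least loaded
    {"name": "web3", "weight": 2, "active": 9},   # ratio 4.5
]
print(pick_member(pool)["name"])  # web2
```

Slow start, as described above, layers on top of this by ramping a newly added member's effective share up gradually instead of sending it its full proportion at once.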

High Performance Layer 2 Bridge with Firewall

 

With this release, NSX-T supports layer 2 bridging between GENEVE-encapsulated networks and VLAN-based networks on the Edge node. This allows the NSX admin to leverage the high-performance, DPDK-based bridging provided by NSX Edge nodes. In addition, the NSX admin can implement a firewall at this layer 2 boundary.

VLAN Ports on Logical Routers

 

With this release of NSX-T, we now support VLAN-backed downlinks on logical routers. This means the NSX admin can connect VLAN-backed logical switches to Tier-0 or Tier-1 logical routers on a downlink, delivering NSX-T Edge services on VLAN networks. In addition, the NSX admin can connect NSX-T logical routing to other technologies using a VLAN-based layer 2 network as the connection point.

Terraform Provider

 

NSX-T now has an officially supported Terraform Provider for the automation of NSX-T logical objects such as switches, routers, firewall rules, and grouping objects. The NSX-T Terraform Provider can be installed automatically in Terraform using the “terraform init” command, or downloaded as source from the Drivers and Tools tab on the NSX-T download page on vmware.com. Here is a link to the provider on the Terraform web site: https://www.terraform.io/docs/providers/nsxt/index.html

On April 10, 2018, Yasen Simeonov wrote an excellent blog post on how to use the NSX-T Terraform Provider. It can be found here: https://blogs.vmware.com/networkvirtualization/2018/04/nsx-t-automation-with-terraform.html/
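To give a flavor of what this looks like, here is a rough configuration sketch. The hostname, credentials, and transport zone name are placeholders, and the exact attribute set should be confirmed against the provider documentation linked above.

```hcl
# Illustrative NSX-T Terraform Provider sketch -- connection values and
# names are placeholders; see the provider docs for the full schema.

variable "nsx_password" {}

provider "nsxt" {
  host     = "nsx-manager.example.com"
  username = "admin"
  password = var.nsx_password
}

# Look up an existing overlay transport zone by name
data "nsxt_transport_zone" "overlay" {
  display_name = "tz-overlay"
}

# Declare a logical switch on that transport zone
resource "nsxt_logical_switch" "web" {
  display_name      = "web-tier"
  transport_zone_id = data.nsxt_transport_zone.overlay.id
  admin_state       = "UP"
  replication_mode  = "MTEP"
}
```

Running “terraform init” in a directory containing such a configuration downloads the provider automatically, after which “terraform plan” and “terraform apply” drive the NSX-T API.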

Network I/O Control v3 on the N-VDS

 

Network I/O Control version 3 (NIOCv3) allows configurable limits and shares on the network for both system-generated and user-defined network resource pools, based on the capacity of the physical adapters on an ESXi host. It also enables fine-grained resource control at the VM network adapter level, similar to the model used for allocating CPU and memory resources. In Network I/O Control version 2, bandwidth allocation for virtual machines is configured at the physical adapter level; version 3 lets you configure it at the level of the entire distributed switch (N-VDS).
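The shares model described above divides physical NIC bandwidth proportionally among contending resource pools. Here is a back-of-the-envelope sketch of that arithmetic, with made-up pool names and share values; it illustrates the proportional-shares idea only, not the actual ESXi scheduler.

```python
# Sketch of NIOC-style "shares": under contention, each resource pool
# gets uplink capacity in proportion to its share value.

def allocate(capacity_mbps, pools):
    """pools: dict of pool name -> shares. Returns allocated Mbps per pool."""
    total_shares = sum(pools.values())
    return {name: capacity_mbps * shares / total_shares
            for name, shares in pools.items()}

# A 10 Gbps uplink contended by three traffic types with example shares
alloc = allocate(10_000, {"vm": 100, "vmotion": 50, "mgmt": 50})
print(alloc)  # {'vm': 5000.0, 'vmotion': 2500.0, 'mgmt': 2500.0}
```

Limits and reservations then act as hard caps and guaranteed floors on top of this proportional split.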

Customer Experience Improvement Program

 

NSX now supports the VMware Customer Experience Improvement Program, in which product usage information is collected and reported back to VMware to improve the quality and scale of NSX. Telemetry collection in NSX-T currently focuses on product scale: we collect the number of various critical objects in use in each customer deployment, such as logical switches, logical routers, firewall rules and sections, and the number of hosts configured with NSX. The program is optional and the NSX admin can opt out if desired. Note that no personally identifiable information is collected.