VMware NSX-T Data Center 2.4 was a major release adding new functionality for virtualized networking and security across public, private and hybrid clouds. The release includes a rich set of features, including IPv6 support, context-aware firewall, network introspection features, a new intent-based networking user interface and many more.

Along with these features, another important infrastructure change is the ability to deploy a highly available, clustered management and control plane.

NSX-T 2.4 Unified Appliance Cluster

What is the Highly-Available Cluster?

The highly-available cluster consists of three NSX nodes, where each node runs both the management plane and control plane services. The three nodes form a cluster to provide a highly-available management and control plane. The cluster exposes an application programming interface (API) and a graphical user interface (GUI) to clients, and can be accessed through any of the manager nodes or through a single virtual IP (VIP) associated with the cluster. The VIP can be provided by NSX or created using an external load balancer. This design also makes operations easier, with fewer systems to monitor, maintain and upgrade.

Besides an NSX cluster, you will have to create Transport Zones and Host and Edge Transport Nodes to consume NSX-T Data Center.

  • A Transport Zone defines the scope of hosts and virtual machines (VMs) for participation in the network.
  • A Transport Node is a node capable of participating in NSX-T Data Center overlay or NSX-T Data Center VLAN networking (Edge, Hypervisor, Bare-metal).

You can, of course, manually deploy the three-node NSX-T cluster, create Transport Zones and Transport Nodes, and make it all ready for consumption by going through the GUI workflows. The better approach is automation: it is a lot faster and less error prone.

You can completely automate the deployment today using VMware’s NSX-T Ansible Modules.

NSX-T with Ansible

 

What is Ansible?

Ansible is an agent-less IT-automation engine that works over secure shell (SSH). Once installed, there are no databases to configure and there are no daemons to start or keep running.

Ansible performs automation and orchestration via Playbooks. Playbooks are a YAML definition of automation tasks that describe how a particular piece of work needs to be done. A playbook consists of a series of ‘plays’ that define automation across a set of hosts, known as ‘inventory’. Each ‘play’ consists of multiple ‘tasks’ that can target one or many hosts in the inventory. Each task is a call to an Ansible module.

Ansible modules interact with NSX-T Data Center using standard representational state transfer (REST) APIs, so the only requirement is IP connectivity to NSX-T Data Center.

 

Ansible Install

First, identify a control machine to run Ansible on. It can be a virtual machine, a container or your laptop. Ansible supports a variety of operating systems as its control machine. A full list and instructions can be found here.

VMware’s NSX-T Ansible modules use OVFTool to interact with vCenter Server to deploy the NSX Manager VMs. Once Ansible is installed on the control machine, download OVFTool and install it by executing the binary. Other required Python packages can be easily installed with:
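A minimal sketch of that step; the authoritative package list is in the module repository's README, and pyvmomi and pyopenssl are assumptions here:

```bash
# Assumed Python prerequisites for the NSX-T modules; verify against the repository README
pip install --upgrade pyvmomi pyopenssl
```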

VMware’s NSX-T Ansible modules are completely supported by VMware and are available from the official download site. They can also be downloaded from the GitHub repository.

 

Deploying the First NSX-T Node

Now that you have an environment ready, let’s jump right in and see how to deploy the first NSX-T node. The playbook shown below deploys the first node and waits for all the required services to come online. The playbook and the variables file required to deploy the first NSX-T node are part of the GitHub repository and can be found under the examples folder.
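Here is an abridged sketch of that playbook. The modules nsxt_deploy_ova and nsxt_manager_status come from the ansible-for-nsxt repository; the variable names are illustrative and should match your deploy_nsx_cluster_vars.yml:

```yaml
# 01_deploy_first_node.yml (abridged sketch; see the examples folder for the full version)
- hosts: 127.0.0.1
  connection: local
  become: yes
  vars_files:
    - deploy_nsx_cluster_vars.yml
  tasks:
    - name: Deploy the first NSX Manager node with OVFTool
      nsxt_deploy_ova:
        ovftool_path: "{{ ovftool_path }}"
        datacenter: "{{ nsx_node1.datacenter }}"
        datastore: "{{ nsx_node1.datastore }}"
        portgroup: "{{ nsx_node1.portgroup }}"
        cluster: "{{ nsx_node1.cluster }}"
        vmname: "{{ nsx_node1.hostname }}"
        hostname: "{{ nsx_node1.hostname }}"
        dns_server: "{{ dns_server }}"
        ntp_server: "{{ ntp_server }}"
        gateway: "{{ gateway }}"
        ip_address: "{{ nsx_node1.mgmt_ip }}"
        netmask: "{{ netmask }}"
        admin_password: "{{ nsx_password }}"
        cli_password: "{{ nsx_password }}"
        path_to_ova: "{{ nsx_ova_path }}"
        ova_file: "{{ nsx_ova }}"
        deployment_size: "medium"
        role: "nsx-manager nsx-controller"

    - name: Wait until all required services report as up
      nsxt_manager_status:
        hostname: "{{ nsx_node1.mgmt_ip }}"
        username: "{{ nsx_username }}"
        password: "{{ nsx_password }}"
        validate_certs: False
        wait_time: 50
```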

All variables referenced in the playbook are defined in the file deploy_nsx_cluster_vars.yml. The relevant variables corresponding to the playbook above are:
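An illustrative excerpt; every value here is a placeholder for your environment:

```yaml
# deploy_nsx_cluster_vars.yml (excerpt; illustrative values)
nsx_username: "admin"
nsx_password: "VMware1!VMware1!"
validate_certs: False
ovftool_path: "/usr/bin"
nsx_ova_path: "/home/user/ova"
nsx_ova: "nsx-unified-appliance-2.4.0.0.0.12456291.ova"
netmask: "255.255.255.0"
gateway: "10.114.200.1"
dns_server: "10.114.200.8"
ntp_server: "10.114.200.8"
nsx_node1:
  hostname: "nsx-mgr-01.corp.local"
  mgmt_ip: "10.114.200.11"
  datacenter: "Datacenter"
  cluster: "Management"
  datastore: "vsanDatastore"
  portgroup: "VM-Management"
```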

All variables are replaced using Jinja2 substitution. Once you have customized the variables file, copy the file 01_deploy_first_node.yml and the customized variables file deploy_nsx_cluster_vars.yml to the main Ansible folder (two levels up). Running the playbook to deploy your very first NSX-T node is then a single command:
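```bash
ansible-playbook -v 01_deploy_first_node.yml
```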

The -v in the above command gives verbose output. You can choose to omit the -v altogether or increase the verbosity by adding more ‘v‘s: ‘-vvvv‘. The playbook deploys an NSX Manager node, configures it and checks to make sure all required services are up. You now have a fully functional single-node NSX deployment! You can access the new simplified UI through the node’s IP or FQDN.

 

Configuring the Compute Manager

Configuring a Compute Manager in NSX makes it very easy to prepare your hosts as Transport Nodes. With NSX-T Data Center, you can register one or more vCenter Servers with your NSX Manager. In the playbook below, we invoke the module nsxt_fabric_compute_managers on the items defined in compute_managers:
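A sketch of that playbook; the module name comes from the ansible-for-nsxt repository, and the connection variables follow the earlier examples:

```yaml
# 02_configure_compute_manager.yml (abridged sketch)
- hosts: 127.0.0.1
  connection: local
  vars_files:
    - deploy_nsx_cluster_vars.yml
  tasks:
    - name: Register compute managers with NSX
      nsxt_fabric_compute_managers:
        hostname: "{{ nsx_node1.mgmt_ip }}"
        username: "{{ nsx_username }}"
        password: "{{ nsx_password }}"
        validate_certs: False
        display_name: "{{ item.display_name }}"
        server: "{{ item.mgmt_ip }}"
        origin_type: "{{ item.origin_type }}"
        credential:
          credential_type: UsernamePasswordLoginCredential
          username: "{{ item.username }}"
          password: "{{ item.password }}"
        state: present
      with_items:
        - "{{ compute_managers }}"
```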

The with_items tells Ansible to loop over all the defined Compute Managers and add each of them, one by one. The corresponding variables are:
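An illustrative compute_managers definition with two vCenter Servers (all values are placeholders):

```yaml
# deploy_nsx_cluster_vars.yml (excerpt; illustrative values)
compute_managers:
  - display_name: "vcenter-01"
    mgmt_ip: "10.114.200.6"
    origin_type: "vCenter"
    username: "administrator@vsphere.local"
    password: "VMware1!"
  - display_name: "vcenter-02"
    mgmt_ip: "10.114.200.7"
    origin_type: "vCenter"
    username: "administrator@vsphere.local"
    password: "VMware1!"
```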

Running the playbook is similar to before. Just copy 02_configure_compute_manager.yml to the main Ansible folder and run it:
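```bash
ansible-playbook -v 02_configure_compute_manager.yml
```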

Once the play is complete, you will see two vCenter Servers configured with your NSX Manager.

 

Forming the NSX Cluster

To form an NSX Cluster, you have to deploy two more nodes. Again, we use a playbook written specifically for this:
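A condensed sketch, assuming the auto-deployment module from the repository (nsxt_controller_manager_auto_deployment); the request structure mirrors the NSX cluster-node deployment API and the exact field names may differ by release:

```yaml
# Sketch: deploy the second and third nodes through the first manager
- hosts: 127.0.0.1
  connection: local
  vars_files:
    - deploy_nsx_cluster_vars.yml
  tasks:
    - name: Auto-deploy an additional manager/controller node
      nsxt_controller_manager_auto_deployment:
        hostname: "{{ nsx_node1.mgmt_ip }}"
        username: "{{ nsx_username }}"
        password: "{{ nsx_password }}"
        validate_certs: False
        deployment_requests:
          - roles: [CONTROLLER, MANAGER]
            form_factor: "MEDIUM"
            user_settings:
              cli_password: "{{ nsx_password }}"
              root_password: "{{ nsx_password }}"
            deployment_config:
              placement_type: VsphereClusterNodeVMDeploymentConfig
              vc_name: "{{ item.vc_name }}"
              management_network_id: "{{ item.portgroup_id }}"
              hostname: "{{ item.hostname }}"
              compute_id: "{{ item.cluster_id }}"
              storage_id: "{{ item.datastore_id }}"
              default_gateway_addresses:
                - "{{ gateway }}"
              management_port_subnets:
                - ip_addresses:
                    - "{{ item.mgmt_ip }}"
                  prefix_length: 24
        state: present
      with_items:
        - "{{ additional_nodes }}"
```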

As before, we invoke the module multiple times, once for each item defined in additional_nodes. Running the playbook is again a single step:
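For example (the file name below is an assumption that follows the numbering pattern of the earlier examples):

```bash
# File name assumed to follow the repository's numbering pattern
ansible-playbook -v 03_deploy_second_third_node.yml
```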

You now have a three-node, highly-available cluster. No need for you to deal with any cluster joins or node UUIDs.

 

Configure Transport Zone, Transport Nodes and Edge Clusters

At this point, you are ready to deploy the rest of the logical entities required to consume your NSX deployment. Here, I have defined all the tasks required to deploy Transport Zones, Transport Nodes (which include Host Nodes and Edge Nodes) and Edge Clusters:
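Here is one representative task from that set, the Transport Zone creation; the module name comes from the repository, and the sibling tasks for IP Pools, Uplink Profiles, Transport Node Profiles, Transport Nodes and Edge Clusters follow the same pattern:

```yaml
# Representative task: create transport zones (similar tasks exist for the other entities)
- name: Create transport zones
  nsxt_transport_zones:
    hostname: "{{ nsx_node1.mgmt_ip }}"
    username: "{{ nsx_username }}"
    password: "{{ nsx_password }}"
    validate_certs: False
    display_name: "{{ item.display_name }}"
    transport_type: "{{ item.transport_type }}"
    host_switch_name: "{{ item.host_switch_name }}"
    state: "{{ item.state }}"
  with_items:
    - "{{ transport_zones }}"
```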

Creating a Transport Node Profile makes it easier to configure NSX at a cluster level. In my case, I am assigning IPs to the Transport Nodes from the IP Pool created earlier.
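An illustrative Transport Node Profile definition, with the IP assignment pointing at the IP Pool; the structure follows the repository's examples, and the names are placeholders:

```yaml
# Illustrative transport node profile variables
transport_node_profiles:
  - display_name: "tnp-compute"
    host_switches:
      - host_switch_name: "nvds-overlay"
        host_switch_profiles:
          - name: "uplink-profile-1"
            type: UplinkHostSwitchProfile
        pnics:
          - device_name: "vmnic1"
            uplink_name: "uplink-1"
        ip_assignment_spec:
          resource_type: StaticIpPoolSpec
          ip_pool_name: "tep-ip-pool"
    transport_zone_endpoints:
      - transport_zone_name: "tz-overlay"
```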

Note that an Edge Node is created as a Transport Node. This means the module that creates a standalone Transport Node creates an Edge Node too! Of course, the variables required to create an Edge Node are slightly different from those for adding a new Host Transport Node. In my example below, you can see the variables required to create an Edge Node.
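An illustrative edge Transport Node definition; all values are placeholders, and the structure mirrors the NSX TransportNode API used by the repository's examples:

```yaml
# Illustrative edge transport node variables
edge_transport_nodes:
  - display_name: "edge-node-01"
    transport_zone_endpoints:
      - transport_zone_name: "tz-overlay"
    node_deployment_info:
      resource_type: EdgeNode
      display_name: "edge-node-01"
      ip_addresses:
        - "10.114.200.21"
      deployment_config:
        form_factor: MEDIUM
        node_user_settings:
          cli_password: "VMware1!VMware1!"
          root_password: "VMware1!VMware1!"
        vm_deployment_config:
          placement_type: VsphereDeploymentConfig
          vc_name: "vcenter-01"
          compute_id: "domain-c9"
          storage_id: "datastore-12"
          management_network_id: "network-15"
          data_network_ids:
            - "network-16"
          default_gateway_addresses:
            - "10.114.200.1"
          management_port_subnets:
            - ip_addresses:
                - "10.114.200.21"
              prefix_length: 24
```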

The node_deployment_info block contains all the required fields to deploy an Edge VM. Just like deploying an Edge Node through the UI, the Ansible module requires that a Compute Manager be configured with NSX. The module takes care of deploying the Edge Node and adding it to the NSX management and control plane.

In the example on GitHub, the above tasks for creating the logical entities are split into separate files for easy management. If you want to run all of them together, you can include them in a single playbook:
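A sketch of such a wrapper playbook; the per-entity playbook file names here are assumptions, so use whatever names the split files carry in your copy:

```yaml
# Wrapper: run the per-entity playbooks in order (file names assumed)
- import_playbook: create_transport_zones.yml
- import_playbook: create_ip_pools.yml
- import_playbook: create_uplink_profiles.yml
- import_playbook: create_transport_node_profiles.yml
- import_playbook: create_transport_nodes.yml
- import_playbook: create_edge_clusters.yml
```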

Deleting Entities

Deleting entities through Ansible is as easy as creating them. Just change the “state” to “absent” in the variables file. This tells Ansible to remove the entity if it exists.
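For example, an illustrative transport_zones entry flipped to absent:

```yaml
# Same style of variable entry as before, now marked for deletion
transport_zones:
  - display_name: "tz-overlay"
    transport_type: "OVERLAY"
    host_switch_name: "nvds-overlay"
    state: absent
```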

Then, run the playbooks in the reverse order:
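With the states flipped, the runs look like this (same assumed file names as in the wrapper playbook above):

```bash
# Delete in the reverse of the creation order
ansible-playbook -v create_edge_clusters.yml
ansible-playbook -v create_transport_nodes.yml
ansible-playbook -v create_transport_node_profiles.yml
ansible-playbook -v create_uplink_profiles.yml
ansible-playbook -v create_ip_pools.yml
ansible-playbook -v create_transport_zones.yml
```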

 

Automating with VMware’s NSX-T Ansible modules makes it very easy to manage your infrastructure. You just have to save the variable files in your favorite version control system. Most importantly, the variable files represent your setup; saving them allows you to easily replicate the setup or do a deploy-and-destroy as and when required.

 

NSX-T Data Center Resources

To learn more about NSX-T Data Center, check out the following resources: