VMware NSX-T Data Center 2.4 was a major release adding new virtualized networking and security functionality for public, private and hybrid clouds. The release includes a rich set of features: IPv6 support, context-aware firewall, network introspection, a new intent-based networking user interface and much more.
Along with these features, another important infrastructure change is the ability to deploy a highly available, clustered management and control plane.
What is the Highly-Available Cluster?
The highly-available cluster consists of three NSX nodes, each running both the management plane and control plane services. The three nodes form a cluster that provides a highly available management and control plane, exposing the application programming interface (API) and graphical user interface (GUI) to clients. The cluster can be accessed through any of the managers or through a single virtual IP (VIP) associated with the cluster; the VIP can be provided by NSX itself or by an external load balancer. This design also simplifies operations, with fewer systems to monitor, maintain and upgrade.
Besides an NSX cluster, you will have to create Transport Zones and Host and Edge Transport Nodes to consume NSX-T Data Center.
- A Transport Zone defines the scope of hosts and virtual machines (VMs) for participation in the network.
- A Transport Node is a node capable of participating in NSX-T Data Center overlay or NSX-T Data Center VLAN networking (Edge, hypervisor or bare-metal host).
You can, of course, manually deploy the three-node NSX-T cluster, create Transport Zones and Transport Nodes, and make everything ready for consumption through the GUI workflows. The better approach is automation: it is a lot faster and less error prone.
You can completely automate the deployment today using VMware’s NSX-T Ansible Modules.
What is Ansible?
Ansible is an agent-less IT-automation engine that works over secure shell (SSH). Once installed, there are no databases to configure and there are no daemons to start or keep running.
Ansible performs automation and orchestration via Playbooks. Playbooks are a YAML definition of automation tasks that describe how a particular piece of work needs to be done. A playbook consists of a series of ‘plays’ that define automation across a set of hosts, known as ‘inventory’. Each ‘play’ consists of multiple ‘tasks’ that can target one or many hosts in the inventory. Each task is a call to an Ansible module.
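As a toy illustration, a minimal playbook with a single play and two tasks might look like the sketch below; the 'web' inventory group and the package names are purely illustrative and are not part of the NSX-T examples.

---
# One play, targeting every host in the 'web' inventory group
- hosts: web
  become: yes
  tasks:
    # Each task is a call to a single Ansible module
    - name: Ensure NTP is installed
      yum:
        name: ntp
        state: present

    # A second task against the same hosts
    - name: Ensure the NTP service is running
      service:
        name: ntpd
        state: started
        enabled: yes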
Ansible modules interact with NSX-T Data Center using standard representational state transfer (REST) APIs and the only requirement is IP connectivity to NSX-T Data Center.
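To illustrate, the sketch below uses Ansible's built-in uri module to read basic node properties from the NSX-T REST API; the GET /api/v1/node call, IP address and credentials shown here are for illustration and should be adapted to your environment.

- hosts: 127.0.0.1
  connection: local
  tasks:
    # Read the manager node properties directly over REST
    - name: Query the NSX Manager node API
      uri:
        url: "https://10.114.200.11/api/v1/node"
        method: GET
        user: "admin"
        password: "myPassword!myPassword!"
        force_basic_auth: yes
        validate_certs: no
        return_content: yes
      register: node_info

    # Print the reported NSX version
    - name: Show the node version
      debug:
        msg: "{{ node_info.json.node_version }}"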
Ansible Install
First, identify a control machine to run Ansible from. It can be a virtual machine, a container or your laptop. Ansible supports a variety of operating systems as its control machine; the full list and installation instructions can be found in the Ansible documentation.
VMware’s NSX-T Ansible modules use OVFTool to interact with vCenter Server when deploying the NSX Manager VMs. Once Ansible is installed on the control machine, download OVFTool and install it by executing the binary. The other required Python packages can be installed with:
pip install --upgrade pyvmomi pyvim requests
VMware’s NSX-T Ansible modules are fully supported by VMware and are available from the official download site. They can also be downloaded from the GitHub repository.
Deploying the First NSX-T Node
Now that you have an environment ready, let's jump right in and see how to deploy the first NSX-T node. The playbook shown below deploys the first node and waits for all the required services to come online. The playbook and the variables file required to deploy the first NSX-T node are part of the GitHub repository and can be found under the examples folder.
$> cat 01_deploy_first_node.yml
---
#
# Playbook to deploy the first NSX Appliance node. Also checks the node
# status
#
- hosts: 127.0.0.1
  connection: local
  become: yes
  vars_files:
    - deploy_nsx_cluster_vars.yml
  tasks:
    - name: deploy NSX Manager OVA
      nsxt_deploy_ova:
        ovftool_path: "/usr/bin"
        datacenter: "{{ nsx_node1['datacenter'] }}"
        datastore: "{{ nsx_node1['datastore'] }}"
        portgroup: "{{ nsx_node1['portgroup'] }}"
        cluster: "{{ nsx_node1['cluster'] }}"
        vmname: "{{ nsx_node1['hostname'] }}"
        hostname: "{{ nsx_node1['hostname'] }}"
        dns_server: "{{ dns_server }}"
        dns_domain: "{{ domain }}"
        ntp_server: "{{ ntp_server }}"
        gateway: "{{ gateway }}"
        ip_address: "{{ nsx_node1['mgmt_ip'] }}"
        netmask: "{{ netmask }}"
        admin_password: "{{ nsx_password }}"
        cli_password: "{{ nsx_password }}"
        path_to_ova: "{{ nsx_ova_path }}"
        ova_file: "{{ nsx_ova }}"
        vcenter: "{{ compute_manager['mgmt_ip'] }}"
        vcenter_user: "{{ compute_manager['username'] }}"
        vcenter_passwd: "{{ compute_manager['password'] }}"
        deployment_size: "small"
        role: "nsx-manager nsx-controller"

    - name: Check manager status
      nsxt_manager_status:
        hostname: "{{ nsx_node1['hostname'] }}"
        username: "{{ nsx_username }}"
        password: "{{ nsx_password }}"
        validate_certs: "{{ validate_certs }}"
        wait_time: 50
All variables referenced in the playbook are defined in the file deploy_nsx_cluster_vars.yml. The relevant variables corresponding to the playbook above are:
$> cat deploy_nsx_cluster_vars.yml
{
    "nsx_username": "admin",
    "nsx_password": "myPassword!myPassword!",
    "validate_certs": False,
    "nsx_ova_path": "/home/vmware",
    "nsx_ova": "nsx-unified-appliance-2.4.0.0.0.12456291.ova",

    "domain": "mylab.local",
    "netmask": "255.255.224.0",
    "gateway": "10.114.200.1",
    "dns_server": "10.114.200.8",
    "ntp_server": "10.114.200.8",

    "nsx_node1": {
        "hostname": "mynsx-01.mylab.local",
        "mgmt_ip": "10.114.200.11",
        "datacenter": "Datacenter",
        "cluster": "Management",
        "datastore": "datastore1",
        "portgroup": "VM Network"
    }
}
All variables are replaced using Jinja2 substitution. Once you have customized the variables file, copy 01_deploy_first_node.yml and the customized deploy_nsx_cluster_vars.yml to the main Ansible folder (two levels up). Deploying your very first NSX-T node then takes a single command:
ansible-playbook 01_deploy_first_node.yml -v
The -v in the above command produces verbose output. You can omit the -v altogether or increase the verbosity by adding more 'v's (for example, -vvvv). The playbook deploys an NSX Manager node, configures it and checks that all required services are up. You now have a fully functional single-node NSX deployment, and you can reach the new simplified UI through the node's IP address or FQDN.
Configuring the Compute Manager
Registering a Compute Manager with NSX makes it very easy to prepare your hosts as Transport Nodes. With NSX-T Data Center, you can register one or more vCenter Servers with your NSX Manager. In the playbook below, we invoke the module nsxt_fabric_compute_managers for each item defined in compute_managers:
$> cat 02_configure_compute_manager.yml
---
#
# Playbook to register Compute Managers with NSX Appliance
#
- hosts: 127.0.0.1
  connection: local
  become: yes
  vars_files:
    - deploy_nsx_cluster_vars.yml
  tasks:
    - name: Register compute manager
      nsxt_fabric_compute_managers:
        hostname: "{{ nsx_node1.hostname }}"
        username: "{{ nsx_username }}"
        password: "{{ nsx_password }}"
        validate_certs: "{{ validate_certs }}"
        display_name: "{{ item.display_name }}"
        server: "{{ item.mgmt_ip }}"
        origin_type: "{{ item.origin_type }}"
        credential:
          credential_type: "{{ item.credential_type }}"
          username: "{{ item.username }}"
          password: "{{ item.password }}"
        state: present
      with_items:
        - "{{ compute_managers }}"
The with_items directive tells Ansible to loop over all Compute Managers defined in the variables and register each one using the vCenter credentials in that entry (the usernames and passwords below are placeholders for your environment). The corresponding variables are:
"compute_managers": [
    {
        "display_name": "vcenter-west",
        "mgmt_ip": "10.114.200.6",
        "origin_type": "vCenter",
        "credential_type": "UsernamePasswordLoginCredential",
        "username": "administrator@vsphere.local",
        "password": "myFirstPassword!"
    },
    {
        "display_name": "vcenter-east",
        "mgmt_ip": "10.114.200.8",
        "origin_type": "vCenter",
        "credential_type": "UsernamePasswordLoginCredential",
        "username": "administrator@vsphere.local",
        "password": "mySecondPassword!"
    }
]
Running the playbook is similar to before. Just copy 02_configure_compute_manager.yml to the main Ansible folder and run it:
ansible-playbook 02_configure_compute_manager.yml -v
Once the play completes, you will see two vCenter Servers registered with your NSX Manager.
Forming the NSX Cluster
To form an NSX cluster, you have to deploy two more nodes. Again, we use a playbook written specifically for this:
$> cat 03_deploy_second_third_node.yml
---
#
# Deploys remaining NSX appliance nodes and forms a cluster. Requires the first
# NSX appliance node to be deployed and at least one Compute Manager registered.
#
- hosts: 127.0.0.1
  connection: local
  become: yes
  vars_files:
    - deploy_nsx_cluster_vars.yml
  tasks:
    - name: Deploying additional nodes
      nsxt_controller_manager_auto_deployment:
        hostname: "{{ nsx_node1.hostname }}"
        username: "{{ nsx_username }}"
        password: "{{ nsx_password }}"
        validate_certs: "{{ validate_certs }}"
        deployment_requests:
          - roles:
              - CONTROLLER
              - MANAGER
            form_factor: "SMALL"
            user_settings:
              cli_password: "{{ nsx_password }}"
              root_password: "{{ nsx_password }}"
            deployment_config:
              placement_type: VsphereClusterNodeVMDeploymentConfig
              vc_name: "{{ compute_managers[0]['display_name'] }}"
              management_network_id: "{{ item.portgroup_moid }}"
              hostname: "{{ item.hostname }}"
              compute_id: "{{ item.cluster_moid }}"
              storage_id: "{{ item.datastore_moid }}"
              default_gateway_addresses:
                - "{{ gateway }}"
              dns_servers:
                - "{{ dns_server }}"
              ntp_servers:
                - "{{ ntp_server }}"
              management_port_subnets:
                - ip_addresses:
                    - "{{ item.mgmt_ip }}"
                  prefix_length: "{{ item.prefix }}"
        state: present
      with_items:
        - "{{ additional_nodes }}"
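The additional_nodes variable consumed by this playbook is not shown in the variables file above; based on the fields the play references, each entry might look roughly like the sketch below, where the hostnames, IPs and vCenter MoRef IDs are placeholders for your environment.

"additional_nodes": [
    {
        "hostname": "mynsx-02.mylab.local",
        "mgmt_ip": "10.114.200.12",
        "prefix": 19,
        "portgroup_moid": "network-16",
        "cluster_moid": "domain-c7",
        "datastore_moid": "datastore-21"
    },
    {
        "hostname": "mynsx-03.mylab.local",
        "mgmt_ip": "10.114.200.13",
        "prefix": 19,
        "portgroup_moid": "network-16",
        "cluster_moid": "domain-c7",
        "datastore_moid": "datastore-21"
    }
]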
As before, we invoke the module once for each item defined in additional_nodes. Running the playbook is again a single step:
ansible-playbook 03_deploy_second_third_node.yml -v
You now have a three-node, highly available cluster, and you never had to deal with cluster joins or node UUIDs.
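If you prefer to verify the cluster from a playbook rather than the UI, an illustrative sketch like the one below reads the overall status from the manager's /api/v1/cluster/status REST endpoint; the IP address and credentials are again placeholders.

- hosts: 127.0.0.1
  connection: local
  tasks:
    # Ask any manager node (or the VIP) for the cluster status
    - name: Read NSX cluster status
      uri:
        url: "https://10.114.200.11/api/v1/cluster/status"
        method: GET
        user: "admin"
        password: "myPassword!myPassword!"
        force_basic_auth: yes
        validate_certs: no
        return_content: yes
      register: cluster_status

    # The management cluster should report STABLE once all three nodes are up
    - name: Show the management cluster status
      debug:
        msg: "{{ cluster_status.json.mgmt_cluster_status.status }}"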
Configure Transport Zone, Transport Nodes and Edge Clusters
At this point, you are ready to deploy the rest of the logical entities required to consume your NSX deployment. Here, I have defined all the tasks required to create Transport Zones, Transport Nodes (which include Host Nodes and Edge Nodes) and Edge Clusters:
$> cat setup_infra.yml
---
- hosts: 127.0.0.1
  connection: local
  become: yes
  vars_files:
    - setup_infra_vars.yml
  tasks:
    - name: Create transport zone
      nsxt_transport_zones:
        hostname: "{{ nsx_node1.mgmt_ip }}"
        username: "{{ nsx_username }}"
        password: "{{ nsx_password }}"
        validate_certs: "{{ validate_certs }}"
        resource_type: "TransportZone"
        display_name: "{{ item.display_name }}"
        description: "{{ item.description }}"
        transport_type: "{{ item.transport_type }}"
        host_switch_name: "{{ item.host_switch_name }}"
        state: "{{ state }}"
      with_items:
        - "{{ transport_zones }}"

    - name: Create IP Pools
      nsxt_ip_pools:
        hostname: "{{ nsx_node1.mgmt_ip }}"
        username: "{{ nsx_username }}"
        password: "{{ nsx_password }}"
        validate_certs: "{{ validate_certs }}"
        display_name: "{{ item.display_name }}"
        subnets: "{{ item.subnets }}"
        state: "{{ state }}"
      with_items:
        - "{{ ip_pools }}"

    - name: Create Transport Node Profiles
      nsxt_transport_node_profiles:
        hostname: "{{ nsx_node1.mgmt_ip }}"
        username: "{{ nsx_username }}"
        password: "{{ nsx_password }}"
        validate_certs: "{{ validate_certs }}"
        resource_type: TransportNodeProfile
        display_name: "{{ item.display_name }}"
        description: "{{ item.description }}"
        host_switch_spec:
          resource_type: StandardHostSwitchSpec
          host_switches: "{{ item.host_switches }}"
        transport_zone_endpoints: "{{ item.transport_zone_endpoints }}"
        state: "{{ state }}"
      with_items:
        - "{{ transport_node_profiles }}"

    - name: Create Transport Nodes
      nsxt_transport_nodes:
        hostname: "{{ nsx_node1.mgmt_ip }}"
        username: "{{ nsx_username }}"
        password: "{{ nsx_password }}"
        validate_certs: "{{ validate_certs }}"
        display_name: "{{ item.display_name }}"
        host_switch_spec:
          resource_type: StandardHostSwitchSpec
          host_switches: "{{ item.host_switches }}"
        transport_zone_endpoints: "{{ item.transport_zone_endpoints }}"
        node_deployment_info: "{{ item.node_deployment_info }}"
        state: "{{ state }}"
      with_items:
        - "{{ transport_nodes }}"

    - name: Add edge cluster
      nsxt_edge_clusters:
        hostname: "{{ nsx_node1.mgmt_ip }}"
        username: "{{ nsx_username }}"
        password: "{{ nsx_password }}"
        validate_certs: "{{ validate_certs }}"
        display_name: "{{ item.display_name }}"
        cluster_profile_bindings:
          - profile_id: "{{ item.cluster_profile_binding_id }}"
        members: "{{ item.members }}"
        state: "{{ state }}"
      with_items:
        - "{{ edge_clusters }}"
Creating a Transport Node Profile makes it easier to configure NSX at the cluster level. In my case, I am assigning IPs to the Transport Nodes from the IP Pool created earlier.
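For reference, a minimal sketch of the transport_zones and ip_pools variables in setup_infra_vars.yml could look like the following; the CIDR, gateway and allocation range are placeholders, while the display names and host switch name simply reuse the values from the Edge Node example further below.

"state": "present",

"transport_zones": [
    {
        "display_name": "Overlay-TZ",
        "description": "Overlay transport zone for hosts and edges",
        "transport_type": "OVERLAY",
        "host_switch_name": "nvds"
    }
],

"ip_pools": [
    {
        "display_name": "TEP-IP-Pool",
        "subnets": [
            {
                "cidr": "192.168.100.0/24",
                "gateway_ip": "192.168.100.1",
                "allocation_ranges": [
                    { "start": "192.168.100.10", "end": "192.168.100.100" }
                ]
            }
        ]
    }
]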
Note that an Edge Node is created as a Transport Node. This means the module that creates a standalone Transport Node creates an Edge Node too! Of course, the variables required to create an Edge Node are slightly different from those for adding a new Host Transport Node. The example below shows the variables required to create an Edge Node.
{
    "display_name": "EdgeNode-01",
    "description": "NSX Edge Node 01",
    "host_switches": [
        {
            "host_switch_profiles": [
                {
                    "name": "nsx-edge-single-nic-uplink-profile",
                    "type": "UplinkHostSwitchProfile"
                },
                {
                    "name": "LLDP [Send Packet Disabled]",
                    "type": "LldpHostSwitchProfile"
                }
            ],
            "host_switch_name": "nvds",
            "pnics": [
                {
                    "device_name": "fp-eth0",
                    "uplink_name": "uplink-1"
                }
            ],
            "ip_assignment_spec": {
                "resource_type": "StaticIpPoolSpec",
                "ip_pool_name": "TEP-IP-Pool"
            }
        }
    ],
    "transport_zone_endpoints": [
        {
            "transport_zone_name": "Overlay-TZ"
        }
    ],
    "node_deployment_info": {
        "deployment_type": "VIRTUAL_MACHINE",
        "deployment_config": {
            "vm_deployment_config": {
                "vc_name": "vcenter",
                "compute_id": "domain-c7",
                "storage_id": "datastore-21",
                "host_id": "host-20",
                "management_network_id": "network-16",
                "hostname": "edgenode-01.lab.local",
                "data_network_ids": [
                    "network-16",
                    "dvportgroup-24",
                    "dvportgroup-24"
                ],
                "management_port_subnets": [
                    {
                        "ip_addresses": [
                            "10.114.200.16"
                        ],
                        "prefix_length": 27
                    }
                ],
                "default_gateway_addresses": [
                    "10.114.200.1"
                ],
                "allow_ssh_root_login": true,
                "enable_ssh": true,
                "placement_type": "VsphereDeploymentConfig"
            },
            "form_factor": "MEDIUM",
            "node_user_settings": {
                "cli_username": "admin",
                "root_password": "myPassword1!myPassword1!",
                "cli_password": "myPassword1!myPassword1!",
                "audit_username": "audit",
                "audit_password": "myPassword1!myPassword1!"
            }
        },
        "resource_type": "EdgeNode",
        "display_name": "EdgeNode-01"
    }
}
The node_deployment_info block contains all the fields required to deploy an Edge VM. Just like deploying an Edge Node through the UI, the Ansible module requires that a Compute Manager be registered with NSX. The module takes care of deploying the Edge Node and joining it to the NSX management and control planes.
In the example on GitHub, the above tasks for creating the logical entities are split into separate playbooks for easier management. If you want to run them all together, you can include them in a single playbook:
$> cat run_everything.yml
---
- import_playbook: 01_deploy_transport_zone.yml
- import_playbook: 02_define_TEP_IP_Pools.yml
- import_playbook: 03_create_transport_node_profiles.yml
- import_playbook: 04_create_transport_nodes.yml
- import_playbook: 05_create_edge_cluster.yml

$> ansible-playbook run_everything.yml -v
Deleting Entities
Deleting entities through Ansible is as easy as creating them. Just change "state" to "absent" in the variables file; this tells Ansible to remove the entity if it exists.
{
    .
    .
    "state": "absent",
    .
    .
}
Then, run the playbooks in the reverse order:
$> cat delete_everything.yml
---
- import_playbook: 05_create_edge_cluster.yml
- import_playbook: 04_create_transport_nodes.yml
- import_playbook: 03_create_transport_node_profiles.yml
- import_playbook: 02_define_TEP_IP_Pools.yml
- import_playbook: 01_deploy_transport_zone.yml

$> ansible-playbook delete_everything.yml -v
Automating with VMware’s NSX-T Ansible modules makes it very easy to manage your infrastructure. You just have to save the variables files in your favorite version control system. Most importantly, the variables files represent your setup, so saving them allows you to easily replicate it, or deploy and destroy it, whenever required.
NSX-T Data Center Resources
To learn more about NSX-T Data Center, check out the following resources:
- Ansible for NSX-T GitHub: https://github.com/vmware/ansible-for-nsxt
- NSX Data Center product page, customer stories, and technical resources
- VMware NSX YouTube Channel, including 40+ technical Light Board videos!
- NSX-T Data Center 2.4 Direct Download, Download Page, Documentation Link