Virtual Cloud Network

Automated deployment of NSX-T 2.5.1 with Ansible

Installing NSX-T means deploying the NSX Manager (nsxmgr), configuring transport zones (TZs), deploying Edge nodes and other components, and these components have to be installed by hand, in a specific order. To install NSX-T successfully we therefore need a reasonable understanding of the NSX-T architecture. In addition, I frequently have to rebuild NSX-T test environments in my own lab, and every manual build takes at least a day of effort. So, is there a way to install NSX-T as code?

Ansible is an IT automation tool that lets us deploy NSX-T automatically with reusable code. With Ansible, an NSX-T installation takes only about an hour: before the installation we simply fill the required parameters into the YAML files, and the installation itself then runs fully automatically. If the NSX-T environment later runs into trouble, we can also use Ansible to delete NSX-T and reinstall it.

The NSX-T Ansible modules are provided at https://github.com/vmware/ansible-for-nsxt. This article walks through installing the NSX-T base infrastructure as code with Ansible.

Requirements and environment preparation

1. Install three ESXi hosts (more will also work).

2. Install vCenter (VC) and add the three ESXi hosts to it.

3. Install Ubuntu Linux; I used 16.04.6. CentOS works as well.

4. Install Ansible on the Ubuntu machine with the following commands:

apt install make git python-setuptools gcc python-dev libffi-dev libssl-dev python-packaging
apt-get install software-properties-common
apt-add-repository ppa:ansible/ansible
apt-get update
apt-get install ansible
apt-get install python-pip
pip install --upgrade pyvmomi pyvim requests
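
A quick sanity check after the installation, for example (exact version numbers will depend on the repositories used):

ansible --version
pip show pyvmomi pyvim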

5. Prepare the NSX-T installation package and copy it to the directory /home/vmware.

Create the directory /home/vmware/ on the Linux machine and put the NSX-T 2.5.1 installation package into it.

Package file name: nsx-unified-appliance-2.5.1.0.0.15314292.ova
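
For example, assuming the OVA was downloaded to the current working directory, the target directory can be prepared like this:

mkdir -p /home/vmware
cp nsx-unified-appliance-2.5.1.0.0.15314292.ova /home/vmware/
ls -lh /home/vmware/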

6. Install ovftool on Ubuntu.

Download VMware-ovftool-4.3.0-7948156-lin.x86_64.bundle and copy it to the home directory on the Linux machine.

chmod +x VMware-ovftool-4.3.0-7948156-lin.x86_64.bundle
sudo ./VMware-ovftool-4.3.0-7948156-lin.x86_64.bundle

Check the ovftool version: ovftool --version

Downloading the NSX-T Ansible scripts

https://github.com/vmware/ansible-for-nsxt

Download the Ansible automation scripts to the local Linux machine and extract them to /root/ansible-for-nsxt-master. Under examples there are two directories, deploy_nsx_cluster and setup_infra; copy all files from both directories into /root/ansible-for-nsxt-master. Note: they must be copied into /root/ansible-for-nsxt-master/, otherwise the Ansible playbooks will fail to run.
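
One way to fetch and extract the scripts (assuming wget and unzip are available and the repository's default branch is still master; cloning with git works just as well, as long as the files end up under /root/ansible-for-nsxt-master):

cd /root
wget https://github.com/vmware/ansible-for-nsxt/archive/master.zip -O ansible-for-nsxt-master.zip
unzip ansible-for-nsxt-master.zip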

cp /root/ansible-for-nsxt-master/examples/deploy_nsx_cluster/* /root/ansible-for-nsxt-master/
cp /root/ansible-for-nsxt-master/examples/setup_infra/* /root/ansible-for-nsxt-master/

Automated installation of NSX Manager and registration of VC

1. Edit deploy_nsx_cluster_vars.yml:

# Copyright 2018 VMware, Inc.
# SPDX-License-Identifier: BSD-2-Clause OR GPL-3.0-only
#
# Variables file for deploying NSX-T Cluster
#
{

#
# Common NSX Appliance variables
# Username and password of the NSX Manager
"nsx_username": "admin",
"nsx_password": "VMware123456",
"validate_certs": False,

#
# OVA/OVF Information. Path can be on local file system or a HTTP URL
# File path of the NSX Manager OVA
"nsx_ova_path": "/home/vmware",
"nsx_ova": "nsx-unified-appliance-2.5.1.0.0.15314292.ova",

#
# Common network details. This assumes all NSX appliance nodes are on the
# same subnet. If there is a need to deploy NSX appliance nodes which are
# on different subnets, add node specific details in the blocks below and
# use them in the playbooks instead.
# Netmask, gateway, DNS and NTP for the NSX Manager
"domain": "corp.local",
"netmask": "255.255.255.0",
"gateway": "192.168.110.1",
"dns_server": "192.168.110.10",
"ntp_server": "192.168.110.10",

#
# First NSX appliance node. Defined separate based on the consumption.
# Accepts both IP (IPv4) and FQDN for 'mgmt_ip'
# Deploy the first NSX Manager; its name in VC will be nsxmgr-01b.corp.local
# mgmt_ip is the management IP of the NSX Manager
# datacenter is the datacenter name shown in VC, cluster is the cluster name shown in VC
# datastore is where the NSX Manager VM is deployed; its management interface connects to the VM Network port group
"nsx_node1": {
"hostname": "nsxmgr-01b.corp.local",
"mgmt_ip": "192.168.110.199",
"datacenter": "Datacenter",
"cluster": "Cluster",
"datastore": "datastore1",
"portgroup": "VM Network"
},

#
# Additional nodes defined as an array so that its easier to iterate
# through them in the playbook.
# NOTE: The Datacenter/Cluster/Datastore/Network requires the vCenter MOID
# (Managed Object ID) and not the name
# If only one NSX Manager is needed, additional_nodes below does not need to be configured.
"additional_nodes": [
{
"hostname": "mynsx-02.mylab.local",
"mgmt_ip": "10.114.200.12",
"prefix": "27",
"datacenter_moid": "datacenter-2",
"cluster_moid": "domain-c7",
"datastore_moid": "datastore-15",
"portgroup_moid": "network-16"
},
{
"hostname": "mynsx-03.mylab.local",
"mgmt_ip": "10.114.200.13",
"prefix": "27",
"datacenter_moid": "datacenter-2",
"cluster_moid": "domain-c9",
"datastore_moid": "datastore-21",
"portgroup_moid": "network-16"
}
],

#
# One or more compute managers that have to be registered with NSX
# Register VC with the NSX Manager; the VC name is vcsa-01b, its management IP is 192.168.110.200, credentials as below
"compute_managers": [
{
"display_name": "vcsa-01b",
"mgmt_ip": "192.168.110.200",
"origin_type": "vCenter",
"credential_type": "UsernamePasswordLoginCredential",
"username": "[email protected]",
"password": "VMware12"
}
]
}

Note: when installing NSX-T 2.5 with Ansible, 01_deploy_first_node.yml must be edited so that the role is set to "NSX Manager":

……
# Note: In case of deploying NSX 2.5, the role should be "NSX Manager". The below
# is valid for NSX 2.4 release
# role: "nsx-manager nsx-controller"
role: "NSX Manager"
……

2. Create a file named create_nsxmgr.yml that runs 01_deploy_first_node.yml and 02_configure_compute_manager.yml automatically. The file content is as follows:

root@AnsibleVM:~/ansible-for-nsxt-master# cat create_nsxmgr.yml
---
- import_playbook: 01_deploy_first_node.yml
- import_playbook: 02_configure_compute_manager.yml

3. Run the Ansible playbook that creates the NSX Manager:

root@AnsibleVM:~/ansible-for-nsxt-master# ansible-playbook create_nsxmgr.yml -v

4. Check in VC that nsxmgr-01b.corp.local has been created, then log in to the NSX Manager and confirm that VC has been registered with it.
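
The same can optionally be verified against the NSX Manager REST API; a minimal sketch using curl with the admin credentials and management IP configured above:

curl -k -u admin:VMware123456 https://192.168.110.199/api/v1/cluster/status
curl -k -u admin:VMware123456 https://192.168.110.199/api/v1/fabric/compute-managers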

Automated installation of the other NSX components with Ansible

1. Edit setup_infra_vars.yml:

# Each ESXi host has four NICs: vmnic0 is used for management, vmnic1 for overlay and vmnic2 for VLAN
# Two N-VDS switches are created, nvds-overlay and nvds-vlan, and two transport zones, Overlay-TZ and VLAN-TZ
# Two Edge nodes are deployed as VMs on the ESXi hosts and joined to an Edge cluster; eth0 of each Edge node is used for management, fp-eth0 for overlay and fp-eth1 for VLAN
# Copyright 2018 VMware, Inc.
# SPDX-License-Identifier: BSD-2-Clause OR GPL-3.0-only
#
#
# Variables file for Day-0/1 setup
# Creates the following:
# - 2 Transport Zones
# - 1 IP Pool (used by Edge)
# - 1 Transport Node Profile with 2 TZ endpoints
# - 2 Edge Transport Nodes
# - 2 ESX Host Transport Nodes
# - 1 Edge Cluster with the 2 Edge Nodes
#
{

#
# Flag to create or delete all the objects
# Accepts: 'present' to create; 'absent' to delete
#
"state": "present",

#
# Common NSX Appliance variables
#
"nsx_username": "admin",
"nsx_password": "VMware1!VMware1!",
"validate_certs": False,

#
# First NSX appliance node. Defined separate based on the consumption.
# Accepts both IP (IPv4) and FQDN for 'mgmt_ip'
#
"nsx_node1": {
"hostname": "msxmgr-01b.corp.local",
"mgmt_ip": "192.168.110.199",
"datacenter": "Datacenter",
"cluster": "Cluster",
"datastore": "datastore1",
"portgroup": "VM Network"
},

"transport_zones": [
{
"display_name": "Overlay-TZ",
"description": "NSX Configured Overlay Transport Zone",
"transport_type": "OVERLAY",
"host_switch_name": "nvds-overlay"
},
{
"display_name": "VLAN-TZ",
"description": "NSX Configured VLAN Transport Zone",
"transport_type": "VLAN",
"host_switch_name": "nvds-vlan"
}
],

# TEP IP address pool and gateway IP
"ip_pools": [
{
"display_name": "TEP-IP-Pool",
"subnets": [
{
"allocation_ranges": [
{
"start": "1.1.1.201",
"end": "1.1.1.210"
}
],
"gateway_ip": "1.1.1.1",
"cidr": "1.1.1.0/24"
}
]
}
],

"transport_node_profiles": [
{
"display_name": "Compute-Profile-1",
"description": "Compute Transport Node Profile",
"host_switches": [
{
"host_switch_profiles": [
{
"name": "nsx-default-uplink-hostswitch-profile",
"type": "UplinkHostSwitchProfile"
},
{
"name": "nsx-default-nioc-hostswitch-profile",
"type": "NiocProfile"
},
{
"name": "LLDP [Send Packet Disabled]",
"type": "LldpHostSwitchProfile"
}
],
"host_switch_name": "nvds-overlay",
"pnics": [
{
"device_name": "vmnic1",
"uplink_name": "uplink-1"
}
],
"ip_assignment_spec":
{
"resource_type": "StaticIpPoolSpec",
"ip_pool_name": "TEP-IP-Pool"
}
},
{
"host_switch_profiles": [
{
"name": "nsx-default-uplink-hostswitch-profile",
"type": "UplinkHostSwitchProfile"
},
{
"name": "nsx-default-nioc-hostswitch-profile",
"type": "NiocProfile"
},
{
"name": "LLDP [Send Packet Disabled]",
"type": "LldpHostSwitchProfile"
}
],
"host_switch_name": "nvds-vlan",
"pnics": [
{
"device_name": "vmnic2",
"uplink_name": "uplink-1"
}
],
}
],
"transport_zone_endpoints": [
{
"transport_zone_name": "Overlay-TZ"
},
{
"transport_zone_name": "VLAN-TZ"
}
]
}
],

"transport_nodes": [
{
"display_name": "EdgeNode-01",
"description": "NSX Edge Node 01",
"host_switches": [
{
"host_switch_profiles": [
{
"name": "nsx-edge-single-nic-uplink-profile",
"type": "UplinkHostSwitchProfile"
},
{
"name": "LLDP [Send Packet Disabled]",
"type": "LldpHostSwitchProfile"
}
],
"host_switch_name": "nvds-overlay",
"pnics": [
{
"device_name": "fp-eth0",
"uplink_name": "uplink-1"
}
],
"ip_assignment_spec":
{
"resource_type": "StaticIpPoolSpec",
"ip_pool_name": "TEP-IP-Pool"
}
},
{
"host_switch_profiles": [
{
"name": "nsx-edge-single-nic-uplink-profile",
"type": "UplinkHostSwitchProfile"
},
{
"name": "LLDP [Send Packet Disabled]",
"type": "LldpHostSwitchProfile"
}
],
"host_switch_name": "nvds-vlan",
"pnics": [
{
"device_name": "fp-eth1",
"uplink_name": "uplink-1"
}
],
}
],
"transport_zone_endpoints": [
{
"transport_zone_name": "Overlay-TZ"
},
{
"transport_zone_name": "VLAN-TZ"
}
],
# The display_name is the VM name shown in NSX Manager
# compute_id is the MOID of the cluster in VC; storage_id/host_id/management_network_id can all be obtained from https://vc-fqdn/mob
# data_network_ids can also be obtained from https://vc-fqdn/mob; management_network_id (network-20) is the VM Network the Edge node management interface (eth0) connects to,
# and the entries in data_network_ids are the VM Networks the Edge node overlay and VLAN interfaces (fp-eth0/fp-eth1) connect to
"node_deployment_info": {
"deployment_type": "VIRTUAL_MACHINE",
"deployment_config": {
"vm_deployment_config": {
"vc_name": "vcsa-01b",
"compute_id": "domain-c7",
"storage_id": "datastore-21",
"host_id": "host-15",
"management_network_id": "network-20",
"hostname": "edgenode-01.corp.local",
"data_network_ids": [
"network-20",
"network-20",
"network-20"
# "dvportgroup-24"
],
"management_port_subnets": [
{
"ip_addresses": [ "192.168.110.204" ],
"prefix_length": 24
}
],
"default_gateway_addresses": [ "192.168.110.1" ],
"allow_ssh_root_login": true,
"enable_ssh": true,
"placement_type": "VsphereDeploymentConfig"
},
"form_factor": "MEDIUM",
"node_user_settings": {
"cli_username": "admin" ,
"root_password": "VMware123456",
"cli_password": "VMware123456",
"audit_username": "audit",
"audit_password": "VMware123456"
}
},
"resource_type": "EdgeNode",
"display_name": "EdgeNode-01"
},
},
{
"display_name": "EdgeNode-02",
"description": "NSX Edge Node 02",
"host_switches": [
{
"host_switch_profiles": [
{
"name": "nsx-edge-single-nic-uplink-profile",
"type": "UplinkHostSwitchProfile"
},
{
"name": "LLDP [Send Packet Disabled]",
"type": "LldpHostSwitchProfile"
}
],
"host_switch_name": "nvds-overlay",
"pnics": [
{
"device_name": "fp-eth0",
"uplink_name": "uplink-1"
}
],
"ip_assignment_spec":
{
"resource_type": "StaticIpPoolSpec",
"ip_pool_name": "TEP-IP-Pool"
}
},
{
"host_switch_profiles": [
{
"name": "nsx-edge-single-nic-uplink-profile",
"type": "UplinkHostSwitchProfile"
},
{
"name": "LLDP [Send Packet Disabled]",
"type": "LldpHostSwitchProfile"
}
],
"host_switch_name": "nvds-vlan",
"pnics": [
{
"device_name": "fp-eth1",
"uplink_name": "uplink-1"
}
],
}
],
"transport_zone_endpoints": [
{
"transport_zone_name": "Overlay-TZ"
},
{
"transport_zone_name": "VLAN-TZ"
}
],
"node_deployment_info": {
"deployment_type": "VIRTUAL_MACHINE",
"deployment_config": {
"vm_deployment_config": {
"vc_name": "vcsa-01b",
"compute_id": "domain-c7",
"storage_id": "datastore-23",
"host_id": "host-18",
"management_network_id": "network-20",
"hostname": "edgenode-02.corp.local",
"data_network_ids": [
"network-20",
"network-20",
"network-20"
],
"management_port_subnets": [
{
"ip_addresses": [ "192.168.110.205" ],
"prefix_length": 24
}
],
"default_gateway_addresses": [ "192.168.110.1" ],
"allow_ssh_root_login": true,
"enable_ssh": true,
"placement_type": "VsphereDeploymentConfig"
},
"form_factor": "MEDIUM",
"node_user_settings": {
"cli_username": "admin" ,
"root_password": "VMware123456",
"cli_password": "VMware123456",
"audit_username": "audit",
"audit_password": "VMware123456"
}
},
"resource_type": "EdgeNode",
"display_name": "EdgeNode-02"
},
},
{
"resource_type": "TransportNode",
"display_name": "esx-01b",
"description": "Host Transport Node for first ESXi host",
"host_switches": [
{
"host_switch_name": "nvds-overlay",
"host_switch_profiles": [
{
"name": "nsx-default-uplink-hostswitch-profile",
"type": "UplinkHostSwitchProfile"
},
{
"name": "nsx-default-nioc-hostswitch-profile",
"type": "NiocProfile"
},
{
"name": "LLDP [Send Packet Disabled]",
"type": "LldpHostSwitchProfile"
}
],
"pnics": [
{
"device_name": "vmnic1",
"uplink_name": "uplink-1"
}
],
"ip_assignment_spec": {
"resource_type": "StaticIpPoolSpec",
"ip_pool_name": "TEP-IP-Pool"
}
},
{
"host_switch_name": "nvds-vlan",
"host_switch_profiles": [
{
"name": "nsx-default-uplink-hostswitch-profile",
"type": "UplinkHostSwitchProfile"
},
{
"name": "nsx-default-nioc-hostswitch-profile",
"type": "NiocProfile"
},
{
"name": "LLDP [Send Packet Disabled]",
"type": "LldpHostSwitchProfile"
}
],
"pnics": [
{
"device_name": "vmnic2",
"uplink_name": "uplink-1"
}
],
}
],
"transport_zone_endpoints": [
{
"transport_zone_name": "Overlay-TZ"
},
{
"transport_zone_name": "VLAN-TZ"
}
],
# The thumbprint is obtained by logging in to each ESXi host over SSH and running: openssl x509 -in /etc/vmware/ssl/rui.crt -fingerprint -sha256 -noout
"node_deployment_info": {
"resource_type": "HostNode",
"ip_addresses": ["192.168.110.201"],
"os_type": "ESXI",
"host_credential": {
"username": "root",
"password": "VMware12",
"thumbprint": "E1:52:39:84:A9:D1:A1:13:3E:74:12:CD:B2:10:EA:B3:D9:47:5C:3C:F3:AD:45:A4:BB:3F:3F:ED:DB:52:E9:49"
}
}
},
{
"resource_type": "TransportNode",
"display_name": "esx-02b",
"description": "Host Transport Node for second ESXi host",
"host_switches": [
{
"host_switch_name": "nvds-overlay",
"host_switch_profiles": [
{
"name": "nsx-default-uplink-hostswitch-profile",
"type": "UplinkHostSwitchProfile"
},
{
"name": "nsx-default-nioc-hostswitch-profile",
"type": "NiocProfile"
},
{
"name": "LLDP [Send Packet Disabled]",
"type": "LldpHostSwitchProfile"
}
],
"pnics": [
{
"device_name": "vmnic1",
"uplink_name": "uplink-1"
}
],
"ip_assignment_spec": {
"resource_type": "StaticIpPoolSpec",
"ip_pool_name": "TEP-IP-Pool"
}
},
{
"host_switch_name": "nvds-vlan",
"host_switch_profiles": [
{
"name": "nsx-default-uplink-hostswitch-profile",
"type": "UplinkHostSwitchProfile"
},
{
"name": "nsx-default-nioc-hostswitch-profile",
"type": "NiocProfile"
},
{
"name": "LLDP [Send Packet Disabled]",
"type": "LldpHostSwitchProfile"
}
],
"pnics": [
{
"device_name": "vmnic2",
"uplink_name": "uplink-1"
}
],
}
],
"transport_zone_endpoints": [
{
"transport_zone_name": "Overlay-TZ"
},
{
"transport_zone_name": "VLAN-TZ"
}
],
"node_deployment_info": {
"resource_type": "HostNode",
"ip_addresses": ["192.168.110.202"],
"os_type": "ESXI",
"host_credential": {
"username": "root",
"password": "VMware12",
"thumbprint": "B1:97:F3:72:70:59:A9:6E:22:9B:8B:ED:AC:CB:82:F9:30:B3:23:FF:B0:92:1E:49:27:0B:92:ED:09:7C:36:03"
}
}
},
{
"resource_type": "TransportNode",
"display_name": "esx-03b",
"description": "Host Transport Node for third ESXi host",
"host_switches": [
{
"host_switch_name": "nvds-overlay",
"host_switch_profiles": [
{
"name": "nsx-default-uplink-hostswitch-profile",
"type": "UplinkHostSwitchProfile"
},
{
"name": "nsx-default-nioc-hostswitch-profile",
"type": "NiocProfile"
},
{
"name": "LLDP [Send Packet Disabled]",
"type": "LldpHostSwitchProfile"
}
],
"pnics": [
{
"device_name": "vmnic1",
"uplink_name": "uplink-1"
}
],
"ip_assignment_spec": {
"resource_type": "StaticIpPoolSpec",
"ip_pool_name": "TEP-IP-Pool"
}
},
{
"host_switch_name": "nvds-vlan",
"host_switch_profiles": [
{
"name": "nsx-default-uplink-hostswitch-profile",
"type": "UplinkHostSwitchProfile"
},
{
"name": "nsx-default-nioc-hostswitch-profile",
"type": "NiocProfile"
},
{
"name": "LLDP [Send Packet Disabled]",
"type": "LldpHostSwitchProfile"
}
],
"pnics": [
{
"device_name": "vmnic2",
"uplink_name": "uplink-1"
}
],
"ip_assignment_spec": {
"resource_type": "StaticIpPoolSpec",
"ip_pool_name": "TEP-IP-Pool"
}
}
],
"transport_zone_endpoints": [
{
"transport_zone_name": "Overlay-TZ"
},
{
"transport_zone_name": "VLAN-TZ"
}
],
"node_deployment_info": {
"resource_type": "HostNode",
"ip_addresses": ["192.168.110.203"],
"os_type": "ESXI",
"host_credential": {
"username": "root",
"password": "VMware12",
"thumbprint": "A9:D8:90:C6:E8:3C:4C:44:85:42:A4:C9:47:62:D7:8C:04:01:0E:C0:7B:BA:96:F1:90:6A:AF:A7:C1:A5:BC:C5"
}
}
}
],

# The cluster_profile_binding_id has to be looked up in NSX Manager
"edge_clusters": [
{
"display_name": "Edge-Cluster-01",
"cluster_profile_binding_id": "91bcaa06-47a1-11e4-8316-17ffc770799b",
"members": [
{
"transport_node_name": "EdgeNode-01"
},
{
"transport_node_name": "EdgeNode-02"
}
]
}
]
}
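
The vCenter MOIDs used above (domain-c7, datastore-21, network-20, host-15 and so on) come from the vCenter Managed Object Browser at https://vc-fqdn/mob. The edge cluster profile ID can, for example, be read from the NSX Manager API; a sketch using curl with the admin credentials set at deployment time:

curl -k -u admin:VMware123456 https://192.168.110.199/api/v1/cluster-profiles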

2. Create a file named create_everything.yml to run all the setup playbooks automatically:

root@AnsibleVM:~/ansible-for-nsxt-master# cat create_everything.yml
---
- import_playbook: 01_deploy_transport_zone.yml
- import_playbook: 02_define_TEP_IP_Pools.yml
- import_playbook: 03_create_transport_node_profiles.yml
- import_playbook: 04_create_transport_nodes.yml
- import_playbook: 05_create_edge_cluster.yml

3. Run the Ansible playbook to create all the resources:

root@AnsibleVM:~/ansible-for-nsxt-master# ansible-playbook create_everything.yml

4. Log in to NSX Manager and verify the created resources:

Two transport zones have been created, Overlay-TZ and VLAN-TZ, each with its own N-VDS.

Two Edge node VMs have been deployed and joined to an Edge cluster.

The ESXi hosts have become transport nodes.
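
If you prefer the API, the same objects can also be listed with curl, for example (admin credentials as set at deployment time):

curl -k -u admin:VMware123456 https://192.168.110.199/api/v1/transport-zones
curl -k -u admin:VMware123456 https://192.168.110.199/api/v1/transport-nodes
curl -k -u admin:VMware123456 https://192.168.110.199/api/v1/edge-clusters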

Summary

With Ansible, installing NSX-T becomes a matter of filling the required parameters into YAML files. By running ansible-playbook we automatically deploy the NSX Manager, register VC with it, turn the ESXi hosts into transport nodes, and deploy the Edge nodes and join them to an Edge cluster.
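
Because setup_infra_vars.yml carries a state flag ("present" to create, "absent" to delete), the same playbooks can also tear the environment down again. A minimal sketch, assuming the environment was built with the files above (the playbooks may need to be imported in reverse order so that dependent objects are removed first):

sed -i 's/"state": "present"/"state": "absent"/' setup_infra_vars.yml
ansible-playbook create_everything.yml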