VMware vRealize Automation 7.3 introduces support for multiple NICs on all nodes. In this blog post, we will cover the steps to configure your vRA environment with multiple NICs and look at some multiple-NIC use cases for vRA 7.3 and beyond. This blog should be helpful for anyone looking to deploy vRealize Automation 7.3 and beyond with multiple NICs. The three use cases we will look at are:
- Separate User and Infrastructure Networks – 2 NICs
- Additional NIC for IaaS nodes to join Active Directory Domain – 2 NICs
- 2-Arm Inline Load Balancer with 3 VAs – 3 NICs
Configure a vRealize Automation 7.3 and beyond environment with additional NICs
Configuring your vRA environment with multiple NICs is easy!
Add additional NICs to your VAs before installing vRA:
For details on adding additional NICs to your vRA environment before installing vRA, please refer to the documentation page: Add NICs Before Running the Installer
Add additional NICs to your VAs after installing vRA:
For details on adding additional NICs to your vRA environment after installing vRA, please refer to the documentation page: Add NICs After Installing vRA
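The vSphere side of adding a NIC can also be scripted. Below is a minimal sketch using pyVmomi to add a second vmxnet3 adapter to a vRA appliance VM; the vCenter address, credentials, VM name, and port group name are placeholders for illustration, and the in-guest network configuration still follows the documentation pages referenced above.

```python
# Minimal pyVmomi sketch: add a second vmxnet3 NIC to a vRA appliance VM.
# The vCenter address, credentials, VM name, and port group below are
# placeholders -- adjust them for your environment. In-guest IP/route
# configuration still follows the vRA documentation referenced above.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

VCENTER = "vcenter.example.com"                         # placeholder
USER, PWD = "administrator@vsphere.local", "changeme"   # placeholders
VM_NAME = "vra-va-01"                                   # placeholder vRA appliance VM
PORTGROUP = "User Network"                              # placeholder port group for the new NIC

ctx = ssl._create_unverified_context()  # lab only; use proper certificates in production
si = SmartConnect(host=VCENTER, user=USER, pwd=PWD, sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == VM_NAME)

    # Build a device-change spec that adds a vmxnet3 adapter on the port group.
    nic = vim.vm.device.VirtualDeviceSpec()
    nic.operation = vim.vm.device.VirtualDeviceSpec.Operation.add
    nic.device = vim.vm.device.VirtualVmxnet3()
    nic.device.backing = vim.vm.device.VirtualEthernetCard.NetworkBackingInfo()
    nic.device.backing.deviceName = PORTGROUP
    nic.device.connectable = vim.vm.device.VirtualDevice.ConnectInfo(
        startConnected=True, connected=True, allowGuestControl=True)

    task = vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[nic]))
    print("Reconfigure task submitted:", task.info.key)
finally:
    Disconnect(si)
```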
vRA Limitations with more than 2 NICs:
vRA has some limitations when configured with more than 2 NICs:
- VIDM needs access to the database and Active Directory
- VIDM needs access to the load balancer URL in a clustered environment
- The above requirements must be covered by the first 2 NICs in the setup
- A 3rd NIC can exist, but it cannot be used or recognized by VIDM at all
- The 3rd NIC cannot be used to connect to Active Directory; use the 1st or 2nd NIC when configuring the identity source
For additional details regarding installing and configuring your vRealize Automation environment, refer to the vRealize Automation documentation.
Use Case 1: Separate User and Infrastructure networks
For this use case, we look at a vRA setup deployed on a network that hosts an organization’s infrastructure and that end users do not have access to. A second NIC is added to the vRA VAs to give end users access to vRA while preventing them from gaining access to resources on the “Infrastructure network”.
Topology:
Hostname and IP examples:
NOTE: The FQDN of the vRA appliances and VIP must be the same on both networks. Split DNS may be required so that the vRA nodes’ and VIP’s FQDNs on the Infrastructure network resolve to the Infrastructure network IPs, and the same FQDNs on the User network resolve to the User network IP addresses. See the above table for clarification.
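To sanity-check a split DNS setup like this, you can resolve the same FQDN against the DNS servers that serve each network and confirm the answers land on the expected networks. Here is a small sketch using the dnspython package; the FQDN, DNS server IPs, and expected addresses are illustrative placeholders.

```python
# Quick split-DNS sanity check (requires the dnspython package).
# All names and IPs below are placeholders -- substitute your own.
import dns.resolver

VRA_VIP_FQDN = "vra.example.com"                  # placeholder VIP FQDN
INFRA_DNS, USER_DNS = "10.10.0.10", "10.20.0.10"  # placeholder DNS servers per network

def resolve_with(nameserver: str, fqdn: str) -> list[str]:
    """Resolve fqdn using only the given DNS server."""
    r = dns.resolver.Resolver(configure=False)
    r.nameservers = [nameserver]
    return sorted(rr.address for rr in r.resolve(fqdn, "A"))

infra_ips = resolve_with(INFRA_DNS, VRA_VIP_FQDN)
user_ips = resolve_with(USER_DNS, VRA_VIP_FQDN)
print(f"{VRA_VIP_FQDN} via Infrastructure DNS: {infra_ips}")
print(f"{VRA_VIP_FQDN} via User DNS:           {user_ips}")
# With split DNS the two answers should be on different networks.
```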
Firewall:
In this use case, we are using NSX security policies to block all traffic from the user network to the vRA Nodes and VIP on the User Network side, except for ports 443 (HTTPS) and 8444 (Remote Console).
We also configure firewall rules on our NSX Edge Load Balancer for additional security.
These settings allow end users to access and use vRealize Automation and to open the remote console for any managed VMs they provision with it. All other ports are blocked to prevent end users from gaining unnecessary access to the VAs. A quick way to verify the policy is sketched below.
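From a machine on the User network, a simple socket test can confirm which ports answer on the user-facing VIP. In the sketch below, the VIP FQDN and the “should be blocked” port list (SSH on 22, the appliance VAMI on 5480) are example assumptions to adapt to your own policy.

```python
# Verify from the User network that only 443 and 8444 are reachable on the VIP.
# The VIP FQDN and the "should be blocked" port list are examples -- adjust
# them to match your own firewall policy.
import socket

VIP = "vra.example.com"          # placeholder user-facing VIP FQDN
ALLOWED = [443, 8444]            # HTTPS and remote console
SHOULD_BE_BLOCKED = [22, 5480]   # e.g. SSH and the appliance VAMI

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port in ALLOWED:
    print(f"{VIP}:{port} open={port_open(VIP, port)}  (expected: True)")
for port in SHOULD_BE_BLOCKED:
    print(f"{VIP}:{port} open={port_open(VIP, port)}  (expected: False)")
```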
Configuration:
To configure this topology with a vRA HA setup, proceed with the normal vRA HA installation but add the following steps before installing:
- Configure your vRA nodes with a second NIC on the User Network and load balance those NICs behind a VIP on the User Network
- Set the appropriate firewall rules on the User Network so that users can only access ports 443 and 8444 from the user network
- Use the same FQDNs for both IPs on your vRA appliances, and the same FQDN for both VIPs. Split DNS may be required to implement this.
If you already have a vRealize Automation 7.3 environment installed and configured, you can add a second NIC to your nodes following the same steps above.
Use Case 2: Additional NIC for IaaS nodes to join Active Directory
In this use case, all nodes in a distributed vRA setup are deployed on an Infrastructure network, but there is no Active Directory server on that network. vRA requires that the IaaS nodes be joined to a domain and use domain service accounts to run the IaaS services. Here, Active Directory is deployed on a separate network, so we add a second NIC to the IaaS nodes and attach it to that network so they can join the domain and use domain service accounts.
Topology:
Hostname and IP examples:
NOTE: The FQDN of all nodes must be the same for both IP addresses in DNS. See the above table.
Configuration:
To configure this topology:
- Add a second NIC to the IaaS nodes before installing vRA
- Join the IaaS nodes to the domain
- Ensure the FQDN for each node is the same on both networks in DNS.
- When installing vRA, run the IaaS services using domain users from the Active Directory domain your IaaS nodes joined. A quick connectivity pre-check is sketched below.
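Before running the installer, it can save time to confirm that each IaaS node actually reaches the domain controller over the second NIC. The sketch below probes the common Active Directory service ports from an IaaS node; the domain controller FQDN and port list are illustrative assumptions to match to your environment.

```python
# Run on an IaaS node to confirm the second NIC can reach Active Directory.
# The domain controller FQDN is a placeholder; the ports are the usual AD
# services (DNS, Kerberos, LDAP, SMB, Global Catalog) -- adjust as needed.
import socket

DOMAIN_CONTROLLER = "dc01.corp.example.com"  # placeholder
AD_PORTS = {53: "DNS", 88: "Kerberos", 389: "LDAP", 445: "SMB", 3268: "Global Catalog"}

for port, service in AD_PORTS.items():
    try:
        with socket.create_connection((DOMAIN_CONTROLLER, port), timeout=3):
            print(f"{service:14} ({port}): reachable")
    except OSError as exc:
        print(f"{service:14} ({port}): NOT reachable ({exc})")
```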
Use Case 3: 2-Arm Inline Load Balancer with 3 VAs and 3 NICs
In this use case, the vRA VAs have 3 NICs. The first NIC connects to a 2-arm inline load-balancing network that users have access to, providing user access to the vRA environment. The second NIC connects each component to the others and to the rest of the infrastructure, such as Active Directory and vCenter Server. The third NIC provides remote console access to the ESXi hosts deployed in the vCenter endpoint that vRA is managing.
vRO is embedded with vRA and is typically used in that configuration; however, for this use case we will use external vRO servers to illustrate how that would be configured. Each external vRO server will also have 3 NICs and will be part of the 2-arm inline load balancer configuration.
vRA Limitations with more than 2 NICs:
The limitations listed earlier apply here as well: VIDM must reach the database, Active Directory, and (in a clustered environment) the load balancer URL through the first 2 NICs. The 3rd NIC can exist, but it is not used or recognized by VIDM and cannot be used to connect to Active Directory, so use the 1st or 2nd NIC when configuring the identity source.
Topology:
Hostname and IP examples:
NOTE: In this use case, we are not using a split DNS configuration. The node FQDNs in DNS resolve to the Infrastructure Network IPs, and the VIP FQDNs resolve to the VIPs defined on the User/LB Network. The Active Directory DNS servers replicate with each other, and end users have access to one of the replicated DNS servers, so they can resolve the VIP FQDNs.
How to Configure:
- If you are configuring multiple NICs in your vRA environment before installing the product, refer to the vRA documentation for steps on adding additional NICs before installing: Add NICs Before Running the Installer. If you’ve already installed vRA and want to add additional NICs, follow the documentation: Add NICs After Installing vRA.
- If you need to configure static routes for your VAs or IaaS nodes, please refer to the vRA documentation: Configure Static Routes with vRA
- Configure your Load Balancer for both the Transit and User/LB Networks, and ensure the Load Balancer has an interface on each network. Configure the pools, monitoring, etc., the same as described in the Load Balancing guide. VIPs will be on the User/LB network and pool members will be on the Transit network. Refer to the vRA Load Balancing Guide for additional load balancing details.
- To add additional NICs to your external vRO nodes, refer to the vRO documentation: Adding Network Interface Controllers to External vRO
- To configure static routers on your external vRO nodes, refer to the vRO documentation: Configure Static Routes on External vRO Nodes
- Create DNS records for each component using the Infrastructure Network IP address.
- Create DNS records for each VIP using the User/LB IP address
- Install vRA as you normally would. Follow the vRA documentation for more details: Installing vRealize Automation
- When configuring a directory for user authentication, be sure to configure it in High Availability mode. Details can be found in the documentation: Configure Directories Management for High Availability
- After installation is complete, users can log in to vRA using the vRA VIP FQDN. A quick reachability check of the user-facing VIP is sketched below.
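Once installation completes, a simple HTTPS probe from a machine on the User/LB network confirms that the user-facing VIP is serving vRA. The sketch below hits /vcac/services/api/health, a URL commonly used as the vRA appliance health monitor; the VIP FQDN is a placeholder, and you should confirm the monitor URL against the Load Balancing Guide for your version.

```python
# Probe the user-facing VIP over HTTPS from a machine on the User/LB network.
# The VIP FQDN is a placeholder, and the health URL should be confirmed against
# the vRA Load Balancing Guide for your version.
import requests
import urllib3

urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

VIP_FQDN = "vra.example.com"  # placeholder user-facing VIP FQDN

resp = requests.get(
    f"https://{VIP_FQDN}/vcac/services/api/health",
    verify=False,   # vRA often uses self-signed certs in lab setups
    timeout=10,
)
print(f"GET /vcac/services/api/health -> {resp.status_code}")
```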
Software Provisioning Configuration:
If you are using Software Provisioning in a vRA environment configured with a 2-Arm Inline Load Balancer, you will need to specify some custom properties in order for Software Provisioning to work. The best place to specify these custom properties is on the endpoint in vRA, because the custom properties will then apply to all blueprints configured with that endpoint.
Custom properties:
software.agent.service.url = https://VRA-LB-FQDN/software-service/api
software.ebs.url = https://VRA-LB-FQDN/event-broker-service/api
agent.download.url = https://VRA-LB-FQDN/software-service/resources/nobel-agent.jar
Replace “VRA-LB-FQDN” with the FQDN of your vRA LB VIP. A small helper for generating and checking these values is sketched below.
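As a convenience, the three property values can be generated from your load balancer FQDN and the agent download URL checked for reachability from the network where provisioned machines will run. The sketch below is illustrative; the FQDN is a placeholder, and applying the properties to the endpoint is still done in the vRA UI.

```python
# Build the Software Provisioning custom property values from the LB FQDN and
# check that the agent download URL answers. The FQDN is a placeholder;
# applying the properties to the endpoint is done in the vRA UI.
import requests
import urllib3

urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

VRA_LB_FQDN = "vra.example.com"  # placeholder: your vRA LB VIP FQDN

properties = {
    "software.agent.service.url": f"https://{VRA_LB_FQDN}/software-service/api",
    "software.ebs.url": f"https://{VRA_LB_FQDN}/event-broker-service/api",
    "agent.download.url": f"https://{VRA_LB_FQDN}/software-service/resources/nobel-agent.jar",
}

for name, value in properties.items():
    print(f"{name} = {value}")

# Optional: verify the agent jar is downloadable from the provisioning network.
resp = requests.head(properties["agent.download.url"], verify=False, timeout=10)
print(f"agent.download.url -> HTTP {resp.status_code}")
```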
Wrapping things up
vRealize Automation 7.3 and beyond provide the ability to add additional NICs to your vRA and IaaS nodes. We highlighted three use cases here, although multiple NICs are applicable to many other scenarios.
For additional details regarding installing and configuring your vRealize Automation 7.3 environment, refer to the vRealize Automation documentation.