Ever since its launch, Azure VMware Solution (AVS) has been getting a lot of traction. Organizations are leveraging it to outsource their vSphere infrastructure to Microsoft without having to refactor all their workloads. And once those workloads are running right next to Azure, they can start taking advantage of the native Azure services.
The architecture AVS uses is based on VMware Cloud Foundation, meaning it has vSphere, NSX-T, and vSAN built in. It also means that VMware's Cloud Management products should work seamlessly. All that was left was to certify the products for use on AVS. I'm excited to report that we have just completed certification of vRealize Network Insight (vRNI) 6.0 and vRealize Network Insight Cloud on AVS.
vRealize Network Insight on Azure VMware Solution
Completing the certification process means that we can fully support the deployment in AVS. Our engineers ran comprehensive tests on the platform for our primary use cases, and everything came back green!
The use cases that have been validated are:
- Application Awareness: Discover, Plan Migration (with VMware HCX) and Day 2 Operations
- Security: Flow Visibility, Planning, and Firewall Rule Recommendations
- Alerts and Analytics: Proactive Alerting, Top Talkers, Outliers, Dynamic Thresholds, Flows (including Latency)
- Dashboards: NSX-T Manager, vCenter, ESXi Hosts, VMs, Applications, and more
- Hybrid Network Troubleshooting:
  - Inter-SDDC Path
  - SDDC to Internet
  - On-premises to SDDC over Policy-Based VPN via NSX
- VMware HCX: Stretched L2 VLAN stitched flows
Visibility into Azure Components
vRNI currently does not support the native Azure components that can be attached to your AVS SDDCs. Visibility into things like ExpressRoute circuits, Virtual Network Gateways, Load Balancers, or VPNs created directly in Azure will come in a future version of vRNI. Please reach out if you would like to work with us on this.
Even without that connectivity troubleshooting piece, the use cases around application awareness, security planning, proactive network monitoring, and migration planning are definitely enough to get our customers excited.
Configuring Azure VMware Solution in vRealize Network Insight
Adding AVS to vRNI works the same way as adding a VMware Cloud on AWS SDDC. In vRNI, you'll add the AVS vCenter as a VMC on AWS vCenter data source and NSX as a regular NSX-T Manager data source. The reason to use the VMC on AWS vCenter data source type is the way Microsoft sets up the vCenter permissions, which is similar to VMC on AWS.
There are three steps per SDDC: deploy a collector into the SDDC, then add the vCenter and the NSX-T Manager as data sources. These steps are the same for both vRNI on-premises and vRNI Cloud. Optionally, you can also add the HCX Connector to monitor its L2 Network Extensions.
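If you prefer to script these steps instead of clicking through the UI, the vRNI public REST API can add data sources as well. Below is a minimal Python sketch of authenticating against the API; the vrni.example.com address and credentials are placeholders, and the endpoint path and payload shape are based on the vRNI public API documentation, so verify them against the API reference for your version. The snippets in the sections below reuse this session.

```python
import requests

VRNI = "https://vrni.example.com"  # placeholder: your vRNI platform address


def get_token(username: str, password: str) -> str:
    """Request an API token from the vRNI platform (path and payload per the public API; verify for your version)."""
    resp = requests.post(
        f"{VRNI}/api/ni/auth/token",
        json={
            "username": username,
            "password": password,
            "domain": {"domain_type": "LOCAL"},  # LOCAL for a local vRNI user
        },
        verify=False,  # assumes a self-signed certificate; use proper certificate validation in production
    )
    resp.raise_for_status()
    return resp.json()["token"]


token = get_token("admin@local", "********")
headers = {"Authorization": f"NetworkInsight {token}"}  # auth scheme used by the vRNI public API
```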
Deploy the Collector
The supported deployment architecture is to run the collector inside the SDDC. This also keeps collection traffic from leaving the AVS SDDC (and incurring egress charges). Put it on any network and make sure it can reach the vCenter, NSX-T Manager, and ESXi hosts bi-directionally. If the network is secured with NSX-T, here's a list of ports to configure in the NSX-T firewall.
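Once the collector is on its network segment, a quick reachability check from that segment can save troubleshooting time later. The sketch below only tests TCP 443 to a few hypothetical management addresses; it is not a substitute for the full port list, and the IP addresses are placeholders for your own SDDC.

```python
import socket

# Hypothetical management addresses inside the AVS SDDC; replace with your own.
TARGETS = {
    "vCenter":       ("10.0.0.2", 443),
    "NSX-T Manager": ("10.0.0.3", 443),
    "ESXi host":     ("10.0.0.10", 443),
}


def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Attempt a TCP connection to host:port and report success or failure."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


for name, (host, port) in TARGETS.items():
    status = "open" if is_reachable(host, port) else "blocked/unreachable"
    print(f"{name:15} {host}:{port} -> {status}")
```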
Add vCenter
Because of how the vCenter permissions are set up, you'll need to use the VMC on AWS – vCenter data source type. In a future release, AVS will be listed separately for clarity.
Use the internal vCenter IP address and select the collector that's deployed inside the SDDC. Make sure to create a dedicated vCenter user for vRNI and give it the CloudAdmin role.
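For the API route, a vCenter data source can be added with a call along these lines. This is a sketch that continues from the authentication snippet above; it uses the generic vCenter data-source path from the vRNI public API, while the exact path and fields for the VMC on AWS – vCenter type may differ, so confirm them in the API reference. All addresses, IDs, and credentials are placeholders.

```python
# Continues the session from the authentication sketch (VRNI, headers).
# Path and payload shape follow the vRNI public API's vCenter data-source call;
# the VMC-type vCenter data source may use a different path -- check the API reference.
vcenter_payload = {
    "ip": "10.0.0.2",                     # internal AVS vCenter address (placeholder)
    "proxy_id": "collector-id-from-api",  # ID of the collector deployed inside the SDDC
    "nickname": "avs-sddc01-vcenter",
    "enabled": True,
    "credentials": {
        "username": "vrni-svc@vsphere.local",  # dedicated vRNI user with the CloudAdmin role
        "password": "********",
    },
}

resp = requests.post(f"{VRNI}/api/ni/data-sources/vcenters",
                     json=vcenter_payload, headers=headers, verify=False)
resp.raise_for_status()
print("vCenter data source added:", resp.json().get("entity_id"))
```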
Add the NSX-T Manager
Next, add the NSX-T Manager as a data source. Select the same collector as before, and make sure to enable the Enable DFW IPFIX and Enable latency metric collection options. These options ensure that flow records and the latency measurements between virtual and physical NICs (even TEP to TEP) are retrieved from NSX-T and displayed in vRNI.
Even though cloudadmin users do not have access to the NSX Manager and NSX Edge VMs inside the AVS vCenter, vRNI still provides NSX Manager and NSX Edge dashboards to monitor them.
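Through the API, the NSX-T Manager data source follows the same pattern. Another sketch with placeholder values; the field names used for the DFW IPFIX and latency options are assumptions and should be checked against the NSX-T data-source schema in the API reference.

```python
# Same session as before (VRNI, headers). The ipfix/latency field names below are
# illustrative assumptions for the UI options "Enable DFW IPFIX" and
# "Enable latency metric collection" -- verify them in the API reference.
nsxt_payload = {
    "ip": "10.0.0.3",                     # NSX-T Manager address inside the SDDC (placeholder)
    "proxy_id": "collector-id-from-api",  # same collector as the vCenter data source
    "nickname": "avs-sddc01-nsxt",
    "enabled": True,
    "credentials": {"username": "vrni-svc", "password": "********"},
    "ipfix_enabled": True,                # UI: Enable DFW IPFIX
    "latency_enabled": True,              # UI: Enable latency metric collection
}

resp = requests.post(f"{VRNI}/api/ni/data-sources/nsxt-managers",
                     json=nsxt_payload, headers=headers, verify=False)
resp.raise_for_status()
```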
Add the HCX Connector
Optionally, you can also add the HCX Connector to vRNI and monitor the traffic going over its L2 extensions. Especially when migrating from on-premises to AVS, these L2 extensions need to be in top shape, and vRNI can help make sure they stay operational.
Here are two examples of charts you get with this integration:
To add VMware HCX as a data source, all you have to supply is the HCX Connector appliance’s IP address and credentials to access it.
HCX is configured directly against the vCenter, so you can use the same credentials you're already monitoring the vCenter with. The IP address of the HCX Connector is typically internal and can be found in the Azure Portal:
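If you want to script this step too, the pattern is the same as for the other data sources. Note that the data-source path below is purely a placeholder: whether, and where, your vRNI version exposes the HCX Connector through the public API needs to be checked in the API reference; the UI is the documented route.

```python
# Purely illustrative: the "hcx-connectors" path is a placeholder and may not exist
# in your vRNI version's public API -- adding the HCX Connector through the UI is
# the documented route. Addresses and credentials are placeholders.
hcx_payload = {
    "ip": "10.0.0.9",                     # internal HCX Connector address from the Azure Portal (placeholder)
    "proxy_id": "collector-id-from-api",
    "nickname": "avs-sddc01-hcx",
    "enabled": True,
    "credentials": {"username": "vrni-svc@vsphere.local", "password": "********"},
}

resp = requests.post(f"{VRNI}/api/ni/data-sources/hcx-connectors",  # placeholder path
                     json=hcx_payload, headers=headers, verify=False)
resp.raise_for_status()
```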
Summary
Once the Azure VMware Solution’s vCenter, NSX-T Manager, and HCX Connector are added as data sources, data collection starts. You’ll find dashboards for the NSX-T Manager, highlighting its configuration, network information, and operational status.
You can zoom in on the NSX Edge dashboards and individual VMs, analyze network traffic behavior, check how much internet egress traffic there is, and use all the other network and security insights that vRealize Network Insight offers.