When we launched vRealize Network Insight 6.2 (vRNI) recently, we also certified vRNI 6.1+ and vRNI Cloud with official support for Google Cloud VMware Engine (GCVE). You can confidently use vRNI to monitor and troubleshoot your GCVE SDDCs and rely on our Global Support Services to help you if required.
GCVE's architecture is based on VMware Cloud Foundation, meaning vSphere, NSX-T, and vSAN are built in.
vRealize Network Insight on Google Cloud VMware Engine
Completing the certification process means that we can fully support the deployment in GCVE. Our engineers ran comprehensive tests on the platform for our primary use cases, and everything came back green!
The use cases that have been validated are:
- Application Awareness: Discover, Plan Migration (with VMware HCX) and Day 2 Operations
- Security: Flow Visibility, Planning, and Firewall Rule Recommendations
- Alerts and Analytics: Pro-active Alerting, Top-talkers, Outliers, Dynamic Thresholds, Flows (including Latency)
- Dashboards: NSX-T Manager, vCenter, ESXi Hosts, VMs, Applications, and more
- Hybrid Network Troubleshooting:
  - Inter SDDC Path
  - SDDC to Internet
  - On-prem to SDDC over Policy-Based VPN via NSX
- VMware HCX: Stretched L2 VLAN stitched flows
Visibility into Google Cloud Components
vRNI currently does not support native Google Cloud components that could be attached to your GCVE SDDCs. Visibility into things like the Cloud Interconnects, Load Balancers, or VPNs instantiated from Google Cloud directly will be coming in a future version of vRNI. Please reach out if you would like to work with us.
Apart from the connectivity troubleshooting use case, the other use cases around application awareness, security planning, pro-active network monitoring, and migration planning are definitely enough to get our customers excited.
Configuring GCVE in vRealize Network Insight
Adding GCVE to vRNI works the same way as adding a VMware Cloud on AWS SDDC. In vRNI, you'll add the GCVE vCenter as a VMC on AWS vCenter data source and NSX as a regular NSX-T Manager data source. You use the VMC on AWS vCenter data source because Google sets up the vCenter permissions in the same way VMC on AWS does. The data source names will be changed in a future version to avoid confusion.
There are three steps per SDDC: deploy a collector into the SDDC, then add the vCenter and the NSX-T Manager as data sources. These steps are the same for both vRNI on-premises and vRNI Cloud. Optionally, you can also add the HCX Connector to monitor its L2 Network Extensions.
Deploy the Collector
The supported deployment architecture is to have the collector inside the SDDC. This also keeps excessive data from leaving the GCVE SDDC (and keeps you from paying egress charges for it). Put the collector on any network and make sure it can reach the vCenter, NSX-T Manager, and ESXi hosts bi-directionally. If the network is secured with NSX-T, here's a list of ports to configure in the NSX-T firewall.
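Before adding the data sources, it's worth sanity-checking that the collector's network can actually reach those endpoints. A minimal sketch of such a check, assuming HTTPS (443) for vCenter and the NSX-T Manager; the hostnames are placeholders, and the full port list should come from the vRNI documentation:

```python
import socket

def reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder endpoints -- replace with your SDDC's internal addresses.
endpoints = {
    "vcenter.gcve.local": 443,   # vCenter HTTPS
    "nsxmgr.gcve.local": 443,    # NSX-T Manager HTTPS
}

for host, port in endpoints.items():
    status = "ok" if reachable(host, port) else "UNREACHABLE"
    print(f"{host}:{port} -> {status}")
```

Run this from the network where the collector will live; anything marked unreachable points at a firewall rule or routing gap to fix before the data sources are added.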
Add the vCenter

Because of how the vCenter permissions are set up, you'll need to use the VMC on AWS – vCenter data source. In a future version, GCVE will be listed separately for clarity.
Use the internal vCenter IP address and select the collector that's deployed inside the SDDC. Make sure to create a dedicated vCenter user for vRNI and give it the cloudadmin role.
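If you'd rather script this than click through the UI, the vRNI public API can add data sources as well. Here's a sketch of what the request body could look like; the endpoint path, field names, and all values shown (IP, collector ID, service account) are assumptions for illustration — verify them against the vRNI API reference for your version:

```python
import json

# Sketch of a request body for adding the GCVE vCenter to vRNI as a
# "VMC on AWS vCenter" data source via the vRNI public API.
# The POST path (e.g. /api/ni/data-sources/...) and field names are
# assumptions -- check the API reference for your vRNI version.

def vcenter_payload(ip, proxy_id, username, password, nickname):
    """Build an assumed data-source request body for the GCVE vCenter."""
    return {
        "ip": ip,                  # internal vCenter IP inside the SDDC
        "proxy_id": proxy_id,      # ID of the collector deployed in the SDDC (placeholder)
        "nickname": nickname,
        "enabled": True,
        "credentials": {
            "username": username,  # dedicated vRNI user with the cloudadmin role
            "password": password,
        },
    }

body = vcenter_payload("10.0.16.6", "proxy-1", "vrni-svc@gve.local", "****", "gcve-vcenter")
print(json.dumps(body, indent=2))
```

The key points the sketch mirrors from the steps above: the internal IP, the in-SDDC collector, and a dedicated cloudadmin-role user rather than a shared admin account.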
Add the NSX-T Manager
Next, add the NSX-T Manager as a data source. Select the same collector as before, and enable the Enable DFW IPFIX and Enable latency metric collection options. These options ensure that flow latency information and the latency measurements between virtual and physical NICs (even TEP to TEP) are retrieved from NSX and displayed in vRNI.
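The same scripted approach works for the NSX-T Manager data source. A sketch of what the body could look like — the two flag names below simply mirror the UI options "Enable DFW IPFIX" and "Enable latency metric collection" and are assumptions, as are the endpoint path and placeholder values; confirm the exact field names in the vRNI API reference:

```python
import json

# Sketch of an assumed request body for adding the GCVE NSX-T Manager
# as a vRNI data source. Field names for the two feature flags mirror
# the UI options and are assumptions, not confirmed API fields.

def nsxt_payload(fqdn, proxy_id, username, password):
    """Build an assumed data-source request body for the NSX-T Manager."""
    return {
        "fqdn": fqdn,
        "proxy_id": proxy_id,        # same collector used for the vCenter
        "nickname": "gcve-nsxt",
        "enabled": True,
        "credentials": {"username": username, "password": password},
        "ipfix_enabled": True,       # assumed flag: "Enable DFW IPFIX"
        "latency_enabled": True,     # assumed flag: "Enable latency metric collection"
    }

print(json.dumps(nsxt_payload("nsxmgr.gcve.local", "proxy-1", "audit", "****"), indent=2))
```

Whatever the exact field names turn out to be, the point stands: both options should be switched on, or the flow latency data described above never reaches vRNI.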
Even though cloudadmin users do not have access to the NSX Manager and NSX Edge VMs inside the GCVE vCenter, vRNI has the NSX Manager and NSX Edge dashboards to monitor them.
Add the HCX Connector
Optionally, you can also add the HCX Connector to vRNI and monitor the traffic going over its L2 extensions. Especially when migrating from on-premises to GCVE, these L2 extensions need to be in top shape, and vRNI can help make sure they stay operational.
Here are two examples of charts you get with this integration:
To add VMware HCX as a data source, all you have to supply is the HCX Connector appliance’s IP address and credentials to access it.
HCX authenticates directly against vCenter, so you can use the same credentials you use to monitor the vCenter. The HCX Connector's IP address is typically internal and can be found in the Google Cloud console.
Once the Google Cloud VMware Engine’s vCenter, NSX-T Manager, and HCX Connector are added as data sources, data collection starts. You’ll find dashboards for the NSX-T Manager, highlighting its configuration, network information, and operational status.
You can zoom in on the NSX Edge dashboards and individual VMs, analyze network traffic behavior, check how much internet egress traffic is occurring, and use all the other network and security insights that vRealize Network Insight offers.