
vRealize Network Insight 3.1 Architecture and Scalability

vRealize Network Insight 3.1 was released on October 6th with a new scalability feature that allows Network Insight to better support large enterprises.  In this blog post I will cover the basics of the Network Insight architecture, explain the new multi-node clustering feature, and walk through the configuration of clustering in the user interface.

Architecture Overview

Network Insight provides a modular architecture with two components, both virtual appliances that deploy easily into your existing vSphere infrastructure.  The primary component is the Platform VM, which provides the analytics, user interface, and data management for Network Insight.  The other component is the Proxy VM, which connects to the various data sources supported by Network Insight, such as vCenter, NSX, and physical network devices.

In the diagram below, you can see the relationship between the Platform and Proxy VMs in a typical, non-clustered architecture.


The nice thing about this architecture is that it provides flexibility for distributed environments.  For example, the image below shows how three remote sites can be monitored from a single Platform VM using multiple Proxy VMs.  Note that the Proxy VMs communicate securely with the Platform VM over port 443, and communication is always initiated by the Proxy VMs.
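Because the connection is always Proxy-initiated, a quick way to validate a remote site before deployment is to confirm, from where the Proxy VM will live, that TCP port 443 on the Platform VM is reachable.  Here is a minimal sketch of such a check; the hostname in the example is hypothetical, and this is just a generic TCP probe, not a Network Insight tool:

```python
import socket

def platform_reachable(host: str, port: int = 443, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to the given host/port succeeds.

    Mirrors the Proxy-initiated direction of communication: the check
    is run from the Proxy VM's network toward the Platform VM, never
    the reverse.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hostname is hypothetical):
# platform_reachable("platform-vm.example.com")
```

A firewall rule that only allows traffic from the Platform side would pass a ping test but fail this check, which is why probing the actual port and direction matters.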


Sizing Considerations

The VMs can be configured in two different sizes, as shown in the chart below.  As you can see, each component can support a maximum of 6,000 virtual machines with data flows (IPFIX) enabled.  Keep in mind that this is a maximum per component; that will be important in the cluster sizing discussion next.


Multi-Node Clustering Explained

With the single Platform VM architecture, larger customers may hit the 6,000 VM limit.  Deploying multiple separate Platform VMs is an option, but not ideal: you do not get a single pane of glass, and relationships between separately monitored environments are not identified.

Enter Multi-Node Clustering to overcome this constraint.  With clustering, customers can deploy three to five Platform VMs to scale out linearly and support up to 30,000 virtual machines (again, with IPFIX flows enabled).  As you can see in the diagram below, this architecture leverages the component flexibility and works nicely with the Proxy VMs for even more deployment options.
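The scale-out arithmetic follows directly from the numbers above: 6,000 VMs per large Platform VM, with clusters of three to five nodes.  A quick sketch of a sizing helper, assuming these published limits (the function name is my own, not a Network Insight tool):

```python
import math

PER_PLATFORM_VM_LIMIT = 6_000   # max VMs per large Platform VM, flows enabled
MIN_CLUSTER_NODES = 3           # clustering minimum in the 3.1 release
MAX_CLUSTER_NODES = 5           # clustering maximum in the 3.1 release

def platform_nodes_needed(monitored_vms: int) -> int:
    """Return how many large Platform VMs a given VM count requires.

    A result of 1 means a single non-clustered Platform VM suffices;
    3-5 means a cluster.  Raises ValueError above the 30,000 VM ceiling
    of a five-node cluster.
    """
    nodes = max(1, math.ceil(monitored_vms / PER_PLATFORM_VM_LIMIT))
    if nodes == 1:
        return 1
    if nodes == 2:
        # There is no 2-node cluster; clustering starts at 3 nodes.
        return MIN_CLUSTER_NODES
    if nodes > MAX_CLUSTER_NODES:
        raise ValueError("exceeds the 30,000 VM limit of a 5-node cluster")
    return nodes
```

For example, 10,000 monitored VMs would mathematically need two nodes, but since clustering starts at three Platform VMs, a three-node cluster is the practical minimum.  This is also why planning for growth up front matters: the cluster size cannot be changed after creation.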


There are some considerations when planning your deployment.  First, clustering requires a minimum of 3 large Platform VMs, and a maximum of 5 large Platform VMs is supported as of the 3.1 release.  Also, while Proxy VMs can be added after the cluster is created, note that you currently cannot change the cluster size after the initial creation (e.g., you cannot add more Platform VMs).

Cluster Installation using UI

Creating a cluster is a simple process.  For an existing deployment that has been upgraded to 3.1, you can create a cluster, but be aware that in a large environment this may take several hours to complete.  Remember, this is an irreversible change, so be sure you have accounted for sizing and growth, and that you actually need clustering, before continuing.  It is also recommended to take snapshots of your Platform VMs before starting.

To begin, navigate to Settings > Install and Support and look for the Create Cluster button.  This button is only available if a cluster has not been created.


The Create Cluster dialog will pop up and allow you to add the new Platform VMs by IP or FQDN.  Remember, you need a minimum of 3 Platform VMs, and a maximum of 5 is currently supported.


Once the cluster has been created, adding new Proxy VMs works as it did in version 3.0.  Simply click the Add Proxy VM button and then copy the shared secret to securely set up the Proxy VM connection.



The Multi-Node Cluster feature available in 3.1 adds scalability and enterprise readiness to Network Insight, allowing large enterprises to monitor up to 30,000 VMs with data flows enabled.  Clusters of three to five Platform VMs, with their associated Proxy VMs, support many different configurations, including geographically distributed data centers.  Creating the cluster is easily done via the user interface, including on existing deployments.


