
Demystifying vRealize Automation Network Profiles

Ever wanted to know how to build flexible application network topologies with vRA?
Ever wondered how to model networks using the vRealize Automation Network Profiles concept in vRA, and how to configure them?

If yes, then this blog article is for you!

Readers of this post should be familiar with vRealize Automation (vRA) – the automation, self-service and lifecycle manager component of VMware’s Cloud Management Platform (CMP).
vRealize Automation governs and commands the deployment and lifecycle management of full application and IT service stacks for hybrid clouds, all the way up from infrastructure to application components.

See this YouTube video for an overview of this and several other new capabilities delivered with vRA 7.0.

When it comes to networking and security, the agility and efficiency offered to application architects to author and assemble flexible topologies is unmatched in the industry today; this is especially true when vRA is coupled with NSX (VMware’s network and security virtualization platform).
vRA’s latest software release (vRA 7.0) enhances the network and security design as part of the service authoring process (Blueprint Design in vRA terms) and simplifies the way NSX objects are consumed.

The aim of this article is to quickly explore the logic (objects, configuration steps, etc…) used during this authoring process, with a focus on the vRealize Automation Network Profiles concept, a powerful construct which carries some nuances that are worth digging into.

This article assumes familiarity with vRealize Automation 7.0 and its concepts (reservations, data collection, blueprints, etc…) as well as NSX (DLR, DFW, Security Groups, etc…). Description of those concepts is out of the scope of this post.

The terms “vRA”, “automation platform” and “system” will be used interchangeably throughout this article and all refer to the same thing: the vRealize Automation platform.

Let’s get started with a high-level overview of some of the main application topologies that vRA + NSX allows us to build. While those topologies can be very diverse, we’ve chosen two of the most popular ones:

Pic1

The picture above depicts a basic three-tier app in two different topologies:

  • Blueprint A shows a “traditional” network architecture whereby each application tier has its own L2 subnet and all are connected to an NSX Edge GW, a service node that supports services such as routing, NAT, firewalling, VPN, load balancing, etc… In this case, this GW applies security controls between tiers as well as NAT services so that VMs on any tier of this application can be reached from outside. Security controls are further enforced within each tier by NSX’s distributed firewall (DFW), a vNIC-level firewall, as represented in the picture above. Finally, the same GW also applies load balancing across the 3-node web tier shown in the graphic.
  • Blueprint B features a more “innovative” architecture where all VMs are connected to a single flat network attached to an NSX DLR, a very powerful routing construct. This routing tier acts as a single logical router but is actually made of routing modules distributed across all ESXi hosts’ kernels, which are populated and managed by the central NSX Controller.

Application tiering is implemented in the form of security groups (SGs), yet another feature of NSX that allows policy-based grouping of VMs based on many criteria, such as their functional role in this case. Security controls across application tiers (i.e. between security groups) are again enforced by the NSX DFW.
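
To make the grouping idea concrete, below is a minimal, purely illustrative Python sketch – the names are hypothetical and this is not the NSX API, just the logical model of role-based groups and group-to-group DFW rules:

```python
# Purely illustrative: role-based security groups and group-to-group rules.
# None of these names are real NSX API objects.
security_groups = {
    "sg-web": ["web-01", "web-02", "web-03"],   # membership by functional role
    "sg-app": ["app-01"],
    "sg-db":  ["db-01"],
}

# DFW-style rules reference groups rather than individual VM IPs, so VMs can
# join or leave a tier without any rule changes. Evaluated top-down.
dfw_rules = [
    ("sg-web", "sg-app", "tcp/8080", "allow"),  # web tier may reach app tier
    ("sg-app", "sg-db",  "tcp/3306", "allow"),  # app tier may reach db tier
    ("any",    "sg-db",  "any",      "deny"),   # everything else to db is blocked
]
```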

It is important to note that some elements in the picture above were created prior to any blueprint deployment: the provider-level NSX Edge GW, the transit network, the NSX DLR, and the dynamic routing between the DLR and the Edge GW. Everything else is dynamically created upon request by the automation platform.

The networks in these topologies have to be modelled in vRA’s converged blueprint designer, which provides a unified design canvas and topology view of all application-centric network services. The resulting blueprint allows the applications to be provisioned repeatedly and consistently. This modelling in vRA leverages Network Profile objects, and that is what this article will focus on.

But before we get there, let’s think for a minute and ask ourselves: what is needed, in terms of description, to model the networking in the Blueprints above?

  1. The external (upstream) network
  2. The number of networks inside the blueprint (e.g. 3 for application A, 1 for application B)
  3. Will these networks be routed or NAT’ed towards the outside world? That is, should the gateway the networks are connected to route or NAT the network traffic?
  4. The IP addressing scheme for those networks – DHCP, network pools, or static assignment by vRA – as well as the corresponding IP subnet in the latter case.
  5. The first-hop L3 gateway those networks will connect to – an on-demand NSX Edge GW for application A, an existing DLR for application B.

Answering these fundamental questions and drawing the topology of those applications on a piece of paper or a whiteboard should provide just enough data to start modelling!

Some theory first…
The approach taken by vRA engineering to drive an efficient model is pretty smart: instead of configuring all of these attributes within the definition of each and every blueprint, they decided to decouple the network topology modelling, opting for a shared profile concept instead. Indeed – think about two different blueprints aimed at deploying two totally different application stacks but with the same networking topology. Instead of modelling the same network topology twice in two different blueprints, one models it once, outside of the blueprints, and then incorporates it when/if needed.
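
To make that decoupling concrete, here is a minimal sketch in Python – not vRA’s actual object model, just an illustration of a single profile object being shared by several blueprints:

```python
# Illustrative only -- not vRA's real data model. One network profile object
# is defined once and referenced by any number of blueprints.
from dataclasses import dataclass, field
from typing import List

@dataclass
class NetworkProfile:          # modelled once, outside any blueprint
    name: str
    kind: str                  # "external", "routed" or "nat"
    subnet: str

@dataclass
class Blueprint:               # blueprints merely reference shared profiles
    name: str
    profiles: List[NetworkProfile] = field(default_factory=list)

web_nat = NetworkProfile("Sales NAT Network ESG Web", "nat", "172.16.10.0/24")

# Two totally different application stacks, one shared topology model:
stack_a = Blueprint("CRM Stack",   [web_nat])
stack_b = Blueprint("Sales Stack", [web_nat])
```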

Enter vRealize Automation NETWORK PROFILES!
Below are brief explanations, delivered by my colleague Ray Budavari during his VMworld session, on the purpose and use of each of the three profile types:

  • EXTERNAL Network Profile – used when connecting VMs or gateways to a pre-created network.
    e.g. “I have an existing network (VXLAN or VLAN backed) that I want to connect dynamically created VMs or NSX Edge GWs to.”
  • ROUTED Network Profile – used when end-to-end routable access with unique IP addresses is needed.
    e.g. “I need to provide end-user access to my Production workloads”
  • NAT Network Profile – used when you have overlapping IP addresses across different blueprint deployments and yet external connectivity to the VMs is required.
    e.g. “I am using overlapping IP addresses across my web, app and database tiers, and will deploy many app instances that still need inbound and/or outbound external access”

The choice of a given vRealize Automation network profile type isn’t trivial: it determines the automation actions that will follow as well as the end topology for the application:

  • when you deploy an application leveraging a Routed network profile, vRA will instruct NSX to deploy the application’s networks connected to an existing DLR and will instruct the DLR to route those networks to the upstream L3 gateway.
  • when you deploy an application leveraging a NAT network profile, vRA will instruct NSX to deploy the application’s networks connected to a newly created NSX Edge GW and will configure that Edge to route and NAT those networks towards the upstream L3 gateway, as sketched below.
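
A conceptual sketch of that branching – vRA’s real provisioning workflows are of course far richer, and every name below is illustrative:

```python
# Conceptual sketch only: what the choice of profile type drives at deploy time.
def provision_network(profile_type: str, deployment: str) -> None:
    logical_switch = f"{deployment}-ls"  # a new NSX Logical Switch per network
    if profile_type == "routed":
        # Attach the new network to the pre-existing DLR; the new subnet is
        # then advertised upstream via the DLR's dynamic routing.
        print(f"attach {logical_switch} to existing DLR; advertise subnet upstream")
    elif profile_type == "nat":
        # Deploy a dedicated NSX Edge GW for this deployment, then route and
        # NAT the new network towards the upstream L3 gateway through it.
        print(f"deploy new NSX Edge GW; attach {logical_switch}; configure NAT")

provision_network("nat", "sales-app-001")
```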

So let’s now explore the vRealize Automation network profiles configuration required to build Blueprint A and B’s networking topologies.
First, since both Blueprints are aimed at deploying applications that are connected to an external network, we have to model the latter:

  • Step 1: External network profile creation (name: “Transit Network”).

Pic2

The IP range field (192.168.10.101 => 192.168.10.150) is important: it tells the automation platform which IP addresses on this transit network the system should assign to the NSX Edge GWs created as part of subsequent deployments of Blueprint A. In the same vein, the Default GW field tells the system which default GW it should assign to those Edges (i.e., on each Edge, the default route 0.0.0.0/0 => 192.168.10.1 will be inserted). Both fields are irrelevant for Blueprint B, as the DLR is already present and connected to the external network called “Transit Network”.
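
The allocation behaviour these two fields drive can be sketched in a few lines of Python (a simplification under the assumptions above, not vRA code):

```python
# Sketch of the External network profile's IP range behaviour: hand out the
# next free uplink IP to each new Edge and push the declared default gateway.
import ipaddress

range_start = ipaddress.IPv4Address("192.168.10.101")
range_end   = ipaddress.IPv4Address("192.168.10.150")
default_gw  = ipaddress.IPv4Address("192.168.10.1")
allocated: set = set()

def next_uplink_ip() -> ipaddress.IPv4Address:
    """Return the next free uplink IP within the profile's range."""
    ip = range_start
    while ip <= range_end:
        if ip not in allocated:
            allocated.add(ip)
            return ip
        ip += 1
    raise RuntimeError("External network profile range exhausted")

edge_ip = next_uplink_ip()  # 192.168.10.101 for the first Edge GW
print(f"Edge uplink: {edge_ip}, default route: 0.0.0.0/0 => {default_gw}")
```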

  • Step 2: Next, as Blueprints A and B will make use of NAT’ed and Routed network profiles, we have to create the templates for those.

Blueprint A:
We start by creating the web tier NAT’ed network profile (“Sales NAT Network ESG Web”) for Blueprint A as per the picture below:

Pic3

This network profile defines a 172.16.10.0/24 IP subnet, based on the Gateway and Subnet mask configuration in the Network Profile. An IP range (i.e. a network pool) is then created for IP allocation within the desired range.

In addition, we’ve tied this network profile to the external network “Transit Network”. This means that every time a new network (an L2 NSX Logical Switch) gets deployed from this profile, it will be attached to a newly created NSX Edge GW, which will in turn be connected to the external network called “Transit Network” and be assigned an available [external] IP address in the 192.168.10.101 to 192.168.10.150 range, as per the External network profile.

We then do exactly the same for the App and DB tiers of Blueprint A by creating two additional NAT’ed network profiles, configured with 172.16.20.0/24 and 172.16.30.0/24 respectively. These would be called “Sales NAT Network ESG App” and “Sales NAT Network ESG DB”. At deployment time, each machine will be provisioned onto its respective network, behind a dedicated NSX Edge GW, while the uplink interface of each GW belongs to the common 192.168.10.0/24 external network.
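
To see why this NAT’ed design tolerates identical internal addressing across deployments, consider this tiny sketch (deployment names and uplink IPs are hypothetical, but follow the ranges above):

```python
# Two deployments of Blueprint A: same internal web subnet, unique uplinks.
deployments = [
    {"name": "sales-app-001", "web_subnet": "172.16.10.0/24", "edge_uplink": "192.168.10.101"},
    {"name": "sales-app-002", "web_subnet": "172.16.10.0/24", "edge_uplink": "192.168.10.102"},
]
for d in deployments:
    # Outbound traffic is source-NAT'ed to the deployment's unique uplink IP,
    # so the duplicated internal addressing never leaks onto the transit network.
    print(f'{d["name"]}: {d["web_subnet"]} -> SNAT -> {d["edge_uplink"]}')
```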

Blueprint B:
Blueprint B needs a routed network profile as per its topology.

Pic4


The configuration is similar to the NAT’ed ones except that here there is an additional field: “Range Subnet mask”. Indeed, as this profile will be used to deploy successive routed networks that must not overlap from an IP perspective, we have to define the IP range of every deployed instance. This is exactly the purpose of this field: with a /28 range mask, it instructs vRA to perform the first deployment of this blueprint on 172.16.50.1 => 172.16.50.14, the second on 172.16.50.17 => 172.16.50.30, etc…
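
A quick way to convince yourself of this carving behaviour is Python’s standard ipaddress module – a sketch assuming a /28 range subnet mask, which is what the .1–.14 and .17–.30 host ranges imply:

```python
# Carving per-deployment subnets out of the profile's base network.
import ipaddress

base_network = ipaddress.ip_network("172.16.50.0/24")  # the profile's subnet
for i, chunk in enumerate(base_network.subnets(new_prefix=28), start=1):
    hosts = list(chunk.hosts())                        # usable host addresses
    print(f"deployment {i}: {chunk} -> {hosts[0]} .. {hosts[-1]}")
    if i == 2:
        break
# deployment 1: 172.16.50.0/28 -> 172.16.50.1 .. 172.16.50.14
# deployment 2: 172.16.50.16/28 -> 172.16.50.17 .. 172.16.50.30
```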

Finally, and only for Blueprint B, we have to determine which existing DLR the Blueprint B networks should be connected to.

To that end, we select the DLR we want under the Routed Gateways section of the reservation network configuration pane in vRA and we associate it to the same “Transit Network” external network as per the screenshot below:

Pic6

And this is how we tie both ends together:
Routed Network Profile <=> External Network Profile <=> DLR
Sales Routed Network DLR <=> Transit Network <=> NSX DLR

 

At this stage, we have prepared all the vRealize Automation network profiles required for later consumption during the Blueprint design process. Readers of this post should be familiar with the vRA Blueprint design process, so we’ll skip directly to the networking section of both Blueprints to complete our end-to-end networking configuration:

Blueprint A:

Pic7

Once the application components have been dragged onto the Blueprint canvas, you simply drag the appropriate network services from the “Network & Security” category to the canvas. In this case, we’ll use the “On-Demand NAT” network and assign the desired network profile previously configured.

A machine’s network interface (NIC) can be attached to a network once it has been dragged onto the canvas. After moving all three desired NAT networks – Web, App and DB – to the canvas, select each application component, head to the “Network” tab, add a NIC and, finally, select the desired network to complete the binding. The end result should resemble the image below…

Pic8

Save the Blueprint to commit the changes.


 

Blueprint B:

Pic9

 

Blueprint B is even more straightforward. Drag and drop an “On-Demand Routed Network” onto the canvas and select the desired [routed] network profile previously configured. Finally, click on each application component to add a NIC and bind it to the single routed network. The end result should be similar to the image below…

Pic10

Save the Blueprint to commit the changes. Next steps will be to add any additional desired network or application components to the blueprint (e.g. Security Groups, Load Balancer, Software, etc.), Publish the finished product, and Entitle the new service.

This concludes this blog post on vRealize Automation network profiles. We hope it will be useful for your cloud projects with vRA. As always, comments and questions are most welcome!

Thanks to my colleagues Jad AL Zein, Warren Gay, Gary Hamilton, Adam Bohle and Juergen Schaefer for reviewing this post.