
Scaling a vRA 7.3 Environment (Part 1)

Let's say you're the kind of person who doesn't like wasting resources – you use public transportation (electric if possible; here in Eastern Europe we love EVs), you separate your trash, buy new mountaineering equipment only when necessary, always turn off the lights in the bathroom, and you really like the scaling features of virtual infrastructures. Even if you're only a fan of the "grow on demand" concept, that's all right – this blog is for you. On a side note – you should really consider using public transportation and separating your trash if you don't do it already.

Many vRA deployments are so-called "Distributed" deployments – mostly because of HA considerations. However, while meeting redundancy requirements, those environments waste a lot of resources, because they rarely hit a performance ceiling. In addition, if you've ever built vRA at any point in time, you know that it requires just too many Windows virtual machines, kaiju-sized appliances and external databases. So, if you don't have any requirements for HA – say your DR scenario pretty much covers all of the HA aspects, or you're just building a PoC – you can safely start with what I call a "single-node distributed environment." Here's an over-simplified diagram of the Large configuration found in the vRA Reference Architecture:

As you can see, we have only one node for each role (the Manager Service, DEM and proxy agent are even combined into a single server, but you can separate them if you want), while at the same time using separate load-balanced FQDNs.

When scaling the environment, this is what we are trying to achieve:

I have split this blog post into three parts:

  1. Installing a single-node distributed environment.
  2. Scaling the environment with vra-command.
  3. Scaling the environment with the Config REST API.

Let's begin with the installation, which is not only very easy but also requires a lot less preparation – only three virtual machines. First, the prerequisites:

  • Certificates – we need three certificates. The one for the IaaS web server needs to be a SAN certificate containing the FQDNs of both the load-balancing endpoint and the IaaS web server itself.

  • Load Balancing Endpoints – or the pompous way of saying “CNAME records” in this case. Why would you need to waste three VIPs in your load balancer when you can just point to the one and only node you have? Remember, we stop the water while soaping. If you want to double-check the CNAMEs and the certificate SANs before you start, see the short sketch right after this list.

  • Service Account – use an Active Directory service account with the minimum set of permissions and a strong password.
  • Prepare your SQL Server machine – for security, availability and licensing reasons, keep your database on a separate server.
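
Since a mistyped CNAME or a certificate with a missing SAN entry is the most common way to trip up the installer later, here is a minimal sanity-check sketch in Python. Everything in it – the FQDNs, the VM names and the certificate file name – is a hypothetical placeholder, and the SAN parsing relies on the third-party cryptography package, so treat it as a starting point rather than a finished tool:

```
# Prerequisite sanity check for the single-node distributed setup.
# Every host name and file name below is a hypothetical placeholder -
# swap in the CNAME records, VM FQDNs and certificate file you actually use.
# SAN parsing needs the third-party "cryptography" package.
import socket

from cryptography import x509

# CNAME (load balancing endpoint) -> the single node it should point at
ENDPOINTS = {
    "vra.corp.local":         "vra-app01.corp.local",    # vRA appliance
    "vra-web.corp.local":     "vra-iaas01.corp.local",   # IaaS Web
    "vra-manager.corp.local": "vra-iaas01.corp.local",   # Manager Service
}

# 1) Each load-balancing FQDN should resolve to the same address as its node.
for alias, node in ENDPOINTS.items():
    alias_ip, node_ip = socket.gethostbyname(alias), socket.gethostbyname(node)
    print(f"{alias:25} -> {alias_ip:15} {'OK' if alias_ip == node_ip else 'MISMATCH'}")

# 2) The IaaS web certificate must be a SAN certificate that contains both
#    the load-balancing FQDN and the IaaS web server's own FQDN.
with open("vra-web.pem", "rb") as f:                     # PEM export of the web cert
    cert = x509.load_pem_x509_certificate(f.read())
san = cert.extensions.get_extension_for_class(x509.SubjectAlternativeName).value
dns_names = set(san.get_values_for_type(x509.DNSName))
for required in ("vra-web.corp.local", "vra-iaas01.corp.local"):
    print(f"SAN contains {required}: {required in dns_names}")
```

If either check fails, it's much cheaper to fix the DNS record or re-issue the certificate now than after the installer's own validation starts complaining.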

Here is a short guide to installing this type of environment, covering the most crucial parts:

  1. Choose an Enterprise deployment and check “Install Infrastructure as a Service” (duh!):

See how there’s a nice picture on the right of what we want to achieve. Pretty neat!

  2. Install the Management Agents:

  3. On the vRealize Automation Host page select Enter Host and type the load-balancing FQDN you created as a prerequisite.

  4. On the IaaS Host page type the load-balancing FQDNs of the IaaS Web service and the Manager Service.

  5. The Load Balancers page sums up our little setup. Just make sure all the info here is correct.

  6. Go ahead – validate, create snapshots (absolutely mandatory – I wouldn't even call it just “recommended”) and click Install. If you'd rather script those snapshots than click through the vSphere Web Client, there's a short sketch right below.
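
Speaking of those snapshots – here is a minimal pyVmomi sketch for taking them in one go. The vCenter address, credentials and VM names are assumptions for illustration, so adjust them to your environment:

```
# Snapshot the three vRA VMs before clicking Install.
# vCenter address, credentials and VM names are placeholders - adjust them.
# Requires the third-party "pyvmomi" package.
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

VC_HOST = "vcenter.corp.local"                        # hypothetical vCenter
VC_USER = "administrator@vsphere.local"
VC_PASS = "change-me"
VM_NAMES = {"vra-app01", "vra-iaas01", "vra-sql01"}   # appliance, IaaS server, SQL

ctx = ssl._create_unverified_context()                # lab only: skips certificate checks
si = SmartConnect(host=VC_HOST, user=VC_USER, pwd=VC_PASS, sslContext=ctx)
try:
    content = si.RetrieveContent()
    # Walk all VMs and snapshot the three that make up the environment.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        if vm.name in VM_NAMES:
            print(f"Snapshotting {vm.name} ...")
            WaitForTask(vm.CreateSnapshot_Task(name="pre-vRA-73-install",
                                               description="Before the IaaS installation",
                                               memory=False, quiesce=False),
                        si=si)
finally:
    Disconnect(si)
```

Reverting those snapshots is also the fastest way back to a clean state if the IaaS installation goes sideways.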

Great, you have your distributed vRA environment now!

Next post – how to scale this environment using the embedded command-line tool vra-command.

In the meantime, you may want to check out the new features in vRA 7.3!