Cloud Management Platform

Cloud Automation Services – Cloud Assembly Technical Overview

Vipul Shah from VMware’s Cloud Management Business Unit recently published a blog announcing the initial availability of Cloud Automation Services at VMworld. It was a great introduction to our Software as a Service-based family of automation products. I wanted to spend some time diving into them in more detail and sharing how these awesome services are changing the way we approach Cloud Management! The Cloud Automation Services platform comprises Cloud Assembly, Service Broker, and a newly redesigned Code Stream.

 

There is far too much content across all of these services to fit into one post, so we’re going to do this as a three-parter, starting with Cloud Assembly!

Cloud Assembly is VMware’s approach to building a seamless, developer-relevant, infrastructure-as-code-first experience across multiple cloud endpoints. What does that really mean, though? At first glance, Cloud Assembly looks like the place where you “build things” – but in reality, there’s a lot more going on under the hood.

 

Born in the Cloud… Molded by it

 

One of the Cloud Automation Services Product Managers likes to say “Cloud Assembly was born in the cloud, molded by it” (kudos to y’all if you get the movie reference). What this means is that at its very foundation, Cloud Assembly is designed to act as a conduit to consuming services from multiple cloud environments, with public cloud treated as a first-class citizen within the platform.

Consider the following screenshot:

 

 

We have multiple endpoints configured in our Technical Marketing environment – AWS, Azure, and our VMware Cloud on AWS SDDC. In order to configure a truly agnostic approach to resource provisioning and management, we need a way to take these accounts and create relationships between those endpoints. There are several different types of Cloud Accounts to choose from, and these will continue to grow as new services are introduced.

 

 

Similar to the above, we need a way to take the capabilities provided by these endpoints and build relationships between them. There’s a reason I keep using that word!

When we add those endpoints to Cloud Assembly, we are given an opportunity to select regions/zones/clusters to be added to our “Cloud Zones”. These constructs will be covered in more detail shortly, but at a high level, Cloud Zones are where your compute resources are assigned to user-consumable “zones”. Once an endpoint is added, a discovery process is initiated to collect all of the resources contained within each endpoint. We collect information around the following object types:

 

 

All of these objects have different interaction models around them, and each requires its own configuration before it can be leveraged within our mappings. More on that later!

 

What About the Mythical Private Cloud?

 

It’s true, Cloud Assembly was built from the ground up to answer our customers’ public cloud needs – but interacting with the private cloud is a critical component of VMware’s multi-cloud strategy. So how do we leverage on-premises resources with a platform hosted as Software as a Service? Has anyone ever seen the movie Stargate?

Enter the Remote Data Collector (RDC)!

 

The RDC is a virtual appliance that runs a series of Docker containers for interacting with on-premises services. Do you want to bind an NSX-T account? You’re going to use the RDC. Do you want to use vCenter on-premises? RDC time. What’s very cool and interesting about the RDC is that each proxy service is a different container hosted within the appliance, making it extremely modular and easy to update.

Once you deploy this appliance, you’ll be able to leverage it across many of the Cloud Automation Services (it’s heavily used in Code Stream!). It facilitates the data collection and discovery for your traditional vCenter workloads, as well as your NSX-T/V implementations!

So once we have this data inside Cloud Assembly, what do we do with it? How can we achieve cloud agnosticity? Also, self high-five for a made-up word.

 

Leveraging Multi-Cloud “Compute” Resources

 

I mentioned earlier the concept of Cloud Zones. These are logical constructs containing compute resources bound to a “region” type, which might vary based on the endpoint being referenced.

 

Cloud Zones are bound to a construct we haven’t talked about yet – projects – to give users access to compute resources. Various tag combinations then determine which definitions within our mappings get used.

 

Mapping Resources

 

With this data in place, we can create “mapping” relationships between our resources (as well as profiles in some cases) – specifically the Compute, Network, and Storage resources. We’ll dive deeper into creating these actual mapping objects in a later blog post, but an example of image mapping is below:

 

Consider that in this example we are creating a relationship to say “In all of these cloud providers, Ubuntu is defined. If the user configures their blueprint to deploy an Ubuntu build, it’s going to leverage this mapping to determine which object to use”. We do this for each object by leveraging combinations of Cloud Zones and Tags. It’s not dark magic. It’s science. So you know it’s a real thing. We even provide a capability within the actual blueprint “request” object that lets you see how the placement engine “decides” where a workload lands!
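To make that concrete, here’s a minimal sketch of what the blueprint side of that relationship might look like. The mapping names (“ubuntu”, “small”) are examples; they would need to match whatever names you’ve defined in your own image and flavor mappings.

```yaml
formatVersion: 1
resources:
  demo-machine:
    type: Cloud.Machine
    properties:
      # "ubuntu" refers to the image mapping name, not a provider-specific AMI or template ID.
      image: ubuntu
      # "small" refers to a flavor mapping, resolved per cloud account/region.
      flavor: small
```

Because the blueprint references the mapping name rather than an AMI, vSphere template, or Azure image ID, the same blueprint can deploy to whichever endpoint the placement engine selects.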

 

Mappings and profiles give our various cloud environments the necessary “definitions” of capabilities that can be leveraged. We establish these for the following types:

  • Flavor Mappings – Equivalent to a sizing definition. “What is a small? Medium? Large?”
  • Image Mappings – Defined above – a mapping of image types to a friendly name (Ubuntu above)
  • Network Profiles – Collections of network details. For on-premises constructs, this includes IP ranges.
  • Storage Profiles – Storage types. SSD vs. standard disks, IOPS limits, etc.

Each of these resources can have tags applied to them. These tags help the placement engine (consumed within blueprints) decide which of the resource mappings to leverage. Do you want a workload to land on high-speed storage? You might tag a storage profile with “type:performance”. Do you want to leverage an external IP address on a workload? You might tag a network as “network:external”. Each of these “constraints” can be leveraged on the blueprint canvas to direct workloads to land in/on a desired location.
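As a rough sketch of how those constraints might appear in a blueprint, assuming tags named “type:performance” and “network:external” have been applied to a storage profile and a network profile respectively (the exact property placement here is illustrative):

```yaml
resources:
  app-server:
    type: Cloud.Machine
    properties:
      image: ubuntu
      flavor: small
      # Matched against tags on storage profiles: land this machine's disks on fast storage.
      storage:
        constraints:
          - tag: 'type:performance'
      networks:
        - network: '${resource.app-net.id}'
  app-net:
    type: Cloud.Network
    properties:
      networkType: existing
      # Matched against tags on network profiles: attach to an externally routable network.
      constraints:
        - tag: 'network:external'
```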

 

We Have Maps and Tags, But Where Are We Going?

 

Once we have designed and defined these mappings, configured our profiles, and established our tags, we’re in a good place to start working on our blueprints. Like all things, these blueprints can be as simple or as complex as we want them to be. In future posts, we’ll dig into methodologies around creating these blueprints and how to actually achieve real goals with them – but for now, let’s take a look at a sample blueprint.

 

There is A LOT going on in this blueprint. A couple of key highlights to talk about…

  • We have a set of objects we can use on the far left, which can be dragged onto the canvas in the middle of the screen. We can take those objects and build connections/dependencies between them.
  • As we add content, the YAML (YAML Ain’t Markup Language) is populated on the right-hand side of the screen.
  • In the YAML, we have inputs configured for username, password, and the number of deployments to create
  • We have an input to determine which “cloud” endpoint to place this workload on. This is handled by the constraints tag in the YAML.
  • We’re using cloud agnostic objects; including a load balancer. This is all configured within the YAML to the right.
  • We can see a set of code under a “cloudConfig” section – this maps to Cloud-Init. Cloud-Init is an industry-standard cloud computing configuration tool, which runs a set of scripts at instance startup. Think of it like the Customization Spec in vCenter, but souped up! This allows us to push various configurations, commands, and packages into a resource we’re building. A sketch of a blueprint along these lines follows this list.
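To ground those highlights, below is a hedged sketch of a blueprint along those lines. It is not the exact blueprint from the screenshot; the resource names, mapping names, and load balancer properties are illustrative and would vary by environment.

```yaml
formatVersion: 1
inputs:
  username:
    type: string
  password:
    type: string
    encrypted: true
  count:
    type: integer
    default: 2
  platform:
    type: string
    enum: [aws, azure, vsphere]
resources:
  web-tier:
    type: Cloud.Machine
    properties:
      image: ubuntu
      flavor: small
      count: ${input.count}
      # The selected input becomes a placement constraint at request time.
      constraints:
        - tag: 'cloud:${input.platform}'
      networks:
        - network: '${resource.web-net.id}'
      # cloudConfig maps to Cloud-Init: inject a user and install a package at first boot.
      cloudConfig: |
        #cloud-config
        users:
          - name: ${input.username}
            plain_text_passwd: ${input.password}   # demo only; use hashed passwords in practice
            lock_passwd: false
            sudo: ['ALL=(ALL) NOPASSWD:ALL']
        packages:
          - nginx
        runcmd:
          - systemctl enable --now nginx
  web-net:
    type: Cloud.Network
    properties:
      networkType: existing
  web-lb:
    type: Cloud.LoadBalancer
    properties:
      network: '${resource.web-net.id}'
      # Bind all machines in the tier behind the load balancer (illustrative syntax).
      instances: '${resource.web-tier[*].id}'
      routes:
        - protocol: HTTP
          port: 80
          instanceProtocol: HTTP
          instancePort: 80
```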

 

How Can I Learn How to Blueprint In Cloud Assembly?

 

Infrastructure as Code is incredible, but it can also seem daunting at first. Luckily, we’ve provided an in-platform blueprint marketplace to help with getting started! This pre-curated content allows new administrators to import or download existing blueprint YAML for several services to act as examples of how to build both simple and complex workloads!

 

Contained within these examples are samples for the following types of content:

  • Cloud-Init/Config Samples (User injection, Script Usage, Package Install, Config Modification) – see the sketch after this list
  • Dependency Creation
  • Multi-Node Examples
  • Simple Web Server Deployments
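For instance, the Cloud-Init samples cover patterns like the following. This is standard cloud-config syntax; the user, package, and file names are just placeholders:

```yaml
#cloud-config
# User injection: create an account with sudo rights.
users:
  - name: demo-user
    groups: sudo
    shell: /bin/bash
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
# Package install: resolved by the distro's package manager at first boot.
packages:
  - nginx
# Script usage / config modification: commands run once after provisioning.
runcmd:
  - echo "Hello from Cloud Assembly" > /var/www/html/index.html
  - systemctl enable --now nginx
```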

These examples give you the opportunity to learn at your own pace on your journey to building a cloud!

 

Cloud Building – Beyond Infrastructure as a Service

 

What’s not shown in our earlier example blueprint, but IS shown in the sample content within the marketplace, is that our capabilities expand beyond traditional Infrastructure as a Service deployments. We also have the capability of consuming Cloud Native primitives from Amazon Web Services (today) and other cloud providers in the future!

 

These services represent the capabilities our customers have asked for the most. For example, with RDS, we can enable users to consume native database capabilities without having to manage a full-size SQL database. We also have the ability to execute Lambda functions alongside deployments to consume next-generation extensibility. It’s a brave new world!
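To illustrate the shape this takes, a native service drops into the same blueprint YAML as any other canvas object. The resource type and property names below are hypothetical stand-ins to show the idea, not verified Cloud Assembly syntax:

```yaml
resources:
  # Hypothetical example: an RDS database consumed alongside the machines in a deployment.
  app-db:
    type: Cloud.Service.AWS.RDS.Instance   # illustrative type name, not verified
    properties:
      engine: mysql
      instance_class: db.t2.small
```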

In addition to these capabilities, we can bring in Configuration Management by leveraging Puppet as a blueprint canvas item!

 

Leveraging Puppet, we can take the existing roles and manifests you’ve configured for your current workloads and run them against our multi-cloud workloads. This gives us a great leg up in getting application stacks, platforms, and configurations instantiated!
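A rough sketch of what a Puppet canvas item can look like in blueprint YAML is below. Treat the property names and values as illustrative assumptions; the provider refers to whatever Puppet master you’ve registered, and the role comes from your own Puppet code base.

```yaml
resources:
  web-config:
    type: Cloud.Puppet
    properties:
      provider: puppet-master          # illustrative: a Puppet master registered in Cloud Assembly
      environment: production
      role: 'role::webserver'          # an existing role from your Puppet code base
      host: '${resource.web-tier[0].id}'   # bind the agent to the machine being provisioned (illustrative)
      username: ${input.username}
      password: ${input.password}
```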

But what happens after we deploy workloads?

 

Breaking Deployments Into Components

 

 

One major differentiation in the way that Cloud Assembly handles deployed objects is that the deployment is no longer the first-class citizen – the individual objects within the deployment are! What this means is that we can actually iterate on a deployment.

Adding constructs to an existing deployment is a very real use case, and a capability that we expose. For example, in this build we have exposed a web server. What if we wanted to transform this deployment by adding on a new tier that was a database? No problem! We can simply modify the blueprint and select “Update an Existing Deployment” to push those changes in. The build will present a “plan” of the changes that will be made to the deployment, and as you can see in the screenshot below, nothing is being deleted – just the addition of the new workload we added!
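In blueprint terms, that iteration can be as small as appending one resource block. Sketching against the hypothetical web blueprint from earlier, the database tier might look like this; the existing resources are untouched, which is why the plan shows additions but no deletions:

```yaml
resources:
  # Existing web-tier, web-net, and web-lb definitions stay exactly as before (omitted here).
  db-tier:
    type: Cloud.Machine
    properties:
      image: ubuntu
      flavor: medium
      # Attaching to the same network keeps the new tier reachable from the web tier.
      networks:
        - network: '${resource.web-net.id}'
      cloudConfig: |
        #cloud-config
        packages:
          - mysql-server
```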

 

When we press the deploy button, the update is initiated. This begins the process of updating the deployment with the new changes. In this case, the change is the addition of the database tier.

 

So now that we have our deployment in a state we are happy with, what’s next? We have a functional blueprint – and ultimately we’d like to get to a place where we can deploy it to our Service Broker catalog for users to consume. In order to do this, we need to “Version” the blueprint and release it!

 

Versioning our Masterpiece

 

We’ve built a glorious creation. We’ve validated that it deploys successfully. It’s a two-tier work of art behind a load balancer that can barely balance the awesome between tiers. If we move back to the blueprints tab and select our blueprint again, we’re going to introduce a concept that has appeared in several of the screenshots we’ve posted but hasn’t been talked about. Versioning!

Cloud Assembly gives us the ability to version control the blueprints that we create. This gives our blueprints a “history” that we can look back on to understand how they have changed. This is useful in the case of iterative development, where we take a basic construct and iterate on it until it’s in a desired state. It’s also especially helpful when troubleshooting, because we can go back and see how the blueprint has changed from version to version. This includes script blocks in cloudConfig, as well as objects that were added to or removed from the canvas/code. Get in the habit of versioning your changes!

We can do this through either a “diff as code” or a “visual diff”. In my case, I’ve tagged our work of art with a version of 3 by using the “Version” button at the bottom of the canvas screen. Once that is done, I select Version History at the top of the screen to be taken into the versioning submenu:

 

As you can see, we have 3 versions of this blueprint saved. If I select the “Diff” button, we are able to view the differences between versions. NOTE: You’ll need to SELECT the versions you want to compare. In my case, I want to compare with version 2.

 

By default, we’re presented the diff as code. Selecting the “Diff Visually” button switches us to the graphical representation.

 

With that, we can select “Release” on as many of the blueprint versions as we want to publish in the Service Broker interface. Releasing a blueprint indicates that it’s ready for user consumption. This gives us the ability to work with several draft versions of a blueprint without exposing our “backend” work to our end users.

 

How Do I Handle Existing Workloads?

 

Significant effort has been put into addressing how to bring existing workloads under management. As the product continues to mature, many “Day-2” capabilities will be introduced for interacting with existing workloads. Power options, resource expansion, and snapshotting are all capabilities that you can expect to see make their way into the platform. To handle onboarding resources, users can select the “Onboarding Plans” button near the bottom of the screen, on the main infrastructure tab.

Entering this screen allows us to create an onboarding plan, where we can select existing workloads from any of our “bound” cloud accounts. In the below example, I’ve used the Onboarding wizard to create a few sample deployments from existing machines in my Private Cloud environment.

 

This streamlined, multi-cloud onboarding process makes it very easy to bring existing workloads under management within Cloud Assembly. From here, we have the ability to create blueprints that represent the components as a deployment! Gone are the days when we needed to assign these machines to an EXISTING blueprint!

 

Where Do We Go From Here?

 

At this point, we have toured the major functionality of Cloud Assembly. Extensibility was not covered in this blog post – specifically, integrations with vRealize Orchestrator and the new Action Based Extensibility platform. That topic is going to get its own dedicated blog treatment. We’ve introduced you to a number of new concepts and shown you how Cloud Assembly enables developers and administrators to consume multi-cloud services with infrastructure as code at the forefront!

Stay tuned for the next stop in our journey: Service Broker. With Service Broker, we take the blueprints we’ve created here and expose them (as well as CloudFormation templates in AWS!) to end users for consumption.

 

Getting Started

 

Request a 30-day free trial and get hands-on experience with our new offering.

Start a Cloud Automation Services Trial

Learn more