This article was written by Cody de Arkland, originally posted here.
Vipul Shah from VMware’s Cloud Management Business Unit recently published a blog announcing the initial availability of Cloud Automation Services at VMworld. This was a great introduction to our suite of SaaS-based automation products. I wanted to spend a bit of time diving into them in more detail and examining how these new services are changing the way we approach Cloud Management.
The Cloud Automation Services platform comprises Cloud Assembly, Service Broker, and a newly redesigned Code Stream.
There is way too much content between all of these services to fit into one post, so we’re going to do this as a 3-parter, starting with Cloud Assembly.
Cloud Assembly is VMware’s approach to building a seamless, developer-relevant, infrastructure-as-code-first experience across multiple cloud endpoints. What does that really mean, though? At first glance, Cloud Assembly just looks like a place for developers to “build things” – but there’s a lot more going on under the hood.
A Cloud-first Service
At its foundation, Cloud Assembly is designed to act as a conduit for consuming services from multiple cloud environments, with public cloud treated as a first-class citizen within the platform.
Consider the following screenshot:
We have multiple endpoints configured in our Technical Marketing environment – AWS, Azure, and our VMware Cloud on AWS SDDC. To configure a truly cloud-agnostic approach to resource provisioning and management, we need a way to take these accounts and create relationships between the endpoints they represent. There are several different types of Cloud Accounts to choose from, and the list will continue to grow as new services are introduced.
Similarly, we need a way to take the capabilities provided by these endpoints and build relationships between them.
When we add those endpoints to Cloud Assembly, we are given the opportunity to select regions/zones/clusters to be added to our “Cloud Zones”. These constructs will be covered in more detail shortly, but at a high level, Cloud Zones are where your compute resources are assigned to user-consumable “zones”. Once an endpoint is added, a discovery process is initiated to collect all of the resources contained within it. We collect information around the following object types:
All of these objects have different interaction models around them. Each has its own configuration that we’ll need in place before we can leverage it within our mappings. More on that later.
What About the Mythical Private Cloud?
Cloud Assembly was built from the ground up to answer our customers’ public cloud needs – but interacting with the private cloud is a critical component of VMware’s multi-cloud strategy. So how do we leverage on-premises resources from a platform delivered as Software as a Service?
Enter the Remote Data Collector (RDC).
The RDC is a virtual appliance that spins up a series of Docker containers for interacting with on-premises services. Do you want to bind an NSX-T account? You’re going to use the RDC. Do you want to use vCenter on-premises? RDC time. What’s very cool and interesting about the RDC is that each proxy service runs as its own container within the appliance, making it extremely modular and easy to update.
Once you deploy this appliance, you’ll be able to leverage it across many of the Cloud Automation Services (it’s used heavily in Code Stream). This will facilitate data collection and discovery for your traditional vCenter workloads, as well as your NSX-T/V implementations.
So once we have this data inside Cloud Assembly, what do we do with it? How can we achieve a truly cloud-agnostic experience?
Leveraging Multi-cloud “Compute” Resources
I mentioned earlier the concept of Cloud Zones. These are logical constructs containing compute resources bound to a “region” type, which might vary based on the endpoint being referenced.
Cloud Zones are bound to a construct we haven’t talked about yet – projects – to give users access to compute resources. Projects then use combinations of mappings and tags to determine which definitions should be used at provisioning time.
Mapping Resources
With project data in place, we can create “mapping” relationships between our resources (as well as profiles in some cases) – specifically the Compute, Network, and Storage resources. We’ll dive deeper into creating these mapping objects in a later blog post, but an example of image mapping is below:
Consider in this example that we are creating a relationship to say “In all of these cloud providers, Ubuntu is defined. If the user configures their blueprint to deploy an Ubuntu build, it’s going to leverage this mapping to determine which object to use”. We do this for each object by leveraging combinations of Cloud Zones and Tags. It’s not dark magic. It’s science. So you know it’s a real thing. We even provide a capability within the actual blueprint “request” object that lets you see how the placement engine “decides” where a workload lands.
Mappings and profiles give our various Cloud environments the necessary “definitions” of capabilities that can be leveraged. We establish these for the following types:
Each of these resources can have tags applied to them. These tags help the placement engine (consumed within blueprints) decide which of the resource mappings to leverage. Do you want a workload to land on high-speed storage? You might tag a storage profile with “type:performance”. Do you want to leverage an external IP address on a workload? You might tag a network as “network:external”. Each of these “constraints” can be leveraged on the blueprint canvas to direct workloads to the desired location.
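To make that concrete, here is a minimal sketch of how those constraint tags surface in blueprint YAML. The resource names and tag values are the hypothetical ones from the examples above, and the properties reflect the blueprint schema as I recall it – treat it as illustrative rather than copy-paste ready:

```yaml
resources:
  AppServer:
    type: Cloud.Machine
    properties:
      image: ubuntu                      # resolved per cloud via the "ubuntu" image mapping
      flavor: medium                     # resolved via a flavor mapping
      attachedDisks:
        - source: '${resource.DataDisk.id}'
      networks:
        - network: '${resource.AppNet.id}'
  DataDisk:
    type: Cloud.Volume
    properties:
      capacityGb: 20
      constraints:
        - tag: 'type:performance'        # land on the storage profile tagged for performance
  AppNet:
    type: Cloud.Network
    properties:
      networkType: existing
      constraints:
        - tag: 'network:external'        # pick the network profile tagged as external
```

Notice that the machine never names a datastore, portgroup, or cloud-specific SKU – the tags and mappings resolve those details per Cloud Zone at request time.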
We Have Maps and Tags, but Where Are We Going?
Once we have designed and defined these mappings, configured our profiles, and established our tags – we’re in a good place to start working on our blueprints. Like all things, these blueprints can be as simple or as complex as we want them to be. In future posts, we’ll dig into methodologies around creating these, and how to actually achieve real goals with them – but for now, let’s take a look at a sample blueprint.
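Since a screenshot doesn’t tell the whole story, here’s a rough sketch of what a blueprint along these lines can look like as YAML – a load-balanced web tier, an input for the number of machines, and a cloudConfig block for first-boot configuration. This isn’t the exact blueprint from our environment, and a property name or two may differ slightly from the current schema, but the shape is representative:

```yaml
formatVersion: 1
inputs:
  webCount:
    type: integer
    default: 2
    title: Number of web servers
resources:
  FrontendLB:
    type: Cloud.LoadBalancer
    properties:
      network: '${resource.WebNet.id}'
      instances: '${resource.WebServer[*].id}'
      internetFacing: true
      routes:
        - protocol: HTTP
          port: 80
          instanceProtocol: HTTP
          instancePort: 80
  WebServer:
    type: Cloud.Machine
    properties:
      count: '${input.webCount}'
      image: ubuntu                        # image mapping
      flavor: small                        # flavor mapping
      constraints:
        - tag: 'cloud:aws'                 # steer placement to a specific Cloud Zone
      networks:
        - network: '${resource.WebNet.id}'
      cloudConfig: |
        #cloud-config
        packages:
          - nginx
  WebNet:
    type: Cloud.Network
    properties:
      networkType: existing
      constraints:
        - tag: 'network:external'
```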
There is a lot going on in this blueprint. A couple of key highlights include:
How Can I Learn How to Blueprint In Cloud Assembly?
Infrastructure-as-Code is incredible, but it can also seem daunting at first. Luckily, we’ve provided an in-platform blueprint marketplace to help you get started. This pre-curated content allows new administrators to import or download existing blueprint YAML for several services to act as examples of how to build both simple and complex workloads.
Contained within these examples are samples for the following types of content:
These examples give you the opportunity to learn at your own pace on your journey to building a cloud.
Cloud Building – Beyond Infrastructure-as-a-Service
What’s not shown in our earlier example blueprint, but IS shown in the sample content within the marketplace, is that our capabilities expand beyond traditional Infrastructure-as-a-Service deployments. We also have the capability to consume Cloud Native primitives from Amazon Web Services (today) and other cloud providers in the future.
These services represent the highest-demand capabilities our customers have asked for. For example, with RDS, we can enable users to consume native database capabilities without having to manage a full-size SQL database. We have the ability to execute Lambda functions alongside deployments to consume next-generation extensibility. It’s a brave new world.
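To give a flavor of how that shows up on the canvas, here’s a heavily hedged sketch of resources you might add under the blueprint’s resources: section. The resource type names below are placeholders for illustration – the real identifiers come from the resource types Cloud Assembly exposes for your bound AWS cloud account – but the pattern of declaring a native service right next to your machines holds:

```yaml
  # Hypothetical type names for illustration only; the actual identifiers come from
  # the resource type catalog Cloud Assembly surfaces for the bound AWS cloud account.
  AppDatabase:
    type: Cloud.Service.AWS.RDS.Instance
    properties:
      engine: mysql
      instanceClass: db.t2.small
      allocatedStorage: 20
  DeployHook:
    type: Cloud.Service.AWS.Lambda.Function
    properties:
      runtime: python3.7
      handler: index.handler
      memorySize: 128
```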
In addition to these capabilities, we can apply configuration management by leveraging Puppet as a blueprint canvas item.
Leveraging Puppet, we can take existing roles and manifests configured for your current workloads and run them against our multi-cloud workloads. This gives us a great leg up in getting application stacks, platforms, and configurations instantiated.
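A minimal sketch of what that can look like under the blueprint’s resources: section is below. The Puppet item sits next to a machine and points at an existing role; the property names are from memory of the Puppet canvas item and the integration account name is made up, so treat it as directional rather than exact:

```yaml
  WebConfig:
    type: Cloud.Puppet
    properties:
      provider: my-puppet-master           # hypothetical name of the Puppet integration account
      environment: production
      role: 'role::webserver'              # existing role/manifest to enforce on the node
      host: '${resource.WebServer.*}'      # bind the Puppet run to the machine(s) in the blueprint
      osType: linux
      username: ubuntu
```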
But what happens after we deploy workloads?
Breaking Deployments Into Components
One major differentiation in the way Cloud Assembly handles deployed objects is that a deployment is no longer the first-class citizen. Instead, the individual objects within the deployment are. This means we can actually iterate on a deployment.
Adding constructs to an existing deployment is a very real use case, and a capability we expose. For example, in this build we have exposed a web server. What if we wanted to transform this deployment to add on a new tier that was a database? No problem. We can simply modify the blueprint and select “Update an Existing Deployment” to push those changes in. The build will present a “plan” of the changes that will be made to the deployment, and as you can see in the screenshot below, nothing is being deleted – only the new workload we added is being created.
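Continuing the hypothetical blueprint sketched earlier, the change itself is just a new resource block appended under the existing resources: section – everything already deployed stays exactly as it is (the property names carry the same caveats as before):

```yaml
  # New tier appended to the existing blueprint; the web tier and network are untouched
  DBServer:
    type: Cloud.Machine
    properties:
      image: ubuntu
      flavor: medium
      networks:
        - network: '${resource.WebNet.id}'   # attach to the network already in the deployment
      cloudConfig: |
        #cloud-config
        packages:
          - mysql-server
```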
When we press the deploy button, the update is initiated and the deployment begins picking up the new changes – in this case, the addition of the database tier.
So now that we have our deployment in a state we are happy with, what’s next? We have a functional blueprint – and ultimately we’d like to get to a place where we can deploy it to our Service Broker catalog for users to consume. In order to do this, we need to “Version” the blueprint and release it.
Versioning our Masterpiece
We’ve created a glorious masterpiece. We’ve validated that it deploys successfully. It’s a 2-tier work of art behind a load balancer that can barely balance the awesome between tiers. If we move back to the blueprints tab and select our blueprint again, we’re going to introduce a concept that has appeared in several of the screenshots we’ve posted but hasn’t been talked about yet: versioning.
Cloud Assembly gives us the ability to version control the blueprints we create. This gives our blueprints a “history” we can look back on to understand how they have changed. This is useful for iterative development, where we take a basic construct and iterate on it until it’s in a desired state. It’s also especially helpful when troubleshooting, to find the version at which an issue began. Version history captures script blocks in cloudConfig, as well as objects that were added to or removed from the canvas/code. It’s advisable to get in the habit of versioning your changes.
We can review these changes either through a “diff as code” or a “visual diff”. In my case, I’ve tagged our work of art as version 3 by using the “Version” button at the bottom of the canvas screen. Once that is done, I select Version History at the top of the screen to be taken into the versioning submenu:
As you can see, we have three saved versions of this blueprint. If I select the “Diff” button, we are able to view the differences between versions. NOTE: You’ll need to SELECT the versions you want to compare. In my case, I want to compare against version 2.
By default, we’re presented with the diff as code. Selecting the “Diff Visually” button switches us to the graphical representation.
With that, we can select “Release” on as many of the blueprint versions as we want to publish to the Service Broker interface. Releasing a blueprint indicates it’s ready for user consumption. This approach gives us the ability to work with several draft versions of a blueprint without exposing our “backend” work to our end users.
How Do I Handle Existing Workloads?
Significant effort has been put into addressing how to bring existing workloads under management. As the product continues to mature, many “Day-2” capabilities will be introduced for interacting with existing workloads. Power options, resource expansion, and snapshotting are all capabilities you can expect to see make their way into the platform. To handle onboarding resources, users can select the “Onboarding Plans” button near the bottom of the screen, on the main infrastructure tab.
Entering this screen allows us to create an onboarding plan, where we can select existing workloads from any of our “bound” cloud accounts. In the below example, I’ve used the Onboarding wizard to create a few sample deployments from existing machines in my Private Cloud environment.
This streamlined, multi-cloud onboarding process makes it very easy to bring existing workloads under management within Cloud Assembly. From here, we have the ability to create blueprints that represent the components as a deployment. Gone are the days when we needed to assign these machines to existing blueprints.
Where Do We Go From Here?
At this point, we have toured the major functionality of Cloud Assembly. Extensibility was not covered in this blog post – specifically integrations with vRealize Orchestrator and the new Action-Based Extensibility platform – as that topic is going to get its own dedicated blog treatment. We’ve introduced you to a number of new concepts and shown you how Cloud Assembly enables developers and administrators to consume multi-cloud services with infrastructure as code at the forefront.
Stay tuned for the next stop in our journey: Service Broker. With Service Broker, we take the blueprints we’ve created here and expose them (as well as AWS CloudFormation templates) to end users for consumption.
Visit our website to learn more about Cloud Assembly.