
Kubernetes Namespace Management in Cloud Assembly

VMware is making huge investments in our ability to deploy and manage Kubernetes. This is evident in the many VMworld announcements that you hopefully heard while attending the conference or have read in post-conference material. This should come as no surprise, as we (VMware) feel it is our “birthright” to own the container management market. In this blog I am going to cover a new capability in VMware Cloud Assembly that allows you to better manage Kubernetes namespaces across all of your clusters.

 

The Scenario:

Let’s say you are the virtualization administrator in your company, and the boss came by and just told you that you are becoming the proud owner and operator of all the Kubernetes clusters in your environment. You need to manage not just the clusters but also the namespaces used in those clusters, control who has access to the namespaces, AND provide an easy way to request a new managed namespace in a cluster. Your response…that’s easy, boss!!

 

So why is this easy? Because you are using vRealize Automation, which now has the ability to manage namespaces within Kubernetes clusters and control role-based access to those namespaces!

 

Let’s walk through how we do this!!

 

On-boarding Clusters:

There are lots of areas where vRealize Automation integrates with Kubernetes. I’m not going to go into every point of integration here, but you can check out this blog if you want to know more about all the integrations. Here we are going to solve the namespace management issue the boss just dropped on us. We need to start by on-boarding our existing K8s clusters. Of course, VMware has direct integration with Enterprise PKS (as well as OpenShift) and can deploy K8s clusters on a PKS environment, but we need to take existing clusters under management. We can do that simply by adding the existing clusters.

Again, I’m not going to go through that configuration process because it’s pretty self-explanatory. Once I have on-boarded the cluster(s), I can see all kinds of information about each cluster, including the namespaces that have already been deployed on it.

You can see all the namespaces in the cluster, and you will notice that none of them are managed. At this point you could select any number of the namespaces and bring them under management by selecting them and clicking ADD TO PROJECT. This lets you take any existing namespace and bring it under management in vRealize Automation. By doing this, you can now control who has access to the namespace based on the user’s project membership. We will come back to managed namespaces a little later.

 

Kubernetes Zones:

For now, we need to take all of our clusters and add them to our Kubernetes Zones so we can tag them and make better placement decisions when one of our developers needs a new namespace. To do this, simply click on Kubernetes Zones and select ADD NEW KUBERNETES ZONE. You will be presented with the following configuration screen:

vRealize Automation groups all clusters that are not from Enterprise PKS or OpenShift into a single bucket called “All external clusters”. Select this group from the list and enter a name for the zone. Don’t worry, you can still see the individual clusters in the group by clicking into the newly created zone and selecting the Clusters tab.

Notice that I also tagged the clusters (k8s:dev and k8s:prod). This will be important later on when we are making placement decisions for our self-service namespace request. Save the zone.

Now we can add the Kubernetes Zone to our project so users of the project can begin using them.

Click on your Project, then Kubernetes Provisioning, and ADD ZONE to select the new K8s zone to be added to the project.

 

Creating A Namespace Blueprint:

Now that we have on-boarded our Kubernetes clusters, created our Kubernetes Zone, and added the zones to our project, we are ready to build a blueprint that can deploy a K8s namespace.

Drag the new K8s Namespace object onto the blueprint design canvas. Cloud Assembly will start building out the infrastructure-as-code YAML for you when you drop an object onto the canvas.
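If you’re curious what that starting point looks like, it’s roughly the skeleton below. The resource name Cloud_K8S_Namespace_1 is just the auto-generated default, and the exact schema can vary by release, so treat this as illustrative:

```yaml
# Auto-generated starting point when the K8s Namespace object is dropped on the canvas
resources:
  Cloud_K8S_Namespace_1:
    type: Cloud.K8S.Namespace
    properties: {}
```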

I’m not going to go into great detail on how you create the YAML blueprint but I will show you the finished blueprint and explain the parts.

The above image shows the completed blueprint. I added a couple of things of interest to make this blueprint usable in the catalog.

First, I added inputs:

  • nsname input – a free-form field complying with the [a-z0-9] pattern. This serves as the name for the new namespace.
  • environment input – this is a dropdown selection so the requestor can select the environment for the namespace (Dev or Prod).

Second, I added a constraint tag:

  • The constraint tag (${input.environment == "Prod" ? "k8s:prod" : "k8s:dev"}) is a blueprint expression that applies one of the tags we put on the K8s clusters in the Kubernetes Zone earlier. You can see it in context in the YAML sketch below.
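Putting those pieces together, here is a sketch of what the finished blueprint YAML might look like. I’m approximating the input titles and the exact pattern string, so check it against your own environment before using it:

```yaml
formatVersion: 1
inputs:
  nsname:
    type: string
    title: Namespace Name
    # free-form field that must match the lowercase alphanumeric pattern
    pattern: '[a-z0-9]+'
  environment:
    type: string
    title: Environment
    # dropdown so the requestor can pick where the namespace lands
    enum:
      - Dev
      - Prod
    default: Dev
resources:
  Cloud_K8S_Namespace_1:
    type: Cloud.K8S.Namespace
    properties:
      # the requested namespace name comes straight from the input
      name: '${input.nsname}'
      constraints:
        # route Prod requests to clusters tagged k8s:prod, everything else to k8s:dev
        - tag: '${input.environment == "Prod" ? "k8s:prod" : "k8s:dev"}'
```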

Now that the blueprint is complete, create a version and release it to the catalog.

 

Requesting the Self-Service Namespace Catalog Item:

Now that you have the namespace blueprint released, you can head to Service Broker and see it in the catalog.

Now any member of the project can request a namespace from the catalog item.

Notice the inputs we put in the blueprint are available in the catalog item. In this request we want to create a namespace for newapp in the Dev K8s Cluster. We can do that by selecting Dev from the dropdown.

Just like when you deploy workloads in vRealize Automation you can see the K8s namespace request in the deployments list.

 

Managing the Namespace:

Now that the request has completed let’s head back to Cloud Assembly and see what actually happened.

Since we selected Dev during the deployment process we will open the Dev cluster from Cloud Assembly. Then select the Namespace tab.

On the Namespace tab you will see that the new namespace was created. You can also easily get the access information to run native kubectl commands against the namespace and cluster by clicking the download link in the Kubeconfig column.
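To make that concrete, here is what a developer might do with the downloaded file. I’m calling it newapp-kubeconfig.yaml purely for illustration (the actual filename will differ), and the Deployment manifest is a generic example rather than anything generated by vRealize Automation:

```yaml
# Save this manifest as nginx.yaml, then apply it with the downloaded kubeconfig:
#   kubectl --kubeconfig ./newapp-kubeconfig.yaml apply -f nginx.yaml
#   kubectl --kubeconfig ./newapp-kubeconfig.yaml get pods -n newapp
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: newapp   # the namespace provisioned through the catalog item
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
```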

 

Deleting the Namespace:

Just like when you delete a deployment in vRealize Automation, the machines associated with that deployment are deleted as well. This same principle applies to the namespaces that are created through the Namespace catalog item.

Clicking Delete from the Actions menu on the deployment will begin the process of removing the namespace from the Dev cluster.

 

Managing Supervisor Namespaces in vSphere with Kubernetes:


vSphere with Kubernetes will revolutionize how Kubernetes is delivered to developers by making it a first-class citizen within vSphere. In vRealize Automation, an administrator can create a supervisor namespace on a supervisor cluster and assign the namespace to a project. Users in that project can then grab the kubeconfig and use kubectl to deploy application containers and VMs.

Summary:

Though this is a simple example of how you can create a self-service process for users to request namespaces in a Kubernetes cluster, you can see there is a ton of power in being able to manage both the K8s clusters themselves and the controls around namespaces on those clusters. With vRealize Automation you can take control of the exploding Kubernetes and container footprint in your environment and put the same governance and controls around these new capabilities that you have always had for VMs.

 

 

 

Other Cool Blogs:

Blueprint Object Properties Editor

Kubernetes Integration in vRealize Automation Cloud

vRealize Automation Cloud APIs First

Service Broker Policy Criteria

 

Try our vRealize Automation Hands-On Lab here