
VMware Cloud Director Container Service Extension with Cluster Autoscaling

In our constant endeavour to support our partners with the latest innovations and services that enrich their business offerings, I am excited to share another enhancement: VMware Cloud Director Container Service Extension (CSE) – our multi-tenant Kubernetes as a Service (KaaS) platform – now supports Horizontal Auto Scaling of Kubernetes (K8s) clusters.

Let me take a step back and mention that the world of Kubernetes and the ecosystem around it have been growing very rapidly in the market. VMware, and the Cloud Service Provider business unit within the company, are very well positioned to serve a good portion of this market. The innovations we have delivered over the last couple of years, together with our work with the VMware Tanzu teams, really give us the opportunity to be the vendor of choice for Kubernetes infrastructure as a service on top of the VMware Cloud Director managed IaaS stack.

Auto Scaling is a great addition for our KaaS providers, who can now enhance their service offering to support their tenants' application needs on the go. It resonates well with our Service Provider consumption model and further provides great economics for a multi-tenant KaaS deployment.

So, let us start by understanding: what is auto scaling?

Auto scaling, simply put, is the scaling – adding (up) or reducing (down) – of resources based on the demand requested by the clusters.

There are two popular kinds: vertical auto scaling and horizontal auto scaling. The former scales up or down the resources assigned to the respective nodes in a cluster. If there is demand for CPU or memory, vertical scaling enables clusters to access these resources as they need them.

Horizontal auto scaling adds or removes pods or nodes depending on the workload requirement.
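At the pod level, this is what the Kubernetes Horizontal Pod Autoscaler (discussed further below) does. As a minimal sketch – assuming a recent kubernetes Python client with autoscaling/v2 support, a reachable cluster, and an illustrative Deployment named web in the default namespace – an autoscaler that keeps the workload between 2 and 10 replicas at 80% average CPU utilisation could be created like this:

```python
# Minimal sketch: create a pod-level Horizontal Pod Autoscaler with the
# kubernetes Python client. The Deployment name "web", the namespace, and
# the 2-10 replica bounds are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()  # use the current kubeconfig context

hpa = client.V2HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="web-hpa", namespace="default"),
    spec=client.V2HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V2CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="web"
        ),
        min_replicas=2,
        max_replicas=10,
        metrics=[
            client.V2MetricSpec(
                type="Resource",
                resource=client.V2ResourceMetricSource(
                    name="cpu",
                    target=client.V2MetricTarget(
                        type="Utilization", average_utilization=80
                    ),
                ),
            )
        ],
    ),
)

client.AutoscalingV2Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="default", body=hpa
)
```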

In this blog, we'll focus on horizontal auto scaling and understand how CSE extends this support to our cloud providers.

As stated above, auto scaling is a process that adds nodes to the cluster based on the resource requests from the pods or the demands of the application(s).

So, why do we need auto scaling?

Auto scaling effectively addresses the needs and demands of businesses. It ensures that there are sufficient resources to keep applications running optimally, which makes for higher availability and, in turn, better resilience. Running applications optimally also means using resources optimally: the system can handle the spikes and troughs of application demand efficiently, which ensures not only resource availability but also optimal resource utilisation, and therefore cost optimality.

With the Horizontal Pod Autoscaler – a community-driven Kubernetes project – now natively supported through CSE, VMware K8s providers can offer enhanced services that meet the modern application demands of their tenants. Providers can offer better SLAs that ease the decision fatigue for their tenants when projecting their demands and costs.

How does the auto scaler work in CSE?

The auto scaler works alongside CAPVCD and the worker nodes once they are deployed. These components act as a 'controller' that monitors the nodes and pods, continuously observing vital metrics such as CPU cores and memory. The observed values are then compared against the values defined by the user, and any scale action is implemented via the Cluster API.
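To give a concrete flavour of those user-defined values, the sketch below patches a Cluster API MachineDeployment with the node-group size annotations that the community cluster autoscaler's Cluster API provider reads. This is a generic Cluster API illustration rather than the CSE-specific procedure (see the whitepaper linked below for that), and the MachineDeployment name, namespace and API version shown are assumptions:

```python
# Sketch (generic Cluster API, not CSE-specific): set the node-group size
# bounds that the community cluster autoscaler's clusterapi provider reads
# from MachineDeployment annotations. Name, namespace and version are assumed.
from kubernetes import client, config

config.load_kube_config()

patch = {
    "metadata": {
        "annotations": {
            "cluster.x-k8s.io/cluster-api-autoscaler-node-group-min-size": "1",
            "cluster.x-k8s.io/cluster-api-autoscaler-node-group-max-size": "5",
        }
    }
}

client.CustomObjectsApi().patch_namespaced_custom_object(
    group="cluster.x-k8s.io",
    version="v1beta1",          # assumed Cluster API version
    namespace="my-cluster-ns",  # assumed namespace of the cluster objects
    plural="machinedeployments",
    name="my-cluster-md-0",     # assumed MachineDeployment name
    body=patch,
)
```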

Let's look at a simple example. Say core utilisation is at 80% on one pod (pod A) and at 50% on another (pod B). When a change is requested – that is, the controller senses the observed values going beyond the defined value of 90% utilisation – and the number of pending (unschedulable) pods increases due to resource shortages, the cluster auto scaler automatically kicks in and adds nodes to the cluster that can accommodate the resource requirement of pod A.

On the other hand, if pod B's metric utilisation falls below the defined value of 30%, the auto scaler reduces the number of nodes to match the user-defined values, thereby reducing costs and optimising the resources required to run the pod.
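To make the behaviour above concrete, here is a heavily simplified, self-contained model of that decision logic – plain Python, no cluster required, and not CSE's actual implementation. It scales out when pods are pending and utilisation crosses the upper threshold, and scales in when utilisation drops below the lower one, using the 90% and 30% values from the example:

```python
# Heavily simplified model of the scale decision described above
# (illustration only, not CSE's implementation). Utilisation is a fraction.
SCALE_OUT_UTILISATION = 0.90   # user-defined upper threshold from the example
SCALE_IN_UTILISATION = 0.30    # user-defined lower threshold from the example

def desired_node_count(nodes: int, node_utilisation: list[float],
                       pending_pods: int, min_nodes: int = 1,
                       max_nodes: int = 5) -> int:
    """Return the new node count for one reconciliation pass."""
    average = sum(node_utilisation) / len(node_utilisation)
    # Scale out: pods are pending (unschedulable) and the cluster is running hot.
    if pending_pods > 0 and average >= SCALE_OUT_UTILISATION:
        return min(nodes + 1, max_nodes)
    # Scale in: the cluster is mostly idle, so drop a node to save cost.
    if pending_pods == 0 and average <= SCALE_IN_UTILISATION:
        return max(nodes - 1, min_nodes)
    return nodes

# Example: three hot nodes with unschedulable pods -> add a node.
print(desired_node_count(3, [0.95, 0.92, 0.91], pending_pods=2))  # 4
# Example: three mostly idle nodes -> remove a node.
print(desired_node_count(3, [0.25, 0.20, 0.10], pending_pods=0))  # 2
```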

We have published a Technical Whitepaper that discusses in detail the design, requirements and implementation of the cluster autoscaler on CSE. Read the whitepaper.

Also, check out this episode of Feature Friday from our experts to learn how this enhancement helps the Developer Ready Cloud offering.

With CSE now supporting autoscaling capabilities (both vertical and horizontal), the solution brings great value to our partners and their businesses. It also enables partners to build and offer differentiated K8s services that support the latest and greatest innovations in the modern applications world.


For more details on the product, contact Manish Arora, Director of Product Management for Modern Apps and Sovereign Clouds for VMware Cloud Service Providers. You can also connect with us on our dedicated Slack channel, and we'd be happy to respond to your queries and feedback. If you are not a member yet, please email us for access to the VMware Cloud Provider Slack channel. Or, leave a reply.