
Cluster API Simplifies Execution and Builds Solid Foundations with v1alpha2

With the new v1alpha2 release of Cluster API, we've advanced our vision of using a declarative API to create and manage Kubernetes clusters. Cluster API is a declarative API specification that builds on top of Kubeadm to add optional support for uniformly managing a Kubernetes cluster's infrastructure and lifecycle.

After the v1alpha1 release in March, community feedback and retrospectives led us to identify several key areas to improve in this new release as we refine the project’s scope and objectives:

  • Code sharing between different providers
  • Composability, boundaries, and responsibilities across components
  • User experience around API validation
  • Documentation and first-run experience

To improve several of these areas, we split the API into three functional groups:

  1. The Core Cluster API, which contains the core types.
  2. Bootstrap Providers, which turn any Linux machine into a Kubernetes node.
  3. Infrastructure Providers, which create and manage the underlying infrastructure for a cluster.
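
This split is reflected directly in the API groups that v1alpha2 objects belong to. As a quick orientation (the bootstrap and infrastructure kinds shown here come from the Kubeadm and AWS providers):

    cluster.x-k8s.io/v1alpha2                  # core types: Cluster, Machine, MachineSet, MachineDeployment
    bootstrap.cluster.x-k8s.io/v1alpha2        # bootstrap types, e.g. KubeadmConfig
    infrastructure.cluster.x-k8s.io/v1alpha2   # infrastructure types, e.g. AWSCluster, AWSMachine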

Separating Responsibilities for a Better User Experience

This separation of responsibilities lets us eliminate the duplication of node bootstrapping logic and focus the infrastructure providers solely on managing the underlying infrastructure. Separating node bootstrapping from infrastructure provisioning enables you to mix and match Kubernetes distributions and cloud providers.

Provider-specific configurations now live in Custom Resource Definitions. To improve the user experience, especially when dealing with configuration errors, these configurations also have first-class support for OpenAPI-based validation.
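
As a rough sketch of what this enables, an infrastructure provider can publish an OpenAPI v3 schema in its CRD so that the API server rejects invalid objects at creation time. The excerpt below is hypothetical (the schemas providers actually generate are far more extensive), but it shows the idea: with a schema like this, an AWSCluster missing its region never makes it into the cluster.

    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata:
      name: awsclusters.infrastructure.cluster.x-k8s.io
    spec:
      group: infrastructure.cluster.x-k8s.io
      names:
        kind: AWSCluster
        plural: awsclusters
      scope: Namespaced
      version: v1alpha2
      validation:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              required: ["region"]       # hypothetical: reject objects without a region
              properties:
                region:
                  type: string
                  minLength: 1
                sshKeyName:
                  type: string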

Kubeadm bootstrapping itself has been generalized and now lives as a standalone component that can be reused to create clusters on AWS, Microsoft Azure, Google Cloud Platform (GCP), VMware vSphere, and other providers.

New Quick Start Guide and Video Demo

To wrap all of this up in a better first-run experience, we added a new Quick Start guide that takes you from zero to a running cluster, and a new video by Boskey Savla demonstrates how to use Cluster API to fire up a new cluster.

In addition, we’re excited to see the Cluster API community expand beyond service providers to include end users who are building their own Kubernetes services using the project.

The Rise of the Management Cluster

With the release of v1alpha2, management clusters take on a more important role in Cluster API: they are now the preferred way to manage target Kubernetes clusters across multiple cloud providers or on bare metal. (This diagram in the Quick Start guide shows how Cluster API components live in a management cluster and provides an overview of the custom resources.)

In v1alpha1, we shipped a command-line utility called clusterctl for creating clusters. Clusterctl worked by creating a temporary local cluster and deploying the Cluster API and provider-specific components into it. That local cluster was then used to create the target cluster. Once the target cluster was functional, clusterctl moved the Cluster API and provider components into it, a step known as the pivot.

The current plan is for a future release to ship a redesigned command-line tool that unifies the user experience and lifecycle management across Cluster API-based providers and their associated components.

Four Custom Resource Definitions

The areas of improvement that we identified after v1alpha1 led to new proposals focusing on Machines and Clusters. For the newly released v1alpha2, the API comprises four Custom Resource Definitions, or CRDs: Cluster, Machine, MachineSet, and MachineDeployment.

In the following paragraphs, we’ll go through a simple example taken from the Quick Start guide and highlight some details about the new API.

Cluster

To create or manage a Cluster, we have two objects: one from Cluster API Core and one from the AWS provider. As shown in the example below, these objects are linked with an Object Reference. The reference instructs the controller manager to use an infrastructure provider to create the required resources. By having a reference to AWSCluster, we can validate fields like region and sshKeyName, a long-awaited feature from v1alpha1.
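
A minimal sketch of these two objects, patterned after the Quick Start guide (names and values such as capi-quickstart and us-east-1 are illustrative):

    # Core object: describes the cluster's shape, not its infrastructure.
    apiVersion: cluster.x-k8s.io/v1alpha2
    kind: Cluster
    metadata:
      name: capi-quickstart
    spec:
      clusterNetwork:
        pods:
          cidrBlocks: ["192.168.0.0/16"]
      infrastructureRef:            # the Object Reference to the provider object
        apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
        kind: AWSCluster
        name: capi-quickstart
    ---
    # Provider object: AWS-specific details, validated via OpenAPI.
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
    kind: AWSCluster
    metadata:
      name: capi-quickstart
    spec:
      region: us-east-1
      sshKeyName: default

With a schema like the one sketched earlier, omitting or mistyping region would be rejected when the object is created, rather than surfacing later as a provisioning failure.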

Machine

Compared with a Cluster, a Machine object is a little more complicated. As mentioned earlier, we separated the bootstrap and infrastructure configurations into different groups.

The example below shows how a Machine is now defined and how the objects relate to each other.

The Machine object, which is from Cluster API Core, includes several fields:

  • A label to define the Machine as a control plane.
  • A required label to link the Machine to a Cluster.
  • A required reference to the infrastructure object from the infrastructure provider, such as AWSMachine.
  • A reference to the bootstrap object KubeadmConfig (from the Bootstrap Provider).
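
Here is a sketch of the three linked objects, again patterned after the Quick Start guide (names and values are illustrative):

    # Core object: links the bootstrap and infrastructure objects together.
    apiVersion: cluster.x-k8s.io/v1alpha2
    kind: Machine
    metadata:
      name: capi-quickstart-controlplane-0
      labels:
        cluster.x-k8s.io/control-plane: "true"          # marks a control plane Machine
        cluster.x-k8s.io/cluster-name: capi-quickstart  # links the Machine to its Cluster
    spec:
      version: v1.15.3
      bootstrap:
        configRef:                                      # reference to the bootstrap object
          apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
          kind: KubeadmConfig
          name: capi-quickstart-controlplane-0
      infrastructureRef:                                # required reference to the infrastructure object
        apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
        kind: AWSMachine
        name: capi-quickstart-controlplane-0
    ---
    # Infrastructure object: the EC2-level details.
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
    kind: AWSMachine
    metadata:
      name: capi-quickstart-controlplane-0
    spec:
      instanceType: t3.large
      iamInstanceProfile: "control-plane.cluster-api-provider-aws.sigs.k8s.io"
      sshKeyName: default
    ---
    # Bootstrap object: kubeadm configuration for initializing the control plane.
    apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
    kind: KubeadmConfig
    metadata:
      name: capi-quickstart-controlplane-0
    spec:
      initConfiguration:
        nodeRegistration:
          name: '{{ ds.meta_data.hostname }}'   # cloud-init substitution from instance data
          kubeletExtraArgs:
            cloud-provider: aws
      clusterConfiguration:
        apiServer:
          extraArgs:
            cloud-provider: aws                 # note the repeated cloud-provider setting
        controllerManager:
          extraArgs:
            cloud-provider: aws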

The AWSMachine object has all the AWS-related fields one would expect an EC2 instance to have; in the example above, we set instanceType, iamInstanceProfile, and sshKeyName. The KubeadmConfig, in turn, holds the configuration passed to the Kubeadm Bootstrap Provider to initialize a control plane and cluster (as in the example above) or to join a new control plane or worker node. You might notice some repetition in setting the Kubernetes cloud provider, as well as a workaround in nodeRegistration that uses cloud-init’s substitutions from instance data to set the Kubernetes node name.
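
The two remaining CRDs, MachineSet and MachineDeployment, manage groups of Machines much as ReplicaSet and Deployment manage Pods. As a rough sketch for worker nodes, with hypothetical names and assuming the template kinds (KubeadmConfigTemplate and AWSMachineTemplate) used by the AWS Quick Start:

    apiVersion: cluster.x-k8s.io/v1alpha2
    kind: MachineDeployment
    metadata:
      name: capi-quickstart-worker
    spec:
      replicas: 3
      selector:
        matchLabels:
          cluster.x-k8s.io/cluster-name: capi-quickstart
      template:
        metadata:
          labels:
            cluster.x-k8s.io/cluster-name: capi-quickstart
        spec:
          version: v1.15.3
          bootstrap:
            configRef:            # each Machine gets a bootstrap config stamped from this template
              apiVersion: bootstrap.cluster.x-k8s.io/v1alpha2
              kind: KubeadmConfigTemplate
              name: capi-quickstart-worker
          infrastructureRef:      # and an infrastructure object stamped from this template
            apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
            kind: AWSMachineTemplate
            name: capi-quickstart-worker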

During the next release cycle, we're planning to discuss how to improve the user experience further.

What’s Next

The community recently had the first Cluster API face-to-face meeting in San Francisco. The meeting helped folks to align on what’s next, gather new user stories, and have great discussions around process and product.

We identified a few high-level objectives we could be working on for v1alpha3, like control plane management, cluster upgrades, support for cloud auto-scaling groups, user experience, documentation, and so on. In the coming weeks, these objectives are expected to turn into Cluster API Enhancement Proposals (CAEPs) or issues for community review. Stay tuned or, better still, join us.

Join Us!

  • Cluster API community forum
  • sig-cluster-lifecycle Google Group to gain access to documents and calendars
  • Cluster API working group sessions—weekly on Wednesdays at 10:00 PT on Zoom
  • Provider implementer office hours—weekly on Tuesdays at 12:00 PT (Zoom) and Wednesdays at 15:00 CET (Zoom)
  • Chat with us on Slack: #cluster-api