Introduction to kubeadm

Summer 2015. Kubernetes 1.0 had just arrived, but with one huge user experience problem: there was no easy way to install it. Anyone wanting to try Kubernetes had to endure the hassle of figuring out how to create a cluster on their own. This was no easy job, as Kubernetes consists of several services that must each be configured correctly.

Initially, only kube-up was available. It was a small shell script designed solely for testing purposes. But very soon a variety of solutions started springing up—some were shell scripts like kube-up, while others built on tools like Ansible, Chef and Puppet. Among these, kops, kubernetes-anywhere and kubespray became the most widely used and adopted. Still, none of them were perfect. The community began looking outside for inspiration and found it in the form of Docker Swarm.

Yes, that’s right: Swarm is extremely easy to deploy. It’s simply a matter of running an “init” command on a single node and then using the printed “join” command to attach all other nodes. Many people in the Kubernetes community asked themselves: “Why can’t we have a similarly easy deployment process?” That question was the genesis of kubeadm.

The goal was to provide as easy a user interface as possible, while retaining some of the benefits of Kubernetes’ modular design. Early in the design process, it was decided that the user would be responsible for providing the container runtime (CRI), the kubelet daemon and the network plugin—these were left outside the scope of kubeadm.

Besides dealing with initial cluster deployment, kubeadm was designed to allow for the seamless upgrade, modification and teardown of an existing cluster. The user could also be given the choice of supplying their own etcd cluster or relying on kubeadm to set one up.

With those design goals established, work on kubeadm was started by Kubernetes’ SIG (Special Interest Group) Cluster Lifecycle. The first version to ship was part of Kubernetes 1.5. Since then, kubeadm has progressed steadily to beta and we expect to reach General Availability in late 2018 or 2019.

Currently, kubeadm can deploy, upgrade, modify and tear down Kubernetes clusters in both single and multi-master (High Availability) modes. You can use either an existing etcd cluster or one created by kubeadm to do so. Kubeadm also deploys kube-proxy and a DNS plugin (either kube-dns or CoreDNS, with the latter being the default). Aside from Docker, many different CRIs can be used with kubeadm and Kubernetes, with containerd and CRI-O being among the most popular.
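As a taste of what that looks like in practice, upgrading an existing cluster is a two-step affair using the kubeadm upgrade subcommands (the version number below is illustrative):

```shell
# Preview which versions the cluster can be upgraded to,
# and check that the cluster is in an upgradable state.
sudo kubeadm upgrade plan

# Apply the upgrade to the control plane components.
sudo kubeadm upgrade apply v1.12.0
```

After the control plane is upgraded, each node’s kubelet package still needs to be upgraded separately through your OS package manager.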

I won’t walk you through every step of Kubernetes deployment via kubeadm—those are best covered in the official documentation. But the general idea is to set up the first control plane node with kubeadm init and then use the kubeadm join command on other nodes to join them to the cluster. Additionally, different subcommands exist for common tasks, such as upgrading or resetting a node, managing configuration and bootstrap tokens, etc.
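To give a rough sketch of that flow (the address, token and hash below are placeholders, not real output):

```shell
# On the first control plane node: initialize the cluster.
# On success, kubeadm prints a matching "kubeadm join" command.
sudo kubeadm init

# Copy the admin kubeconfig so kubectl can talk to the new cluster.
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# On each additional node: join the cluster using the bootstrap
# token and CA certificate hash that init printed.
sudo kubeadm join <control-plane-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```

Don’t forget that a network plugin still has to be installed before the cluster becomes fully functional—as noted above, that part is deliberately left outside kubeadm’s scope.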

The biggest goal for kubeadm in the near future is to reach general availability. For that to happen, we first need to:

  • Graduate the config file format to beta. With the release of Kubernetes 1.12, we have reached the Alpha 3 stage. The past couple of release cycles saw huge advances in the right direction, but there’s still work left to do.
  • Overhaul the command line. kubeadm needs to provide a unified policy for its interface. Command line flags have to override the corresponding parts of the config file when an option is specified in both places. Also, kubeadm’s operational phases need to be exposed to the user in a flexible and consistent manner, so that you can exclude certain operations from occurring during init or invoke a particular one at will.
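For readers who haven’t seen it, here’s a minimal sketch of what such a config file looks like in the v1alpha3 format that ships with Kubernetes 1.12 (the field values are illustrative and the format will change as it graduates to beta):

```yaml
apiVersion: kubeadm.k8s.io/v1alpha3
kind: ClusterConfiguration
kubernetesVersion: v1.12.0
networking:
  # Pod CIDR; must match what your network plugin expects.
  podSubnet: 10.244.0.0/16
```

You would then pass it to kubeadm with `kubeadm init --config kubeadm.yaml`, and any flags given on the command line are where the override policy mentioned above comes into play.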

Although many talented developers from a variety of companies are working on the project, kubeadm still lacks developer power. But we’re hoping that as adoption continues to increase, more developers will be motivated to engage and help the team.

To start contributing, join the kubeadm office hours meeting, ping #kubeadm and #sig-cluster-lifecycle on Kubernetes Slack and, most importantly, pick an issue from the kubeadm issue tracker. Issues marked with “good first issue” and “help wanted” are a great starting point. Those labeled “priority/backlog” are also a good way to make an impact.

Even better news—if you’re attending Open Source Summit Europe this coming month in Edinburgh, stop by our “Introduction to kubeadm” session. We’ll introduce the audience to kubeadm, provide a brief history of the tool, review its features, design and status, and then demo the tool. For more information, check out our OSSEU preview blog here.

Stay tuned to the Open Source Blog and be sure to follow us on Twitter (@vmwopensource) for future updates.

