By Tom Schwaller, Sr. Technical Product Manager, VMware CNABU

This blog post looks at the most important control plane components of a single Kubernetes master node — etcd, the API server, the scheduler and the controller manager — and explains how they work together. Although other components, such as DNS and the dashboard, come into play in a production environment, the focus here is on these specific four.


etcd is a distributed key-value store written in Go that provides a reliable way to store data across a cluster of machines. Kubernetes uses it as its brain, but only the kube-apiserver talks to it directly to save desired states. To get an idea of how etcd works, download the latest binary release for your preferred operating system and simply execute etcd. If you have a Go development environment ready on your system (on a Mac, just run: brew install go), you can also clone the etcd GitHub repo and start a local multi-member cluster with goreman as follows:


In a production environment, you have to use certificates and TLS-based encryption, but this is all handled by the framework that sets up Kubernetes (VMware Pivotal Container Service, for instance). The binary download package includes a command-line tool, etcdctl, which communicates with your local etcd cluster using the efficient gRPC protocol. Be aware that there were some major changes from etcd2 to etcd3 (such as switching from HTTP to gRPC and using a flat data model instead of a hierarchical one), so it is important to specify which protocol version you want to use.


You can also read past versions of keys, watch them (including historical changes), and grant leases. For more information, check out the video above. To see a watch in action, start one on a key in a first terminal session, then update that key from a second terminal: the first session prints each change as it happens.


The kube-apiserver is the spider in the web of Kubernetes components. It is the front end of the Kubernetes control plane, exposes the Kubernetes API, and is designed to scale horizontally by deploying more instances and load-balancing across them. When you start pods or apply deployment manifests with kubectl, it communicates with the kube-apiserver, which authenticates you and checks whether your actions are allowed in the namespace you are trying to use. The API server also validates your YAML files (kubectl apply -f app.yaml), and if everything checks out, it writes the desired state to the etcd cluster.
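For illustration, a minimal app.yaml might describe a made-up two-replica Deployment like this (the name and image are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: web
        image: nginx:1.25
```

Running kubectl apply -f app.yaml sends this manifest to the kube-apiserver; once it passes authentication, authorization, and validation, the desired state (two replicas of this pod) is persisted in etcd.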

Other Kubernetes components watch the API endpoints that are relevant to them and act accordingly. These components have to talk to the kube-apiserver over persistent HTTP connections, since they cannot talk to etcd directly; only the API server can do that. An example is the kubelet on each worker node, which is responsible for starting Kubernetes pods (a collection of containers sharing the same IP address and volumes). The system is thus eventually consistent: things get done, but not immediately.

To get some more information about how the kube-apiserver works, you can also watch the YouTube video below, but even then we are only scratching the surface. The more you dig into the architectural details, the more you realize that Kubernetes uses a very sophisticated, highly extensible, and decoupled design.


The kube-scheduler is responsible for scheduling pods on nodes. When you create a pod, the scheduler assigns a node to it using information about available resources and restrictions such as quality of service, affinity and anti-affinity rules, data locality, and hardware, software, or policy constraints. A Kubernetes administrator can also constrain scheduling decisions by using node selectors, which determine which node (or group of nodes) a pod may run on. If the default scheduler does not suit your needs, you can implement your own, which might be useful in HPC environments with many compute jobs competing for resources and time slots. You can even run multiple schedulers alongside the default one and tell Kubernetes which scheduler to use for each of your pods.
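To make both ideas concrete, here is a made-up pod spec that restricts placement with a nodeSelector and opts into a custom scheduler via schedulerName (the scheduler name and node label are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: compute-job
spec:
  # Only nodes carrying the label disktype=ssd are candidates.
  nodeSelector:
    disktype: ssd
  # Hand this pod to a custom scheduler; omit the field to use
  # the default kube-scheduler.
  schedulerName: my-hpc-scheduler
  containers:
  - name: worker
    image: busybox:1.36
    command: ["sleep", "3600"]
```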

If you are interested in a more detailed explanation of the default kube-scheduler workflow, watch the following video on YouTube:


According to the Kubernetes documentation, the kube-controller-manager is a daemon that embeds the core control loops (i.e., controllers) shipped with Kubernetes in a single binary. A controller watches the shared state of the cluster through the kube-apiserver and makes changes attempting to move the current state toward the desired state. Examples of controllers are the deployment controller, DaemonSet controller, node controller, job controller and namespace controller.

To better understand this concept, consider two examples: when the DaemonSet controller sees a new DaemonSet configuration, it creates a pod from that pod template on every eligible node, and the node controller is responsible for noticing and responding when nodes go down. Each controller thus watches for specific events and configuration changes and reacts accordingly.
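The watch-diff-act pattern behind every controller can be sketched in a few lines of shell. This toy loop (illustrative only, no Kubernetes involved) treats files in a directory as running replicas and converges the observed count toward a desired count:

```shell
# Desired state: three "replicas", represented here as files.
desired=3
mkdir -p current

# Reconcile loop: observe the current state, compare it with the
# desired state, and act on the difference. A real controller does
# the same thing against the kube-apiserver instead of a directory.
for _ in 1 2 3 4 5; do
  observed=$(ls current | wc -l | tr -d ' ')
  if [ "$observed" -lt "$desired" ]; then
    touch "current/replica-$observed"   # create a missing replica
  fi
done

echo "observed=$(ls current | wc -l | tr -d ' ') desired=$desired"
# prints: observed=3 desired=3
```

Deleting a file and rerunning the loop recreates it, which is exactly how a Kubernetes controller heals drift between current and desired state.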

Hopefully, you now have a basic understanding of the major Kubernetes master components and their interactions.

Stay tuned to the Cloud-Native Apps blog for more insights into Kubernetes, and be sure to follow us on Twitter (@cloudnativeapps).