Authored by Massimo Re Ferre, Technical Product Manager for Cloud Native Applications

KubeCon 2017 contained plenty of presentations that moved attendees further up the steep learning curve of Kubernetes. Listening to the advanced experiences and the enthusiasm of presenters gives you the sense that Kubernetes is here to stay, and that it will be a key driving force in the future of cloud computing.

The technology is evolving quickly. It is bringing success to startups and small organizations, as well as to pockets of enterprises. Where it has been deployed in those enterprise pockets, the teams that own the deployment are starting to seek help from IT to run Kubernetes for them, and multitenancy and security are beginning to become concerns.

Meanwhile, at the expo, the dominant areas of the Kubernetes ecosystem on display were setup, maintenance, networking, and monitoring. There were, in particular, many interesting offerings and solutions in the area of monitoring.

During the keynote, areas of improvement and the newer features of Kubernetes were at the heart of the presentation by Aparna Sinha of Google’s Kubernetes product team. Improvements include support for 5,000 hosts, RBAC, and dynamic storage provisioning. One notable new scheduler feature is taints and tolerations, which may be useful for dedicating specific worker nodes to different namespaces.
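As a sketch of how that segmentation might look (the node name, taint key, and team names here are illustrative, not from the talk): an admin taints a node, and only pods carrying a matching toleration can be scheduled onto it.

```yaml
# An admin first taints a node so that only tolerating pods land on it, e.g.:
#   kubectl taint nodes worker-1 dedicated=team-a:NoSchedule
# A pod for that team then declares a matching toleration:
apiVersion: v1
kind: Pod
metadata:
  name: team-a-app
  namespace: team-a
spec:
  tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "team-a"
    effect: "NoSchedule"
  containers:
  - name: app
    image: nginx
```

Pods without the toleration are simply never scheduled onto the tainted node, which is what makes the mechanism useful for carving out nodes per namespace or team.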

Etcd version 3 got a mention as playing quite a big role in the scalability enhancements to Kubernetes, but the new version seemed to trigger concern among some participants about how to safely migrate from etcd version 2 to etcd version 3.

Aparna also talked about disks. She suggested leveraging claims to decouple the K8s admin role (infrastructure aware) from the K8s user role (infrastructure agnostic).
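That decoupling is visible in a PersistentVolumeClaim: the user states only what they need, not where it comes from (the name and size below are made up for illustration).

```yaml
# The K8s user asks for storage in infrastructure-agnostic terms; the admin
# (or a dynamic provisioner) is responsible for satisfying the claim.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```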

Dynamic storage provisioning is available out of the box, and it supports a set of infrastructure back ends (GCE, AWS, Azure, vSphere, Cinder).
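On the admin side, a StorageClass is what ties a class name to one of those back-end provisioners; a hedged sketch using the vSphere provisioner (the class name and parameters are just one possibility):

```yaml
# An admin-defined class of storage; users never see the provisioner details.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/vsphere-volume
parameters:
  diskformat: thin
```

A claim that requests the `fast` class then triggers volume provisioning automatically, with no admin intervention per volume.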

For the next version of Kubernetes, Aparna alluded to some network policies being cooked up.
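To give a flavor of the shape these policies take (a sketch, not something shown at the keynote; enforcement also requires a network plugin that supports policy), a NetworkPolicy selects a set of pods and whitelists their allowed ingress:

```yaml
# Hypothetical policy: only pods labeled app=frontend may reach app=db pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-frontend
spec:
  podSelector:
    matchLabels:
      app: db
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
```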

Next, Clayton Coleman of Red Hat talked about K8s security. When he asked how many people set up and consume their own Kubernetes cluster, the vast majority of the audience raised their hands; very few, it seems, are running centralized Kubernetes instances that users access in multitenant mode, an understandable state of affairs given that RBAC has only just made it into the platform.

Clayton went on to mention that security in these “personal” environments isn’t as important as it will be when K8s starts to be deployed and managed by a central organization expressly for users to consume it. At that stage, a clear definition of roles and proper access control will be paramount. As a side note, with 1.6, cluster-up doesn’t enable RBAC by default, but kubeadm does.
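As a minimal sketch of what that role definition looks like (the namespace, user, and role names are invented), RBAC pairs a Role, which grants verbs on resources, with a RoleBinding, which attaches the role to a subject:

```yaml
# A role allowing read-only access to pods in one namespace...
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  namespace: team-a
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
# ...bound to a specific user in that namespace.
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  namespace: team-a
  name: read-pods
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```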

On Thursday, Kelsey Hightower talked about cluster federation, that is, federating different K8s clusters. The federation API control plane is a special K8s client that coordinates dealing with multiple clusters.

Many of the breakout sessions were completely full. The containerd session presented by Docker highlighted that containerd was born in 2015 to control and manage runC. Its K8s integration will look like this:

Kubelet -> CRI shim -> containerd -> containers

Keep in mind, though, that there is no opinionated networking support, no volumes support, no build support, no logging management support, etc.

Containerd uses gRPC and exposes gRPC APIs, and the expectation is that you interact with containerd through those APIs, typically via a platform. There is a containerd CLI, but it is not expected to be a viable way for a standard user to deal with containerd. In other words, containerd will not have a fully featured, supported CLI. It is, instead, code that is to be used with or integrated into higher-level code, such as Kubernetes or Docker.

gRPC and container metrics are exposed via a Prometheus endpoint. Full Windows support is planned but not yet in the repo.

One speaker, Justin Cormack, mentioned that VMware has an implementation that can replace containerd with a different runtime, the vSphere Integrated Containers engine. For more on containerd, see my earlier blog post, Docker Containerd Explained in Plain Words.

Another interesting breakout session was on cluster operations. Presented by Brandon Philips, the CoreOS CTO, the session covered some best practices to manage Kubernetes clusters. What stood out was the mechanism that Tectonic uses to manage the deployment. Fundamentally, CoreOS deploys Kubernetes components as containers and lets Kubernetes manage those containers (basically letting Kubernetes manage itself). This way Tectonic can take advantage of Kubernetes’s own features, such as keeping the control plane up and running and doing rolling upgrades of the API and scheduler.
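The self-hosting idea can be sketched like this (the image, version, and flags are illustrative, not Tectonic’s actual manifests): a control-plane component such as the scheduler runs as an ordinary Deployment, so Kubernetes itself restarts it if it dies and can roll it to a new version.

```yaml
# The scheduler managed as a regular workload: upgrading it becomes a
# rolling update of this Deployment rather than a manual host operation.
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: kube-scheduler
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        component: kube-scheduler
    spec:
      containers:
      - name: kube-scheduler
        image: gcr.io/google_containers/hyperkube:v1.6.0
        command:
        - /hyperkube
        - scheduler
        - --leader-elect=true
```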

Another session covered Helm, a package manager for Kubernetes. Helm Charts are logical units of K8s resources plus variables. The aim of the session was to present new use cases for Helm that aspire to go beyond the mere packaging and interactive setup of a multi-container app.
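At its simplest, a chart is a small directory of metadata, default values, and templated manifests; here is a hedged sketch of a hypothetical chart in the Helm 2-era layout (all names are made up):

```yaml
# my-app/Chart.yaml -- chart metadata
name: my-app
version: 0.1.0

---
# my-app/values.yaml -- the "variables" half of "resources plus variables"
replicaCount: 3
image: nginx:1.11

# A manifest under my-app/templates/ then references these values, e.g.
#   replicas: {{ .Values.replicaCount }}
# and `helm install ./my-app` renders and deploys the whole unit.
```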

All in all, KubeCon exposed a lot of people’s experiences with Kubernetes to help developers and operators learn about the system and its related projects, adapt the system to their needs, and deploy it successfully.