The velocity of Kubernetes releases shouldn’t be surprising by now. The quarterly cadence is fueled by hundreds (if not thousands) of pull requests, issues, and meetings all done in complete transparency. The community rallies to get enhancements implemented and stabilized all in a self-governed environment. Nothing is accepted or rejected until it has gone through multiple layers of people and technical automation. To that end, Kubernetes 1.15 contains 25 enhancements, 10 of which are brand new (alpha) and two of which are graduating to stable.
Kubernetes 1.15 saw new performance improvements, a gradual increase in the stability of management and bootstrap components, and heightened use of custom resources.
Performance
Containers are already a very high-performing medium for running application workloads. The internal capabilities of Kubernetes are being improved to make decisions faster and have clean code constructs. NodeLocal DNS Cache is an add-on DaemonSet that runs a DNS cache pod on every node in a cluster. This capability enhances the overall performance and reliability of the internal cluster DNS.
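To give a feel for how the add-on is wired up, the node-local cache typically listens on a link-local address and the kubelet's cluster DNS setting is pointed at it, so pods hit the on-node cache before any upstream resolver. The address and field values below are illustrative, not prescriptive:

```yaml
# Sketch of a kubelet configuration that sends pod DNS queries to the
# node-local cache. 169.254.20.10 is the link-local IP conventionally
# used by the NodeLocal DNSCache add-on; adjust to your deployment.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
clusterDNS:
  - 169.254.20.10
clusterDomain: cluster.local
```

The cache pod on each node then forwards misses to the cluster DNS service, keeping most lookups off the network entirely.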
The Events API is being reworked, and the new design is introduced as alpha in Kubernetes 1.15. The new structure changes the deduplication logic so that events can't overload the cluster. This will ultimately reduce the performance impact on the rest of the cluster and open the door to automated event analysis down the road.
Many developers are going to be excited to see that Go modules are now introduced and labeled as stable. Although Kubernetes has been using Godep and custom versions of Glide to ensure reproducible builds of vendored dependencies, the ecosystem has matured over time. Vendoring can now take place natively using go1.13 with Go modules enabled by default.
Interested to see what impact this has on the code you’re contributing? Check out the Kubernetes Enhancements Proposal (KEP) on Go modules.
Management and Bootstrapping
Kubeadm has emerged as the utility of choice for creating conformant Kubernetes clusters. The teams at VMware have been working hard to make sure it's a first-class tool that integrates seamlessly with VMware vSphere. Check out Myles Gray's three-part blog series on using kubeadm with vSphere or Kendrick Coleman's handy bash scripts for CentOS 7. In 1.15, kubeadm's support for high availability (HA) graduates to beta, making it easier to create HA clusters by bootstrapping multiple control plane nodes.
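As a rough sketch of what HA bootstrapping looks like, a kubeadm configuration can declare a stable control plane endpoint (typically a load balancer) that all control plane nodes sit behind. The endpoint below is a placeholder:

```yaml
# Illustrative kubeadm configuration for an HA control plane.
# "lb.example.com:6443" stands in for your load balancer address.
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.0
controlPlaneEndpoint: "lb.example.com:6443"
```

The first control plane node is initialized with kubeadm init --config pointing at this file, and additional control plane nodes join with the kubeadm join control-plane flow printed by the init output.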
The ecosystem is full of tooling for monitoring Kubernetes clusters, but what about monitoring for custom software and hardware devices used by containers? With support for third-party device monitoring plugins moving to beta, this release lets vendors provide tooling for operators to get container-level metrics. Deeper insights lead to easier analysis and faster resolution.
Custom Resource Extensibility
There are endless possibilities of what can be done with Kubernetes and a lot of that is rooted in Custom Resource Definitions (CRDs). Custom resources are responsible for making Kubernetes extensible with custom controllers and your own set of APIs. This powerful tooling allows countless types of integration points with the entire stack. In Kubernetes 1.15, there are significant improvements to CRD stability, security, and functionality.
Admission webhooks are in beta and allow hooking into the creation, modification, and deletion of objects. Mutating webhooks can also modify objects in flight to make sure they pass validation. On the same topic, webhook conversion has been promoted to beta. This enhancement allows evolving the API while maintaining backward compatibility. Defaulting and pruning of custom resources are now included as alpha features that tighten security. Pruning drops fields that are not specified in the API validation schema, while defaulting fills in missing fields, so API compatibility is preserved when new fields are added to a custom resource. This change reduces the amount of unknown data stored in etcd and improves data consistency.
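A minimal sketch of what opting in looks like, assuming a hypothetical Widget resource: pruning is enabled by setting preserveUnknownFields to false on a CRD with a structural schema, and defaults are declared directly in the schema (defaulting is alpha in 1.15 and sits behind a feature gate). Names and fields here are illustrative:

```yaml
# Hypothetical CRD demonstrating pruning and schema defaulting.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  names:
    kind: Widget
    plural: widgets
  scope: Namespaced
  versions:
    - name: v1
      served: true
      storage: true
  preserveUnknownFields: false   # opt in to pruning of unspecified fields
  validation:
    openAPIV3Schema:
      type: object
      properties:
        spec:
          type: object
          properties:
            replicas:
              type: integer
              default: 1         # alpha defaulting; requires the feature gate
```

With this in place, fields not declared in the schema are stripped before objects are persisted to etcd.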
Lastly, the Pod Disruption Budget (PDB) is graduating to beta in this version of Kubernetes. PDB is an API that can limit the number of pods that are down simultaneously during voluntary disruptions. A user can specify the tolerated number of pods via the minAvailable and maxUnavailable parameters in the PodDisruptionBudget spec, and a Deployment, StatefulSet, ReplicaSet, or ReplicationController will make sure the desired number of replicas stays within that boundary using a common selector. As more applications are deployed based on Custom Resource Definitions (CRDs), it will become necessary to control the number of disruptions for availability testing and upgrades.
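A short example of what such a budget looks like, using placeholder names and labels:

```yaml
# Illustrative PodDisruptionBudget: voluntary disruptions (drains,
# evictions) are blocked if they would leave fewer than 2 matching
# pods running. Only one of minAvailable/maxUnavailable may be set.
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zookeeper-pdb          # hypothetical name
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: zookeeper           # placeholder label selector
```

Both parameters also accept percentages (for example, minAvailable: "50%"), which is handy when the replica count changes over time.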
Other Enhancements in Kubernetes 1.15
Of course, this isn’t everything that happened in Kubernetes 1.15. There were updates to storage, the introduction of a scheduling framework, stability improvements for kubectl, and much more. To see everything, check out the Kubernetes 1.15 enhancements tracking spreadsheet.
If you want to dig into the new code sooner rather than later, feel free to join us for the K8s Release Party on Friday, June 28, at 10 am PT, where developers will delve into the new features live on Zoom. Initially this was an internal call, and then we opened it up for anyone to join in the spirit of community. There will also be a CNCF community webinar on Kubernetes 1.15 on July 23. Details will be posted on the CNCF webinar schedule.
What’s VMware Doing?
The 1.15 release cycle is pushing toward the stability of various out-of-tree components like the vSphere Cloud Controller Manager and the vSphere CSI Driver. The vSphere Cloud Controller Manager is graduating to beta this release after making great progress toward stability and feature parity with the in-tree cloud provider. We strongly recommend using the vSphere Cloud Controller Manager—it comes with several optimizations over the in-tree provider. As the Kubernetes community moves toward migrating vendor-specific components out of core components, the in-tree vSphere cloud provider will be removed in a future release and replaced with the vSphere Cloud Controller Manager.
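For clusters moving to the out-of-tree provider, the main switch is running kubelets with the external cloud provider instead of the in-tree one. One common way to express this with kubeadm is via kubeletExtraArgs; this is a sketch under that assumption, not a complete migration guide:

```yaml
# Sketch: node registration for an out-of-tree cloud provider such as
# the vSphere Cloud Controller Manager. The kubelet defers cloud
# integration to the external controller manager running in-cluster.
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: external
```

The vSphere Cloud Controller Manager itself is then deployed into the cluster (typically as a DaemonSet or Deployment per its own documentation) and takes over node lifecycle and cloud-specific duties.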
Getting Involved in Kubernetes 1.16
VMware remains committed to being a leader in the upstream Kubernetes community, and we want you to come contribute too!
Participating in the Kubernetes Release Team is a wonderful way to contribute to the project. The team comprises multiple roles, many of which require no prior development experience. The application to shadow the Kubernetes 1.16 Release Team is now open. Read more about it here and join us in the community.