It’s that time again when the Kubernetes community, like clockwork, comes together for another quarterly release. This version, 1.11, is the midyear release that continues Kubernetes’ march toward stability. As with previous releases, VMware continues to participate in and contribute open-source resources to Kubernetes upstream. This post highlights some important features and developments in version 1.11.
The effort to decouple cloud provider-specific code from the core of Kubernetes continues with this release. The work is proceeding in phases, and the critical phase of factoring out an API to support external providers has entered beta. In parallel, several providers have already begun the essential step of refactoring their current code so it can be moved out of the tree, or are planning to do so soon. Fabio Rapposelli of VMware is leading a community effort to factor the vSphere cloud provider code out into an out-of-tree plugin.
VMware open-source engineering also contributed features to the existing in-tree vSphere Kubernetes cloud provider, including code from Doug MacEachern adding SAML token authentication and from Abrar Shivani adding support for reading credentials from Kubernetes secrets.
Container Storage Interface
In version 1.11, major work went into the Container Storage Interface (CSI) to continue its progress toward general availability in the upcoming quarters. One of the major features that went into alpha is the integration of raw block storage support. Implemented by Vladimir Vivien, an open-source engineer at VMware, this new feature will allow CSI drivers to receive block storage operation requests from Kubernetes. Another important alpha feature is the use of an internal kubelet plugin architecture, which will eventually let CSI drivers participate in a uniform plugin registry and discovery mechanism.
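For context, a raw block request surfaces in the Kubernetes API as a PersistentVolumeClaim with `volumeMode: Block`, consumed by a container through `volumeDevices` rather than a filesystem mount. The sketch below illustrates that shape; the storage class name is hypothetical, and whether the request is satisfied depends on the driver in use and the feature gates enabled on the cluster:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: raw-block-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  volumeMode: Block              # request a raw block device, not a filesystem
  storageClassName: csi-example  # hypothetical CSI-backed storage class
  resources:
    requests:
      storage: 10Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: raw-block-consumer
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeDevices:               # expose the volume as a device inside the container
    - name: data
      devicePath: /dev/xvda
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: raw-block-pvc
```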
With the release of version 1.11, the Kubernetes SIG-Storage continues to make inroads on several fronts. The effort to implement topology awareness in Kubernetes resources continues with new code to support topology-aware dynamic provisioning for persistent volumes. The work to resize an existing persistent volume (PV) has moved to beta; a related effort to resize a PV online (without shutting down a pod) remains in an early alpha stage. A new alpha feature allows volume plugins to specify the maximum number of volumes that can be attached to a node, based on the node type. Lastly, the ability to prevent the deletion of a PV bound to a persistent volume claim (PVC) has graduated to stable.
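To make the storage-class-level knobs concrete, here is a sketch of a StorageClass that opts into delayed, topology-aware binding and volume expansion. The provisioner shown is just one example, and support for each field varies by plugin and by the feature gates enabled:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: topology-aware-expandable
provisioner: kubernetes.io/vsphere-volume  # example provisioner; any plugin that supports these features works
volumeBindingMode: WaitForFirstConsumer    # delay binding until a consuming pod is scheduled
allowVolumeExpansion: true                 # allow PVCs against this class to be resized (beta)
```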
This release has also seen the continuation of important security-related features. The ability to request a service account token bound to a specific pod, TokenRequest, has now entered alpha. The feature known as RunAsGroup, which lets admins specify both a user ID and a group ID for running containers, has graduated to beta. ClusterRole aggregation, a feature that makes it easy to integrate RBAC with Custom Resource Definitions (CRDs) and extension API servers, has graduated to stable.
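As a sketch of how two of these features surface in a pod spec, the manifest below sets both user and group IDs in the security context and mounts a bound, expiring service account token via a projected volume. The audience value is hypothetical, and both features require their respective alpha/beta feature gates to be enabled:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: security-demo
spec:
  securityContext:
    runAsUser: 1000      # all containers run as this user ID
    runAsGroup: 3000     # RunAsGroup (beta): primary group ID for processes
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: bound-token
      mountPath: /var/run/secrets/tokens
  volumes:
  - name: bound-token
    projected:
      sources:
      - serviceAccountToken:     # TokenRequest-backed, pod-bound token
          audience: vault        # hypothetical intended audience
          expirationSeconds: 3600
          path: token
```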
PriorityClass, which lets admins assign pods a priority and enables the scheduler to preempt less important pods when the cluster runs out of resources, has moved to beta. The feature has been in alpha since 1.8 and now comes with further enhancements for evicting lower-priority pods to make room for higher-priority ones when resources are constrained.
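A minimal sketch of how this fits together: a cluster admin defines a PriorityClass, and workloads reference it by name. The names and value below are illustrative:

```yaml
apiVersion: scheduling.k8s.io/v1beta1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000                  # higher value = scheduled (and preempted for) first
globalDefault: false            # do not apply to pods that specify no class
description: "For business-critical workloads"
---
apiVersion: v1
kind: Pod
metadata:
  name: important-pod
spec:
  priorityClassName: high-priority  # opt this pod into the class above
  containers:
  - name: app
    image: nginx
```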
Other Important Efforts
There are many more important features, fixes, and related efforts in version 1.11, too many to enumerate here. Here are a few more that we think will have an impact:
- On the networking front there were two major features: IPVS-based in-cluster load balancing and the adoption of CoreDNS as the default DNS plugin.
- Support to change namespaced kernel parameters at runtime (via sysctl) has now moved to beta.
- The ability to reconfigure the kubelet dynamically on a running node has now moved to beta.
- The Container Runtime Interface (CRI) continues to improve, adding logging and stats support while its validation test suite stabilizes.
- Another important feature that has just entered alpha is the use of the kube-scheduler, instead of the DaemonSet controller, to schedule DaemonSet pods, unifying the pod scheduling strategy.
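Of the items above, the sysctl support is the most direct to illustrate: namespaced kernel parameters are now set in the pod's security context. The parameter shown is one of the namespaced sysctls generally considered safe; unsafe sysctls still require explicit kubelet allowlisting:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sysctl-demo
spec:
  securityContext:
    sysctls:                         # namespaced kernel parameters for this pod
    - name: kernel.shm_rmid_forced
      value: "1"
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
```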