By Kendrick Coleman, Open Source Technical Product Manager, VMware

Right out of the gate, Kubernetes 1.12 started hot, with the feature list growing every day. At its peak, a total of 64 features were being tracked, either as new features entering alpha or as features progressing through the stages to beta or stable. However, that wishful thinking proved daunting for many features that still required merges and docs, and over the course of a few weeks the tracking count decreased to 38, which still makes it one of the largest releases to date.


For the greater Kubernetes community, the highlights of the past few releases have focused on user interaction and new capabilities for the end user. Kubernetes 1.12, however, brings some backend improvements, such as better scheduler performance from an updated algorithm and the ability for pods to optionally pass information directly to CSI drivers. These backend enhancements continue to strengthen the core and give Kubernetes a solid foundation.


An interesting enhancement for developers offers a better way to test out plugins with the new Dry Run alpha feature. It submits a request to the API server to be validated and “processed” by the admission chain but not persisted.
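
As a hedged sketch of how this works: server-side dry run is driven by a dryRun query parameter on write requests, and the alpha DryRun feature gate must be enabled on the API server. The token, server address, and payload file below are placeholders.

    # Ask the API server to validate and admit a Deployment without persisting it.
    # $TOKEN, $APISERVER, and deployment.json are placeholders for your environment.
    curl -k -X POST \
      -H "Authorization: Bearer $TOKEN" \
      -H "Content-Type: application/json" \
      --data @deployment.json \
      "https://$APISERVER/apis/apps/v1/namespaces/default/deployments?dryRun=All"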


Looking through the lens of a user rather than a developer, the notable enhancements focus on usability, storage, networking, security, and VMware functionality.


VMware Functionality

The VMware Cloud Provider has implemented an early phase of zone support, referred to as Phase 1. Analogous to AWS availability zones and regions, this addition lets you run a single Kubernetes cluster across multiple failure zones, where a failure zone in VMware vSphere terminology is one or more vSphere clusters.


Phase 1 introduces new zone and region properties, set through labels in the vSphere configuration file. Labels are intended to identify attributes of objects that are relevant to users but do not directly impact the core system. The kubelet queries this tagging during startup, then auto-labels the node and propagates the labels to the API server. This feature maps vSphere more closely to Kubernetes topologies for a better user experience. If users don't provide a [Labels] section, the behavior is the same as in the old version.
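
For illustration, a minimal sketch of the new section in vsphere.conf; the values name vCenter tag categories, and "k8s-region" and "k8s-zone" are hypothetical category names you would create and attach in vCenter yourself:

    # Tag categories whose tags identify each VM's region and zone.
    [Labels]
    region = "k8s-region"
    zone = "k8s-zone"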

Usability

The future of Kubernetes is extensibility. When third-party developers and vendors can add their own primitives to Kubernetes, they can tailor a custom experience for their end users and enable powerful integrations. Today, users interface with Kubernetes through the kubectl command-line utility, but it's limited to its core commands and base functionality. Making its alpha debut in 1.12, the kubectl plugin mechanism allows third-party executables to be dropped into the user's PATH with no additional configuration. A plugin is invoked through kubectl and can parse any arguments or flags. For instance, the plugin /usr/bin/kubectl-vmware-storage could be invoked with kubectl vmware storage --flag1, as in the sketch below.
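
A minimal sketch of such a plugin, using the article's hypothetical vmware-storage name:

    #!/usr/bin/env bash
    # Save as kubectl-vmware-storage anywhere on your PATH and chmod +x it.
    # kubectl maps the dashed filename to subcommands, so this runs as
    # "kubectl vmware storage"; remaining arguments and flags pass through.
    echo "kubectl-vmware-storage called with: $@"

Running kubectl vmware storage --flag1 would then hand --flag1 straight to the script.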

Today, the Horizontal Pod Autoscaler automatically scales the number of pods in a replication controller, deployment, or replica set based on observed CPU utilization. This approach is great for stateless applications that can scale out, but what about applications that need to scale up? The Vertical Pod Autoscaler (VPA) is graduating to beta in Kubernetes 1.12, automatically adjusting the resources requested for containers. Once a VPA policy is applied, pods can be scheduled onto nodes where appropriate resources are available, and running pods can be adjusted when CPU starvation and out-of-memory (OOM) events occur. The VPA is a long-awaited feature, especially for those responsible for stateful applications.
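
A minimal sketch of a VPA policy, assuming the v1beta1 API shipped by the autoscaler add-on in this timeframe; the my-app name and labels are placeholders:

    apiVersion: autoscaling.k8s.io/v1beta1
    kind: VerticalPodAutoscaler
    metadata:
      name: my-app-vpa
    spec:
      selector:             # pods whose resource requests the VPA may manage
        matchLabels:
          app: my-app
      updatePolicy:
        updateMode: "Auto"  # evict and re-create pods with updated requests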


Building a Kubernetes cluster is one of the biggest hurdles when it comes to learning Kubernetes. There are guides, such as Kubernetes the Hard Way, and lengthy documents on the individual steps. There have also been many attempts to simplify the process with Kubespray, kops, kube-up, and more.


Storage

A storage snapshot is a widely adopted feature among storage vendors. It allows persistent disks holding critical data to be backed up for events when the data needs to be restored, used for offline development, or replicated and migrated. The initial prototype for snapshots was implemented in Kubernetes 1.8 with in-tree drivers, but with a view toward the Container Storage Interface (CSI), the implementation shifted to keep the core APIs small and hand operations off to the volume controller. As a result, this alpha feature is supported only by CSI volume drivers.
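
A hedged sketch of the alpha API, assuming a snapshot-capable CSI driver and a pre-created VolumeSnapshotClass named csi-snapclass; my-pvc is a placeholder claim name:

    apiVersion: snapshot.storage.k8s.io/v1alpha1
    kind: VolumeSnapshot
    metadata:
      name: my-pvc-snapshot
    spec:
      snapshotClassName: csi-snapclass   # which CSI snapshotter to use
      source:
        kind: PersistentVolumeClaim      # snapshot the volume bound to this claim
        name: my-pvc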


Because Kubernetes can be installed on a multitude of operating systems, every Kubernetes environment is unique. Kubernetes can run in a local data center or in the cloud, and there are countless storage platforms it can use, each of which might speak a different protocol, such as Fibre Channel, NFS, or InfiniBand. The variables seem endless, so why should a hardcoded number or environment variable dictate the maximum number of volumes that can be attached to a host? Dynamic Maximum Volume Count, graduating to beta in Kubernetes 1.12, is a CSI feature that lets the volume plugin impose these maximums based on the storage platform's recommended practices.
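
For contrast, the old static approach is the scheduler's KUBE_MAX_PD_VOLS environment variable; the value below is arbitrary:

    # Old approach: a cluster-wide cap exported before starting the scheduler.
    export KUBE_MAX_PD_VOLS=39
    # With dynamic maximum volume count, the CSI driver reports its own per-node
    # attach limit, so a blanket override like this is no longer needed.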


Network

Stream Control Transmission Protocol (SCTP) is a protocol for transmitting multiple streams of data at the same time between two endpoints. SCTP is typically seen in communication applications designed to transport public switched telephone network (PSTN) signaling messages over IP networks. What does this have to do with Kubernetes? In its alpha debut, SCTP is now supported as an additional protocol alongside TCP and UDP in Pod, Service, Endpoint, and NetworkPolicy objects. SCTP support means that Kubernetes can expand its use cases to applications that require SCTP as the Layer 4 protocol.
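
A minimal sketch of a Service using SCTP, assuming the SCTPSupport feature gate is enabled and the network plugin can carry SCTP; the selector and port are placeholders:

    apiVersion: v1
    kind: Service
    metadata:
      name: signaling
    spec:
      selector:
        app: pstn-gateway      # placeholder workload label
      ports:
      - protocol: SCTP         # previously only TCP or UDP were accepted here
        port: 9260
        targetPort: 9260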

Network policies have long supported ingress rules for limiting or controlling traffic to ports, pods, IP addresses, and subnets. In Kubernetes 1.12, control of egress traffic is being promoted to stable for external traffic flows. Egress network policies are now at parity with ingress, further securing and firewalling applications and pods.
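
A hedged sketch of an egress rule; the labels, CIDR, and port are placeholders:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: restrict-web-egress
    spec:
      podSelector:
        matchLabels:
          app: web             # pods this policy applies to
      policyTypes:
      - Egress
      egress:
      - to:
        - ipBlock:
            cidr: 10.0.0.0/24  # only this subnet is reachable
        ports:
        - protocol: TCP
          port: 5978           # and only on this port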


Security

A custom resource is an extension of the Kubernetes API that is not necessarily available on every Kubernetes cluster, such as custom pods and controllers. Custom resources are persisted as JSON blobs, and unknown fields are not dropped. The new alpha Defaulting and Pruning for Custom Resources feature introduces a step that drops fields not specified in the OpenAPI validation spec, which is the same persistence model used for native types. An example of an inherent security problem before this feature: if a new version of a custom resource began honoring a privileged: true field, objects already carrying that unvalidated field would suddenly grant escalated access.
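
A hedged sketch of the OpenAPI validation spec that pruning works against, using the v1beta1 CRD API current in 1.12; widgets.example.com is a hypothetical resource. With the alpha feature enabled, a field such as spec.privileged that is absent from this schema would be dropped on write rather than persisted:

    apiVersion: apiextensions.k8s.io/v1beta1
    kind: CustomResourceDefinition
    metadata:
      name: widgets.example.com
    spec:
      group: example.com
      version: v1
      scope: Namespaced
      names:
        kind: Widget
        plural: widgets
      validation:
        openAPIV3Schema:
          properties:
            spec:
              properties:
                replicas:      # only fields declared here survive pruning
                  type: integer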


Ensuring secure communication of the kubelet between master and worker nodes is necessary. The kubelet TLS bootstrap, now stable, automates this: when a node is initialized, the kubelet checks for TLS certificates and, if none are found, generates a private keypair and submits a certificate signing request (CSR) to the API server, which holds the CSR until an administrator accepts or rejects it. This process enhances security between components.
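
Once a node's kubelet has submitted its CSR, approval is a standard kubectl workflow; the CSR name shown is illustrative:

    # List pending certificate signing requests, then approve the node's request.
    kubectl get csr
    kubectl certificate approve node-csr-abc123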


With 38 features in the release, not all of them could be mentioned here and given their due recognition. For more information about the entire Kubernetes 1.12 release, view the release notes or check out the 1.12 feature tracking sheet.


For Kubernetes 1.13, expect a more stable release with only features that have a real shot at making the code freeze date. With two KubeCon conferences and the holidays approaching, it’s going to be a shorter and more aggressive release cycle to close out 2018. To keep track of 1.13 updates, check out the release timeline and schedule.