Top 10 Ways Kubernetes 1.12 Benefits Users

Originally posted here

Kubernetes 1.12 has launched with one of the largest target feature lists to date, with 38 features tracked as either entering alpha or progressing through the stages to beta and stable.

Recent Kubernetes releases have focused on user interaction and new capabilities for the end user. Kubernetes 1.12, however, also brings backend improvements, such as better scheduler performance from an updated algorithm and the option for pods to pass information directly to CSI drivers. These backend enhancements continue to strengthen the core and give Kubernetes a solid foundation.

The new Dry Run alpha feature is an interesting enhancement for developers, offering a better way to test out requests and admission plugins. It submits a request to the API server to be validated and “processed” but not persisted.
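For illustration, a sketch of what that looks like on the wire, assuming direct access to the API server (the $APISERVER host, token handling, and pod.json payload are placeholders): appending the dryRun query parameter makes the server run validation and admission but skip persistence.

# Sketch only: create a pod "for real" except for persistence, by adding the
# alpha dryRun query parameter to an ordinary create request.
curl -k -X POST \
  -H "Authorization: Bearer $TOKEN" \
  -H "Content-Type: application/json" \
  --data @pod.json \
  "https://$APISERVER/api/v1/namespaces/default/pods?dryRun=All"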

From a user perspective, there are also significant enhancements to usability, storage, networking, security, and VMware functionality. Read on to discover the top ten new features and how they’ll affect you:

1. Early Phase Zone Support to Run Clusters in Multiple Failure Zones

The VMware Cloud Provider has entered an early phase of zone support, referred to as Phase 1. Analogous to AWS availability zones and regions, this allows a single Kubernetes cluster to run across multiple failure zones, where each zone maps to one or more VMware vSphere clusters.

2. Improved UX with vSphere Configuration File Field Labels

Phase 1 introduces zone and region properties, expressed as new field labels, to the vSphere configuration file. Labels can be used to identify relevant object attributes but do not directly impact the core system. The kubelet on each VM queries this tagging during startup, auto-labels its node, and propagates the labels to the API server. This maps vSphere more closely to Kubernetes topologies for a better user experience. If users don’t provide a [Labels] section, the behavior is the same as in previous versions.
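As a rough sketch, the properties could look like the following fragment of the vSphere cloud provider configuration (the file path and the tag category names are assumptions; the file is whatever --cloud-config points at):

# Illustrative only: add zone/region tag categories to the vSphere cloud
# provider config so kubelets can label their Node objects at startup.
cat >> /etc/kubernetes/vsphere.conf <<'EOF'
[Labels]
region = "k8s-region"
zone = "k8s-zone"
EOF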

3. Adding Third-Party Executables to Kubectl Without Additional Configuration

The future of Kubernetes is extensibility. When third-party developers and vendors can add their own primitives to Kubernetes, they can tailor a custom experience for their end users and enable powerful integrations. Today, users interface with Kubernetes through the kubectl command-line utility, but it’s limited to its core commands and base functionality.

In its 1.12 alpha debut, the plugin mechanism for kubectl allows third-party executables to be dropped into the user’s PATH without additional configuration. A plugin is invoked through kubectl and can parse its own arguments and flags. For instance, the plugin /usr/bin/kubectl-vmware-storage could be invoked with kubectl vmware storage --flag1.
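A minimal sketch of how such a plugin could be wired up, using a hypothetical plugin name (kubectl-hello) and install path:

# Any executable on PATH whose name starts with "kubectl-" becomes a subcommand.
cat > /usr/local/bin/kubectl-hello <<'EOF'
#!/usr/bin/env bash
echo "hello from a kubectl plugin, called with: $@"
EOF
chmod +x /usr/local/bin/kubectl-hello

kubectl hello --flag1=value   # kubectl discovers and executes kubectl-hello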

4. Vertical Scaling with Vertical Pod Autoscaler

Today, the Horizontal Pod Autoscaler automatically scales the number of pods in a replication controller, deployment, or replica set based on observed CPU utilization. This approach is great for stateless applications that can scale out, but what about the types of applications that need to scale up? The Vertical Pod Autoscaler (VPA) is graduating to beta in Kubernetes 1.12, freeing you from having to hand-tune the resource requests of your containers.

Once a VPA policy is applied, pods can be scheduled onto nodes where appropriate resources are available, and running pods can be adjusted when CPU starvation or out-of-memory (OOM) events occur. The VPA is a long-awaited feature, especially for those who are responsible for stateful applications.
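As a minimal sketch, a VPA object targeting a Deployment might look like the following (the VPA ships as a CRD plus controllers, so the exact apiVersion and fields depend on the installed VPA release; the names below are placeholders):

# Illustrative VerticalPodAutoscaler: let the VPA adjust requests for the
# containers of the "my-app" Deployment and recreate pods as needed.
kubectl apply -f - <<'EOF'
apiVersion: autoscaling.k8s.io/v1beta1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  updatePolicy:
    updateMode: "Auto"
EOF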

5. Improved Storage Snapshots for Critical Backups

Storage snapshots are a widely used feature among storage vendors. They allow persistent disks holding critical data to be backed up so the data can later be restored, used for offline development, or replicated and migrated. The initial prototype for snapshots was implemented in Kubernetes 1.8 with in-tree drivers, but with a view toward the Container Storage Interface (CSI), the implementation shifted to keep the core APIs small and to hand off operations to the volume controller. Therefore, this alpha feature is only supported by CSI volume drivers.
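A rough sketch of requesting a snapshot of an existing claim, assuming a CSI driver and a VolumeSnapshotClass are already installed (the class and claim names are placeholders, and the field names follow the alpha CRDs, so they may differ in later releases):

# Illustrative VolumeSnapshot: snapshot the "db-data" PersistentVolumeClaim
# through the CSI driver backing the "csi-snapclass" snapshot class.
kubectl apply -f - <<'EOF'
apiVersion: snapshot.storage.k8s.io/v1alpha1
kind: VolumeSnapshot
metadata:
  name: db-data-snapshot
spec:
  snapshotClassName: csi-snapclass
  source:
    kind: PersistentVolumeClaim
    name: db-data
EOF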

6. Recommended Volume Counts for Environments with Dynamic Maximum Volume Count

Because Kubernetes can be installed on a multitude of operating systems, every Kubernetes environment is unique. Kubernetes can run in a local data center or in the cloud, and there are countless storage platforms that it can use, each of which might have a different communication protocol, such as Fibre Channel, NFS, or InfiniBand.

The variables seem endless, so why should there be a hardcoded number or environment variable that states the maximum number of volumes that can be attached to a host? Dynamic Maximum Volume Count is a CSI feature that lets the volume plugin impose these maximums based on the storage platform’s recommended practices. This feature is graduating to beta in Kubernetes 1.12.
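One way to see the effect, assuming the feature is enabled and a CSI driver reports an attach limit (the node name, driver name, and values below are made up): the per-node maximum surfaces as an allocatable resource on the Node object, which the scheduler takes into account.

# Rough sketch: inspect the attach limit a driver reported for a node.
kubectl get node worker-1 -o jsonpath='{.status.allocatable}'
# ..."attachable-volumes-csi-example.vendor.com":"24","cpu":"4","memory":"16316256Ki"...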

7. New Network Use Cases with Stream Control Transmission Protocol

Stream Control Transmission Protocol (SCTP) is a protocol for transmitting multiple streams of data at the same time between two endpoints. SCTP is typically seen in communication applications designed to transport public switched telephone network (PSTN) signaling messages over IP networks.

What does this have to do with Kubernetes? In its alpha debut, SCTP is now supported as an additional protocol alongside TCP and UDP in Pod, Service, Endpoint, and NetworkPolicy objects. SCTP support means that Kubernetes can expand its use cases to applications that require SCTP as the Layer 4 protocol.
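A minimal sketch of what this enables, assuming the alpha SCTPSupport feature gate is turned on (the service name, selector, and port are placeholders):

# Illustrative Service exposing an SCTP port; previously only TCP and UDP
# were accepted in the protocol field.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: signaling-gateway
spec:
  selector:
    app: signaling-gateway
  ports:
  - name: sctp
    protocol: SCTP
    port: 3868
    targetPort: 3868
EOF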

8. Egress Network Policies for Improved Security

Network policies have previously been able to limit or control ingress traffic, determining what may flow to ports, pods, IP addresses, and subnets. In Kubernetes 1.12, control of egress traffic for outbound flows is promoted to stable. Egress network policies are now at parity with ingress, providing a further way to secure and firewall applications and pods.
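As a minimal sketch, here is an egress rule that only lets pods labeled app=web talk to a single subnet on one port (the label, CIDR, and port are assumptions for illustration):

# Illustrative egress policy: pods with app=web may only open outbound
# connections to 10.0.0.0/24 on TCP 5432; other egress is denied.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-egress
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5432
EOF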

9. Default and Pruning for Custom Resources to Drop Unspecified Fields

A custom resource is an extension of the Kubernetes API that is not necessarily available on every Kubernetes cluster, such as the resource types introduced by custom controllers and operators. Custom resources are persisted as JSON blobs, and today unknown fields are not dropped.

With the new alpha feature of Defaulting and Pruning for Custom Resources, a step is introduced that drops fields not specified in the OpenAPI validation spec, matching the persistence model used for native types. As an example of the inherent security problem before this feature, a user could store an unrecognized privileged: true field on a custom resource; if a newer version of that resource began honoring the field, the user would suddenly have escalated access.
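A rough sketch of where that validation spec lives, using a hypothetical Widget custom resource (the group, kind, and schema are made up): with pruning, anything a Widget object carries outside this schema is dropped before it is persisted.

# Illustrative CustomResourceDefinition with an OpenAPI v3 validation schema;
# pruning removes fields of a Widget that this schema does not declare.
kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
spec:
  group: example.com
  version: v1
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  validation:
    openAPIV3Schema:
      properties:
        spec:
          type: object
          properties:
            size:
              type: integer
            color:
              type: string
EOF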

10. Stable TLS Bootstrap for Secure Kubelet Communications

Ensuring secure communication between the kubelet on each node and the master is necessary, and the kubelet TLS bootstrap is now stable. When a node is initialized, the kubelet checks for existing TLS certificates and, if none are found, generates a private key and a certificate signing request (CSR) that it submits to the API server. The API server holds the CSR until an administrator accepts or rejects it. This process enhances security between components.
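From the administrator’s side, the flow looks roughly like this (the CSR name and requestor shown are illustrative):

# List pending certificate signing requests from bootstrapping kubelets,
# then approve one so the node can receive its certificate.
kubectl get csr
# NAME              AGE   REQUESTOR                  CONDITION
# node-csr-abc123   1m    system:bootstrap:token-id  Pending
kubectl certificate approve node-csr-abc123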

Conclusion

With 38 tracked features, not all of them could be mentioned here and given their due recognition. For more information about the entire Kubernetes 1.12 release, view the release notes or check out the 1.12 feature tracking sheet.

For Kubernetes 1.13, expect a more stable release with only features that have a real shot at making the code freeze date. With two KubeCon conferences and the holidays approaching, it’s going to be a shorter and more aggressive release cycle to close out 2018. To keep track of 1.13 updates, check out the release timeline and schedule.

Visit our website to learn more about Kubernetes solutions from VMware: Pivotal Container Service and VMware Kubernetes Engine.