By Art Fewell, Senior Technical Product Manager

Kubernetes is poised to be the most impactful infrastructure technology of 2018. The system offers a rare and powerful set of capabilities that enables enterprises to streamline and automate complex operational processes and deliver a cloud-like user experience and feature set. This post discusses the state of the container ecosystem and why the time is now for enterprise IT leaders to take advantage of the tremendous benefits that Kubernetes offers.

Leading cloud software providers have set a high bar for many aspects of software and service delivery, creating an expectation of an always-on, always-available user experience. That expectation, in turn, pressures enterprise IT departments to deliver equivalent capabilities and user experience while also increasing operational efficiency and reducing costs.

While the cloud-native development patterns and operational practices that enable cloud features and benefits are not secret, they differ markedly from traditional enterprise IT practices, and most enterprises have struggled to adopt them.

Kubernetes (K8s) is a platform that helps standardize and simplify infrastructure operations: it is built around a common operational model with key DevOps and cloud-native best practices baked in. While it is well suited to purpose-built, cloud-native application development, K8s is also flexible enough to support applications with more traditional requirements. Whether you have been struggling to transition to agile and DevOps practices or have already made that transition successfully, K8s makes implementing cloud-native and DevOps best practices much easier, which makes it an ideal platform to support enterprise application and operational modernization efforts.

Technically speaking, the core capability of K8s is container orchestration. While enterprise-standard virtualization solutions provide an abstraction to create virtual servers, containers allow you to package just the application code and its dependencies.

Traditional software installation introduces a huge amount of complexity and variability into the software delivery process: a user installs and configures an operating system, then installs the application software on top of it. The OS and its subcomponents are whatever versions happen to be available at installation time, and the user then configures the OS with their own policies and site-specific variables. A software developer has no way to test the application against the exact OS configuration that any given user is running; in fact, every single installation in this model is different. Updating and patching the OS can then change the very files the application depends on, creating still more variability in each installation.

Containers allow application code to be packaged with just its dependencies, ensuring that the application, its configuration and its required dependencies remain independent of user- and environment-specific settings.

While these are powerful benefits, the initial wave of container technologies and standards was focused on running containers on a single server.

Ultimately, you don’t want to install your applications on a server; you want to install them on a cloud, right? That is to say, you need your applications to be highly available, and if they come packaged for installation on a single server, you have to do a lot of work to set them up with everything they actually need to run in production, from storage and networking to security and high availability. The image below highlights the steep “learning cliff” of just some of the requirements for running containers in a production environment:

[Image: the steep “learning cliff” of production container requirements]

Although it is possible to automate these requirements in a traditional setting, doing so requires a tremendous amount of additional middleware and custom scripting to reach high levels of automation. Without the standardized abstractions that platforms like K8s and VMware vSphere provide together, automation solutions must deal directly with a variety of low-level interfaces across the underlying subsystems, adding tremendous cost, complexity and fragility to automation efforts.

When you install an application in a K8s environment, you are in effect installing it on a cloud rather than on a single server. K8s makes this possible by letting application definitions include not only the containers with the needed application code, but also the compute, networking, storage and other environmental requirements the application needs to run, laying the foundation for automated application lifecycle management from development through deployment. To deploy an application on K8s, you prepare a simple application manifest that describes the application’s requirements using a generalized language, with a pluggable adapter framework to interact with the underlying IaaS providers.
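
To make this concrete, here is a minimal sketch of what such a manifest might look like. The application name (web), container image, port and resource sizes are hypothetical placeholders rather than anything from a particular product; the point is simply that compute, networking and storage requirements are declared together in one spec:

```yaml
# Hypothetical application manifest: the Deployment declares the compute
# requirements, the Service declares the networking, and the
# PersistentVolumeClaim declares the storage.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                          # run three copies for availability
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: example.com/web:1.0   # placeholder image
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: 250m                # compute requirements
              memory: 256Mi
          volumeMounts:
            - name: data
              mountPath: /var/lib/web
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: web-data
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web
  ports:
    - port: 80                         # expose the pods behind one address
      targetPort: 8080
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: web-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi                    # storage requirement, provisioned by
                                       # the underlying IaaS provider
```

Applying a manifest like this (for example with kubectl apply -f web.yaml) asks the cluster as a whole, rather than any single server, to satisfy the declared requirements.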

K8s itself only provisions containers; it can dynamically scale applications, but it does nothing to provision or manage the underlying servers that those containers run on. This is where vSphere provides the missing pieces of the puzzle: combined with Kubernetes, vSphere can provide dynamic node scaling and end-to-end infrastructure-as-code, delivering automated lifecycle management across the entire infrastructure stack.
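
As a sketch of the application-scaling side (the node scaling just described happens at the IaaS layer, not in this object), a standard HorizontalPodAutoscaler could grow or shrink the hypothetical web Deployment from the earlier example based on observed CPU load:

```yaml
# Scale the hypothetical "web" Deployment between 3 and 10 replicas,
# targeting roughly 70% average CPU utilization across its pods.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Kubernetes adds or removes application replicas to hold the target; whether there are enough underlying nodes to place those replicas on is where the IaaS layer, such as vSphere, comes in.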

While K8s plays a particularly important role in the infrastructure stack, it does not cover the entire software lifecycle: it does not define how you develop your code or how you automate your development workflows. Numerous examples and vendor integrations let K8s participate in a complete software lifecycle, but it does not do everything itself.

Kubernetes does, however, have a powerful capability: it establishes an evolutionary pathway that lets enterprises start implementation with immediate benefits while providing a step-by-step path toward application and infrastructure modernization.

Enterprises’ difficulty in transitioning from a traditional environment to a cloud and DevOps operational style is notorious. As is the case with most complex open source software, the open source distribution of Kubernetes can be difficult and complex to operate. However, numerous enterprise software providers and leading cloud providers now offer packaged and cloud-based K8s solutions that simplify setup and operations, making the platform surprisingly easy to adopt and support in enterprise environments.

VMware Pivotal Container Service (PKS), for example, offers significant capabilities that make it easier than ever to deploy and operate K8s. It is offered as a virtual appliance using vSphere as the underlying IaaS provider. PKS can be easily deployed, and key platform components can be monitored by vSphere admins without retraining.

Once you start deploying workloads with K8s, you immediately start to realize some of the key operational efficiency benefits of a cloud-optimized platform. For example, K8s monitors the state and availability of the application and automatically implements corrective actions if a server goes offline or an application performance metric hits a negative threshold. PKS further enhances these capabilities by leveraging VMware NSX to automate micro-segmentation, absorbing the network and port requirements from the K8s manifests and then enforcing stateful inspection firewall services at the container level.
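
As an illustration of these two mechanisms, the sketch below shows a liveness probe, which lets Kubernetes detect and restart an unhealthy container automatically, and a standard Kubernetes NetworkPolicy expressing the kind of per-container network restriction described above. The names, image, labels, port and health endpoint are hypothetical, and how a given platform (NSX in the case of PKS) enforces such policies is an implementation detail of that platform:

```yaml
# Self-healing: the kubelet probes the container on a schedule; if the probe
# fails repeatedly, Kubernetes restarts the container automatically.
apiVersion: v1
kind: Pod
metadata:
  name: web-probe-example
  labels:
    app: web
spec:
  containers:
    - name: web
      image: example.com/web:1.0       # placeholder image
      ports:
        - containerPort: 8080
      livenessProbe:
        httpGet:
          path: /healthz               # hypothetical health endpoint
          port: 8080
        initialDelaySeconds: 10
        periodSeconds: 15
---
# Micro-segmentation: only pods labeled app=frontend may reach the web pods,
# and only on TCP port 8080; all other ingress traffic is denied.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-allow-frontend
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes: ["Ingress"]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```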

You will still need to train some staff to support the K8s environment, but this is not the re-invent-your-entire-IT-department pitch that has been associated with DevOps adoption in the past. PKS delivers platform support operations through the vSphere environment your teams already know, and vSphere will continue to be needed to support the majority of enterprise applications that are not yet available in containerized form, enabling a common base of operational support tools and staffing across a wide range of demands.

While Kubernetes is a relatively new platform, it has already been widely deployed and tested in some of the most demanding environments around. From its origins at Google to its broad adoption by leading cloud providers such as Azure and AWS, it has a level of maturity uncommon in a platform technology of its age. Although its rise has been rapid, it has become a key industry standard, backed not only by the Cloud Native Computing Foundation (part of the Linux Foundation) but also by many technology leaders, ensuring that it will remain a relevant platform for years to come.

The unique position of Kubernetes in the infrastructure stack can deliver a rare and powerful business impact that will continue providing value for years to come. The time is now for enterprises to take advantage of the powerful capabilities and operational efficiencies of Kubernetes.

Stay up-to-date on all things containers and Kubernetes by subscribing to the Cloud-Native Apps blog, and follow us on Twitter (@cloudnativeapps).