Virtual machines and containers are two of my favorite technologies. In today’s DevOps-driven environment, delivering applications as microservices allows an organization to ship features faster. Splitting a monolithic application into multiple portable, container-based fragments is near the top of most organizations’ digital transformation strategies. Virtual machines, delivered as IaaS, have been around since the late 1990s; they abstract hardware to offer enhanced fault tolerance, programmability, and workload scalability. While enterprise IT shops large and small are scrambling to refactor applications into microservices, the reality is that IaaS is proven and often used to complement container-based workloads:
1). We’ve always viewed the IaaS layer as an abstraction of the infrastructure that provides a standard way of managing and consolidating disparate physical resources. Resource abstraction is one of the many reasons most containers today run inside virtual machines.
2). Today’s distributed applications consist of both cattle and pets. Without overly generalizing, pet workloads tend to be “hand fed” and often have significant dependencies on legacy operating systems that aren’t container compatible. As a result, for most organizations, pet workloads will continue to run as VMs.
3). While there are considerable benefits to containerizing NFV workloads, current container implementations are not sufficient to meet all NFV workload needs. See the IETF report for additional details.
4). IaaS provides the ability to “right size” the container host for dev/test workloads where multiple environments are required to perform different kinds of testing.
Rather than being mutually exclusive, the two technologies have proven over time to complement each other. As long as there are legacy workloads and a need for better ways to manage and consolidate diverse sets of physical resources, virtual machines (IaaS) will co-exist with and complement containers.
OpenStack IaaS and Kubernetes Container Orchestration:
It’s a multi-cloud world, and OpenStack is an important part of the mix. From the datacenter to NFV, thanks to the richness of its vendor-neutral API, OpenStack clouds are being deployed to meet organizations’ needs for public-cloud-like IaaS consumption in a private cloud data center. OpenStack is also a perfect complement to K8S, providing underlying services that are outside the scope of K8S. Kubernetes deployments can in most cases leverage the same OpenStack components to simplify deployment and the developer experience:
1). Multi-tenancy: Create K8S cluster separation by leveraging OpenStack projects. Development teams have complete control over cluster resources in their own project and zero visibility into other development teams’ projects.
2). Infrastructure usage based on HW separation: The IT department often acts as the central broker for development teams across the entire organization. If development team A funded X servers and team B funded Y, the OpenStack scheduler can ensure that K8S cluster resources are always mapped to the hardware allocated to the respective teams.
3). Infrastructure allocation based on quota: Deciding how much of your infrastructure to assign to different use cases can be tricky, so organizations can also leverage the OpenStack quota system to control infrastructure usage.
4). Integrated user management: Since most K8S developers are also IaaS consumers, leveraging the Keystone backend simplifies user authentication for K8S clusters and namespace sharing.
5). Container storage persistence: Since K8S pods are not durable, storage persistence is a requirement for most stateful workloads. When leveraging the OpenStack Cinder backend, storage volumes are re-attached automatically after a pod restart (on the same or a different node).
6). Security: Since VMs and containers will continue to co-exist for the majority of enterprise and NFV applications, providing uniform security enforcement is critical. Leveraging Neutron integration with industry-leading SDN controllers such as VMware NSX-T can simplify container security insertion and implementation.
7). Container control plane flexibility: K8S high availability requires load-balanced multi-master and scalable worker nodes. When integrated with OpenStack, master node load balancing is as simple as leveraging LBaaSv2. Worker nodes can scale up and down using tools native to OpenStack. With VMware Integrated OpenStack, K8S worker nodes can also scale vertically using the VM live-resize feature.
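To make items 1–3 concrete, here is a sketch of how a cloud admin might carve out a dedicated project and quota for one team’s K8S cluster using the standard OpenStack CLI. The project, user, role, and quota values below are illustrative assumptions, not prescriptions:

```shell
# Create a dedicated OpenStack project for team A's K8S cluster
# (project and user names are hypothetical).
openstack project create --description "Team A K8S cluster" team-a

# Grant the team's user the member role in that project only;
# resources in other projects remain invisible to this user.
openstack role add --project team-a --user team-a-dev member

# Cap the infrastructure the cluster can consume via quotas
# (RAM is specified in MB; values here are examples).
openstack quota set --cores 64 --ram 262144 --instances 16 team-a
```

With quotas in place, a runaway cluster autoscaler in one project cannot starve the hardware funded by another team.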
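For item 5, a minimal sketch of Cinder-backed persistence, assuming the in-tree `kubernetes.io/cinder` provisioner is enabled in the cluster (class, claim, and availability zone names are illustrative):

```shell
# Define a StorageClass backed by the Cinder provisioner, then claim
# a volume from it. Cinder creates the volume on demand and re-attaches
# it wherever the consuming pod is rescheduled.
kubectl apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cinder-standard
provisioner: kubernetes.io/cinder
parameters:
  availability: nova
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  storageClassName: cinder-standard
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
EOF
```

A pod that mounts `data-claim` gets the same Cinder volume back after a restart, on the same or a different node.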
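For item 7, fronting the master nodes might look like the following sketch using the Neutron LBaaSv2 CLI (load balancer name, subnet, and member address are hypothetical):

```shell
# Create a load balancer in front of the K8S API servers.
neutron lbaas-loadbalancer-create --name k8s-master-lb private-subnet

# Listen for API traffic; TCP pass-through keeps TLS terminated
# on the masters themselves.
neutron lbaas-listener-create --name k8s-api \
  --loadbalancer k8s-master-lb --protocol TCP --protocol-port 6443

# Pool the masters behind the listener.
neutron lbaas-pool-create --name k8s-masters --listener k8s-api \
  --protocol TCP --lb-algorithm ROUND_ROBIN

# Add each master's API endpoint to the pool (repeat per master).
neutron lbaas-member-create --subnet private-subnet \
  --address 10.0.0.11 --protocol-port 6443 k8s-masters
```

Workers then point their kubelets at the load balancer’s VIP rather than at any single master.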
I will leverage the VMware Integrated OpenStack (VIO) implementation to provide examples of this match made in heaven. This blog is part 1 of a four-part series:
1). OpenStack and Containers Better Together (This Post)
2). How to Integrate your K8S with your OpenStack deployment
3). Treat Containers and VMs as “equal class citizens” in networking
4). Integrate common IaaS and CI / CD tools with K8S