By Kendrick Coleman, Open Source Technical Product Manager, Cloud-Native Applications Business Unit (CNABU)
The topic has been approached countless times over the past decade: why do I need vSphere when I can run X, Y, or Z on bare metal? I’m here to tell you that no matter what reasoning you come up with, VMware vSphere will always be there to make things easier. Take a look at any application and ask whether running it on bare metal can give you these six benefits: resource optimization, availability, interoperability, security, scale, and performance.
The VMware Software-Defined Data Center has a storied past with traditional workloads, but its life continues while serving as the best platform for operating cloud-native environments. There are technical advantages and operational cost savings that make vSphere the best underlying platform for Kubernetes deployments from the initial stages of planning to the continual evolution of upgrade processes.
From the viewpoint of a production-grade Kubernetes cluster, it’s built on cloud-native principles of being highly available by distributing resources. There is no single monolithic bottleneck or database that becomes a critical choke point when things go down. Reliability therefore requires redundancy: two or three Kubernetes masters. Etcd, which stores cluster state and requires quorum, is typically installed on the Kubernetes master nodes for simplicity. However, the Kubernetes documentation says to “run etcd clusters on dedicated machines or isolated environments for guaranteed resource requirements… run etcd as a cluster of odd members.” If you’ve been keeping track, we’re at a minimum of five hosts for a production-grade Kubernetes deployment. Now take a scenario of multiple clusters siloed by business unit, application, or security zone. Bare metal is no longer a practical option.
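The “odd members” guidance comes down to simple majority-quorum arithmetic: a cluster of n members tolerates n minus (n // 2 + 1) failures, so an even-sized cluster survives no more failures than the next-smaller odd one. A minimal sketch of that math:

```python
def etcd_fault_tolerance(members: int) -> int:
    """Number of member failures an etcd cluster can survive.

    etcd stays available only while a majority (quorum) of members
    is up, so a cluster of n members tolerates n - (n // 2 + 1)
    failures.
    """
    quorum = members // 2 + 1
    return members - quorum

# A 4-member cluster tolerates the same single failure as a
# 3-member cluster, which is why odd member counts are advised.
for n in (1, 2, 3, 4, 5):
    print(n, etcd_fault_tolerance(n))
```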
By combining your container infrastructure alongside your virtual machines, you can achieve the highest possible utilization of hardware by spreading nodes across the entire vSphere cluster. The Kubernetes management components gain availability benefits of virtualization and the servers can be better utilized by hosting multiple components or additional virtual workloads.
vSphere is a cornerstone software for many businesses and is heavily relied upon when it comes to exploring new software and infrastructure trends like OpenStack, Docker containers, and configuration management. The flexibility of using a virtualized infrastructure means operationalizing new trends like Kubernetes while continuing to explore future endeavors, such as serverless.
Be mindful that the Kubernetes scheduler is smart. Containers are placed on worker nodes based on affinity rules, auto-scaling, and the sheer number of containers already on each worker. Priority and preemption is a Kubernetes feature in which workloads are assigned numerical priority values; when resources run short, higher-priority pods are guaranteed resources while lower-priority pods are evicted. Kubernetes management components are critical to the health and stability of the cluster, so it becomes risky to introduce a maintenance window or rely solely on their availability mechanisms. By combining the Kubernetes scheduler with the VMware vSphere Distributed Resource Scheduler, you get two-level scheduling that properly balances the cluster, while vMotion migrates these machines from one host to another when more resources are needed, without application downtime or rolling evacuations.
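The priority-and-preemption idea can be illustrated with a toy model. This is a deliberately simplified sketch, not the actual kube-scheduler algorithm, and the pod names and priority values are invented for illustration (in Kubernetes the values would come from a PriorityClass):

```python
from dataclasses import dataclass

@dataclass
class Pod:
    name: str
    priority: int   # analogous to a Kubernetes PriorityClass value
    cpu: int        # requested CPU in millicores

def schedule_with_preemption(pods, capacity):
    """Toy model of priority-based preemption: admit pods in
    descending priority order; once capacity runs out, the
    lower-priority pods are the ones left out (in Kubernetes,
    they would be preempted/evicted)."""
    admitted, preempted, used = [], [], 0
    for pod in sorted(pods, key=lambda p: p.priority, reverse=True):
        if used + pod.cpu <= capacity:
            admitted.append(pod.name)
            used += pod.cpu
        else:
            preempted.append(pod.name)
    return admitted, preempted

pods = [Pod("batch-job", 100, 500),
        Pod("kube-dns", 2000, 250),
        Pod("web-frontend", 1000, 500)]
admitted, preempted = schedule_with_preemption(pods, capacity=1000)
print(admitted, preempted)  # the low-priority batch job loses out
```

The same principle holds one level down: DRS plays an analogous role for the VMs themselves, balancing them across hosts.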
Failure is inevitable. How well is high availability implemented for the application? How often are failure tests being run? Containers bring a lot of promise, but containers aren’t synonymous with cloud native. Just because Postgres can run in a container, doesn’t mean that this traditional relational database is now considered cloud native. Containers can be used as a way to repackage existing apps and fit them into a new environment. Application resiliency is still dependent on the developer and gaining additional benefits from the underlying infrastructure platform.
With vSphere, the recovery process is automated for workers as well as for master and etcd virtual machines. vSphere HA Admission Control policies add an additional layer of protection, always verifying that there are sufficient failover resources in the cluster without relying on Kubernetes priority policies. This is the N+1 or N+2 resource barrier. Lastly, vSphere HA restart priorities ensure that particular VMs, notably the management and controller components, are given higher priority for quicker restarts.
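The N+1/N+2 reasoning is a capacity check: after losing one (or two) hosts, can the survivors still hold everything that is reserved? A minimal sketch of that check, assuming a simplified single-resource model rather than vSphere’s actual admission control algorithm:

```python
def can_tolerate_host_failures(host_capacities, reserved, failures=1):
    """Simplified take on the HA Admission Control idea: after
    losing the `failures` largest hosts (the worst case), do the
    surviving hosts still have room for all reserved resources?
    failures=1 corresponds to N+1, failures=2 to N+2."""
    if failures:
        surviving = sorted(host_capacities)[:-failures]
    else:
        surviving = host_capacities
    return sum(surviving) >= reserved

# Four 128 GB hosts carrying 300 GB of VM reservations:
# N+1 holds (3 x 128 = 384 >= 300), N+2 does not (2 x 128 = 256),
# so an N+2 policy would require a fifth host.
hosts = [128, 128, 128, 128]
print(can_tolerate_host_failures(hosts, reserved=300, failures=1))  # True
print(can_tolerate_host_failures(hosts, reserved=300, failures=2))  # False
```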
You probably have existing applications or hardware that are littered around the data center or hidden somewhere in a cloud. vSphere includes abstractions that allow your resources to be used in a way that supports test/dev, staging, and production environments.
Project Hatchway ensures that datastores from any of the hundreds of vSphere-compatible storage partners, including VMware vSAN, can be used for Kubernetes persistent volumes. In contrast, the bare-metal path raises a host of questions about storage-platform interoperability, or forces you onto local storage with no high availability.
Security is a recurring concern. Exploits have shown that incorrect permissions allow containers to access the same data volumes and that root privileges expose the host file system. Vulnerabilities like Heartbleed, Spectre, and Meltdown have also surfaced, the latter two at the hardware level. vSphere lowers risk because containers are bound to a virtual machine, which allows flexible sizing of workers, while vSphere Update Manager patches servers quickly with an industry-trusted toolset. Achieving the same on bare metal becomes nearly impossible given the number of servers needed for workload or tenant isolation and the new tools required to patch bare-metal servers.
One of the most overlooked assets is people, and with them human scale. Software and capital assets have quantifiable costs, but what about the training required for staff to operate those assets? Adopting a new technology goes beyond its core function. Ancillary functions such as monitoring, security, networking, and business continuity require new skill sets, processes, and tooling. Multiple infrastructure silos simply duplicate teams. Rather than driving focused innovation on a single platform, IT ends up repeatedly performing the same tasks.
vSphere solves many of the complex operational challenges for all applications, both modern and traditional, feeding into existing tool sets. This gives individuals the time to learn new technologies and operate both VMware and Kubernetes environments, while developers focus on driving value through applications.
Automation is a cornerstone of DevOps and SRE culture: The objective is to automate as much as possible so time can be focused on other tasks. Yet, how quickly can infrastructure be scaled for application spikes? Or when developers begin pushing more workloads to the Kubernetes cluster?
With physical hardware, adding more nodes relies on a multitude of variables from hardware type to supported integrations with bare-metal provisioning mechanisms like Foreman or MaaS. It is a very complex automation environment that relies on custom triggers for idle hardware, including the configuration of storage and network components. Capacity management and the basic resource procurement lifecycle would also need to be addressed.
In contrast, vSphere APIs have multiple language bindings (Go, Ruby, Python), which are well documented and tested. These language bindings allow plentiful options for automation with vRealize Automation as well as configuration management with products like Ansible, Puppet, Chef, and Terraform.
One of the strongest arguments for bare-metal environments is related to performance. Many think that removing the hypervisor removes a tremendous amount of overhead. On the contrary, VMware has done considerable research and optimization so that generalized workloads are not affected.
A comparative study by VMware shows that an enterprise web application can run in Docker containers on vSphere 6.5 with better performance than Docker containers on bare metal, largely because of optimizations in the vSphere CPU scheduler for non-uniform memory access (NUMA) architectures. This quashes the belief that running containers on VMs comes with a performance tax. vSphere is better at scheduling VMs on NUMA nodes where their memory resides.
After reading this post, the ultimate question you have to ask is, do all the benefits of a virtualized infrastructure outweigh the cost of a theoretical performance advantage running on bare metal?
Whether it’s traditional or cloud-native applications, the VMware Software-Defined Data Center delivers resource optimization, availability, interoperability, security, scale, and performance. These are the same reasons why the largest cloud providers offer their services on virtualized infrastructure. Bare metal is rigid and complex, which doesn’t align with business objectives.
To learn more, read the white paper Containers on Virtual Machines or Bare Metal? Deploying and Securely Managing Containerized Applications at Scale.
To find out more about the value of running containers on virtual machines in a VMware SDDC, join us at VMworld US for a technical deep dive by VMware architects Frank Denneman and Michael Gasch on The Value of Running Kubernetes on vSphere CNA1553BU. VMworld US takes place Aug. 26-30, 2018, in Las Vegas.