By Frank Denneman, Sr. Staff Architect
Kubernetes is one of the most exciting technologies of the year. Will it replace virtual machines? Will it displace VMware vSphere? Is it the next platform or is it just another technology making its way into the data center?
By now, we know that Kubernetes optimizes cloud-native workloads when failure and disruption are anticipated. But, you might ask, what about the infrastructure required to run these cloud-native apps? After all, Kubernetes does not run on thin air. How does Kubernetes optimize resource consumption with an eye to cost minimization while maintaining high availability?
We will answer these questions and more at the CNA1553BU session at VMworld US. Attend to learn why Kubernetes and vSphere are such a well-matched pair. The extensive feature set of vSphere, including high availability, NUMA optimization, and the Distributed Resource Scheduler (DRS), makes it your best choice for running Kubernetes. The combination of Kubernetes and vSphere unlocks several advantages for both cloud-native and traditional workloads. vSphere also plays a critical role in keeping the Kubernetes control plane components highly available during planned or unplanned downtime. In this session, we will detail recommended DRS and HA settings, along with many other best practices for running Kubernetes on vSphere drawn from real-world customer scenarios.
This is amazing, you might say, but what’s on the horizon? A deep dive would be incomplete without an outlook on upcoming improvements to the Kubernetes on vSphere integration. And last but not least, if you’re working to win back your end users, you’ll learn how to respond to common objections.
Still not convinced? How about a dive into the behavior of Linux CPU scheduling versus ESXi CPU and NUMA scheduling? Or a primer on sizing and deploying Kubernetes clusters on vSphere?
For all this and more, I look forward to seeing you at CNA1553BU: Deep Dive: The Value of Running Kubernetes on vSphere. Schedule your session now!