
Evolution of Workload Hosting: From Virtualization to Containerization

Virtualization solved the core problem of “one server, one app.” Containerization built on that outcome and refined how it is achieved. Virtualization, however, remains a mainstay of contemporary computing: many of the world’s most critical workloads run in VMs today and will continue to do so. Beyond its longevity, virtualization helps containerization and Kubernetes deliver the core outcomes that users and businesses expect.

I had the opportunity to attend KubeCon North America in November 2025. Thank you to the Cloud Native Computing Foundation for an exceptional event! You can read my colleague’s great summary of the event here. I also had the privilege of representing Broadcom at the expo booth, where I had compelling conversations with attendees and fellow sponsors from the broader cloud native community. One question from an engineer who stopped by the booth stood out to me: “What does virtualization have to do with Kubernetes?” Understanding the answer matters for both day-to-day IT work and organizational budgets!

Computing has revolutionized the way we interact with each other and the way we work, and it has shaped what is possible in industry. IT workloads demand computing resources such as CPU, memory, storage, and network in order to perform their desired functions, such as sending an email or updating a database. A critical piece of business operations is how IT organizations optimize their workload hosting strategy, whether workloads run on a mainframe, in an on-premises datacenter, or in a public cloud environment.

Virtualization didn’t disappear with Kubernetes – it actually makes Kubernetes work better at enterprise scale.

Virtualization

Since the dawn of electronic computing in the 1940s, users interacted with dedicated physical hardware to accomplish their tasks. Applications, workloads, and hardware all advanced rapidly, expanding the ability, complexity, and reach of what users could do via computing. However, a key constraint remained: one machine, or server, was dedicated to one application. For example, organizations had servers dedicated to email, or an entire server dedicated to activities that ran a handful of times per month, such as payroll.

Virtualization, using software to simulate IT resources, was pioneered in the 1960s on the mainframe. In that era, virtualization allowed shared access to mainframe resources and enabled a single mainframe to serve multiple applications and use cases. This provided a blueprint for contemporary virtualization and cloud computing: multiple applications sharing one physical machine instead of each requiring its own dedicated hardware.

VMware led the cloud computing boom by virtualizing the x86 architecture, the most common instruction set for personal computers and servers. Physical hardware could now house multiple distributed applications, support many users, and be fully utilized despite its expense. Virtualization is the key technology that makes public cloud computing possible – a summary of its core benefits follows:

  • Abstraction: Virtualization abstracts the physical hardware that provides CPU, RAM, and storage into logical partitions that can be managed independently
  • Flexibility, Scalability, Elasticity: The abstracted partitions can be scaled as business needs change, provisioned and powered off on demand, and their resources reclaimed as needed
  • Resource Consolidation & Efficiency: Physical hardware can run multiple right-sized logical partitions, each with the appropriate amount of CPU, RAM, and storage – maximally utilizing hardware and saving on fixed costs such as real estate and power (see the consolidation sketch after this list)
  • Isolation & Security: Each VM has its own “world,” with an OS independent of the one running on the physical hardware, providing strong security and isolation for applications sharing the underlying host
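
To make the consolidation math concrete, here is a minimal back-of-envelope sketch. All host and VM sizes are illustrative assumptions, not measurements from any real environment:

```python
# Back-of-envelope VM consolidation estimate.
# All capacity figures are illustrative assumptions.

HOST_CORES = 64    # physical cores per host (assumed)
HOST_RAM_GB = 512  # physical RAM per host (assumed)

# Right-sized VMs for three example workloads (assumed sizes).
vms = [
    {"name": "email",    "cores": 8,  "ram_gb": 32},
    {"name": "database", "cores": 16, "ram_gb": 128},
    {"name": "payroll",  "cores": 4,  "ram_gb": 16},
]

used_cores = sum(vm["cores"] for vm in vms)
used_ram = sum(vm["ram_gb"] for vm in vms)

# Without virtualization, each workload needs its own server (3 hosts).
# With virtualization, all three fit on one host with headroom to spare.
print(f"CPU used: {used_cores}/{HOST_CORES} cores ({used_cores / HOST_CORES:.0%})")
print(f"RAM used: {used_ram}/{HOST_RAM_GB} GB ({used_ram / HOST_RAM_GB:.0%})")
print(f"Servers needed: 1 instead of {len(vms)}")
```

The same arithmetic, applied across hundreds of hosts, is what drives the real-estate and power savings described above.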

For most enterprises and organizations, the critical workloads that power their mission are built to run on virtual machines, and these organizations trust Broadcom to provide the best VMs and virtualization technology on the planet.

By proving that infrastructure could be abstracted and managed independently of physical hardware, virtualization laid the foundation for the next evolution in workload hosting.

Containerization

As computing demands scaled, the complexity of applications and workloads also rose exponentially. Applications that were traditionally designed and managed as monoliths, or a single unit, began to be broken apart into smaller units of functionality called microservices. This let developers and application administrators manage units of their applications independently, making scaling, updates, and reliability easier. These microservices run in containers, which Docker popularized in the industry.

Docker containers package applications and their dependencies, such as code, libraries, and config files, into units that can run consistently on any infrastructure – whether a developer’s laptop, a server in an enterprise’s datacenter, or a server in the public cloud. Containers get their name from shipping containers, and provide many of the same benefits as their namesake, such as standardization, portability, and encapsulation. Below is a quick overview of the key benefits of containerization, with a short example after the list:

  • Standardization: Like shipping containers package merchandise in a form factor that other machinery can consistently interact with, software containers package applications in a uniform, logically abstracted, and isolated environment. 
  • Portability: Shipping containers move from ships to trucks and trains. Software containers can run on a developer’s laptop, in development environments, on production servers, and across cloud providers. 
  • Encapsulation: Shipping containers contain all the merchandise needed to fulfill an order. Software containers hold the application code, runtime, system tools, libraries, and any other dependencies required to run the application. 
  • Isolation: Shipping and software containers both isolate their contents from other containers. Software containers share the underlying physical machine’s OS, but not the application dependencies.
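
As a minimal illustration of portability and encapsulation, the sketch below uses the Docker SDK for Python to run a small public image. Because the image bundles the application and its dependencies, the same call behaves the same on a laptop, a datacenter server, or a cloud VM; it assumes the docker package is installed (pip install docker) and a Docker daemon is running:

```python
import docker  # Docker SDK for Python (pip install docker)

# Connect to whatever Docker daemon the environment provides --
# a laptop, a datacenter server, and a cloud VM all look identical here.
client = docker.from_env()

# The image encapsulates the app and its dependencies, so this call
# behaves the same wherever a conformant container runtime exists.
output = client.containers.run(
    "alpine:3",                          # small public base image
    ["echo", "hello from a container"],  # command to run inside it
    remove=True,                         # clean up the container afterward
)
print(output.decode().strip())
```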

As containers became the industry standard, teams began developing their own tools to orchestrate and manage containers at scale. Kubernetes was born out of these orchestration efforts in 2015 and donated to the open source community. Building on the nautical theme of containers, Kubernetes means “helmsman” or “pilot” in Greek, and it functions as the brain of the infrastructure.

A container lets you easily deploy an application – Kubernetes lets you scale how many instances of that application you run, ensures each instance stays running, and works the same across any cloud provider or datacenter. These are the three “S” pillars: Scalability, Self-Healing, and Standardization. Those outcomes propelled Kubernetes’ rise to industry gold standard; it is now ubiquitous in cloud-native computing because it delivers operational consistency, reduces risk, and enhances portability. A short scaling sketch follows.
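
As a minimal sketch of the Scalability and Self-Healing pillars, the snippet below uses the official Kubernetes Python client to declare five replicas of a hypothetical Deployment named web. The Deployment name and namespace are assumptions for illustration; the code expects the kubernetes package to be installed and a valid kubeconfig on the machine:

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig. The same code works
# against any conformant Kubernetes cluster, on any cloud (Standardization).
config.load_kube_config()

apps = client.AppsV1Api()

# Declare the desired state: 5 replicas of a hypothetical Deployment
# named "web" in the "default" namespace (Scalability).
apps.patch_namespaced_deployment_scale(
    name="web",
    namespace="default",
    body={"spec": {"replicas": 5}},
)

# Kubernetes' controllers now reconcile continuously: if an instance
# crashes, a replacement is scheduled automatically (Self-Healing).
scale = apps.read_namespaced_deployment_scale(name="web", namespace="default")
print(f"Desired replicas: {scale.spec.replicas}")
```

The key design point is that you declare the outcome (five running instances) rather than scripting the steps; Kubernetes’ control loops do the rest.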

Virtualization → Containerization

Virtualization paved the way for developers to house and isolate multiple applications on shared physical hardware, let administrators manage IT resources decoupled from that hardware, and proved that abstracting layers of the stack is viable for running and scaling complex software. Containers build on these principles by abstracting the application layer, providing the following benefits over virtualization (a rough arithmetic sketch follows the list):

  • Efficiency: Because containers share the host OS, they eliminate the resource overhead (CPU, memory, storage) of running many copies of the same OS, one per application
  • Velocity: The smaller footprint allows much faster startup and shutdown times
  • Portability: Containers are lightweight and can run on any conformant container runtime
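
A rough sketch of the efficiency point. The per-OS and per-app footprints below are illustrative assumptions, not benchmarks:

```python
# Rough arithmetic for the shared-OS efficiency benefit.
# Footprint figures are illustrative assumptions, not benchmarks.

N_APPS = 10
GUEST_OS_RAM_GB = 2.0  # assumed RAM overhead of one guest OS
APP_RAM_GB = 1.0       # assumed RAM used by each application

# One VM per app: every instance carries its own guest OS.
vm_total = N_APPS * (GUEST_OS_RAM_GB + APP_RAM_GB)

# Containers share the host's kernel, so the OS cost is paid once.
container_total = GUEST_OS_RAM_GB + N_APPS * APP_RAM_GB

print(f"VM-per-app RAM:    {vm_total:.0f} GB")
print(f"Containerized RAM: {container_total:.0f} GB")
print(f"Savings: {1 - container_total / vm_total:.0%}")
```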

Virtualization improves Kubernetes

Virtualization also stabilizes and accelerates Kubernetes. Most managed Kubernetes services, such as the hyperscaler offerings (EKS on AWS, AKS on Azure, GKE on GCP), run the Kubernetes layer on top of a virtualized OS. Because Kubernetes environments are typically complex, virtualization greatly enhances isolation, security, and reliability, and it eases management overhead. A brief overview of these benefits follows, with a short sketch after the list:

  • Isolation & Security: Without virtualization, all containers running on a Kubernetes cluster on a physical host share the same kernel (OS). If a container is breached, everything else running on that physical host can potentially be compromised. The hypervisor contains a breach within its VM, preventing bad actors from spreading to other Kubernetes nodes and containers.
  • Reliability: Kubernetes can restart containers if they crash, but it is powerless if the underlying physical host has issues. Virtualization can restart the Kubernetes nodes on a different physical server via High Availability.
  • Operations: Without virtualization, an entire physical host is typically home to one Kubernetes cluster. The environment is then locked into one version of Kubernetes, slowing velocity and making upgrades and operations difficult; with VMs, multiple clusters and versions can share the same hardware.
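
For instance, when several clusters (each a set of node VMs) share the same hardware, operators routinely check per-node Kubernetes versions while rolling upgrades through one cluster at a time. A minimal sketch with the official Python client, assuming a kubeconfig that points at the target cluster:

```python
from kubernetes import client, config

# Point the client at one of possibly several clusters whose node VMs
# share the same physical hosts.
config.load_kube_config()

core = client.CoreV1Api()

# Each node here is a VM; its kubelet version can be upgraded
# independently of other clusters' nodes on the same hardware.
for node in core.list_node().items:
    print(node.metadata.name, node.status.node_info.kubelet_version)
```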

This is why every major managed Kubernetes service runs on virtual machines: virtualization provides the isolation, reliability, and operational flexibility required at enterprise scale.

Broadcom provides the best platform for workload hosting

Since Kubernetes’ birth in 2015, VMware technology has been a top 3 contributor to the upstream project. Several projects were invented by Broadcom and have been donated to the community, including: 

  • Harbor
  • Antrea
  • Contour
  • Pinniped

Broadcom’s engineering teams remain committed to upstream Kubernetes and are contributing to projects such as Harbor, Cluster API, and etcd.

With the release of VCF 9, Broadcom’s VCF Division has brought the industry unified operations, shared infrastructure, and consistent tooling, agnostic of workload form factor. Customers can run VM and container/Kubernetes workloads on the same hardware, managed with the same tools on which millions of practitioners have built their skills and careers. Enterprises and organizations can cut capital and operating expenditures, standardize their operating model, and modernize their applications and infrastructure – allowing the business to move faster, secure its data, and improve the reliability of its core systems.

Broadcom has been the gold standard for virtualization and VM workloads for 25+ years. Thanks to continuous innovation and contributions to the technology landscape and the broader industry, customers continue to partner with us and trust us to run not only their mission-critical VM workloads but also their container and Kubernetes workloads for the next 25 years.

