

VMware vSphere Integrated Containers Deep Dive

Two months ago, at DockerCon, VMware introduced Project Bonneville to the world. Today at VMworld, Kit Colbert revealed Bonneville's coming of age as the core technology powering vSphere Integrated Containers.

The response to Bonneville from our customers has been huge. They absolutely get the value proposition of combining the best of both worlds: the speed, agility and workflow of containers, underpinned by a rock-solid enterprise platform that they already trust.

So in this blog post, I want to look in a little more detail at what the team has been up to in the couple of months leading up to today's announcement.

When we last demoed our technology, we showed how it could turn an ESX host into a Docker host and explained how containers could be easily provisioned as lightweight VMs. At that time we were able to start a Linux containerVM in around 5 seconds, underpinned by a tiny custom "Photon Pico" kernel, and we demonstrated a fledgling MS-DOS port we threw together in a 48-hour hackathon. We talked about how Instant Clone lets containers share the memory of a common Linux kernel without any of the disadvantages of actually having to share it. We also highlighted the portability aspects by driving Bonneville from a vanilla Docker client and showing simple Swarm and Compose integration.
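That portability point is worth making concrete: the client-side workflow is just plain Docker. Here's a minimal sketch; the endpoint hostname and port are hypothetical placeholders, not a real address:

    # Point an unmodified Docker client at the container endpoint
    # (hostname and port are hypothetical placeholders).
    export DOCKER_HOST=tcp://vch.example.com:2375

    # These provision containers as lightweight VMs behind the scenes,
    # but from the client side it's just Docker.
    docker pull busybox
    docker run -it busybox /bin/sh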

Well, we promised that better was to come, and today's announcement is another step along the way to being faster, lighter and more dynamic.

vSphere Integrated Containers takes one of the most fundamental and valuable precepts of virtualization and applies it to containers. I like to call it “exploding the Linux container host”. The virtualization revolution brought flexible, abstract, dynamic resource boundaries to compute – carving up commodity hardware into simple fungible assets. Now we’re doing the same for containers with the “Virtual Container Host” concept.

The Virtual Container Host

Anyone remember what we dealt with before virtualization? Statically sized pieces of compute running a single OS that had to be shut down to be patched or reconfigured. But wait a second, isn't that kind of what a container host is, even when it's running in a VM? In some cases, such as the Kubernetes pod model, this makes sense: it's a very intentional pre-allocation of boundaries around containers that naturally belong together. However, that model presupposes that we know ahead of time exactly which containers we're provisioning and in what configuration. In many cases we don't, and that forces us to make guesses that can result in wasted resources, painful reconfigurations and, in the worst case, container hosts that become pets, not cattle.

The Virtual Container Host is a container endpoint with completely dynamic boundaries. vSphere resource management handles container placement within those boundaries, such that a virtual Docker host could be an entire vSphere cluster one moment and a fraction of the same cluster the next. The only resource a container host consumes in the cluster is the resource consumed by its running containers.
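One way to picture those dynamic boundaries is as a vSphere resource pool that can be resized live. As a hedged sketch, assuming a VCH backed by a resource pool at a hypothetical inventory path, the open-source govc CLI could adjust the boundary like this:

    # Hypothetical inventory path for the pool backing the VCH.
    POOL=/dc1/host/cluster1/Resources/my-vch

    # Shrink the boundary to a fraction of the cluster
    # (CPU limit in MHz, memory limit in MB)...
    govc pool.change -cpu.limit 8000 -mem.limit 16384 $POOL

    # ...or open it up to the whole cluster (-1 means unlimited),
    # with nothing visible to the containers running inside it.
    govc pool.change -cpu.limit -1 -mem.limit -1 $POOL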

Reconfiguration of the VCH is completely transparent to the containers running in it, and the VCH imposes no conceptual limitations on the kernel version or even the operating system that the containers run. It never needs to be patched, upgraded or maintained, because it's an entirely abstract concept. As such, VCHs could also be nested, giving a team access to a large VCH from which smaller VCHs could be sub-allocated to individuals.
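Continuing the resource-pool picture from above (an assumption for illustration, not a statement of how the product is implemented), nesting could look like carving a per-user child pool out of a team-level one; all names and sizes here are hypothetical:

    # A smaller per-user VCH boundary inside a team-level one
    # (paths and sizes are hypothetical).
    govc pool.create -mem.limit 8192 \
      /dc1/host/cluster1/Resources/team-vch/alice-vch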

vSphere vSAN offers fantastic opportunities for VCH shared storage, whether that's persistent volumes, a consolidated image cache or dynamic horizontal scale-out. vSphere's networking capabilities bring further opportunities for secure and dynamic networking, such that container management traffic, application traffic and networked storage can all be isolated from one another.
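From the client side, none of that storage plumbing shows through: a persistent volume workflow is just the standard Docker one, and where the data actually lands (a vSAN datastore, for instance) is a property of the endpoint. A quick sketch:

    # Standard Docker volume workflow; the backing datastore
    # (e.g. vSAN) is an endpoint-side concern, not a client-side one.
    docker volume create db-data
    docker run -d -v db-data:/data redis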

In addition to all that, we've driven Linux container start time down to under 2 seconds while retaining the full BIOS of the VM, allowing us to further extend our portfolio of compatible operating systems. If you want to experience the thrill of seeing an A:\> prompt come up from a docker run, come by the VMware Videogame Container System area (Hang Space, Moscone West, Level 2) and interact with it in person. If you want the fizzing sensation of seeing a C:\Windows\System32> prompt, come and see one of the team and we'll happily give you a sneak preview.
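And yes, the demo really is that direct; it amounts to something like the following, where the image name is a hypothetical placeholder rather than a published image:

    # Hypothetical image name; illustrates the workflow, not a published image.
    docker run -it msdos-demo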

In short, the flexibility and power of the vSphere platform, far from being the legacy software of yesteryear, is bringing an unbeatable level of sophistication and simplicity to the foundations of the container ecosystem. We're all very excited about it.
