Today we’re releasing VMware AppCatalyst – a desktop hypervisor for developers – as a technology preview. As we spoke with development teams over the last few months, it became clear that there was a gap in the market. Most developers use some form of hypervisor on their desktop – typically either VMware Fusion or Oracle VirtualBox – and they use these tools every day. But these tools were not specifically designed to support developer workflows, and there are many developer use cases where we thought we could do a lot better.

It’s free!

One of the clearest pieces of feedback we got from the community was that developers loved Fusion, but often used some other hypervisor because they didn’t want to pay for Fusion when all they needed was a local environment in which to code. So we’re making the technology preview of AppCatalyst free.

AppCatalyst leverages the same hypervisor that ships in Fusion, Workstation, and vSphere. We stripped out some features that we found developers didn’t care about as much, such as the GUI, 3D graphics support, virtual USB support, and Windows guest support. But of course, if you want those features, you’re welcome to upgrade to Fusion and get the full experience.

It’s API- and CLI-driven

AppCatalyst is driven by a REST API and a CLI. Everything the hypervisor does is exposed and controlled through the REST API, making it extremely easy to integrate AppCatalyst into other tools and workflows. We leveraged this capability to build AppCatalyst support for Docker Machine and Vagrant in just a few days (see below for more details on these features).
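To give a feel for what driving a hypervisor through a REST API looks like, here is a minimal client sketch. The port, endpoint paths, and method names below are illustrative assumptions for this post, not AppCatalyst’s documented API:

```python
# Minimal sketch of a REST client for a local, API-driven hypervisor.
# The base URL and "/api/vms" endpoint are illustrative assumptions,
# not AppCatalyst's documented interface.
import json
import urllib.request


class HypervisorClient:
    """Builds and issues requests against a local REST-driven hypervisor daemon."""

    def __init__(self, base_url="http://localhost:8080"):
        self.base_url = base_url.rstrip("/")

    def url_for(self, *parts):
        # Compose an endpoint URL, e.g. http://localhost:8080/api/vms/photon
        return "/".join([self.base_url, "api"] + list(parts))

    def list_vms(self):
        # GET the VM inventory and return the parsed JSON body.
        with urllib.request.urlopen(self.url_for("vms")) as resp:
            return json.load(resp)


client = HypervisorClient()
print(client.url_for("vms", "photon"))
```

Because everything goes over plain HTTP and JSON, any tool that can make web requests – Docker Machine, Vagrant, or a ten-line script like this – can drive the hypervisor the same way.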

We’re already working with the community to bring integrations of other common tools like Panamax and Kitematic to AppCatalyst, and we expect to expand integration into more tools going forward.

It’s optimized for cloud-native application workloads

One of the most common uses for a desktop hypervisor is running Docker. Docker is fundamentally a Linux technology, but most developers we talk to are using Macs, so they need some form of hypervisor to run a Docker engine. To do this you have to a) download a hypervisor, b) select a Linux distribution, c) download and install that distribution, and d) set up Docker – all just to get to the point where you can start using Docker.

AppCatalyst comes pre-bundled with Photon OS – VMware’s compact container host Linux distribution. When you download AppCatalyst, you can point docker-machine at it, start up a Photon instance almost instantly (since there’s no Linux ISO to download), and start using Docker. This saves a lot of time getting started.

Another common use of the desktop hypervisor is with Vagrant. Developers write Vagrantfiles and then vagrant up their deployments. AppCatalyst ships with a Vagrant provider, so you can start using it with Vagrant immediately.
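A workflow with the AppCatalyst provider might look roughly like the hypothetical Vagrantfile below; the provider identifier and box name are assumptions for illustration, not confirmed names:

```ruby
# Hypothetical Vagrantfile for the AppCatalyst Vagrant provider.
# The provider identifier and box name below are illustrative assumptions.
Vagrant.configure("2") do |config|
  config.vm.box = "vmware/photonos"            # assumed Photon OS box name
  config.vm.provider "vmware_appcatalyst"      # assumed provider identifier
  # Verify Docker is available inside the Photon instance on first boot.
  config.vm.provision "shell", inline: "docker --version"
end
```

With a file like this in place, a single vagrant up brings the instance online against AppCatalyst instead of another hypervisor.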

A data center on your desktop for dev and test

Our long-term goal is to turn AppCatalyst into a data center on the desktop: any program or utility that you use against your production data center should be able to run in dev/test mode on your laptop. To get there, we’ll be adding storage and networking abstractions to AppCatalyst and moving toward API parity with the data center. We still have a ways to go, and this initial tech preview is just the first step.

In the meantime, please check out the free tech preview of AppCatalyst and let us know what you think. We’d love to hear from you @cloudnativeapps, especially if you’re trying to integrate your tools with AppCatalyst, or if you’ve got cool ideas about what you think it should do next.


About the Author: Jared Rosoff is the senior director of product management for Cloud-Native Apps at VMware, where he oversees the company’s cloud-native portfolio of products. 


In the last two years there’s been an explosion of interest in the potential of application containers. Some look to containers as the infrastructure grease that helps expedite deployment, integration, testing and provisioning – package once, run anywhere. Others point to the resource management capabilities typically associated with VMs and the low overhead of container instantiation. Many have mused on the relationship between containers and virtual machines (VMs) – are they competitive or complementary?

Last year at VMworld, we stressed the latter – that containers and VMs work better together. At that time, we were already two months into our work on Project Bonneville, hoping to deliver on that message in a very literal sense. Our goal was to provision containers directly to virtual infrastructure as first-class citizens and to do it efficiently, quickly and transparently.

When we initially proposed a 1:1 model between containers and VMs, it was met with widespread skepticism. Doesn’t this construct negate the majority of a container’s benefits? Why should containers be a special case – isn’t the hypervisor intentionally workload-agnostic? One year later, our half-pizza team here at VMware is proud to have reached the point where we can share a technology preview with the world and blog publicly about our progress.

The design principles of Bonneville take a pure approach to the notion of containers on the hypervisor. In the abstract, a container is a binary executable, packaged with its dependencies and intended for execution in a private namespace with optional resource constraints. A container host is a pool of compute resources with the necessary storage and network infrastructure to manage a number of containers. Around this, you have an ecosystem that provides dependency management, image resolution, cloud storage, etc. Companies such as Microsoft and CoreOS are successfully challenging the assumption that a container must necessarily be a Linux construct.

If we accept these descriptions of containers and container hosts, then arguably a VM fits the abstract description of a container perfectly, and a hypervisor – or some part thereof – is an equally suitable container host. There are compelling advantages to hardware virtualization in this brave new containerized world, and with Bonneville, we can deliver them at minimal additional cost.

Bonneville is a Docker daemon with custom VMware graph, execution and network drivers that delivers a fully-compatible API to vanilla Docker clients. The pure approach Bonneville takes is that the container is a VM, and the VM is a container. There is no distinction, no encapsulation, and no in-guest virtualization. All of the necessary container infrastructure is outside of the VM in the container host. The container is an x86 hardware virtualized VM – nothing more, nothing less.

We recently tested the flexibility of this abstraction by taking a legacy operating system – one never designed for use with containers – and integrating it with Docker. We chose MS-DOS 6.22 partly for nostalgia, and partly because it neatly encapsulates a simple legacy OS. In 48 hours, we were able to use a vanilla Docker client to pull a Lemmings image from a Docker repository and run it natively in a 32MB VM via a Docker run command. The image was built using a Dockerfile, layered on top of FAT16 scratch and DOS 6.22 OS images with TTY and graphics exported back to the client. Given another 48 hours, I’m confident we’d have had volume support and networking integrated.
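The layering described above might look roughly like the hypothetical Dockerfile below; the image names and game path are invented for illustration, since the post doesn’t publish them:

```dockerfile
# Hypothetical sketch of the MS-DOS 6.22 layering described above.
# Image names and paths are invented for illustration.

# DOS 6.22 OS layer, itself built on a FAT16 scratch base image
FROM msdos622

# Add the game files as a new image layer
COPY LEMMINGS/ C:/LEMMINGS/

# Entry point launched by "docker run", with TTY and graphics
# exported back to the Docker client
CMD ["C:\\LEMMINGS\\LEMMINGS.EXE"]
```

The point is that the standard Dockerfile build-and-layer model applies unchanged, even when the “container” is a VM booting a legacy OS.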

That’s a win for the flexibility of hardware virtualization, but what about efficiency? vSphere’s “Instant Clone” technology (aka Project Fargo) already facilitates fast and efficient VM provisioning; with Instant Clone, we’re able to provision child VMs directly from a parent ROM image in memory where they start out with a theoretical footprint of zero. Every byte written to memory is a copy-on-write diff from the parent, so the footprint of the child VM is only ever its mutable memory. The more advanced the initialization of the parent, the faster and more efficient the children. Bonneville is only beginning to tap the potential of Instant Clone; further efficiencies are gained by bootstrapping the parent with a custom “Pico” version of VMware Photon, weighing in at around 25MB.
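The copy-on-write behavior described above can be sketched in miniature: a child’s footprint is only the pages it has written, and everything else reads through to the parent. This is a conceptual model of the idea, not Instant Clone’s actual implementation:

```python
# Conceptual model of copy-on-write cloning: children share the parent's
# pages and store only the pages they mutate. Not VMware's implementation.
class ParentImage:
    def __init__(self, pages):
        self.pages = dict(pages)  # page number -> contents


class ChildVM:
    def __init__(self, parent):
        self.parent = parent
        self.diff = {}  # only mutated pages live here

    def read(self, page):
        # Reads fall through to the parent unless the child has written.
        return self.diff.get(page, self.parent.pages.get(page))

    def write(self, page, value):
        # Copy-on-write: the parent's page is never modified.
        self.diff[page] = value

    def footprint(self):
        # The child's memory footprint is just its mutable diff.
        return len(self.diff)


parent = ParentImage({0: "kernel", 1: "initrd"})
child = ChildVM(parent)   # footprint starts at zero
child.write(2, "container data")
```

This is why a freshly cloned child starts with a theoretical footprint of zero, and why a more fully initialized parent makes every child cheaper.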

A Bonneville container host can theoretically run containers of any kernel version of any operating system and can do it fast and efficiently. What other benefits are there of this hardware-virtualized approach? Here are a few points to consider:

  • Security and isolation. If you try to break out of a Bonneville VM, the only thing you’re likely to find is the BIOS.
  • Freedom from Linux distribution management. Not sure what Linux distro to use as a container host? Worried about maintenance, patching, upgrades, sizing, multi-tenancy or kernel version? With Bonneville, there are no such worries. Your containers live in virtual spaces that are dynamic, secure and already have well-trodden upgrade paths. If you really want a Linux container host, Bonneville can provision a Linux container host as a container.
  • Portability. Provided you stick with Linux, any container created in Bonneville can be committed and run on any vanilla Docker host.

In summary, Bonneville is the container ecosystem you love, on the hypervisor you trust. We look forward to bringing you further updates in due course.


About the Author: Ben Corrie (@bensdoings) is the Principal Investigator on Project Bonneville, a native container solution for VMware’s hypervisor.


If you’ve been following us @cloudnativeapps, then you know that I’ll be leading a tutorial session at DockerCon (June 22-23, 2015) this coming Tuesday.

We mentioned earlier this year that we’d be working towards making developers first-class users of the data center. To that end, I’m excited to show off some of the new projects we’ve been working on, which extend the data center to the developer’s laptop.

More specifically, I’ll be demonstrating how technologies we’ve built with Docker and some of our other partners can make it quick and easy for developers to build and test code on their local machines. I’ll also cover how those same tools make it easier to push code written locally to production with fewer complications and errors.

And because development, test, and deployment processes for container-based applications have to be accompanied by the right infrastructure and management tools, I’ll also spend a few minutes talking about some of our newest work for the production stack. Altogether, you’ll see that we’re thinking through the whole application lifecycle – from development teams to production engineering – and what needs to be built at each step to deliver applications and their features to users more quickly, securely, and reliably.

If you can’t attend the session, feel free to reach out to me at @fabiorapposelli or drop by our booth (G9) for more demos at DockerCon. We have a ton of cool projects we need your feedback on!

About the Author:  Fabio is a seasoned IT professional with over 15 years of experience, and a background as a software developer. He currently sits on the edge between Dev and Ops, helping both reach nirvana.