
I’ve been fortunate to have one of our super sharp product line managers, Alex Jauch (Twitter: @ajauch), spend some time explaining to me one of the new enabling technologies of vSphere 6.0: VAIO. Let’s take a look at this really powerful capability, see what kinds of things it can enable, and get an overview of how it works.

VAIO stands for “vSphere APIs for IO Filtering”

For a time this was known colloquially as “IO Filters”. Fundamentally, it is a means by which a VM’s IO can be safely and securely filtered in accordance with a policy.

VAIO offers partners the ability to put their technology directly into the IO stream of a VM through a filter that intercepts data before it is committed to disk.

Why would I want to do that? What kinds of things can you do with an IO filter?

Well, that’s up to our customers and our partners. VAIO is a filtering framework that will initially allow vendors to present caching and replication capabilities to individual VMs. As partners come on board to write filters for the framework, you can imagine where this can go: security, antivirus, encryption, and other areas as the framework matures. VAIO gives us the ability to act on an IO stream in a safe and certified fashion, and to manage the whole thing through profiles so we get a view into the IO stream’s compliance with policy!

The VAIO program itself is for partners – the benefit is for customers who want to do policy-based management of their environment and bring the value of our partner solutions directly into per-VM, and indeed per-virtual-disk, storage management.

When partners create their solutions, their data services are surfaced through the Storage Policy Based Management (SPBM) control plane, just like the rest of our policy-driven storage offerings such as Virtual SAN and Virtual Volumes.

Beyond that, because the data services operate at the VM virtual device level, they work with just about any type of storage device, furthering the value of VSAN and VVols and extending the use of these offerings through additional data services.

How does it work?

The capabilities of a partner filter solution are registered with the VAIO framework, and are surfaced for user interaction in the SPBM control plane via a VASA provider built into ESXi itself.
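As a rough mental model, a filter’s capabilities land in a registry that the policy engine can query. All of the names below are invented for this sketch; the real registration happens through the VAIO framework and the VASA provider built into ESXi, not through these calls.

```python
# Toy registry illustrating how filter capabilities might be surfaced to a
# policy engine. Names are illustrative only, not VAIO SDK APIs.

capability_registry = {}

def register_filter(vendor, capabilities):
    """Record the data services a vendor's filter offers."""
    capability_registry[vendor] = capabilities

# Two hypothetical partner filters come on board:
register_filter("AcmeCache", ["read-cache", "write-buffer"])
register_filter("AcmeDR", ["sync-replication"])

# The SPBM-side view: which data services can policies be built from?
available = sorted(c for caps in capability_registry.values() for c in caps)
print(available)   # ['read-cache', 'sync-replication', 'write-buffer']
```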


Once the data service capabilities of the filters are visible in SPBM, policies can be created for use by VMs.


Sample Caching Policy From VAIO SDK


For example, a caching policy might be created that allows a VM access to an onboard flash device in a host, giving it a certain amount of read or write buffer. Another example might be a policy that synchronously replicates all IO to a DR location before it is committed to disk, or even before it is cached.
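To make those two examples concrete, here is a minimal sketch of what such policies might capture. Field names like `read_cache_gb` and `target_site` are assumptions for this sketch, not the actual SPBM policy schema, which comes from the partner filter’s registered capabilities.

```python
from dataclasses import dataclass

@dataclass
class CachingPolicy:
    """Grant a VM a slice of a host's onboard flash device."""
    read_cache_gb: int      # flash reserved for read caching
    write_buffer_gb: int    # flash reserved for write buffering

@dataclass
class ReplicationPolicy:
    """Replicate IO to a DR site before it is committed (or cached)."""
    target_site: str        # hypothetical identifier for the DR location
    synchronous: bool = True

# A policy is created once, then applied to any number of VMs or VMDKs:
cache_policy = CachingPolicy(read_cache_gb=10, write_buffer_gb=2)
dr_policy = ReplicationPolicy(target_site="dr-site-01")
```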


The filters that allow these capabilities are written by different vendors, but they are surfaced in a standardized model for consumption through data services in SPBM. This abstracts the details of execution so that the user doesn’t need to configure anything other than the policy, and partners can offer their unique value through a common model.


Once a policy is created it can be applied to a VM much like any other storage policy. Taking the above examples, say we apply the replication policy to a given VM. The data service is now enabled for the VM and runs in the VM’s user world, but not within the guest OS. Nothing is changed within the guest to operate the filtering of the IO itself. The VM now has a filter attached that is activated for any of the VM’s IO whenever its VMDK is accessed, even if the VM is powered off.

Let’s do a brief summary of how IO is usually handled before talking about the implementation of VAIO.

Normally a VM’s IO is handed from the user world (the VM) to the kernel for processing. A write, for example, comes from the guest OS and is handled in user space by the vSCSI driver of the VM. The vSCSI driver opens a channel to the vSCSI backend in the vmkernel, which processes the write by opening a location on the file system; the file system then hands the write to the FDS (file device) layer, which accesses a physical device and commits the write.
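That layered path can be sketched as a toy model, with each function standing in for a layer. None of these names are actual ESXi components; this is only a picture of the hand-offs described above.

```python
# Toy model of the normal write path: each function stands in for a layer.

def commit_to_device(write):
    # Physical device access: the write lands on disk.
    return f"committed:{write}"

def fds_layer(write):
    # File device (FDS) layer: talks to the physical device.
    return commit_to_device(write)

def filesystem(write):
    # File system: resolves the on-disk location for the write.
    return fds_layer(write)

def vscsi_backend(write):
    # vmkernel side of the vSCSI channel.
    return filesystem(write)

def vscsi_frontend(write):
    # User world: the VM's vSCSI driver hands the write to the kernel.
    return vscsi_backend(write)

print(vscsi_frontend("block-42"))   # committed:block-42
```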


Normal IO Path


When manipulating this data path to do things like filtering, we want to ensure a few things: we do not want to weaken the security of the path, we do not want to introduce much overhead, and we still want to handle any filtering before the IO is committed to the physical device. We also want the actual data manipulation to happen in the user world of the VM, to protect the kernel and minimize host-wide overhead and instability. In the past these have often been conflicting requirements, leading to approaches that prioritize one requirement over another.

With VAIO, we have a mechanism that places the execution of the data services in the user world, while placing the framework that enables the services in the kernel space where it has visibility to the full IO path.

So how does VAIO do this differently?

At first glance, IO is handled almost identically, so that IO is not interrupted for VMs without a specific data service attached. A write, for example, proceeds from the guest in the user world down to the kernel, where it passes through the file system and file device layers. But here, at the kernel’s file device layer, the VAIO framework can see the IO request before it is sent to the physical device. The framework has visibility via a kernel module attached to the FDS layer, and can check whether that VM’s IO has a particular data service attached to it.

If there is no policy, the IO commits as normal with no filtering and no overhead. If there is a policy for that VM’s IO, instead of being immediately committed it is passed back to the user space of the requesting VM, where the data service executes the filter against the IO. This is done very quickly, with no context switching and without copying the data.


After processing for the data service (caching, replication, etc.), the filter can return the IO directly, without needing to pass back through the entire vSCSI/file system stack again. If, for example, the policy is replication, the filter passes the write to the registered replication service and then returns the IO to the VAIO framework and device layer to be committed to the physical device.

IO Path with VAIO


The ‘promotion’ of the IO back to the user world from the kernel (referred to as an “upcall”) might strike you as overhead, yet it uses a purpose-built and extremely lightweight mechanism that takes less than a microsecond. This gives us the benefits we were looking for: minimal overhead, coupled with execution in user space for security and kernel stability, without touching the vSCSI layer that bridges the user and kernel worlds.
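Putting the pieces together, the dispatch logic can be sketched as a toy model: the FDS layer checks for an attached filter and, only when one exists, upcalls the IO to a user-world filter before committing. All names here are invented for illustration; they are not VAIO SDK APIs, and the real upcall mechanism is a kernel facility, not a function call.

```python
# Toy sketch of the filtered path described above.

replicated = []

def replication_filter(write):
    # Runs in the VM's user world: forward the write to the DR service,
    # then hand it back to the framework for commit.
    replicated.append(write)
    return write

# Filters attached by policy, keyed by VM. "vm-02" has no policy attached.
attached_filters = {"vm-01": replication_filter}

def fds_layer(vm, write):
    filt = attached_filters.get(vm)
    if filt is not None:
        write = filt(write)            # the "upcall" into user space
    return f"committed:{write}"        # then commit to the physical device

print(fds_layer("vm-01", "block-7"))   # filtered, then committed
print(fds_layer("vm-02", "block-8"))   # no policy: no filtering, no overhead
```

Note the key property the post calls out: the no-policy branch touches nothing extra, so unfiltered VMs pay no cost.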

Awesome: How can I get it?

With just a bit more patience! In the next few months we will be offering an SDK to partners who have registered with the program, and we will begin certifying solutions at that point. Once we have a few certified solutions, we’ll be able to offer VAIO-based partner solutions on an ongoing basis. Another benefit of the VAIO model running in user space rather than the vmkernel is that we can certify and release solutions outside the standard vSphere release cycle. Partner offerings simply use the VAIO framework without exposing the kernel to any danger. No third-party kernel modules, no attempts to turn the pluggable storage architecture into a Frankenstein monster doing things engineering never intended, and no manipulation of vSCSI means we can be quite flexible in bringing solutions on board.


If you’re a user you’ll just have to wait for a bit while we work with our partners to get some certified solutions out the door.


If, however, you’re a partner and want to start creating some new solutions using the vSphere APIs for IO Filtering, we will have an SDK for you very shortly. If you’re interested, get in touch with the VMware partner management team or our ecosystem engineering team to get the ball rolling.