Last month Virtual Volumes celebrated its first year since its release as part of vSphere 6.0 (see VVols First Year in Review). Now more than ever, customers are asking what role Virtual Volumes will play in their virtual infrastructure. One of the more common questions I hear is how backup and recovery of VVols are handled, which inspired me to write this post. The list of storage and backup vendors that support Virtual Volumes is growing, and the way snapshots have been implemented in VVols makes backing them up much more compelling.
So, how does Virtual Volumes affect backup software vendors? The answer is simple: backup software using VADP is, for the most part, unaffected. Virtual Volumes are modeled in vSphere exactly like today’s virtual disks. The VADP APIs used by backup vendors are fully supported on Virtual Volumes, just as they are on VMDK files on a LUN. Snapshots created by backup software using VADP look the same to both vSphere and the backup software as non-VVol snapshots, even though on the array the snapshots are actually Virtual Volume objects.
What backup software vendors support Virtual Volumes?
Note: This is not an exhaustive list rather just a collection of vendors that I am aware of. If you know of others please share and I will update the post.
| Vendor | Product | VVol Support |
| --- | --- | --- |
| Veritas | Backup Exec | Yes, since v15 |
| Veritas | NetBackup | Yes, since v7.7 |
| IBM | Tivoli Storage Manager | Yes, since v7.1.2 |
| CommVault | Commvault | Yes, since v10 SP10 |
| Veeam | Veeam | Yes, since v8 Update 2 |
| Dell | vRanger | Yes, since v7.3 |
| CA Technologies | Arcserve Unified Data Protection | Yes, since r17 |
| Unitrends | Enterprise Backup | Yes, since v9.0 |
VMware vStorage APIs for Data Protection
Virtual Volumes supports backup software that uses the vSphere APIs for Data Protection (VADP). Originally introduced in vSphere 4.0, VADP enables backup products to perform centralized, efficient, off-host, LAN-free backup of vSphere virtual machines.
A backup product using VADP can back up vSphere virtual machines from a central backup server or virtual machine without requiring backup agents or backup processing inside each guest virtual machine on the ESX host. This offloads backup processing from the ESX hosts and reduces costs by allowing each ESX host to run more virtual machines.
VADP leverages the snapshot capabilities of VMware vSphere to enable backup across SAN without requiring downtime for virtual machines. As a result, backups can be performed non-disruptively at any time of the day without requiring extended backup windows and the downtime to applications and users associated with backup windows.
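The snapshot-based flow above can be sketched as a toy model. This is not the real VADP/VDDK interface; the class and method names are invented for illustration. The point is the sequence: freeze a consistent image with a snapshot, read the frozen image off-host while the VM keeps running and writing, then release the snapshot.

```python
# Toy model of a VADP-style snapshot backup (illustrative names only,
# not the real VADP API): the VM stays powered on the whole time.

class VirtualMachine:
    def __init__(self, disk):
        self.disk = dict(disk)   # live, writable blocks
        self.snapshot = None     # frozen point-in-time copy

    def create_snapshot(self):
        # Freeze a consistent image; later guest writes only touch self.disk.
        self.snapshot = dict(self.disk)

    def remove_snapshot(self):
        self.snapshot = None

def backup(vm):
    vm.create_snapshot()
    try:
        # Off-host read of the frozen image (SAN/HotAdd/NBD in real VADP).
        return dict(vm.snapshot)
    finally:
        vm.remove_snapshot()

vm = VirtualMachine({0: b"boot", 1: b"data"})
image = backup(vm)
vm.disk[1] = b"new-data"   # writes after the backup don't affect the copy
```

Because the read happens against the frozen snapshot rather than the live disk, no downtime or extended backup window is needed.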
Supported Transport Modes
A VMware Backup Host can access Virtual Machine data from datastores using four different methods – SAN, LAN(NBD), HotAdd and NBDSSL. These methods are referred to as VMware Transport modes.
- SCSI HotAdd: Supported with VVol
When running the VMware Backup Host on a Virtual Machine, the vStorage APIs can take advantage of the SCSI hot-add capability of the ESX/ESXi server to attach the VMDKs of the Virtual Machine being backed up to the VMware Backup Host. This is referred to as HotAdd transport mode. With VVols, the backup proxy is a VM, and the snapshots to be backed up are attached in something of a “read-only” mode* to the backup proxy as virtual SCSI disks.
* It is not exactly read-only: the disk is attached as an independent nonpersistent disk, which means that any writes to this “read-only” VVol object are redirected to a temporary object that gets destroyed when the VM is powered off or the disk is removed.
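The independent-nonpersistent behavior described in the footnote can be modeled in a few lines. This is an illustrative sketch, not a vSphere API: writes land in a temporary delta, reads check the delta first, and the delta is discarded on power-off, leaving the underlying snapshot VVol untouched.

```python
# Toy model of an independent-nonpersistent attach: redirect-on-write
# to a temporary object that is destroyed at power-off/detach.

class IndependentNonpersistentDisk:
    def __init__(self, base):
        self.base = base     # the snapshot VVol content (never modified)
        self.delta = {}      # temporary redo object for redirected writes

    def read(self, block):
        # Recent writes shadow the base; otherwise read the snapshot.
        return self.delta.get(block, self.base[block])

    def write(self, block, data):
        self.delta[block] = data   # never touches self.base

    def power_off(self):
        self.delta.clear()         # the temporary object is destroyed

disk = IndependentNonpersistentDisk({0: b"snapshot-data"})
disk.write(0, b"scratch")          # redirected, base stays intact
disk.power_off()                   # all redirected writes vanish
```

After `power_off()`, every read comes from the unmodified snapshot again, which is why the mode behaves as effectively read-only from the array’s point of view.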
- Network Block Device (NBD): Supported with VVol
In this mode, the ESX/ESXi host reads data from storage and sends it across the network to the VMware Backup Host. As its name implies, this transport mode is not LAN-free, unlike SAN transport. It is also less efficient, as it uses the network stack rather than the storage stack.
- NBDSSL: Supported with VVol
NBDSSL is the same as NBD, except that NBDSSL uses SSL to encrypt all data passed over the TCP/IP connection.
- SAN Transport Mode: Not Supported with VVol
For today’s VMFS disks, SAN transport mode relies on a proxy VM to tell the backup appliance which blocks it should read (since the Windows OS on the backup system can’t directly mount a VMFS disk). With VVols we use the VASA API to establish a “binding”, something of a “handshake” between the ESX host and the VASA provider, which not only instructs the VASA provider to “construct” a data path for the VVol for the ESX host, but also provides the requisite information the ESX host needs to construct the data path on its side. Because physical backup proxies cannot be VASA clients, it is currently impossible to construct a “direct” data path between the VVol storage and the physical backup proxy.
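The bind handshake above can be reduced to a toy model that shows why the physical proxy is left out. The class and method names here are invented for illustration (the real interface is the VASA API); the key point is that only a VASA client can ask the provider to construct a data path.

```python
# Toy model of the VVol bind handshake: only a VASA client (the ESX host)
# can request a data path; a physical backup proxy has no VASA session.

class VasaProvider:
    def __init__(self):
        self.bindings = {}

    def bind(self, client, vvol_id):
        if not client.is_vasa_client:
            raise PermissionError("only VASA clients can bind a VVol")
        # The provider surfaces the VVol behind a protocol endpoint (PE)
        # and returns the info the host needs to build the data path.
        binding = {"pe_lun": "PE-1", "secondary_id": vvol_id}
        self.bindings[vvol_id] = binding
        return binding

class Host:
    def __init__(self, is_vasa_client):
        self.is_vasa_client = is_vasa_client

vp = VasaProvider()
esx = Host(is_vasa_client=True)
proxy = Host(is_vasa_client=False)   # physical backup proxy

path = vp.bind(esx, "vvol-42")       # succeeds: ESX gets data-path info
try:
    vp.bind(proxy, "vvol-42")        # fails: no VASA session, no data path
except PermissionError:
    pass
```

This is why HotAdd (where the proxy is itself a VM on a bound ESX host) works with VVols while SAN transport to a physical proxy does not.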
Protecting the VASA Provider
So what does it mean when a backup software product supports VVols? In addition to backing up VVols, the backup vendor should also protect the vCenter Server and, in cases where the VASA Provider (VP) is a Virtual Machine, the VP itself. (This of course does not apply to all partner VPs, as some reside natively on the array OS.) The VP database contains information about all of the storage capability profiles that have been established and the mappings for the storage container. In the event that either the vCenter Server or the VP is unavailable, the data still resides natively on the array and VMs continue to run, but there will be no management. More importantly, without the VP the VVol structure would be lost.
The good news is that most vendors using Virtual Machines for the VASA Provider have incorporated some form of disaster recovery and/or high availability to prevent such a catastrophe. There is a subtle difference between “VP HA” and “vSphere HA”. VP HA means multiple instances of the VP are up and running, possibly coordinating with each other as changes are made; this lets vCenter switch from one VP instance to another without losing manageability. vSphere HA, on the other hand, triggers a restart of the VP VM on another host, which requires the same VM disks to be available and does not help in case of corruption of any internal databases. Be sure to check with your storage vendor for details on their VASA Provider.
How VVol enhances snapshots
It’s no secret that, historically, VM snapshots have left a lot to be desired. So much so that GSS best practices for VM snapshots, per KB article 1025279, recommend keeping only 2-3 snapshots in a chain (even though the maximum is 32) and using no single snapshot for more than 24-72 hours.
With Virtual Volumes, several things change. VVol mitigates these restrictions significantly, not just because snapshots can be offloaded to the array, but also in the way consolidate and revert operations are implemented. For starters, the base VMDK is always the base VMDK: it is always the write target. Instead of writes going to a redo log, the snapshots are now read-only reference files that do not exist within a chain. Because of this, when a snapshot is reverted or deleted, there is nothing to ingest back into the base VMDK, because the base already contains the live data. This change alone has significantly enhanced the performance and usability of VMware snapshots. See Cormac Hogan’s detailed explanation of how VVols presents A new way of doing snapshots.
Since both “managed snapshots” (VMware snapshots, with a limit of 32 snapshots in a chain) and “unmanaged snapshots” (array snapshots, with vendor-specific limits) are offloaded to the array, there is no longer a reason not to utilize the full 32 managed snapshots.
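The contrast with redo-log chains can be shown in a small sketch. This is a simplified illustration of the behavior described above, not array or vSphere code: the base object is always the write target, snapshots are read-only references, and revert/delete never consolidate a chain back into the base.

```python
# Simplified model of VVol-style snapshots: writes always land on the
# base, snapshots are read-only point-in-time references, and delete
# requires no consolidation because the base already holds live data.

class VVolDisk:
    def __init__(self, blocks):
        self.base = dict(blocks)   # always the write target
        self.snapshots = {}        # name -> read-only point-in-time copy

    def write(self, block, data):
        self.base[block] = data    # never redirected to a redo log

    def snapshot(self, name):
        self.snapshots[name] = dict(self.base)

    def revert(self, name):
        self.base = dict(self.snapshots[name])

    def delete(self, name):
        # Nothing to ingest back into the base -- just drop the reference.
        del self.snapshots[name]

disk = VVolDisk({0: b"v1"})
disk.snapshot("before-upgrade")
disk.write(0, b"v2")               # base is still the write target
disk.revert("before-upgrade")      # base replaced from the reference copy
```

With a traditional redo-log chain, deleting a snapshot means merging deltas back into the base; here `delete` is a constant-time drop of a reference, which is why the old 2-3 snapshot guidance no longer applies.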
For more information on Virtual Volumes please refer to the following resources:
- VMware vSphere Virtual Volumes FAQs
- What’s New: vSphere Virtual Volumes
- Benefits of Virtual Volumes
- More VVol Snapshot Goodness
- VVol for Database Backup and Recovery
- Hands on Lab HOL-SDC-1627 – VVol, Virtual SAN & Storage Policy-Based Management
In summary, the list of backup vendors supporting Virtual Volumes is growing, and since the VADP APIs used by backup vendors are fully supported, backing up VVols is business as usual. In addition, the enhanced snapshot implementation makes Virtual Volumes even more compelling.