In short: Yes, it sure is!
In this post I’ll show six VMs being protected with vSphere Replication: two VMs each residing on fibre channel datastores (EMC CX4), iSCSI datastores (FalconStor NSS Gateway), and an NFS datastore (EMC VNX5500). I’ll replicate them onto different datastores, fail them over, reprotect, and fail back.
I’m always grateful to the sponsors of my storage platforms for these labs – not everyone has the luxury of these setups in their lab environments! Obviously both FalconStor and EMC have great replication options on their own platforms (and I use them as well!), but for these purposes I’ll be using vSphere Replication to show the heterogeneous support.
So let’s have some fun, and make sure that each VM is replicating to a completely different type of storage. In fact, let’s throw in a twist and use local disk on the recovery site!
- Server1-iSCSI -> FC
- Server2-iSCSI -> NFS
- Server3-FC -> iSCSI
- Server4-FC -> NFS
- Server5-NFS -> iSCSI
- Server6-NFS -> local server disk
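Summarized as data, the pairings above look like this – a minimal Python sketch with invented names, purely to make the point that every VM lands on a different protocol than it started on (vSphere Replication has no such table; this is not any VMware API):

```python
# Hypothetical summary of this lab's replication pairs (names invented for
# illustration only; not a vSphere Replication data structure).
replication_pairs = {
    "Server1": ("iSCSI", "FC"),
    "Server2": ("iSCSI", "NFS"),
    "Server3": ("FC", "iSCSI"),
    "Server4": ("FC", "NFS"),
    "Server5": ("NFS", "iSCSI"),
    "Server6": ("NFS", "Local"),
}

# Sanity check: every target protocol differs from its source protocol.
assert all(src != dst for src, dst in replication_pairs.values())
```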
Setting up the replication and the target datastore is something I’ve covered at length here before, so let’s take that as done and just show the results. I chose VMs on each type of storage and specifically landed each target on a datastore using a different protocol, as you can see here:
So replication is going great: we’re mixing and matching not only different storage vendors but entirely different protocols for the datastores as well.
This works because vSphere Replication operates above the storage layer and is entirely unaware of the underlying storage subsystem. It doesn’t see it, doesn’t care about it, and won’t interact with it beyond sending blocks to a VMDK to be written.
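That storage-agnosticism can be pictured as writing changed blocks to a target VMDK through a file abstraction, with the datastore protocol hidden underneath. The sketch below is conceptual only – vSphere Replication’s engine is of course not Python, and these function names are invented – but it captures why FC, iSCSI, NFS, and local disk all behave identically from the replicator’s point of view:

```python
import io

# Conceptual sketch only (invented names, not VMware code): replication
# addresses a VMDK, not a LUN or an NFS export. Whatever backs the target
# "file" below -- FC, iSCSI, NFS, or local disk -- the writer neither
# knows nor cares.

def replicate_changed_blocks(changed_blocks, target_vmdk):
    """Apply (offset, data) changed blocks to the target VMDK."""
    for offset, data in changed_blocks:
        target_vmdk.seek(offset)
        target_vmdk.write(data)

# Any file-like object can stand in for the target VMDK:
vmdk = io.BytesIO(b"\x00" * 16)
replicate_changed_blocks([(4, b"\xff\xff")], vmdk)
```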
But how does SRM handle this? With great ease and no complaints. I created a single, simple protection group and recovery plan for all these VMs together. All on different source datastore types, all with different target datastore types:
Then I ran a test recovery against it to make sure the isolated test snapshots would work on all our VMs across all storage types. As you can see, the storage synced correctly, the snapshots were all created happily, and the test run went off smoothly, with VMs all over the place, on all types of datastores, using all types of storage protocols.
“Test,” you say? I don’t trust a test; that’s just a snapshot. I want to see a *real* failover! Fine: after cleanup I ran a real failover. One picture says a thousand words:
So, this is great. We’ve taken a bunch of VMs that were scattered across a lot of different types of datastores and failed them over to another site onto different types of datastores. Let’s have even more fun and reprotect them back to the primary site. This should now set up our VMs, ready to be recovered, *back on their original storage*. I click reprotect, and then go check the status in the VR tab of the SRM plugin:
Indeed, if you look, you can see that the iSCSI VMs are being protected back to the FalconStor (which is iSCSI), the FC VMs to the CX4 (FC), and the NFS servers (one of which is currently on FC, one on local disk) are being replicated back to their original location on the VNX NFS mount. vSphere Replication is smart enough to leave the original VMs behind during a failover and then use them as a target seed for reprotection when we want to fail back. In fact, the reprotect will use the exact same VMDKs that were left in place, as well as the exact same replication schedule and options we used for the initial protection.
Once the reprotect is done, a failback will put the VMs back in their original homes, on their original datastores! Replicate between different storage vendors and types, fail over happily between sites (even onto local storage), then reprotect and fail back to their original homes. Of course, after we fail back, I click reprotect once more and… it again uses the replicas already sitting at the recovery site from our very first protection of the VMs, in the same directories, on the same datastores as before.
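The full round trip described above can be summarized as a simple state cycle. This is a conceptual sketch only – SRM’s internals are nothing like this – but it shows how each reprotect reverses the replication direction and reuses the VMDKs left behind as the seed, until the VM ends up protected in its original direction again:

```python
# Conceptual state cycle for one protected VM (illustration only; these
# state and action names are invented, not SRM terminology from an API).
# Each "reprotect" reverses replication direction, reusing the leftover
# VMDKs at the far side as the replication seed.

def next_state(state, action):
    transitions = {
        ("protected", "failover"): "failed_over",
        ("failed_over", "reprotect"): "protected_reversed",
        ("protected_reversed", "failback"): "failed_back",
        ("failed_back", "reprotect"): "protected",
    }
    return transitions[(state, action)]

state = "protected"
for action in ["failover", "reprotect", "failback", "reprotect"]:
    state = next_state(state, action)
# After the full cycle, the VM is protected in its original direction again.
```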