Site Recovery Manager

SRM – Array Based Replication vs. vSphere Replication

SRM supports two different replication technologies: Storage Array-Based Replication (ABR) and vSphere Replication (VR). One of the key decisions when implementing SRM is which technology to use and for which VMs. The two technologies can be used together in an SRM environment, though not to protect the same VM. Given that, what are the differences, and why would you use one over the other? The following table summarizes them:

| | Array-Based Replication | vSphere Replication |
|---|---|---|
| Type | Replication at the storage layer | Replication at the host/vSphere layer |
| RPO min/max | 0 up to the maximum supported by the vendor | 5 minutes to 24 hours (the 5-minute RPO was introduced in version 6.0 for vSAN-to-vSAN and extended to all datastores in 6.5) |
| Scale | Up to 5,000 protected VMs / 2,000 simultaneously recoverable per vCenter/SRM pair | Up to 2,000 VMs (protected and recoverable) per vCenter/SRM pair |
| Write-order fidelity | Supports write-order fidelity within and across multiple VMs in the same consistency group | Supports write-order fidelity across the disks/VMDKs that make up a VM; consistency cannot be guaranteed across multiple VMs |
| Replication level | Replicates at the LUN/VMFS or NFS volume level | Replicates at the VM level |
| Replication configuration | Configured and managed on the storage array | Configured and managed in the vSphere Web Client |
| Array/vendor types | Requires the same storage replication solution at both sites (e.g., EMC RecoverPoint, NetApp vFiler, IBM SVC) | Supports any storage solution at either end, including local storage, as long as it is on the vSphere HCL |
| Storage supported | Replication supported on FC, iSCSI, or NFS storage only | Supports replicating VMs on local, attached, vSAN, FC, iSCSI, or NFS storage |
| Cost | Replication and snapshot licensing is required | Included with vSphere Essentials Plus and higher license levels (vSphere 5.1 and later) |
| Deployment | Fairly involved; must include storage administration and possibly networking | Minimal; deploy an OVF at each site and start configuring replications |
| Application consistency | Depending on the array, application consistency may be supported by adding agents to the VM | Supports VSS and Linux file-system application consistency |
| FT VMs | Can replicate uniprocessor (UP) FT-protected VMs (once recovered, the VM is no longer FT-enabled); does not support SMP FT VMs | Cannot replicate FT-protected VMs |
| Powered-off VMs/templates/linked clones/ISOs | Can replicate powered-off VMs, templates, linked clones (as long as all nodes in the snapshot tree are replicated as well), and ISOs | Can only replicate powered-on VMs; cannot replicate powered-off VMs, templates, linked clones, ISOs, or any non-VM files |
| RDM support | Physical- and virtual-mode RDMs can be replicated | Only virtual-mode RDMs can be replicated |
| MSCS support | VMs that are part of an MSCS cluster can be replicated | Cannot replicate VMs that are part of an MSCS cluster; VR cannot replicate disks in multi-writer mode |
| vApp support | Replicating vApps is supported | Replicating vApps is not possible; however, VMs that are part of a vApp can be replicated and recovered into a vApp created at the recovery site |
| vSphere versions supported | Hosts running vSphere 3.5 through 6.5 are supported | Hosts must be running vSphere 5.0 or later |
| MPIT | Multiple point-in-time snapshots/rollback supported by some array vendors (e.g., EMC RecoverPoint) | Supports up to 24 recovery points |
| Snapshots | Supports replicating VMs with snapshots and maintaining the snapshot tree | Supports replicating VMs with snapshots, but the tree is collapsed at the target site |
| Response to host failure | Replication is not impacted | A host failure and the VM restarting on another host triggers a full sync; for details on what a full sync involves, see the vSphere Replication FAQ |
| vVols integration | SRM does not currently support vVols with array-based replication | vVols are supported by vSphere Replication with SRM |
| Interop with vRA | VMs managed/deployed by vRA and using array-based replication can easily be protected either with Storage Policy-Based Protection Groups or via the vRO SRM plug-in | vRA-managed VMs that need vSphere Replication can be protected using the vRO plug-ins for SRM and VR |
| Policy-based protection | Possible through the use of SPPGs (Storage Policy-Based Protection Groups) | vSphere Replication doesn't support policy-based protection |
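To make the decision concrete, here is a minimal sketch that encodes a few of the table's hard constraints as a chooser. The names (`VmRequirements`, `choose_replication`) are purely illustrative and not part of any VMware API; the rules reflect only the rows above (VR's 5-minute minimum RPO, powered-on VMs only, virtual-mode RDMs only, no MSCS, no cross-VM consistency groups).

```python
from dataclasses import dataclass


@dataclass
class VmRequirements:
    """Hypothetical per-VM requirements, mirroring rows of the comparison table."""
    rpo_minutes: float                   # required recovery point objective
    powered_on: bool = True              # VR can only replicate powered-on VMs
    physical_mode_rdm: bool = False      # physical-mode RDMs need ABR
    mscs_cluster: bool = False           # MSCS clustered VMs need ABR
    cross_vm_consistency: bool = False   # consistency groups are ABR-only


def choose_replication(req: VmRequirements) -> str:
    """Return 'ABR' when a table row rules out VR, otherwise 'either'.

    ABR's actual minimum RPO depends on what the array vendor supports,
    so 'ABR' here means 'ABR is required', not 'ABR is guaranteed to work'.
    """
    needs_abr = (
        req.rpo_minutes < 5          # VR's minimum RPO is 5 minutes
        or not req.powered_on
        or req.physical_mode_rdm
        or req.mscs_cluster
        or req.cross_vm_consistency
    )
    return "ABR" if needs_abr else "either"
```

For example, a powered-on VM with a one-hour RPO could use either technology, while a VM needing write-order fidelity across several VMs is forced to ABR.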

For detailed information on Site Recovery Manager and vSphere Replication, including answers to commonly asked questions, check out the vSphere Replication FAQ.


7 comments have been added so far

  1. Hi GS,
    For storage array-based replication it indicates "Replication supported on FC, iSCSI or NFS storage only".
    Does it only support FC-to-FC / iSCSI-to-iSCSI / NFS-to-NFS?
    Can it support mixed protocols, like replicating from FC to iSCSI?
    Thinking that using this to migrate an RDM cluster from an old SAN to a new SAN running different protocols could be handy.
    Is it achievable?

  2. It really depends on what the array supports. If the array and the SRA support it, then SRM would be fine with it.

  3. Very nice table, thank you!
    We are running SRM 8.1, using a SRA (RecoverPoint). It sounds like you can run both array based and vSphere based repl. We do have some non replicated LUN’s, so we could also use vSphere replication on datastores that are not being repl with RP, correct?

    Would that allow us to run a failover on only the vSphere-replicated VMs, or is it an all-or-nothing failover? I would just like to take the test a little bit further and fail over one or two VMs to see how they would react.

  4. Yes, ABR and VR can be run together with SRM. You are correct, you could put VMs protected by VR on non-replicated datastores.

    VMs are organized by Protection Groups in SRM. There are different PGs for VR and ABR, so you could organize those PGs into Recovery Plans that allow you to fail over individual PGs or all of them.

  5. So glad I did this. One thing the test failover doesn’t really do is re-IP the VM. I set up vRep, stood up a test VM, then failed it over. Everything worked great, but my ping of the new VM didn’t respond. The GW and VRRP IPs of the failover network were working fine, but when I looked at the switch configs, I hadn’t tagged the VLAN ID of the failover network on the host uplinks. I corrected that, and everything came alive. Thanks again.
