
A few cautionary notes about replication performance

This is part three of a series of posts about the fantastic performance improvements found in vSphere Replication 5.5. Take a look at the prior two posts to understand what has changed and why it is such a significant change.

But now I do have a few warnings about all this improved vSphere Replication performance. Why is it not unequivocally a great thing? Because you run the risk of overloading other parts of your environment.

Let’s look at the technical issues at hand

Let’s say you mass protect a few hundred VMs at the same time. What happens? First of all, your hosts will get very busy calculating the initial full sync: they need to compute the checksum of every disk, both at the source *and* at the recovery target, for all of these VMs. We cap the memory heap size and the number of IOPS, but those checksum calculations can still impose a pretty hefty burden on your hosts’ CPUs.
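
To make that cost concrete, here is a minimal sketch of the kind of block-by-block checksum comparison a full sync performs. The 4 MB block size, the use of SHA-1, and the plain file I/O are illustrative assumptions, not VR’s actual algorithm or on-disk format.

```python
import hashlib

BLOCK_SIZE = 4 * 1024 * 1024  # illustrative block size; VR's real granularity may differ

def block_checksums(path):
    """Read a disk image block by block and checksum each block."""
    sums = []
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            sums.append(hashlib.sha1(block).digest())  # one hash per block
    return sums

def blocks_to_ship(source_path, target_path):
    """Only blocks whose checksums differ need to cross the wire."""
    src = block_checksums(source_path)  # CPU + read I/O on the source host
    dst = block_checksums(target_path)  # CPU + read I/O on the recovery side
    return [i for i, s in enumerate(src) if i >= len(dst) or s != dst[i]]
```

Multiply that by a few hundred VMs and every disk gets read in full on both sides, which is exactly where the CPU and I/O burden comes from.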

Once the blocks that need to be shipped have been calculated, keep in mind that you can have up to 10 VR Appliances at the recovery location, and if you are sending hundreds of VMs’ worth of changes at once, that is a LOT of blocks arriving at those appliances. The VR Appliances or hosts may get pegged trying to process all the blocks and write them to disk.

A VR Appliance receives an incoming replication and then sends that write to a host to commit. The write then becomes ‘sticky’ to that host unless there is a failure or a change. The next VM’s replication may go to a different host and mostly become ‘sticky’ there, and so forth. But each VR Appliance is unaware of the others’ replications, so you may end up with 10 appliances all writing their replications to just a few hosts in the cluster! A good practice is to deploy only as *few* appliances as you need to satisfy your replications.
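
To see how that can go wrong, here is a toy placement model. It is purely hypothetical and not VR’s real logic; the point is only that uncoordinated, sticky choices concentrate load when few hosts can see the target datastore.

```python
import random
from collections import Counter

# Hypothetical topology: one big datastore mounted by only two hosts.
hosts_per_datastore = {"ds-big": ["host-1", "host-2"]}

def place(replications, rng):
    """Each appliance places its own replications; appliances never coordinate."""
    load = Counter()
    for target_ds in replications:
        host = rng.choice(hosts_per_datastore[target_ds])  # uncoordinated pick
        load[host] += 1  # 'sticky': the stream stays on this host afterwards
    return load

rng = random.Random(0)
total = Counter()
for appliance in range(10):               # 10 VR Appliances...
    total += place(["ds-big"] * 30, rng)  # ...each handling 30 replications
print(total)  # all 300 replication streams pile onto host-1 and host-2
```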

What if, moreover, you have only one or a few target datastores to write to at the recovery location? All of a sudden you have 10 VR Appliances all doing a lot of now very fast writes to the same place. If you have one large datastore backed by only a few spindles, those disks can get very hot servicing all these writes.
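
Some back-of-the-envelope arithmetic, with entirely hypothetical numbers, shows how quickly a few spindles saturate:

```python
# All numbers are invented for illustration.
appliances = 10
writes_per_appliance_per_sec = 500  # assumed incoming replication write rate
spindles = 8                        # one large datastore backed by few disks
iops_per_spindle = 150              # rough figure for a 10K RPM spindle

offered = appliances * writes_per_appliance_per_sec  # 5,000 write IOPS offered
capacity = spindles * iops_per_spindle               # ~1,200 IOPS available

print(f"offered {offered} IOPS vs ~{capacity} IOPS of spindle capacity")
# Offered load is roughly 4x what the disks can absorb, so queues build
# and write latency climbs.
```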

Moreover, hostd needs to be protected in these scenarios. When disks are created on the recovery site, VC talks to hostd to do this activity, and hundreds of VMs needing to be created can add a lot of load to hostd. VR and recovery actions also interact with hostd to do things like reconfiguring replication, changing properties, changing the RPO, and so forth. Don’t forget that when you go to recover, you may also be issuing a bunch of final syncs to the hosts!

The behaviour you may see in this case is a bunch of timeouts, and in extreme overload scenarios some replications may even become disabled. In essence, hostd has a queue of operations it can satisfy, and if we start overloading that queue with writes, with recoveries, with final syncs, with “ReloadVMFromPath” calls to hook the VMX up to the VMDK, and so on, we are putting a lot of burden on the hosts. If you are doing a whole lot of replications, or even a lot of replication configuration actions like those above, you may see hostd, the disks, or any number of other systems start to saturate, and you will see replications taking too long to complete, RPO violations, and timeouts.
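
To illustrate the failure mode, here is a minimal sketch of why a bounded operation queue turns overload into timeouts. The queue depth and timeout values are invented; this is a toy model, not hostd’s actual internals.

```python
import queue

ops = queue.Queue(maxsize=32)  # hypothetical queue depth

def submit(op, timeout=0.01):
    """Enqueue an operation, giving up if the queue stays full."""
    try:
        ops.put(op, timeout=timeout)  # blocks while the queue is full
        return "accepted"
    except queue.Full:
        return "timed out"            # surfaces as an operation timeout

# With no consumer draining the queue, anything beyond the queue depth
# times out -- the same shape as hostd saturating under replication load.
results = [submit(f"op-{i}") for i in range(40)]
print(results.count("timed out"))  # 8 of the 40 submissions are rejected
```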

There is no easy solution to this, other than limiting the number of actions vSphere Replication is doing at any one time. The right limit depends on many factors: the number of VR Servers, the number of hosts, the number of datastores and disks, the available bandwidth, and so on. So as a best practice, try to minimize the number of replications you configure at one time, to avoid kicking off a whole bunch of initial full syncs at once.
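
Here is a sketch of that batching practice. `configure_replication` and `wait_for_initial_syncs` are hypothetical placeholders for whatever tooling you actually use to protect VMs; the structure is the point.

```python
BATCH_SIZE = 30  # matches the batches-of-30 guidance below

def protect_in_batches(vms, configure_replication, wait_for_initial_syncs):
    """Configure replication in small batches so the initial full syncs
    don't all hit the hosts, appliances, and datastores at once."""
    for start in range(0, len(vms), BATCH_SIZE):
        batch = vms[start:start + BATCH_SIZE]
        for vm in batch:
            configure_replication(vm)   # each kicks off an initial full sync
        wait_for_initial_syncs(batch)   # let this batch settle before the next
```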

In sum, deploy as few appliances as possible for your replication; don’t just default to putting 10 in one datacenter! This warning mostly applies to scenarios with high bandwidth, low latency, a small cluster, and large datastores backed by only a few disks. In those scenarios, since VR is so much faster than it used to be, you may find yourself saturating the disks, hostd, or VR itself. If that describes your environment, simply configure your replications in batches of 30 or fewer, and otherwise… enjoy the new performance!