From the Trenches

Changes to Snapshot Consolidation

Another guest post today from Simon Todd, a Tech Support Engineer in our Cork, Ireland office who specializes in storage issues.

Anyone using snapshots has probably run into this issue once, twice or many times:

When trying to perform an online snapshot consolidation, there is not enough free space on the Datastore to complete it.

This is mainly due to the way snapshots are consolidated. To illustrate this, let’s look at an example where we have a virtual machine residing on a 600GB VMFS Datastore with a 500GB base disk, and that virtual machine has 8 levels of snapshots of the following sizes:

As you can see, we are consuming an additional 64GB on top of the original base disk, leaving us at risk of running out of space on the Datastore. The logical thing to do is consolidate these snapshots. So what happens when we do? Prior to vSphere 4.0 Update 2, the snapshot consolidation process would take the latest snapshot and merge it into the previous snapshot, then merge that combined snapshot into the one before it, and so on until complete.

If each of the snapshots contains unique changes that do not exist in the previous snapshots, then the following diagram illustrates the worst-case scenario for Datastore usage:

As you can see, there is the potential for the snapshot consolidation process to consume 153GB more than the Datastore can hold. This process had remained the same since snapshots were first introduced.
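To get a feel for why the old rolling merge is so expensive, here is a minimal sketch that models its worst-case transient disk usage. The model and the example sizes are my own illustration (not the sizes from the article's table): it assumes every merge writes a fully unique combined delta to a new file before the two source deltas can be deleted.

```python
def rolling_merge_peak(sizes):
    """Worst-case peak delta-file space for the pre-4.0 U2 rolling merge.

    sizes: snapshot delta sizes in GB, oldest first. At each step the
    newest delta is merged into the one before it; in the worst case
    (no overlapping blocks) the combined delta is as large as the sum
    of the two, and it exists on disk alongside both originals until
    the merge finishes.
    """
    chain = list(sizes)
    peak = 0
    while len(chain) > 1:
        newest = chain.pop()
        prev = chain.pop()
        combined = newest + prev  # worst case: every block is unique
        # While the combined delta is being written, the rest of the
        # chain plus both merge inputs are still on disk.
        peak = max(peak, sum(chain) + newest + prev + combined)
        chain.append(combined)
    return peak

# Hypothetical doubling deltas, 15GB total on disk before consolidation:
print(rolling_merge_peak([1, 2, 4, 8]))  # → 30
```

In this toy example the chain occupies 15GB, yet the rolling merge can transiently need 30GB of delta space: each intermediate combined snapshot duplicates everything merged so far, which is exactly the effect behind the 153GB worst case described above.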

vSphere 4.0 Update 2 changes

As of vSphere 4.0 Update 2, the snapshot consolidation operation has changed. Instead of rolling the snapshots into each other, the process now takes the oldest snapshot and consolidates it into the base disk, deletes that snapshot, then merges the next-oldest snapshot into the base disk, and so on until all snapshots have been consolidated into the base disk and removed. This not only reduces the disk space required, but also performs better:
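The space advantage of the new order can be sketched with a simple model of my own (a simplification, not VMware's actual implementation): merging the oldest delta writes its blocks into the pre-allocated base disk in place, so no new combined delta file is ever created, and each snapshot's space is freed as soon as it is consumed.

```python
def merge_into_base_peak(sizes):
    """Worst-case peak delta-file space for the 4.0 U2 consolidation order.

    sizes: snapshot delta sizes in GB, oldest first. Each delta's blocks
    are written into the base disk in place, then the delta is deleted,
    so delta-file usage only ever shrinks: the peak is simply the space
    the snapshot chain already occupies.
    """
    chain = list(sizes)
    peak = sum(chain)  # everything that is on disk before we start
    while chain:
        oldest = chain.pop(0)
        # Only the remaining chain plus the delta currently being
        # merged exist; no intermediate combined file is written.
        peak = max(peak, sum(chain) + oldest)
    return peak

# Same hypothetical doubling deltas as before, 15GB total:
print(merge_into_base_peak([1, 2, 4, 8]))  # → 15
```

Under this model the new order never needs more space than the chain already consumes (15GB here, versus 30GB for the rolling merge of the same toy chain), which is why consolidations that used to fail for lack of Datastore space can now succeed.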

