With vSphere 6, snapshot consolidation issues are a thing of the past!


Snapshot consolidation (or commit) operations in VMware vSphere have always been a problem, especially for large and very active virtual machines. But vSphere 6 has introduced some changes that will probably make commits a problem of the past!

A new consolidation method

Before vSphere 6.0, the consolidation and commit phases of any VM snapshot always followed the same procedure: an additional helper snapshot was created to “freeze” not just the base virtual disk but also the snapshot disk. Once the changes stored in the snapshot disk had been merged into the base disk, the helper snapshot was committed as well, and at some point the I/O was redirected away from the snapshot and back to the original disk.

This process sounds simple, but once you start looking at the details it is not that easy, and in some production environments this behaviour has caused problems. If the virtual machine has significant I/O write activity, the helper snapshot itself can grow in size, and in order to commit the changed data back into the base disk, an additional helper snapshot may be created. Cormac Hogan wrote a great post explaining the situation in vSphere 5.5, and how it has been improved in vSphere 6.0. You can read his great post Snapshot Consolidation changes in vSphere 6.0; here I’m only quoting part of it:

What we did previously is used a helper snapshot, and redirected all the new I/Os to this helper snapshot while we consolidated the original chain. Once the original chain is consolidated, we then did a calculation to see how long it would take to consolidate the helper snapshot. It could be that this helper snapshot has grown considerably during the consolidate operation. If the time to consolidate the helper is within a certain time-frame (12 seconds), we stunned the VM and consolidated the helper snapshot into the base disk. If it was outside the acceptable time-frame, then we repeated the process (new helper snapshot while we consolidated original helper snapshot) until the helper could be committed to the base disk within the acceptable time-frame.

We retried this process for a defined number of iterations, but we have had situations where, due to the amount of I/O in flight, we could never successfully consolidate the snapshot chain and helpers.
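The retry loop described above can be sketched as a toy simulation. All rates, thresholds, and function names here are illustrative assumptions, not VMware's actual internals; the point is only to show why the loop may never converge when in-flight I/O outpaces the commit rate:

```python
# Toy simulation of the pre-vSphere-6.0 helper-snapshot retry loop.
# All numbers and names are illustrative, not VMware's real code.

STUN_LIMIT_S = 12      # helper must be committable within this stun window
MAX_RETRIES = 10       # give up after this many helper iterations

def consolidate(write_rate_mb_s, commit_rate_mb_s, initial_helper_mb):
    """Return the iteration on which consolidation succeeds, or None.

    While one helper is being committed, guest writes land in a new
    helper, so each iteration's helper size is roughly
    write_rate * (time spent committing the previous helper).
    """
    helper_mb = initial_helper_mb
    for iteration in range(1, MAX_RETRIES + 1):
        commit_time_s = helper_mb / commit_rate_mb_s
        if commit_time_s <= STUN_LIMIT_S:
            return iteration  # stun the VM and commit the last helper
        # Too big to commit within the stun window: commit it live,
        # while new writes fill a fresh helper snapshot.
        helper_mb = write_rate_mb_s * commit_time_s
    return None  # never converged: writes outpace consolidation
```

With a modest write rate the helpers shrink each pass and the loop converges in a couple of iterations; once the write rate exceeds the commit rate, each helper is bigger than the last and the loop never terminates successfully, which matches the failure mode described above.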

In vSphere 6.0, this consolidation process has been dramatically changed:

In vSphere 6.0 the snapshot consolidation process also uses the mirror driver [as in Storage vMotion]. With the mirror driver mechanism, changes to the VM are written to the active VMDK and the base disk (while protecting write order) during consolidation. One should now hopefully see snapshot consolidations completing in 1 pass (with minimal or indeed no helper disks) and with hopefully a dramatically shorter stun time, and a much small chance of consolidation failure.
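The mirror-driver idea can be illustrated with a minimal sketch, where dicts stand in for disks and offsets for blocks. This is a conceptual model under my own simplifying assumptions, not VMware's implementation: because every new write lands on both the active disk and the base disk, a single pass over the snapshot is enough and no helper is ever needed:

```python
# Conceptual model of mirror-driver consolidation (vSphere 6.0 style).
# Dicts stand in for disks; keys are block offsets. Illustrative only.

def consolidate_with_mirror(base, snapshot, live_writes):
    """Merge `snapshot` into `base` in a single pass.

    Writes arriving during consolidation are mirrored to both the
    active (snapshot) disk and the base disk, preserving write order,
    so no helper snapshot is required.
    """
    for offset, data in live_writes:
        snapshot[offset] = data   # guest write hits the active disk...
        base[offset] = data       # ...and is mirrored straight into base
    for offset, data in snapshot.items():
        base[offset] = data       # one pass over the snapshot chain
    return base
```

After the single pass the base disk holds the snapshot's blocks plus every live write, so the VM can be stunned only briefly to switch I/O back to the base disk.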

Testing the new consolidation process

This sounded too good to be true, so my colleague Tom Sightler, Solutions Architect at Veeam, decided it was time to run some tests to verify this change. Tom shared his findings with me, so thanks Tom for the reports!

The test involved two servers, one with ESXi 5.5 (5.5 Update 3) and one with ESXi 6.0 (6.0 Update 1b), while the rest of the environment is exactly the same: the same server model and specifications, the same network, and the same NFS datastore to run the test virtual machine. This VM is a CentOS 7 minimal install (text only). It has 12 virtual disks, 16GB each, all stored on the NFS datastore – 1 disk for the operating system, while the other 11 disks are just attached, not even formatted or mounted. Why this setup? Because the biggest problems with snapshot consolidation are observed with customers that have lots of VMDKs, since vSphere consolidates the multiple disks (and their snapshots) of the same VM sequentially. Even with a completely empty VM (the 11 disks are not even formatted, so they are completely idle) you usually end up with a stun of around 1 second per disk, and since the VMkernel goes through each disk and consolidates the last helper, with 11 disks you will almost always have more than 10 seconds of stun.

And remember, this is a completely idle VM; there is absolutely no I/O activity. Think about a really active VM, like a database machine for example, where setups with multiple disks are pretty common: the problem there can be way worse. It’s not the hardware in use, and there’s no backup software involved; it’s all about the snapshot technology.

For each test, two parameters were measured: a network ping executed each second, and the effective stun time read from the VMware logs. The first shows the result as seen by another application or user connecting to the VM over the network, while the second gives the exact time spent by the VMware engine managing snapshot creation and consolidation.
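The first measurement can be reproduced with a small script along these lines. The hostname is a hypothetical placeholder, and the `ping` flags assume a Linux machine running the monitor; each run of consecutive lost pings approximates one stun window as experienced over the network:

```python
import subprocess
import time

def ping_once(host, timeout_s=1):
    """Return True if a single ICMP echo to `host` gets a reply (Linux ping)."""
    return subprocess.run(
        ["ping", "-c", "1", "-W", str(timeout_s), host],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    ).returncode == 0

def monitor(host, seconds):
    """Ping `host` once per second for `seconds` seconds; record each result."""
    results = []
    for _ in range(seconds):
        results.append(ping_once(host))
        time.sleep(1)
    return results

def lost_ping_runs(results):
    """Return the lengths of consecutive lost-ping runs; each run
    approximates one stun window as seen from the network."""
    runs, current = [], 0
    for ok in results:
        if ok:
            if current:
                runs.append(current)
            current = 0
        else:
            current += 1
    if current:
        runs.append(current)
    return runs

# Example: lost_ping_runs(monitor("test-vm.example.local", 60))
```

Since the pings run once per second, a two-ping outage corresponds to roughly a two-second stun, which is the resolution used in the tests below.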

ESXi 5.5 U3

In order to verify the effects of snapshot creation and removal, different tests were conducted. First, we took a snapshot, because even snapshot creation requires, at some point, an I/O redirection from the base disk to the snapshot disk, and even if this time is minimal, it is not zero.

Snapshot creation required 1.96 seconds, which is almost 2 seconds, and in fact we lost 2 pings (remember, they are executed once every second).
After the snapshot has been created, we proceeded to remove it.

You can already see the issue arising. There were multiple stun operations, and the sum of them all is 12648914 microseconds, or 12.6 seconds. As mentioned before, that is around 1 second per virtual disk. As a result, multiple pings were lost:
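The second measurement comes from the VM's vmware.log, where each stun shows up as a line reporting how long the VM was stopped, in microseconds. A short parser can sum them up; the exact log wording is based on what these logs typically look like, so treat the regex as an assumption to adapt to your own log files:

```python
import re

# Stun durations appear in the VM's vmware.log in lines reporting how
# long the VM was stopped, e.g.
#   "... Checkpoint_Unstun: vm stopped for 1052611 us"
# The exact wording may vary by ESXi version; adjust the regex as needed.
STUN_RE = re.compile(r"vm stopped for (\d+) us")

def total_stun_us(log_lines):
    """Sum every stun duration (in microseconds) found in vmware.log lines."""
    return sum(int(m.group(1)) for line in log_lines
               if (m := STUN_RE.search(line)))

# Example:
#   with open("vmware.log") as f:
#       print(f"total stun: {total_stun_us(f) / 1_000_000:.1f} s")
```

Summing the individual per-disk stuns this way is how the 12648914-microsecond total above is obtained.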

This is the well-known situation where the virtual machine becomes unresponsive.

ESXi 6.0 U1b

Then, we repeated the same tests using the latest available version of vSphere 6. First, again, we took a snapshot:

The stun lasted 0.6 seconds, and no pings were lost.
Then, we removed the snapshot:

As we can see immediately, the single stun time per disk has been dramatically reduced from around 1 second to 65 milliseconds; on average, that is a 15x improvement! The entire stun time is now 828436 microseconds, which is less than a second. As a result, the availability of this VM over the network has also dramatically improved:
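For the record, the 15x figure follows directly from the two measured totals:

```python
# Figures measured in the tests above (microseconds)
stun_55_us = 12_648_914   # total stun, ESXi 5.5 U3, 12-disk idle VM
stun_60_us = 828_436      # total stun, ESXi 6.0 U1b, same VM

improvement = stun_55_us / stun_60_us   # ~15.3x
print(f"total stun: {stun_55_us / 1e6:.1f} s -> {stun_60_us / 1e6:.2f} s "
      f"({improvement:.1f}x faster)")
```

The per-disk numbers tell the same story: roughly 1000 ms down to 65 ms is also about a 15x reduction.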

Just one ping lost, in line with the duration of the stun. Impressive!

Final notes

I too work for a vendor, and I know that sometimes the biggest technological features of a new release are not the ones that get promoted. I usually call them “little gems”, and I love to talk about them. You don’t build a marketing campaign around them, but they are still a huge part of what makes a solution a great solution.
This enhancement in vSphere snapshot consolidation is, to be honest, one of the biggest improvements I can remember in any vSphere release. Promoted features like VVOLs and Multi-Processor FT made the news during the launch of vSphere 6.0, and later the issues with CBT (Changed Block Tracking) became the most talked-about topic; all of this contributed to this improvement being overlooked.
But with the CBT issues now finally solved, if you were still looking for reasons to upgrade to vSphere 6.0, this one alone would in my opinion be more than enough to justify upgrading all your ESXi servers to 6.0.
Huge kudos to VMware for this enhancement!

3 thoughts on “With vSphere 6, snapshot consolidation issues are a thing of the past!”

  1. Fantastic improvement! It’s definitely something that makes me want to upgrade our 5.5 env to v6.

  2. Great blog post. One thing though: in order to see the real-world impact I’d be interested in seeing the figures with actual writes being generated on the VM. As it stands, this is what you would expect to see as a minimum with a VM doing nothing, in my opinion.
