From time to time, Veeam users in the forums ask for help with unexpectedly poor performance in their backup operations. Especially with I/O-intensive operations like reversed incremental backups, or the transform operations performed during synthetic fulls and the new forever forward incremental mode, people see their storage arrays running slowly and backup operations taking a long time.
I just published a dedicated white paper where I explain the different backup modes in Veeam, how their I/O profiles differ hugely, and how the choice of one method over another can have a great impact on final performance.
Another huge factor, often overlooked, is the stripe size of the underlying storage.
It’s not general-purpose storage
Many storage arrays use a default stripe size of 32 or 64KB. This is because they have to be general-purpose solutions, able to manage different workloads at the same time: VMware datastores, NFS or SMB shares, archive volumes. And since in many situations the different volumes are all carved out of the same RAID pool, that pool has a single stripe size value.
As you can read in the aforementioned white paper, however, Veeam uses its own specific block size. The default value, listed in the interface under Storage optimization as “Local target”, is 1024KB:
The other values are 8MB for the 16TB+ option, 512KB for the LAN target and 256KB for the WAN target. Again, as explained in the white paper, thanks to compression these values are roughly halved by the time the block lands in the repository. So, we can assume the default block size written to the repository is going to be 512KB.
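As a quick recap of these values, here is a small Python sketch. The option labels are paraphrased from the interface, and the 2:1 compression ratio is the rough assumption discussed above; real ratios depend on the data being backed up:

```python
# Nominal Veeam block sizes per Storage optimization setting, and the
# approximate size that lands in the repository assuming the ~2:1
# compression discussed above (real ratios depend on the data).
block_kb = {
    "Local target (16TB+ files)": 8192,
    "Local target (default)": 1024,
    "LAN target": 512,
    "WAN target": 256,
}
for option, kb in block_kb.items():
    print(f"{option}: {kb} KB in the job, ~{kb // 2} KB on disk")
```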
With this value in mind, we can do some simulations using an online calculator.
UPDATE 18-10-2017: I’ve been notified by several users that the website I used for the calculations is no longer working. I have tried, without success, to find another solution; if and when I find another online calculator with stripe size options, I will update the post again.
UPDATE 15-03-2018: thanks to one of my readers who pointed me to a new calculator. It is derived from the previous one, so it’s basically the same. You can find it at:
http://omnitech.net/iops/
Let’s assume we have a common NAS machine with 8 SATA disks, as is typically found in small shops. We apply the I/O profile of a transform operation, the same profile we find in a backup copy job or in the new forward incremental-forever backup method. This profile uses 512KB blocks and a 50/50 read/write mix.
Using the default 64KB stripe size, we get:
Bandwidth is going to be 13.50 MB/s. Since half the I/O is spent on reads, the amount of data that needs to be moved is double the size of the backup file: to transform 1 TB of data, you need to move 2 TB of blocks. At that speed, it will take 20.6 hours!
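To see mechanically why such a small stripe hurts so much, here is a minimal Python sketch of what a calculator like this does under the hood. This is my own simplified model, not the one behind the site: the seek time, rotational latency and media rate are assumptions typical of 7200 rpm SATA disks, RAID5 (write penalty 4) is assumed, and any I/O larger than the stripe is simply charged one disk operation per stripe-sized chunk. The absolute numbers will not match the online tool, but the trend is the same:

```python
# A simplified, stripe-aware RAID bandwidth estimate (my own model,
# not the one used by the online calculator).

AVG_SEEK_MS = 8.5      # assumption: average seek of a 7200 rpm SATA disk
ROT_LATENCY_MS = 4.17  # half a rotation at 7200 rpm
MEDIA_MB_S = 100.0     # assumption: sustained media transfer rate

def single_disk_iops(chunk_kb: float) -> float:
    """IOPS one disk sustains when each operation moves chunk_kb of data."""
    transfer_ms = chunk_kb / 1024 / MEDIA_MB_S * 1000
    return 1000.0 / (AVG_SEEK_MS + ROT_LATENCY_MS + transfer_ms)

def transform_bandwidth_mb_s(disks, stripe_kb, block_kb,
                             read_fraction, write_penalty):
    """Estimated MB/s for a given read/write mix on a RAID set."""
    # An I/O larger than the stripe splits into stripe-sized chunks,
    # each of which costs a disk operation.
    chunks = max(1, block_kb // stripe_kb)
    raw_iops = disks * single_disk_iops(min(block_kb, stripe_kb))
    # Parity (or mirroring) multiplies the cost of the write fraction only.
    usable_iops = raw_iops / (read_fraction + (1 - read_fraction) * write_penalty)
    return usable_iops / chunks * block_kb / 1024

# Transform profile: 512KB blocks, 50/50 read/write, 8 disks, RAID5 assumed.
for stripe in (64, 256, 512):
    bw = transform_bandwidth_mb_s(8, stripe, 512, 0.5, 4)
    print(f"{stripe}KB stripe: ~{bw:.1f} MB/s")
# Prints roughly 15, 53 and 91 MB/s with these assumptions: the exact
# figures differ from the online tool, but the smaller the stripe relative
# to the 512KB block, the more disk operations each block costs.
```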
If you change only the storage stripe size, say to 256KB, the new bandwidth becomes 40.51 MB/s, three times the previous value! The same backup job will now last 6.86 hours. Compare that with the previous result of 20.6 hours, and you immediately understand the importance of configuring the right stripe size on your Veeam repository.
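The arithmetic behind those durations is straightforward. Here is a quick check in Python, assuming decimal units (1 TB = 1,000,000 MB) and treating the reported bandwidth as the rate at which the backup file itself is processed; reads and writes run in parallel, which is how 2 TB of blocks move in the same elapsed time:

```python
# Transform time for a 1 TB backup file at the bandwidths reported above.
file_mb = 1_000_000  # 1 TB expressed in MB (decimal units)

for stripe_kb, bandwidth_mb_s in ((64, 13.50), (256, 40.51)):
    hours = file_mb / bandwidth_mb_s / 3600
    print(f"{stripe_kb}KB stripe: {hours:.1f} hours")
# 64KB stripe:  20.6 hours
# 256KB stripe:  6.9 hours
```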
The RAID level is important too. We often suggest customers, budget allowing, use a non-parity RAID level like RAID10, and the tool again shows the reason. If we use the same I/O profile, with 512KB for both the stripe size and the Veeam block size, on an 8-disk storage array we get:
RAID5: 60.76 MiB/s
RAID10: 101.27 MiB/s
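The gap between the two is exactly 5:3, and it follows directly from the classic RAID write penalty: on RAID5 every write costs four disk operations (read data, read parity, write data, write parity), while on RAID10 it costs only two (the mirrored writes). A quick sketch of the math:

```python
# Why RAID10 wins here: with a 50/50 read/write mix, writes cost four
# disk operations on RAID5 but only two on RAID10.
for level, write_penalty in (("RAID5", 4), ("RAID10", 2)):
    # Fraction of raw disk IOPS left after the write penalty is paid.
    efficiency = 1 / (0.5 * 1 + 0.5 * write_penalty)
    print(f"{level}: {efficiency:.3f}")
# RAID5 0.400 vs RAID10 0.667, the same 5:3 ratio as 60.76 vs 101.27 MiB/s.
```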
Obviously, there’s a tradeoff in terms of disk space. But again, remember to carefully evaluate these configuration options when you design new storage for Veeam backups.