Your Veeam backups are slow? Check the stripe size!


From time to time, Veeam users ask in the forums for help with unexpectedly poor performance in their backup operations. This happens especially with I/O-intensive operations like reversed incremental backups, or the transform operation performed during synthetic fulls or the new forever forward incremental mode: people see their storage arrays running at low speed and backup operations taking a long time.

I just published a dedicated white paper where I explain the different backup modes in Veeam, how widely their I/O profiles differ, and how the choice of one method over another can have a great impact on final performance.

Another huge factor, often overlooked, is the stripe size of the underlying storage.

It’s not general-purpose storage

Many storage arrays use a default stripe size of 32 or 64KB. This is because they have to be general-purpose solutions, able to manage different workloads at the same time: VMware datastore volumes, NFS or SMB shares, archive volumes. Since in many situations the different volumes are still carved out of the same RAID pool, that pool has a single stripe size value.

As you can read in the aforementioned white paper, however, Veeam uses a different, specific block size. The default value, listed in the interface as “Local Storage” under Storage optimization, is 1024KB:

Veeam block size

The other values are 8MB for the 16TB+ option, 512KB for the LAN target and 256KB for the WAN target. Again, as explained in the white paper, thanks to compression these values are roughly halved by the time the block lands in the repository. So we can assume the default block size written to the repository is going to be 512KB.
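The mapping between the Storage optimization setting and the block size that actually lands on disk can be sketched as follows (a minimal illustration of the article's numbers; the option labels are my shorthand, and the clean 2:1 halving is the article's rule of thumb, since real compression ratios vary):

```python
# Veeam "Storage optimization" block sizes in KB, before compression,
# as listed in the article. Compression roughly halves what is
# actually written to the repository.
SOURCE_BLOCK_KB = {
    "Local (16TB+)": 8192,            # the 16TB+ option, 8MB
    "Local storage (default)": 1024,  # the default, 1024KB
    "LAN target": 512,
    "WAN target": 256,
}

def repository_block_kb(profile: str) -> int:
    """Approximate block size landing in the repository (KB),
    assuming the ~2:1 compression the article mentions."""
    return SOURCE_BLOCK_KB[profile] // 2

for profile in SOURCE_BLOCK_KB:
    print(f"{profile}: ~{repository_block_kb(profile)} KB on disk")
```

For the default option this gives the 512KB figure used in the simulations below.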

With this value in mind, we can do some simulations using an online calculator.

UPDATE 18-10-2017: I’ve been notified by several users that the website I used for the calculations is no longer working. I tried without success to find another solution; if and when I find another online calculator with stripe size options, I will update the post again.

UPDATE 15-03-2018: thanks to one of my readers who suggested a new calculator, derived from the previous one, so it’s basically the same. You can find it at:


Let’s assume we have a common NAS machine with 8 SATA disks, as commonly found in small shops. We apply the I/O profile of a transform operation, the same one we find in a backup copy job or in the new forward incremental-forever backup method. This profile uses 512KB blocks and a 50/50 read/write mix.

Using the default 64KB stripe size, we get:

64k stripe size

Bandwidth is going to be 13.50 MB/s. Since half the I/O is used for reads, the amount of data that needs to be moved is double the size of the backup file: if you have to transform 1TB of data, you need to move 2TB of blocks. At that speed, it will take 20.6 hours!

If you change only the storage stripe size, for example to 256KB, the new bandwidth becomes 40.51 MB/s, three times the previous figure! This means the same backup job will last 6.86 hours. Compare that with the previous result of 20.6 hours, and you immediately understand the importance of configuring the right stripe size on your Veeam repository.
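The arithmetic behind those durations can be reproduced in a few lines (a sketch assuming, as the article does, that 1TB is counted as 1,000,000 MB and processed at the bandwidth reported by the calculator):

```python
def transform_hours(backup_tb: float, bandwidth_mb_s: float) -> float:
    """Hours needed to transform a backup file of `backup_tb` TB
    (1 TB taken as 1,000,000 MB) at the given calculator bandwidth."""
    return backup_tb * 1_000_000 / bandwidth_mb_s / 3600

print(round(transform_hours(1, 13.50), 1))  # 64KB stripe  -> 20.6 hours
print(round(transform_hours(1, 40.51), 2))  # 256KB stripe -> 6.86 hours
```

Plugging in your own backup size and measured bandwidth gives a quick estimate of the transform window before you commit to a stripe size.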

Also, the RAID level is important. We often suggest customers, if the budget allows, use a non-parity-based RAID level like RAID10. The tool again shows the reason. If we use the same I/O profile, with 512KB for both the stripe size and the Veeam block size, on an 8-disk storage array we get:

RAID5: bandwidth (MiB/s) 60.76

RAID10: bandwidth (MiB/s) 101.27
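The gap between the two RAID levels follows from the classic write-penalty model (a toy sketch under my own assumptions, not output from the calculator: RAID5 costs 4 back-end I/Os per write, RAID10 costs 2, and the workload is the same 50/50 mix):

```python
def relative_bandwidth(read_frac: float, write_penalty: int) -> float:
    """Throughput relative to a pure-read workload, assuming each
    write costs `write_penalty` back-end I/Os and each read costs 1."""
    write_frac = 1.0 - read_frac
    return 1.0 / (read_frac + write_frac * write_penalty)

raid5 = relative_bandwidth(0.5, 4)   # parity RAID, write penalty 4
raid10 = relative_bandwidth(0.5, 2)  # mirrored RAID, write penalty 2
print(round(raid10 / raid5, 3))      # -> 1.667
```

The resulting 1.667x ratio matches the calculator's 101.27 / 60.76 figures above, which is why the advice to avoid parity RAID for transform-heavy repositories holds regardless of the absolute disk speed.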

Obviously, there’s a tradeoff in terms of disk space. But again, remember to carefully evaluate these configuration options when you design new storage for Veeam backups.

24 thoughts on “Your Veeam backups are slow? Check the stripe size!”

  1. Hi Luca,

    great post. I was wondering if the storage stripe size can have a higher value than the Veeam block size. What are the tradeoffs of configuring an extremely high value (4096) for the storage stripe size?

    • More than the Veeam block size, it’s useless to create a stripe size bigger than the block size of the filesystem you are going to use on top of it. For example, with NTFS the maximum block size is 64KB, so any Windows repository will use at most this value. The same goes for XFS on Linux, but ZFS, for example, uses a variable block size, with a default of 128KB.

      • Hi Luca,
        Is it possible to use exFAT with Windows and a block size of 512KB?

        • exFAT is not a journaled filesystem, so I would honestly avoid using it, to prevent possible data corruption.

      • Hello,

        Thanks for sharing all this with us. So did I understand correctly: if I am using Windows Server as a backup repository, is it useless to use any RAID stripe size other than 64KB? Using the calculator with a 64KB stripe size against our soon-to-be environment, the bandwidth is 8.44 MiB/s, which sounds kinda low…

        • On NTFS I always use a 64KB stripe, and I also check whether the underlying storage is using the same size, to be able to align blocks.
          The final result depends on many other parameters, like the number of spindles, the RAID level chosen…

          • Okay, thanks. The repository will be 8x2TB 7.2K SATA drives in RAID5. So you say I should configure the RAID stripe size to be 64KB?

          • Ok, good. What if I just create two partitions, say (C:) for Windows and (D:) for Veeam backup files, and format (D:) with a 64KB NTFS cluster size? (C:) would stay on the default 4KB cluster size. Would I get the benefits of the 64KB RAID stripe size that way? I would like to keep the Veeam backup files separate for simplicity. Thanks for your help so far.

          • Correct. The important part is that the volume used as the Veeam repository has the 64KB stripe size; the other volumes don’t matter with regard to this setting.

          • Mmmh Lucas, aren’t you mixing up the NTFS block size and the stripe size here? If I understand correctly, with the LAN target profile (512KB I/O) on a storage configuration (NAS/SAN/DAS) with an 8-disk iSCSI array, each disk should have a 512KB stripe (or chunk). That makes a RAID10 stripe of 2048KB (4x512KB). And if your repository is a Windows one, NTFS should be using 64KB blocks. Did I miss something? 🙂

          • Hi, Luca here (without the s, I’m not Spanish :P).
            I’ve never mentioned NTFS in the article. And the stripe size is not dependent on the RAID type you select; your calculations are wrong. If I select a 512KB stripe size, it stays like this regardless of the RAID level I choose.

          • My apologies for the typo 🙂 Yes, indeed it is not bound to the RAID type you choose; I was only taking the math from your example. If I am not wrong, the repository file system block size should not affect your performance. It would just be a waste to choose a lower block size, since we store only big files.

            I do have another question. I was about to check it with perfmon, but maybe you can save me the time. About the I/O sizes used by Veeam (8MB/1MB/512KB/256KB): can we say that the number of concurrent connections you allow on the repository acts as a multiplier on the bytes written during the backup? E.g. if I choose the LAN target (512KB) and 2 connections, does that make a 1MB I/O when written to disk?

  2. I’ve just read this great blog (and white paper), obviously because I’m having performance issues. I am, however, a little confused about what to do about it. We have a 4-bay QNAP backup repository where it’s not possible to choose the stripe size. From what I’ve found on the internet, it’s probably 64KB. Should I change the storage optimization to WAN target then?

    • Sorry to say this, but this small system is probably not going to see significant benefits regardless of the block size. And because the WAN target creates additional load on the proxies and the ESXi servers, I’d be really careful about using it; it could even decrease performance if the storage subsystem is already stressed, as there are then many more blocks to be processed.

  3. Hi Luca,
    I’m just configuring a bigger system (12x2TB DAS). The controller can go up to a 512KB stripe size, but as long as we’re using NTFS we can’t benefit from a stripe size bigger than 64KB. Are there any other ways to optimize performance? Would a lower block size in Veeam increase the I/Os?

    And where can we benefit from a bigger stripe size at all?

    • Go for a 64KB block size at the storage layer and at the NTFS layer.
      Also, remember to format the partition with the /L option; it gives more allocation space for blocks in the filesystem table.

      • What would be the result of using a 32KB block size vs 64KB, especially when using dedupe on Storage Spaces, for example? Would it affect performance / dedupe results?

        In my testing environment I have formatted all my Storage Spaces volumes with 32KB blocks and LargeFRS:
        Format-Volume -FileSystem NTFS -NewFileSystemLabel Veeam04 -AllocationUnitSize 32768 -UseLargeFRS

        What would be the result of moving to 64k blocks?

        • A bigger block size on the storage means more bandwidth: with one single I/O operation I can write double the amount of data going from 32KB to 64KB.
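            That reasoning can be sketched in a couple of lines (a simplification I'm adding for illustration: it assumes the volume sustains the same number of I/O operations per second regardless of block size, and the 1000 IOPS figure is hypothetical):

            ```python
            def bandwidth_mb_s(iops: float, block_kb: int) -> float:
                """Bandwidth in MB/s if the volume sustains `iops`
                operations per second at the given block size."""
                return iops * block_kb / 1024

            # Hypothetical repository volume doing 1000 IOPS:
            print(bandwidth_mb_s(1000, 32))  # 32KB clusters -> 31.25 MB/s
            print(bandwidth_mb_s(1000, 64))  # 64KB clusters -> 62.5 MB/s, double
            ```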

  4. Great article; unfortunately, the calculator link no longer works. My backup repository was set up on the C: drive of the Veeam server: is that bad? The CBT speeds I am getting on my Windows backup jobs are averaging about 4MB/s and taking forever. I read your white paper; to test this, would I run fio on my Veeam server? I saw the example config files in the white paper, but which switches do I use? Could you give me an example command for loading a config? Do I need to do a client/server test to confirm your white paper? Do all VMs need to be 64KB, or just the location of the backup repository? Thanks

    • Hi Jon,
      may I suggest posting this request in our forums, so more people can chime in and help, and the answer can be read afterwards by others having the same issue? Thanks.

  5. Hello
    I would like to know how to change the Veeam backup time. For example, I enter 1:00am to back up all data, but the job finishes only about 9 hours later or so. Please can you help me with this matter?

Comments are closed.