Veeam Backup with PernixData write-back caching


PernixData is, as of today, the only server-side caching solution for VMware that offers write-back capabilities, that is, the ability to accelerate write operations. This feature is extremely helpful in improving performance in virtualized environments running write-intensive applications such as databases and mail servers. However, using this feature requires some proper configuration in order to correctly protect VMs with Veeam Backup.

PernixData FVP sits in the middle of the storage stack.

PernixData delayed write

(image courtesy of Frank Denneman, from his post Write-Back and Write-Through policies in FVP)

If you are only using Write-Through, PernixData is only accelerating read operations, so the content of your storage is the same you see “from above”, in any ESXi server. But if you enable Write-Back (acceleration of write operations), there is a delay between the commit towards ESXi and the commit of the same data towards the backend storage. This can be a problem when performing backups, because the content you are reading from the storage system is not the same the virtual machine knows about. So, when a snapshot is taken, the content is not aligned.

Depending on the backup mode you are going to use with Veeam Backup & Replication, there are different configurations you need to apply to your environment.

Virtual Appliance Mode (HotAdd)

When you are using HotAdd, one or more Windows virtual machines running in a cluster (accelerated by PernixData) act as Veeam proxies. During the backup activity, the virtual disks of the protected VMs are first snapshotted and then mounted on the proxy in order to be processed.

If the VMDK disk comes from a virtual machine accelerated by PernixData, you need to properly instruct Pernix to manage the caching of this disk accordingly. To do so, you use the PernixData PowerShell extensions to put every Veeam proxy in a “blacklist”. When the proxy HotAdds the accelerated VMDK, Pernix will de-stage or “flush” its cache contents related to that disk, so that the data seen by the proxy is the same as the data stored in the backend datastore.

Placing a proxy in the blacklist is really easy. After opening a PowerShell console, first load the Pernix cmdlets:

Import-Module prnxcli

Then you connect to PernixData Management Server:

Connect-PrnxServer <FVP Management Server name OR IP address>

(Screenshot: connecting to the Pernix Management Server via PowerShell.)

Finally, you set a specific Acceleration Policy to the VM running as a Veeam Proxy:

Set-PrnxAccelerationPolicy -Name vbr-proxy -Vadp

In the command, the name is the one seen in vCenter, but as you can see in the next picture, Pernix actually saves the UUID of that VM, so renaming the VM later is not going to create any problem:


Finally, you can check in the Advanced section of Pernix the blacklisted VM:


As you can see, there is a specific policy for VADP appliances. Also, the VM cannot simply be removed from the blacklist via the user interface. This is done on purpose by Pernix, so the VM cannot be removed by mistake. You will have to use PowerShell again to remove it.
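Putting the blacklisting steps together, here is a minimal end-to-end sketch. The management server name is illustrative, and the last line rests on an assumption: that re-applying a regular acceleration policy (such as Write-Through) is what takes the proxy off the blacklist again — check the FVP cmdlet help before relying on it:

```powershell
# Load the PernixData cmdlets and connect to the FVP Management Server
Import-Module prnxcli
Connect-PrnxServer fvp-mgmt.lab.local   # hypothetical host name

# Blacklist the Veeam proxy so its HotAdd operations trigger a cache flush
Set-PrnxAccelerationPolicy -Name vbr-proxy -Vadp

# Later, to take the proxy off the blacklist again (assumption: re-applying
# a standard policy replaces the VADP policy)
Set-PrnxAccelerationPolicy -Name vbr-proxy -WriteThrough
```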

DirectSAN mode

With DirectSAN mode, Veeam reads data directly from the backend storage, acting as an ESXi server. This completely bypasses PernixData, which does not (and cannot) know anything about the connection coming from a Veeam proxy. Because of this, in order to run a backup in DirectSAN mode you need to use the PernixData PowerShell commands to flush the cache of any protected VM you would like to back up via DirectSAN; and, in order to coordinate the backup activity, you would also start the corresponding Veeam backup job via PowerShell.

Here is an example of the commands you need to place in your PowerShell script; obviously, you can create more complex scripts, based on your needs and PowerShell skills. I haven’t found any option in the Pernix PowerShell commands to filter the VM list based on the configured acceleration policy, so the script needs to know beforehand which VMs you need to back up. If, for example, I’m accelerating the VM called “SQL”, the script will be something like this:


Import-Module prnxcli
Connect-PrnxServer <FVP Management Server name OR IP address>
Set-PrnxAccelerationPolicy -Name sql -WriteThrough -WaitTimeSeconds 60
Start-VBRJob -Job (Get-VBRJob -Name "SQL")
Set-PrnxAccelerationPolicy -Name sql -WriteBack -NumWBPeers 1 -WaitTimeSeconds 60

The script connects to the PernixData Management Server, sets the acceleration policy of the VM named SQL to Write-Through, waits 60 seconds to be sure the policy has been applied and the cache has been flushed, then starts the backup job in which that VM is listed, and finally sets the acceleration policy back to Write-Back (with one write-back peer, i.e. one remote flash copy).
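Since you want write-back re-enabled even when the backup job errors out, a slightly more defensive variant could wrap the job in try/finally. This is a sketch using the same cmdlets as above; the server name and the Get-VBRJob lookup are illustrative:

```powershell
# Load the PernixData cmdlets and connect to the FVP Management Server
Import-Module prnxcli
Connect-PrnxServer fvp-mgmt.lab.local   # hypothetical host name

try {
    # Flush the cache: switch the VM to Write-Through and wait for de-staging
    Set-PrnxAccelerationPolicy -Name sql -WriteThrough -WaitTimeSeconds 60

    # Run the Veeam job synchronously while the cache is flushed
    Start-VBRJob -Job (Get-VBRJob -Name "SQL")
}
finally {
    # Restore Write-Back even if the backup job failed
    Set-PrnxAccelerationPolicy -Name sql -WriteBack -NumWBPeers 1 -WaitTimeSeconds 60
}
```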