One of the nice features of Veeam Backup & Replication, when it comes to backup speed, is the possibility to use DirectSAN as its backup method in vSphere environments. This option offers the best performance, but it has some precise requirements at the hardware level. They are usually easy to satisfy in a production environment, but what if you want to test it in your lab, where hardware options are usually limited? Don’t worry, there is a solution!
How DirectSAN works
First things first, let’s have a look at DirectSAN. In the VMware VADP libraries, this is one of the three available methods to read the data stored in a datastore. Veeam Backup & Replication is a “VMware Ready” solution, and as such it is fully compliant with these libraries: it supports all three methods (the other two being HotAdd and Network Mode), so any administrator has several options at their disposal.
DirectSAN is a technique to access a shared block datastore, formatted with the VMFS filesystem, directly from the backup appliance, bypassing the ESXi storage stack; in the Veeam architecture, this component is the “proxy”. If you read that sentence carefully, it already contains the first requirement for DirectSAN to work: the datastore must be a block volume formatted with the VMware VMFS filesystem; NFS shares are not supported. The other keyword is “shared”: the volume must be hosted on an external storage array, and this excludes any local storage inside the ESXi servers.
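If you want to quickly check which of your datastores are eligible, a minimal pyVmomi sketch like the one below lists them together with their filesystem type; the vCenter address and the credentials are obviously placeholders for your own environment:

```python
# Minimal pyVmomi sketch: list every datastore with its filesystem type.
# Only block datastores reported as "VMFS" are candidates for DirectSAN.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only: skip certificate validation
si = SmartConnect(host="vcenter.lab.local",          # placeholder vCenter address
                  user="administrator@vsphere.local", # placeholder credentials
                  pwd="password",
                  sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.Datastore], True)
for ds in view.view:
    # summary.type is "VMFS" for block datastores and "NFS" for NFS shares
    print(f"{ds.name:30} {ds.summary.type}")
Disconnect(si)
```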
So, what’s left? In the end, DirectSAN works when you have a shared array connected via FC, iSCSI or SAS to your vSphere cluster. Since SAS-attached arrays are a really small niche, it’s basically all about FC or iSCSI. Both are common storage protocols in production environments, but FC fabrics are rarely found in labs. iSCSI is far more widespread among admins, and in my lab too all the storage is shared using this protocol.
The other requirement is the proxy. This machine has a direct connection into the storage fabric, regardless of whether it’s an FC fabric or an iSCSI network. With a proper storage network configuration, the proxy can connect to the production storage just like an ESXi server would, read the data, and send it to the backup repository.
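A quick sanity check before touching any initiator configuration: iSCSI uses TCP port 3260, so a couple of lines of Python run on the proxy can tell you whether the storage targets are reachable at all over the storage network (the two addresses are the targets of my lab, replace them with your own):

```python
# Verify from the proxy that the iSCSI targets answer on TCP port 3260.
# The two addresses are the arrays in my lab; replace them with your own.
import socket

targets = ["10.2.70.30", "10.2.70.11"]
for ip in targets:
    try:
        with socket.create_connection((ip, 3260), timeout=3):
            print(f"{ip}:3260 reachable")
    except OSError as err:
        print(f"{ip}:3260 NOT reachable ({err})")
```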
There is no risk in letting an external machine connect to your storage: on Windows 2003 and earlier, Veeam disables the automount feature as soon as it’s installed, while on Windows 2008 and above it sets the SAN policy to “Offline Shared”. In both situations, the final result is that your proxy only reads the content of the volumes, without ever trying to change the partition scheme.
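If you want to verify what Veeam configured on your proxy, both settings can be read (and, if really needed, changed) with the built-in diskpart tool; here is a small read-only sketch, assuming it runs in an elevated prompt on the proxy itself:

```python
# Show the SAN policy and the automount state of the Windows proxy via diskpart.
# Read-only: the "san" and "automount" commands without arguments only display settings.
import subprocess

script = "san\nautomount\nexit\n"
result = subprocess.run(["diskpart"], input=script, capture_output=True, text=True)
print(result.stdout)
# On a Veeam proxy running Windows 2008+ you should see "SAN Policy : Offline Shared";
# to set it manually you would feed "san policy=OfflineShared" instead.
```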
Configure the virtual proxy to connect to the iSCSI network
The workaround to run DirectSAN backups in a lab without a dedicated physical server is to configure the virtual proxy so that it can access the iSCSI network. My “phyrtual” proxy, as I like to call it, has two network connections: the usual one sitting in the VM network (called dvp-prodVM in my lab), and a second one configured to use the “iscsi” port group:
This is how the “trick” works. There are two vmkernel ports in each ESXi server: prod-IPstor7 uses 10.2.70.0/24 and VLAN 270, prod-IPstor8 uses 10.2.80.0/24 and VLAN 280 (note the mapping between VLAN numbers and IP octets, which helps to quickly identify the networks). The correct bindings are in place and iSCSI is properly configured and enabled. My physical storage arrays are all connected to those same two VLANs, and thanks to these two networks I also get multipathing.
For my virtual appliances I don’t care that much about performance, so I’m not going to configure the proxy with two network connections, one per vmkernel network. I picked only VLAN 270 and created a “virtual machine” type of port group, called “iscsi”. As you can see, there are two machines connected to this port group: “LHvsa02” is one of the two HP StoreVirtual VSAs, and “vbr-proxy” is the Veeam proxy that will gain DirectSAN capability.
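To double check on which networks the vmkernel ports actually sit across all your hosts, another small pyVmomi sketch can help (same disclaimer as before: vCenter address and credentials are placeholders):

```python
# List every vmkernel adapter of every ESXi host with its port group and IP address,
# useful to verify the iSCSI networking described above (credentials are placeholders).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local",
                  pwd="password",
                  sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
for host in view.view:
    for vnic in host.config.network.vnic:
        # portgroup is empty when the vmkernel port lives on a distributed switch
        print(f"{host.name:25} {vnic.device:6} {vnic.portgroup or '-':20} {vnic.spec.ip.ipAddress}")
Disconnect(si)
```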
The next step is to configure the iSCSI protocol in the proxy itself. This is pretty easy on Windows 2012, which my VM runs, since from Windows 2008 onwards iSCSI is a native protocol and no longer an additional component as it used to be with Windows 2003. When you open the iSCSI Initiator, it first asks you to activate the service:
iSCSI is automatically configured with the IQN value “iqn.1991-05.com.microsoft:vbr-proxy.skunkworks.local”. Next, add the targets that need to be contacted; in my case they are 10.2.70.30 (the virtual IP of the StoreVirtual cluster) and 10.2.70.11 (my NetApp FAS2020).
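The same configuration can also be scripted with the built-in iscsicli tool, which is handy if you rebuild the proxy often; here is a sketch using the target portals of my lab (replace them with your own, and log in to the discovered targets afterwards with “iscsicli QLoginTarget <IQN>”):

```python
# Configure the Windows iSCSI initiator from the command line instead of the GUI.
# Uses only the built-in "sc", "net" and "iscsicli" tools; run it in an elevated prompt.
import subprocess

def run(cmd):
    print(">", " ".join(cmd))
    subprocess.run(cmd)  # errors (e.g. service already running) are simply printed

# Make sure the Microsoft iSCSI Initiator service starts automatically and is running
run(["sc", "config", "MSiSCSI", "start=", "auto"])
run(["net", "start", "MSiSCSI"])

# Register the target portals: StoreVirtual cluster VIP and NetApp FAS2020 in my lab
for portal in ["10.2.70.30", "10.2.70.11"]:
    run(["iscsicli", "QAddTargetPortal", portal])

# Show the targets discovered on those portals
run(["iscsicli", "ListTargets"])
```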
That’s it for the proxy! Now, depending on your storage array, you may have to add the IP address or the IQN of the proxy to an access control list in order to authorize it to connect to the storage. This was the case for both my HP StoreVirtual cluster and my NetApp array.
Time to test DirectSAN
Once the new proxy is installed and added to the Veeam console, you can force it to use DirectSAN mode only; this way you are sure it’s going to validate your configuration. DO NOT do this in a production environment: Automatic Selection is able to identify DirectSAN capability on its own, and failover to network mode should be left enabled, since it can be your saviour if something isn’t working in your storage fabric.
One last activity before creating a test backup: browse the Veeam console as shown here, and hit “rescan”:
That’s because Veeam rescans the infrastructure only at scheduled intervals; unless you force a rescan (or wait for the next scheduled run), your backup will fail, because Veeam is not yet aware that the new proxy can connect to the datastores via DirectSAN.
Finally, it’s time to create a test backup! Create a new backup job, select one of the virtual machines hosted in one of your compatible datastores, explicitly select the “phyrtual” proxy to be used, and run it:
If you select the specific VM on the left, you will see the [san] tag between brackets. This is the confirmation that DirectSAN is working, even with a virtual proxy!