In my previous post, I talked about Veeam N2WS Backup and Recovery (previously known as CPM) and how to configure it to protect different AWS accounts. Now that the configuration is ready, it's time to protect the virtual machines and to export them to S3, so that we can have an offsite copy using Veeam Backup & Replication.
Step 1: create a dedicated S3 repository
We left the backup job running at the end of the previous blog post. The job completed, but only a local copy is available. We want an additional copy stored in an S3 bucket, so that we can later expose it externally to Veeam Backup & Replication.
So, first of all we need to create the S3 bucket. This is created in the Master account, the one where N2WS is running: again, we don't want backups to be accessible from the account we are protecting, in case it's compromised.
So, we go into S3 and create a new bucket. This has one fundamental requirement: encryption has to be enabled, otherwise it will not be possible to map it as an N2WS repository, and you will get the error “Repositories can only be created in an encrypted S3 bucket”. Also, while not a requirement, it's still a best practice to use a dedicated bucket for this specific task.
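If you prefer to script this step, here is a minimal boto3 sketch that creates the bucket and enables default encryption; the bucket name and region are placeholders for this example, adapt them to your environment:

```python
import boto3

# Placeholder bucket name and region: adapt to your environment.
BUCKET = "n2ws-backup-repo-example"
REGION = "eu-west-1"

s3 = boto3.client("s3", region_name=REGION)

# Create the bucket in the master account
s3.create_bucket(
    Bucket=BUCKET,
    CreateBucketConfiguration={"LocationConstraint": REGION},
)

# Enable default encryption (SSE-S3), which N2WS requires on the bucket
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)
```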
Step 2: create “cpmdata” policy
If you just try to register the new S3 repository into N2WS, you will get this error:
“The cpmdata policy must be configured before ‘Copy to S3’ can be used”
This is also explained in this KB article. So, we go into policies and we create this special policy like this:
Note that you also need a Schedule to be attached to it.
Step 3: register the new S3 repository
Then, we can create a new S3 Repository (new in N2WS terms, since it already exists in AWS):
The S3 repository is finally mapped and visible in N2WS console:
Step 4: configure an S3 Worker
An S3 worker is a temporary machine that is dynamically deployed into the account where the data is stored, so that it can read the EBS snapshots and copy their data over to an S3 bucket, in Veeam proprietary format. This machine needs to be configured before starting the policy, otherwise you will face an error like this:
The local backup was successful, but the subsequent copy to S3 failed. By opening the log, we can clearly see what happened:
Tue 04/02/2019 06:55:46 PM - Info - Backup Finished successfully on all volumes/databases.
Tue 04/02/2019 06:55:46 PM - Info - Copy to S3 repository initiated
Tue 04/02/2019 06:55:46 PM - Info - Starting copy to S3
Tue 04/02/2019 06:55:47 PM - Error - Worker configuration not found for account eu-west-1, region cpmdemo-sandbox_bkp. Cannot launch worker
Tue 04/02/2019 06:55:47 PM - Error - Workers could not be launched
Tue 04/02/2019 06:55:47 PM - Error - Backup copy failed
We need to configure an S3 Worker in the target account. We can do this by looking at the bottom of the console, where there's a specific option:
This opens a configuration box:
It's important to select the same account and region as listed in the error! Once defined, the worker is listed among the available ones, since a master account may leverage multiple workers, one for each tenant it is protecting:
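As a quick sanity check before configuring the worker, a short boto3 sketch (region assumed from the error above) can confirm that the EBS snapshots really live in that account and region, which is where the worker will be launched:

```python
import boto3

# Region assumed from the error message above; adapt as needed.
REGION = "eu-west-1"

# The S3 worker runs in the account/region where the EBS snapshots
# live, so verify they are where we expect them.
ec2 = boto3.client("ec2", region_name=REGION)
snapshots = ec2.describe_snapshots(OwnerIds=["self"])["Snapshots"]
print(f"{len(snapshots)} snapshots owned by this account in {REGION}")
```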
Step 5: define a policy to move data to S3
Now that the infrastructure is properly configured, we can go back to our backup policy and configure the S3 options:
In the dedicated settings:
we start by enabling Copy to S3, we then select the target repository, and then we choose the frequency, that is, after how many local backups an S3 copy is also created. If you select 1, every backup will also be copied to S3; since the backup is daily, choosing 7, for example, means the copy to S3 will only happen weekly. We also define how many restore points (“generations”) we want to keep, and whether retention also has to be time based.
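To make the frequency setting concrete, here is a toy illustration (not N2WS code) of which daily backup runs would also produce an S3 copy for a given value:

```python
# Toy illustration of the "copy to S3" frequency setting (not N2WS code).
FREQUENCY = 7  # copy to S3 once every 7 local backups

for run in range(1, 15):
    copied = run % FREQUENCY == 0
    print(f"Daily backup #{run}: copy to S3 -> {copied}")
```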
Now that the configuration has been completed, we can execute the policy immediately to see if it now completes successfully:
And yes, now the backup has also been stored in S3.
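A quick way to double-check from the API side is listing the bucket contents with boto3 (bucket name assumed from Step 1); the objects are in Veeam's proprietary format, so they are only useful as confirmation that data landed there:

```python
import boto3

# Bucket name assumed from Step 1; adapt to your environment.
BUCKET = "n2ws-backup-repo-example"

s3 = boto3.client("s3")
resp = s3.list_objects_v2(Bucket=BUCKET, MaxKeys=20)

# Print the first few objects N2WS wrote into the repository
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])
```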
Step 6: mount the S3 bucket in Veeam Backup & Repository
Starting from version 9.5 Update 4, Veeam Backup & Replication (VBR) can mount the so-called “external repository”, that is, the S3 bucket where N2WS has copied its backups.
First, we need to register in VBR a new IAM user that is capable of accessing the S3 bucket:
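If you want to create that user's permissions as code, here is a minimal sketch of an IAM policy granting read access to the repository bucket; this is an assumption, not Veeam's official policy, so check the VBR documentation for the exact permission set your version requires:

```python
import json
import boto3

# Bucket name assumed from Step 1; policy name is hypothetical.
BUCKET = "n2ws-backup-repo-example"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {   # let VBR locate and enumerate the repository bucket
            "Effect": "Allow",
            "Action": ["s3:GetBucketLocation", "s3:ListBucket"],
            "Resource": f"arn:aws:s3:::{BUCKET}",
        },
        {   # let VBR read the backup objects themselves
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        },
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="vbr-external-repo-read",
    PolicyDocument=json.dumps(policy),
)
```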
Then, we go into Backup Infrastructure -> External Repositories and we choose to add a new repository. We give it a name, then we choose to use the credentials we just stored, we specify that the region is “global”, and we use one of the Windows machines as the connecting gateway to S3 itself. In the bucket options, we go and search for our S3 repository:
The bucket appears in our console:
The scan process also told us that there's one backup in it; we can infer this from the fact that there's already 1GB of used space, and if we go and look into our backups, we can clearly see it:
Step 7: backup copy from AWS to a local repository
Now, we can configure a classic backup copy job, where the source is the external repository, and the target one of our local repositories. There’s a specific option to create a backup copy job for Amazon EC2 instances:
We configure all the usual parameters, and the job immediately downloads the available restore point:
At the end of the activity, we now have a local backup containing an EC2 instance, and we have multiple restore options for it: