Windows Server 2016 Storage Replica is a really interesting technology introduced by Microsoft, and the best part is that it also replicates ReFS block cloning savings. This makes it a great option for a Veeam storage repository, completely replicated across two different locations.
Dashboards in Ceph have always been a bit of a problem. In the past, I first tried to deploy and run Calamari, but it was a complete failure. I talked about my misadventures in this blog post, where I also suggested a far better solution: Ceph Dash. But now, with the release of Luminous, Ceph is trying again to have its own dashboard. Will it be good this time?
With the release of Ceph Luminous 12.2 and its new BlueStore storage backend finally declared stable and ready for production, it was time to learn more about this new version of the open-source distributed storage and plan the upgrade of my Ceph cluster.
SPBM allows virtualization administrators to remove all the burden of manually placing virtual disks: no more spreadsheets full of data about which VM is stored where, or which LUN coming from a given array has feature X enabled. With SPBM, admins create multiple policies with the needed options, and once a policy is applied to a VM, vSphere automatically checks the compliance of the VM against the storage it actually resides on; if the policy is not fulfilled, a Storage vMotion moves the VM to a compliant datastore. Policies can also be changed in real time, and remediation again happens automatically.
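The compliance-and-remediation loop described above can be sketched in a few lines of purely illustrative Python. This is not the vSphere API: the `Datastore`, `VM`, `is_compliant`, and `remediate` names are invented for the example, and a policy is reduced to a simple set of required capabilities.

```python
# Conceptual model of SPBM-style compliance checking (NOT the vSphere API).
# A policy is a set of required capabilities; a VM is compliant only when its
# current datastore advertises all of them, otherwise remediation moves it.
from dataclasses import dataclass, field

@dataclass
class Datastore:
    name: str
    capabilities: set = field(default_factory=set)  # e.g. {"replication", "dedupe"}

@dataclass
class VM:
    name: str
    datastore: Datastore

def is_compliant(vm: VM, policy: set) -> bool:
    """Compliant when the VM's datastore offers every capability the policy requires."""
    return policy <= vm.datastore.capabilities

def remediate(vm: VM, policy: set, datastores: list) -> VM:
    """If non-compliant, move the VM (a simulated Storage vMotion) to a compliant datastore."""
    if is_compliant(vm, policy):
        return vm
    for ds in datastores:
        if policy <= ds.capabilities:
            vm.datastore = ds  # simulated Storage vMotion
            return vm
    raise RuntimeError("no datastore satisfies the policy")

gold = {"replication"}
ds1 = Datastore("lun-1", {"dedupe"})
ds2 = Datastore("lun-2", {"replication", "dedupe"})
vm = VM("web01", ds1)
print(is_compliant(vm, gold))   # False: lun-1 lacks replication
remediate(vm, gold, [ds1, ds2])
print(vm.datastore.name)        # lun-2, after the simulated move
```

The real SPBM engine evaluates far richer rules, but the logic is the same: a VM is compliant only when its current datastore satisfies every requirement of its policy, and remediation is a move to a datastore that does.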
This capability is a huge advantage, and many admins are leveraging it more and more. But what happens when a virtual machine has to be restored from a backup? Are those policies preserved? The answer is yes, if you use Veeam Backup & Replication.
I’ve always been a fan of scale-out storage architecture; I’ve always said that The future of storage is Scale Out, and I’ve spent a fair amount of time studying software-only solutions like Ceph. The new solution from Microsoft, Storage Spaces Direct, looks like another great option that will soon be available to us, so I decided to test it in my lab.
The upcoming Veeam Availability Suite v9 has tons of enhancements and new features, but improvements around primary and backup storage will surely be one of the biggest parts of our next release.
We already announced a new addition to our list of supported storage arrays for our storage snapshots integration (EMC VNX/VNXe), but that isn’t the only storage news; on the contrary, there is plenty of it, and I’ll cover some highlights in this post.
Like any software, Ceph is subject to minor and major releases. My entire series of posts was written using the Giant release (0.87), but by the time I completed the series, Hammer (0.94) had been released. Note that Ceph, like other Linux software projects, names its major releases alphabetically, so Giant is the 7th major release. Since Ceph publishes both minor and major versions, it’s important to know how to upgrade it.
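Since release names advance one letter per major version starting from Argonaut (the 1st), the ordinal of a release can be derived from its first letter. A quick sanity check in Python, with `release_number` being a throwaway helper invented for this example:

```python
# Ceph major releases are named alphabetically, starting from Argonaut (A = 1st).
def release_number(name: str) -> int:
    """Ordinal of a Ceph major release, derived from its first letter."""
    return ord(name[0].lower()) - ord("a") + 1

print(release_number("Giant"))   # 7 -> Giant is indeed the 7th major release
print(release_number("Hammer"))  # 8 -> Hammer follows as the 8th
```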
In the last few months, I’ve refreshed my knowledge of Ceph storage, an open-source scale-out storage solution implemented entirely in software. As I’ve walked through my own learning path, I’ve created a series of blog posts explaining the basics, how to deploy and configure it, and my use cases. In this 8th part: the Veeam clustered repository.