With the release of Ceph Luminous 12.2 and its new BlueStore storage backend finally declared stable and ready for production, it was time to learn more about this new version of the open-source distributed storage system, and to plan the upgrade of my Ceph cluster.
As Microsoft Windows 2016 is now finally generally available, people are starting to seriously look at its features, and no doubt S2D together with the new ReFS 3.1 is one of the hot topics. I first updated my lab with the final version of Windows 2016 in order to have my cluster in a “stable” state, then I started to focus on the different topics related to Windows 2016 and its usage as a Veeam repository. And I started to ask: “How can we leverage ReFS Block Cloning and Storage Spaces to make Windows 2016 the best solution for Veeam repositories? What about Storage Spaces Direct?”.
Netflix decided in 2008 that its new business model would be the complete consumption of public cloud, specifically AWS. It took the leader in video streaming 8 years to complete the migration of its services into AWS, and now Netflix doesn’t run any significant workload on its own premises.
The latest news about telecommunication companies and their struggles against the giant cloud service providers shows how the war for the public cloud is at its peak, and we are starting to see the first victims.
In 2014, in a presentation I gave, I told people that within 2-3 years new and cheaper flash memory would become the standard solution for general-purpose disk storage, thanks to a price per GB comparable with spinning disk. It seems that I was right after all.
Looking at the latest announcements and the history of the behemoth of public cloud services, probably yes. And a leading one.
As I’m closely following the growth and evolution of this new technology for vSphere environments, I’ve found an article in the blogosphere and some additional comments on Twitter that made me rethink a bit the real value of VVOLs. Is the real value of VVOLs the VM granularity, or is it more the policy-based management?
I’ve always been a fan of scale-out storage solutions, and I’ve always preached about them.
As data volumes skyrocket, the most viable way to cope with this growth is a system that can be scaled accordingly, without the pain of data migrations involving TBs of data. One of the limits of scale-out systems, however, has always been the data protection techniques applied to them. RAID is inefficient and replication is too expensive, so what about erasure coding? Is it mature enough to become the new data protection technique for storage systems?
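To make the trade-off concrete, here is a toy sketch of the simplest possible erasure code: k data chunks plus one XOR parity chunk, which is essentially RAID-5 math. Production scale-out systems use Reed-Solomon codes with configurable k+m that survive multiple losses; this sketch only survives one, and the function names and parameters are my own illustration, not any specific product’s API.

```python
def encode(data: bytes, k: int) -> list:
    """Split data into k equal-size chunks and append one XOR parity chunk."""
    size = -(-len(data) // k)  # ceiling division
    chunks = [data[i * size:(i + 1) * size].ljust(size, b'\0') for i in range(k)]
    parity = bytearray(size)
    for chunk in chunks:
        for i, b in enumerate(chunk):
            parity[i] ^= b
    return chunks + [bytes(parity)]

def recover(stripes: list, lost: int) -> bytes:
    """Rebuild the lost stripe by XOR-ing all surviving stripes together."""
    size = len(next(s for s in stripes if s is not None))
    out = bytearray(size)
    for idx, stripe in enumerate(stripes):
        if idx != lost:
            for i, b in enumerate(stripe):
                out[i] ^= b
    return bytes(out)
```

With k=4 data chunks and 1 parity chunk, the storage overhead is 1.25x the original data, versus 2x (or more) for full replication — that capacity efficiency is exactly why erasure coding is so attractive for scale-out storage, at the cost of extra CPU work on writes and rebuilds.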