VVOLs are more than just “per-VM” storage volumes


As I closely follow the growth and evolution of this new technology for vSphere environments (while still searching for a solution to play with VVOLs in my lab), I came across an article in the blogosphere, and some additional comments on Twitter, that made me rethink a bit the real value of VVOLs.

Is it just “per-VM” storage?

The original article comes from one of my favorite startups, Coho Data. In it, Suzy Visvanathan explains why Coho, being an NFS-based storage system, doesn’t really need to support VVOLs. To me, the article sounded strange from the beginning, especially considering that Suzy had just joined the company, having previously been a product manager at VMware on the VVOLs team itself.

I’m not going to explain again what VVOLs are and how they work; there are plenty of articles around. What I see some of those articles missing, and the one from Suzy is no exception, is one half of the story.

Indeed, the biggest and most visible effect of VVOLs is the per-VM management that is now possible in the storage layer. No more huge LUNs holding dozens of VMs, where every activity had to involve all the VMs sharing the same datastore. Now, with VVOLs, each VM has its own set of dedicated “volumes”, and any operation, a snapshot or a replica for example, can be targeted at that specific VM. Is it such a great innovation? To me, yes, but for people using NFS storage arrays this was already possible before VVOLs: each VM file (a vmdk virtual disk, a configuration file, a log…) was already visible as a distinct file in the file system, and as such was already the atomic unit of storage for any operation. And since Coho exposes its storage as an NFS share, this is probably where the bias of the article comes from.
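To make the per-VM point concrete, here is a minimal pyvmomi sketch, with purely hypothetical hostname, credentials and VM name: it takes a snapshot of one VM, and whether that VM sits on VVOLs or on a per-file NFS datastore, the operation touches only that VM’s own objects, never a whole LUN full of neighbours.

```python
# Minimal sketch, assuming a reachable vCenter and a VM named "web01"
# (both hypothetical). The snapshot is scoped to this single VM's
# virtual volumes/files, not to a shared LUN.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab only: skip certificate checks
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="secret",
                  sslContext=ctx)
content = si.RetrieveContent()

# Find the VM by name across the whole inventory
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "web01")

# Per-VM operation: only this VM's objects are involved
task = vm.CreateSnapshot_Task(name="pre-patch",
                              description="per-VM snapshot example",
                              memory=False,
                              quiesce=True)

Disconnect(si)
```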

It’s all about policies

Yes, per-VM storage management is great, no doubt (thanks to HP Storage for the nice diagram):

VVOLs architecture overview

But this same picture also shows the other, and in my opinion even more important, part of the architecture: Policy-Based Management.

SPBM, Storage Policy Based Management, is in my opinion as big an innovation of VVOLs as the per-VM granularity of the storage layout. Said even more explicitly: the real innovation is probably SPBM itself, and without per-VM management it simply wouldn’t be possible, or at least not so effective; the new storage layout has essentially been a requirement to enable SPBM.

VM deployment and management are now policy driven: storage capabilities are surfaced up to vSphere, admins build policies by selecting the desired capabilities, a policy is chosen when the VM is deployed, and finally the VM’s VVOLs are created in the right place on the storage array to comply with the policy requirements. These capabilities vary from vendor to vendor, and since there is no hard-coded list of VVOLs capabilities, it all comes down to the array itself and whatever specific or unique features it has. And thanks to the VASA Provider exposing these capabilities to vCenter, everything stays in the hands of the storage vendor: vSphere simply consumes the capabilities the storage offers.
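To show what “consuming” those capabilities looks like from the vSphere side, here is a minimal pyvmomi sketch that assigns a storage policy to an existing VM. It assumes you already hold a VirtualMachine object and already know the policy’s profileId (in a real script that ID would be retrieved through the SPBM/PBM endpoint); all names here are illustrative, not the only way to do this.

```python
# Minimal sketch: attach an SPBM storage policy to a VM and its disks.
# The profile_id and the vm object are assumed to exist already
# (the ID would normally be looked up through the SPBM/PBM API).
from pyVmomi import vim

def apply_storage_policy(vm, profile_id):
    policy = vim.vm.DefinedProfileSpec(profileId=profile_id)

    spec = vim.vm.ConfigSpec()
    spec.vmProfile = [policy]  # policy for the VM home (config/swap) objects

    # Apply the same policy to every virtual disk of the VM
    for dev in vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualDisk):
            change = vim.vm.device.VirtualDeviceSpec()
            change.operation = vim.vm.device.VirtualDeviceSpec.Operation.edit
            change.device = dev
            change.profile = [policy]
            spec.deviceChange.append(change)

    # vCenter and the VASA Provider take care of placing (or moving)
    # the VM's VVOLs so that they comply with the policy
    return vm.ReconfigVM_Task(spec)

# Hypothetical usage:
# task = apply_storage_policy(vm, "aa1b2c3d-4e5f-6789-abcd-ef0123456789")
```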

And because of this, I think it doesn’t matter whether your array already supports a per-VM layout internally. You still want to support VVOLs in order to expose your features to vSphere and have it consume them. Say you are a startup with some great features nobody else has: that is exactly the reason to support VVOLs and let vSphere use those features.

Per-VM granularity and policy-driven management have indeed taken a long time to arrive in VMware environments, but now that they are available, it would be a shame to ignore them just because your storage is NFS-based.

  • Disclaimer: VMware employee.

    Hi Luca – great article. As well as the per-VM granularity and the policy-driven storage capabilities, I think there are two other features worth mentioning. Both revolve around the use of PEs (Protocol Endpoints). With PEs, the storage access point is decoupled from the actual data container, which means that only one or a few PEs are now needed by the ESXi host to utilize all the storage on an array. There is no need for many, many LUNs or many, many mount points, which has traditionally been the case. This removes a lot of the work vSphere admins have had to do in the past, such as ensuring that storage presentation and multipathing are consistent and uniform across all hosts sharing the datastore. PEs also bring the ability to scale storage from a vSphere perspective. The limit continues to be 256 LUNs per host, but with PEs we can now scale to much larger configurations.

    Cormac

  • Alex wbtit

    Hello Mr. Luca,

    I’m a system administrator whose mission is to search the market for a new storage product for our VMware infrastructure, which currently resides on a (high latency) NFS NAS that is (only) two years old.

    Right now the storage offering has literally exploded, with many, too many, new hardware and software vendors: hybrid, AFA, SDS, hyperconverged, etc., etc.

    While evaluating all the choices I’ve found two candidates, but I’ve also discovered something very interesting about SSD performance that could influence the requirements:

    http://storagemojo.com/2015/06/03/why-its-hard-to-meet-slas-with-ssds/

    At the end of the article the author wrote:

    “My read of the paper suggests several best practices:

    Give each VM its own partition.

    Age SSDs before testing performance.

    Plan for long-tail latencies due to garbage collection.

    Pray that fast, robust, next-gen NVRAM gets to market sooner rather than later.”

    The questions are:

    – could “Give each VM its own partition” be correlated with VVOLs?

    – do VVOLs have an unintentional, hidden feature that improves SSD performance? And, if yes,

    – is VVOLs support by (SSD-based) storage vendors a key requirement?

    • Hi Alex,
      VVOLs, as explained here and on Cormac’s blog, are a great enhancement, thanks both to the per-VM granularity and operations (like snapshots) and to the policies you can apply to them.
      But I wouldn’t say they are a mandatory requirement for selecting a storage technology; if, as a customer, you think you need the benefits that VVOLs can bring, then obviously make this a requirement for picking the winning vendor. Or, if a vendor does not have VVOLs now, get a clear statement from them about when they are going to support it; many vendors are not ready today, but they probably will be in the near future.
      And finally, no, SSDs are not needed to have VVOLs; there are many hybrid arrays supporting VVOLs (or they will soon).