The last presentation I attended at the Tech Field Day roundtables during VMworld US 2013 was with SimpliVity.
SimpliVity is a company working in the hyper-converged market, like Nutanix and Scale Computing. Founded in Boston in 2009, it now has around 100 employees, 65 of them engineers; to me, this ratio is a sign of a strong, ongoing focus on development and innovation. Their product, named OmniCube, is a hyper-converged system where compute and storage resources are collapsed into the same server and then aggregated across multiple nodes to ultimately offer a scale-out system.
Each node has CPU power and memory, both used by the bare-metal hypervisor (VMware vSphere as of today). Inside the hypervisor there is a virtual machine called SVT, which controls all the local storage in each node, made of mechanical disks and SSDs used together in a tiering system. The more OmniCubes you add to the federation (SimpliVity's term for a cluster), the more evenly data are spread across all of them. This is one of the reasons a 10G network is mandatory: writes are synchronously saved to two nodes before they are acknowledged, so a 1G network could eventually become a bottleneck.
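To make the write path concrete, here is a minimal sketch of that synchronous dual-write behaviour. This is my own illustration, not SimpliVity's actual code: the `Node` class and `replicated_write` function are hypothetical, and the real system obviously persists to disk over the 10G network rather than to an in-memory dictionary.

```python
import concurrent.futures


class Node:
    """Hypothetical storage node that persists blocks locally (in memory here)."""

    def __init__(self, name):
        self.name = name
        self.blocks = {}

    def write(self, block_id, data):
        self.blocks[block_id] = data
        return True  # local copy persisted


def replicated_write(nodes, block_id, data):
    """Acknowledge only after the block is persisted on two nodes.

    Both writes run in parallel, but the client gets its ack only
    once both copies exist -- the synchronous behaviour described above.
    """
    primary, mirror = nodes[0], nodes[1]
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(n.write, block_id, data) for n in (primary, mirror)]
        results = [f.result() for f in futures]  # block until both complete
    return all(results)


nodes = [Node("omnicube-1"), Node("omnicube-2")]
acked = replicated_write(nodes, "blk-42", b"payload")
```

Since the acknowledgment waits for the slower of the two copies, the inter-node link sits directly on the write latency path, which is exactly why a 1G network would hurt.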
There are many ideas similar to those of other vendors: the VSA is the heart of the solution, they rely on vSphere networking for all communication between nodes, storage is presented to ESXi hosts as NFS, HDDs and SSDs are combined for tiering, and space is optimized via deduplication and compression. In my opinion, the reason all these companies have converged on the same design is that these choices are the best way to build a scale-out solution: direct integration of the storage layer into the kernel would probably have been too time-consuming to realize, or at least too difficult, since ESXi is a closed-source system that offers access only to APIs, not to source code. The use of a VSA is an elegant and quick way to work around this problem.
SimpliVity nonetheless has some distinguishing features. While others decided to get rid of many redundancies inside a single node and concentrate on the overall reliability of the cluster, here there is also RAID protection inside each node, as well as dual power supplies. Local RAID probably reduces the performance and usable space of the disk pool, but on the other hand it protects workloads from the failure of a single disk. Design choices.
An interesting element is OmniStack, their own “Data Virtualization Engine”, which allows OmniCube to offer deduplication, compression and data optimization. All these features are enabled and executed inline thanks to an internally developed PCI card called the “OmniCube Accelerator”. Thanks to it, all IO activity at each layer (RAM, SSD, HDD) is heavily reduced, resulting in a much better-performing system: it may not be intuitive, but since a hard disk is so limited in its performance, the fewer IO operations you send to it, the faster the system becomes.
Finally, SimpliVity in my opinion placed a deep focus on two topics: the introduction of their systems into existing environments, and data protection.
Few infrastructures are built from scratch with their solution; more often the systems are added to existing environments with some “standard” solutions already in place. For this reason, SimpliVity allows exporting the NFS storage outside the federation, so customers can leverage vSphere features like Storage vMotion. It’s a common feature in hyper-converged systems, and it helps in introducing them by letting them live side by side with the existing infrastructure while workloads are slowly migrated.
About data protection, there are first of all the duplicated writes to two different positions in the federation, or towards another federation, or even towards Amazon EC2. This is possible by running an SVT inside EC2: from there it can be joined to an on-premises federation, and replication policies can be configured between the two zones. Obviously the replica involves only storage data; it is not possible to start a VM in Amazon, since it does not run on vSphere. Maybe this will change in the future, by supporting public clouds based on vCloud Director. SimpliVity is able to replicate at the VM level rather than at the LUN level, and can replicate VMs both locally and remotely, applying different retention policies. Be aware, this is not a proper backup: they do not use vSphere snapshots or VSS. The copy is done at the storage layer, by replicating changed blocks. Consistency groups (logical groups of VMs that need to be replicated together) are not available; probably they will come in the future.
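The changed-block approach is easy to illustrate. The sketch below is my own simplified model, assuming fixed-size blocks and fingerprint comparison; SimpliVity's actual engine works on its deduplicated data structures, but the idea is the same: only blocks whose content changed since the last replica need to cross the wire.

```python
import hashlib


def block_fingerprints(volume: bytes, block_size: int = 4096):
    """Fingerprint each fixed-size block of a volume image."""
    return [
        hashlib.sha256(volume[i:i + block_size]).hexdigest()
        for i in range(0, len(volume), block_size)
    ]


def changed_blocks(old_fps, new_fps):
    """Indices of blocks whose content changed since the last replica."""
    return [i for i, (o, n) in enumerate(zip(old_fps, new_fps)) if o != n]


# Three-block volume; only the middle block is modified between replicas.
old_image = b"A" * 4096 + b"B" * 4096 + b"C" * 4096
new_image = b"A" * 4096 + b"X" * 4096 + b"C" * 4096
to_send = changed_blocks(block_fingerprints(old_image), block_fingerprints(new_image))
```

Only block 1 would be shipped to the remote federation. Note what this model also shows: since the copy happens at the storage layer, nothing quiesces the guest, which is exactly why these replicas are crash-consistent rather than application-consistent like a VSS-aware backup.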
SimpliVity is a really interesting solution. They are one of the two main vendors (the other being Nutanix) in a relatively young market. They share many features but also have some important differences. We will see in the near future how this market evolves, and whether other solutions appear, coming from new startups or some big vendor. Judging from several presentations, the development cycle needed to reach a stable, ready-to-sell solution takes around two years, so a new startup would be ready in 2015 at the earliest; I do not know, however, whether there are some companies in “stealth mode” ready to come out.
Finally, here is the complete presentation:
[This post was originally written by Luca Dell’Oca, and published on the blog www.virtualtothecore.com ]