Run PernixData on servers without SSD


If you are designing a new project with PernixData, you can have a cluster where not all ESXi servers have a flash device (a PCIe card or an SSD); however, there are some design considerations to keep in mind, both during installation and while using the software.


In my lab I have 3 ESXi servers, but only two of them have a flash device (a pair of 320 GB Fusion-io ioDrive cards), while the third server has nothing, not even a small SSD.

So, when I was installing PernixData, I first installed the host extensions on the first two servers and configured the FVP cluster:

Pernix Cluster

With this configuration, I was able to enable acceleration for a virtual machine running on server 1 or 2 in write-through mode (accelerating reads only), but I wasn't able to select write-back:

Pernix write-back not possible

The solution is to install the host extensions on ALL ESXi servers in the cluster, even those without a flash device. So I installed PernixData on my third server as well, and even though Pernix complained about the missing flash device on this server:

Pernix Cluster 2

I was nevertheless able to select Write Back:

Pernix Write-back

Obviously, I could only choose between local flash only or, at best, one network flash device, since only two of my servers had flash.
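For reference, the host extension is distributed as a standard ESXi VIB, so the installation on each host can be sketched as below. The depot path is hypothetical; substitute the offline bundle shipped with your FVP version:

```shell
# Hypothetical path to the PernixData host-extension offline bundle
DEPOT="/vmfs/volumes/datastore1/PernixData-host-extension.zip"

# Install the VIB (run from an SSH session on each ESXi host,
# including the hosts without a flash device)
esxcli software vib install -d "$DEPOT"

# Confirm the extension is installed
esxcli software vib list | grep -i pernix
```

The same commands work on the hosts without flash, which is exactly what makes write-back selectable cluster-wide.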

Can I move a VM on the server without Flash?

My accelerated virtual machine is running on server 2, one of the two with a Fusion-io card, and Pernix confirms that I'm actually using Write Back acceleration:

Pernix Write-Back enabled

Since my cluster is configured with VMware DRS in automatic mode, the biggest question that came to my mind was: "Can I vMotion this virtual machine to the server without any flash device, and how will Pernix react?" To answer this question, I reconfigured DRS in manual mode so that my virtual machine would stay on the desired ESXi server; then I started a JetStress workload simulating 200 Exchange Server users (200 MB and 10 IOPS per mailbox). PernixData immediately started doing its job, accelerating the Exchange database. At 00:43 (the red line in the following graph) I started a vMotion from server 2 (with flash) to server 3 (without flash). The migration completed without a hitch, and Pernix simply stopped accelerating that virtual machine:

Pernix: start of a vMotion

For a short time, cache status was “Replay Required or in Progress”:

Pernix Cache Replay

and then it became "Write Through", even though obviously nothing was being accelerated, since there was no flash device on that server.

Pernix on non-accelerated Host

You can also see that the cache content was then listed as completely remote (Network flash), since it was saved on ESXi 2. As a last test, I moved the same virtual machine back to server 2; Pernix immediately reconfigured the cache for "Write Back" and resumed accelerating the I/O.

Final notes

Two things can be learned from these tests.

From a technical standpoint, you can use PernixData, even in write-back mode, in a cluster where not all servers have a flash device; you only need to install the host extensions on all the servers. All vMotion-based activities work, including manual vMotion and DRS, and Pernix adapts its behaviour to the server the accelerated VM is running on.

From a design standpoint, however, I strongly advise against such a configuration: the performance of a virtual machine will swing up and down, with all the consequences for a guaranteed performance level. If you really need to use flash devices only on some servers, maybe because your budget is limited or because you only need to accelerate a single virtual machine, you should create dedicated DRS rules so that those virtual machines run only on the servers with flash devices, avoiding dangerous vMotions.
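As a sketch of that last recommendation, a VM-to-host "should run on" DRS rule can be scripted, for example with govc (the CLI from the govmomi project); all the cluster, host, and VM names here are placeholders, and the flag names assume govc's `cluster.group.create` and `cluster.rule.create` commands:

```shell
# Group the hosts that actually have a flash device (placeholder names)
govc cluster.group.create -cluster=MyCluster -name=flash-hosts \
    -host esxi1.lab.local esxi2.lab.local

# Group the virtual machines accelerated by PernixData
govc cluster.group.create -cluster=MyCluster -name=pernix-vms -vm exchange01

# A "should run on" rule keeps DRS from moving the accelerated VMs
# to hosts without flash, while still letting HA restart them anywhere
govc cluster.rule.create -cluster=MyCluster -name=pin-to-flash -enable \
    -vm-host -vm-group=pernix-vms -host-affine-group=flash-hosts
```

A "should" rule (rather than "must") is usually the safer design choice here, because it preserves HA flexibility while preventing the performance swings described above.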