vRam Entitlement and Monster Servers

Last week we were working on a customer’s VMware cluster. As usual, the work days were also a chance to talk about the latest releases from VMware, and the main topic was: “should we upgrade to vSphere 5 or not?”.

Scenario: a two-node cluster, each node with four hexa-core Intel sockets, 512 GB of RAM, and 10G connectivity, all licensed with Enterprise Plus 4.1. The reason for these choices is the custom-made application at the heart of this deployment: CPU- and RAM-hungry, both needed for the kind of workload it runs. Enterprise Plus licensing was chosen in order to have 8 vCPUs on those VMs. Also, 10G network cards were basically mandatory to handle the high I/O towards the iSCSI storage, and at the same time to allow vMotion/DRS of these big VMs.

Talking with the customer, the obvious fear was exceeding the vSphere 5 vRAM entitlement and having to buy additional licenses.

Is that true?

Surprisingly (for the customer…), a run of the vSphere Licensing Advisor tool showed us 20% of unused memory. The reason was easy to explain: 4 sockets.

Enterprise Plus licenses entitle you to 96 GB of vRAM per socket, pooled across the whole cluster. With 8 sockets, we have 768 GB of vRAM entitlement. That is lower than the full TB of installed RAM, but enough for a 75% RAM load. Also, consider that in a two-node cluster a single node should not run above 50% load anyway, or it could not absorb the other node’s VMs in a failover. So we found that the upgrade to vSphere 5 is not an issue even in this scenario.
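The arithmetic above can be checked with a quick back-of-the-envelope script. This is only a sketch of the licensing math described here (8 sockets × 96 GB pooled entitlement against 1 TB of installed RAM); the constant names are mine, not part of any VMware tool.

```python
# Back-of-the-envelope vRAM entitlement check for this two-node cluster.
# Figures from the scenario: 2 hosts, 4 sockets each, 512 GB RAM per host,
# Enterprise Plus at 96 GB of vRAM per socket license, pooled cluster-wide.

HOSTS = 2
SOCKETS_PER_HOST = 4
RAM_PER_HOST_GB = 512
VRAM_PER_LICENSE_GB = 96  # Enterprise Plus entitlement per socket

licenses = HOSTS * SOCKETS_PER_HOST            # 8 socket licenses
entitlement_gb = licenses * VRAM_PER_LICENSE_GB  # 8 * 96 = 768 GB
installed_gb = HOSTS * RAM_PER_HOST_GB           # 2 * 512 = 1024 GB

print(f"Pooled vRAM entitlement: {entitlement_gb} GB")
print(f"Installed RAM:           {installed_gb} GB")
print(f"Usable share of RAM:     {entitlement_gb / installed_gb:.0%}")
```

Note that the entitlement is counted against the configured vRAM of powered-on VMs, so the comparison with installed RAM is the worst case: you only hit the limit if every gigabyte of physical RAM is actually assigned to running VMs.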

Finally, the vSphere 5 upgrade also brings additional features that are useful in this scenario:

– 32 vCPUs in a single VM, so we can create even more powerful VMs

– vMotion can use multiple Ethernet cards, so we can move huge VMs more easily

– Stun During Page Send (SDPS), which deliberately slows down a VM’s vCPUs when it is dirtying memory faster than vMotion can copy it, so even heavily loaded VMs can be vMotion-ed

At the end of this analysis, the upgrade to vSphere 5 has become an opportunity rather than an issue.