My lab is built basically as a production environment: it has 3 × 1RU rack servers, gigabit switches and a couple of iSCSI/NFS storage arrays. It’s nothing like a home lab: it’s noisy and it consumes a good amount of electricity. I was lucky enough to have a good friend with some free space in his racks inside a datacenter, and he’s hosting my hardware for free. As time goes by, however, my hardware is getting old and starting to show its limits. At some point, I decided it was time for a hardware refresh.
My old servers…
I have 3 old HP ProLiant G5 servers, equipped with Intel E5400 processors and 20 GB of memory each. I was able to get them from a customer who was disposing of them, and at that time it was a good choice for me. However, they are starting to show their limits: first of all, the Merom-generation CPUs they have on board have no support for Intel VT-x with Extended Page Tables (EPT), which means I cannot run any nested 64-bit hypervisor on them. This is a major problem when I want to do quick tests on vSphere clusters, because I cannot create disposable clusters; I always have to run the tests on the physical ESXi hosts, and sometimes this is a problem. In my idea, the underlying cluster should be a “production” lab with few, controlled changes, and then I’d need some “guinea pig” servers for the most “dangerous” tests. Also, the datacenter is almost 100 km from my home, so it’s not easy to go there every time I make a mistake…
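As a side note, checking whether a CPU can support nested 64-bit hypervisors is quick: on Intel you need both the VT-x and the EPT flags. Here is a minimal sketch of such a check, assuming a Linux host (ESXi has its own ways of reporting this; the snippet is just for illustration):

```python
# Minimal sketch: read /proc/cpuinfo on a Linux host and look for the Intel
# virtualization flags relevant to nested 64-bit hypervisors.
# "vmx" = VT-x, "ept" = Extended Page Tables (missing on these old Xeons).

def cpu_flags(path="/proc/cpuinfo"):
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
print("VT-x (vmx):", "yes" if "vmx" in flags else "no")
print("EPT       :", "yes" if "ept" in flags else "no")
```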
Second problem, the memory: the G5 has only 12 DIMM slots, but above all the memory for these models is no longer available on the market, and the price of used DIMMs is insane. With all the tests I need to do, the amount of memory I have is never enough, and I always have to power down many VMs when I have to try something. This was probably the main reason for the hardware upgrade: for the same price as 64 GB of RAM for one of these servers, I’d be able to buy a whole new server.
Last problem, the PCIe bus: the G5 has a Generation 1 PCIe bus, very limited in bandwidth and overall performance. It’s enough to handle the 1G Ethernet cards they have on board, but a painful bottleneck for the Fusion-IO cards I have. I still have a bunch of SSDs in my office that I never installed in those servers, precisely because I know they would be wasted…
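To put some rough numbers on it: PCI Express 1.x delivers about 250 MB/s of usable bandwidth per lane, while Gen 2 doubles that, so even a x8 Gen 1 slot tops out around 2 GB/s. A quick back-of-the-envelope calculation (nominal per-lane figures, not measured values):

```python
# Rough PCIe bandwidth per slot (nominal per-lane figures after 8b/10b
# encoding overhead for Gen 1/2; real-world throughput is lower).
PER_LANE_MB_S = {"Gen1": 250, "Gen2": 500}

for gen, per_lane in PER_LANE_MB_S.items():
    for lanes in (4, 8):
        print(f"{gen} x{lanes}: ~{per_lane * lanes / 1000:.1f} GB/s")
```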
My choices
At first I looked around for commercial servers from the major vendors. I spent hours on their websites, or those of some resellers, trying to create a configuration that was at the same time cheap and good for my needs. Unsurprisingly, every attempt ended up with a price that was out of reach for my budget… After recently reading some awesome posts by Erik Bussink (here) and Frank Denneman (here), I saw there was a good way to build a custom server on a fair budget (fair, not cheap).
The main difference between their design and mine is the chassis: I need rack servers, not tower ones. After some research, I found the right solution for me: the SuperMicro CSE-822T-400LPB. This is a 2RU rack chassis that can hold any 12″ x 13″ E-ATX motherboard, a major advantage for me, since hopefully in a few years I’ll be able to keep the chassis and only change the motherboard and CPU. Also, it has 6 bays for 3.5″ disks, useful for building local storage and testing new technologies like VSAN or other VSAs.
Finally, another advantage is the 2U height: my ProLiant G5 are 1RU, and they only have free space for 2 PCIe cards. Because they have only 2 × 1Gb Ethernet ports on board, I already had to add another dual-port Ethernet card to each of them. When I received the Fusion-IO cards, the last free PCIe slot was gone, and I had no further room to expand the servers. Add to this the fact that the onboard controller on those servers is an HP P400 with limited functionality (the biggest problem being the lack of SSD support), and you can quickly understand why I wasn’t able to also fit a good disk controller. The 2RU format finally gives me the possibility to add multiple PCIe cards, for example 10Gb Ethernet cards.
I’m not promoting any online reseller in this article, but I was able to buy this chassis for 288 €. One suggestion I can give you is to check an online reseller in your own country: the chassis is pretty big and it weighs 21 kg, so international shipping is going to heavily increase the final price. I bought this one from an Italian reseller and the shipping cost was only 7 €. As a comparison, the same chassis shipped directly from Hong Kong would have been only a few euros cheaper, but with 120 € of shipping costs.
For the motherboard, I followed the advice of both Erik and Frank, and I too went for the SuperMicro X9SRH-7TF. With the new 6- and 8-core CPUs on the market I no longer need a dual-processor motherboard. All the considerations made by Erik are valid: the management console, for example, is a must for me since my lab is so far from my home. The 10G connections are going to be used at 1G for now; I’m not planning to add a 10G switch, it’s out of my budget, but as Erik said, by having at least two servers with 10G connectivity I can do some tests with a simple crossover cable. The motherboard has only these two onboard 10G connections, but with 3 PCIe slots and the 2U chassis I can move the Ethernet cards I already have into these new servers, together with the Fusion-IO cards, and still have a free slot (plus 2 32-bit slots…). The best price I found for the motherboard was 375 €, from some US resellers; thanks to my frequent flights there, I’ll try to buy it there and bring it home. I’m not sure I’d be able to find it at that price directly in Europe…
The 8 memory slots of the motherboard are another winning point: a single modern processor can address all the 512 GB of RAM that can be installed on this motherboard, without needing a second CPU. I had two choices here: 8 × 8 GB, or 4 × 16 GB. Even if a 16 GB module is slightly more expensive than two 8 GB ones, I went for the latter option. This way I use only 4 slots; if in the future I want to increase the memory, I don’t have to throw away the existing modules, I only have to add new ones. I found 4 Kingston 16GB KVR16R11D4/16HA modules for 144 € each, so the total is going to be 576 €.
Finally, the CPU. Unlike Erik and Frank, I care more about cores than cache or frequency: I run a high number of VMs in my lab, so my main parameter is the vCPU-per-core ratio. For these reasons, I chose an Intel E5-2620. It’s a 6-core CPU at 2.5 GHz with 15 MB of cache, it has Hyper-Threading so if I need to I can present 12 logical cores, and it supports all the most recent virtualization features. I found it at 299 € (again, in the USA), plus 45 € for a cooler. I still have to find a proper cooler that is not too tall and can fit into the 2U chassis.
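To make the “cores over frequency” idea concrete, this is the kind of back-of-the-envelope consolidation math I have in mind; the VM counts below are purely hypothetical, not my real workload:

```python
# Toy example of the vCPU-per-core ratio.
# The VM counts and vCPU sizes below are hypothetical, for illustration only.
physical_cores = 6            # Intel E5-2620
logical_cores  = 12           # with Hyper-Threading

vms = {"small (1 vCPU)": (10, 1), "medium (2 vCPU)": (8, 2)}  # name: (count, vCPUs)
total_vcpus = sum(count * vcpus for count, vcpus in vms.values())

print(f"Total vCPUs: {total_vcpus}")
print(f"vCPU per physical core: {total_vcpus / physical_cores:.1f}:1")
print(f"vCPU per logical core : {total_vcpus / logical_cores:.1f}:1")
```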
In the end, the total price of my new server, without any local disks for the moment, is going to be 1583 €, or about 2170 USD if you prefer. I know there are far cheaper ways to design a server for a lab, but for my needs this is a good price, and any comparable commercial server from any vendor (even a cheap SuperMicro) came in at 2200 € or more.
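For reference, this is how the figure breaks down, using the prices quoted above (the USD value simply comes from an exchange rate of roughly 1.37):

```python
# Quick sanity check of the build cost, using the prices quoted above (EUR).
parts = {
    "SuperMicro CSE-822T-400LPB chassis": 288,
    "SuperMicro X9SRH-7TF motherboard":   375,
    "4 x Kingston 16GB KVR16R11D4/16HA":  576,   # 4 * 144
    "Intel E5-2620 CPU":                  299,
    "CPU cooler":                          45,
}

total_eur = sum(parts.values())
print(f"Total: {total_eur} EUR")               # 1583 EUR
print(f"Approx. USD: {total_eur * 1.37:.0f}")  # ~2170 USD at roughly 1.37 USD/EUR
```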