SkunkWorks Labs

Skunk Works is an official alias for Lockheed Martin’s Advanced Development Programs (ADP), formerly called Lockheed Advanced Development Projects. Skunk Works is responsible for a number of famous aircraft designs, including the U-2, the SR-71 Blackbird (my favorite), the F-117 Nighthawk, and the F-22 Raptor.

The designation “skunk works”, or “skunkworks”, is widely used in business, engineering, and technical fields to describe a group within an organization given a high degree of autonomy and unhampered by bureaucracy, tasked with working on advanced or secret projects.

What better name, then, for my new lab dedicated to virtualization?

This lab is installed and operated solely by me. Most of my articles take shape in this lab, through trial and error, reinstalls, and experimentation with the various technologies I get to work with.

From time to time, vendors give me the chance to try some of their hardware or software products, which are then temporarily added to the lab. What is described here is the equipment I own, and therefore the permanent part of the lab. This page will be updated as new components are added or replace existing ones, and at the bottom you will find a changelog recording these changes.

Overview

As I have said in the past, having a lab as similar as possible to a production environment is the best setup one can aim for, for a long list of reasons.

Servers

The lab comprises three servers:

ESXi01
1 * Intel Xeon E5-2603 @ 1.80 GHz (4 cores, Sandy Bridge generation)
64 GB RAM

ESXi02
1 * Intel Xeon E5-2603 @ 1.80 GHz (4 cores, Sandy Bridge generation)
64 GB RAM

ESXi03
2 * Intel Xeon X5650 @ 2.67 GHz (12 cores total, Westmere generation)
96 GB RAM

Thanks to these servers, I can use the Westmere EVC mode in my vSphere cluster, which means every host offers VT-x with Extended Page Tables (EPT). So, I can create nested environments like virtual ESXi servers or other hypervisors.
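As a quick sanity check (my own addition, not from the original setup notes): inside a Linux VM running on a nested-capable host, you can verify that the hypervisor actually exposes the Intel VT-x flag by looking at `/proc/cpuinfo`. A minimal sketch:

```python
# Check whether a Linux guest sees the CPU flag needed for nested
# virtualization (illustrative helper, not part of the lab's tooling).

def has_nested_virt_flags(cpuinfo_text: str) -> bool:
    """Return True if the 'vmx' (Intel VT-x) flag is visible to the guest."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            return "vmx" in line.split(":", 1)[1].split()
    return False

if __name__ == "__main__":
    try:
        with open("/proc/cpuinfo") as f:
            print("VT-x visible to this guest:", has_nested_virt_flags(f.read()))
    except FileNotFoundError:
        print("Not a Linux system; /proc/cpuinfo unavailable")
```

If the flag is missing inside the VM, the "Expose hardware assisted virtualization to the guest OS" option has probably not been enabled for it.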

Networking

HP V1910-24

My main switch is an HP V1910. It has 24 Gigabit Ethernet ports plus 4 SFP connectors. My design has several VLANs, some of which need to talk to each other; this switch has native support for VLAN routing, so it also interconnects all my VLANs. To reach my lab remotely, I enabled and configured OpenVPN on a pfSense machine, so I can reach all my subnets.
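To make the inter-VLAN routing idea concrete, here is a small sketch using Python's standard `ipaddress` module. The VLAN IDs and subnets are purely illustrative (the article does not disclose its actual addressing); the pattern is one /24 per VLAN, with the switch's routed interface at the first usable address of each:

```python
import ipaddress

# Hypothetical VLAN plan (IDs and subnets are illustrative, not the
# lab's real addressing): one /24 per VLAN, with the switch's routed
# interface at the first usable address acting as that VLAN's gateway.
vlans = {10: "192.168.10.0/24", 20: "192.168.20.0/24", 30: "192.168.30.0/24"}

for vlan_id, cidr in vlans.items():
    net = ipaddress.ip_network(cidr)
    gateway = next(net.hosts())  # first usable IP in the subnet
    print(f"VLAN {vlan_id}: {net} routed via {gateway}")
```

Each VM then just points its default gateway at the switch's interface in its own VLAN, and the V1910 handles traffic between subnets.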

Storage

I have three different storage systems in my lab.

My primary storage is a NetApp FAS2020, in an OEM version by IBM branded as the N3300. It has 12 SATA 7200 rpm 500 GB disks and gives me a total of about 4 TB of space, using NetApp’s RAID-DP configuration.

IBM NetApp N3300

It originally had two controllers, but in order to use all 12 disks in a single configuration, and so maximize the available free space, I removed one of them. The remaining controller has two 1 Gbps iSCSI connections, through which I publish several LUNs to my ESXi servers.
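The ~4 TB figure above is roughly what you'd expect from RAID-DP, which dedicates two disks per RAID group to parity. A back-of-the-envelope sketch (the hot-spare count and the flat 10% overhead factor are my own assumptions, not NetApp-published numbers; real aggregates also lose space to disk right-sizing):

```python
def raid_dp_usable_gb(disks: int, disk_gb: int, spares: int = 1,
                      overhead: float = 0.10) -> float:
    """Approximate usable space for a single RAID-DP group:
    two disks go to parity, `spares` are held back as hot spares,
    and a flat overhead factor stands in for filesystem reserves.
    The 10% overhead is a rough assumption, not a NetApp spec."""
    data_disks = disks - spares - 2  # RAID-DP: 2 parity disks per group
    return data_disks * disk_gb * (1 - overhead)

print(raid_dp_usable_gb(12, 500))  # ~4050 GB, close to the ~4 TB in the lab
```

With 12 disks, 9 remain for data after parity and a spare, which lines up with the roughly 4 TB the array actually provides.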

A second storage system is made up of three HP StoreVirtual VSAs, one installed on each server. Each node has around 400 GB of space, built from both local SSDs and magnetic disks; the system gives me a total of around 600 GB.
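The drop from 1200 GB raw (3 × 400 GB) to about 600 GB usable is what two-way mirroring across nodes produces, which is consistent with StoreVirtual's Network RAID-10 mode (my inference from the numbers, not something stated in the original text):

```python
def mirrored_usable_gb(nodes: int, per_node_gb: float, replicas: int = 2) -> float:
    """Usable space when every block is stored on `replicas` nodes,
    as two-way mirroring (e.g. StoreVirtual Network RAID-10) does.
    Assumes all nodes contribute equal capacity."""
    return nodes * per_node_gb / replicas

print(mirrored_usable_gb(3, 400))  # 600.0 GB, matching the figure above
```

The trade-off is that any single node (and therefore any single ESXi host) can fail without losing the datastore.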

Finally, two of the servers have Fusion-io Gen1 cards. They hold 320 GB each, and I use them as local datastores when I need to run tests where storage must not be the bottleneck.

  • Aaron McDonald

    I am a power user, but have no knowledge of clustering and virtualization. Do you have a cluster of machines that are run as separate machines, or somehow clustered into one machine, with virtual machines running on top of that? I know, a very basic question…

    • It’s a VMware vSphere cluster with shared storage, so I can use features like vMotion.