Like many people working in IT, I have my own lab. It’s a good lab, with many components and many technologies running in it, and even though I have access to my corporate lab, I usually prefer to use my own: the degree of freedom it gives me cannot be compared. In a corporate lab you have to limit what you do, as your actions would also impact other users. In my own lab, on the other hand, I’m free to break whatever I want 🙂
It often happens that some specific configurations (one example above all: my vCloud Director environment) look better in my own lab than anywhere else, so I also use my lab to show those technologies to partners and customers. This is easy when I’m home, even if my lab is not technically in my house: I have all the proper networking in place, and I can easily open the connections I need to my machines.
The issue I’m describing in this post sometimes happens when I travel. I may be in a hotel room, in a conference room at a customer’s site, or somewhere else entirely. In all these situations it may happen (and it has happened enough times to justify this little project) that the connections to my lab are blocked by a firewall or some other device. I have two ways to connect to my lab: RDP to a jumpbox machine, published on a different port than the usual tcp/3389, and an IPsec VPN concentrator. On one occasion, neither of them worked at a customer’s site, and we ended up tethering from a colleague’s phone. I decided it was time to develop a better solution, one able to work in almost any situation.
The linux jumpbox
As in the previous version of my solution, the idea is to have a machine in the internal network, exposed over the internet in a secure way, that I can use as a jumpbox to reach all the other machines. The first iteration of the design was a Windows server, with its Remote Desktop published over a non-standard TCP port. It served me well for a while, and was a good solution to reach my lab when IPsec was blocked by a firewall. Still, it had issues: even when the TCP port is reachable, I’m connected to just one RDP session, and every following connection is executed “inside” this machine. Think about another RDP session, an SSH login to a linux box, and so on. In all those cases, things like clipboard copy/paste or file transfers from my local machine to the lab had to go through the jumpbox, and I always had to cycle through the different RDP windows when working with several machines at the same time. This is where a VPN would help, but IPsec too is often blocked in public networks.
So, this time I decided to build something different. I have used ssh a lot in the past, and I still connect to many machines with this protocol. It’s extremely fast and, when used with the proper options, also very secure. It also has a nice feature called ssh tunneling, where ssh is used not just to connect to the linux machine running the ssh server itself, but as a gateway/forwarder to other machines hosted in the internal network. This way, the only service exposed over the internet is the SSH server itself, and I don’t need to publish multiple RDP ports over the internet or open the connections on the jumpbox: the connections are all opened from my laptop, and ssh tunneling forwards them to the required machines. Compared to a VPN connection I still miss a pure network connection, for example to open SMB shares from my computer to my lab, but this is not an issue for me: the bandwidth would be too slow anyway, and for small files I can still leverage the local disk mapped inside RDP, or SCP for linux machines.
Expose and protect SSH
The first part of the project is to create and configure the linux machine. I’m a CentOS user, so this is the distribution I will use but, with some minor differences, the operation is the same on other distributions.
First, I installed and configured a CentOS 7.0 virtual machine in my lab. It sits in my physical network, 192.168.0.0/24, with the IPv4 address 192.168.0.198. There are also several internal networks, but all of them are properly routed between each other by my L3 switch. So, I just need to reach the Linux box to then be able to jump into the other networks.
After upgrading the system with every available package, I moved on to securing ssh even further. SSH is installed and enabled by default on CentOS, but I wanted a tighter configuration before exposing it to the internet: mainly, I wanted to use keys instead of username and password. There are several tutorials on the internet explaining how to do this, but I felt another one with step-by-step instructions would be good to have anyway.
On the lab’s firewall, I publish the ssh connection over port tcp/22 to the internet, using my public IP address. For this blog I’ll use a fake public IP address, 18.104.22.168. On my Windows machine I installed PuTTY, PuTTYgen, and Pageant. After SSH is published, I quickly test from my Windows laptop that I can correctly connect to the ssh server using username and password in PuTTY:
Ok, the firewall rule is correct. Now, time to configure ssh to only accept access via ssh keys.
Using PuTTYgen, I create a new private/public key pair. I start the program and click the Generate button; the default parameters are fine. After moving the mouse around to generate the needed entropy, the key is ready. I can optionally apply a passphrase to further protect the key. Then, I save both the public and the private key in a safe location on my computer. Before closing PuTTYgen, I copy the public key to the clipboard.
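PuTTYgen is the Windows way; from a Linux or macOS client, the same key pair can be generated with stock OpenSSH. This is just a sketch, and the key file name is my own example:

```shell
# Create the client-side key directory if it doesn't exist yet
mkdir -p ~/.ssh && chmod 700 ~/.ssh
# Generate a 4096-bit RSA key pair; answer the prompt (or use -N "passphrase")
# to protect the private key, exactly as PuTTYgen suggests
ssh-keygen -t rsa -b 4096 -f ~/.ssh/jumpbox_key -N "" -q
# The public half is what goes into authorized_keys on the jumpbox:
cat ~/.ssh/jumpbox_key.pub
```

A key generated this way can be imported into PuTTYgen too, if you later need it in PuTTY’s own .ppk format.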
Then, I log in again to the SSH server (still using username and password). One interesting piece of information I receive from the ssh server upon logging in is this:
Last failed login: Wed Sep 28 03:51:40 CEST 2016 from 22.214.171.124 on ssh:notty
There were 32 failed login attempts since the last successful login.
Multiple login attempts have been made already: scanners and bots are already trying to get into the ssh server, so key-only authentication is really needed before exposing ssh over the internet! I run these commands to create a new file to store the authorized keys:
mkdir ~/.ssh
chmod 700 ~/.ssh
vi ~/.ssh/authorized_keys
and I paste the key from the clipboard into the file. I save the file, exit vi, and configure the permissions so that the file is readable/writable only by my user:
chmod 600 ~/.ssh/authorized_keys
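The same server-side steps can also be scripted; this sketch assumes the public key text was saved to /tmp/jumpbox_key.pub on the server instead of going through the clipboard:

```shell
# Create the key store with the permissions sshd insists on
mkdir -p ~/.ssh && chmod 700 ~/.ssh
# Append the public key (>> keeps any keys that are already authorized)
cat /tmp/jumpbox_key.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```

From a Linux client, OpenSSH’s ssh-copy-id utility does all of this in one shot over the network, which is handy when you manage more than one jumpbox.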
In PuTTY, I start creating a new connection profile. I enter the IP address of my jumpbox, then go to Connection -> SSH -> Auth. Here, with the Browse button, I load the private key I created before. I go back to Session and save the profile. Then I load it and try to log in with the new keys, and this is what I see:
The key is working correctly, so I can safely disable interactive login based on username and password. To do so, on the ssh server I edit the configuration file:

vi /etc/ssh/sshd_config

and change these lines like this:
[...]
Protocol 2
PasswordAuthentication no
[...]
Then restart sshd:
systemctl restart sshd
If I now try to log in using username and password, I get this error from PuTTY:
but I can still login using my private key. The SSH server is more secure now!
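While editing sshd_config, a few more options are commonly recommended for an internet-facing server. These are my suggestions, not strictly required by this setup, so adjust to taste:

```
# /etc/ssh/sshd_config -- optional extra hardening
PermitRootLogin no        # never allow direct root login
MaxAuthTries 3            # cut brute-force attempts short
ClientAliveInterval 300   # server-side keepalives, useful for idle tunnels
ClientAliveCountMax 2     # drop the session after two missed keepalives
```

The keepalive options are especially useful later with tunnels, since hotel and guest networks tend to silently drop idle TCP connections.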
If you log in multiple times a day and you don’t want to type the passphrase every time, you can use Pageant to cache it and provide it to PuTTY.
Tunnel RDP inside SSH
Now that the ssh connection is ready, I can use it to reach my internal servers, for example via RDP. In my example, I need to connect to my Windows-based vCenter server running on the internal IP 10.2.50.111. I open PuTTY again and load the saved connection settings, then go to Tunnels. Here, I choose a random Source port, like 3000, and in Destination I type the address and port of the remote machine, in my case 10.2.50.111:3389. I click Add, and the result looks like this:
I then save the session and open it. As before, PuTTY asks me for the passphrase of the private key. After the session is established, I can open a remote desktop client and point it to localhost:3000 as the connection address.
RDP connects to the local port 3000, and PuTTY forwards this connection to the SSH server, which in turn forwards it to the RDP port of my remote machine. Cool, isn’t it?
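For reference, the same tunnel can be opened from a Linux or macOS client with stock OpenSSH instead of PuTTY. This is a sketch using this post’s addresses; the user name “admin” and the key path are placeholders:

```shell
# Same tunnel as the PuTTY configuration above, sketched with stock OpenSSH.
LOCAL_PORT=3000            # the "Source port" chosen in PuTTY
TARGET=10.2.50.111:3389    # the "Destination" (internal host:port)
# -N: no remote shell, tunnel only; -i: the private key created earlier
echo "ssh -N -i ~/.ssh/jumpbox_key -L ${LOCAL_PORT}:${TARGET} admin@18.104.22.168"
```

Running the printed command opens the tunnel; extra -L flags can be stacked in the same invocation to forward more internal machines at once.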
From here on, I can expand the configuration by adding additional tunnels to my PuTTY session, all in the same profile. It’s not like having a full VPN link, where I can freely navigate the remote network and connect to any resource, but if I load all the tunnels I need in advance into PuTTY, I can then use a remote desktop manager like RoyalTS and connect to multiple remote machines at the same time, all through the same SSH connection. I can even reach web servers (like the vCenter Web Client) or other ssh daemons running on my linux machines.
SSH is usually allowed in guest networks, but you can also think about publishing ssh over a different tcp port, like 80, in case it’s blocked on its native port 22.
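A hedged sketch of that change: sshd can listen on an additional port by editing the same /etc/ssh/sshd_config file (multiple Port lines can coexist, so tcp/22 keeps working inside the lab):

```
# /etc/ssh/sshd_config -- listen on a second port for restrictive networks
Port 22
Port 443
```

On CentOS 7 with SELinux enforcing, sshd must also be allowed to bind the new port with `semanage port -a -t ssh_port_t -p tcp 443` (from the policycoreutils-python package), the firewall publishing rule has to be updated accordingly, and sshd restarted. I suggest tcp/443 over tcp/80 since outbound HTTPS is almost never blocked in guest networks.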