Balance multiple View Connection Servers using HAProxy


In medium and large deployments, you can distribute the load across multiple VMware View Connection Servers by adding additional servers in the “Replica” role. However, load balancing and failover between them is not built into the software: to distribute View clients among the servers, you need an external load balancer.

VMware has its own way of doing this, by leveraging the balancing features of vCNS (VMware vCloud Networking & Security, formerly known as vShield Edge). However, I have always felt more at ease with HAProxy. This program has a really small memory footprint, and offers neat features like SSL offloading and multiple load-balancing algorithms.

HAProxy can run on a single Linux machine, balancing multiple backend servers, but for a real HA deployment it is better to deploy at least 2 nodes, with a virtual IP address managed by another great program, Keepalived. For my deployments I use CentOS 6, so this tutorial is based on that Linux distribution. The network has these machines and IP addresses:

hostname              IP Address      Note
lb1.domain.local      192.168.1.162   Load Balancer 1, master
lb2.domain.local      192.168.1.163   Load Balancer 2, slave
view.domain.local     192.168.1.170   Virtual IP managed by balancers
view01.domain.local   192.168.1.165   View Connection Server
view02.domain.local   192.168.1.166   View Connection Server (Replica)

HAProxy can work with either 1 or 2 network connections; in this example all the servers are connected to the same network.

First we need to activate the Extra Packages for Enterprise Linux (EPEL) repository, which hosts packages for both HAProxy and Keepalived. Install the EPEL repository info into YUM by passing rpm the URL of the epel-release package for your CentOS version:

#rpm -ivh <epel-release-rpm-url>

Now we will install the HAProxy and Keepalived software:

#yum -y install haproxy keepalived

Edit the Keepalived config file and make it look something like below:

#vi /etc/keepalived/keepalived.conf


global_defs {
    notification_email {
        # add your recipient address(es) here
    }
    notification_email_from keepalived@domain.local
    smtp_connect_timeout 30
}

vrrp_script chk_haproxy {
    script "killall -0 haproxy"
    interval 1                     # check every second
    weight 2                       # add 2 points of prio if OK
}

vrrp_instance VI_1 {
    interface eth0
    state MASTER
    virtual_router_id 51
    priority 101                   # 101 on master, 100 on slaves
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.170
    }
    track_script {
        chk_haproxy
    }
}
A quick explanation of the parameters you see in this configuration file:

  • Keepalived uses VRRP to manage the virtual IP between the balancers. VRRP is an active/passive system, so at any given time the virtual IP is listening on only one of the balancers. This makes the use of Keepalived more “network friendly”, since the IP is up on only one server at a time and no dedicated network configuration is needed.
  • The VRRP configuration proper is wrapped around a check script that monitors the status of HAProxy; this is vital to guarantee a working configuration. There is no use in having the virtual IP listening on a balancer where, for some reason, HAProxy is dead. So, if the HAProxy check fails, the script fails the virtual IP over to the other balancer, where HAProxy is supposed to be running.
  • Priority is 101 on the master, and 100 on all other slaves. Remember to change this value when you copy the configuration to the slave nodes.
  • To protect the communication between nodes, you should use a password. This is a pre-shared key you need to configure on every balancer.
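The check command deserves a note. Here is a minimal sketch of the liveness semantics it relies on, demonstrated safely against the current shell's PID (since HAProxy may not be running on the machine where you try this):

```shell
#!/bin/sh
# chk_haproxy relies on the exit code of "killall -0 haproxy": signal 0 is
# never delivered; the command only reports whether a process with that
# name exists (exit 0) or not (non-zero). Same semantics with kill -0:
if kill -0 $$ 2>/dev/null; then
    echo "process alive"    # exit 0: keepalived adds the +2 priority weight
else
    echo "process dead"     # non-zero: priority drops and the VIP fails over
fi
```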
Before you can load the virtual IP and test it, there are some other configuration changes to be made. First, add this line at the end of the /etc/sysctl.conf file:
net.ipv4.ip_nonlocal_bind = 1

and apply the new parameter by executing:

#sysctl -p
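To confirm the parameter took effect, you can read it back; the /proc path below is equivalent to the sysctl key:

```shell
# Read back net.ipv4.ip_nonlocal_bind via /proc; it must be 1 so a
# balancer can bind sockets to the virtual IP it does not currently own.
# (Equivalent command: sysctl net.ipv4.ip_nonlocal_bind)
cat /proc/sys/net/ipv4/ip_nonlocal_bind
```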

Extra configuration of iptables is required for Keepalived; in particular, we must accept VRRP multicast traffic (VRRP advertisements are sent to the multicast address 224.0.0.18):

#iptables -I INPUT -d 224.0.0.18/32 -j ACCEPT

Then, add this rule for the VRRP IP protocol:

#iptables -I INPUT -p 112 -j ACCEPT

In addition, insert rules matching the traffic you are load balancing; for View this is HTTP and HTTPS (by default only HTTPS, but I’m going to create a rule inside HAProxy to redirect HTTP calls to HTTPS):

#iptables -I INPUT -p tcp --dport 80 -j ACCEPT
#iptables -I INPUT -p tcp --dport 443 -j ACCEPT

Finally save the iptables config so it will be restored after restarting, and start Keepalived:

#service iptables save
#service keepalived start

Once you have configured both balancers, you can see the virtual IP listening on the master node:

#ip addr sh eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 1000
    link/ether 00:50:56:b8:7b:8a brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.162/24 brd 192.168.1.255 scope global eth0
    inet 192.168.1.170/32 scope global eth0
    inet6 fe80::250:56ff:feb8:7b8a/64 scope link
       valid_lft forever preferred_lft forever

If you run the same command on the slave node, you will see only the physical IP. You can test the virtual IP by pinging it and then stopping the keepalived service on the master node. You may see a failed ping (depending on the speed of the underlying network infrastructure), and then the ping replies will start coming back from the slave node. Once you restart Keepalived on the master, it will regain control of the virtual IP.

Cool! Let’s move on to HAProxy configuration:

#vi /etc/haproxy/haproxy.cfg
# Global settings
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    stats socket /var/lib/haproxy/stats

# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option                  http-server-close
    option                  forwardfor except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

# Redirect to secured
frontend unsecured
    bind 192.168.1.170:80
    redirect location https://view.domain.local

# frontend secured
frontend secured
    bind 192.168.1.170:443 #ssl crt ./haproxy-cert.pem
    mode tcp
    default_backend view

# balancing between the various backends
backend view
    mode tcp
    balance source
    server view01 192.168.1.165:443 weight 1 check port 443 inter 2000 rise 2 fall 5
    server view02 192.168.1.166:443 weight 1 check port 443 inter 2000 rise 2 fall 5

Let’s look in detail at the last three sections (the first part is common configuration):

  • “frontend unsecured” is a simple redirect: it intercepts every connection attempt on port 80 (HTTP) and redirects it to 443 (HTTPS). So even lazy users who forget to type https in the URL will get a working connection instead of an error.
  • “frontend secured” is the real frontend. It listens on the virtual IP, on port 443. If you want to issue a dedicated SSL certificate for the hostname “view.domain.local”, you can use OpenSSL to create a certificate request; once you have the PEM file, save it on both balancers and remove the comment on the bind line to enable it.
  • backend “view” holds the configuration of the balanced servers. The balancing mode is “source”: every source IP address is always redirected to the same Connection Server, as long as that Connection Server is alive (the check on port 443 verifies that View is listening). Source is a good choice for connections coming from many different source IPs, like View clients. Be careful if connections pass through NAT or other network services: source balancing does not work well when there are only a few source IPs.
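As an aside on the SSL comment above: HAProxy expects the PEM file to contain the private key and the certificate concatenated together. A minimal sketch with a throwaway self-signed certificate (filenames are illustrative; in production you would submit a CSR to your CA instead of self-signing):

```shell
# Generate a self-signed key + certificate pair for view.domain.local
# (testing only; replace with a CA-signed certificate in production).
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/CN=view.domain.local" \
    -keyout view.key -out view.crt 2>/dev/null

# HAProxy's "ssl crt" option wants key and cert in a single PEM file:
cat view.key view.crt > haproxy-cert.pem
```

Copy the resulting haproxy-cert.pem to both balancers before uncommenting the ssl option on the bind line.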

Once everything is configured, enable HAProxy and Keepalived to start automatically on system boot and start HAProxy:

# chkconfig haproxy on
# chkconfig keepalived on
# service haproxy start

You will now be able to connect to https://view.domain.local and reach one of your View Connection Servers.