Adding a Second High Availability Cloud Load Balancer
The final step in our fully redundant, high-availability LAMP stack is adding a second load balancer.
The two load balancers use the heartbeat package to monitor each other, so each node knows whether the other is still alive.
We configure our two load balancers in an active/passive setup, which means one load balancer is active and the other is a "hot standby" that takes over if the active one fails.
The steps are as follows:
Step 1: Add a second load balancer
Step 2: Set up hostnames
Step 3: Install and configure heartbeat
Step 4: Test heartbeat and total failover
Step 1: Add a Second Load Balancer
Set up a new server called haddock and do the following things:
• Give it a static IP address (in this tutorial, we'll use 46.20.121.120)
• Give it the private address 10.0.0.6 and attach it to the VLAN - check that you can ping it from herring (see the example after this list)
• Follow the steps in Add an Apache load balancer to set haddock up as a second Apache load balancer
• Check that visiting its static IP in a browser now shows our site
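As a quick check of the VLAN connection, you can ping haddock's private address from herring (-c 3 sends three packets and then exits):
$ ping -c 3 10.0.0.6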
Step 2: Set up Hostnames
Next, we need to set up hostnames on both herring and haddock. We'll call herring (10.0.0.5) loadb1, and haddock (10.0.0.6) loadb2.
Start by editing the /etc/hosts file on both machines:
$ vi /etc/hosts
And replace the content of the file with the following:
127.0.0.1 localhost
10.0.0.5 loadb1
10.0.0.6 loadb2
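To confirm the new names resolve, you can ping one machine from the other - for example, on herring:
$ ping -c 1 loadb2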
On herring, now run:
$ echo loadb1 > /etc/hostname
$ service hostname start
Afterwards, run:
$ hostname
$ hostname -f
On herring, both commands should show loadb1.
You should now do the same on haddock, substituting loadb2 for loadb1.
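That is, on haddock:
$ echo loadb2 > /etc/hostname
$ service hostname start
$ hostname
$ hostname -f
The last two commands should both show loadb2.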
Step 3: Install and Configure Heartbeat
We now install a package called heartbeat on both servers; it lets the two nodes monitor each other and detect when the other has gone down.
First, add the universe and multiverse repositories to your sources file:
$ vi /etc/apt/sources.list
deb http://gb.archive.ubuntu.com/ubuntu lucid universe
deb http://gb.archive.ubuntu.com/ubuntu lucid multiverse
Then, update your sources:
$ apt-get update
Now, installing heartbeat is as simple as:
$ apt-get install heartbeat
We can now set up a simple configuration by creating just three files, all of which are stored under /etc/ha.d:
• ha.cf, the main configuration file
• haresources, resource configuration file
• authkeys, authentication information
These three configuration files must be identical on both haddock and herring.
Firstly, create ha.cf:
$ vi /etc/ha.d/ha.cf
The file should contain the following:
use_logd on
debugfile /var/log/ha-debug
logfile /var/log/ha-log
logfacility local0
keepalive 2
deadtime 10
udpport 694
ucast eth0 46.20.121.121
ucast eth0 46.20.121.120
ucast eth1 10.0.0.5
ucast eth1 10.0.0.6
node loadb1
node loadb2
auto_failback on
The first few options set up logging. keepalive specifies that heartbeat should check the status of the other server every 2 seconds. deadtime specifies that it should assume the other server has gone down if there is no response after 10 seconds.
ucast specifies the IP addresses of the two servers (unicast directives to the machine's own IP addresses are ignored, which is why the file can be the same on both herring and haddock). auto_failback specifies that if the master server goes down and then comes back up again, heartbeat should automatically return control to the master server when it reappears.
Note that the names of the two nodes at the end of the above file must match the names returned by uname -n on each server.
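You can verify this on each server now:
$ uname -n
This should print loadb1 on herring and loadb2 on haddock.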
Next, create haresources:
$ vi /etc/ha.d/haresources
On both servers, the file should look like this:
loadb1 IPaddr::46.20.121.113
The file specifies the node this resource group prefers to run on (loadb1) and the IP address it serves.
Here we specify our virtual IP, which is 46.20.121.113 - you should substitute yours. heartbeat brings this address up as an interface alias (eth0:0, as we'll see in Step 4) on whichever node is currently active.
And finally, we need to set up authorization keys:
$ vi /etc/ha.d/authkeys
The content of the file should be as follows, on both servers:
auth 1
1 sha1 somerandomstring
somerandomstring is a password that the two heartbeat daemons on loadb1 and loadb2 use to authenticate against each other. Use your own string here. You have the choice between three authentication mechanisms (crc, md5 and sha1) - we use SHA1.
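Any method of generating a long random password will do here - for example, hashing some data from /dev/urandom (this is just one possibility):
$ dd if=/dev/urandom bs=512 count=1 2>/dev/null | sha1sum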
/etc/ha.d/authkeys should be readable by root only, so finally, restrict its permissions:
$ chmod 600 /etc/ha.d/authkeys
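Since the three files must be identical on both servers, it's easiest to create them on herring and copy them over - for example with scp, assuming root SSH access between the two machines over the VLAN:
$ scp /etc/ha.d/ha.cf /etc/ha.d/haresources /etc/ha.d/authkeys root@10.0.0.6:/etc/ha.d/
Re-run the chmod on haddock afterwards to be safe.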
Step 4: Test Heartbeat and Total Failover
On both herring and haddock, run the following:
$ /etc/init.d/heartbeat stop
$ /etc/init.d/heartbeat start
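If anything seems wrong, the log files we configured in ha.cf are the first place to look - you can watch heartbeat come up with:
$ tail -f /var/log/ha-log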
Now, if you visit the virtual IP (46.20.121.113 in our example), you should see the site running.
After a few seconds, try ifconfig on herring to check that it is the master node:
$ ifconfig
You should see a new entry for eth0:0 as follows:
eth0:0 Link encap:Ethernet HWaddr 02:00:2e:14:79:79
inet addr:46.20.121.113 Bcast:46.20.121.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
There should be no equivalent entry for ifconfig on haddock - if there is, check that your haresources file is identical on both servers.
Finally, the acid test. In your Crosspeer control panel, shut down herring. This is our master server, so when it goes down, the site should automatically fail over to haddock.
After you have shut down herring, re-visit the virtual IP address in a browser and check that the site continues to work. You should also run ifconfig on haddock and check that the eth0:0 entry, as above, has now appeared there.
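Because we set auto_failback on, herring should reclaim the virtual IP when it comes back. Boot herring again from the control panel, then check both servers:
$ ifconfig eth0:0
The 46.20.121.113 entry should reappear on herring and disappear from haddock.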
If this is the case, then our high-traffic, scalable, redundant web application is now complete, with no single point of failure anywhere in the system.