My objective was to have two Apache web servers that were load balanced, but I didn't want a single point of failure in the load balancer, and I didn't want to spend the resources of two more servers solely on a highly available load balancer. So I combined the roles and ran the load balancers on the same two servers running Apache. If you have a high-traffic site this is probably not the best solution for you, but if your traffic is moderate it at least allows you to reboot a web server without taking down your site.
There are several tutorials out there on the subject, but none of them completely solved my challenge. I’m by no means an expert on this, but I’ve gathered the bits and pieces from the other tutorials to make this work.
This is NOT creating an active/passive Apache cluster. We are creating an active/passive Load Balancer cluster pointing to two independent Apache web servers that just happen to reside on the same servers.
Here’s my setup:
OS: CentOS 5.5 x64
Virtual IP: 192.168.0.40
Server1: web01.example.com 192.168.0.41
Server2: web02.example.com 192.168.0.42
Start with Server1 and install the necessary packages
yum install httpd heartbeat heartbeat-ldirectord
After several tests, I found that heartbeat would always fail on the first attempt to install. I’m not sure if this is specific to CentOS 5.5 x64 or if it also happens on previous releases. Yum would say the install was successful but if you scroll up you would see this:
useradd: user hacluster exists
error: %pre(heartbeat-2.1.3-3.el5.centos.x86_64) scriptlet failed, exit status 9
error: install: %pre scriptlet failed (2), skipping heartbeat-2.1.3-3.el5.centos
Nonetheless, installing heartbeat a second time always worked.
yum install heartbeat
Set the services to start automatically
chkconfig httpd on
chkconfig heartbeat on
chkconfig ldirectord on
Other tutorials would now instruct you to create an interface eth0:0 but the heartbeat service takes care of this for you.
Configure Apache
Start Apache
service httpd start
Since Apache will be accepting connections routed from the Load Balancer, we have to configure it to handle the virtual IP.
Create a second loopback adapter
vim /etc/sysconfig/network-scripts/ifcfg-lo:0
with this content
DEVICE=lo:0
IPADDR=192.168.0.40
NETMASK=255.255.255.255
ONBOOT=yes
NAME=loopback
Bring the new interface up
ifup lo:0
Verify the interface is online
ifconfig lo:0
lo:0      Link encap:Local Loopback
          inet addr:192.168.0.40  Mask:255.255.255.255
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
Files needed on the web server
We need a file on the web server so the Load Balancer can tell that the web server is up and running.
Create this file
vim /var/www/html/check.txt
with this content
ok
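You can sanity-check this file by hand the same way ldirectord will: fetch it and compare the body to the expected string. A minimal sketch (the function name and sample values are just for illustration):

```shell
# Compare a fetched body against the expected "receive" string,
# the same comparison ldirectord makes when it requests /check.txt.
check_body() {
    if [ "$1" = "ok" ]; then
        echo healthy
    else
        echo down
    fi
}

# On a real client you would feed it an actual fetch, e.g.
# (192.168.0.41 is web01 in this tutorial):
#   check_body "$(curl -s http://192.168.0.41/check.txt)"
check_body "ok"       # prints: healthy
check_body "banana"   # prints: down
```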
For testing purposes, so that we know which webserver the load balancer is pointing us to, create this file.
vim /var/www/html/index.html
with this content
web01
Configure the Load Balancer, ldirectord
The Load Balancer must route traffic from the virtual IP to Apache, so we must enable IP forwarding.
Edit this file
vim /etc/sysctl.conf
Change the 0 to a 1 on this line
net.ipv4.ip_forward = 1
And add these lines
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.eth0.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.eth0.arp_announce = 2
To activate the changes run this:
sysctl -p
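You can confirm the ARP settings took effect by reading each key back. A quick verification loop (the key names are exactly those set above; output values depend on your machine):

```shell
# Read back each ARP-related key we set above; prints one
# "key = value" line per key ("?" if the key is unavailable,
# e.g. when there is no eth0 interface on this machine).
out=$(for key in net.ipv4.conf.all.arp_ignore \
                 net.ipv4.conf.eth0.arp_ignore \
                 net.ipv4.conf.all.arp_announce \
                 net.ipv4.conf.eth0.arp_announce; do
    printf '%s = %s\n' "$key" "$(sysctl -n "$key" 2>/dev/null || echo '?')"
done)
printf '%s\n' "$out"
```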
Create the Load Balancer configuration file
vim /etc/ha.d/ldirectord.cf
With this content:
checktimeout=30
checkinterval=2
autoreload=yes
logfile="/var/log/ldirectord.log"
quiescent=no

virtual=192.168.0.40:80
        real=192.168.0.41:80 gate
        real=192.168.0.42:80 gate
        service=http
        request="/check.txt"
        httpmethod=GET
        receive="ok"
        persistent=100
        scheduler=lblc
        protocol=tcp
        checktype=negotiate
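As a quick sanity check you can pull the real-server lines back out of the file to confirm both back ends are listed. A sketch (the heredoc just recreates a fragment of the config for illustration; on the real server you would grep /etc/ha.d/ldirectord.cf directly):

```shell
# Recreate the virtual/real section in a temp file for illustration.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
virtual=192.168.0.40:80
        real=192.168.0.41:80 gate
        real=192.168.0.42:80 gate
        receive="ok"
EOF

# Extract the real-server entries defined under the virtual service.
reals=$(grep -o 'real=[0-9.]*:[0-9]*' "$cfg")
printf '%s\n' "$reals"
rm -f "$cfg"
```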
Start the Load Balancer
service ldirectord start
If you get this error then heartbeat failed to install. Install it again.
/etc/init.d/ldirectord: line 33: /etc/ha.d/shellfuncs: No such file or directory
Create the heartbeat service configuration file.
vim /etc/ha.d/ha.cf
With this content (the node names must match the output of uname -n)
logfile /var/log/heartbeat.log
logfacility local0
keepalive 2
deadtime 10
bcast eth0
mcast eth0 225.0.0.1 694 1 0
auto_failback off
node web01.example.com
node web02.example.com
Next we specify which resources we want heartbeat to manage: in this case, the virtual IP and the Load Balancer. Unlike other HA resources, the Load Balancer will be running on both nodes at the same time, but heartbeat controls which one is the master. Since we are running all the components on just two servers, and each server carries an additional loopback alias (lo:0) with the same address as the virtual IP, we must specify which interface heartbeat should bind the virtual IP to, or it will fail to come online.
Create this file:
vim /etc/ha.d/haresources
With this content:
web01.example.com ldirectord::ldirectord.cf LVSSyncDaemonSwap::master IPaddr::192.168.0.40/24/eth0/192.168.0.255
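Each token on that line is a resource script with its arguments attached after the ::. A parsing sketch of how heartbeat splits one token (the resource.d path in the comment assumes the usual heartbeat layout; verify it on your install):

```shell
# Split one haresources token into the script name and its arguments,
# the same "::" split heartbeat performs before running the script.
res="IPaddr::192.168.0.40/24/eth0/192.168.0.255"
script=${res%%::*}   # everything before the first ::
args=${res#*::}      # everything after it

echo "script: $script"
echo "args:   $args"
# Heartbeat then effectively runs something like:
#   /etc/ha.d/resource.d/IPaddr 192.168.0.40/24/eth0/192.168.0.255 start
```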
Create the authkeys file required by heartbeat
vim /etc/ha.d/authkeys
With this content:
auth 3
3 md5 randomstring
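Replace randomstring with a real shared secret on both nodes. One hedged way to generate it (assumes /dev/urandom, md5sum, and awk, all standard on CentOS):

```shell
# Generate a 32-character hex secret to use in place of "randomstring".
secret=$(head -c 64 /dev/urandom | md5sum | awk '{print $1}')
echo "$secret"
```

The same secret must appear in /etc/ha.d/authkeys on both servers.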
Heartbeat requires that the authkeys file be accessible only by root.
chmod 600 /etc/ha.d/authkeys
Start heartbeat.
service heartbeat start
Wait a few moments to give heartbeat time to bring the Virtual IP online.
Verify the Virtual IP is online
ifconfig
eth0      Link encap:Ethernet  HWaddr 00:50:56:AB:00:15
          inet addr:192.168.0.41  Bcast:192.168.0.255  Mask:255.255.255.0
          inet6 addr: fe80::250:56ff:feab:15/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:9533 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6248 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:9429192 (8.9 MiB)  TX bytes:607896 (593.6 KiB)

eth0:0    Link encap:Ethernet  HWaddr 00:50:56:AB:00:15
          inet addr:192.168.0.40  Bcast:192.168.0.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:1510 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1510 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:140430 (137.1 KiB)  TX bytes:140430 (137.1 KiB)

lo:0      Link encap:Local Loopback
          inet addr:192.168.0.40  Mask:255.255.255.255
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
At this point you have a single node completely configured. You should be able to browse to http://192.168.0.40 and see the page that says web01.
For additional verification, run the following command:
ip addr sh eth0
If you haven’t yet done any of the configuration for web02 you will see this:
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether 00:50:56:ab:00:15 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.41/24 brd 192.168.0.255 scope global eth0
    inet 192.168.0.40/24 brd 192.168.0.255 scope global secondary eth0:0
    inet6 fe80::250:56ff:feab:15/64 scope link
       valid_lft forever preferred_lft forever
You can also use ipvsadm to show the current statistics for ldirectord.
ipvsadm -L -n
Again since we have not configured the second webserver you will see this
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.0.40:80 lblc persistent 100
  -> 192.168.0.41:80              Local   1      0          0
Now let's set up our second web server, which will also be the second node of our Load Balancer cluster.
Go back and repeat all the same steps for web02. Do not adjust any of the files to be specific for web02 except the index.html page. The haresources file specifies the preferred host so it should still be the same on both servers.
Once you have completed all the steps on web02, test and make sure everything is working.
ipvsadm -L -n
You should now see both web servers
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.0.40:80 lblc persistent 100
  -> 192.168.0.42:80              Route   1      0          0
  -> 192.168.0.41:80              Local   1      0          0
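This check can also be scripted: count the real-server lines in the ipvsadm output and confirm both back ends are present. A sketch using a captured sample of the output above:

```shell
# Captured sample of the ipvsadm output shown above; on a live
# balancer, replace the heredoc with: out=$(ipvsadm -L -n)
out=$(cat <<'EOF'
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.0.40:80 lblc persistent 100
  -> 192.168.0.42:80              Route   1      0          0
  -> 192.168.0.41:80              Local   1      0          0
EOF
)

# Every "->" line whose target starts with a digit is a real server
# (this skips the "-> RemoteAddress:Port" header line).
count=$(printf '%s\n' "$out" | grep -c '^ *-> [0-9]')
echo "real servers: $count"   # prints: real servers: 2
```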
Stop apache on web01
service httpd stop
Now browse to 192.168.0.40 and you should see that it says web02.
Restart apache on web01
service httpd start
The load balancer maintains connection persistence (the persistent=100 setting above), so if you keep refreshing your browser you should continue to hit the same web server.
Now you could stop apache on web02 and refresh your browser and you should get web01.
Now let's test the Load Balancer failover. Apache should be running on both servers.
From another server or workstation, start a continuous ping to your Virtual IP, 192.168.0.40

ping 192.168.0.40
On web01 stop the heartbeat service
service heartbeat stop
You should see that the IP continues to respond.
On web01, check whether eth0:0 is still there. It should not be.
ifconfig
On web02, check whether eth0:0 is there. It should be.
ifconfig
Refresh your browser and you should still have a connection to the webserver. It may or may not have connected you to a different server this time.
Congratulations! You now have a highly available load balanced web server configuration.
Hi James,
How can we know that incoming requests are distributed between the servers in the cluster 🙂? Do you have any way to test this? 🙂
By the way, thank you for the tutorial 🙂
@ Lucky
I put the name of each web server in its index file, so when I connected I could see which server I had reached.