A customer (an ISP) wanted to set up a VoIP network using Asterisk. It was expecting a traffic load of 2000 simultaneous calls, so it would require a cluster of Asterisk servers.
The servers also had to be integrated with OpenSER + A2Billing, combined with high availability.
We evaluated Heartbeat for High Availability and installed it on the server .
What is Heartbeat ?
Your "cluster" is established via a "heartbeat" between two or more computers (nodes) generated by the software package of the same name.
The basic goal of the High Availability Linux project is to:
Provide a high-availability (clustering) solution for Linux which promotes reliability, availability, and serviceability (RAS) through a community development effort.
Heartbeat now ships as part of SUSE Linux, Mandriva Linux, Debian GNU/Linux, Ubuntu Linux, Red Flag Linux, and Gentoo Linux. Ultra Monkey, and several companies' embedded systems, are also based on it. Although this is called the Linux-HA project, the software is highly portable and runs on FreeBSD, Solaris, and OpenBSD, even on Mac OS X from time to time.
There have been many articles and several chapters in books written on this project and software. See the PressRoom for more details.
We are now competitive with commercial systems similar to those described in D. H. Brown's 1998 or March 2000 analysis of RAS cluster features and functions. This release 2 series brings technologies and basic capabilities which match or exceed the capabilities of many commercial HA systems. We think you'll be surprised. An R2 getting started guide is available.
We include advanced integration with the DRBD real-time disk replication software, and also work well with the LVS (Linux Virtual Server) project. We expect to continue to collaborate with them in the future, since our goals are complementary.
We have a page of reference sites to provide a few real-life examples of how organizations both small and large use Heartbeat in production. Submissions for this page are actively encouraged.
Heartbeat is a leading implementor of the Open Cluster Framework (OCF) standard.
Definitions
Node: an instance that runs an OS
Resource: a service or facility you want to be highly available
RA: Resource Agent, a script that starts, stops and monitors a resource
CRM: Cluster Resource Manager, the Heartbeat v2 component that decides where resources run
More to come....
Getting Started
What you need:
At least two computers. The hardware does not need to be identical on both machines (CPU, amount of memory, kernel, etc.).
One or more media to send the heartbeat packets over (serial, USB or Ethernet cable(s)).
A Linux OS and the media between the 2 nodes configured and working.
High availability as good as it gets:
Avoiding a single point of failure is the whole idea of the setup. This means it is advisable to use more than one communication method between the nodes. It is also advisable not to rely on Ethernet cables alone, or on any single type of media, because in that case the media stack itself would be a single point of failure.
The Setup
This quickstart will describe a full installation and configuration of Heartbeat 2 and some resources on Debian Unstable. It is a quickstart so we will configure a simple and common configuration. We will make an Apache 2 service highly available on a shared IP (Active/Passive). The configuration will contain monitors and starting the resources will be done using a group. Completing this document should give you a basic understanding of Heartbeat 2.
For simplicity, we use only one media, an Ethernet cable between the hosts:
The primary node: hostname = mars, ipaddr = 10.0.0.10
The slave node: hostname = venus, ipaddr = 10.0.0.20
The IP Apache listens to: 10.0.0.30
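Heartbeat identifies cluster members by the name returned by uname -n, so both hosts should be able to resolve each other's hostname. A minimal sketch of the relevant /etc/hosts entries on both nodes, using the names and addresses from the example above:

```
10.0.0.10   mars
10.0.0.20   venus
```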
Installing Heartbeat
Heartbeat is available at:
Download Software
You can build it from source:
tar -xvzf /path_to_downloaded_tarball/heartbeat-x.x.x.tar.gz
cd heartbeat-x.x.x
./ConfigureMe configure
make && make install
or use the RPM or .deb binaries available. If you use Gentoo, you can simply emerge it.
I used the .deb binaries for Debian, so I added the following line to /etc/apt/sources.list:
deb http://ftp.belnet.be/debian/ unstable main contrib non-free
and then
apt-get update
apt-get install heartbeat-2
Configuring Heartbeat
General Configuration Info
Configuring Heartbeat v2 is a little trickier than Heartbeat v1, but Heartbeat v2 can also run with a Heartbeat v1 configuration. If you would like to try out the v1 configuration first, read this.
There are 3 files you need to configure, and you have to create them yourself:

HA v1                   HA v2                           Purpose                          Notes
/etc/ha.d/authkeys      /etc/ha.d/authkeys              authenticate the nodes           the same in both versions
/etc/ha.d/ha.cf         /etc/ha.d/ha.cf                 general Heartbeat configuration  HA v2: you have to state that you want to use the CRM
/etc/ha.d/haresources   /var/lib/heartbeat/crm/cib.xml  configure the resources          a completely different format
Configuring Authkeys
The authkeys file is used to authenticate the members of the clusters. There are three types of authentication methods available:
crc: Provides virtually no security and uses few resources. Do not use this on real networks.
md5: Provides reasonable security and does not use much CPU.
sha1: The best security currently available.
In this example we are going to use md5. If you want to know more about authkeys, look at this document.
A sample /etc/ha.d/authkeys file could be:

auth 1
1 md5 key-for-md5-any-text-you-want
Whatever index you put after the keyword auth must be found below in the keys listed in the file. If you put "auth 4", then there must be a "4 signaturetype" line in the list below.
The file permissions MUST be safe so do: chmod 600 /etc/ha.d/authkeys
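Rather than typing a passphrase by hand, you can generate a random key for the authkeys file. A minimal sketch; it writes ./authkeys in the current directory for illustration, whereas in production the file is /etc/ha.d/authkeys:

```shell
# Generate a random 128-bit key and write an authkeys file using md5 signing.
KEY=$(dd if=/dev/urandom bs=512 count=1 2>/dev/null | md5sum | awk '{print $1}')
printf 'auth 1\n1 md5 %s\n' "$KEY" > authkeys
# Restrict permissions; Heartbeat refuses to start if the file is readable by others.
chmod 600 authkeys
```

Copy the same file to both nodes; the keys must match on all cluster members.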
Configuring ha.cf
#logfacility local7
#logfile /var/log/ha-log
#debugfile /var/log/ha-debug
use_logd on
udpport 694
keepalive 1 # 1 second
deadtime 10
initdead 80
bcast eth0
#serial /dev/ttyS0 #if you use serial
#baud 19200 #if you use serial
node mars venus
crm yes
auto_failback yes
Log and debug explained:
If the logging daemon is used, the logfile/debugfile/logfacility directives in this file no longer have any effect; configure the logging daemon instead via its own config file (the default is /etc/logd.cf). An example is in /usr/share/doc/heartbeat-2/ha_logd.cf.
If use_logd is not set, all log messages are written to the log files directly. The logging daemon is started and stopped by the heartbeat init script.
use_logd
Whether to use ha-logd; setting it to "yes" is recommended.
logfacility
Defines which syslog logging facility it should use for logging its messages. Ignored if use_logd is enabled.
logfile
All non-debug messages from Heartbeat will go into this file. Ignored if use_logd is enabled.
debugfile
Specifies the file Heartbeat will write debug messages to. Ignored if use_logd is enabled.
Time & Network:
udpport
Specifies which port Heartbeat will use for communication between nodes, default 694.
keepalive
Specifies the time between keepalive packets.
deadtime
Specifies how quickly Heartbeat should decide that a node in a cluster is dead.
initdead
Specifies how long to wait before a node is considered dead after things first come up (e.g. after reboot), to take into account bootup time and network initialization.
bcast
Specifies which interface(s) should be used to send keepalives on (if you want to use broadcasts).
Note that mcast and ucast can be used in place of bcast if desired.
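For a two-node cluster like this one, unicast keepalives are a common alternative to broadcast. A hedged sketch of the relevant ha.cf lines, using the interface and addresses from the example setup (each node's line names its peer):

```
ucast eth0 10.0.0.20   # on mars
ucast eth0 10.0.0.10   # on venus
```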
Other:
node
Tells what machines are part of the cluster.
crm
Specifies whether Heartbeat should run v2-style, i.e. with the CRM. We set it to yes.
Note: when you want to use a Heartbeat 1 style setup, set crm to off.
I want to know more about the ha.cf file
Configuring Resources
/var/lib/heartbeat/crm/cib.xml: this file specifies the resources (services) in the cluster and which node is their default owner.
A simple example setup that explains a basic resource configuration can be found here: v2/Examples/Simple.
An example that explains some more advanced configuration is here: v2/Examples/Advanced.
And more advanced things are here: ClusterInformationBase/UserGuide.
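For the quickstart scenario above (a shared IP plus Apache combined in a group, with monitor operations), the resources section of cib.xml might look roughly like the sketch below. This is an illustration, not a drop-in file: the ids are made up, and your Heartbeat 2 version may expect slightly different attributes, so compare against the examples linked above.

```xml
<group id="web_group">
  <primitive id="ip_resource" class="ocf" provider="heartbeat" type="IPaddr">
    <instance_attributes id="ip_resource_attrs">
      <attributes>
        <nvpair id="ip_addr" name="ip" value="10.0.0.30"/>
      </attributes>
    </instance_attributes>
    <operations>
      <op id="ip_monitor" name="monitor" interval="5s" timeout="5s"/>
    </operations>
  </primitive>
  <primitive id="apache_resource" class="ocf" provider="heartbeat" type="apache">
    <operations>
      <op id="apache_monitor" name="monitor" interval="10s" timeout="20s"/>
    </operations>
  </primitive>
</group>
```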
Other configuration
Apache
You need to configure two things for our setup; both can be set directly in the Apache configuration.
In Debian: /etc/apache2/apache2.conf
Define the IP that Apache has to listen on:
Listen 10.0.0.30:80
For the monitoring:
# Allow server status reports, with the URL of http://servername/server-status
# Change the ".your_domain.com" to match your domain to enable.
#
<Location /server-status>
    SetHandler server-status
    Order allow,deny
    Allow from all
</Location>
DRBD
If you want to use DRBD instead of shared storage, you need to configure it as well.
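Full DRBD setup is beyond the scope of this quickstart, but for orientation, a minimal drbd.conf resource section might look roughly like the sketch below. The device name, backing partition and port are assumptions; adapt them to your storage layout:

```
resource r0 {
  protocol C;               # synchronous replication
  on mars {
    device    /dev/drbd0;
    disk      /dev/sda7;    # assumed backing partition
    address   10.0.0.10:7788;
    meta-disk internal;
  }
  on venus {
    device    /dev/drbd0;
    disk      /dev/sda7;
    address   10.0.0.20:7788;
    meta-disk internal;
  }
}
```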
OCF Scripts
We recommend that any resource agents you write be OCF resource agents.