Introduction to Heartbeat

The Heartbeat project is a component of the Linux-HA project and implements a high-availability cluster system. Heartbeat monitoring and cluster communication are two key components of any HA cluster; in the Heartbeat project both are implemented by the heartbeat module. The following describes the heartbeat module's reliable messaging mechanism and explains how it is implemented.

As Linux is increasingly deployed in mission-critical applications, it is expected to provide services that used to be available only from large vendors such as IBM and Sun. A key feature of those commercial offerings is the high-availability cluster.

How heartbeat works
How heartbeat (Linux-HA) works: heartbeat has two core parts, heartbeat monitoring and resource takeover. Heartbeat monitoring can run over network links and serial lines, and redundant links are supported; the nodes send packets to each other to report their current state. If no packet is received from the peer within the configured time, the peer is considered failed, and the resource-takeover module is started to take over the resources or services that were running on the failed host.
High-availability clusters
A high-availability cluster is a group of independent computers, connected by hardware and software, that appears to users as a single system. If one or more nodes in such a group stop working, services are switched from the failed node to a healthy one without interrupting service. This definition implies that the cluster must be able to detect when nodes and services fail and when they become available again. That task is usually handled by a piece of code called the "heartbeat"; in Linux-HA it is performed by a program named heartbeat.
Message communication model
Heartbeat consists of the following components:

heartbeat - inter-node communication and heartbeat-checking module

CRM - cluster resource manager

CCM - maintains a consistent view of cluster membership

LRM - local resource manager

StonithDaemon - provides node reset (STONITH) services

logd - non-blocking logging daemon

apphbd - provides application-level watchdog timers

Recovery Manager - application failure recovery

Infrastructure - plugin interfaces, inter-process communication, and so on

CTS - cluster test system, used for cluster stress testing

This article focuses on Heartbeat's cluster communication mechanism, so the discussion below concentrates on the heartbeat module.

The heartbeat module consists of the following processes:

master process (masterprocess)

FIFO child process (fifochild)

read child process (readchild)

write child process (writechild)

In heartbeat, each communication channel corresponds to one write child and one read child. If n is the number of communication channels and p the number of processes in the heartbeat module, then p and n are related by:

p = 2*n + 2

For example, with two channels (say one Ethernet interface and one serial link) there are 2*2 + 2 = 6 processes: the master, the FIFO child, and one read/write pair per channel.

In heartbeat, the master process sends its own data, or data received from its clients, over IPC to the write children, and each write child pushes the data onto the network. In the other direction, each read child reads data from the network and passes it over IPC to the master process, which either handles it itself or forwards it to the appropriate client.

When heartbeat starts, the master process launches the FIFO child, the write children and the read children, and finally starts the client processes.
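To make the process model concrete, here is a minimal illustrative sketch in C (this is not heartbeat's actual code) of a single communication channel: the master process hands a message to its write child over a pipe, and only the write child touches the medium. The read child for the opposite direction would be forked the same way, which is where the p = 2*n + 2 process count comes from.

/* Illustrative sketch only -- NOT heartbeat source code.
 * One channel: master -> (pipe) -> write child -> "network". */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    int   to_write_child[2];            /* IPC: master -> write child */
    pid_t pid;

    if (pipe(to_write_child) < 0) {
        perror("pipe");
        return 1;
    }
    if ((pid = fork()) < 0) {
        perror("fork");
        return 1;
    }

    if (pid == 0) {                     /* write child */
        char    buf[256];
        ssize_t n;

        close(to_write_child[1]);
        while ((n = read(to_write_child[0], buf, sizeof(buf) - 1)) > 0) {
            buf[n] = '\0';
            /* a real write child would push this onto its medium
             * (UDP broadcast/multicast, serial line, ...) */
            printf("write child: sending on the wire: %s\n", buf);
        }
        _exit(0);
    }

    /* master process: hand one message to the write child */
    close(to_write_child[0]);
    write(to_write_child[1], "st=active seq=42", 16);
    close(to_write_child[1]);           /* EOF lets the child exit */
    waitpid(pid, NULL, 0);
    return 0;
}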

Reliable message communication
Heartbeat uses plugins to implement serial, UDP unicast, broadcast and multicast communication between cluster nodes. The protocol to use can be chosen per communication medium in the configuration; at startup heartbeat checks whether each configured medium exists and, if it does, loads the corresponding communication module. This makes it easy for developers to add new communication modules, for example an infrared module.
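As an illustration of the plugin idea (a generic sketch with invented names, not heartbeat's real hb_media plugin API), each communication module can be reduced to a small table of function pointers that the core loads and calls without knowing which medium is behind it:

/* Generic plugin-table sketch; the field names are invented for
 * illustration and are not heartbeat's actual interface. */
#include <stdio.h>
#include <stddef.h>

struct comm_media_ops {
    const char *name;                               /* "serial", "mcast", ... */
    int (*open_medium)(const char *device);
    int (*write_pkt)(int fd, const void *pkt, size_t len);
};

/* a stub "serial" module standing in for a real plugin */
static int serial_open(const char *device)
{
    printf("opening %s\n", device);
    return 3;                                       /* pretend file descriptor */
}

static int serial_write(int fd, const void *pkt, size_t len)
{
    (void)pkt;
    printf("fd %d: sending %zu bytes\n", fd, len);
    return (int)len;
}

static const struct comm_media_ops serial_ops = {
    "serial", serial_open, serial_write
};

int main(void)
{
    /* the core never needs to know which medium it is talking to,
     * so adding e.g. an infrared module is just one more ops table */
    const struct comm_media_ops *m = &serial_ops;
    int fd = m->open_medium("/dev/ttyS0");

    m->write_pkt(fd, "st=active", 9);
    return 0;
}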

For a high-availability cluster, if communication between the cluster nodes is unreliable, the cluster itself is obviously unreliable too. Heartbeat communicates over UDP and over serial lines, neither of which is reliable by itself, so reliability must be provided by the layer above. How, then, is reliable message delivery guaranteed?

Heartbeat guarantees reliable communication through redundant communication channels and message retransmission. While monitoring the state of the primary communication link, heartbeat also monitors the backup link and reports its state to the system administrator, which greatly reduces the chance that a multiple failure leaves the cluster unrecoverable. For example, if someone accidentally unplugs the backup link and the primary link then fails a month or two later, the nodes can no longer communicate at all. Reporting the state of both the backup and the primary link avoids this completely: the failed backup link is detected, and can be repaired, before the primary link ever fails.

By implementing several independent communication subsystems, heartbeat avoids losing all communication when any single subsystem fails. The most typical combination is Ethernet plus a serial link, which is considered current best practice. There are several reasons to use serial communication:

(1) A failure of the IP communication subsystem is unlikely to affect the serial subsystem.

(2) A serial link needs no complex external equipment or power supply.

(3) Serial devices are simple and, in practice, very reliable.

(4) A serial link is easy to dedicate exclusively to cluster communication.

(5) A directly connected serial cable is rarely disconnected by accident.

Whether the medium is a serial line or Ethernet/IP, heartbeat implements a retransmission protocol on top of it to guarantee reliable packet delivery. There are two ways to implement packet retransmission: sender-initiated and receiver-initiated.

In a sender-initiated protocol, the receiver normally acknowledges every packet. The sender keeps a timer and, when it expires, retransmits any packets that have not yet been acknowledged. This approach easily overwhelms the sender, because every packet from every machine must be acknowledged, multiplying the number of packets to be sent. This phenomenon is known as sender (or ACK) implosion.

In a receiver-initiated protocol, the receivers are responsible for error detection, using sequence numbers. When a receiver detects a lost packet, it asks the sender to retransmit it. With this approach, if a packet fails to reach any of the receivers, the sender can be flooded with NACKs, because every receiver sends its own retransmission request, driving up the sender's load. This phenomenon is known as NACK implosion.

Heartbeat implements a variant of the receiver-initiated protocol. It uses a timer to limit excessive retransmission: within the timer interval the number of retransmission requests a receiver may send is limited, which in turn limits how often the sender retransmits, strictly bounding NACK implosion.
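As a rough sketch of that idea (illustrative only: the names, the interval and the message format are invented here, not taken from heartbeat), the receiver can suppress duplicate retransmission requests for the same gap with a simple timestamp check before it sends its NACK:

/* Illustrative sketch of a rate-limited, receiver-initiated NACK. */
#include <stdio.h>
#include <time.h>

#define REXMIT_REQ_INTERVAL 1          /* seconds between NACKs for one gap */

static time_t last_nack_time;          /* when we last asked for this gap */

/* Called when the receiver sees sequence number 'got' while it was
 * still expecting 'expected' (expected < got means packets were lost). */
void maybe_request_rexmit(unsigned long expected, unsigned long got)
{
    time_t now = time(NULL);

    if (expected >= got) {
        return;                        /* nothing is missing */
    }
    if (now - last_nack_time < REXMIT_REQ_INTERVAL) {
        return;                        /* timer not expired: suppress the NACK */
    }
    last_nack_time = now;
    /* a real implementation would send a T_REXMIT control message
     * carrying F_FIRSTSEQ=expected and F_LASTSEQ=got-1 back to the
     * sender, which is what process_rexmit() below expects */
    printf("NACK: please retransmit %lu..%lu\n", expected, got - 1);
}

int main(void)
{
    maybe_request_rexmit(10, 14);      /* first gap: NACK is sent */
    maybe_request_rexmit(10, 15);      /* within the interval: suppressed */
    return 0;
}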

Implementation of reliable message communication
Cluster communication generally involves two kinds of packets: heartbeat packets, which announce whether the nodes in the cluster are alive, and control packets, which handle node and resource management. Heartbeat treats heartbeat packets as a special case of control packets and sends them over the same channel. This keeps the protocol implementation simple and effective, and confines the corresponding code to a few hundred lines.

In heartbeat, all data bound for the network is passed from the master process to the write children, which actually send it. The master process calls send_cluster_msg() to deliver a message to all write children. The code fragments below show how heartbeat sends messages; before looking at them, here are the relevant data structures.

Heartbeat's message structure:

struct ha_msg {
    int      nfields;    /* number of fields in the message */
    int      nalloc;     /* number of allocated slots */
    char   **names;      /* field names */
    size_t  *nlens;      /* lengths of the field names */
    void   **values;     /* values corresponding to the field names */
    size_t  *vlens;      /* lengths of the field values */
    int     *types;      /* types of the fields */
};
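For orientation, here is a hedged sketch of how such a message is typically built and read, using ha_msg_value() and ha_msg_del() as they appear in the listings below, together with ha_msg_new() and ha_msg_add(); the exact signatures are an assumption, so check ha_msg.h in your heartbeat source tree.

/* Hedged sketch; the constructor signatures are assumed, not verified. */
#include <ha_msg.h>                    /* from the heartbeat include directory */

static void example_build_msg(void)
{
    struct ha_msg *msg = ha_msg_new(0);               /* empty message */

    if (msg == NULL
    ||  ha_msg_add(msg, "t", "status") != HA_OK       /* name/value fields */
    ||  ha_msg_add(msg, "st", "active") != HA_OK) {
        if (msg) {
            ha_msg_del(msg);
        }
        return;
    }
    /* look a field up by name, exactly as process_rexmit() does below */
    const char *state = ha_msg_value(msg, "st");      /* -> "active" */
    (void)state;
    ha_msg_del(msg);                                  /* frees all fields */
}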

Heartbeat's retransmission-history queue:

struct msg_xmit_hist {
    struct ha_msg    *msgq[MAXMSGHIST];       /* history message queue */
    seqno_t           seqnos[MAXMSGHIST];     /* sequence numbers of the queued messages */
    longclock_t       lastrexmit[MAXMSGHIST]; /* time of the last retransmission */
    int               lastmsg;                /* slot of the most recently queued message */
    seqno_t           hiseq;                  /* highest sequence number */
    seqno_t           lowseq;                 /* lowest sequence number */
    seqno_t           ackseq;                 /* highest acknowledged sequence number */
    struct node_info *lowest_acknode;         /* node whose acknowledgment is lowest */
};

The following code is from heartbeat/heartbeat.c.

int send_cluster_msg(struct ha_msg *msg)
{
    ...
    pid_t ourpid = getpid();
    ...

    if (ourpid == processes[0]) {
        /* message from the master process itself */
        /* add the control fields: source node name, source node UUID,
         * sequence number, generation, timestamp, and so on */
        if ((msg = add_control_msg_fields(msg)) != NULL) {
            /* reliable multicast packet delivery */
            rc = process_outbound_packet(&msghist, msg);
        }
    } else {
        /* message from a client process */
        int   ffd  = -1;
        char *smsg = NULL;

        ...

        /* hand the message to the FIFO child process */
        if ((smsg = msg2wirefmt_noac(msg, &len)) == NULL) {
            ...
        } else if ((ffd = open(FIFONAME, O_WRONLY|O_APPEND)) < 0) {
            ...
        }
        ...
    }

    ...

    /* convert the message to its wire (string) format */
    smsg = msg2wirefmt(msg, &len);

    ...

    if (cseq != NULL) {
        /* store it in the history queue, keyed by sequence number,
         * so that it can be retransmitted later if requested */
        add2_xmit_hist(hist, msg, seqno);
    }

    ...

    /* send it through the write children onto every network interface */
    send_to_all_media(smsg, len);

    ...

    return HA_OK;
}

add2_xmit_hist() appends the sent message to a history queue whose maximum length is 200 entries. When a receiver requests a retransmission, the sender looks the message up in this queue by its sequence number and, if it is found, retransmits it. The relevant code follows.

static void add2_xmit_hist(struct msg_xmit_hist *hist, struct ha_msg *msg,
                           seqno_t seq)
{
    int            slot;
    struct ha_msg *slotmsg;

    ...

    /* find the slot where the message will be stored */
    slot = hist->lastmsg + 1;
    if (slot >= MAXMSGHIST) {
        /* reached the end of the array, wrap around to the front:
         * the queue is a circular buffer */
        slot = 0;
    }

    hist->hiseq = seq;
    slotmsg = hist->msgq[slot];

    /* free any old message that still occupies this slot */
    if (slotmsg != NULL) {
        hist->lowseq = hist->seqnos[slot];
        hist->msgq[slot] = NULL;
        if (!ha_is_allocated(slotmsg)) {
            ...
        } else {
            ha_msg_del(slotmsg);
        }
    }

    hist->msgq[slot] = msg;
    hist->seqnos[slot] = seq;
    hist->lastrexmit[slot] = 0L;
    hist->lastmsg = slot;

    if (enable_flow_control
    &&  live_node_count > 1
    &&  (hist->hiseq - hist->lowseq) > ((MAXMSGHIST * 3) / 4)) {
        /* the queue is longer than the warning threshold: log it */
        ...
    }
    if (enable_flow_control
    &&  hist->hiseq - hist->ackseq > FLOWCONTROL_LIMIT) {
        /* the queue is longer than the flow-control limit */
        if (live_node_count < 2) {
            update_ackseq(hist->hiseq - (FLOWCONTROL_LIMIT - 1));
            all_clients_resume();
        } else {
            /* the clients are sending too fast: pause all client processes */
            all_clients_pause();
            hist_display(hist);
        }
    }
}

When the sender receives a retransmission request from a receiver, the callback HBDoMsg_T_REXMIT() calls process_rexmit() to retransmit the requested messages.

#define MAX_REXMIT_BATCH 50  /* maximum number of packets to retransmit per request */

static void process_rexmit(struct msg_xmit_hist *hist, struct ha_msg *msg)
{
    const char       *cfseq;
    const char       *clseq;
    seqno_t           fseq = 0;
    seqno_t           lseq = 0;
    seqno_t           thisseq;
    int               firstslot = hist->lastmsg - 1;
    int               rexmit_pkt_count = 0;
    const char       *fromnodename = ha_msg_value(msg, F_ORIG);
    struct node_info *fromnode = NULL;

    ...

    /* get the first and last sequence numbers to retransmit */
    if ((cfseq = ha_msg_value(msg, F_FIRSTSEQ)) == NULL
    ||  (clseq = ha_msg_value(msg, F_LASTSEQ)) == NULL
    ||  (fseq = atoi(cfseq)) <= 0
    ||  (lseq = atoi(clseq)) <= 0
    ||  fseq > lseq) {
        /* invalid sequence numbers: log an error */
        ...
    }

    ...

    /* retransmit the lost packets */
    for (thisseq = fseq; thisseq <= lseq; ++thisseq) {
        int msgslot;
        int foundit = 0;

        if (thisseq <= fromnode->track.ackseq) {
            /* this packet has already been acknowledged and can be skipped */
            continue;
        }
        if (thisseq < hist->lowseq) {
            /* the sequence number is below the lowest one in the queue,
             * so the message is no longer in the history queue */
            /* tell the peer that this one will not be retransmitted */
            nak_rexmit(hist, thisseq, fromnodename, "seqno too low");
            continue;
        }
        if (thisseq > hist->hiseq) {
            /* the sequence number is above the highest one in the queue */
            ...
            continue;
        }

        for (msgslot = firstslot;
             !foundit && msgslot != (firstslot + 1);
             --msgslot) {
            char        *smsg;
            longclock_t  now = time_longclock();
            longclock_t  last_rexmit;
            size_t       len;

            ...

            /* check when this packet was last retransmitted */
            last_rexmit = hist->lastrexmit[msgslot];

            if (cmp_longclock(last_rexmit, zero_longclock) != 0
            &&  longclockto_ms(sub_longclock(now, last_rexmit))
                  < (ACCEPT_REXMIT_REQ_MS)) {
                /* retransmitted only a moment ago: skip it this time */
                goto NextReXmit;
            }

            /* don't send too many packets in one batch: flooding the link
             * could overflow the serial port */
            ++rexmit_pkt_count;
            ...

Note: the introduction above is taken from Baidu Baike.

Lab topology:

image

1. DNS server configuration

1.1 DNS server configuration on real-server-1:

[root@server1 ~]# yum install bind bind-chroot caching-nameserver –y

[root@server1 ~]# cd /var/named/chroot/etc/

[root@server1 etc]# cp -p named.caching-nameserver.conf named.conf

[root@server1 etc]# vim named.conf

15 listen-on port 53 { any; };
27 allow-query { any; };
28 allow-query-cache { any; };
37 match-clients { any; };
38 match-destinations { any; };
[root@server1 etc]# vim named.rfc1912.zones

20 zone "a.com" IN {

21 type master;

22 file "a.com.db";

23 allow-update { none; };

24 };
37 zone "145.168.192.in-addr.arpa" IN {

38 type master;

39 file "192.168.145.db";
40 allow-update { none; };

41 };
[root@server1 etc]# cd ../var/named/

[root@server1 named]# cp -p localhost.zone a.com.db

[root@server1 named]# cp -p named.local 192.168.145.db

[root@server1 named]# vim a.com.db

image

[root@server1 named]# vim 192.168.145.db

image
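The actual zone file contents appear only in the screenshots. As a rough sketch (the names and addresses below are inferred from the rest of this article and the serial number is made up, so treat the whole file as an assumption), a.com.db would look roughly like this, with www pointing at the cluster VIP:

$TTL 86400
@         IN SOA   server1.a.com. root.a.com. ( 2012040201 1H 15M 1W 1D )
          IN NS    server1.a.com.
          IN NS    server2.a.com.
server1   IN A     192.168.145.200
server2   IN A     192.168.145.201
www       IN A     192.168.145.101    ; cluster virtual IP

192.168.145.db would hold the matching PTR records for the same addresses.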

[root@server1 named]# service named restart

[root@server1 named]# rndc reload

1.2 DNS server configuration on real-server-2:

[root@server2 ~]# yum install bind bind-chroot caching-nameserver –y

[root@server2 ~]# cd /var/named/chroot/etc/

[root@server2 etc]# cp -p named.caching-nameserver.conf named.conf

[root@server2 etc]# vim named.conf

15 listen-on port 53 { any; };
27 allow-query { any; };
28 allow-query-cache { any; };
37 match-clients { any; };
38 match-destinations { any; };
[root@server2 etc]# vim named.rfc1912.zones

20 zone "a.com" IN {

21 type master;

22 file "a.com.db";

23 allow-update { none; };

24 };
37 zone "145.168.192.in-addr.arpa" IN {

38 type master;

39 file "192.168.145.db";
40 allow-update { none; };

41 };
[root@server2 etc]# cd ../var/named/

[root@server2 named]# cp -p localhost.zone a.com.db

[root@server2 named]# cp -p named.local 192.168.145.db

[root@server2 named]# vim a.com.db

image

[root@server2 named]# vim 192.168.145.db

image

[root@server2 named]# service named restart

[root@server2 named]# rndc reload

2. Server configuration

Edit the hosts file on node1 and node2:

image

Configure a local yum repository on node1 and node2:

image

Install the httpd service on node1 and node2:

yum install httpd

Copy the heartbeat packages to the root directory:

image

Install the packages (on both node1 and node2):

yum localinstall -y heartbeat-2.1.4-9.el5.i386.rpm heartbeat-pils-2.1.4-10.el5.i386.rpm heartbeat-stonith-2.1.4-10.el5.i386.rpm libnet-1.1.4-3.el5.i386.rpm perl-MailTools-1.77-1.el5.noarch.rpm --nogpgcheck

vim /etc/ha.d/ha.cf

image

vim /etc/ha.d/authkeys

Authentication is done with an md5 checksum.

image

vim /etc/ha.d/haresources

image
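The three files are shown only as screenshots. As an assumed sketch of what they might contain for this two-node setup (the directives are standard heartbeat 2.x ones, but the exact values here are guesses based on the rest of the article):

/etc/ha.d/ha.cf:

logfile /var/log/ha-log
keepalive 2
deadtime 30
bcast eth0
node node1.a.com
node node2.a.com

/etc/ha.d/authkeys (must be mode 600):

auth 1
1 md5 some-shared-secret

/etc/ha.d/haresources:

node1.a.com 192.168.145.101 httpd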

cp /etc/init.d/httpd resource.d/

scp ha.cf haresources authkeys node2.a.com:/etc/ha.d

scp /etc/init.d/httpd node2.a.com:/etc/ha.d/resource.d/

chkconfig heartbeat on

Test access:

image

cd /usr/lib/heartbeat/

Run ./hb_standby to simulate a failure; you can see:

image

image

There is no packet loss at all; connectivity is maintained.

Using LVS together with heartbeat for high availability:

First remove the httpd service (on node1 and node2):

yum remove httpd

Then install ipvsadm:

yum install ipvsadm

vim /etc/ha.d/haresources

image

Run ./hb_takeover to take the address back.

The output of ipvsadm -ln turns out to be empty:

image

2.1 Node-1 server IP address configuration

image
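The screenshot shows the Node-1 address configuration. As an assumed example of the VIP part (matching the Mask:255.255.255.0 shown by ifconfig eth0:0 later in this article), it would be something like:

ifconfig eth0:0 192.168.145.101 netmask 255.255.255.0 up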
2.2 Add a route on node-1

[root@node1 ~]# route add -host 192.168.145.101 dev eth0:0

[root@node1 ~]# route -n

Kernel IP routing table

Destination Gateway Genmask Flags Metric Ref Use Iface

192.168.145.101 0.0.0.0 255.255.255.255 UH 0 0 0 eth0

192.168.145.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0

192.168.10.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1

169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth1

2.3 Configure the local yum repository:

[root@node1 ~]# vim /etc/yum.repos.d/server.repo

[rhel-server]
name=Red Hat Enterprise Linux server

baseurl=file:///mnt/cdrom/Server/
enabled=1
gpgcheck=1
gpgkey=file:///mnt/cdrom/RPM-GPG-KEY-redhat-release
[rhel-cluster]
name=Red Hat Enterprise Linux cluster

baseurl=file:///mnt/cdrom/Cluster/
enabled=1
gpgcheck=1
gpgkey=file:///mnt/cdrom/RPM-GPG-KEY-redhat-release
[root@node1 ~]#mkdir /mnt/cdrom

[root@node1 ~]# mount /dev/cdrom /mnt/cdrom/

mount: block device /dev/cdrom is write-protected, mounting read-only

[root@node1 ~]# yum list all
2.4 Install and configure the director-1 server:

[root@node1 ~]# yum install -y ipvsadm

[root@node1 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)

Prot LocalAddress:Port Scheduler Flags

-> RemoteAddress:Port Forward Weight ActiveConn InActConn

[root@node1 ~]# ipvsadm -A -t 192.168.145.101:80 -s rr

[root@node1 ~]# ipvsadm -a -t 192.168.145.101:80 -r 192.168.145.200 -g

[root@node1 ~]# ipvsadm -a -t 192.168.145.101:80 -r 192.168.145.201 -g

[root@node1 ~]# service ipvsadm save
[root@node1 ~]# service ipvsadm restart
[root@node1 ~]# service ipvsadm stop

Clearing the current IPVS table: [ OK ]

3. Node-2 server configuration

3.1 Node-2 server IP address configuration

image
3.2 Add a route on node-2

[root@node2 ~]# route add -host 192.168.145.101 dev eth0:0

[root@node2 ~]# route -n

Kernel IP routing table

Destination Gateway Genmask Flags Metric Ref Use Iface

192.168.145.101 0.0.0.0 255.255.255.255 UH 0 0 0 eth0

192.168.145.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0

192.168.10.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1

169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth1

0.0.0.0 192.168.145.101 0.0.0.0 UG 0 0 0 eth0

3.3 Configure the local yum repository:

[root@node2 ~]# vim /etc/yum.repos.d/server.repo

[rhel-server]
name=Red Hat Enterprise Linux server

baseurl=file:///mnt/cdrom/Server/
enabled=1
gpgcheck=1
gpgkey=file:///mnt/cdrom/RPM-GPG-KEY-redhat-release
[rhel-cluster]
name=Red Hat Enterprise Linux cluster

baseurl=file:///mnt/cdrom/Cluster/
enabled=1
gpgcheck=1
gpgkey=file:///mnt/cdrom/RPM-GPG-KEY-redhat-release
[root@node2 ~]# mkdir /mnt/cdrom

[root@node2 ~]# mount /dev/cdrom /mnt/cdrom/

mount: block device /dev/cdrom is write-protected, mounting read-only

[root@node2 ~]#yum list all
3.4 Install and configure the director-2 server:

[root@node2 ~]# yum install -y ipvsadm

[root@node2 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn

[root@node2 ~]# ipvsadm -A -t 192.168.145.101:80 -s rr

[root@node2 ~]# ipvsadm -a -t 192.168.145.101:80 -r 192.168.145.200 -g

[root@node2 ~]# ipvsadm -a -t 192.168.145.101:80 -r 192.168.145.201 -g

[root@node2 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn

TCP 192.168.145.101:80 rr

-> 192.168.145.201:80 Route 1 0 0

-> 192.168.145.200:80 Route 1 0 0

[root@node2 ~]# service ipvsadm save
Saving IPVS table to /etc/sysconfig/ipvsadm: [ OK ]

[root@node2 ~]# service ipvsadm restart
Clearing the current IPVS table: [ OK ]

Applying IPVS configuration: [ OK ]
4. Configure the web server on real-server-1:

4.1 Solve the ARP problem:

[root@server1 ~]# cat /etc/sysconfig/network

NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=server1.a.com
[root@server1 ~]# echo "net.ipv4.conf.all.arp_announce = 2" >> /etc/sysctl.conf

[root@server1 ~]# echo "net.ipv4.conf.lo.arp_announce = 2" >> /etc/sysctl.conf

[root@server1 ~]# echo "net.ipv4.conf.all.arp_ignore = 1" >> /etc/sysctl.conf

[root@server1 ~]# echo "net.ipv4.conf.lo.arp_ignore = 1" >> /etc/sysctl.conf

[root@server1 ~]#sysctl -p
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.lo.arp_ignore = 1
4.2 Configure the IP address and route

[root@server1 ~]# route add -host 192.168.145.101 dev lo:0
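This route assumes the VIP is already configured on lo:0. On an LVS-DR real server that is normally done with a 255.255.255.255 netmask, so that only the single VIP address is bound to the loopback; an assumed example, since the command is not shown in the original, is:

ifconfig lo:0 192.168.145.101 netmask 255.255.255.255 broadcast 192.168.145.101 up

real-server-2 needs the same lo:0 address in section 5.2.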

[root@server1 ~]# route -n

Kernel IP routing table

Destination Gateway Genmask Flags Metric Ref Use Iface

192.168.145.101 0.0.0.0 255.255.255.255 UH 0 0 0 lo

192.168.145.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0

169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0

0.0.0.0 192.168.145.101 0.0.0.0 UG 0 0 0 lo

4.3 Configure the web server on real-server-1:

[root@server1 ~]# rpm -ivh /mnt/cdrom/Server/httpd-2.2.3-31.el5.i386.rpm

[root@server1 ~]# echo "web1 -- real-server-1" > /var/www/html/index.html

[root@server1 ~]# service httpd start

Starting httpd: httpd: apr_sockaddr_info_get() failed for r1.guirong.com

httpd: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName

[ OK ]

4.4 Client configuration

image

4.5 The client accesses real-server-1's web service (bridged NIC):

image

5. Configure the web server on real-server-2:

5.1 Solve the ARP problem:

[root@server2 ~]# cat /etc/sysconfig/network

NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=server2.a.com
[root@server2 ~]# echo "net.ipv4.conf.all.arp_announce = 2" >> /etc/sysctl.conf

[root@server2 ~]# echo "net.ipv4.conf.lo.arp_announce = 2" >> /etc/sysctl.conf

[root@server2 ~]# echo "net.ipv4.conf.all.arp_ignore = 1" >> /etc/sysctl.conf

[root@server2 ~]# echo "net.ipv4.conf.lo.arp_ignore = 1" >> /etc/sysctl.conf

[root@server2 ~]# sysctl -p
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.lo.arp_ignore = 1
5.2 Configure the IP address and route

[root@server2 ~]# route add -host 192.168.145.101 dev lo:0

[root@server2 ~]# route -n
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
192.168.145.101 0.0.0.0 255.255.255.255 UH 0 0 0 lo

192.168.145.128 0.0.0.0 255.255.255.0 U 0 0 0 eth0

169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0

0.0.0.0 192.168.145.142 0.0.0.0 UG 0 0 0 eth0

5.3 Configure the web server on real-server-2:

[root@server2 ~]# rpm -ivh /mnt/cdrom/Server/httpd-2.2.3-31.el5.i386.rpm

[root@server2 ~]# echo "web2 -- real-server-2" > /var/www/html/index.html

[root@server2 ~]# service httpd start

Starting httpd: httpd: apr_sockaddr_info_get() failed for r2.guirong.vom

httpd: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName

[ OK ]

5.4 The client accesses real-server-2's web service (bridged NIC):

image

6. Client testing of the LVS-DR model:

6.1 Test 1

Stop the ipvsadm service on node-1 and confirm the following:

[root@node1 ~]# service ipvsadm stop
Clearing the current IPVS table: [ OK ]
[root@node1 ~]# service ipvsadm status
ipvsadm is stopped
Start the ipvsadm service on node-2 and confirm the following:

[root@node2 ~]# service ipvsadm restart
Clearing the current IPVS table: [ OK ]
Applying IPVS configuration: [ OK ]
[root@node2 ~]# service ipvsadm status
ipvsadm dead but subsys locked
The client accesses the cluster service through node-2 (NIC in bridged mode):

image

Keep refreshing on the client: web2 and web1 appear alternately at a 1:1 ratio, showing that requests are scheduled round-robin (RR).

image

On node-2 the statistics look like this: the scheduling ratio is almost exactly 1:1, which confirms that LVS is using the RR scheduler.

[root@node2 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)

Prot LocalAddress:Port Scheduler Flags

-> RemoteAddress:Port Forward Weight ActiveConn InActConn

TCP 192.168.145.101:80 rr
-> 192.168.145.200:80 Route 1 0 50

-> 192.168.145.201:80 Route 1 0 50

6.2 Test 2

Stop the ipvsadm service on node-2 and confirm the following:

[root@node2 ~]# service ipvsadm stop
Clearing the current IPVS table: [ OK ]
[root@node2 ~]# service ipvsadm status
ipvsadm is stopped
Start the ipvsadm service on node-1 and confirm the following:

[root@node1 ~]# service ipvsadm start
Clearing the current IPVS table: [ OK ]
Applying IPVS configuration: [ OK ]
[root@node1 ~]# service ipvsadm restart
Clearing the current IPVS table: [ OK ]
Applying IPVS configuration: [ OK ]
[root@node1 ~]# service ipvsadm status
ipvsadm dead but subsys locked
The client accesses the cluster service through node-1 (NIC in bridged mode):

image

Keep refreshing on the client: web2 and web1 appear alternately at a 1:1 ratio, showing that requests are scheduled round-robin (RR).

image

On node-1 the statistics look like this: the scheduling ratio is almost exactly 1:1, which confirms that LVS is using the RR scheduler.

[root@node1 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)

Prot LocalAddress:Port Scheduler Flags

-> RemoteAddress:Port Forward Weight ActiveConn InActConn

TCP 192.168.145.101:80 rr
-> 192.168.145.200:80 Route 1 0 25

-> 192.168.145.201:80 Route 1 0 25

7. Setting up the heartbeat service

7.1 First stop the ipvsadm service:

[root@node1 ~]# service ipvsadm stop

Clearing the current IPVS table: [ OK ]

[root@node1 ~]# service ipvsadm status

ipvsadm is stopped
[root@node2 ~]# service ipvsadm stop

Clearing the current IPVS table: [ OK ]

[root@node2 ~]# service ipvsadm status

ipvsadm is stopped

8. Testing:

8.1 Testing with the IP address:

image

[root@node1 ha.d]# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)

Prot LocalAddress:Port Scheduler Flags

-> RemoteAddress:Port Forward Weight ActiveConn InActConn

TCP www.a.com:http rr
-> 192.168.145.200:http Route 1 0 7

-> 192.168.145.201:http Route 1 0 7

[root@node2 ~]# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)

Prot LocalAddress:Port Scheduler Flags

-> RemoteAddress:Port Forward Weight ActiveConn InActConn

TCP www.a.com:http rr
-> 192.168.145.200:http Route 1 0 0

-> 192.168.145.201:http Route 1 0 0

[root@node1 ~]# cat /etc/resolv.conf
; generated by /sbin/dhclient-script
nameserver 192.168.145.200
nameserver 192.168.145.201
[root@node2 ~]# cat /etc/resolv.conf
; generated by /sbin/dhclient-script
nameserver 192.168.145.200
nameserver 192.168.145.201
[root@node1 ha.d]# ipvsadm -A -t 192.168.145.101:53 -s rr

[root@node1 ha.d]# ipvsadm -a -t 192.168.145.101:53 -r 192.168.145.200 -g

[root@node1 ha.d]# ipvsadm -a -t 192.168.145.101:53 -r 192.168.145.201 -g

[root@node1 ha.d]# ipvsadm -A -u 192.168.145.101:53 -s rr

[root@node1 ha.d]# ipvsadm -a -u 192.168.145.101:53 -r 192.168.145.200 -g

[root@node1 ha.d]# ipvsadm -a -u 192.168.145.101:53 -r 192.168.145.201 -g

[root@node1 ha.d]# service ipvsadm save
Saving IPVS table to /etc/sysconfig/ipvsadm: [ OK ]

[root@node1 ha.d]# cat /etc/sysconfig/ipvsadm

-A -u 192.168.145.101:53 -s rr
-a -u 192.168.145.101:53 -r 192.168.145.201:53 -g -w 1

-a -u 192.168.145.101:53 -r 192.168.145.200:53 -g -w 1

-A -t 192.168.145.101:53 -s rr
-a -t 192.168.145.101:53 -r 192.168.145.201:53 -g -w 1

-a -t 192.168.145.101:53 -r 192.168.145.200:53 -g -w 1

-A -t 192.168.145.101:80 -s rr
-a -t 192.168.145.101:80 -r 192.168.145.200:80 -g -w 1

-a -t 192.168.145.101:80 -r 192.168.145.201:80 -g -w 1

[root@node2 ~]# ipvsadm -A -t 192.168.145.101:53 -s rr

[root@node2 ~]# ipvsadm -a -t 192.168.145.101:53 -r 192.168.145.200 -g

[root@node2 ~]# ipvsadm -a -t 192.168.145.101:53 -r 192.168.145.201 -g

[root@node2 ~]# ipvsadm -A -u 192.168.145.101:53 -s rr

[root@node2 ~]# ipvsadm -a -u 192.168.145.101:53 -r 192.168.145.200 -g

[root@node2 ~]# ipvsadm -a -u 192.168.145.101:53 -r 192.168.145.201 -g

[root@node2 ~]# service ipvsadm save
Saving IPVS table to /etc/sysconfig/ipvsadm: [ OK ]

[root@node2 ~]#
[root@node2 ~]# cat /etc/sysconfig/ipvsadm

-A -u 192.168.145.101:53 -s rr
-a -u 192.168.145.101:53 -r 192.168.145.201:53 -g -w 1

-a -u 192.168.145.101:53 -r 192.168.145.200:53 -g -w 1

-A -t 192.168.145.101:53 -s rr
-a -t 192.168.145.101:53 -r 192.168.145.201:53 -g -w 1

-a -t 192.168.145.101:53 -r 192.168.145.200:53 -g -w 1

-A -t 192.168.145.101:80 -s rr
-a -t 192.168.145.101:80 -r 192.168.145.200:80 -g -w 1

-a -t 192.168.145.101:80 -r 192.168.145.201:80 -g -w 1

8.2 Access the site via the domain name http://www.a.com/ and keep refreshing; the following pages appear alternately:

image

Check the status on node1:

[root@node1 ha.d]# ifconfig eth0:0
eth0:0 Link encap:Ethernet HWaddr 00:0C:29:66:E1:DA

inet addr:192.168.145.101 Bcast:192.168.145.143 Mask:255.255.255.0

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

Interrupt:19 Base address:0x2000

[root@node1 ha.d]# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn

UDP www.a.com:domain rr

-> 192.168.145.200:domain Route 1 0 51

-> 192.168.145.201:domain Route 1 0 49

TCP www.a.com:domain rr

-> 192.168.145.200:domain Route 1 0 0

-> 192.168.145.201:domain Route 1 0 0

TCP www.a.com:http rr

-> 192.168.145.201:http Route 1 0 31

-> 192.168.145.200:http Route 1 0 30

Check the status on node2:

[root@node2 ha.d]# ifconfig eth0:0
eth0:0 Link encap:Ethernet HWaddr 00:0C:29:79:F8:F7

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

Interrupt:19 Base address:0x2000

[root@node2 ha.d]# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn

This shows that node1 is currently the active director and node2 is in standby.

9. Simulate a node1 failure and test

9.1 Simulate the failure:

[root@node1 ha.d]# cd /usr/lib/heartbeat/

[root@node1 heartbeat]# ls
[root@node1 heartbeat]# ./hb_standby # (simulate a failure)
2012/04/02_17:00:35 Going standby [all].
Check the status on node1:

[root@node1 heartbeat]# ifconfig eth0:0
eth0:0 Link encap:Ethernet HWaddr 00:0C:29:66:E1:DA

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

Interrupt:19 Base address:0x2000
[root@node1 heartbeat]# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn

Check the status on node2:

[root@node2 ha.d]# ifconfig eth0:0
eth0:0 Link encap:Ethernet HWaddr 00:0C:29:79:F8:F7

inet addr:192.168.145.101 Bcast:192.168.145.143 Mask:255.255.255.0

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

Interrupt:19 Base address:0x2000
[root@node2 ha.d]# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn

UDP www.a.com:domain rr

-> 192.168.145.200:domain Route 1 0 9

-> 192.168.145.201:domain Route 1 0 9

TCP www.a.com:domain rr

-> 192.168.145.200:domain Route 1 0 0

-> 192.168.145.201:domain Route 1 0 0

TCP www.a.com:http rr

-> 192.168.145.201:http Route 1 0 0

-> 192.168.145.200:http Route 1 0 0

9.2 Access the site via the domain name http://www.a.com/ and keep refreshing; the following pages appear alternately:

image

[root@node2 ha.d]# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn

UDP www.a.com:domain rr

-> 192.168.145.200:domain Route 1 0 13

-> 192.168.145.201:domain Route 1 0 12

TCP www.a.com:domain rr

-> 192.168.145.200:domain Route 1 0 0

-> 192.168.145.201:domain Route 1 0 0

TCP www.a.com:http rr

-> 192.168.145.201:http Route 1 0 30

-> 192.168.145.200:http Route 1 0 30

9.3 Simulate recovery from the failure:

[root@node1 heartbeat]# ./hb_takeover
Check the status on node1:

[root@node1 heartbeat]# ifconfig eth0:0
eth0:0 Link encap:Ethernet HWaddr 00:0C:29:66:E1:DA

inet addr:192.168.145.101 Bcast:192.168.145.143 Mask:255.255.255.0

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

Interrupt:19 Base address:0x2000

[root@node1 heartbeat]# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn

UDP www.a.com:domain rr

-> 192.168.145.200:domain Route 1 0 8

-> 192.168.145.201:domain Route 1 0 8

TCP www.a.com:domain rr

-> 192.168.145.200:domain Route 1 0 0

-> 192.168.145.201:domain Route 1 0 0

TCP www.a.com:http rr

-> 192.168.145.201:http Route 1 0 0

-> 192.168.145.200:http Route 1 0 0

Check the status on node2:

[root@node2 ha.d]# ifconfig eth0:0
eth0:0 Link encap:Ethernet HWaddr 00:0C:29:79:F8:F7

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

Interrupt:19 Base address:0x2000

[root@node2 ha.d]# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn

9.4 Access the site via the domain name and keep refreshing; the following pages appear alternately:

image

Check the status on node1:

[root@node1 heartbeat]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn

UDP 192.168.145.101:53 rr

-> 192.168.145.200:53 Route 1 0 22

-> 192.168.145.201:53 Route 1 0 22

TCP 192.168.145.101:53 rr

-> 192.168.145.200:53 Route 1 0 0

-> 192.168.145.201:53 Route 1 0 0

TCP 192.168.145.101:80 rr

-> 192.168.145.201:80 Route 1 0 25

-> 192.168.145.200:80 Route 1 0 24

At this point the HA and LB clusters are up and running on Linux.

10. Simulate a web server failure:

10.1 Failure test
10.1.1 Check the HA cluster status on node1:

(node1 is the active director, and its LVS table shows entries for both real servers)
[root@node1 ~]# ifconfig eth0:0
eth0:0 Link encap:Ethernet HWaddr 00:0C:29:66:E1:DA

inet addr:192.168.145.101 Bcast:192.168.145.143 Mask:255.255.255.0

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

Interrupt:19 Base address:0x2000

[root@node1 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn

UDP 192.168.145.101:53 rr

-> 192.168.145.200:53 Route 1 0 97

-> 192.168.145.201:53 Route 1 0 97

TCP 192.168.145.101:53 rr

-> 192.168.145.200:53 Route 1 0 0

-> 192.168.145.201:53 Route 1 0 0

TCP 192.168.145.101:80 rr

-> 192.168.145.201:80 Route 1 0 0

-> 192.168.145.200:80 Route 1 0 0

10.1.2 Stop the httpd and named services on real-server-1 to simulate a failure of real-server-1:

[root@server1 ~]# service httpd stop
Stopping httpd: [ OK ]

[root@server1 ~]# service named stop
Stopping named: [ OK ]
[root@server1 ~]#
10.1.3 Check the HA cluster status on node1 again:

(node1 still shows stale, incorrect LVS information: real-server-1 is no longer working, but node1 has no way to notice it)
[root@node1 ~]# ifconfig eth0:0
eth0:0 Link encap:Ethernet HWaddr 00:0C:29:66:E1:DA

inet addr:192.168.145.101 Bcast:192.168.145.143 Mask:255.255.255.0

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

Interrupt:19 Base address:0x2000

[root@node1 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn

UDP 192.168.145.101:53 rr

-> 192.168.145.200:53 Route 1 0 97

-> 192.168.145.201:53 Route 1 0 97

TCP 192.168.145.101:53 rr

-> 192.168.145.200:53 Route 1 0 0

-> 192.168.145.201:53 Route 1 0 0

TCP 192.168.145.101:80 rr

-> 192.168.145.201:80 Route 1 0 0

-> 192.168.145.200:80 Route 1 0 0

10.1.4 The client keeps refreshing but only the following page is ever shown, and responses are slow

(which indicates that real-server-1 has failed)

image

10.1.5 Check the HA cluster status on node1 once more:

[root@node1 ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn

UDP 192.168.145.101:53 rr

-> 192.168.145.200:53 Route 1 0 113

-> 192.168.145.201:53 Route 1 0 114

TCP 192.168.145.101:53 rr

-> 192.168.145.200:53 Route 1 0 0

-> 192.168.145.201:53 Route 1 0 0

TCP 192.168.145.101:80 rr

-> 192.168.145.201:80 Route 1 0 16

-> 192.168.145.200:80 Route 1 0 17

(node1 still believes real-server-1 is healthy and keeps scheduling requests to it, which is a serious problem)

The fix is to give the nodes a way to know whether the real servers are actually working.

10.2 Solving the web server failure

10.2.1 Install and configure heartbeat-ldirectord on node1

[root@node1 ~]# cd HA/
[root@node1 HA]# ls
heartbeat-2.1.4-9.el5.i386.rpm
heartbeat-ldirectord-2.1.4-9.el5.i386.rpm
heartbeat-pils-2.1.4-10.el5.i386.rpm
heartbeat-stonith-2.1.4-10.el5.i386.rpm
libnet-1.1.4-3.el5.i386.rpm
perl-MailTools-1.77-1.el5.noarch.rpm
[root@node1 HA]# yum localinstall heartbeat-ldirectord-2.1.4-9.el5.i386.rpm --nogpgcheck -y

[root@node1 HA]# cp /usr/share/doc/heartbeat-ldirectord-2.1.4/ldirectord.cf /etc/ha.d

[root@node1 HA]# cd /etc/ha.d/
[root@node1 ha.d]# vim ldirectord.cf
21 quiescent=yes
24 virtual=192.168.145.101:80
25 real=192.168.145.200:80 gate

26 real=192.168.145.201:80 gate

27 service=http

28 request=".test.html"

29 receive="ok"

30 virtualhost=www.a.com

31 scheduler=rr

34 protocol=tcp
[root@node1 ha.d]# vim haresources
46 node1.a.com 192.168.145.101 ldirectord::ldirectord.cf

[root@node1 ha.d]# service heartbeat restart

Stopping High-Availability services:
[ OK ]

Waiting to allow resource takeover to complete:

[ OK ]

Starting High-Availability services:
2012/04/04_11:25:46 INFO: Resource is stopped
[ OK ]

10.2.2 Install and configure heartbeat-ldirectord on node2

[root@node2 ~]# cd HA/
[root@node2 HA]# ls
heartbeat-2.1.4-9.el5.i386.rpm
heartbeat-ldirectord-2.1.4-9.el5.i386.rpm
heartbeat-pils-2.1.4-10.el5.i386.rpm
heartbeat-stonith-2.1.4-10.el5.i386.rpm
libnet-1.1.4-3.el5.i386.rpm
perl-MailTools-1.77-1.el5.noarch.rpm
[root@node2 HA]# yum localinstall heartbeat-ldirectord-2.1.4-9.el5.i386.rpm --nogpgcheck -y

[root@node2 HA]# cp /usr/share/doc/heartbeat-ldirectord-2.1.4/ldirectord.cf /etc/ha.d

[root@node2 HA]# cd /etc/ha.d/
[root@node2 ha.d]# vim ldirectord.cf
21 quiescent=yes
24 virtual=192.168.145.101:80
25 real=192.168.145.200:80 gate

26 real=192.168.145.201:80 gate

27 service=http

28 request=".test.html"

29 receive="ok"

30 virtualhost=www.a.com

31 scheduler=rr

34 protocol=tcp
[root@node2 ha.d]# vim haresources
46 node1.a.com 192.168.145.101 ldirectord::ldirectord.cf

[root@node2 ha.d]# service heartbeat restart

Stopping High-Availability services:
[ OK ]

Waiting to allow resource takeover to complete:

[ OK ]

Starting High-Availability services:
2012/04/04_11:25:53 INFO: Resource is stopped
[ OK ]

10.2.3 Now check the HA cluster status on node1:

[root@node1 ha.d]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn

TCP 192.168.145.101:80 rr
-> 192.168.145.201:80 Route 0 0 0

-> 192.168.145.200:80 Route 0 0 0

(because line 21 of /etc/ha.d/ldirectord.cf sets quiescent=yes, the http real servers are kept in the table with weight 0, i.e. they receive no traffic at the moment)

# Change line 21 of /etc/ha.d/ldirectord.cf to quiescent=no; the change is picked up automatically.

[root@node1 ha.d]# vim ldirectord.cf
21 quiescent=no
[root@node2 ha.d]# vim ldirectord.cf
21 quiescent=no
# Then check the HA cluster status on node1 again:

[root@node1 ha.d]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn

TCP 192.168.145.101:80 rr

[root@node1 ha.d]#
(because line 21 of /etc/ha.d/ldirectord.cf now sets quiescent=no, failed real servers are removed entirely, so the http entry is empty)

10.2.4 Now add the following on real-server-1:

[root@server1 ~]# echo "ok" >> /var/www/html/.test.html

Check the HA cluster status on node1:

[root@node1 ha.d]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn

TCP 192.168.145.101:80 rr

-> 192.168.145.200:80 Route 1 0 0

[root@node1 ha.d]#
Now add the following on real-server-2:

[root@server2 ~]# echo "ok" >> /var/www/html/.test.html

Check the HA cluster status on node1:

[root@node1 ha.d]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn

TCP 192.168.145.101:80 rr

-> 192.168.145.201:80 Route 1 0 0

-> 192.168.145.200:80 Route 1 0 0

[root@node1 ha.d]#
10.2.5 Now stop the httpd service on real-server-1; the result is:

[root@server1 ~]# service httpd stop
Stopping httpd: [ OK ]

Check the HA cluster status on node1:

[root@server1 ~]#
[root@node1 ha.d]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn

TCP 192.168.145.101:80 rr

-> 192.168.145.201:80 Route 1 0 0

Stop the httpd service on both real-server-1 and real-server-2; the result is:

[root@server1 ~]# service httpd stop
Stopping httpd: [ OK ]

[root@server1 ~]#
[root@server2 ~]# service httpd stop
Stopping httpd: [ OK ]

[root@server2 ~]#
Check the HA cluster status on node1:

[root@node1 ha.d]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn

TCP 192.168.145.101:80 rr

10.2.6 Start the httpd service on real-server-1 and real-server-2 again to restore normal operation:

[root@server1 ~]# service httpd start
Starting httpd: httpd: apr_sockaddr_info_get() failed for server1.a.com

httpd: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName

[ OK ]

[root@server1 ~]#
[root@server2 ~]# service httpd start
Starting httpd: httpd: apr_sockaddr_info_get() failed for server2.a.com

httpd: Could not reliably determine the server's fully qualified domain name, using 127.0.0.1 for ServerName

[ OK ]

[root@server2 ~]#
Check the HA cluster status on node1:

[root@node1 ha.d]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn

TCP 192.168.145.101:80 rr

-> 192.168.145.200:80 Route 1 0 0

-> 192.168.145.201:80 Route 1 0 0

At this point the HA and LB clusters on Linux have been set up successfully.