Environment:
server1 and server3 act as the haproxy servers; pacemaker provides high availability for the load balancers
server2 and server4 act as the httpd backend servers
1. server1 environment configuration
(1) Configure the yum repositories:
vim /etc/yum.repos.d/rhel-source.repo
[rhel-source]
name=Red Hat Enterprise Linux $releasever - $basearch - Source
baseurl=http://172.25.26.250/iso/
enabled=1
gpgcheck=0
[HighAvailability]
name=HighAvailability
baseurl=http://172.25.26.250/iso/HighAvailability
gpgcheck=0
[LoadBalancer]
name=LoadBalancer
baseurl=http://172.25.26.250/iso/LoadBalancer
gpgcheck=0
[ResilientStorage]
name=ResilientStorage
baseurl=http://172.25.26.250/iso/ResilientStorage
gpgcheck=0
[ScalableFileSystem]
name=ScalableFileSystem
baseurl=http://172.25.26.250/iso/ScalableFileSystem
gpgcheck=0
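After saving the repo file, a quick way to confirm the new repositories are visible (assuming the RHEL 6 ISO tree really is exported over http at 172.25.26.250) is:
yum clean all
yum repolist    ## the HighAvailability and LoadBalancer repos should appear in the list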
(2) Install and configure the services
yum install -y pacemaker corosync
yum install crmsh-1.2.6-0.rc2.2.1.x86_64.rpm pssh-2.3.1-5.el6.noarch.rpm
cd /etc/corosync/
ls
cp corosync.conf.example corosync.conf
vim corosync.conf
# Please read the corosync.conf.5 manual page
compatibility: whitetank
totem {
version: 2
secauth: off
threads: 0
interface {
ringnumber: 0
bindnetaddr: 172.25.26.0 # change to the local network segment
mcastaddr: 226.94.1.1
mcastport: 5405
ttl: 1
}
}
logging {
fileline: off
to_stderr: no
to_logfile: yes
to_syslog: yes
logfile: /var/log/cluster/corosync.log
debug: off
timestamp: on
logger_subsys {
subsys: AMF
debug: off
}
}
amf {
mode: disabled
}
service {
ver: 0
name: pacemaker
}
/etc/init.d/corosync start    ## start the service
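As a sanity check that corosync came up with the edited configuration (log path as set in corosync.conf above):
grep -e "Corosync Cluster Engine" -e "configuration file" /var/log/cluster/corosync.log
Since server3 needs the same corosync.conf, the file can also simply be copied over instead of being edited again, assuming server3 resolves from server1:
scp /etc/corosync/corosync.conf server3:/etc/corosync/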
2. server3 environment configuration
yum install -y pacemaker corosync
rpm -ivh crmsh-1.2.6-0.rc2.2.1.x86_64.rpm --nodeps --force
cd /etc/corosync/
ls
cp corosync.conf.example corosync.conf
vim corosync.conf
# Please read the corosync.conf.5 manual page
compatibility: whitetank
totem {
version: 2
secauth: off
threads: 0
interface {
ringnumber: 0
bindnetaddr: 172.25.26.0 # change to the local network segment
mcastaddr: 226.94.1.1
mcastport: 5405
ttl: 1
}
}
logging {
fileline: off
to_stderr: no
to_logfile: yes
to_syslog: yes
logfile: /var/log/cluster/corosync.log
debug: off
timestamp: on
logger_subsys {
subsys: AMF
debug: off
}
}
amf {
mode: disabled
}
service {
ver: 0
name: pacemaker
}
/etc/init.d/corosync start    ## start the service
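To confirm that the local node has joined the ring cleanly, corosync's own status tool can be run on either node (this only checks the local ring; the cluster-level view comes from crm status in the next step):
corosync-cfgtool -s    ## ring 0 should report no faults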
3. On server1 (after corosync has been started on server3)
Check the node status: crm status
Verify the configuration: crm_verify -VL
If verification reports errors (the two-node cluster has no quorum, and no STONITH device is configured), set:
crm configure property no-quorum-policy="ignore"
crm configure property stonith-enabled=false
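After these two properties are set, re-running the check from above should come back clean:
crm_verify -VL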
Add the virtual IP:
[root@server1 corosync]# crm
crm(live)# configure
crm(live)configure# show
node server1
node server3
property $id="cib-bootstrap-options" \
dc-version="1.1.10-14.el6-368c726" \
cluster-infrastructure="classic openais (with plugin)" \
expected-quorum-votes="2" \
no-quorum-policy="ignore" \
stonith-enabled="false"
crm(live)configure# primitive vip ocf:heartbeat:IPaddr2 params ip=172.25.26.100 cidr_netmask=24 op monitor interval=1min
crm(live)configure# commit
crm(live)configure# exit
bye
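To confirm the vip resource actually started, check which node currently holds the address (the interface name depends on the VM image, so a plain ip addr is used here):
ip addr | grep 172.25.26.100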
4. Monitoring on server3
crm_mon
After stopping corosync on server1, watch on server3: the vip resource automatically fails over to server3.
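A concrete way to run this failover test, using the same commands as above:
[root@server1 ~]# /etc/init.d/corosync stop     ## take server1 out of the cluster
[root@server3 ~]# crm_mon                       ## vip should now show as Started on server3
[root@server3 ~]# ip addr | grep 172.25.26.100  ## the virtual ip has moved to server3
Starting corosync on server1 again lets it rejoin the cluster, which can be confirmed with crm status.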