Lab Environment
Host | Services
---|---
server1: 172.25.23.1 | ricci, luci, fence
server2: 172.25.23.2 | ricci, httpd
Physical host: 172.25.23.250 | fence
I. Creating the Cluster
1. On server1, configure the extended yum repositories (including the HighAvailability add-on channels) and list them
[root@server1 ~]# cd /etc/yum.repos.d
[root@server1 yum.repos.d]# vim rhel-source.repo
[rhel-source]
name=Red Hat Enterprise Linux $releasever - $basearch - Source
baseurl=http://172.25.23.250/rhel6.5
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
[HighAvailability]
name=HighAvailability
baseurl=http://172.25.23.250/rhel6.5/HighAvailability
gpgcheck=0
[LoadBalancer]
name=LoadBalancer
baseurl=http://172.25.23.250/rhel6.5/LoadBalancer
gpgcheck=0
[ResilientStorage]
name=ResilientStorage
baseurl=http://172.25.23.250/rhel6.5/ResilientStorage
gpgcheck=0
[ScalableFileSystem]
name=ScalableFileSystem
baseurl=http://172.25.23.250/rhel6.5/ScalableFileSystem
gpgcheck=0
[root@server1 yum.repos.d]# yum clean all
[root@server1 yum.repos.d]# yum repolist
2. Copy the repository file from server1 to server2 and list the repositories there
[root@server1 yum.repos.d]# scp rhel-source.repo root@172.25.23.2:/etc/yum.repos.d
[root@server2 ~]# yum clean all
[root@server2 ~]# yum repolist
3. On server1, install ricci (the cluster management agent used by the web UI) and luci (the web management interface)
[root@server1 ~]# yum install -y ricci luci
4. The installation automatically creates a ricci user; verify it exists and set its password
[root@server1 ~]# cat /etc/passwd
[root@server1 ~]# passwd ricci
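The passwd command prompts interactively; on RHEL it also accepts --stdin, which is handy for scripting. A minimal sketch (the password westos is only an assumed example):
[root@server1 ~]# echo westos | passwd --stdin ricci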
5. Start ricci and luci and enable them to start at boot
[root@server1 ~]# /etc/init.d/ricci start
[root@server1 ~]# /etc/init.d/luci start
[root@server1 ~]# chkconfig ricci on
[root@server1 ~]# chkconfig luci on
6. On server2, install ricci, set the ricci user's password, and start ricci
[root@server2 ~]# yum install -y ricci
[root@server2 ~]# passwd ricci
[root@server2 ~]# /etc/init.d/ricci start
7. In a browser, open https://172.25.23.1:8084 and add server1 and server2 as the two cluster nodes
(1) Log in as the superuser (root), using the password set earlier
(2) Click Manage Clusters, then click Create, and create a cluster named westos_ha
(3) Click Create Cluster; a waiting page appears while server1 and server2 reboot. Reconnect to both nodes afterwards
(4) The nodes are added successfully, as shown below:
8. Once the nodes are added, view /etc/cluster/cluster.conf on both hosts to see the cluster information, and run clustat to check the cluster status
[root@server1 ~]# cat /etc/cluster/cluster.conf
[root@server1 ~]# clustat
[root@server2 ~]# cat /etc/cluster/cluster.conf
[root@server2 ~]# clustat
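For reference, a freshly created two-node cluster.conf typically looks roughly like the sketch below; the config_version and exact attributes depend on the luci version:
<?xml version="1.0"?>
<cluster config_version="1" name="westos_ha">
        <clusternodes>
                <clusternode name="server1" nodeid="1"/>
                <clusternode name="server2" nodeid="2"/>
        </clusternodes>
        <cman expected_votes="1" two_node="1"/>
</cluster>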
II. Configuring Fencing
1. Install the fence packages on the physical host
[root@foundation23 ~]# yum search fence
[root@foundation23 ~]# yum install fence-virtd.x86_64 fence-virtd-libvirt.x86_64 fence-virtd-multicast.x86_64 -y
2. Run fence_virtd -c to write the fence configuration file interactively
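The tool walks through a series of prompts, and accepting most defaults works for this topology. A typical session looks roughly like this; the bridge name br0 is an assumption, so use whichever interface carries the 172.25.23.0/24 network:
[root@foundation23 ~]# fence_virtd -c
Module search path [/usr/lib64/fence-virt]:
Listener module [multicast]:
Multicast IP Address [225.0.0.12]:
Multicast IP Port [1229]:
Interface [virbr0]: br0
Key File [/etc/cluster/fence_xvm.key]:
Backend module [libvirt]:
Replace /etc/fence_virt.conf with the above [y/N]? y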
3. If the key directory does not exist, create it first, then generate the key
[root@foundation23 ~]# rpm -qa | grep fence
[root@foundation23 ~]# cd /etc/cluster/
[root@foundation23 cluster]# ls
fence_xvm.key
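If fence_xvm.key is not present yet, a common way to create it is to read random bytes into the file (128 bytes is the conventional size):
[root@foundation23 ~]# mkdir -p /etc/cluster
[root@foundation23 ~]# dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=128 count=1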
4. Start the fence service
[root@foundation23 cluster]# systemctl start fence_virtd.service
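Optionally enable the service at boot and confirm it is listening; fence_virtd uses UDP port 1229 by default:
[root@foundation23 cluster]# systemctl enable fence_virtd.service
[root@foundation23 cluster]# netstat -anulp | grep 1229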
5. Send the key file to the nodes server1 and server2
[root@foundation23 cluster]# scp fence_xvm.key server1:/etc/cluster/
[root@foundation23 cluster]# scp fence_xvm.key root@172.25.23.2:/etc/cluster/
6. On server1 and server2, change into the key directory and check that the key file is there
[root@server1 ~]# cd /etc/cluster/
[root@server1 cluster]# ls
cluster.conf cman-notify.d fence_xvm.key
[root@server2 ~]# cd /etc/cluster/
[root@server2 cluster]# ls
cluster.conf cman-notify.d fence_xvm.key
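To rule out a corrupted copy, you can also compare the key's checksum on all three machines; the three hashes should be identical:
[root@foundation23 cluster]# md5sum fence_xvm.key
[root@server1 cluster]# md5sum fence_xvm.key
[root@server2 cluster]# md5sum fence_xvm.key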
7. In a browser, open https://172.25.23.1:8084 and add fencing to both nodes
(1) Click Fence Devices, then click Add
(2) Select the multicast-mode fence device
(3) The fence device is added successfully
(4) Bind the fence device to both nodes
The UUID can be found in the virtual machine manager
Added successfully on server1
Do the same for server2
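After binding, cluster.conf on the nodes gains a fence device plus a per-node fence method, roughly as sketched below; the device name vmfence and the method name are assumptions, and the domain value is each VM's UUID:
<clusternode name="server1" nodeid="1">
        <fence>
                <method name="fence-1">
                        <device domain="UUID-of-server1" name="vmfence"/>
                </method>
        </fence>
</clusternode>
...
<fencedevices>
        <fencedevice agent="fence_xvm" name="vmfence"/>
</fencedevices>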
8. Test: from server1, fence node server2; if server2 is powered off and then reboots, fencing works
[root@server1 cluster]# fence_node server2
fence server2 success
III. Configuring the High-Availability Service
1. Add a failover domain
Add server1 and server2 to the domain. When server1 or server2 fails, the service automatically switches to the healthy node; when the cluster comes up, the service runs on the node with the higher priority (the lower the number, the higher the priority)
Failover domain added successfully
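In cluster.conf the domain is recorded roughly like this; the domain name webfail and the exact priorities are assumptions (Prioritized maps to ordered, Restricted to restricted):
<failoverdomains>
        <failoverdomain name="webfail" nofailback="0" ordered="1" restricted="1">
                <failoverdomainnode name="server1" priority="1"/>
                <failoverdomainnode name="server2" priority="2"/>
        </failoverdomain>
</failoverdomains>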
2. Add the resources the service will use
Add an IP Address resource
Add a Script resource
Resources added successfully
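The two resources correspond to entries like the sketch below; 172.25.23.100 is the virtual IP used later in this walkthrough, the Script resource points at httpd's init script, and the exact attributes luci writes may vary:
<resources>
        <ip address="172.25.23.100" monitor_link="on" sleeptime="10"/>
        <script file="/etc/init.d/httpd" name="httpd"/>
</resources>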
3. Add the resources created above to the cluster
Create a service group
Add the resources to it
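The resulting service group ends up roughly as below; the service name apache and the relocate recovery policy are assumptions. Note the ordering: the IP comes before the script so the address exists when httpd starts:
<service domain="webfail" name="apache" recovery="relocate">
        <ip ref="172.25.23.100"/>
        <script ref="httpd"/>
</service>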
4. Install httpd on server1 and server2 and edit the default index page
[root@server1 ~]# yum install httpd -y
[root@server1 ~]# cd /var/www/html
[root@server1 html]# vim index.html
[root@server1 html]# cat index.html
server1
[root@server2 ~]# yum install httpd -y
[root@server2 ~]# cd /var/www/html
[root@server2 html]# vim index.html
[root@server2 html]# cat index.html
server2
5. Start the httpd service
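In a cluster the service should be started through the cluster manager rather than with the init script directly, so rgmanager can track it. A sketch using clusvcadm, with the service name apache following the assumption above:
[root@server1 ~]# clusvcadm -e apache
[root@server1 ~]# clustat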
6. Refresh the page; the httpd service is shown running on server1 (because server1 has the higher priority)
7. Relocate the httpd service to run on server2
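This can be done from the luci page or, equivalently, from the command line (again assuming the service is named apache):
[root@server1 ~]# clusvcadm -r apache -m server2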
8. From the physical host, access both nodes; everything works
Accessing 172.25.23.100 returns server2
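For example, from the physical host (172.25.23.100 is the cluster's virtual IP):
[root@foundation23 ~]# curl 172.25.23.100
server2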
9. On server2, run echo c > /proc/sysrq-trigger to crash server2 manually
The server is fenced and reboots within about 5 seconds
10. curl 172.25.23.100 again; the service is now answered by server1 (because server1's priority is higher than server2's)
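A quick check from the physical host confirms the failover:
[root@foundation23 ~]# curl 172.25.23.100
server1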