Three clean virtual machines are required; server3 acts as the management (luci) node.
server1: 172.25.19.1
server2: 172.25.19.2
server3: 172.25.19.3


1. Virtual machine configuration

[server1/server2]

1. Permanently disable the firewall

/etc/init.d/iptables stop
chkconfig iptables off


2. Reconfigure the yum repositories

vim /etc/yum.repos.d/dvd.repo
yum repolist

dvd.repo

# repos on instructor for classroom use

# Main rhel6.5 server
[base]
name=Instructor Server Repository
baseurl=http://172.25.254.19/rhel6.5
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

# HighAvailability rhel6.5
[HighAvailability]
name=Instructor HighAvailability Repository
baseurl=http://172.25.254.19/rhel6.5/HighAvailability
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

# LoadBalancer packages
[LoadBalancer]
name=Instructor LoadBalancer Repository
baseurl=http://172.25.254.19/rhel6.5/LoadBalancer
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

# ResilientStorage
[ResilientStorage]
name=Instructor ResilientStorage Repository
baseurl=http://172.25.254.19/rhel6.5/ResilientStorage
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

# ScalableFileSystem
[ScalableFileSystem]
name=Instructor ScalableFileSystem Repository
baseurl=http://172.25.254.19/rhel6.5/ScalableFileSystem
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release



3. Configure the /etc/hosts file
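A minimal sketch of the entries, using the hostnames this guide relies on (adjust the subnet to yours):

172.25.19.1    server1.example.com server1
172.25.19.2    server2.example.com server2
172.25.19.3    server3.example.com server3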

[server3]

(1) Reconfigure yum

vim /etc/yum.repos.d/dvd.repo    ## same content as above
yum repolist

(2) Configure the /etc/hosts file (the physical host needs it too)

vim /etc/hosts    ## same content as above


2. Install ricci and luci

[server1/server2]

yum install -y ricci 

echo westos | passwd --stdin ricci

chkconfig ricci on
/etc/init.d/ricci start
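A quick sanity check that ricci came up (it listens on TCP port 11111 by default):

netstat -antlp | grep 11111    ## should show ricci listening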

[server3]

yum install httpd
/etc/init.d/httpd start
yum install -y luci



/etc/init.d/luci start



Browse to https://server3.example.com:8084

(Log in as root, using server3's root password.)


In the top-right corner: Preferences



This shows the currently logged-in user. To add another user, that user must first attempt to log in once so that luci records the login; the administrator can then grant it permission to log in.

Manage Clusters - Create



clustat



If you try to create the cluster repeatedly during this process, it will report an error.
Fix:
rm -rf /etc/cluster/cluster.conf
then restart the services.


The following processes are started at runtime (typically cman, rgmanager, ricci and modclusterd).


3. Fence configuration
[physical host]

yum search fence



fence-virtd.x86_64 : Daemon which handles requests from fence-virt    
fence-virtd-libvirt.x86_64 : Libvirt backend for fence-virtd    **
fence-virtd-multicast.x86_64 : Multicast listener for fence-virtd    **
fence-virtd-serial.x86_64 : Serial VMChannel listener for fence-virtd    
fence-virt.x86_64 : A pluggable fencing framework for virtual machines    **

Roughly, the three packages marked with ** are the ones needed (to be verified).

yum install fence-virt.x86_64 fence-virtd-multicast.x86_64 fence-virtd-libvirt.x86_64



cat /etc/fence_virt.conf



fence_virtd -c

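The interactive session walks through a series of prompts; typical answers for this setup (a sketch, assuming the VMs hang off bridge br0 — adjust the interface to your host):

listener module: multicast
multicast address: 225.0.0.12
multicast port: 1229
interface: br0
key file: /etc/cluster/fence_xvm.key
backend module: libvirt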


cd /etc/cluster    ## if the directory does not exist: mkdir /etc/cluster
dd if=/dev/urandom of=fence_xvm.key bs=128 count=1    ## generate the key (can be skipped on a clean setup)
file fence_xvm.key


systemctl restart fence_virtd
netstat -anulp | grep :1229


scp /etc/cluster/fence_xvm.key root@172.25.19.1:/etc/cluster/
scp /etc/cluster/fence_xvm.key root@172.25.19.2:/etc/cluster/
virsh list


[server1/server2]

clustat

Make the hostnames match the virtual machine names. The web UI only sees hostnames, while the physical host only knows the VM (domain) names, so the two cannot be matched directly; bind them through the UUID instead.

server1 UUID:9c69bfbd-0368-45da-86ad-f358c86c1cb8

server2 UUID:6c2fba2e-8723-4650-bd13-96bf16e69887


Fence Devices - Add


 

Nodes - server1.example.com - Add Fence Method



Add Fence Instance



Repeat the same steps for server2.
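What luci ends up writing to /etc/cluster/cluster.conf looks roughly like this (a sketch; "vmfence" stands for whatever name you gave the fence device):

<fencedevices>
    <fencedevice agent="fence_xvm" name="vmfence"/>
</fencedevices>
...
<clusternode name="server1.example.com" nodeid="1">
    <fence>
        <method name="fence1">
            <device domain="9c69bfbd-0368-45da-86ad-f358c86c1cb8" name="vmfence"/>
        </method>
    </fence>
</clusternode>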


Test:
[server1]

fence_node server2.example.com        ## remotely power-cycle (fence) server2



4. Web service failover (webfail)
[server1/server2]

yum install httpd
echo server1.example.com > /var/www/html/index.html    ## on server1
echo server2.example.com > /var/www/html/index.html    ## on server2
/etc/init.d/httpd start

In the web UI: under Failover Domains create a prioritized failover domain (e.g. webfail) containing both nodes; under Resources add an IP Address resource (the floating IP) and a Script resource pointing at /etc/init.d/httpd; under Service Groups create a service named apache that uses the failover domain and contains both resources.
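The resulting <rm> block in /etc/cluster/cluster.conf is roughly the following (a sketch: the floating IP 172.25.19.100 is a hypothetical example, not from the original notes):

<rm>
    <failoverdomains>
        <failoverdomain name="webfail" ordered="1">
            <failoverdomainnode name="server1.example.com" priority="1"/>
            <failoverdomainnode name="server2.example.com" priority="2"/>
        </failoverdomain>
    </failoverdomains>
    <resources>
        <ip address="172.25.19.100" sleeptime="10"/>
        <script file="/etc/init.d/httpd" name="httpd"/>
    </resources>
    <service domain="webfail" name="apache" recovery="relocate">
        <ip ref="172.25.19.100"/>
        <script ref="httpd"/>
    </service>
</rm>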


Test:
(1) The apache service is currently running on server1, so the page shows server1.example.com.


clusvcadm -r apache -m server2.example.com        ## relocate apache to server2; the page now shows server2.example.com



(2)

/etc/init.d/network stop        ## stop server1's network service

server1 is fenced and reboots automatically; the floating IP is now on server2.


(3)

echo c > /proc/sysrq-trigger    ## crash server2's kernel

server2 is fenced and reboots automatically; the floating IP is now on server1.

5. iSCSI shared storage
[server3]
Increase the VM's memory to 1024 MB.
Add a virtual hard disk.



yum install scsi*
[root@server3 ~]# rpm -qa | grep scsi
scsi-target-utils-1.0.24-10.el6.x86_64
vim /etc/tgt/targets.conf        ## uncomment lines 38-40 and edit them as follows:
 38 <target iqn.2016-06.com.example:server.disk>
 39         backing-store /dev/vdb
 40         initiator-address 172.25.19.1
 41         initiator-address 172.25.19.2
 42 </target>



/etc/init.d/tgtd start
tgt-admin -s



[server1]

yum install -y iscsi-*



iscsiadm -t st -m discovery -p 172.25.19.3
iscsiadm -m node -l
fdisk -l



Create a partition for LVM (type 8e):

fdisk -cu /dev/sda

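The interactive session is roughly (a sketch; press Enter to accept the defaults and use the whole disk):

n    ## new partition
p    ## primary, partition number 1, default first/last sectors
t    ## change the partition type
8e   ## Linux LVM
w    ## write the table and exit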


[server2]

yum install -y iscsi-*
[root@server2 ~]# chkconfig iscsi --list
iscsi              0:off    1:off    2:off    3:on    4:on    5:on    6:off
iscsiadm -t st -m discovery -p 172.25.19.3
iscsiadm -m node -l
fdisk -l


/dev/sda has now been picked up on server2 as well.

cat /proc/partitions



If it is not picked up, run partx -a /dev/sda to rescan.

If the following happens:

[root@server2 ~]# partx -a /dev/sda
BLKPG: Device or resource busy
error adding partition 1
BLKPG: Device or resource busy
error adding partition 2

then use partprobe to rescan instead.


6. Clustered LVM with ext4
[server1] (either node works)
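For the volume group to be shared safely, LVM cluster locking must be enabled and clvmd running on both nodes (the HighAvailability packages normally arrange this; a quick check, as a sketch):

lvmconf --enable-cluster        ## sets locking_type = 3 in /etc/lvm/lvm.conf
/etc/init.d/clvmd start
chkconfig clvmd on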

pvcreate /dev/sda1
pvs

Check that server2 sees it as well (run pvs there).


vgcreate clustervg /dev/sda1
vgs

Check that server2 sees it as well (run vgs there).


lvcreate -L 2G -n lv1 clustervg
lvs

Check that server2 sees it as well (run lvs there).


mkfs.ext4 /dev/clustervg/lv1    ## both VMs can then mount it successfully


Test:
1. Mounting
Mount it at /mnt on both server1 and server2. A file created by server1 under /mnt is not visible on server2 until server2 remounts (ext4 is not cluster-aware).

2. Web UI
Add a Filesystem resource.
Disable apache, delete the Script from the service group, then add the Filesystem and the Script.


clustat
clusvcadm -e apache
clustat
cd /var/www/html/
echo www.westos.com > index.html
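A quick check from the physical host (172.25.19.100 is the hypothetical floating IP from the sketch above):

curl http://172.25.19.100    ## should print www.westos.com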


Test:

1)

echo c > /proc/sysrq-trigger    ## the service was on server1
clustat



df





 

2)

/etc/init.d/httpd stop
watch -n 1 clustat     ## watch on the other node; the status goes recoverable → starting → started




7. Clustered filesystem (GFS2)

clusvcadm -d apache


Service Groups

Delete the Filesystem and the Script.

Resources

Delete webdata (the Filesystem resource).

lvremove /dev/clustervg/lv1 
lvs        ## watch on server2 at the same time

lvcreate -L 2G -n demo clustervg
lvs        ## watch on server2 at the same time
mkfs.gfs2 -p lock_dlm -t wjl_ha:mygfs2 -j 3 /dev/clustervg/demo        ## 3 = number of nodes + 1; wjl_ha must be the cluster name



gfs2_tool sb /dev/clustervg/demo all



Test:
Mount it at /mnt on both server1 and server2. A file created by server1 under /mnt is immediately visible and writable on server2.
gfs2_tool journals /dev/clustervg/demo


Permanent mount
[server1/server2]

umount /mnt


Mount it at /var/www/html on both server1 and server2.

vim /etc/fstab

UUID="d287c031-bb4a-013a-3d29-ddd651a4b168"     /var/www/html   gfs2    _netdev 0 0
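The UUID can be read with blkid (a quick check; yours will differ), and mount -a then applies the fstab entry:

blkid /dev/clustervg/demo
mount -a
df -h /var/www/html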




Add a GFS2 resource; under Service Groups add the GFS2 and the Script resources.


clusvcadm -e apache
echo www.westos.com > /var/www/html/index.html

The web page shows www.westos.com.
Mount the filesystem on the other VM and edit index.html (e.g. change it to www.westos.org); after a refresh the page shows www.westos.org.

Add journals:

gfs2_jadd -j 2 /dev/clustervg/demo



Extend the logical volume and grow the filesystem:

lvextend -l +511 /dev/clustervg/demo
gfs2_grow /dev/clustervg/demo
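A quick check of the grown filesystem:

df -h /var/www/html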



Notes:

1. If things fail to synchronize between nodes, the clocks may be out of sync.
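A quick manual sync (the NTP source here is a hypothetical example; point it at your own time server):

ntpdate 172.25.254.19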

2. If the web page does not load, the luci service on the management (M) node may be stopped:

[root@server3 ~]# /etc/init.d/luci status
No PID file /var/run/luci/luci.pid
[root@server3 ~]# /etc/init.d/luci start
Starting saslauthd:                                        [  OK  ]
Start luci...                                              [  OK  ]
Point your web browser to https://server3.example.com:8084 (or equivalent) to access luci
[root@server3 ~]#