Distributed Network File System: MFS High Availability

I. Data Backup in the MFS File System

The configuration below is done on top of an MFS network file system that has already been set up.

First, start the services on each node:

[root@server1 ~]# systemctl start moosefs-master
[root@server1 ~]# systemctl enable moosefs-master
[root@server1 ~]# systemctl start moosefs-cgiserv.service 

[root@server2 ~]# systemctl start moosefs-chunkserver
[root@server2 ~]# systemctl enable moosefs-chunkserver

[root@server3 ~]# systemctl start moosefs-chunkserver
[root@server3 ~]# systemctl enable moosefs-chunkserver

Open http://172.25.52.1:9425/ in a browser to reach the MFS monitoring page.

1. Recovering a file deleted on the client

After the file passwd has been deleted, create an mfsmeta directory under /mnt/ and mount the MFS meta filesystem on it:

[root@foundation52 mnt]# mkdir mfsmeta
[root@foundation52 mnt]# cd mfsmeta/
[root@foundation52 mfsmeta]# pwd
/mnt/mfsmeta
[root@foundation52 mfsmeta]# ls
[root@foundation52 mfsmeta]# cd
[root@foundation52 ~]# mfsmount -m /mnt/mfsmeta/   ## mount the meta filesystem
mfsmaster accepted connection with parameters: read-write,restricted_ip
[root@foundation52 ~]# cd /mnt/mfsmeta/
[root@foundation52 mfsmeta]# ls
sustained  trash
[root@foundation52 mfsmeta]# cd trash/
[root@foundation52 trash]# ls
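
The recovery step itself is not shown above. As a minimal sketch (the entry name below is hypothetical, and depending on the MooseFS version the deleted entry may sit directly under trash/ or inside a numbered subdirectory), a deleted file is restored by moving its trash entry into the undel/ directory:

cd /mnt/mfsmeta/trash
find . -name '*passwd*'              ## the trash entry keeps the original path in its name
mv './00000004|dir1|passwd' undel/   ## hypothetical entry name; moving it into undel/ restores the file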

Note: after the file has been recovered, unmount the /mnt/mfsmeta/ directory:

[root@foundation52 mfs]# umount /mnt/mfsmeta/  ## unmount

2. File storage (large files are split into chunks)

[root@foundation52 mfs]# ls
dir1  dir2
[root@foundation52 mfs]# cd dir1
[root@foundation52 dir1]# ls
passwd
[root@foundation52 dir1]# dd if=/dev/zero of=bigfile bs=1M count=100
100+0 records in
100+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.178912 s, 586 MB/s
[root@foundation52 dir1]# ls
bigfile  passwd
[root@foundation52 dir1]# mfsfileinfo bigfile ## larger files are split into chunks and stored across chunkservers
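
The mfsfileinfo output is not reproduced here. MooseFS stores data in 64 MiB chunks, so the 100 MiB bigfile occupies two chunks, each replicated according to its goal; a sketch of commands that make this visible (paths assume the client mount used above):

mfsfileinfo /mnt/mfs/dir1/bigfile    ## lists every 64 MiB chunk and the chunkservers holding a copy of it
mfscheckfile /mnt/mfs/dir1/bigfile   ## summarizes the chunks by how many valid copies each one has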

 

 

3. Configuring file replication (storage classes)

Add a new chunkserver node:

[root@server4 ~]# cd /etc/yum.repos.d/
[root@server4 yum.repos.d]# ls
dvd.repo  MooseFS.repo  redhat.repo
[root@server4 yum.repos.d]# yum install -y moosefs-chunkserver   ## install the chunkserver package
[root@server4 yum.repos.d]# vim /etc/hosts
[root@server4 yum.repos.d]# mkdir /mnt/chunk3
[root@server4 yum.repos.d]# chown mfs.mfs /mnt/chunk3/
[root@server4 yum.repos.d]# ll -d /mnt/chunk3/
drwxr-xr-x 2 mfs mfs 6 Dec 27 10:25 /mnt/chunk3/
[root@server4 yum.repos.d]# vim /etc/mfs/mfshdd.cfg
/mnt/chunk3
[root@server4 yum.repos.d]# systemctl start moosefs-chunkserver  ## start the service
[root@server4 yum.repos.d]# systemctl enable moosefs-chunkserver   ## enable it at boot
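
The /etc/hosts edit above is not shown in detail; assuming the master is still server1 at 172.25.52.1 at this point, the entry the new chunkserver needs would look like this:

172.25.52.1   mfsmaster   ## assumed master address for now (later replaced by the cluster VIP)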

Check http://172.25.52.1:9425/ in a browser; the new node now appears in the chunkserver list.

Assign a label to each chunkserver by editing its configuration file:
[root@server2 ~]# vim /etc/mfs/mfschunkserver.cfg
LABELS = A
[root@server2 ~]# systemctl reload moosefs-chunkserver.service 

[root@server3 ~]# vim /etc/mfs/mfschunkserver.cfg
LABELS = B
[root@server3 ~]# systemctl reload moosefs-chunkserver.service 

[root@server4 yum.repos.d]# vim /etc/mfs/mfschunkserver.cfg
LABELS = C
[root@server4 yum.repos.d]# systemctl reload moosefs-chunkserver.service 

Check the result on the web frontend: http://172.25.52.1:9425/

On the client:

[root@foundation52 dir2]# ls
bigfile2  fstab
[root@foundation52 dir2]# mfsfileinfo fstab 
[root@foundation52 dir2]# mfsscadmin create A,B AB_class  ## define a storage class: one copy on an A-labeled server, one on a B-labeled server
[root@foundation52 dir2]# mfsscadmin list
[root@foundation52 dir2]# mfssetsclass AB_class fstab 
[root@foundation52 dir2]# mfsfileinfo fstab 
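
A quick way to verify the assignment (a sketch; mfsgetsclass is part of the standard MooseFS client tools):

mfsgetsclass fstab   ## should report that fstab now uses AB_class
mfsfileinfo fstab    ## after rebalancing, the copies should sit on the A- and B-labeled chunkservers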

 

Checking the file's storage info again shows that the copies, previously stored on server3 and server4, are now on server2 and server3 (the A- and B-labeled chunkservers).

Because the lab environment is limited and there are not enough hosts, we modify the configuration file of each moosefs-chunkserver node to simulate a more realistic setup with several labels per server:

Edit the configuration file on each node:

[root@server2 ~]# vim /etc/mfs/mfschunkserver.cfg
LABELS = A S
[root@server2 ~]# systemctl reload moosefs-chunkserver.service 

[root@server3 ~]# vim /etc/mfs/mfschunkserver.cfg
LABELS = B H
[root@server3 ~]# systemctl reload moosefs-chunkserver.service 

[root@server4 yum.repos.d]# vim /etc/mfs/mfschunkserver.cfg
LABELS = C S
[root@server4 yum.repos.d]# systemctl reload moosefs-chunkserver.service 

Check the web frontend:

 

Adjust the labels again:

[root@server4 yum.repos.d]# vim /etc/mfs/mfschunkserver.cfg
LABELS = B C S
[root@server4 yum.repos.d]# systemctl reload moosefs-chunkserver.service 

Check the web frontend:

On the client:

Similarly, continue adjusting the label configuration:

[root@server2 ~]# vim /etc/mfs/mfschunkserver.cfg
LABELS = A H S
[root@server2 ~]# systemctl reload moosefs-chunkserver.service 

[root@server4 yum.repos.d]# vim /etc/mfs/mfschunkserver.cfg
LABELS =  A B C S
[root@server4 yum.repos.d]# systemctl reload moosefs-chunkserver.service 

On the client:

In practice, some data is read and written very frequently when it is new or during a certain period (hot data) and later turns into "cold data". Storage classes let us control where such a file is stored in the file system at each stage of its life cycle.

[root@foundation52 dir3]# mfsscadmin create -C 2AS -K AS,BS -A AH,BH -d 7 4s_class
[root@foundation52 dir3]# cd ..
[root@foundation52 mfs]# mkdir dir4
[root@foundation52 mfs]# mfssetsclass -r 4s_class dir4/
dir4/:
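
Based on the mfsscadmin manual, the options used above mean roughly the following: -C 2AS requests two copies on servers labeled A and S right after creation, -K AS,BS keeps one copy on an A+S server and one on a B+S server during normal use, -A AH,BH moves the copies to A+H and B+H servers once the file is archived, and -d 7 archives a file after 7 days without access. A hedged verification sketch:

mfsscadmin list      ## the new 4s_class should now appear in the list of storage classes
mfsgetsclass dir4/   ## the directory should report 4s_class, which files created inside it inherit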

Check the web frontend:

II. MFS High Availability

Add a 5 GB disk to server2 and export it over iSCSI:

[root@server2 ~]# fdisk -l  ## confirm the new disk is visible

 [root@server2 ~]# yum install -y targetcli

[root@server2 ~]# systemctl start target
[root@server2 ~]# targetcli 
/> ls
/> cd /backstores/
/backstores> cd block 
/backstores/block> ls
/backstores/block> cd ..
/backstores> ls
/backstores> cd ..
/> ls
/> cd iscsi 
/iscsi> ls
/iscsi> create iqn.2021-12.org.westos:target1
/iscsi> ls
/iscsi> cd iqn.2021-12.org.westos:target1/tpg1/acls 
/iscsi/iqn.20...et1/tpg1/acls> ls
/iscsi/iqn.20...et1/tpg1/acls> create iqn.2021-12.org.westos:client
/iscsi/iqn.20...et1/tpg1/acls> cd ..
/iscsi/iqn.20...:target1/tpg1> cd luns 
/iscsi/iqn.20...et1/tpg1/luns> ls
/iscsi/iqn.20...et1/tpg1/luns> create /backstores/block/my_disk 
/iscsi/iqn.20...et1/tpg1/luns> ls
/iscsi/iqn.20...et1/tpg1/luns> cd ..
/iscsi/iqn.20...:target1/tpg1> ls
/iscsi/iqn.20...:target1/tpg1> exit
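
One step is missing from the transcript above: before a LUN can be created from /backstores/block/my_disk, the block backstore itself has to be created from the new 5 GB disk. A sketch of that step inside targetcli (the device path /dev/vdb is an assumption; use whatever fdisk -l reported for the new disk):

/backstores/block> create my_disk /dev/vdb   ## assumed device path of the 5 GB disk added to server2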

 

[root@server1 ~]# yum install -y iscsi-*
[root@server1 ~]# cd /etc/iscsi/
[root@server1 iscsi]# ls
initiatorname.iscsi  iscsid.conf
[root@server1 iscsi]# cat initiatorname.iscsi 
InitiatorName=iqn.1994-05.com.redhat:5e41889393f2
[root@server1 iscsi]# vim initiatorname.iscsi 
InitiatorName=iqn.2021-12.org.westos:client 

Do the same configuration on server4:

[root@server4 ~]# systemctl stop moosefs-chunkserver.service 
[root@server4 ~]# systemctl disable moosefs-chunkserver.service 
[root@server4 ~]# yum install -y moosefs-master
[root@server4 ~]# yum install -y iscsi-*
[root@server4 ~]# cd /etc/iscsi/
[root@server4 iscsi]# ls
initiatorname.iscsi  iscsid.conf
[root@server4 iscsi]# vim initiatorname.iscsi 
[root@server4 iscsi]# cat initiatorname.iscsi 
InitiatorName=iqn.2021-12.org.westos:client
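
After changing the initiator name on server1 and server4, iscsid usually needs a restart (or will pick the new name up at the next login); a sketch:

systemctl restart iscsid   ## run on both initiators so the new InitiatorName takes effect
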
[root@server1 iscsi]# iscsiadm -m discovery -t st -p 172.25.52.2
172.25.52.2:3260,1 iqn.2021-12.org.westos:target1
[root@server1 iscsi]# iscsiadm -m node -l
Logging in to [iface: default, target: iqn.2021-12.org.westos:target1, portal: 172.25.52.2,3260] (multiple)
Login to [iface: default, target: iqn.2021-12.org.westos:target1, portal: 172.25.52.2,3260] successful.
[root@server1 iscsi]# fdisk -l
[root@server1 iscsi]# fdisk /dev/sda  ## partition the disk
[root@server1 iscsi]# fdisk -l /dev/sda
[root@server1 iscsi]# mkfs.xfs /dev/sda1  ## format the partition as xfs
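
The interactive fdisk session is not reproduced above; for reference, a non-interactive equivalent that creates a single partition spanning the LUN (a sketch using parted, not what was actually run here):

parted -s /dev/sda mklabel msdos mkpart primary xfs 1MiB 100%   ## creates /dev/sda1 across the whole disk
mkfs.xfs /dev/sda1                                              ## then format it as xfs, as above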

 

[root@server1 iscsi]# mount /dev/sda1 /mnt/  ## mount the shared disk temporarily
[root@server1 iscsi]# df
[root@server1 iscsi]# rm -fr /mnt/*
[root@server1 iscsi]# systemctl stop moosefs-master.service ## stop the master service
[root@server1 iscsi]# cd /var/lib/mfs/
[root@server1 mfs]# ls
changelog.0.mfs  changelog.2.mfs  metadata.crc  metadata.mfs.back.1  stats.mfs
changelog.1.mfs  changelog.3.mfs  metadata.mfs  metadata.mfs.empty
[root@server1 mfs]# cp -p * /mnt/
[root@server1 mfs]# cd /mnt
[root@server1 mnt]# ls
[root@server1 mnt]# ll
[root@server1 mnt]# chown mfs.mfs /mnt/  ## change the owner and group of the mount point to mfs
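
One step is implied but not shown: before server4 mounts the same device, the disk should be unmounted on server1, since XFS is not a cluster file system and must never be mounted on two nodes at once:

umount /mnt   ## on server1, after the metadata has been copied over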

 

Perform the analogous operations on server4:

[root@server4 iscsi]# iscsiadm -m discovery -t st -p 172.25.52.2
[root@server4 iscsi]# iscsiadm -m node -l
[root@server4 iscsi]# fdisk -l /dev/sda
[root@server4 ~]# mount /dev/sda1 /var/lib/mfs/
[root@server4 ~]# df
[root@server4 ~]# cd /var/lib/mfs/
[root@server4 mfs]# ls
[root@server4 mfs]# ll
[root@server4 mfs]# systemctl start moosefs-master
[root@server4 mfs]# netstat -antlp  ## check the listening ports
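
The ports to look for in the netstat output are the standard moosefs-master listeners; a narrower check (a sketch):

netstat -antlp | grep mfsmaster   ## should show 9419 (metaloggers), 9420 (chunkservers), 9421 (clients) in LISTEN state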

The disk is mounted on the target directory:

Check the file ownership, start the moosefs-master service, and verify that its ports are listening:

Finally, remember to stop the service and unmount the disk:

[root@server4 mfs]# systemctl stop moosefs-master.service 
[root@server4 ~]# umount /var/lib/mfs
[root@server4 ~]# df

 

MFS high-availability configuration

[root@foundation52 ~]# cd /var/www/html/rhel7.6
[root@foundation52 rhel7.6]# ls
addons  EULA              GPL     isolinux  media.repo  repodata                 RPM-GPG-KEY-redhat-release
EFI     extra_files.json  images  LiveOS    Packages    RPM-GPG-KEY-redhat-beta  TRANS.TBL
[root@foundation52 rhel7.6]# cd addons/
[root@foundation52 addons]# ls
HighAvailability  ResilientStorage
[root@foundation52 addons]# 
[root@server1 ~]# ssh-keygen 
[root@server1 ~]# ssh-copy-id server4
[root@server1 ~]# cd /etc/yum.repos.d/
[root@server1 yum.repos.d]# ls
dvd.repo  MooseFS.repo  redhat.repo
[root@server1 yum.repos.d]# vim dvd.repo 
[root@server1 yum.repos.d]# cat dvd.repo 
[dvd]
name=rhel7.6
baseurl=http://172.25.52.250/rhel7.6
gpgcheck=0

[HighAvailability]
name=rhel7.6 HighAvailability
baseurl=http://172.25.52.250/rhel7.6/addons/HighAvailability
gpgcheck=0

[root@server1 yum.repos.d]# yum install -y pacemaker pcs psmisc policycoreutils-python
[root@server1 yum.repos.d]# ls
dvd.repo  MooseFS.repo  redhat.repo
[root@server1 yum.repos.d]# scp dvd.repo server4:/etc/yum.repos.d/
dvd.repo                                                                        100%  190   135.1KB/s   00:00 
   
[root@server1 yum.repos.d]# ssh server4 yum install -y pacemaker pcs psmisc policycoreutils-python
[root@server1 ~]# systemctl start pcsd.service 
[root@server1 ~]# systemctl enable pcsd.service 
[root@server1 ~]# ssh server4 systemctl enable --now pcsd.service
[root@server1 ~]# id hacluster
uid=189(hacluster) gid=189(haclient) groups=189(haclient)
[root@server1 ~]# echo westos | passwd --stdin hacluster
[root@server1 ~]# ssh server4 'echo westos | passwd --stdin hacluster'
[root@server1 ~]# systemctl start pcsd.service 

 

[root@server1 ~]# pcs cluster auth server1 server4
Username: hacluster
Password: 
server4: Authorized
server1: Authorized
[root@server1 ~]# pcs cluster setup --name mycluster server1 server4
[root@server1 ~]# pcs cluster start --all  ## start the cluster on all nodes
[root@server1 ~]# pcs status 

 

[root@server1 ~]# crm_verify -LV
   error: unpack_resources:	Resource start-up disabled since no STONITH resources have been defined
   error: unpack_resources:	Either configure some or disable STONITH with the stonith-enabled option
   error: unpack_resources:	NOTE: Clusters with shared data need STONITH to ensure data integrity
Errors found during check: config not valid
[root@server1 ~]# pcs property set stonith-enabled=false
[root@server1 ~]# crm_verify -LV
[root@server1 ~]# pcs status 

 [root@server1 ~]# pcs resource create vip ocf:heartbeat:IPaddr2 ip=172.25.52.100 op monitor interval=30s
[root@server1 ~]# pcs status

[root@server1 ~]# ip addr

[root@server1 ~]# ip addr del 172.25.52.100/24 dev eth0  ## delete the VIP by hand
[root@server1 ~]# ip addr  ## check the IP addresses

After about 60 seconds, the cluster notices the missing address and adds the IP 172.25.52.100 back automatically.

[root@server1 ~]# pcs status  ## check the cluster status

[root@server1 ~]# pcs node standby
[root@server1 ~]# pcs status  ## server1 is now standby and the vip moves to server4

 

[root@server1 ~]# pcs node unstandby  ## bring server1 back online
[root@server1 ~]# pcs status  ## the vip stays on server4

[root@server1 ~]# pcs resource standards 
[root@server1 ~]# pcs resource providers 
[root@server1 ~]# pcs resource agents ocf:heartbeat 

 

 [root@server1 ~]# pcs resource describe ocf:heartbeat:IPaddr2

On the client:

[root@foundation52 addons]# vim /etc/hosts  ## update the local name resolution
172.25.52.100  mfsmaster
[root@foundation52 addons]# ping mfsmaster
PING mfsmaster (172.25.52.100) 56(84) bytes of data.
64 bytes from mfsmaster (172.25.52.100): icmp_seq=1 ttl=64 time=0.420 ms
64 bytes from mfsmaster (172.25.52.100): icmp_seq=2 ttl=64 time=0.149 ms
64 bytes from mfsmaster (172.25.52.100): icmp_seq=3 ttl=64 time=0.165 ms
^C
--- mfsmaster ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 62ms
rtt min/avg/max/mdev = 0.149/0.244/0.420/0.125 ms
[root@foundation52 addons]# 

Name resolution:

[root@server1 ~]# vim /etc/hosts  ## add the mfsmaster entry on every node

172.25.52.100 mfsmaster
[root@server1 ~]# pcs status
[root@server1 ~]# pcs resource describe ocf:heartbeat:Filesystem

[root@server1 ~]# pcs resource create mfsdata ocf:heartbeat:Filesystem device=/dev/sda1 directory=/var/lib/mfs fstype=xfs op monitor interval=60s
[root@server1 ~]# pcs status   ## the vip and mfsdata resources are not on the same node yet

 

[root@server1 ~]# df
Filesystem            1K-blocks    Used Available Use% Mounted on
/dev/mapper/rhel-root  17811456 1273748  16537708   8% /
devtmpfs                1011400       0   1011400   0% /dev
tmpfs                   1023468   54624    968844   6% /dev/shm
tmpfs                   1023468   29772    993696   3% /run
tmpfs                   1023468       0   1023468   0% /sys/fs/cgroup
/dev/vda1               1038336  135172    903164  14% /boot
tmpfs                    204696       0    204696   0% /run/user/0
/dev/sda1               5231616   37900   5193716   1% /mnt
[root@server1 ~]# cd /var/lib/mfs/
[root@server1 mfs]# ls
[root@server1 mfs]# pcs resource create mfsd systemd:moosefs-master op monitor interval=60s
[root@server1 mfs]# pcs status 

[root@server1 mfs]# cd
[root@server1 ~]# pcs resource group add mfsgroup vip mfsdata mfsd  ## group vip, mfsdata and mfsd so they fail over together and always stay on one node
[root@server1 ~]# pcs status 

[root@server1 ~]# df  ## the shared disk is mounted
[root@server1 ~]# ip addr    ## the vip is present
[root@server1 ~]# ps ax

 

 

On server4:

Start moosefs-chunkserver.service on server2 and server3, updating /etc/hosts so that mfsmaster resolves correctly:
[root@server2 ~]# vim /etc/hosts
[root@server2 ~]# systemctl start moosefs-chunkserver
-----------

[root@server3 ~]# vim /etc/hosts
[root@server3 ~]# systemctl start moosefs-chunkserver.service 

Visit: http://172.25.52.1:9425/

On the client:

[root@foundation52 ~]# mfsmount      ## mount the MFS root
[root@foundation52 ~]# cd /mnt/mfs
[root@foundation52 mfs]# ls
dir1  dir2  dir3  dir4
[root@foundation52 mfs]# cd dir1
[root@foundation52 dir1]# ls
bigfile  passwd
[root@foundation52 dir1]# mfsfileinfo passwd

Failure testing

Failure scenario 1: moosefs-master fails while a user is reading or writing a file

[root@foundation52 dir1]# dd if=/dev/zero of=bigfile2 bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB, 1000 MiB) copied, 2.3436 s, 447 MB/s
[root@foundation52 dir1]# 

On the master side (mfsmaster has been running on server1 up to now):

[root@server1 ~]# pcs node standby   ## put server1 into standby
[root@server1 ~]# pcs status   ## the resources switch straight over to server4

The pcs status output shows that after the switch to server4, mfsd is stopped. This is because moosefs-master was previously shut down uncleanly on server4, so when the service starts it cannot find the metadata.mfs file. We have to repair this manually on server4 before the service can start: run mfsmaster -a, then shut the moosefs-master service down cleanly on server4.
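
A sketch of that manual repair on server4 (mirroring what is done on server1 later in this article, and assuming the shared metadata disk is mounted on /var/lib/mfs there at that moment):

mfsmaster -a      ## rebuild metadata.mfs from metadata.mfs.back plus the changelogs
mfsmaster stop    ## shut the master down cleanly so pacemaker can manage it again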

Because the failover to server4 ran into this problem, once server1 is taken out of standby the resources automatically switch back to server1:

[root@server1 ~]# pcs node unstandby 
[root@server1 ~]# pcs status 

Failure scenario 2: the moosefs-master service is stopped

[root@server1 ~]# systemctl stop moosefs-master.service ## stop the service
[root@server1 ~]# pcs status 

After waiting up to 60 seconds, mfsd is started again automatically (apart from the stopped mfsd service, server1 has no other problem; a failover would have to move vip, mfsdata and mfsd all together, so simply restarting mfsd on server1 is enough).

Failure scenario 3: the network interface goes down

[root@server1 ~]# ip link set down dev eth0

server1 has failed and the cluster should switch to server4, but because server1 went down uncleanly, the vip and mfsd still appear on server1 and the directory is still mounted there:

[root@server4 ~]# pcs status

Recover from the downed link on server1:

 

We need to shut the server1 virtual machine down and boot it again, then rejoin the cluster:

[root@server1 ~]# pcs cluster start
Starting Cluster (corosync)...
Starting Cluster (pacemaker)...
[root@server1 ~]# pcs status 

[root@server4 ~]# pcs resource disable mfsd
[root@server4 ~]# pcs status 

[root@server1 ~]# cd /var/lib/mfs/
[root@server1 mfs]# ls
changelog.0.mfs  changelog.2.mfs  changelog.4.mfs  metadata.crc       metadata.mfs.back.1  stats.mfs
changelog.1.mfs  changelog.3.mfs  changelog.5.mfs  metadata.mfs.back  metadata.mfs.empty
[root@server1 mfs]# mfsmaster -a  ## repair: rebuild the metadata
[root@server1 mfs]# mfsmaster stop
sending SIGTERM to lock owner (pid:5146)
waiting for termination terminated
[root@server1 mfs]# ls
changelog.0.mfs  changelog.2.mfs  changelog.4.mfs  metadata.crc  metadata.mfs.back.1  stats.mfs
changelog.1.mfs  changelog.3.mfs  changelog.5.mfs  metadata.mfs  metadata.mfs.empty
[root@server1 mfs]# pcs resource enable mfsd
[root@server1 mfs]# pcs resource refresh mfsd
Cleaned up vip on server4
Cleaned up vip on server1
Cleaned up mfsdata on server4
Cleaned up mfsdata on server1
Cleaned up mfsd on server4
Cleaned up mfsd on server1
[root@server1 mfs]# pcs status ## the repair succeeded and the cluster is healthy again

 

 

Failure scenario 4: kernel crash

Simulate a kernel problem:
[root@server1 mfs]# echo c > /proc/sysrq-trigger    

After the master switches to server4, the moosefs-master service cannot start normally there:

Repair it so that the file system works normally again:

[root@server4 ~]# cd /var/lib/mfs/
[root@server4 mfs]# ls
changelog.0.mfs  changelog.2.mfs  changelog.4.mfs  metadata.crc       metadata.mfs.back.1  stats.mfs
changelog.1.mfs  changelog.3.mfs  changelog.5.mfs  metadata.mfs.back  metadata.mfs.empty
[root@server4 mfs]# cd
[root@server4 ~]# systemctl status moosefs-master.service 
[root@server4 ~]# vim /usr/lib/systemd/system/moosefs-master.service
ExecStart=/usr/sbin/mfsmaster -a    ## same change as made on server1 below: start the master with automatic metadata recovery
[root@server4 ~]# systemctl daemon-reload 

 

[root@server4 ~]# pcs resource cleanup mfsd
[root@server4 ~]# pcs status ## the master is now server4

After server1's kernel crash: power it off and start it again (note: a forced power-off, not a clean shutdown).

[root@server1 ~]# vim /usr/lib/systemd/system/moosefs-master.service
ExecStart=/usr/sbin/mfsmaster -a
[root@server1 ~]# systemctl daemon-reload 
[root@server1 ~]# pcs cluster start
Starting Cluster (corosync)...
Starting Cluster (pacemaker)...
[root@server1 ~]# pcs status 

Installing and configuring fencing

[root@server1 ~]# yum install -y fence-virt.x86_64
[root@server1 ~]# pcs stonith list
fence_virt - Fence agent for virtual machines
fence_xvm - Fence agent for virtual machines

On the physical host:

[root@foundation52 mfs]# yum install fence-virtd-libvirt.x86_64 fence-virtd-multicast.x86_64 fence-virtd.x86_64 
[root@foundation52 mfs]# rpm -qa|grep fence
[root@foundation52 mfs]# fence_virtd -c

A fence_xvm.key file needs to be created under the /etc/cluster/ directory.
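
The key is just a small random file shared between the physical host and the cluster nodes; a common way to generate it (a sketch):

mkdir -p /etc/cluster
dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=128 count=1   ## 128-byte shared key read by fence_virtd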

[root@foundation52 mfs]# systemctl restart fence_virtd.service
[root@foundation52 mfs]# netstat -anulp | grep :1229

[root@foundation52 ~]# systemctl status fence_virtd.service

On server4:

[root@server4 ~]# yum install -y fence-virt
[root@server4 ~]# pcs stonith list
fence_virt - Fence agent for virtual machines
fence_xvm - Fence agent for virtual machines

On server1:

[root@server1 ~]# yum install -y fence-virt
[root@server1 ~]# pcs stonith list
fence_virt - Fence agent for virtual machines
fence_xvm - Fence agent for virtual machines
[root@server1 ~]# pcs stonith describe fence_xvm
[root@server1 ~]# mkdir /etc/cluster
[root@server1 ~]# ssh server4 mkdir /etc/cluster

On the physical host, copy the key to server1 and server4:

[westos@foundation52 Desktop]$ cd /etc/cluster/
[westos@foundation52 cluster]$ ls
fence_xvm.key
[westos@foundation52 cluster]$ scp fence_xvm.key root@172.25.52.1:/etc/cluster
[westos@foundation52 cluster]$ scp fence_xvm.key root@172.25.52.4:/etc/cluster

[root@server1 ~]# pcs stonith create vmfence fence_xvm pcmk_host_map="server1:test1;server4:test4" op monitor interval=60s
[root@server1 ~]# pcs property set stonith-enabled=true
[root@server1 ~]# pcs status 

In the status output vmfence shows as stopped; this is because the firewall on the physical host is enabled, and disabling it (or opening the fencing port) fixes the problem.
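
A sketch of the fix on the physical host: either stop firewalld entirely or open just the fence_virtd port (1229/udp, the port seen in the netstat check earlier):

systemctl stop firewalld                                               ## quick fix
firewall-cmd --permanent --add-port=1229/udp && firewall-cmd --reload  ## or allow only the fencing port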

[root@server1 ~]# crm_verify -LV
[root@server1 ~]# pcs cluster start
Starting Cluster (corosync)...
Starting Cluster (pacemaker)...
[root@server1 ~]# pcs cluster enable --all
server1: Cluster Enabled
server4: Cluster Enabled
[root@server1 ~]# 

 

Then simulate a failure by crashing the kernel:

[root@server4 ~]# echo c > /proc/sysrq-trigger

The master switches over directly to server1:
