High Availability for the MFS Distributed Filesystem


Contents

Experiment environment

backup

Disk partition sharing

Failure tests

ip link set down dev eth0

echo c > /proc/sysrq-trigger

Installing and configuring fence

vm1, vm4

Install on the physical host

Creating /etc/cluster/fence_xvm.key

Creating the stonith resource

Starting

Testing

Experiment environment

Master: vm1

Chunk servers: vm2, vm3

Client: the physical host machine

backup

Backup master: vm4

yum install moosefs-master -y

Disk partition sharing

Add an extra disk to vm2, then export it over iSCSI:

yum install targetcli -y
systemctl start target.service
targetcli
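The interactive targetcli steps are not shown above; the following is a minimal sketch, assuming the new disk appears as /dev/vdb and using iqn.2021-12.org.westos:storage1 as a placeholder target IQN (only the ACL has to match the initiator name configured on vm1/vm4 below). Run these inside the targetcli shell started above:

/backstores/block create mfsmeta /dev/vdb
/iscsi create iqn.2021-12.org.westos:storage1
/iscsi/iqn.2021-12.org.westos:storage1/tpg1/luns create /backstores/block/mfsmeta
/iscsi/iqn.2021-12.org.westos:storage1/tpg1/acls create iqn.2021-12.org.westos:client
exit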

On vm1 (the master) and vm4 (the backup master), set up the iSCSI initiator:

yum install iscsi-* -y
cd /etc/iscsi/

vim initiatorname.iscsi


cat initiatorname.iscsi
InitiatorName=iqn.2021-12.org.westos:client



iscsiadm -m discovery -t st -p 172.25.7.6

iscsiadm -m node -l



Only one of the two initiators needs to create the partition and format it.
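For example, on vm1 (a sketch; the shared iSCSI disk shows up as /dev/sda here, matching the mount commands that follow):

fdisk /dev/sda        # create one primary partition: n, p, 1, accept the defaults, w
mkfs.xfs /dev/sda1    # xfs matches the fstype used later for the cluster Filesystem resource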

Mount the disk

On vm1:

mount /dev/sda1 /mnt/

Stop the master service, then copy the existing metadata onto the shared disk and fix the ownership:

cd /var/lib/mfs/
cp -p * /mnt/
chown mfs.mfs /mnt/

On vm4, check that the data is shared:

mount /dev/sda1 /var/lib/mfs/
systemctl start moosefs-master
systemctl stop moosefs-master
umount /var/lib/mfs/

Set up passwordless SSH between vm1 and vm4:

ssh-keygen
ssh-copy-id vm4

Modify vm1's yum repository and add the HighAvailability directory:

[root@vm1 mnt]# vim /etc/yum.repos.d/dvd.repo 
[root@vm1 mnt]# cat /etc/yum.repos.d/dvd.repo 
[dvd]
name=dvd
baseurl=http://172.25.7.250/rhel7.6
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release/
enabled=1

[HighAvailability]
name=dvd HighAvailability
baseurl=http://172.25.7.250/rhel7.6/addons/HighAvailability 
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release/
enabled=1

vm4 needs the same change:
scp /etc/yum.repos.d/dvd.repo vm4:/etc/yum.repos.d/dvd.repo

Building the high-availability cluster

On vm1:

yum install -y pacemaker pcs psmisc policycoreutils-python

ssh vm4 yum install -y pacemaker pcs psmisc policycoreutils-python

systemctl enable --now pcsd.service 

ssh vm4 systemctl enable --now pcsd.service

[root@vm1 mnt]# id hacluster
uid=189(hacluster) gid=189(haclient) groups=189(haclient)

On vm1 and vm4, set a password for the hacluster user:

echo westos | passwd --stdin hacluster

ssh vm4 'echo westos | passwd --stdin hacluster'

Authenticate the cluster user on both nodes:

pcs cluster auth vm1 vm4
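On the pcs 0.9 series shipped with RHEL 7 the authentication can also be done non-interactively; a sketch, using the hacluster password set above:

pcs cluster auth vm1 vm4 -u hacluster -p westos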

pcs cluster setup --name mycluster vm1 vm4

pcs cluster start --all    # start the cluster on all nodes

The services can also be started locally on each node:
systemctl start corosync.service
systemctl start pacemaker.service

pcs status    # shows a STONITH warning at this point

crm_verify -LV
pcs property set stonith-enabled=false    # disable STONITH for now; it is re-enabled once fencing is set up
crm_verify -LV
pcs status    # no errors now

Adding a cluster-managed VIP

pcs resource create VIP ocf:heartbeat:IPaddr2 ip=172.25.7.100 op monitor interval=30s

ocf:heartbeat:IPaddr2 is the resource agent that manages the address;
op monitor interval=30s sets how often the resource is monitored.

Test

Resource-level monitoring: delete the address by hand on the node that holds it,

ip addr del 172.25.7.100/24 dev eth0

and the monitor restores it automatically within 30 s.
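To watch the address come back (a small sketch; eth0 is the interface that carries the VIP in this setup):

watch -n 1 "ip addr show eth0 | grep 172.25.7.100"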

Nodes

vm1: pcs node standby puts the node in standby (pcs node unstandby brings it back);
the resources move to the other node while it rests.

pcs resource standards      # list the supported resource standards

pcs resource providers      # list the available OCF providers

pcs resource agents ocf:heartbeat      # list the agents offered by the ocf:heartbeat provider

pcs resource describe ocf:heartbeat:IPaddr

Shows the description of the options the agent accepts.

Adding storage management

Add local name resolution on vm1~vm4 and on the client (see the sketch below).

Stop both the master and the chunk servers first.
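The key entry is mapping the mfsmaster name to the VIP, so chunk servers and clients follow whichever node currently holds it. A sketch, assuming plain /etc/hosts resolution (vm1~vm4 should also be listed with their real addresses):

# appended to /etc/hosts on vm1-vm4 and on the client
172.25.7.100   mfsmaster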

 vm1

pcs resource describe ocf:heartbeat:Filesystem

Unmount the manually mounted disk before creating the resource:
pcs resource create mfsdata ocf:heartbeat:Filesystem device=/dev/sda1 directory=/var/lib/mfs/ fstype=xfs op monitor interval=60s

 

pcs resource create mfsd systemd:moosefs-master op monitor interval=60s

pcs resource group add mfsgroup VIP mfsdata mfsd

Grouping the resources keeps them on the same node; they start in the listed order (VIP, then the filesystem, then the master) and stop in reverse.
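To confirm the grouping (a sketch using standard pcs subcommands; the output reflects your own cluster):

pcs resource group list      # groups and their member resources
pcs resource show mfsgroup   # configuration of the group's resources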

Fail the resources back and forth between the nodes:

[root@vm4 .ssh]# pcs node standby

Failure tests

Take eth0 down on the node that currently holds the resources (vm4 here):

ip link set down dev eth0

vm1 tries to take over, but the moosefs-master resource does not come up on it.

Force off vm4, reconnect it, and start the cluster on it again:

pcs cluster start

vm1 may also need to be reconnected; start the cluster on it as well:

pcs cluster start

pcs resource disable mfsd

Repair the metadata by hand, then give the service back to the cluster:

mfsmaster stop
mfsmaster -a      # recover the metadata automatically from the changelogs

mfsmaster stop
pcs resource enable mfsd
pcs resource refresh mfsd

echo c > /proc/sysrq-trigger

This crashes the kernel of the node that runs it.

So that the master can start again without manual repair, change "start" to "-a" in its service unit:

[root@vm1 mfs]# vim /usr/lib/systemd/system/moosefs-master.service
[root@vm1 mfs]# systemctl daemon-reload 
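A sketch of the edit, assuming the shipped unit starts the master with /usr/sbin/mfsmaster start (check the exact path in your own unit file):

# /usr/lib/systemd/system/moosefs-master.service -- only the start command changes
# before:  ExecStart=/usr/sbin/mfsmaster start
ExecStart=/usr/sbin/mfsmaster -a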

 

 

 pcs resource cleanup mfsd

 

 

On vm4:

Force it off, power it back on, make the same unit change, and rejoin the cluster:

vim /usr/lib/systemd/system/moosefs-master.service
systemctl daemon-reload
pcs cluster start

 

 

Installing and configuring fence

When the primary node fails unexpectedly or crashes, the backup first calls the FENCE device, which reboots the failed host or cuts it off from the network. Once the FENCE operation has succeeded, it reports back to the backup, which then takes over the primary's services and resources. By releasing the resources the failed node was holding, the FENCE device guarantees that resources and services always run on exactly one node, and it effectively prevents split-brain (the HA nodes splitting into two independent nodes that then start fighting over the shared resources).

On vm1 and vm4:

yum install fence-virt -y
 pcs stonith list

pcs stonith describe fence_xvm
[root@vm1 mfs]# mkdir /etc/cluster/ 
[root@vm1 mfs]# ssh vm4 mkdir /etc/cluster/

 

 

Install on the physical host:

fence-virtd-libvirt-0.4.0-9.el8.x86_64

fence-virtd-0.4.0-9.el8.x86_64

fence-virtd-multicast-0.4.0-9.el8.x86_64

fence_virtd -c    # create the configuration interactively

[root@students72 dir2]# fence_virtd -c
Module search path [/usr/lib64/fence-virt]:

Available backends:
    libvirt 0.3
Available listeners:
    multicast 1.2

Listener modules are responsible for accepting requests
from fencing clients.

Listener module [multicast]:

The multicast listener module is designed for use environments
where the guests and hosts may communicate over a network using
multicast.

The multicast address is the address that a client will use to
send fencing requests to fence_virtd.

Multicast IP Address [225.0.0.12]:

Using ipv4 as family.

Multicast IP Port [1229]:

Setting a preferred interface causes fence_virtd to listen only
on that interface.  Normally, it listens on all interfaces.
In environments where the virtual machines are using the host
machine as a gateway, this *must* be set (typically to virbr0).
Set to 'none' for no interface.

Interface [virbr0]: br0

The key file is the shared key information which is used to
authenticate fencing requests.  The contents of this file must
be distributed to each physical host and virtual machine within
a cluster.

Key File [/etc/cluster/fence_xvm.key]:

Backend modules are responsible for routing requests to
the appropriate hypervisor or management layer.

Backend module [libvirt]:

The libvirt backend module is designed for single desktops or
servers.  Do not use in environments where virtual machines
may be migrated between hosts.

Libvirt URI [qemu:///system]:

Configuration complete.

=== Begin Configuration ===
backends {
    libvirt {
        uri = "qemu:///system";
    }

}

listeners {
    multicast {
        port = "1229";
        family = "ipv4";
        interface = "br0";
        address = "225.0.0.12";
        key_file = "/etc/cluster/fence_xvm.key";
    }

}

fence_virtd {
    module_path = "/usr/lib64/fence-virt";
    backend = "libvirt";
    listener = "multicast";
}

=== End Configuration ===
Replace /etc/fence_virt.conf with the above [y/N]? y 

Creating /etc/cluster/fence_xvm.key (on the physical host):

mkdir -p /etc/cluster/
 dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=128 count=1

 

systemctl restart fence_virtd.service
 netstat -anulp | grep 1229

 

 

Copy the host's key to vm1 and vm4:

scp fence_xvm.key vm1:/etc/cluster/
scp fence_xvm.key vm4:/etc/cluster/

Creating the stonith resource:

pcs stonith create vmfence fence_xvm pcmk_host_map="vm1:vm1;vm4:vm4" op monitor interval=60s
[root@vm1 mfs]# pcs property set stonith-enabled=true


crm_verify -LV    # check for configuration errors

 

Starting

If the vmfence resource does not start, it is usually a local name-resolution problem; fix the resolution, then clean up the resource:

pcs stonith cleanup vmfence

Testing:

echo c > /proc/sysrq-trigger
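The node that runs this crashes; the surviving node should fence it through vmfence, and the fenced machine reboots and rejoins the cluster. Fencing can also be exercised by hand to verify the key and multicast setup; a sketch, assuming the libvirt domain names match the pcmk_host_map entries above:

fence_xvm -o list      # list the domains fence_virtd can see
fence_xvm -H vm4       # fence (reboot) vm4 by hand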

 

 
