Offline Installation of GlusterFS 7.9.1 on CentOS 7


Environment

System IP         Hostname    Disk added and formatted    Mount point
192.168.32.128    servera     /dev/sdb1                   /brick/brick
192.168.32.132    serverb     /dev/sdb1                   /brick/brick

Add the following entries to /etc/hosts on every machine:

192.168.32.128 servera
192.168.32.132 serverb

servera:

[root@servera /]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.32.128 servera
192.168.32.132 serverb
[root@servera /]# 

serverb:

[root@serverb /]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.32.128 servera
192.168.32.132 serverb
[root@serverb /]# 
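A quick sanity check, not part of the original steps, is to confirm that each node can resolve the other by name:

ping -c 1 serverb    # run on servera
ping -c 1 servera    # run on serverb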

Preparation before installing

At least two virtual disks are needed: one for the OS installation (sda) and one for GlusterFS storage (sdb). This simulates a real deployment, where you will usually want to separate the GlusterFS storage from the OS installation.

Format the disk

# mkfs.ext4 /dev/sdb1
# mkdir -p /brick/brick
# mount /dev/sdb1  /brick/brick
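To keep the brick mounted across reboots, an /etc/fstab entry can be added. This is an optional sketch that assumes the /dev/sdb1 device and /brick/brick mount point used above:

echo '/dev/sdb1 /brick/brick ext4 defaults 0 0' >> /etc/fstab
mount -a    # re-reads fstab; it should return without errors and the brick stays mounted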

Installation

Reference links for the installation:

https://blog.csdn.net/liuskyter/article/details/111595968
https://blog.csdn.net/daydayup_gzm/article/details/52748800
https://blog.csdn.net/weixin_44729138/article/details/105663849

Start the installation. Some dependency packages are not listed below because they were already installed on this system; if rpm reports missing dependencies, install those packages yourself.

rpm -ivh glusterfs-libs-7.9-1.el7.x86_64.rpm --nodeps --force
rpm -ivh glusterfs-7.9-1.el7.x86_64.rpm --nodeps --force
rpm -ivh glusterfs-client-xlators-7.9-1.el7.x86_64.rpm
rpm -ivh glusterfs-api-7.9-1.el7.x86_64.rpm --nodeps --force
rpm -ivh glusterfs-cli-7.9-1.el7.x86_64.rpm --nodeps --force
rpm -ivh userspace-rcu-0.10.0-3.el7.x86_64.rpm
rpm -ivh glusterfs-fuse-7.9-1.el7.x86_64.rpm --nodeps --force
rpm -ivh glusterfs-server-7.9-1.el7.x86_64.rpm
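Before going further, the offline install can be verified with standard rpm and gluster commands (not part of the original steps):

rpm -qa | grep glusterfs    # list the installed GlusterFS packages
glusterfs --version         # should report glusterfs 7.9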

Package download link:

Link: https://pan.baidu.com/s/18SECuBRbIcX8BRDeoUg0sg

Enable glusterd to start on boot; run on all servers:

systemctl  start   glusterd.service
systemctl  enable  glusterd.service
systemctl  status  glusterd.service

Start the gluster management daemon on all servers (the SysV-style equivalent of the systemctl start command above):

service glusterd start

Configure the firewall

The gluster processes on the nodes need to be able to communicate with each other. To simplify this setup, configure the firewall on each node to accept all traffic from the other node:

iptables -I INPUT -p all -s 192.168.32.132 -j ACCEPT    # run on servera to allow traffic from serverb
iptables -I INPUT -p all -s 192.168.32.128 -j ACCEPT    # run on serverb to allow traffic from servera
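Note that these iptables rules are not persistent across reboots. As a rough sketch, assuming the iptables-services package is available on the offline hosts, the rules can be checked and saved like this:

iptables -L INPUT -n --line-numbers | head    # confirm the ACCEPT rules sit at the top of the chain
service iptables save                         # persist the rules (requires iptables-services)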

Configure the trusted pool

# Run on servera
gluster peer probe serverb
peer probe: success. Host serverb port 24007 already in peer list
# Run on serverb
gluster peer probe servera
peer probe: success. Host servera port 24007 already in peer list
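To confirm that the trusted pool was formed, the peer status can be checked on either node:

gluster peer status    # should show one peer in the state "Peer in Cluster (Connected)"
gluster pool list      # lists all pool members, including the local node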

Create the replicated volume

Create the brick directory on all servers:

mkdir -p /brick/brick/gv0

Then create the volume on any one server:

gluster volume create volume1 replica 2 servera:/brick/brick/gv0 serverb:/brick/brick/gv0
Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to avoid this. See: http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.
Do you still want to continue?
 (y/n) y
volume create: volume1: success: please start the volume to access data
Start the volume:
gluster volume start volume1
#volume start: volume1: success
View the volume information:
gluster volume info
################################################ 
Volume Name: volume1
Type: Replicate
Volume ID: 8afec598-4867-4136-8c7c-dc3301475a47
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: servera:/brick/brick/gv0
Brick2: serverb:/brick/brick/gv0
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off
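In addition to gluster volume info, the brick processes can be checked with gluster volume status; each brick and the Self-heal Daemon should show Y in the Online column on both servers:

gluster volume status volume1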

Note
If the volume does not show a Started status, check the log file /var/log/glusterfs/glusterd.log to debug and diagnose the situation. These logs can be viewed on one or all of the configured servers.
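A minimal way to inspect those logs on a node, assuming the default log locations:

tail -n 50 /var/log/glusterfs/glusterd.log    # management daemon log
ls /var/log/glusterfs/bricks/                 # per-brick log files live here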

Create the mount-point directory (run on both servers)

mkdir /brick1

Mount the volume (run on both servers)

servera:

mount -t glusterfs servera:/volume1 /brick1

serverb:

mount -t glusterfs serverb:/volume1 /brick1
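If the client mount should survive a reboot, a glusterfs entry in /etc/fstab is the usual approach. A sketch for servera, assuming the mount above (use serverb:/volume1 on serverb):

echo 'servera:/volume1 /brick1 glusterfs defaults,_netdev 0 0' >> /etc/fstab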

Write test

Run the following command on any one server to copy the messages log from /var/log/ into the /brick1 directory ten times:

for i in `seq -w 1 10`; do cp -rp /var/log/messages  /brick1/copy-test-$i; done

Check the results

servera:

[root@servera glusterfs]# cd /brick1
[root@servera brick1]# ll
total 5985
-rw-------. 1 root root 612545 May 13 01:50 copy-test-01
-rw-------. 1 root root 612545 May 13 01:50 copy-test-02
-rw-------. 1 root root 612545 May 13 01:50 copy-test-03
-rw-------. 1 root root 612545 May 13 01:50 copy-test-04
-rw-------. 1 root root 612545 May 13 01:50 copy-test-05
-rw-------. 1 root root 612545 May 13 01:50 copy-test-06
-rw-------. 1 root root 612545 May 13 01:50 copy-test-07
-rw-------. 1 root root 612545 May 13 01:50 copy-test-08
-rw-------. 1 root root 612545 May 13 01:50 copy-test-09
-rw-------. 1 root root 612545 May 13 01:50 copy-test-10
[root@servera brick1]# 

Checking serverb at the same time shows that a copy of the files also appears under /brick1 on serverb:

[root@serverb glusterfs]# cd /brick1/
[root@serverb brick1]# ll
total 5985
-rw-------. 1 root root 612545 May 13 01:50 copy-test-01
-rw-------. 1 root root 612545 May 13 01:50 copy-test-02
-rw-------. 1 root root 612545 May 13 01:50 copy-test-03
-rw-------. 1 root root 612545 May 13 01:50 copy-test-04
-rw-------. 1 root root 612545 May 13 01:50 copy-test-05
-rw-------. 1 root root 612545 May 13 01:50 copy-test-06
-rw-------. 1 root root 612545 May 13 01:50 copy-test-07
-rw-------. 1 root root 612545 May 13 01:50 copy-test-08
-rw-------. 1 root root 612545 May 13 01:50 copy-test-09
-rw-------. 1 root root 612545 May 13 01:50 copy-test-10
[root@serverb brick1]# 
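Because this is a replica 2 volume, each server also keeps a full copy of every file inside its brick directory. As a quick extra check (run on either server; paths assume the brick layout above), the data on the brick backend can be compared with the mounted copy:

ls -l /brick/brick/gv0/ | head                              # files stored directly on the brick
md5sum /brick1/copy-test-01 /brick/brick/gv0/copy-test-01   # the checksums should match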

Troubleshooting

1. Volume creation fails

Fix for GlusterFS volume creation failing with the error "* or a prefix of it is already part of a volume" (the paths below are example brick paths, not the ones used earlier in this guide):

rm -rf /u01/isi/glusterdata/.glusterfs/                         # remove the GlusterFS metadata under the brick directory
setfattr -x trusted.glusterfs.volume-id /u01/isi/glusterdata    # clear the volume-id extended attribute
setfattr -x trusted.gfid /u01/isi/glusterdata                   # clear the gfid extended attribute
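To see which trusted.* attributes are still set on the brick directory before re-creating the volume, the extended attributes can be dumped (requires the attr package; the path is the example directory above):

getfattr -d -m . -e hex /u01/isi/glusterdata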

Then re-create the volume.

2. Creating a replicated volume fails
volume create: volume1: failed: Staging failed on glusterfs02. Error: The brick 
glusterfs02:/u01/isi/glusterdata is being created in the root partition. It is 
recommended that you don't use the system's root partition for storage backend. 
Or use 'force' at the end of the command if you want to override this behavior.

This happens because the brick is located on the root (/) partition; append force to the end of the command to override the check:

gluster volume create volume1 replica 2 glusterfs01:/u01/isi/glusterdata/ \
  glusterfs02:/u01/isi/glusterdata/ force
3. Teardown procedure (run on both servers unless noted otherwise)
Step 1: unmount the mount point and stop the volume
cd                               # leave /brick1 before unmounting
umount /brick1/                  # unmount the mount point (run on both servers)
gluster volume stop volume1      # stop the volume (run on any one server)
Step 2: delete the volume
gluster volume delete volume1    # delete the volume
gluster volume list              # list all volumes in the cluster
gluster volume info              # show volume information
gluster volume status            # show volume status
gluster volume status all        # show the status of all volumes
Step 3: detach the peer
gluster peer detach serverb
Step 4: stop and disable the service
systemctl stop glusterd
systemctl disable glusterd
systemctl status glusterd
Step 5: remove the packages
yum -y remove glusterfs-server centos-release-gluster    # remove the packages
rm -rf /u01/isi/glusterdata /brick1                      # remove the data directory and the mount-point directory
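Because the packages were installed with rpm -ivh, yum remove may leave some of them behind. A quick check, as a sketch:

rpm -qa | grep glusterfs    # lists any GlusterFS packages still installed after the removal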