Environment
Server nodes
Node | IP |
---|---|
ServerNode1 | 192.168.1.225 |
ServerNode2 | 192.168.1.226 |
ServerNode3 | 192.168.1.227 |
Client nodes
Node | IP |
---|---|
ClientNode1 | 192.168.1.225 |
ClientNode2 | 192.168.1.226 |
ClientNode3 | 192.168.1.227 |
ClientNode4 | 192.168.1.228 |
ClientNode5 | 192.168.1.229 |
Server installation and configuration
Install the server
Install the server package on ServerNode1, ServerNode2, and ServerNode3. Installing glusterfs-server also installs the client, so the three server nodes need no separate client installation.
apt install glusterfs-server
Start the service and enable it at boot
systemctl start glusterd.service
systemctl enable glusterd.service
Build the cluster
Run on ServerNode1 to add ServerNode2 and ServerNode3 to the cluster:
sudo gluster peer probe 192.168.1.226
sudo gluster peer probe 192.168.1.227
Check the cluster status on ServerNode1:
sudo gluster peer status
The output should look like:
Number of Peers: 2

Hostname: 192.168.1.226
Uuid: b61ac3b3-0745-4db5-800d-1a550196eeb3
State: Peer in Cluster (Connected)

Hostname: 192.168.1.227
Uuid: ec27c07d-d835-480d-97bf-63c2052ee6ad
State: Peer in Cluster (Connected)
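For monitoring, the peer state can be checked non-interactively instead of reading the output by eye. A minimal sketch that counts peers in the connected state; it reads `gluster peer status` output on stdin so it can be demoed here with the sample above:

```shell
#!/bin/sh
# Count peers reported as "Peer in Cluster (Connected)".
# Reads `gluster peer status` output on stdin.
connected_peers() {
  grep -c 'State: Peer in Cluster (Connected)'
}

# Demo with the sample output above; on a live node use:
#   gluster peer status | connected_peers
connected_peers <<'EOF'
Number of Peers: 2

Hostname: 192.168.1.226
State: Peer in Cluster (Connected)

Hostname: 192.168.1.227
State: Peer in Cluster (Connected)
EOF
```

A cron job could compare the count against the expected 2 peers of this setup and alert on a mismatch.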
Create the volume
Create the data directory on all three server nodes:
mkdir -p /home/zcbuser/gluster/data
Run on ServerNode1 to create a replicated volume with three replicas.
# Create the volume
root@BGServer225:/home/zcbuser/gluster# sudo gluster volume create app-data replica 3 transport tcp 192.168.1.225:/home/zcbuser/gluster/data/ 192.168.1.226:/home/zcbuser/gluster/data/ 192.168.1.227:/home/zcbuser/gluster/data/ force
volume create: app-data: success: please start the volume to access data
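The create command above is long and easy to mistype; when scripting, the brick list can be assembled from a node list. A sketch, where `VOL`, `BRICK_DIR`, and `NODES` mirror this setup; `force` is needed here because the bricks live on the root filesystem rather than a dedicated partition:

```shell
#!/bin/sh
# Assemble the volume-create command from a node list.
VOL=app-data
BRICK_DIR=/home/zcbuser/gluster/data
NODES="192.168.1.225 192.168.1.226 192.168.1.227"

bricks=""
for ip in $NODES; do
  bricks="$bricks $ip:$BRICK_DIR"
done

# Print instead of executing, so the command can be reviewed first.
echo "gluster volume create $VOL replica 3 transport tcp$bricks force"
```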
# List volumes
root@BGServer225:/home/zcbuser/gluster# sudo gluster volume list
app-data
# Show volume info
root@BGServer225:/home/zcbuser/gluster# sudo gluster volume info
Volume Name: app-data
Type: Replicate
Volume ID: 65454891-44e3-4be1-bcb2-c977f7b99fb4
Status: Created
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 192.168.1.225:/home/zcbuser/gluster/data
Brick2: 192.168.1.226:/home/zcbuser/gluster/data
Brick3: 192.168.1.227:/home/zcbuser/gluster/data
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
# Start the volume and limit its size to 10 GB
root@BGServer225:/home/zcbuser/gluster# sudo gluster volume start app-data
volume start: app-data: success
root@BGServer225:/home/zcbuser/gluster# sudo gluster volume quota app-data enable
volume quota : success
root@BGServer225:/home/zcbuser/gluster# sudo gluster volume quota app-data limit-usage / 10GB
volume quota : success
root@BGServer225:/home/zcbuser/gluster# sudo gluster volume status
Status of volume: app-data
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick 192.168.1.225:/home/zcbuser/gluster/data 49152     0          Y       8179
Brick 192.168.1.226:/home/zcbuser/gluster/data 49152     0          Y       5747
Brick 192.168.1.227:/home/zcbuser/gluster/data 49152     0          Y       4628
Self-heal Daemon on localhost N/A N/A Y 8202
Quota Daemon on localhost N/A N/A Y 8242
Self-heal Daemon on 192.168.1.226 N/A N/A Y 5770
Quota Daemon on 192.168.1.226 N/A N/A Y 5803
Self-heal Daemon on 192.168.1.227 N/A N/A Y 4651
Quota Daemon on 192.168.1.227 N/A N/A Y 4678
Task Status of Volume app-data
------------------------------------------------------------------------------
There are no active volume tasks
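The Online column is what a health check cares about. A sketch that prints any brick not marked `Y`; it assumes one brick per line as shown above, and the second demo line is a hypothetical failure, not output from this cluster:

```shell
#!/bin/sh
# Print every brick whose Online column is not "Y".
# Reads `gluster volume status <vol>` output on stdin.
bricks_offline() {
  awk '/^Brick / { if ($(NF-1) != "Y") print $2 }'
}

# Demo: one healthy brick, one (hypothetical) dead brick.
bricks_offline <<'EOF'
Brick 192.168.1.225:/home/zcbuser/gluster/data 49152 0 Y 8179
Brick 192.168.1.226:/home/zcbuser/gluster/data N/A   0 N N/A
EOF
```

An empty result means all bricks are online.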
Client installation and mounting
Install the client
Install glusterfs-client on ClientNode4 and ClientNode5 (the other client nodes share hosts with the servers and already have it):
apt install glusterfs-client
Mount the volume
Create a mount directory on every client node and mount the volume:
mkdir /home/zcbuser/gfs-share && mount -t glusterfs -o backup-volfile-servers=192.168.1.226:192.168.1.227 192.168.1.225:/app-data /home/zcbuser/gfs-share
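The `backup-volfile-servers` option lets the client fetch the volume layout from 192.168.1.226 or 192.168.1.227 if 192.168.1.225 is down at mount time. Scripts can then verify the mount by scanning the mount table, where GlusterFS mounts appear with filesystem type `fuse.glusterfs`. A sketch; the function reads the table on stdin so it can be demoed here with a sample line:

```shell
#!/bin/sh
# Succeed if $1 appears as a fuse.glusterfs mount in the table on stdin.
is_gluster_mount() {
  awk -v m="$1" '$2 == m && $3 == "fuse.glusterfs" { found = 1 }
                 END { exit !found }'
}

# Demo with a sample mount-table line; on a real client use:
#   is_gluster_mount /home/zcbuser/gfs-share < /proc/mounts
echo '192.168.1.225:/app-data /home/zcbuser/gfs-share fuse.glusterfs rw,relatime 0 0' |
  is_gluster_mount /home/zcbuser/gfs-share && echo mounted
```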
The command above mounts the volume manually, so it has to be repeated after every reboot; use /etc/fstab to mount automatically:
sudo nano /etc/fstab
Insert the following line into the file:
192.168.1.225:/app-data /home/zcbuser/gfs-share glusterfs defaults,_netdev,direct-io-mode=enable,backup-volfile-servers=192.168.1.226:192.168.1.227 0 0
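The entry uses the usual six fstab fields; `_netdev` delays the mount until the network is up, and the trailing `0 0` disables dump and fsck. A quick sanity-check sketch for such a line (it validates the field layout only, not the mount itself):

```shell
#!/bin/sh
# Check an fstab line: six fields, glusterfs type, _netdev present.
check_fstab_line() {
  echo "$1" | awk 'NF == 6 && $3 == "glusterfs" && $4 ~ /_netdev/ { ok = 1 }
                   END { exit !ok }'
}

check_fstab_line '192.168.1.225:/app-data /home/zcbuser/gfs-share glusterfs defaults,_netdev,direct-io-mode=enable,backup-volfile-servers=192.168.1.226:192.168.1.227 0 0' \
  && echo "fstab line ok"
```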
Then run mount -a to validate the entry and mount the volume.
Finally, check the mount with df -h:
root@BGServer225:/home/zcbuser# df -h
Filesystem               Size  Used Avail Use% Mounted on
udev 2.0G 0 2.0G 0% /dev
tmpfs 395M 6.0M 389M 2% /run
/dev/sda1 39G 11G 26G 30% /
tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
192.168.1.225:/app-data 10G 3.6M 10G 1% /home/zcbuser/gfs-share
tmpfs 395M 0 395M 0% /run/user/115
overlay 39G 11G 26G 30%
tmpfs 395M 0 395M 0% /run/user/1000
If the mount fails, check the error log with cat /var/log/glusterfs/home-<username>-gfs-share.log (the log file name mirrors the mount path, with your system user name in place of <username>).
To unmount, run umount /home/zcbuser/gfs-share.
Known issues
1. Under heavy read/write load, the mount sometimes disappears (it no longer shows up in df -h) and only a reboot restores it.
2. Reading and writing many small files is slow.
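For issue 1, a cron watchdog is one possible mitigation (a sketch of the detection half only, not a fix for the underlying cause): notice that the mount entry has vanished, then remount. The function reads a mount table on stdin so it can be demoed here; on a client feed it `/proc/mounts`:

```shell
#!/bin/sh
# Succeed when $1 is NOT present as a fuse.glusterfs mount in the
# mount table on stdin, i.e. a remount is needed.
needs_remount() {
  ! grep -q " $1 fuse.glusterfs "
}

# Demo: a mount table without the gluster entry.
if echo '/dev/sda1 / ext4 rw 0 0' | needs_remount /home/zcbuser/gfs-share; then
  echo "remount needed"
fi
```

A per-minute cron entry could then run something along the lines of `needs_remount /home/zcbuser/gfs-share < /proc/mounts && { umount -l /home/zcbuser/gfs-share 2>/dev/null; mount -t glusterfs -o backup-volfile-servers=192.168.1.226:192.168.1.227 192.168.1.225:/app-data /home/zcbuser/gfs-share; }`, where the lazy umount clears a possibly stale FUSE handle before remounting.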