Setting Up GlusterFS (GFS)
I. Base Environment
OS         IP            Hostname  Attached disks       Mount dirs
centos7.6  192.168.1.11  node1     /dev/sd{b-e} {3-6G}  /[b-e][3-6]
centos7.6  192.168.1.22  node2     /dev/sd{b-e} {3-6G}  /[b-e][3-6]
centos7.6  192.168.1.33  node3     /dev/sd{b-d} {3-5G}  /[b-d][3-5]
centos7.6  192.168.1.44  node4     /dev/sd{b-d} {3-5G}  /[b-d][3-5]
centos7.6  192.168.1.55  client
Volume details
Volume name     Type                    Size (GB)  Bricks
dis-volume      distributed             12         node1[/e6], node2[/e6]
stripe-volume   striped                 10         node1[/d5], node2[/d5]
replica-volume  replicated              5          node3[/d5], node4[/d5]
dis-stripe      distributed striped     12         node1[/b3], node2[/b3], node3[/b3], node4[/b3]
dis-rep         distributed replicated  8          node1[/c4], node2[/c4], node3[/c4], node4[/c4]
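The sizes in the table follow from the volume type: distributed and striped volumes expose the sum of their bricks, while replicated volumes divide the raw total by the replica count. A small sketch of that arithmetic (the `usable` helper is illustrative; the GB figures come from the table above):

```shell
#!/bin/bash
# usable <type> <replica-count> <brick-sizes-GB...> -> usable capacity in GB
usable() {
    local type=$1 replica=$2; shift 2
    local total=0 b
    for b in "$@"; do total=$((total + b)); done
    case $type in
        rep*|dis-rep) echo $((total / replica)) ;;  # replicas are duplicates
        *)            echo "$total" ;;              # distribute/stripe: all space usable
    esac
}

usable distribute 1 6 6        # dis-volume:     12
usable stripe     1 5 5        # stripe-volume:  10
usable replicate  2 5 5        # replica-volume:  5
usable distribute 1 3 3 3 3    # dis-stripe:     12
usable dis-rep    2 4 4 4 4    # dis-rep:         8
```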
II. Deploying the Service
Run the following on nodes node[1-4]:
root@node1~$>echo '192.168.1.11 node1
192.168.1.22 node2
192.168.1.33 node3
192.168.1.44 node4' >> /etc/hosts
root@node1~$>mkdir /gfs
Upload all the packages glusterfs needs into the /gfs directory; if you do not have them, download them from https://www.gluster.org/install/
root@node1~$>rpm -e --nodeps `rpm -qa "glusterfs*"`
root@node1~$>vim /opt/gfsconf.sh
#!/bin/bash
# Select the data disks and sort them
for i in $(fdisk -l | grep -ow "/dev/sd[b-z]" | sort -u)
do
    # Partition the disk: wipe the start of the disk, then create one primary
    # partition accepting the defaults (the blank lines answer fdisk's prompts)
    dd if=/dev/zero of=$i bs=1024 count=1024
    fdisk $i << EOF
n
p



w
EOF
    # Format the disk
    partprobe $i
    mkfs.ext4 ${i}1
done
# Create the mount directories and mount the disks
mkdir /b3 /c4 /d5 /e6
fdisk -l | grep -w "/dev/sd[b-z]" | sed -r 's/.*(\/dev\/sd[b-z]): ([0-9]+).*/\1: \2/' | sed -r 's/(.*)(.): (.*)/mount \1\21 \/\2\3/' | bash
# Write the mounts into /etc/fstab
fdisk -l | grep -w "/dev/sd[b-z]" | sed -r 's/.*(\/dev\/sd[b-z]): ([0-9]+).*/\1: \2/' | sed -r 's/(.*)(.): (.*)/\1\21 \/\2\3 ext4 defaults 0 0/' >> /etc/fstab
# Disable the firewall
iptables -F
systemctl stop firewalld
setenforce 0
# Install the packages from a local repo
yum -y install firewalld-filesystem
mkdir /etc/yum.repos.d/a
mv /etc/yum.repos.d/* /etc/yum.repos.d/a
cat << EOF >> /etc/yum.repos.d/gfs.repo
[gfs]
name=gfs
baseurl=file:///gfs
gpgcheck=0
enabled=1
EOF
rpm -e --nodeps `rpm -qa "glusterfs*"`
yum clean all && yum makecache
yum -y install glusterfs glusterfs-server glusterfs-fuse glusterfs-rdma
systemctl start glusterd
systemctl enable glusterd
root@node1~$>sh /opt/gfsconf.sh
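The two sed passes in gfsconf.sh turn a `fdisk -l` disk line into a mount command, using the convention that the mount point is the drive letter plus the size in GB. Their behavior can be tried on a sample line (an assumption: the sample uses the "GiB" size format, which depends on the util-linux version):

```shell
# Feed a sample `fdisk -l` disk line through the two sed passes:
# \1 = "/dev/sd", \2 = drive letter, \3 = size in GB
echo "Disk /dev/sdb: 3 GiB, 3221225472 bytes, 6291456 sectors" \
  | sed -r 's/.*(\/dev\/sd[b-z]): ([0-9]+).*/\1: \2/' \
  | sed -r 's/(.*)(.): (.*)/mount \1\21 \/\2\3/'
# -> mount /dev/sdb1 /b3
```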
III. Using GFS
1. Add the peers (run on one node only; add nodes node[1-4])
root@node1/gfs$>gluster peer probe node1
peer probe: success. Probe on localhost not needed
root@node1/gfs$>gluster peer probe node2
peer probe: success.
root@node1/gfs$>gluster peer probe node3
peer probe: success.
root@node1/gfs$>gluster peer probe node4
peer probe: success.
2. Check the cluster status
Run the following command on each node. Normally every peer shows State: Peer in Cluster (Connected); if a peer shows Disconnected, check the /etc/hosts configuration.
root@node1~$>gluster peer status
Number of Peers: 3
Hostname: node2
Uuid: 69e81657-0041-4574-813a-050500c7e326
State: Peer in Cluster (Connected)
Hostname: node3
Uuid: 9afccb42-8c91-417d-ba07-0087c15c8364
State: Peer in Cluster (Connected)
Hostname: node4
Uuid: c80afe82-bfad-44e7-bcbd-6d01989b55fc
State: Peer in Cluster (Connected)
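A state other than Connected usually means name resolution or glusterd is broken on that peer. A scripted gate can be built on the same output (a sketch; the `disconnected` helper is hypothetical and just counts matching lines):

```shell
# Count peers whose State line reads "(Disconnected)".
disconnected() {
    grep -c 'State: .*(Disconnected)' || true   # grep -c prints 0 on no match
}

# Sample text in the same shape as `gluster peer status` output:
sample='Hostname: node2
State: Peer in Cluster (Connected)
Hostname: node3
State: Peer in Cluster (Disconnected)'

echo "$sample" | disconnected    # -> 1
```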
3. Create the volumes
(1). Create a distributed volume (when no type is specified, a distributed volume is created by default)
root@node1~$>gluster volume create dis-volume node1:/e6 node2:/e6 force    # Create the volume dis-volume
volume create: dis-volume: success: please start the volume to access data
root@node1~$>gluster volume info dis-volume
Volume Name: dis-volume
Type: Distribute    # Distribute means a distributed volume
Volume ID: 62a5ab74-b5a5-40df-a6f2-d14e1baa72fa
Status: Created
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: node1:/e6
Brick2: node2:/e6
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
root@node1~$>gluster volume start dis-volume    # Start the volume
volume start: dis-volume: success
(2). Create a striped volume
root@node1~$>gluster volume create stripe-volume stripe 2 node1:/d5 node2:/d5 force    # "stripe" sets the volume type to striped; 2 is the brick count
volume create: stripe-volume: success: please start the volume to access data
root@node1~$>gluster volume info stripe-volume
Volume Name: stripe-volume    # Volume name
Type: Stripe
...
root@node1~$>gluster volume start stripe-volume    # Start the striped volume
volume start: stripe-volume: success
(3). Create a replicated volume
root@node1~$>gluster volume create replica-volume replica 2 node3:/d5 node4:/d5 force    # "replica 2" sets the volume type to replicated, with 2 copies
volume create: replica-volume: success: please start the volume to access data
root@node1~$>gluster volume info replica-volume
Volume Name: replica-volume
Type: Replicate
Volume ID: 3f757bf5-6ee0-4d61-a5ae-7b8a349795a9
Status: Created
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: node3:/d5
Brick2: node4:/d5
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
root@node1~$>gluster volume start replica-volume
volume start: replica-volume: success
(4). Create a distributed striped volume
root@node1~$>gluster volume create dis-stripe stripe 2 node1:/b3 node2:/b3 node3:/b3 node4:/b3 force    # With 4 bricks and stripe 2, the bricks form 2 striped sets, and files are distributed across the sets
volume create: dis-stripe: success: please start the volume to access data
root@node1~$>gluster volume info dis-stripe
Volume Name: dis-stripe
Type: Distributed-Stripe
Volume ID: 3d2f24d2-ad86-475e-be0b-7c39228a0f27
Status: Created
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: node1:/b3
Brick2: node2:/b3
Brick3: node3:/b3
Brick4: node4:/b3
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
root@node1~$>gluster volume start dis-stripe
volume start: dis-stripe: success
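The "2 x 2 = 4" in the info output is the brick grouping: with stripe 2, consecutive bricks pair up into striped sets, and files are distributed across the sets. A sketch of that grouping:

```shell
# Reproduce the "2 x 2 = 4" grouping of dis-stripe's bricks.
bricks=(node1:/b3 node2:/b3 node3:/b3 node4:/b3)
stripe=2
sets=$(( ${#bricks[@]} / stripe ))
echo "Number of Bricks: $sets x $stripe = ${#bricks[@]}"
for ((i = 0; i < sets; i++)); do
    echo "stripe set $((i + 1)): ${bricks[@]:i*stripe:stripe}"
done
```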
(5). Create a distributed replicated volume
root@node1~$>gluster volume create dis-rep replica 2 node1:/c4 node2:/c4 node3:/c4 node4:/c4 force    # With 4 bricks and replica 2, each file is stored as 2 copies, and the replica pairs are distributed
volume create: dis-rep: success: please start the volume to access data
root@node1~$>gluster volume info dis-rep
Volume Name: dis-rep
Type: Distributed-Replicate
Volume ID: 2e7f3e3d-fdc7-4bc7-b5cd-3194bded1292
Status: Created
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: node1:/c4
Brick2: node2:/c4
Brick3: node3:/c4
Brick4: node4:/c4
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
root@node1~$>gluster volume start dis-rep
volume start: dis-rep: success
IV. Deploying the Gluster Client
1. Install the client
root@ms~$>mkdir /gfs
root@ms~$>hostname client
root@ms~$>bash
root@ms~$>scp -r 192.168.1.11:/gfs/* /gfs/
#!/bin/bash
iptables -F
systemctl stop firewalld
setenforce 0
cat << EOF >> /etc/hosts
192.168.1.11 node1
192.168.1.22 node2
192.168.1.33 node3
192.168.1.44 node4
EOF
yum -y install firewalld-filesystem
mkdir /etc/yum.repos.d/a
mv /etc/yum.repos.d/* /etc/yum.repos.d/a
cat << EOF >> /etc/yum.repos.d/gfs.repo
[gfs]
name=gfs
baseurl=file:///gfs
gpgcheck=0
enabled=1
EOF
rpm -e --nodeps `rpm -qa "glusterfs*"`
yum clean all && yum makecache
yum -y install glusterfs glusterfs-fuse
2. Create the mount directories
root@client~$>mkdir -p /test/{dis,stripe,rep,dis_andstripe,dis_and_rep}
root@client~$>ls /test
dis dis_and_rep dis_andstripe rep stripe
3. Mount the Gluster file systems
root@client~$>mount -t glusterfs node1:dis-stripe /test/dis_andstripe
root@client~$>mount -t glusterfs node1:replica-volume /test/rep
root@client~$>mount -t glusterfs node1:dis-rep /test/dis_and_rep
root@client~$>mount -t glusterfs node1:stripe-volume /test/stripe
root@client~$>mount -t glusterfs node1:dis-volume /test/dis
4. Update the fstab configuration file
root@client~$>vim /etc/fstab
node1:dis-stripe /test/dis_andstripe glusterfs defaults,_netdev 0 0
node1:replica-volume /test/rep glusterfs defaults,_netdev 0 0
node1:dis-rep /test/dis_and_rep glusterfs defaults,_netdev 0 0
node1:stripe-volume /test/stripe glusterfs defaults,_netdev 0 0
node1:dis-volume /test/dis glusterfs defaults,_netdev 0 0
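The `_netdev` option is what keeps these entries from being mounted before the network (and glusterd) is reachable at boot. A sketch that lints fstab-style lines for it (the `check_fstab` helper is illustrative, not a gluster tool):

```shell
# Flag glusterfs fstab entries that are missing the _netdev mount option.
check_fstab() {
    awk '$3 == "glusterfs" && $4 !~ /_netdev/ { print "missing _netdev: " $1; bad = 1 }
         END { exit bad }'
}

printf 'node1:dis-rep /test/dis_and_rep glusterfs defaults,_netdev 0 0\n' | check_fstab \
    && echo "fstab OK"
```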
V. Testing
1. Create test files
root@client~$>for i in {1..5};do dd if=/dev/zero of=/root/demon$i.log bs=1M count=43;done
43+0 records in
43+0 records out
45088768 bytes (45 MB) copied, 0.156882 s, 287 MB/s
43+0 records in
43+0 records out
45088768 bytes (45 MB) copied, 0.166265 s, 271 MB/s
43+0 records in
43+0 records out
45088768 bytes (45 MB) copied, 0.416119 s, 108 MB/s
43+0 records in
43+0 records out
45088768 bytes (45 MB) copied, 0.491025 s, 91.8 MB/s
43+0 records in
43+0 records out
45088768 bytes (45 MB) copied, 0.409948 s, 110 MB/s
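dd's throughput figure is just bytes copied divided by elapsed seconds, in decimal megabytes. Checking the first run:

```shell
# 45088768 bytes in 0.156882 s, reported by dd in decimal MB/s
awk 'BEGIN { printf "%.0f MB/s\n", 45088768 / 0.156882 / 1000000 }'
# -> 287 MB/s
```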
root@client~$> cp demon* /test/dis
root@client~$> cp demon* /test/stripe/
root@client~$> cp demon* /test/rep
root@client~$> cp demon* /test/dis_andstripe/
root@client~$> cp demon* /test/dis_and_rep/
2. Check the file distribution
Distributed volume (distributed storage; files are not sliced)
If node1 fails, demon1/3/4.log become inaccessible, while demon2/5.log on node2 remain accessible.
root@node1~$>ll -h /e6
total 130M
drwxr-xr-x 4 root root 4.0K [2019-11-11 06:32] .
dr-xr-xr-x. 22 root root 4.0K [2019-11-11 01:46] ..
-rw-r--r-- 2 root root 43M [2019-11-11 06:32] demon1.log
-rw-r--r-- 2 root root 43M [2019-11-11 06:32] demon3.log
-rw-r--r-- 2 root root 43M [2019-11-11 06:32] demon4.log
drw------- 11 root root 4.0K [2019-11-11 06:32] .glusterfs
drwx------ 2 root root 16K [2019-11-11 02:06] lost+found
root@node2/gfs$>ll -h /e6
total 87M
drwxr-xr-x 4 root root 4.0K [2019-11-11 06:32] .
dr-xr-xr-x. 22 root root 4.0K [2019-11-11 02:31] ..
-rw-r--r-- 2 root root 43M [2019-11-11 06:32] demon2.log
-rw-r--r-- 2 root root 43M [2019-11-11 06:32] demon5.log
drw------- 10 root root 4.0K [2019-11-11 06:32] .glusterfs
drwx------ 2 root root 16K [2019-11-11 02:31] lost+found
Striped volume (files are sliced)
If node1 fails, each file has only half of its data left, so none of the files can be accessed.
root@node1~$>ll -h /d5
total 108M
drwxr-xr-x 4 root root 4.0K [2019-11-11 06:32] .
dr-xr-xr-x. 22 root root 4.0K [2019-11-11 01:46] ..
-rw-r--r-- 2 root root 22M [2019-11-11 06:32] demon1.log
-rw-r--r-- 2 root root 22M [2019-11-11 06:32] demon2.log
-rw-r--r-- 2 root root 22M [2019-11-11 06:32] demon3.log
-rw-r--r-- 2 root root 22M [2019-11-11 06:32] demon4.log
-rw-r--r-- 2 root root 22M [2019-11-11 06:32] demon5.log
drw------- 13 root root 4.0K [2019-11-11 06:32] .glusterfs
drwx------ 2 root root 16K [2019-11-11 02:06] lost+found
root@node2/gfs$>ll -h /d5
total 108M
drwxr-xr-x 4 root root 4.0K [2019-11-11 06:32] .
dr-xr-xr-x. 22 root root 4.0K [2019-11-11 02:31] ..
-rw-r--r-- 2 root root 22M [2019-11-11 06:32] demon1.log
-rw-r--r-- 2 root root 22M [2019-11-11 06:32] demon2.log
-rw-r--r-- 2 root root 22M [2019-11-11 06:32] demon3.log
-rw-r--r-- 2 root root 22M [2019-11-11 06:32] demon4.log
-rw-r--r-- 2 root root 22M [2019-11-11 06:32] demon5.log
drw------- 13 root root 4.0K [2019-11-11 06:32] .glusterfs
drwx------ 2 root root 16K [2019-11-11 02:31] lost+found
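The 22M entries above are simply the 43 MiB test files cut in half by the 2-brick stripe, with `ls -h` rounding up:

```shell
# Each 43 MiB demon file is split across the 2 stripe bricks.
size=$((43 * 1024 * 1024))   # 45088768 bytes, matching the dd output above
half=$((size / 2))           # 22544384 bytes on each brick
mib=$(( (half + 1024 * 1024 - 1) / (1024 * 1024) ))   # ls -h rounds up to 22M
echo "$half bytes per brick (~${mib}M)"
```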
Replicated volume
If node3 fails, all files are still accessible, because node3 and node4 each hold a complete copy of every file.
root@node3~$>ll -h /d5
total 216M
drwxr-xr-x 4 root root 4.0K [2019-11-11 06:32] .
dr-xr-xr-x. 22 root root 4.0K [2019-11-11 02:36] ..
-rw-r--r-- 2 root root 43M [2019-11-11 06:32] demon1.log
-rw-r--r-- 2 root root 43M [2019-11-11 06:32] demon2.log
-rw-r--r-- 2 root root 43M [2019-11-11 06:32] demon3.log
-rw-r--r-- 2 root root 43M [2019-11-11 06:32] demon4.log
-rw-r--r-- 2 root root 43M [2019-11-11 06:32] demon5.log
drw------- 13 root root 4.0K [2019-11-11 06:32] .glusterfs
drwx------ 2 root root 16K [2019-11-11 02:36] lost+found
root@node4~$>ll -h /d5
total 216M
drwxr-xr-x 4 root root 4.0K [2019-11-11 06:32] .
dr-xr-xr-x. 22 root root 4.0K [2019-11-11 02:36] ..
-rw-r--r-- 2 root root 43M [2019-11-11 06:32] demon1.log
-rw-r--r-- 2 root root 43M [2019-11-11 06:32] demon2.log
-rw-r--r-- 2 root root 43M [2019-11-11 06:32] demon3.log
-rw-r--r-- 2 root root 43M [2019-11-11 06:32] demon4.log
-rw-r--r-- 2 root root 43M [2019-11-11 06:32] demon5.log
drw------- 13 root root 4.0K [2019-11-11 06:32] .glusterfs
drwx------ 2 root root 16K [2019-11-11 02:36] lost+found
Distributed striped volume
If node1 fails, demon2/5.log are unaffected, but demon1/3/4.log are left with only half their data and cannot be accessed.
root@node1~$>ll -h /b3
total 65M
drwxr-xr-x 4 root root 4.0K [2019-11-11 06:32] .
dr-xr-xr-x. 22 root root 4.0K [2019-11-11 01:46] ..
-rw-r--r-- 2 root root 22M [2019-11-11 06:32] demon1.log
-rw-r--r-- 2 root root 22M [2019-11-11 06:32] demon3.log
-rw-r--r-- 2 root root 22M [2019-11-11 06:32] demon4.log
drw------- 11 root root 4.0K [2019-11-11 06:32] .glusterfs
drwx------ 2 root root 16K [2019-11-11 02:06] lost+found
root@node2/gfs$>ll -h /b3
total 65M
drwxr-xr-x 4 root root 4.0K [2019-11-11 06:32] .
dr-xr-xr-x. 22 root root 4.0K [2019-11-11 02:31] ..
-rw-r--r-- 2 root root 22M [2019-11-11 06:32] demon1.log
-rw-r--r-- 2 root root 22M [2019-11-11 06:32] demon3.log
-rw-r--r-- 2 root root 22M [2019-11-11 06:32] demon4.log
drw------- 11 root root 4.0K [2019-11-11 06:32] .glusterfs
drwx------ 2 root root 16K [2019-11-11 02:31] lost+found
root@node3~$>ll -h /b3
total 44M
drwxr-xr-x 4 root root 4.0K [2019-11-11 06:32] .
dr-xr-xr-x. 22 root root 4.0K [2019-11-11 02:36] ..
-rw-r--r-- 2 root root 22M [2019-11-11 06:32] demon2.log
-rw-r--r-- 2 root root 22M [2019-11-11 06:32] demon5.log
drw------- 10 root root 4.0K [2019-11-11 06:32] .glusterfs
drwx------ 2 root root 16K [2019-11-11 02:36] lost+found
root@node4~$>ll -h /b3
total 44M
drwxr-xr-x 4 root root 4.0K [2019-11-11 06:32] .
dr-xr-xr-x. 22 root root 4.0K [2019-11-11 02:36] ..
-rw-r--r-- 2 root root 22M [2019-11-11 06:32] demon2.log
-rw-r--r-- 2 root root 22M [2019-11-11 06:32] demon5.log
drw------- 10 root root 4.0K [2019-11-11 06:32] .glusterfs
drwx------ 2 root root 16K [2019-11-11 02:36] lost+found
Distributed replicated volume
Files are distributed across the pairs node[1,2] and node[3,4], and because this is a replicated volume each file has full redundancy, so the failure of any single node does not affect the files.
root@node1~$>ll -h /c4
total 130M
drwxr-xr-x 4 root root 4.0K [2019-11-11 06:32] .
dr-xr-xr-x. 22 root root 4.0K [2019-11-11 01:46] ..
-rw-r--r-- 2 root root 43M [2019-11-11 06:32] demon1.log
-rw-r--r-- 2 root root 43M [2019-11-11 06:32] demon3.log
-rw-r--r-- 2 root root 43M [2019-11-11 06:32] demon4.log
drw------- 11 root root 4.0K [2019-11-11 06:32] .glusterfs
drwx------ 2 root root 16K [2019-11-11 02:06] lost+found
root@node2/gfs$>ll -h /c4
total 130M
drwxr-xr-x 4 root root 4.0K [2019-11-11 06:32] .
dr-xr-xr-x. 22 root root 4.0K [2019-11-11 02:31] ..
-rw-r--r-- 2 root root 43M [2019-11-11 06:32] demon1.log
-rw-r--r-- 2 root root 43M [2019-11-11 06:32] demon3.log
-rw-r--r-- 2 root root 43M [2019-11-11 06:32] demon4.log
drw------- 11 root root 4.0K [2019-11-11 06:32] .glusterfs
drwx------ 2 root root 16K [2019-11-11 02:31] lost+found
root@node3~$>ll -h /c4
total 87M
drwxr-xr-x 4 root root 4.0K [2019-11-11 06:32] .
dr-xr-xr-x. 22 root root 4.0K [2019-11-11 02:36] ..
-rw-r--r-- 2 root root 43M [2019-11-11 06:32] demon2.log
-rw-r--r-- 2 root root 43M [2019-11-11 06:32] demon5.log
drw------- 10 root root 4.0K [2019-11-11 06:32] .glusterfs
drwx------ 2 root root 16K [2019-11-11 02:36] lost+found
root@node4~$>ll -h /c4
total 87M
drwxr-xr-x 4 root root 4.0K [2019-11-11 06:32] .
dr-xr-xr-x. 22 root root 4.0K [2019-11-11 02:36] ..
-rw-r--r-- 2 root root 43M [2019-11-11 06:32] demon2.log
-rw-r--r-- 2 root root 43M [2019-11-11 06:32] demon5.log
drw------- 10 root root 4.0K [2019-11-11 06:32] .glusterfs
drwx------ 2 root root 16K [2019-11-11 02:36] lost+found
VI. Other Maintenance Commands
1. List the glusterfs volumes
root@node4~$>gluster volume list
dis-rep
dis-stripe
dis-volume
replica-volume
stripe-volume
2. View detailed information about all volumes
root@node4~$>gluster volume info
3. View volume status
root@node4~$>gluster volume status
4. Stop or delete a volume
Stop:
root@node4~$>gluster volume stop dis-stripe    # dis-stripe is the volume name
Delete:
root@node4~$>gluster volume delete dis-stripe    # dis-stripe is the volume name
5. Set volume access control
root@node4~$>gluster volume set dis-rep auth.allow 192.168.1.*,192.168.2.*
volume set: success    # dis-rep is the volume name
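auth.allow takes a comma-separated list of address patterns in which `*` is a wildcard. The matching semantics can be sketched with shell globs (a simplification; the `allowed` helper is hypothetical, not part of gluster):

```shell
# allowed <client-ip> <auth.allow-pattern-list> -> exit 0 if any pattern matches
allowed() {
    local ip=$1 pat pats
    IFS=, read -ra pats <<< "$2"
    for pat in "${pats[@]}"; do
        [[ $ip == $pat ]] && return 0   # unquoted right side -> glob match
    done
    return 1
}

allowed 192.168.1.55 '192.168.1.*,192.168.2.*' && echo allowed   # -> allowed
allowed 192.168.3.9  '192.168.1.*,192.168.2.*' || echo denied    # -> denied
```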