Offline Deployment of GlusterFS Distributed Storage on CentOS 7 (glusterfs 3.8.15)

1. Basic Environment Configuration

Hostname        IP address
glusterfs01     192.168.200.10
glusterfs02     192.168.200.20
client          192.168.200.30

(1) Create three virtual machines at 192.168.200.10, 192.168.200.20, and 192.168.200.30. On each of the two server nodes (10/20), add three 10 GB disks: sdb, sdc, and sdd. The 192.168.200.10 node is used as the example below.

sdb will hold the brick for the distributed volume gs1

sdc will hold the brick for the replicated volume gs2

sdd will hold the brick for the striped volume gs3

[root@localhost ~]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   20G  0 disk
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0   19G  0 part
  ├─centos-root 253:0    0   17G  0 lvm  /
  └─centos-swap 253:1    0    2G  0 lvm  [SWAP]
sdb               8:16   0   10G  0 disk
sdc               8:32   0   10G  0 disk
sdd               8:48   0   10G  0 disk
sr0              11:0    1  9.5G  0 rom

(2) Set the hostname on all three VMs (the third, .30, is the client node), and add entries to /etc/hosts on both server nodes (10/20) so the cluster members can resolve each other. The 192.168.200.10 node is shown below.

[root@bogon ~]# hostnamectl set-hostname glusterfs01
[root@bogon ~]# bash
[root@glusterfs01 ~]# hostnamectl
   Static hostname: glusterfs01
         Icon name: computer-vm
           Chassis: vm
        Machine ID: 5d582b1e52644617b876ee8234b9be8f
           Boot ID: 5d94c9cb53604f8b9ac50f7469a6b5fa
    Virtualization: vmware
  Operating System: CentOS Linux 7 (Core)
       CPE OS Name: cpe:/o:centos:centos:7
            Kernel: Linux 3.10.0-1160.el7.x86_64
      Architecture: x86-64
[root@glusterfs01 ~]# vi /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.200.10 glusterfs01
192.168.200.20 glusterfs02

(3) Stop the firewall and put SELinux into permissive mode on all three VMs; 192.168.200.10 shown below.

[root@glusterfs01 ~]# systemctl stop firewalld
[root@glusterfs01 ~]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@glusterfs01 ~]# setenforce 0
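
Note that setenforce 0 only lasts until the next reboot. To make the change persistent, the SELINUX setting in /etc/selinux/config must be changed as well; a minimal sketch:

```shell
# setenforce 0 is temporary; make it survive reboots by switching the
# SELINUX mode in /etc/selinux/config to disabled (or permissive):
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
grep '^SELINUX=' /etc/selinux/config
```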
2. Service Installation and Configuration

(1) Configure the local gluster repository on the server nodes (10/20); 192.168.200.10 shown below.

Download link for the offline gluster package bundle:

https://download.csdn.net/download/m0_75102488/89656177

[root@glustfs01 ~]# mv /etc/yum.repos.d/* /media/
[root@glustfs01 ~]# mkdir /opt/centos
[root@glustfs01 ~]# mount /dev/cdrom /opt/centos
mount: /dev/sr0 is write-protected, mounting read-only
[root@glustfs01 ~]# vi /etc/yum.repos.d/local.repo
[centos]
name=centos
baseurl=file:///opt/centos
gpgcheck=0
enabled=1
[gluster]
name=gluster
baseurl=file:///opt/gluster
gpgcheck=0
enabled=1
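
The [gluster] section above points at file:///opt/gluster, which has not been populated yet. The offline package bundle must be unpacked to that path before yum can use it; a sketch, assuming the downloaded archive is named gluster.tar.gz (hypothetical name) and already contains a repodata/ directory:

```shell
# Unpack the offline gluster RPMs (archive name is an assumption) into
# the directory the repo file points at, then refresh the yum metadata:
mkdir -p /opt/gluster
tar -xzf gluster.tar.gz -C /opt/gluster
yum clean all && yum repolist
```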

(2) On the server nodes (10/20), install the GlusterFS server, samba, and rpcbind packages, then check the version; 192.168.200.10 shown below.

[root@glustfs01 ~]# yum install -y glusterfs-server samba rpcbind
[root@glusterfs01 ~]# which glusterfs
/usr/sbin/glusterfs
[root@glusterfs01 ~]# glusterfs -V
glusterfs 3.8.15 built on Aug 16 2017 14:48:01
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2013 Red Hat, Inc. <http://www.redhat.com/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.

(3) On the server nodes (10/20), start the GlusterFS service, enable it at boot, and check its status; 192.168.200.10 shown below.

[root@glusterfs01 ~]# systemctl start glusterd
[root@glusterfs01 ~]# chkconfig glusterd on
Note: Forwarding request to 'systemctl enable glusterd.service'.
Created symlink from /etc/systemd/system/multi-user.target.wants/glusterd.service to /usr/lib/systemd/system/glusterd.service.
[root@glusterfs01 ~]# systemctl status glusterd
● glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2024-04-23 23:35:55 EDT; 22s ago
 Main PID: 11888 (glusterd)
   CGroup: /system.slice/glusterd.service
           └─11888 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO

Apr 23 23:35:55 glusterfs01 systemd[1]: Starting GlusterFS, a clustered file-system server...
Apr 23 23:35:55 glusterfs01 systemd[1]: Started GlusterFS, a clustered file-system server.

(4) On server node 10, add the other storage host to the trusted storage pool (a node does not need to probe itself). Make sure glusterd is running on all server nodes first.

[root@glusterfs01 ~]# gluster peer probe glusterfs02
peer probe: success.

(5) Log in to each server node and verify that the peer was added successfully:

[root@glusterfs01 ~]# gluster peer status
Number of Peers: 1

Hostname: glusterfs02
Uuid: 7ce2bdd8-a743-4035-995f-97710c49a588
State: Peer in Cluster (Connected)


[root@glustfs02 ~]# gluster peer status
Number of Peers: 1

Hostname: glusterfs01
Uuid: dcbcfa85-f06b-42a4-b545-d245184cdb08
State: Peer in Cluster (Connected)

(6) Format the three 10 GB disks on the server nodes (10/20), then mount them and verify; 192.168.200.10 shown below.

[root@glusterfs01 ~]# ll /dev/sd*
brw-rw----. 1 root disk 8,  0 Apr 23 22:38 /dev/sda
brw-rw----. 1 root disk 8,  1 Apr 23 22:38 /dev/sda1
brw-rw----. 1 root disk 8,  2 Apr 23 22:38 /dev/sda2
brw-rw----. 1 root disk 8, 16 Apr 23 22:38 /dev/sdb
brw-rw----. 1 root disk 8, 32 Apr 23 22:38 /dev/sdc
brw-rw----. 1 root disk 8, 48 Apr 23 22:38 /dev/sdd
[root@glusterfs01 ~]# mkfs.ext4 /dev/sdb
mke2fs 1.42.9 (28-Dec-2013)
/dev/sdb is entire device, not just one partition!
Proceed anyway? (y,n) y
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
655360 inodes, 2621440 blocks
131072 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2151677952
80 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

[root@glusterfs01 ~]# mkdir -p /gluster/brick1
[root@glusterfs01 ~]# mount /dev/sdb /gluster/brick1
[root@glusterfs01 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                 475M     0  475M   0% /dev
tmpfs                    487M     0  487M   0% /dev/shm
tmpfs                    487M  7.8M  479M   2% /run
tmpfs                    487M     0  487M   0% /sys/fs/cgroup
/dev/mapper/centos-root   17G  1.6G   16G  10% /
/dev/sda1               1014M  138M  877M  14% /boot
tmpfs                     98M     0   98M   0% /run/user/0
/dev/sdb                 9.8G   37M  9.2G   1% /gluster/brick1


[root@glusterfs01 ~]# mkfs.ext4 /dev/sdc
mke2fs 1.42.9 (28-Dec-2013)
/dev/sdc is entire device, not just one partition!
Proceed anyway? (y,n) y
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
655360 inodes, 2621440 blocks
131072 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2151677952
80 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

[root@glusterfs01 ~]# mkdir -p /gluster/brick2
[root@glusterfs01 ~]# mount /dev/sdc /gluster/brick2
[root@glusterfs01 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                 475M     0  475M   0% /dev
tmpfs                    487M     0  487M   0% /dev/shm
tmpfs                    487M  7.8M  479M   2% /run
tmpfs                    487M     0  487M   0% /sys/fs/cgroup
/dev/mapper/centos-root   17G  1.6G   16G  10% /
/dev/sda1               1014M  138M  877M  14% /boot
tmpfs                     98M     0   98M   0% /run/user/0
/dev/sdb                 9.8G   37M  9.2G   1% /gluster/brick1
/dev/sdc                 9.8G   37M  9.2G   1% /gluster/brick2


[root@glusterfs01 ~]# mkfs.ext4 /dev/sdd
mke2fs 1.42.9 (28-Dec-2013)
/dev/sdd is entire device, not just one partition!
Proceed anyway? (y,n) y
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
655360 inodes, 2621440 blocks
131072 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2151677952
80 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

[root@glusterfs01 ~]# mkdir -p /gluster/brick3
[root@glusterfs01 ~]# mount /dev/sdd /gluster/brick3
[root@glusterfs01 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                 475M     0  475M   0% /dev
tmpfs                    487M     0  487M   0% /dev/shm
tmpfs                    487M  7.7M  479M   2% /run
tmpfs                    487M     0  487M   0% /sys/fs/cgroup
/dev/mapper/centos-root   17G  1.6G   16G  10% /
/dev/sda1               1014M  138M  877M  14% /boot
tmpfs                     98M     0   98M   0% /run/user/0
/dev/sdb                 9.8G   37M  9.2G   1% /gluster/brick1
/dev/sdc                 9.8G   37M  9.2G   1% /gluster/brick2
/dev/sdd                 9.8G   37M  9.2G   1% /gluster/brick3
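
The mounts above do not survive a reboot. Optionally, append the three bricks to /etc/fstab on each server node; a sketch assuming the device names stay stable (using UUIDs from blkid would be more robust if disks may be reordered):

```shell
# Make the three brick mounts persistent across reboots:
cat >> /etc/fstab <<'EOF'
/dev/sdb  /gluster/brick1  ext4  defaults  0 0
/dev/sdc  /gluster/brick2  ext4  defaults  0 0
/dev/sdd  /gluster/brick3  ext4  defaults  0 0
EOF
mount -a
```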
3. Distributed Volume Deployment

(1) Create the distributed volume gs1 from the glusterfs01 node, then start it once it has been created successfully:

[root@glusterfs01 ~]# gluster volume create gs1 glusterfs01:/gluster/brick1 glusterfs02:/gluster/brick1 force
volume create: gs1: success: please start the volume to access data
[root@glusterfs01 ~]# gluster volume start gs1
volume start: gs1: success

(2) Logging in to either server node shows the volume's details:

[root@glustfs02 ~]# gluster volume info

Volume Name: gs1
Type: Distribute
Volume ID: 4125eda5-630a-4c48-ba5d-cc9f2a26d822
Status: Started
Snapshot Count: 0
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: glusterfs01:/gluster/brick1
Brick2: glusterfs02:/gluster/brick1
Options Reconfigured:
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on

(3) Mount the volume with the glusterfs (FUSE) client. After mounting it on /opt, create some test files there:

[root@glusterfs01 ~]# mount -t glusterfs 127.0.0.1:/gs1 /opt
[root@glusterfs01 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
devtmpfs                 475M     0  475M   0% /dev
tmpfs                    487M     0  487M   0% /dev/shm
tmpfs                    487M  7.7M  479M   2% /run
tmpfs                    487M     0  487M   0% /sys/fs/cgroup
/dev/mapper/centos-root   17G  1.6G   16G  10% /
/dev/sda1               1014M  138M  877M  14% /boot
tmpfs                     98M     0   98M   0% /run/user/0
/dev/sdb                 9.8G   37M  9.2G   1% /gluster/brick1
/dev/sdc                 9.8G   37M  9.2G   1% /gluster/brick2
/dev/sdd                 9.8G   37M  9.2G   1% /gluster/brick3
127.0.0.1:/gs1            20G   73M   19G   1% /opt
[root@glusterfs01 ~]# touch /opt/{1..5}
[root@glusterfs01 ~]# ls /opt
1  2  3  4  5  lost+found

(4) Log in to the other server node, mount gs1 the same way, and verify that the same files are visible:

[root@glustfs02 ~]# mount -t glusterfs 127.0.0.1:gs1 /opt
[root@glustfs02 ~]# ls /opt/
1  2  3  4  5  lost+found
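
This FUSE mount is also lost on reboot; it can likewise go into /etc/fstab, with the _netdev option so mounting is delayed until the network is up. A sketch:

```shell
# Persist the FUSE mount; _netdev defers it until networking is ready:
echo '127.0.0.1:/gs1  /opt  glusterfs  defaults,_netdev  0 0' >> /etc/fstab
mount -a
```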

(5) Check the bricks on glusterfs01 and glusterfs02: each node stores a different subset of the files, confirming that the distributed volume works as expected.

[root@glusterfs01 ~]# ls /gluster/brick1
1  5  lost+found

[root@glustfs02 ~]# ls /gluster/brick1
2  3  4  lost+found
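
The files split across the bricks because Gluster's DHT translator hashes each file name into a 32-bit value and assigns it to exactly one brick's hash range. As a conceptual illustration only (cksum is not Gluster's actual hash function; Gluster uses a Davies-Meyer hash), picking a brick by hashing the name:

```shell
# Toy model of distribute placement: hash each file name and take it
# modulo the brick count to choose where the file lands.
for f in 1 2 3 4 5; do
  h=$(printf '%s' "$f" | cksum | cut -d' ' -f1)
  echo "file $f -> brick$(( h % 2 + 1 ))"
done
```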
4. Creating the Replicated and Striped Volumes

(1) On the glusterfs02 node, create the replicated volume gs2 (replica 2), start it after it is created successfully, and check its info:

[root@glustfs02 ~]# gluster volume create gs2 replica 2 glusterfs01:/gluster/brick2 glusterfs02:/gluster/brick2 force
volume create: gs2: success: please start the volume to access data
[root@glustfs02 ~]# gluster volume start gs2
volume start: gs2: success
[root@glustfs02 ~]# gluster volume info gs2

Volume Name: gs2
Type: Replicate
Volume ID: 2abfbd7a-91cd-4500-a6f9-d263f0e9231a
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: glusterfs01:/gluster/brick2
Brick2: glusterfs02:/gluster/brick2
Options Reconfigured:
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on

(2) On the glusterfs02 node, create the striped volume gs3 (stripe 2), start it after it is created successfully, and check its info:

[root@glustfs02 ~]# gluster volume create gs3 stripe 2 glusterfs01:/gluster/brick3 glusterfs02:/gluster/brick3 force
volume create: gs3: success: please start the volume to access data
[root@glustfs02 ~]# gluster volume start gs3
volume start: gs3: success
[root@glustfs02 ~]# gluster volume info gs3

Volume Name: gs3
Type: Stripe
Volume ID: bc3a8fe0-e080-44f8-a09f-501efe0c896a
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: glusterfs01:/gluster/brick3
Brick2: glusterfs02:/gluster/brick3
Options Reconfigured:
transport.address-family: inet
performance.readdir-ahead: on
nfs.disable: on

(3) On the glusterfs01 node, enable NFS access for the gs1, gs2, and gs3 volumes (it is disabled by default):

[root@glustfs01 ~]# gluster volume set gs1 nfs.disable off
volume set: success
[root@glustfs01 ~]# gluster volume set gs2 nfs.disable off
volume set: success
[root@glustfs01 ~]# gluster volume set gs3 nfs.disable off
volume set: success

On both glusterfs01 and glusterfs02, start rpcbind and restart glusterd; Gluster 3.8 ships its own NFS server (gNFS), so the kernel NFS server is not needed. Then check the volume status:

[root@glusterfs01 ~]# systemctl start rpcbind
[root@glusterfs01 ~]# systemctl restart glusterd
[root@glusterfs01 ~]# gluster volume status
Status of volume: gs1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick glusterfs01:/gluster/brick1           49152     0          Y       11989
Brick glusterfs02:/gluster/brick1           49152     0          Y       12783
NFS Server on localhost                     2049      0          Y       12476
NFS Server on glusterfs02                   2049      0          Y       13309

Task Status of Volume gs1
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: gs2
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick glusterfs01:/gluster/brick2           49153     0          Y       12124
Brick glusterfs02:/gluster/brick2           49153     0          Y       12929
NFS Server on localhost                     2049      0          Y       12476
Self-heal Daemon on localhost               N/A       N/A        Y       12484
NFS Server on glusterfs02                   2049      0          Y       13309
Self-heal Daemon on glusterfs02             N/A       N/A        Y       13317

Task Status of Volume gs2
------------------------------------------------------------------------------
There are no active volume tasks

Status of volume: gs3
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick glusterfs01:/gluster/brick3           49154     0          Y       12208
Brick glusterfs02:/gluster/brick3           49154     0          Y       13029
NFS Server on localhost                     2049      0          Y       12476
NFS Server on glusterfs02                   2049      0          Y       13309

Task Status of Volume gs3
------------------------------------------------------------------------------
There are no active volume tasks
5. Client Configuration and Testing

(1) On the client (192.168.200.30), set the hostname, then install the nfs-utils package and start rpcbind so NFS mounts work:

[root@localhost ~]# hostnamectl set-hostname client
[root@localhost ~]# bash
[root@client ~]# yum -y install nfs-utils
[root@client ~]# systemctl start rpcbind

(2) Write test for the distributed volume gs1. On the client, create a mount point, mount gs1 over NFS, write some files, then check the bricks on glusterfs01 and glusterfs02:

[root@client ~]# mkdir /opt/gs1
[root@client ~]# mount -t nfs 192.168.200.10:/gs1 /opt/gs1/
[root@client ~]# touch /opt/gs1/{1..10}
[root@client ~]# ls /opt/gs1/
1  10  2  3  4  5  6  7  8  9  lost+found

[root@glusterfs01 ~]# ls /gluster/brick1
1  5  7  8  9  lost+found

[root@glustfs02 ~]# ls /gluster/brick1/
10  2  3  4  6  lost+found

(3) Write test for the replicated volume gs2. On the client, create a mount point, mount gs2 over NFS, write some files, then check the bricks on glusterfs01 and glusterfs02: both hold complete, identical copies.

[root@client ~]# mkdir /opt/gs2
[root@client ~]# mount -t nfs 192.168.200.10:/gs2 /opt/gs2/
[root@client ~]# touch /opt/gs2/{1..10}
[root@client ~]# ls /opt/gs2/
1  10  2  3  4  5  6  7  8  9  lost+found


[root@glusterfs01 ~]# ls /gluster/brick2
1  10  2  3  4  5  6  7  8  9  lost+found

[root@glustfs02 ~]# ls /gluster/brick2/
1  10  2  3  4  5  6  7  8  9  lost+found

(4) Write test for the striped volume gs3. On the client, create a mount point, mount gs3 over NFS (with -o nolock), write a 256 MB test file, then check the bricks on glusterfs01 and glusterfs02:

[root@client ~]# mkdir /opt/gs3
[root@client ~]# mount -o nolock -t nfs 192.168.200.10:/gs3 /opt/gs3
[root@client ~]# dd if=/dev/zero of=/root/test bs=1024 count=262114
262114+0 records in
262114+0 records out
268404736 bytes (268 MB) copied, 0.886357 s, 303 MB/s
[root@client ~]# ls
test
[root@client ~]# cp test /opt/gs3/
[root@client ~]# ls /opt/gs3/
lost+found  test
[root@client ~]# du -sh /opt/gs3/test
256M    /opt/gs3/test

[root@glusterfs01 ~]# du -sh /gluster/brick3/test
129M    /gluster/brick3/test

[root@glusterfs02 ~]# du -sh /gluster/brick3/test
128M    /gluster/brick3/test
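
Each brick holds about half of the 256 MB file because stripe 2 splits the data into fixed-size chunks (128 KB by default, tunable via cluster.stripe-block-size) and writes them to the bricks in round-robin order. A conceptual illustration of that layout, not Gluster's actual on-disk format:

```shell
# Toy round-robin striping: split a 512 KB file into 128 KB chunks and
# deal them alternately to two "bricks"; each ends up with half the data.
dd if=/dev/zero of=/tmp/demo bs=1024 count=512 2>/dev/null
mkdir -p /tmp/b1 /tmp/b2
split -b 128K /tmp/demo /tmp/chunk.
i=0
for c in /tmp/chunk.*; do
  mv "$c" /tmp/b$(( i % 2 + 1 ))/
  i=$(( i + 1 ))
done
du -sk /tmp/b1 /tmp/b2
```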
