GlusterFS (Part 1)

Official description

Gluster is a scalable, distributed file system that aggregates disk storage resources from multiple servers into a single global namespace.

Put simply, it is an open-source distributed file system that can also be accessed via NFS or Samba.
In this lab, node1 acts as the client, while node2, node3, and node4 make up the GlusterFS cluster.
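
All commands below address the machines by hostname, which assumes working name resolution on every node. A minimal sketch, assuming static /etc/hosts entries (the addresses are placeholders for this lab):

# Append to /etc/hosts on every node; the addresses are hypothetical
cat >> /etc/hosts <<'EOF'
192.168.1.11 node1
192.168.1.12 node2
192.168.1.13 node3
192.168.1.14 node4
EOF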

Prepare the environment

[root@node2 ~]# vim prepare_bricks.sh
[root@node2 ~]# cat prepare_bricks.sh
#!/bin/bash

# Prepare a brick: carve a thin LV out of /dev/sdb and format it as XFS
pvcreate /dev/sdb
vgcreate -s 4M vol /dev/sdb
lvcreate -l 100%FREE -T vol/pool        # thin pool over all free space
lvcreate -V 10G -T vol/pool -n brick    # 10G thin volume for the brick
mkfs.xfs -i size=512 /dev/vol/brick     # 512-byte inodes leave room for Gluster's xattrs
mkdir -p /data/brick${1}                # $1 is the node number, e.g. 2 on node2
echo "/dev/vol/brick /data/brick${1} xfs defaults 0 0" >> /etc/fstab
mount /data/brick${1}

# Install package and start service
yum install -y glusterfs-server
systemctl start glusterd
systemctl enable glusterd

[root@node2 ~]# sh prepare_bricks.sh 2
... (output omitted)
[root@node2 ~]# df -Th
Filesystem              Type      Size  Used Avail Use% Mounted on
... (output omitted)
/dev/mapper/vol-brick   xfs        10G   33M   10G   1% /data/brick2
[root@node2 ~]# scp prepare_bricks.sh node3:/root
... (output omitted)
root@node3's password: 
prepare_bricks.sh                                                                                                                          100%  418   104.4KB/s   00:00    
[root@node2 ~]# scp prepare_bricks.sh node4:/root
... (output omitted)
root@node4's password: 
prepare_bricks.sh                                                                                                                          100%  418    88.8KB/s   00:00
[root@node3 ~]# sh prepare_bricks.sh 3
... (output omitted)
[root@node4 ~]# sh prepare_bricks.sh 4
... (output omitted)
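
Before peering, the storage nodes must be able to reach each other on the GlusterFS ports. A minimal sketch, assuming firewalld is in use (recent firewalld releases ship a glusterfs service definition; otherwise open 24007-24008/tcp for the daemon plus the brick port range, 49152/tcp and up):

# Run on node2, node3 and node4
firewall-cmd --permanent --add-service=glusterfs
firewall-cmd --reload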

Join the trusted storage pool

[root@node2 ~]# gluster peer probe node3
peer probe: success. 
[root@node2 ~]# gluster peer probe node4
peer probe: success. 
[root@node2 ~]# gluster pool list
UUID          Hostname   State
483b4b06-bcdb-4399-bc1c-a36e7a0e5274  node3      Connected 
fa553747-3feb-4422-8dad-b5f61a93aa39  node4      Connected 
19d39a4f-4e92-4ff4-a3a2-539d44358dec  localhost  Connected 
[root@node2 ~]# gluster peer status
Number of Peers: 2

Hostname: node3
Uuid: 483b4b06-bcdb-4399-bc1c-a36e7a0e5274
State: Peer in Cluster (Connected)

Hostname: node4
Uuid: fa553747-3feb-4422-8dad-b5f61a93aa39
State: Peer in Cluster (Connected)

Create volumes

Volumes come in five types:

1) Distributed

The default type. Each file is stored in full on a single brick, chosen by hashing the file name, so files are spread across the bricks without any redundancy.

[root@node2 ~]# gluster volume create vol_distributed node2:/data/brick2/distributed node3:/data/brick3/distributed node4:/data/brick4/distributed
volume create: vol_distributed: success: please start the volume to access data
[root@node2 ~]# gluster volume info vol_distributed

Volume Name: vol_distributed
Type: Distribute
Volume ID: ecd70c34-5808-46ee-b813-9ed6f707b1a3
Status: Created
Snapshot Count: 0
Number of Bricks: 3
Transport-type: tcp
Bricks:
Brick1: node2:/data/brick2/distributed
Brick2: node3:/data/brick3/distributed
Brick3: node4:/data/brick4/distributed
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
[root@node2 ~]# gluster volume start vol_distributed
volume start: vol_distributed: success
[root@node2 ~]# gluster volume status
Status of volume: vol_distributed
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node2:/data/brick2/distributed        49152     0          Y       1821 
Brick node3:/data/brick3/distributed        49152     0          Y       1770 
Brick node4:/data/brick4/distributed        49152     0          Y       16476

Task Status of Volume vol_distributed
------------------------------------------------------------------------------
There are no active volume tasks

2) Replicated

Data is written simultaneously to every brick, keeping as many full copies as specified (here, replica 3).

[root@node2 ~]# gluster volume create vol_replicated replica 3 node2:/data/brick2/replicated node3:/data/brick3/replicated node4:/data/brick4/replicated
volume create: vol_replicated: success: please start the volume to access data
[root@node2 ~]# gluster volume info vol_replicated

Volume Name: vol_replicated
Type: Replicate
Volume ID: e50727b4-d71b-4dab-b74a-cfd2a0027bb3
Status: Created
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: node2:/data/brick2/replicated
Brick2: node3:/data/brick3/replicated
Brick3: node4:/data/brick4/replicated
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off
[root@node2 ~]# gluster volume start vol_replicated
volume start: vol_replicated: success
[root@node2 ~]# gluster volume status vol_replicated
Status of volume: vol_replicated
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node2:/data/brick2/replicated         49153     0          Y       1873 
Brick node3:/data/brick3/replicated         49153     0          Y       1828 
Brick node4:/data/brick4/replicated         49153     0          Y       1811 
Self-heal Daemon on localhost               N/A       N/A        Y       1894 
Self-heal Daemon on node4                   N/A       N/A        Y       1832 
Self-heal Daemon on node3                   N/A       N/A        Y       1849 

Task Status of Volume vol_replicated
------------------------------------------------------------------------------
There are no active volume tasks

3) Dispersed

Similar to RAID 5: each file is split into encoded fragments spread across the bricks, with one brick's worth of redundancy, so with disperse 3 redundancy 1 the volume survives the loss of any one brick.
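
A quick capacity sanity check, assuming the three 10G bricks prepared above: usable space is (disperse - redundancy) / disperse of the raw total.

# usable = (disperse - redundancy) x brick_size
echo $(( (3 - 1) * 10 ))G    # prints 20G: usable space out of 30G raw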

[root@node2 ~]# gluster volume create vol_dispersed disperse 3 redundancy 1 node2:/data/brick2/dispersed node3:/data/brick3/dispersed node4:/data/brick4/dispersed
volume create: vol_dispersed: success: please start the volume to access data
[root@node2 ~]# gluster volume info vol_dispersed

Volume Name: vol_dispersed
Type: Disperse
Volume ID: e3894a96-7823-43c7-8f24-c5b628eb86ed
Status: Created
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: node2:/data/brick2/dispersed
Brick2: node3:/data/brick3/dispersed
Brick3: node4:/data/brick4/dispersed
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
[root@node2 ~]# gluster volume start vol_dispersed
volume start: vol_dispersed: success
[root@node2 ~]# gluster volume status vol_dispersed
Status of volume: vol_dispersed
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node2:/data/brick2/dispersed          49154     0          Y       2028 
Brick node3:/data/brick3/dispersed          49154     0          Y       1918 
Brick node4:/data/brick4/dispersed          49154     0          Y       16630
Self-heal Daemon on localhost               N/A       N/A        Y       1930 
Self-heal Daemon on node4                   N/A       N/A        Y       16558
Self-heal Daemon on node3                   N/A       N/A        Y       1851 

Task Status of Volume vol_dispersed
------------------------------------------------------------------------------
There are no active volume tasks

4) Distributed Replicated

Distribution layered on replication. Bricks are grouped in the order they are listed: with replica 3 and six bricks, each consecutive group of three forms one replica set, and files are then distributed across the two sets (a quick verification sketch follows the status output below).

[root@node2 ~]# gluster volume create vol_distributed_replicated replica 3 node2:/data/brick2/distributed_replicated21 node3:/data/brick3/distributed_replicated31 node4:/data/brick4/distributed_replicated41 node2:/data/brick2/distributed_replicated22 node3:/data/brick3/distributed_replicated32 node4:/data/brick4/distributed_replicated42
volume create: vol_distributed_replicated: success: please start the volume to access data
[root@node2 ~]# gluster volume info vol_distributed_replicated

Volume Name: vol_distributed_replicated
Type: Distributed-Replicate
Volume ID: b8049701-9587-49ac-9cb2-1861421125c2
Status: Created
Snapshot Count: 0
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: node2:/data/brick2/distributed_replicated21
Brick2: node3:/data/brick3/distributed_replicated31
Brick3: node4:/data/brick4/distributed_replicated41
Brick4: node2:/data/brick2/distributed_replicated22
Brick5: node3:/data/brick3/distributed_replicated32
Brick6: node4:/data/brick4/distributed_replicated42
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
performance.client-io-threads: off
[root@node2 ~]# gluster volume start vol_distributed_replicated
volume start: vol_distributed_replicated: success
[root@node2 ~]# gluster volume status vol_distributed_replicated
Status of volume: vol_distributed_replicated
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node2:/data/brick2/distributed_replic
ated21                                      49155     0          Y       2277 
Brick node3:/data/brick3/distributed_replic
ated31                                      49155     0          Y       2166 
Brick node4:/data/brick4/distributed_replic
ated41                                      49155     0          Y       2141 
Brick node2:/data/brick2/distributed_replic
ated22                                      49156     0          Y       2297 
Brick node3:/data/brick3/distributed_replic
ated32                                      49156     0          Y       2186 
Brick node4:/data/brick4/distributed_replic
ated42                                      49156     0          Y       2161 
Self-heal Daemon on localhost               N/A       N/A        Y       1894 
Self-heal Daemon on node3                   N/A       N/A        Y       1849 
Self-heal Daemon on node4                   N/A       N/A        Y       1832 

Task Status of Volume vol_distributed_replicated
------------------------------------------------------------------------------
There are no active volume tasks
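
A hedged way to verify the grouping once the FUSE client from the "Client mounts" section below is installed: write a file through a temporary mount and see which bricks hold it (the mount point is illustrative):

# On the client
mkdir -p /mnt/dr_test
mount -t glusterfs node2:/vol_distributed_replicated /mnt/dr_test
echo probe > /mnt/dr_test/probe.txt
umount /mnt/dr_test
# On node2: the file should sit in exactly one of the two replica sets,
# i.e. under distributed_replicated21 or distributed_replicated22, with the
# matching bricks on node3 and node4 holding identical copies
ls /data/brick2/distributed_replicated2{1,2}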

5) Distributed Dispersed

Distribution layered on dispersal. With disperse-data 2 redundancy 1, each consecutive group of three bricks forms one (2 + 1) disperse set, and files are distributed across the two sets.

[root@node2 ~]# gluster volume create vol_distributed_dispersed disperse-data 2 redundancy 1 \
> node2:/data/brick2/distributed_dispersed21 \
> node3:/data/brick3/distributed_dispersed31 \
> node4:/data/brick4/distributed_dispersed41 \
> node2:/data/brick2/distributed_dispersed22 \
> node3:/data/brick3/distributed_dispersed32 \
> node4:/data/brick4/distributed_dispersed42
volume create: vol_distributed_dispersed: success: please start the volume to access data
[root@node2 ~]# gluster volume info vol_distributed_dispersed

Volume Name: vol_distributed_dispersed
Type: Distributed-Disperse
Volume ID: 797e2e88-61e0-4df4-a308-16c5140b2480
Status: Created
Snapshot Count: 0
Number of Bricks: 2 x (2 + 1) = 6
Transport-type: tcp
Bricks:
Brick1: node2:/data/brick2/distributed_dispersed21
Brick2: node3:/data/brick3/distributed_dispersed31
Brick3: node4:/data/brick4/distributed_dispersed41
Brick4: node2:/data/brick2/distributed_dispersed22
Brick5: node3:/data/brick3/distributed_dispersed32
Brick6: node4:/data/brick4/distributed_dispersed42
Options Reconfigured:
transport.address-family: inet
storage.fips-mode-rchecksum: on
nfs.disable: on
[root@node2 ~]# gluster volume start vol_distributed_dispersed
volume start: vol_distributed_dispersed: success
[root@node2 ~]# gluster volume status vol_distributed_dispersed
Status of volume: vol_distributed_dispersed
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick node2:/data/brick2/distributed_disper
sed21                                       49157     0          Y       2529 
Brick node3:/data/brick3/distributed_disper
sed31                                       49157     0          Y       17071
Brick node4:/data/brick4/distributed_disper
sed41                                       49157     0          Y       17051
Brick node2:/data/brick2/distributed_disper
sed22                                       49158     0          Y       2549 
Brick node3:/data/brick3/distributed_disper
sed32                                       49158     0          Y       17091
Brick node4:/data/brick4/distributed_disper
sed42                                       49158     0          Y       17071
Self-heal Daemon on localhost               N/A       N/A        Y       1894 
Self-heal Daemon on node4                   N/A       N/A        Y       1832 
Self-heal Daemon on node3                   N/A       N/A        Y       1849 

Task Status of Volume vol_distributed_dispersed
------------------------------------------------------------------------------
There are no active volume tasks

Check the directories created on the bricks

[root@node2 ~]# tree /data/brick2/
/data/brick2/
├── dispersed
├── distributed
├── distributed_dispersed21
├── distributed_dispersed22
├── distributed_replicated21
├── distributed_replicated22
└── replicated

7 directories, 0 files

With all volumes created, take a VM snapshot of node2, node3, and node4. One caveat: the brick directories shown above belong to Gluster; data should only ever be written through a client mount, never directly into a brick.

Client mounts

The volumes can be mounted in three ways:

1) Mount via the GlusterFS native client (FUSE)

[root@node1 ~]# yum install -y glusterfs glusterfs-fuse
[root@node1 ~]# mkdir /mnt/distributed
[root@node1 ~]# mount -t glusterfs node2:/vol_distributed /mnt/distributed/
[root@node1 ~]# df -Th
Filesystem              Type            Size  Used Avail Use% Mounted on
... (output omitted)
node2:/vol_distributed  fuse.glusterfs   30G  407M   30G   2% /mnt/distributed

Note the size: the distributed volume aggregates the three 10G bricks into a single 30G namespace. To avoid losing access when node2 goes down, list multiple servers in the mount entry:

[root@node1 ~]# vim /etc/fstab
[root@node1 ~]# cat /etc/fstab
... (output omitted)
node2:/vol_distributed,node3:/vol_distributed,node4:/vol_distributed /mnt/distributed glusterfs defaults,_netdev 0 0
[root@node1 ~]# umount /mnt/distributed/
[root@node1 ~]# mount -a
[root@node1 ~]# df -Th
Filesystem              Type            Size  Used Avail Use% Mounted on
... (output omitted)
node2:/vol_distributed  fuse.glusterfs   30G  407M   30G   2% /mnt/distributed
[root@node1 ~]# echo "Here is node1" > /mnt/distributed/welcome.txt
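
Because this is a distributed volume, welcome.txt is stored in full on exactly one brick. A quick check on the servers (run each command on its own node); the file should appear under exactly one of the three:

ls -l /data/brick2/distributed/    # on node2
ls -l /data/brick3/distributed/    # on node3
ls -l /data/brick4/distributed/    # on node4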
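
An alternative to the comma-separated server list is the backup-volfile-servers mount option; the extra servers are only contacted to fetch the volume layout if node2 is unreachable at mount time (once mounted, the client talks to all bricks directly):

mount -t glusterfs -o backup-volfile-servers=node3:node4 node2:/vol_distributed /mnt/distributed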

2) Mount via NFS

Export the volume with NFS-Ganesha. Gluster's built-in NFS server is disabled (note nfs.disable: on in the volume options above), so Ganesha's Gluster FSAL serves the export instead.

[root@node2 ~]# yum install -y nfs-ganesha nfs-ganesha-gluster
[root@node2 ~]# cp /etc/ganesha/ganesha.conf /etc/ganesha/ganesha.conf.bak
[root@node2 ~]# vim /etc/ganesha/ganesha.conf
[root@node2 ~]# egrep -v "#|^$" /etc/ganesha/ganesha.conf
EXPORT{
        Export_Id = 1 ;   # Export ID unique to each export
        Path = "/vol_replicated";  # Path of the volume to be exported. Eg: "/test_volume"

        FSAL {
                name = GLUSTER;
                hostname = "node2";  # IP of one of the nodes in the trusted pool
                volume = "vol_replicated";       # Volume name. Eg: "test_volume"
        }

        Access_type = RW;        # Access permissions
        Squash = No_root_squash; # To enable/disable root squashing
        Disable_ACL = TRUE;      # To enable/disable ACL
        Pseudo = "/vol_replicated_pseudo";       # NFSv4 pseudo path for this export. Eg: "/test_volume_pseudo"
        Protocols = "3,4" ;      # NFS protocols supported
        Transports = "UDP,TCP" ; # Transport protocols supported
        SecType = "sys";         # Security flavors supported
}
[root@node2 ~]# systemctl start nfs-ganesha
[root@node2 ~]# systemctl enable nfs-ganesha
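
If firewalld is active on node2, the client must also be able to reach the NFS ports. A minimal sketch using standard firewalld service definitions (adjust to your own policy; NFSv3 additionally needs rpc-bind and mountd):

firewall-cmd --permanent --add-service=nfs --add-service=rpc-bind --add-service=mountd
firewall-cmd --reload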

Mount on the client

[root@node1 ~]# yum install -y nfs-utils
[root@node1 ~]# showmount -e node2
Export list for node2:
/vol_replicated (everyone)
[root@node1 ~]# mkdir /mnt/replicated
[root@node1 ~]# mount -t nfs node2:/vol_replicated /mnt/replicated/
[root@node1 ~]# df -Th
Filesystem              Type            Size  Used Avail Use% Mounted on
... (output omitted)
node2:/vol_replicated   nfs              10G  135M  9.9G   2% /mnt/replicated
[root@node1 ~]# echo "Here is node1" > /mnt/replicated/welcome.txt
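
Note the mounted size is 10G, the capacity of a single brick, since every brick holds a full copy. The file written over NFS should therefore show up on all three servers; a quick check (run each command on its own node):

ls -l /data/brick2/replicated/welcome.txt    # on node2
ls -l /data/brick3/replicated/welcome.txt    # on node3
ls -l /data/brick4/replicated/welcome.txt    # on node4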

3) Mount via Samba

Server-side preparation

[root@node2 ~]# yum install -y samba samba-vfs-glusterfs
... (output omitted)
[root@node2 ~]# adduser glusteruser
[root@node2 ~]# smbpasswd -a glusteruser
New SMB password:
Retype new SMB password:
Added user glusteruser.
[root@node2 ~]# vim /etc/samba/smb.conf
[root@node2 ~]# cat /etc/samba/smb.conf
... (output omitted)
[gluster_vol_dispersed]
  comment = For samba share of volume vol_dispersed
  vfs objects = glusterfs
  glusterfs:volume = vol_dispersed
  glusterfs:logfile = /var/log/samba/glusterfs.%M.log
  glusterfs:loglevel = 7
  path = /
  read only = no
  guest ok = yes
  kernel share modes = no
[root@node2 ~]# systemctl start smb
[root@node2 ~]# systemctl enable smb

Mount on the client

Mounting via CIFS right away leaves the share unwritable: the volume root is owned by root with restrictive permissions, so the Samba user cannot write. Mount once with FUSE, relax the permissions, then mount via CIFS:

[root@node1 ~]# mkdir /mnt/dispersed_temp
[root@node1 ~]# mount -t glusterfs node2:/vol_dispersed /mnt/dispersed_temp/
[root@node1 ~]# echo "Here is node1" > /mnt/dispersed_temp/welcome.txt
[root@node1 ~]# chmod 777 /mnt/dispersed_temp/
[root@node1 ~]# umount /mnt/dispersed_temp/

Mount via CIFS

[root@node1 ~]# yum install -y samba-client cifs-utils
[root@node1 ~]# smbclient -L node2 -U glusteruser
Enter SAMBA\glusteruser's password: 

  Sharename       Type      Comment
  ---------       ----      -------
  print$          Disk      Printer Drivers
  gluster_vol_dispersed Disk      For samba share of volume vol_dispersed
  IPC$            IPC       IPC Service (Samba 4.10.4)
  glusteruser     Disk      Home Directories
Reconnecting with SMB1 for workgroup listing.

  Server               Comment
  ---------            -------

  Workgroup            Master
  ---------            -------
[root@node1 ~]# mkdir /mnt/dispersed
[root@node1 ~]# mount -t cifs -o username=glusteruser,password=123456 //node2/gluster_vol_dispersed /mnt/dispersed
[root@node1 ~]# df -Th
Filesystem                    Type            Size  Used Avail Use% Mounted on
... (output omitted)
//node2/gluster_vol_dispersed cifs             20G  272M   20G   2% /mnt/dispersed
[root@node1 ~]# echo "mount vol_dispersed via cifs from node1" > /mnt/dispersed/second.txt
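
Note the 20G size: two data fragments' worth of the three 10G bricks, matching the 2 + 1 disperse layout. On a dispersed volume the bricks hold erasure-coded fragments rather than whole files, so second.txt exists on every brick but is not readable there; a hedged peek on the servers:

ls -l /data/brick2/dispersed/second.txt    # on node2; likewise brick3 and brick4
# each brick holds roughly half the file, padded to the encoding block size;
# always read through a client mount, never from a brick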

Coming up in the next part
Common volume options:
restricting client IPs, ACLs, quotas, expanding and shrinking volumes
Snapshots
CTDB

Reposted from the WeChat official account: 开源Ops
Original link: https://mp.weixin.qq.com/s/AxTZisaFybfJhM0-ZOWwiw
