GlusterFS

【CentOS 6.5】

 

  • Environment layout:

192.168.16.12  gfs1  3 disks in total  gluster

192.168.16.13  gfs2  3 disks in total  gluster

192.168.16.14  gfs3  3 disks in total  gluster

192.168.16.15  gfs4  3 disks in total  gluster

192.168.16.16  web   only nfs-utils needs to be installed

  • Mount the two installation discs and set up a local yum repository, then disable the firewall and SELinux; the following configuration is identical on all four hosts

Configure glusterd

nfs-utils must be installed on every host

[root@gfs1 ~]#wget ftp://172.16.0.1/repos/glusterfs*

[root@gfs1 ~]#wget ftp://172.16.0.1/repos/c61*

[root@gfs1 ~]# yum -y install glusterfs-server glusterfs-cli glusterfs-geo-replication
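
The per-host preparation described above (firewall and SELinux off, nfs-utils present) can be scripted. A minimal sketch for CentOS 6, assuming the local yum repository is already in place; run it on all five hosts:

service iptables stop && chkconfig iptables off            # stop the firewall now and at boot
setenforce 0                                               # SELinux permissive for this session
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config   # keep it off after a reboot
yum -y install nfs-utils                                   # required on every host, including web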

[root@gfs1 ~]# vim /etc/hosts

 

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4

::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.16.12 gfs1

192.168.16.13 gfs2

192.168.16.14 gfs3

192.168.16.15 gfs4

[root@gfs2 ~]# vim /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4

::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.16.12 gfs1

192.168.16.13 gfs2

192.168.16.14 gfs3

192.168.16.15 gfs4

[root@gfs2 ~]# /etc/init.d/glusterd start

Starting glusterd:                                         [  OK  ]

[root@gfs3 ~]# vim /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4

::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.16.12 gfs1

192.168.16.13 gfs2

192.168.16.14 gfs3

192.168.16.15 gfs4

[root@gfs3 ~]# /etc/init.d/glusterd start

Starting glusterd:                                         [  OK  ]

[root@gfs4 ~]# vim /etc/hosts

 

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4

::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.16.12 gfs1

192.168.16.13 gfs2

192.168.16.14 gfs3

192.168.16.15 gfs4

[root@gfs4 ~]# /etc/init.d/glusterd start

Starting glusterd:                                         [  OK  ]


[root@gfs1 ~]# which glusterfs

/usr/sbin/glusterfs

[root@gfs1 ~]# glusterfs -V

glusterfs 3.4.6 built on Nov 13 2014 12:41:25

Repository revision: git://git.gluster.com/glusterfs.git

Copyright (c) 2006-2013 Red Hat, Inc. <http://www.redhat.com/>

GlusterFS comes with ABSOLUTELY NO WARRANTY.

It is licensed to you under your choice of the GNU Lesser

General Public License, version 3 or any later version (LGPLv3

or later), or the GNU General Public License, version 2 (GPLv2),

in all cases as published by the Free Software Foundation.

Starting and stopping the service

[root@gfs1 ~]# /etc/init.d/glusterd status

glusterd is stopped

[root@gfs1 ~]# /etc/init.d/glusterd start

Starting glusterd:                                         [  OK  ]

[root@gfs1 ~]# /etc/init.d/glusterd status

glusterd (pid  2785) is running...

[root@gfs1 ~]# chkconfig glusterd on
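
Instead of editing /etc/hosts and starting glusterd on each node by hand, the same steps can be pushed from gfs1 with a small loop. A sketch, assuming passwordless SSH from gfs1 to the other nodes (not set up in this walkthrough):

for h in gfs2 gfs3 gfs4; do
    scp /etc/hosts $h:/etc/hosts                           # reuse gfs1's host table
    ssh $h '/etc/init.d/glusterd start; chkconfig glusterd on'
done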

 

  • Add the storage hosts to the trusted storage pool

[root@gfs1 ~]# gluster peer probe gfs2

peer probe: success

[root@gfs1 ~]# gluster peer probe gfs3

peer probe: success

[root@gfs1 ~]# gluster peer probe gfs4

peer probe: success
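
The three probes can also be issued in one loop from gfs1; a quick sketch:

for h in gfs2 gfs3 gfs4; do gluster peer probe $h; done    # gfs1 itself never needs to be probed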

 

  • Check the result of adding the peers

[root@gfs1 ~]# gluster peer status

Number of Peers: 3

Hostname: gfs2

Port: 24007

Uuid: 610d3c5b-1186-4c53-9bd5-897e4dac8651

State: Peer in Cluster (Connected)

Hostname: gfs3

Port: 24007

Uuid: e10050d8-a1d3-4c2c-9ebf-38c8d997a76e

State: Peer in Cluster (Connected)

Hostname: gfs4

Port: 24007

Uuid: 6b617bd0-a0cb-40dc-823c-909761656ecd

State: Peer in Cluster (Connected)

 

  • Make sure this package is installed on every node

[root@gfs1 ~]# rpm -qa xfsprogs

xfsprogs-3.1.1-20.el6.x86_64

[root@gfs2 ~]# rpm -qa xfsprogs

xfsprogs-3.1.1-20.el6.x86_64

[root@gfs3 ~]# rpm -qa xfsprogs

xfsprogs-3.1.1-20.el6.x86_64

[root@gfs4 ~]# rpm -qa xfsprogs

xfsprogs-3.1.1-20.el6.x86_64

 

  • Three disks in total on each host

[root@gfs1 ~]# fdisk -l

Disk /dev/sda: 42.9 GB, 42949672960 bytes

255 heads, 63 sectors/track, 5221 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x0000dd2f

 

   Device Boot      Start         End      Blocks   Id  System

/dev/sda1   *           1          64      512000   83  Linux

Partition 1 does not end on cylinder boundary.

/dev/sda2              64        5222    41430016   8e  Linux LVM

Disk /dev/sdb: 5368 MB, 5368709120 bytes

255 heads, 63 sectors/track, 652 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x00000000

Disk /dev/sdc: 5368 MB, 5368709120 bytes

255 heads, 63 sectors/track, 652 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x00000000

Disk /dev/mapper/VolGroup-lv_root: 40.3 GB, 40340815872 bytes

255 heads, 63 sectors/track, 4904 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x00000000

Disk /dev/mapper/VolGroup-lv_swap: 2080 MB, 2080374784 bytes

255 heads, 63 sectors/track, 252 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Sector size (logical/physical): 512 bytes / 512 bytes

I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk identifier: 0x00000000

[root@gfs1 ~]# mkfs.ext4 /dev/sdb

mke2fs 1.41.12 (17-May-2010)

/dev/sdb is entire device, not just one partition!

Proceed anyway? (y,n) y

Filesystem label=

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

Stride=0 blocks, Stripe width=0 blocks

327680 inodes, 1310720 blocks

65536 blocks (5.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=1342177280

40 block groups

32768 blocks per group, 32768 fragments per group

8192 inodes per group

Superblock backups stored on blocks:

32768, 98304, 163840, 229376, 294912, 819200, 884736

Writing inode tables: done

Creating journal (32768 blocks): done

Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 27 mounts or

180 days, whichever comes first.  Use tune2fs -c or -i to override.

[root@gfs1 ~]# mkdir -p /gluster/brick1

[root@gfs1 ~]# mount /dev/sdb /gluster/brick1/

[root@gfs1 ~]# df -h

Filesystem                    Size  Used Avail Use% Mounted on

/dev/mapper/VolGroup-lv_root   37G  3.6G   32G  11% /

tmpfs                         491M   80K  491M   1% /dev/shm

/dev/sda1                     485M   35M  426M   8% /boot

/dev/sr0                      4.2G  4.2G     0 100% /media/cdrom

/dev/sdb                      5.0G  138M  4.6G   3% /gluster/brick1

[root@gfs1 ~]# mkfs.ext4 /dev/sdc

mke2fs 1.41.12 (17-May-2010)

/dev/sdc is entire device, not just one partition!

Proceed anyway? (y,n) y

Filesystem label=

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

Stride=0 blocks, Stripe width=0 blocks

327680 inodes, 1310720 blocks

65536 blocks (5.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=1342177280

40 block groups

32768 blocks per group, 32768 fragments per group

8192 inodes per group

Superblock backups stored on blocks:

32768, 98304, 163840, 229376, 294912, 819200, 884736

Writing inode tables: done

Creating journal (32768 blocks): done

Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 32 mounts or

180 days, whichever comes first.  Use tune2fs -c or -i to override.

[root@gfs1 ~]# mkdir -p /gluster/brick2

[root@gfs1 ~]# mount /dev/sdc /gluster/brick2/

[root@gfs1 ~]# df -hT

Filesystem                   Type     Size  Used Avail Use% Mounted on

/dev/mapper/VolGroup-lv_root ext4      37G  3.6G   32G  11% /

tmpfs                        tmpfs    491M   80K  491M   1% /dev/shm

/dev/sda1                    ext4     485M   35M  426M   8% /boot

/dev/sr0                     iso9660  4.2G  4.2G     0 100% /media/cdrom

/dev/sdb                     ext4     5.0G  138M  4.6G   3% /gluster/brick1

/dev/sdc                     ext4     5.0G  138M  4.6G   3% /gluster/brick2

[root@gfs1 ~]# echo "mount /dev/sdb /gluster/brick1" >>/etc/rc.local

[root@gfs1 ~]# echo "mount /dev/sdc /gluster/brick2" >>/etc/rc.local

【The setup above is identical on all four gfs hosts】
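
The brick preparation on each node (format, create the mount point, mount, make it persistent) can be condensed into a few lines. A sketch that uses /etc/fstab instead of the rc.local entries above; either approach keeps the bricks mounted after a reboot:

mkfs.ext4 -F /dev/sdb && mkfs.ext4 -F /dev/sdc           # -F skips the "entire device" prompt
mkdir -p /gluster/brick1 /gluster/brick2
echo '/dev/sdb  /gluster/brick1  ext4  defaults  0 0' >> /etc/fstab
echo '/dev/sdc  /gluster/brick2  ext4  defaults  0 0' >> /etc/fstab
mount -a                                                 # mounts both bricks now and on every boot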

[root@gfs1 ~]# gluster volume create gs1 gfs1:/gluster/brick1 gfs2:/gluster/brick1 force

volume create: gs1: success: please start the volume to access data

[root@gfs1 ~]# gluster volume start gs1

volume start: gs1: success

[root@gfs4 ~]# gluster volume info

Volume Name: gs1

Type: Distribute

Volume ID: 42037eb3-3705-4927-ace2-f138a84963a4

Status: Started

Number of Bricks: 2

Transport-type: tcp

Bricks:

Brick1: gfs1:/gluster/brick1

Brick2: gfs2:/gluster/brick1

 

  • The two ways to mount a volume

Mount with the glusterfs client (FUSE)

[root@gfs4 ~]# mount -t glusterfs 127.0.0.1:/gs1 /mnt/

[root@gfs4 ~]# df -h

Filesystem                    Size  Used Avail Use% Mounted on

/dev/mapper/VolGroup-lv_root   37G  3.6G   32G  11% /

tmpfs                         491M   80K  491M   1% /dev/shm

/dev/sda1                     485M   35M  426M   8% /boot

/dev/sr0                      4.2G  4.2G     0 100% /media/cdrom

/dev/sdc                      5.0G  138M  4.6G   3% /gluster/brick2

/dev/sdb                      5.0G  138M  4.6G   3% /gluster/brick1

127.0.0.1:/gs1                9.9G  277M  9.1G   3% /mnt

 

  • If this step does not work, restart the glusterd service and mount again

[root@gfs1 ~]# touch /mnt/{1..5}

[root@gfs1 ~]# ls /mnt/

1  2  3  4  5  lost+found

[root@gfs2 ~]# mount -t glusterfs 127.0.0.1:/gs1 /mnt/

[root@gfs2 ~]# ls /mnt/

1  2  3  4  5  lost+found

[root@gfs3 ~]# mount -t glusterfs 127.0.0.1:/gs1 /mnt

[root@gfs3 ~]# ls /mnt/

1  2  3  4  5  lost+found

[root@gfs4 ~]# mount -t glusterfs 127.0.0.1:/gs1 /mnt/

[root@gfs4 ~]# ls /mnt/

1  2  3  4  5  lost+found
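
To make a FUSE mount survive a reboot, an /etc/fstab entry can be used on the client instead of mounting by hand; a sketch (_netdev delays the mount until the network is up):

echo '127.0.0.1:/gs1  /mnt  glusterfs  defaults,_netdev  0 0' >> /etc/fstab
mount -a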

 

  • Create a replicated volume (expanded into a distributed-replicated volume later)

[root@gfs1 ~]# gluster volume create gs2 replica 2 gfs3:/gluster/brick1 gfs4:/gluster/brick1 force

volume create: gs2: success: please start the volume to access data

[root@gfs1 ~]# gluster volume info gs2 

Volume Name: gs2

Type: Replicate

Volume ID: 7c84cc12-bea5-45e5-805a-222c70192962

Status: Created

Number of Bricks: 1 x 2 = 2

Transport-type: tcp

Bricks:

Brick1: gfs3:/gluster/brick1

Brick2: gfs4:/gluster/brick1

[root@gfs1 ~]# gluster volume start gs2

volume start: gs2: success

 

  • Create a striped volume

[root@gfs1 ~]# gluster volume create gs3 stripe 2 gfs1:/gluster/brick2 gfs2:/gluster/brick2 force

volume create: gs3: success: please start the volume to access data

[root@gfs1 ~]# gluster volume info gs3 

Volume Name: gs3

Type: Stripe

Volume ID: c1e84e60-cea2-4908-8b45-8470d93a7a19

Status: Created

Number of Bricks: 1 x 2 = 2

Transport-type: tcp

Bricks:

Brick1: gfs1:/gluster/brick2

Brick2: gfs2:/gluster/brick2

[root@gfs1 ~]# gluster volume start gs3

volume start: gs3: success
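
The three volumes created so far differ only in the arguments passed to gluster volume create; as a side-by-side sketch of what each type does:

# distributed (default): each file is hashed onto exactly one of the bricks
gluster volume create gs1 gfs1:/gluster/brick1 gfs2:/gluster/brick1 force
# replicated: every file is written to both bricks (mirroring)
gluster volume create gs2 replica 2 gfs3:/gluster/brick1 gfs4:/gluster/brick1 force
# striped: each file is split into chunks spread across both bricks
gluster volume create gs3 stripe 2 gfs1:/gluster/brick2 gfs2:/gluster/brick2 force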

 

  • Run data write tests on the volumes

[root@web ~]# mount -t nfs 192.168.16.12:/gs1 /mnt

[root@web ~]# df -h

Filesystem                    Size  Used Avail Use% Mounted on

/dev/mapper/VolGroup-lv_root   37G  3.6G   32G  11% /

tmpfs                         491M  228K  491M   1% /dev/shm

/dev/sda1                     485M   35M  426M   8% /boot

/dev/sr0                      4.2G  4.2G     0 100% /media/cdrom

192.168.16.12:/gs1            9.9G  277M  9.1G   3% /mnt

[root@web ~]# touch /mnt/{1..10}

[root@web ~]# ls /mnt/

1  10  2  3  4  5  6  7  8  9  lost+found

[root@gfs1 ~]# ls /gluster/brick1

1  5  7  8  9  lost+found

[root@gfs2 ~]# ls /gluster/brick1

10  2  3  4  6  lost+found

 

  • Write test on the replicated volume gs2

[root@web ~]# mount -t nfs 192.168.16.12:/gs2 /mnt

[root@web ~]# df -h

Filesystem                    Size  Used Avail Use% Mounted on

/dev/mapper/VolGroup-lv_root   37G  3.6G   32G  11% /

tmpfs                         491M  228K  491M   1% /dev/shm

/dev/sda1                     485M   35M  426M   8% /boot

/dev/sr0                      4.2G  4.2G     0 100% /media/cdrom

192.168.16.12:/gs1            5.0G  138M  4.6G   3% /mnt

192.168.16.12:/gs2            5.0G  138M  4.6G   3% /mnt

[root@web ~]# ls /mnt/

lost+found

[root@web ~]# ls /mnt/

lost+found

[root@web ~]# touch /mnt/{20..30}

[root@web ~]# ls /mnt/

20  21  22  23  24  25  26  27  28  29  30  lost+found

[root@gfs3 ~]# ls /gluster/brick1

20  21  22  23  24  25  26  27  28  29  30  lost+found

[root@gfs4 ~]# ls /gluster/brick1

20  21  22  23  24  25  26  27  28  29  30  lost+found

 

  • Write test on the striped volume gs3

[root@web ~]# umount /mnt/

[root@web ~]# mount -t nfs 192.168.16.12:/gs3 /mnt/

[root@web ~]# df -h

Filesystem                    Size  Used Avail Use% Mounted on

/dev/mapper/VolGroup-lv_root   37G  3.6G   32G  11% /

tmpfs                         491M  228K  491M   1% /dev/shm

/dev/sda1                     485M   35M  426M   8% /boot

/dev/sr0                      4.2G  4.2G     0 100% /media/cdrom

192.168.16.12:/gs1            9.9G  276M  9.1G   3% /mnt

192.168.16.12:/gs3            9.9G  276M  9.1G   3% /mnt

[root@web ~]# dd if=/dev/zero of=/root/test bs=1024 count=262144

262144+0 records in

262144+0 records out

268435456 bytes (268 MB) copied, 5.6587 s, 47.4 MB/s

[root@web ~]# du -sh test

256M test

[root@web ~]# ls /mnt/

lost+found

[root@web ~]# cp test /mnt/

[root@web ~]# ls /mnt/

lost+found  test

[root@web ~]# du -sh /mnt/test

257M /mnt/test

[root@gfs1 ~]# du -sh /gluster/brick2/test

129M /gluster/brick2/test

[root@gfs2 ~]#  du -sh /gluster/brick2/test

129M /gluster/brick2/test
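
This matches the striping behaviour: the file is cut into fixed-size chunks that alternate between the two bricks, so each brick ends up with roughly half of the 256 MB (about 128 MB plus filesystem overhead). A quick cross-check, assuming SSH access between the nodes:

for h in gfs1 gfs2; do ssh $h du -sh /gluster/brick2/test; done    # roughly 129M on each brick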

 

  • Expanding a volume by adding bricks
  • Expanding the replicated volume gs2

[root@gfs1 ~]# gluster volume add-brick gs2 replica 2 gfs3:/gluster/brick2 gfs4:/gluster/brick2 force

volume add-brick: success

[root@gfs1 ~]# gluster volume info gs2

Volume Name: gs2

Type: Distributed-Replicate

Volume ID: 7c84cc12-bea5-45e5-805a-222c70192962

Status: Started

Number of Bricks: 2 x 2 = 4

Transport-type: tcp

Bricks:

Brick1: gfs3:/gluster/brick1

Brick2: gfs4:/gluster/brick1

Brick3: gfs3:/gluster/brick2

Brick4: gfs4:/gluster/brick2

 

  • Check the capacity after the expansion and run a write test

Mount gs2 on the web host and check the capacity of the mount point

[root@web ~]# umount /mnt/

[root@web ~]# umount /mnt/

[root@web ~]# mount -t nfs 192.168.16.12:/gs2 /mnt/

[root@web ~]# df -h

Filesystem                    Size  Used Avail Use% Mounted on

/dev/mapper/VolGroup-lv_root   37G  3.9G   32G  11% /

tmpfs                         491M  228K  491M   1% /dev/shm

/dev/sda1                     485M   35M  426M   8% /boot

/dev/sr0                      4.2G  4.2G     0 100% /media/cdrom

192.168.16.12:/gs2            9.9G  277M  9.1G   3% /mnt【already expanded】

 

  • Write data from the web host

[root@web ~]# ls /mnt/

20  21  22  23  24  25  26  27  28  29  30  lost+found

[root@web ~]# touch /mnt/{30..40}

[root@web ~]# ls /mnt/

20  22  24  26  28  30  32  34  36  38  40

21  23  25  27  29  31  33  35  37  39  lost+found

 

  • On gfs3 and gfs4, check where the data was stored

[root@gfs3 ~]# gluster volume info gs2

Volume Name: gs2

Type: Distributed-Replicate

Volume ID: 7c84cc12-bea5-45e5-805a-222c70192962

Status: Started

Number of Bricks: 2 x 2 = 4

Transport-type: tcp

Bricks:

Brick1: gfs3:/gluster/brick1【the bricks that make up gs2 all live on gfs3 and gfs4】

Brick2: gfs4:/gluster/brick1

Brick3: gfs3:/gluster/brick2

Brick4: gfs4:/gluster/brick2

[root@gfs3 ~]# ls /gluster/brick1

20  22  24  26  28  30  32  34  36  38  40

21  23  25  27  29  31  33  35  37  39  lost+found

[root@gfs3 ~]# ls /gluster/brick2

lost+found

[root@gfs4 ~]# ls /gluster/brick1

20  21  22  23  24  25  26  27  28  29  30  lost+found

[root@gfs4 ~]# ls /gluster/brick1

20  22  24  26  28  30  32  34  36  38  40

21  23  25  27  29  31  33  35  37  39  lost+found

[root@gfs4 ~]# ls /gluster/brick2

lost+found

 

  • Rebalance the data across gs2's bricks

[root@gfs1 ~]# gluster volume rebalance gs2 start

volume rebalance: gs2: success: Starting rebalance on volume gs2 has been successful.

ID: 94ca344e-f513-47d4-9d87-deb766130a06
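
The rebalance runs in the background; its progress can be checked before looking at the bricks:

gluster volume rebalance gs2 status        # per-node counts of scanned and rebalanced files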

  • Check the rebalance result on gs2's bricks

[root@gfs3 ~]# ls /gluster/brick1

20  22  24  26  28  30  32  34  36  38  40

21  23  25  27  29  31  33  35  37  39  lost+found

[root@gfs3 ~]# ls /gluster/brick2

20  22  24  26  28  30  32  34  36  38  40

21  23  25  27  29  31  33  35  37  39  lost+found

[root@gfs4 ~]# ls /gluster/brick1

20  22  24  26  28  30  32  34  36  38  40

21  23  25  27  29  31  33  35  37  39  lost+found

[root@gfs4 ~]# ls /gluster/brick2

20  22  24  26  28  30  32  34  36  38  40

21  23  25  27  29  31  33  35  37  39  lost+found

 

  • Shrinking and deleting volumes

[root@gfs1 ~]# gluster volume stop gs2

Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y

volume stop: gs2: success

[root@gfs1 ~]# gluster volume info gs2

[root@gfs1 ~]# gluster volume remove-brick gs2 replica 2 gfs3:/gluster/brick2 gfs4:/gluster/brick2 force【remove bricks; since this is a replica-2 volume, bricks must be removed in multiples of 2】

Removing brick(s) can result in data loss. Do you want to Continue? (y/n) y

volume remove-brick commit force: success

[root@gfs1 ~]# gluster volume info gs2 【the two bricks have been removed from gs2】

Volume Name: gs2

Type: Replicate

Volume ID: 7c84cc12-bea5-45e5-805a-222c70192962

Status: Stopped

Number of Bricks: 1 x 2 = 2

Transport-type: tcp

Bricks:

Brick1: gfs3:/gluster/brick1

Brick2: gfs4:/gluster/brick1
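
The force keyword above drops the bricks without migrating their data, which is why the CLI warns about data loss. When the bricks still hold data worth keeping, a staged removal that migrates files off first is generally preferred; a sketch of that flow with the same brick list (verify the exact syntax on your release):

gluster volume remove-brick gs2 replica 2 gfs3:/gluster/brick2 gfs4:/gluster/brick2 start
gluster volume remove-brick gs2 replica 2 gfs3:/gluster/brick2 gfs4:/gluster/brick2 status    # wait for "completed"
gluster volume remove-brick gs2 replica 2 gfs3:/gluster/brick2 gfs4:/gluster/brick2 commit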

 

  • Start the gs2 volume again

[root@gfs1 ~]# gluster volume start gs2【start volume gs2 again】

volume start: gs2: success

[root@gfs1 ~]# gluster volume stop gs1【stop volume gs1】

Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y

volume stop: gs1: success

[root@gfs1 ~]# gluster volume delete gs1【delete volume gs1】

Deleting volume will erase all information about the volume. Do you want to continue? (y/n) y

volume delete: gs1: success

[root@gfs1 ~]# gluster volume info【list volume info; gs1 is gone】

Volume Name: gs2

Type: Replicate

Volume ID: 7c84cc12-bea5-45e5-805a-222c70192962

Status: Started

Number of Bricks: 1 x 2 = 2

Transport-type: tcp

Bricks:

Brick1: gfs3:/gluster/brick1

Brick2: gfs4:/gluster/brick1

Volume Name: gs3

Type: Stripe

Volume ID: c1e84e60-cea2-4908-8b45-8470d93a7a19

Status: Started

Number of Bricks: 1 x 2 = 2

Transport-type: tcp

Bricks:

Brick1: gfs1:/gluster/brick2

Brick2: gfs2:/gluster/brick2

 

  • Building enterprise-grade distributed storage
  • Open the firewall ports

[root@gfs1 ~]# iptables -I INPUT -p tcp --dport 24007:24011 -j ACCEPT

[root@gfs1 ~]# iptables -I INPUT -p tcp --dport 49152:49162 -j ACCEPT
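
glusterd itself listens on 24007, and every brick gets its own listening port counting up from the base port 49152, so the range opened here should cover at least as many ports as there are bricks on the node. These iptables rules only live in memory; to keep them across a reboot:

service iptables save          # write the current rules to /etc/sysconfig/iptables
chkconfig iptables on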

[root@gfs1 ~]# cat /etc/glusterfs/glusterd.vol

volume management

    type mgmt/glusterd

    option working-directory /var/lib/glusterd

    option transport-type socket,rdma

    option transport.socket.keepalive-time 10

    option transport.socket.keepalive-interval 2

    option transport.socket.read-fail-log off

#   option base-port 49152【default base port】

end-volume

 

  • Tuning method: gluster volume set <volume> <option> <value>

[root@gfs1 ~]# gluster volume info gs2

Volume Name: gs2

Type: Replicate

Volume ID: 7c84cc12-bea5-45e5-805a-222c70192962

Status: Started

Number of Bricks: 1 x 2 = 2

Transport-type: tcp

Bricks:

Brick1: gfs3:/gluster/brick1

Brick2: gfs4:/gluster/brick1

[root@gfs1 ~]# gluster volume set gs2 performance.read-ahead on【enable read-ahead optimization】

volume set: success

[root@gfs1 ~]# gluster volume info gs2

Volume Name: gs2

Type: Replicate

Volume ID: 7c84cc12-bea5-45e5-805a-222c70192962

Status: Started

Number of Bricks: 1 x 2 = 2

Transport-type: tcp

Bricks:

Brick1: gfs3:/gluster/brick1

Brick2: gfs4:/gluster/brick1

Options Reconfigured:

performance.read-ahead: on【the option has been applied】

[root@gfs1 ~]# gluster volume set gs2 performance.cache-size 256MB【set the read cache size】

volume set: success

[root@gfs1 ~]# gluster volume info gs2

Volume Name: gs2

Type: Replicate

Volume ID: 7c84cc12-bea5-45e5-805a-222c70192962

Status: Started

Number of Bricks: 1 x 2 = 2

Transport-type: tcp

Bricks:

Brick1: gfs3:/gluster/brick1

Brick2: gfs4:/gluster/brick1

Options Reconfigured:

performance.cache-size: 256MB

performance.read-ahead: on
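
All tunables can be listed, and a changed option can be put back to its default, with the set/reset sub-commands; a short sketch:

gluster volume set help                             # list every settable option with its default value
gluster volume reset gs2 performance.read-ahead     # revert a single option to its default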

 

  • Monitoring and routine maintenance

The self-heal commands below are only meaningful for replicated volumes.

[root@gfs1 ~]# gluster volume status gs2【check whether each node's NFS server and self-heal daemon are online】

Status of volume: gs2

Gluster process                                 Port    Online  Pid

------------------------------------------------------------------------------

Brick gfs3:/gluster/brick1                      49152   Y       27340

Brick gfs4:/gluster/brick1                      49152   Y       3604

NFS Server on localhost                         2049    Y       3787

Self-heal Daemon on localhost                   N/A     Y       3772

NFS Server on gfs2                              2049    Y       3826

Self-heal Daemon on gfs2                        N/A     Y       3810

NFS Server on gfs3                              2049    Y       27375

Self-heal Daemon on gfs3                        N/A     Y       27353

NFS Server on gfs4                              2049    Y       3637

Self-heal Daemon on gfs4                        N/A     Y       3621

There are no active volume tasks

[root@gfs1 ~]# gluster volume heal gs2 full【trigger a full heal】

Launching Heal operation on volume gs2 has been successful

Use heal info commands to check status

[root@gfs1 ~]# gluster volume heal gs2 info【list files that need healing】

Gathering Heal info on volume gs2 has been successful

Brick gfs3:/gluster/brick1

Number of entries: 0

Brick gfs4:/gluster/brick1

Number of entries: 0

[root@gfs1 ~]# gluster volume heal gs2 info healed【list files that were healed successfully】

Gathering Heal info on volume gs2 has been successful

Brick gfs3:/gluster/brick1

Number of entries: 0

Brick gfs4:/gluster/brick1

Number of entries: 0

[root@gfs1 ~]# gluster volume heal gs2 info heal-failed【list files that failed to heal】

Gathering Heal info on volume gs2 has been successful

Brick gfs3:/gluster/brick1

Number of entries: 0

Brick gfs4:/gluster/brick1

Number of entries: 0

[root@gfs1 ~]# gluster vollume heal gs2 info split-brain【list files in split-brain】

unrecognized word: vollume (position 0)

[root@gfs1 ~]# gluster volume quota gs2 enable【enable the quota feature】

Enabling quota has been successful

[root@gfs1 ~]# gluster volume quota gs2 disable【disable the quota feature】

Disabling quota will delete all the quota configuration. Do you want to continue? (y/n) y

 

Disabling quota has been successful

 

[root@gfs1 ~]# gluster volume quota gs2 enable【enable the quota feature again】

Enabling quota has been successful

 

[root@gfs1 ~]# gluster volume quota gs2 limit-usage /data 10GB【limit the /data directory of gs2 to 10 GB】

limit set on /data
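
The quota path is interpreted relative to the volume root, so /data here means a directory inside gs2, not a path on the server's local filesystem; it needs to exist (created through a client mount) for the limit to apply to real data. A sketch from the web host, reusing the NFS mount from earlier:

mount -t nfs 192.168.16.12:/gs2 /mnt
mkdir -p /mnt/data             # the directory the 10GB limit applies to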

 

[root@gfs1 ~]# gluster volume quota gs2 list 【list quota information】

path   limit_set      size

----------------------------------------------------------------------------------

/data                      10GB

[root@gfs1 ~]# gluster volume quota gs2 list  /data【quota information for the limited directory】

path   limit_set      size

----------------------------------------------------------------------------------

/data                      10GB

[root@gfs1 ~]# gluster volume set gs2 features.quota-timeout 5【set the quota information timeout】

volume set: success

 

[root@gfs1 ~]# gluster volume quota gs2 remove /data【remove the quota setting for a directory】

Removed quota limit on /data
