GlusterFS Operation Notes (5): GlusterFS + Heketi Configuration (Standalone Deployment)

I have been using GlusterFS as a distributed shared file system for years; it is simple and reliable, but my notes were always scattered and never written up systematically. Now that I am integrating GlusterFS with Kubernetes as shared storage, this series is an attempt to record things properly.
GlusterFS Operation Notes (1): GlusterFS Overview
GlusterFS Operation Notes (2): GlusterFS Volume Types
GlusterFS Operation Notes (3): GlusterFS Architecture
GlusterFS Operation Notes (4): GlusterFS Quick Installation and Deployment
GlusterFS Operation Notes (5): GlusterFS + Heketi Configuration (Standalone Deployment)
GlusterFS Operation Notes (6): GlusterFS + Heketi Integration with Kubernetes

I. GlusterFS Cluster Configuration

1. Prepare three nodes and initialize them

Host               IP Address    Role                    Notes
gluster-server-1   10.99.7.11    gluster-server, heketi
gluster-server-2   10.99.7.12    gluster-server
gluster-server-3   10.99.7.13    gluster-server
gluster-client-1   10.99.7.10    gluster-client          mounts the shared volume

Initialization (hostname, network, DNS and so on) is omitted here.

Configure /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.99.7.11 gluster-server-1
10.99.7.12 gluster-server-2
10.99.7.13 gluster-server-3
10.99.7.10 gluster-client-1

2. Prepare a disk or partition

Attach one or more disks to each virtual machine; /dev/vdb (50 GB) is used as the example here. Leave it unformatted: it is reserved for Heketi, which manages raw devices itself (the test volume in step 6 uses an ordinary directory instead).

# Create a 50 GB qcow2 image for the VM's data disk
qemu-img create -f qcow2 gluster_server_1.disk 50G

# Attach it to the VM: edit the domain definition and add the <disk> entry
virsh edit gluster_server_1

<disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/data/gluster_server_1.disk'/>
      <target dev='vdb' bus='virtio'/>
</disk>

# fdisk -l
Disk /dev/vda: ... # system disk

Disk /dev/vdb: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

3. Install GlusterFS

GlusterFS ships quite a few components depending on how it is used; this is a basic installation, so some of them are installed but left unconfigured here.

# Run on every server node
yum install centos-release-gluster -y
# We will use version 5.2
yum list glusterfs --showduplicates|sort -r
    glusterfs.x86_64                  5.2-1.el7                      centos-gluster5
    glusterfs.x86_64                  5.1-1.el7                      centos-gluster5
    glusterfs.x86_64                  5.0-1.el7                      centos-gluster5
    glusterfs.x86_64                  3.12.2-18.el7                  base     

yum -y install glusterfs-server glusterfs-fuse

systemctl enable glusterd.service
systemctl start glusterd.service
systemctl status glusterd.service

4. Configure the firewall

This is a simple test installation, so the firewall is not configured in any depth. You can either allow the required traffic between the nodes (a port-opening sketch follows the commands below) or simply disable the firewall.

# Disable the firewall entirely
systemctl stop firewalld.service
systemctl disable firewalld.service
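
If you would rather keep firewalld running, the standard GlusterFS ports can be opened instead. A minimal sketch, assuming the default ports (24007/24008 for glusterd management and one brick port per brick starting at 49152; the range below covers up to 100 bricks):

# Run on every server node
firewall-cmd --permanent --add-port=24007-24008/tcp
firewall-cmd --permanent --add-port=49152-49251/tcp
firewall-cmd --reload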

5. Configure the trusted storage pool (add the nodes to the cluster)

# On gluster-server-1, probe servers 2 and 3
gluster peer probe gluster-server-2
gluster peer probe gluster-server-3

# On gluster-server-2 or 3 (no longer necessary on recent versions; older releases needed it because of hostname resolution issues)
gluster peer probe gluster-server-1

# On any node
[root@gluster-server-1 ~]# gluster peer status
Number of Peers: 2

Hostname: gluster-server-2
Uuid: 6f32f6d4-9cd7-4b40-b7b6-100054b187f7
State: Peer in Cluster (Connected)

Hostname: gluster-server-3
Uuid: 5669dbef-3f71-4e3c-8fb1-9c6e11c0c434
State: Peer in Cluster (Connected)
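
Besides gluster peer status, gluster pool list shows the same membership in a compact table and also includes the local node:

# Compact view of the trusted pool (run on any node)
gluster pool list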

6. Create a storage volume (test)

Note: the test volume uses an ordinary directory on the CentOS 7 root filesystem instead of the prepared disk (/dev/vdb is reserved for the Heketi-managed setup).

[Create the directory on every node]
mkdir -p /data/brick1

gluster volume create volume0 replica 3 gluster-server-1:/data/brick1/volume0 gluster-server-2:/data/brick1/volume0 gluster-server-3:/data/brick1/volume0 force
# Note: force is needed because the bricks live in a directory on the root partition rather than on a dedicated mounted disk; without it volume create refuses to run

gluster volume start volume0 
   #volume start: volume0: success

gluster volume info
	Volume Name: volume0
	Type: Replicate
	Volume ID: 8936cef1-125b-4689-963c-b558725925bc
	Status: Started
	Snapshot Count: 0
	Number of Bricks: 1 x 3 = 3
	Transport-type: tcp
	Bricks:
	Brick1: gluster-server-1:/data/brick1/volume0
	Brick2: gluster-server-2:/data/brick1/volume0
	Brick3: gluster-server-3:/data/brick1/volume0
	Options Reconfigured:
	transport.address-family: inet
	nfs.disable: on
	performance.client-io-threads: off

7. Configure the client and test the volume

yum install centos-release-gluster -y
yum -y install glusterfs glusterfs-fuse

mkdir /volume0
mount -t glusterfs gluster-server-1:/volume0  /volume0
[root@gluster-client-1 glusterfs]# df -h
  gluster-server-1:/volume0  100G  2.1G   98G   3% /volume0
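
To make the mount persistent across client reboots, an /etc/fstab entry along these lines can be used (a sketch; backup-volfile-servers lets the client fetch the volume file from the other nodes if gluster-server-1 is unreachable at mount time):

# /etc/fstab on the client (one line)
gluster-server-1:/volume0  /volume0  glusterfs  defaults,_netdev,backup-volfile-servers=gluster-server-2:gluster-server-3  0 0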

[test]
for i in `seq -w 1 100`; do cp -rp /var/log/messages /volume0/copy-test-$i; done
ls -lA /volume0/copy* | wc -l
100

[On any server node]
# Note: since this is a 3-replica volume with only 3 bricks, every node holds a full copy, so all 100 files are visible on each of them
ls -lA /data/brick1/volume0/copy*

At this point the GlusterFS server cluster is installed and configured and the client has been tested; with default settings the installation is quite straightforward.

8. Delete the test volume

# Unmount on the client
umount /volume0

# On the server side
gluster volume stop volume0
gluster volume delete volume0
rm -rf /data/brick1/volume0   # run on every server node

II. Heketi Configuration

Heketi is a third-party service that provides a RESTful management API and a command-line tool for managing the lifecycle of GlusterFS volumes.

  1. Heketi dynamically picks bricks across the cluster when building a requested volume, making sure replicas end up in different failure domains.
  2. Heketi also supports any number of GlusterFS clusters, so the consuming servers are not tied to a single GlusterFS cluster.

Heketi and heketi-cli can be downloaded from GitHub and installed directly; CentOS 7 also provides RPM packages. This article installs them with yum.

1. Install Heketi and heketi-cli

yum install heketi heketi-client -y

rpm -aq |grep heketi
heketi-client-8.0.0-1.el7.x86_64
heketi-8.0.0-1.el7.x86_64

Load the dm_thin_pool kernel module on every GlusterFS server node:

modprobe dm_thin_pool

2. Configure Heketi

vi /etc/heketi/heketi.json 
# Only the settings that need attention are shown; keep the remaining defaults from the original file
{
    # Service port; change it if 8080 conflicts with another service
    "port": "8080",
    
    # Enable authentication
    "use_auth": true,
    
    # Key for the admin user (nested under the "jwt" section in the actual file)
    "admin": {
         "key": "admin_secret"
    },
    
    # There are three executors: mock, ssh and kubernetes; ssh is used here
    # (executor, sshexec, db and loglevel all live under the "glusterfs" section in the actual file)
    "executor": "ssh",
    # SSH-related settings
    "sshexec": {
      "keyfile": "/var/lib/heketi/id_rsa",
      "user": "root",
      "port": "22",
      "fstab": "/etc/fstab"
    },

    # Location of the Heketi database file (default path)
    "db": "/var/lib/heketi/heketi.db",
    # Log level (none, critical, error, warning, info, debug)
    "loglevel" : "warning"
}

3. The ssh executor needs SSH key access from the Heketi host to all gluster nodes.

ssh-keygen -t rsa -q -f /var/lib/heketi/id_rsa -N ''
chown heketi:heketi /var/lib/heketi/id_rsa*
ssh-copy-id -i /var/lib/heketi/id_rsa root@gluster-server-1
ssh-copy-id -i /var/lib/heketi/id_rsa root@gluster-server-2
ssh-copy-id -i /var/lib/heketi/id_rsa root@gluster-server-3
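
Before starting the service it is worth confirming that the key works non-interactively against every node, for example:

# Should print the GlusterFS version without prompting for a password
ssh -i /var/lib/heketi/id_rsa root@gluster-server-2 'gluster --version'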

4. Start the Heketi service

systemctl start heketi.service
systemctl enable heketi.service

systemctl status heketi.service
	● heketi.service - Heketi Server
	   Loaded: loaded (/usr/lib/systemd/system/heketi.service; disabled; vendor preset: disabled)
	   Active: active (running) since Mon 2019-01-21 15:40:58 CST; 2s ago
	 Main PID: 2967 (heketi)
	   CGroup: /system.slice/heketi.service
	           └─2967 /usr/bin/heketi --config=/etc/heketi/heketi.json
	
	Jan 21 15:40:58 gluster-server-1 systemd[1]: Started Heketi Server.
	Jan 21 15:40:58 gluster-server-1 systemd[1]: Starting Heketi Server...
	Jan 21 15:40:58 gluster-server-1 heketi[2967]: Heketi 8.0.0
	Jan 21 15:40:58 gluster-server-1 heketi[2967]: Authorization loaded
	Jan 21 15:40:58 gluster-server-1 heketi[2967]: Listening on port 8080
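
A quick way to confirm the API is reachable is Heketi's /hello endpoint, which should return a short greeting:

curl http://10.99.7.11:8080/hello
# Expected reply: Hello from Heketi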

5. Add the GlusterFS cluster via a topology file

The GlusterFS cluster information can be added to Heketi manually on the command line or via a topology file; in production the file-based approach is easier to maintain.

# Create this file if it does not exist
cat /etc/heketi/topology-cluster-1.json
{
    "clusters": [
        {
            "nodes": [
                {
                    "node": {
                        "hostnames": {
                            "manage": [
                                "10.99.7.11"
                            ],
                            "storage": [
                                "10.99.7.11"
                            ]
                        },
                        "zone": 1
                    },
                    "devices": [
                        "/dev/vdb"
                    ]
                },
                {
                    "node": {
                        "hostnames": {
                            "manage": [
                                "10.99.7.12"
                            ],
                            "storage": [
                                "10.99.7.12"
                            ]
                        },
                        "zone": 1
                    },
                    "devices": [
                        "/dev/vdb"
                    ]
                },
                {
                    "node": {
                        "hostnames": {
                            "manage": [
                                "10.99.7.13"
                            ],
                            "storage": [
                                "10.99.7.13"
                            ]
                        },
                        "zone": 1
                    },
                    "devices": [
                        "/dev/vdb"
                    ]
                }              
            ]
        }
    ]
}

heketi-cli --user admin --secret admin_secret --server http://10.99.7.11:8080 topology load --json /etc/heketi/topology-cluster-1.json
Creating cluster ... ID: 77b24830f331f2e12ca064d7daab3e43
        Allowing file volumes on cluster.
        Allowing block volumes on cluster.
        Creating node 10.99.7.11 ... ID: 52a177be9da6b0e00bd3863cb2565e6a
                Adding device /dev/vdb ... OK
        Creating node 10.99.7.12 ... ID: 508df329b279b6f73027f3aa9d81e005
                Adding device /dev/vdb ... OK
        Creating node 10.99.7.13 ... ID: 80ed10d504d363c0ba099855f9431f69
                Adding device /dev/vdb ... OK

# Since authentication is enabled, every heketi-cli call needs the credentials; with only a single cluster/server, an alias saves typing
alias heketi-cli='heketi-cli --server "http://10.99.7.11:8080" --user admin --secret admin_secret'
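
With the alias in place, the loaded topology can be verified directly:

# Show clusters, nodes and devices as Heketi sees them
heketi-cli topology info
heketi-cli cluster list
heketi-cli node list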

6. Manage the GlusterFS volume lifecycle with heketi-cli

# Create a test volume (10 GB, default 3 replicas)
heketi-cli volume create --size=10 
	Name: vol_a0eef26f5873eb842dd65db819b2fa65
	Size: 10
	Volume Id: a0eef26f5873eb842dd65db819b2fa65
	Cluster Id: 77b24830f331f2e12ca064d7daab3e43
	Mount: 10.99.7.12:vol_a0eef26f5873eb842dd65db819b2fa65
	Mount Options: backup-volfile-servers=10.99.7.11,10.99.7.13
	Block: false
	Free Size: 0
	Reserved Size: 0
	Block Hosting Restriction: (none)
	Block Volumes: []
	Durability Type: replicate
	Distributed+Replica: 3

heketi-cli volume list
	Id:a0eef26f5873eb842dd65db819b2fa65    Cluster:77b24830f331f2e12ca064d7daab3e43    Name:vol_a0eef26f5873eb842dd65db819b2fa65

# Cross-check with the gluster CLI
# gluster volume list
vol_a0eef26f5873eb842dd65db819b2fa65

# gluster volume info vol_a0eef26f5873eb842dd65db819b2fa65 
 
Volume Name: vol_a0eef26f5873eb842dd65db819b2fa65
Type: Replicate
Volume ID: 9ce4e4d6-48d7-4685-87ff-6de80eaf4166
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.99.7.11:/var/lib/heketi/mounts/vg_6233c772878f135485b9ef759e8faa7d/brick_d365d8eb8bd0979a829f667b45d17df4/brick
Brick2: 10.99.7.12:/var/lib/heketi/mounts/vg_1a3767125942101c0304a52ccef213b7/brick_0dbefac4edcdb43ff719f1e254ecc996/brick
Brick3: 10.99.7.13:/var/lib/heketi/mounts/vg_e2c7ef9056c649121eef4c03eaed4110/brick_77d18cb0021f4180930583c887ee2933/brick
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

# gluster volume status vol_a0eef26f5873eb842dd65db819b2fa65
Status of volume: vol_a0eef26f5873eb842dd65db819b2fa65
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick 10.99.7.11:/var/lib/heketi/mounts/vg_
6233c772878f135485b9ef759e8faa7d/brick_d365
d8eb8bd0979a829f667b45d17df4/brick          49152     0          Y       4160 
Brick 10.99.7.12:/var/lib/heketi/mounts/vg_
1a3767125942101c0304a52ccef213b7/brick_0dbe
fac4edcdb43ff719f1e254ecc996/brick          49153     0          Y       3491 
Brick 10.99.7.13:/var/lib/heketi/mounts/vg_
e2c7ef9056c649121eef4c03eaed4110/brick_77d1
8cb0021f4180930583c887ee2933/brick          49152     0          Y       3374 
Self-heal Daemon on localhost               N/A       N/A        Y       4183 
Self-heal Daemon on gluster-server-2        N/A       N/A        Y       3514 
Self-heal Daemon on gluster-server-3        N/A       N/A        Y       3397 
 
Task Status of Volume vol_a0eef26f5873eb842dd65db819b2fa65
------------------------------------------------------------------------------
There are no active volume tasks

# cat /etc/fstab
...
/dev/mapper/vg_6233c772878f135485b9ef759e8faa7d-brick_d365d8eb8bd0979a829f667b45d17df4 /var/lib/heketi/mounts/vg_6233c772878f135485b9ef759e8faa7d/brick_d365d8eb8bd0979a829f667b45d17df4 xfs rw,inode64,noatime,nouuid 1 2

# pvdisplay,vgdisplay,lvdisplay
# pvdisplay 
  --- Physical volume ---
  PV Name               /dev/vdb
  VG Name               vg_6233c772878f135485b9ef759e8faa7d
  PV Size               50.00 GiB / not usable 132.00 MiB
  Allocatable           yes 
  PE Size               4.00 MiB
  Total PE              12767
  Free PE               10181
  Allocated PE          2586
  PV UUID               amSE30-3PyS-jYQJ-LzYd-MziU-oPzw-qesxjU

# vgdisplay 
  --- Volume group ---
  VG Name               vg_6233c772878f135485b9ef759e8faa7d
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  21
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               49.87 GiB
  PE Size               4.00 MiB
  Total PE              12767
  Alloc PE / Size       2586 / 10.10 GiB
  Free  PE / Size       10181 / <39.77 GiB
  VG UUID               W9IgyL-ld7N-fTD6-RL6D-YNwj-y6dd-9SrROI

# lvdisplay
  --- Logical volume ---
  LV Name                tp_040396bb756b39a9df9642ffa5848114
  VG Name                vg_6233c772878f135485b9ef759e8faa7d
  LV UUID                GhsMPO-hZkz-Sgtu-a8x5-BShv-f431-tHpAtN
  LV Write Access        read/write
  LV Creation host, time gluster-server-1, 2019-01-21 16:07:29 +0800
  LV Pool metadata       tp_040396bb756b39a9df9642ffa5848114_tmeta
  LV Pool data           tp_040396bb756b39a9df9642ffa5848114_tdata
  LV Status              available
  # open                 2
  LV Size                10.00 GiB
  Allocated pool data    0.18%
  Allocated metadata     0.09%
  Current LE             2560
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           252:2
   
  --- Logical volume ---
  LV Path                /dev/vg_6233c772878f135485b9ef759e8faa7d/brick_d365d8eb8bd0979a829f667b45d17df4
  LV Name                brick_d365d8eb8bd0979a829f667b45d17df4
  VG Name                vg_6233c772878f135485b9ef759e8faa7d
  LV UUID                qXbd1K-rQB1-aLuU-LOj9-Jobq-No6D-p1FNYE
  LV Write Access        read/write
  LV Creation host, time gluster-server-1, 2019-01-21 16:07:29 +0800
  LV Pool name           tp_040396bb756b39a9df9642ffa5848114
  LV Status              available
  # open                 1
  LV Size                10.00 GiB
  Mapped size            0.18%
  Current LE             2560
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           252:4

Mount and use on the client


mount -t glusterfs gluster-server-1:/vol_a0eef26f5873eb842dd65db819b2fa65 /volume0/
cd /volume0/
for i in `seq -w 1 100`; do cp -rp /var/log/messages /volume0/copy-test-$i; done
ls -lA /volume0/copy* | wc -l
100
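
Beyond creating volumes, heketi-cli can also expand and delete them, which covers the rest of the lifecycle mentioned above. A sketch using the volume ID from the create output:

# Grow the volume by 5 GB (Heketi allocates additional bricks as needed)
heketi-cli volume expand --volume=a0eef26f5873eb842dd65db819b2fa65 --expand-size=5

# Delete the test volume (unmount it on the client first)
heketi-cli volume delete a0eef26f5873eb842dd65db819b2fa65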

7. Kubernetes provisioning GlusterFS volumes automatically through Heketi

See: Kubernetes storage configuration with GlusterFS + Heketi (automatic mode)
