Heketi+Glusterfs+K8S-Storageclass


Using heketi-managed GlusterFS as a StorageClass dynamic storage backend in K8S

This deployment is not integrated into K8S: heketi and GlusterFS are deployed externally and connected to the K8S cluster through a StorageClass.

Author: Subversion
K8S version: V1.16.2
heketi version: 9.0.0
heketi-cli version: 9.0.0
glusterfs version: 9.4

You need a K8S cluster plus at least three available servers, each with an unused raw block device attached, as shown below:

[root@kubemaster ~]# lsblk
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   20G  0 disk 
├─sda1            8:1    0    1G  0 part /boot
└─sda2            8:2    0   19G  0 part 
  ├─centos-root 253:0    0   17G  0 lvm  /
  └─centos-swap 253:1    0    2G  0 lvm  
sdb               8:16   0  100G  0 disk 
sr0              11:0    1 1024M  0 rom  
[root@kubemaster ~]# kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
kubemaster   Ready    master   88d   v1.16.2
kubenode1    Ready    <none>   88d   v1.16.2
kubenode2    Ready    <none>   88d   v1.16.2

Reverting a partitioned disk to a raw device. Use this step with extreme caution!!!

dd if=/dev/zero of=/dev/sdb bs=1024 count=10240  # zero the first 10 MiB, wiping the partition table and filesystem signatures
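If you only need to clear the partition table and filesystem signatures rather than zeroing data, wipefs is a more targeted alternative (a hedged sketch; the same caution applies):

wipefs -a /dev/sdb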

1. Node information

Hostname     Ipaddr           Memory  Cpu
heketi       192.168.150.143  4G      4C
kubemaster   192.168.150.133  8G      4C
kubenode1    192.168.150.136  8G      4C
kubenode2    192.168.150.137  8G      4C

When kubemaster was first used as the heketi management node, initializing the gluster cluster failed,
so an extra server was added to act as the heketi management node.
Error message: New Node doesn't have glusterd running

1.1 Map hosts

Run on the heketi node

[root@heketi ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.150.143 heketi
192.168.150.133 kubemaster
192.168.150.136 kubenode1
192.168.150.137 kubenode2

1.2 Passwordless root login

Run on the heketi node

[root@heketi ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:3T0ChQyggV9AkZCq7+H3AgHLROFBcWhHLBGF+K1p4bQ root@heketi
The key's randomart image is:
+---[RSA 2048]----+
|+O%O=+...o ..    |
|+*=o.o.   o.     |
|+*oo..    .      |
|o.= o    . o .   |
|.o *    S . o o  |
|. E          . . |
| o..             |
| ...o            |
| .o. o.          |
+----[SHA256]-----+
[root@heketi ~]# ssh-copy-id kubenode1
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'kubenode1 (192.168.150.136)' can't be established.
ECDSA key fingerprint is SHA256:u6UqgPMi9h2ewnS2IZQiVcyfOjvf49GyECXmzbo6DqY.
ECDSA key fingerprint is MD5:6a:0c:55:bb:3b:7a:e6:5b:b9:d0:ba:d6:73:3f:54:24.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@kubenode1's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'kubenode1'"
and check to make sure that only the key(s) you wanted were added.

[root@heketi ~]# ssh-copy-id kubenode2
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'kubenode2 (192.168.150.137)' can't be established.
ECDSA key fingerprint is SHA256:u6UqgPMi9h2ewnS2IZQiVcyfOjvf49GyECXmzbo6DqY.
ECDSA key fingerprint is MD5:6a:0c:55:bb:3b:7a:e6:5b:b9:d0:ba:d6:73:3f:54:24.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@kubenode2's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'kubenode2'"
and check to make sure that only the key(s) you wanted were added.


[root@heketi ~]# ssh-copy-id kubemaster
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'kubemaster (192.168.150.133)' can't be established.
ECDSA key fingerprint is SHA256:u6UqgPMi9h2ewnS2IZQiVcyfOjvf49GyECXmzbo6DqY.
ECDSA key fingerprint is MD5:6a:0c:55:bb:3b:7a:e6:5b:b9:d0:ba:d6:73:3f:54:24.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@kubemaster's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'kubemaster'"
and check to make sure that only the key(s) you wanted were added.

[root@heketi ~]# ssh-copy-id heketi
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'heketi (192.168.150.143)' can't be established.
ECDSA key fingerprint is SHA256:u6UqgPMi9h2ewnS2IZQiVcyfOjvf49GyECXmzbo6DqY.
ECDSA key fingerprint is MD5:6a:0c:55:bb:3b:7a:e6:5b:b9:d0:ba:d6:73:3f:54:24.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@heketi's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'heketi'"
and check to make sure that only the key(s) you wanted were added.
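The four ssh-copy-id runs above can also be collapsed into a single loop (a minimal sketch using the host names mapped in 1.1; each pass still prompts for the root password once):

for host in kubenode1 kubenode2 kubemaster heketi; do
  ssh-copy-id "root@${host}"
done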

2. Install GlusterFS

Install on all four nodes:

Hostname     Ipaddr           Cpu  Memory
heketi       192.168.150.143  4C   4G
kubemaster   192.168.150.133  4C   8G
kubenode1    192.168.150.136  4C   8G
kubenode2    192.168.150.137  4C   8G

2.1 Install with yum

yum -y install centos-release-gluster
yum -y install glusterfs glusterfs-server glusterfs-fuse glusterfs-rdma glusterfs-geo-replication glusterfs-devel
[root@heketi ~]# gluster --version
glusterfs 9.4
Repository revision: git://git.gluster.org/glusterfs.git
Copyright (c) 2006-2016 Red Hat, Inc. <https://www.gluster.org/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
or later), or the GNU General Public License, version 2 (GPLv2),
in all cases as published by the Free Software Foundation.

2.2 Start glusterfsd

[root@heketi ~]# systemctl restart glusterfsd
[root@heketi ~]# systemctl enable glusterfsd
Created symlink from /etc/systemd/system/multi-user.target.wants/glusterfsd.service to /usr/lib/systemd/system/glusterfsd.service.
[root@heketi ~]# systemctl status glusterfsd
● glusterfsd.service - GlusterFS brick processes (stopping only)
   Loaded: loaded (/usr/lib/systemd/system/glusterfsd.service; disabled; vendor preset: disabled)
   Active: active (exited) since Mon 2022-01-10 03:53:26 CST; 17s ago
  Process: 3023 ExecStart=/bin/true (code=exited, status=0/SUCCESS)
 Main PID: 3023 (code=exited, status=0/SUCCESS)

Jan 10 03:53:26 heketi systemd[1]: Starting GlusterFS brick processes (stopping only)...
Jan 10 03:53:26 heketi systemd[1]: Started GlusterFS brick processes (stopping only).

2.3 Start glusterd

[root@heketi ~]# systemctl start glusterd
[root@heketi ~]# systemctl enable glusterd
Created symlink from /etc/systemd/system/multi-user.target.wants/glusterd.service to /usr/lib/systemd/system/glusterd.service.
[root@heketi ~]# systemctl status glusterd
● glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2022-01-10 02:38:03 CST; 16s ago
     Docs: man:glusterd(8)
 Main PID: 2109 (glusterd)
   CGroup: /system.slice/glusterd.service
           └─2109 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO

Jan 10 02:38:02 heketi systemd[1]: Starting GlusterFS, a clustered file-system server...
Jan 10 02:38:03 heketi systemd[1]: Started GlusterFS, a clustered file-system server.
Jan 10 02:38:11 heketi systemd[1]: [/usr/lib/systemd/system/glusterd.service:4] Unknown lvalue 'StartLimitBurst' in section 'Unit'
Jan 10 02:38:11 heketi systemd[1]: [/usr/lib/systemd/system/glusterd.service:5] Unknown lvalue 'StartLimitIntervalSec' in section 'Unit'
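With the passwordless login from 1.2 in place, the install and service startup on the other three nodes can be driven from the heketi node in one loop (a sketch, assuming each node can reach the yum repositories):

for host in kubemaster kubenode1 kubenode2; do
  ssh "$host" "yum -y install centos-release-gluster && \
    yum -y install glusterfs glusterfs-server glusterfs-fuse && \
    systemctl enable --now glusterd glusterfsd"
done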

3. Install Heketi, the GlusterFS management service

Install on all four nodes:

Hostname     Ipaddr           Cpu  Memory
heketi       192.168.150.143  4C   4G
kubemaster   192.168.150.133  4C   8G
kubenode1    192.168.150.136  4C   8G
kubenode2    192.168.150.137  4C   8G

3.1 Install heketi with yum

This installs the heketi command-line tools:

[root@heketi ~]# yum install -y heketi heketi-client
[root@heketi ~]# heketi -v
Heketi 9.0.0
[root@heketi ~]# heketi-cli -v
heketi-cli 9.0.0

3.2 Configure a heketi key for passwordless root login

Run on the heketi node

[root@heketi ~]# ssh-keygen -f /etc/heketi/heketi_key -t rsa -N ''
Generating public/private rsa key pair.
Your identification has been saved in /etc/heketi/heketi_key.
Your public key has been saved in /etc/heketi/heketi_key.pub.
The key fingerprint is:
SHA256:rTUHdBSM3C4yrr0DDsrJO3QVcoNzFKOgwI3r3Q8D/Yo root@heketi
The key's randomart image is:
+---[RSA 2048]----+
|o +  o+. ..=+.   |
|.+ o+.=. .o.o    |
|. . o= o  ..     |
| . . .. o....    |
|. . o...So+..    |
| ...o+...o o     |
| + +.o=+.        |
|  *E .o.o        |
|  .o    .o       |
+----[SHA256]-----+

[root@heketi ~]# touch /etc/heketi/gluster.json
[root@heketi ~]# chown -R heketi:heketi /etc/heketi/
[root@heketi ~]#  ll /etc/heketi/
total 12
-rw-r--r-- 1 heketi heketi    0 Jan 10 02:42 gluster.json
-rw-r--r-- 1 heketi heketi 1927 Apr 18  2019 heketi.json
-rw------- 1 heketi heketi 1675 Jan 10 02:42 heketi_key
-rw-r--r-- 1 heketi heketi  393 Jan 10 02:42 heketi_key.pub

3.3 Distribute the key

[root@heketi etc]# ssh-copy-id -i /etc/heketi/heketi_key.pub 192.168.150.143
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/etc/heketi/heketi_key.pub"
The authenticity of host '192.168.150.143 (192.168.150.143)' can't be established.
ECDSA key fingerprint is SHA256:u6UqgPMi9h2ewnS2IZQiVcyfOjvf49GyECXmzbo6DqY.
ECDSA key fingerprint is MD5:6a:0c:55:bb:3b:7a:e6:5b:b9:d0:ba:d6:73:3f:54:24.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.150.143's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh '192.168.150.143'"
and check to make sure that only the key(s) you wanted were added.

[root@heketi ~]# ssh-copy-id -i /etc/heketi/heketi_key.pub 192.168.150.137
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/etc/heketi/heketi_key.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.150.137's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh '192.168.150.137'"
and check to make sure that only the key(s) you wanted were added.

[root@heketi ~]# ssh-copy-id -i /etc/heketi/heketi_key.pub 192.168.150.136
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/etc/heketi/heketi_key.pub"
The authenticity of host '192.168.150.136 (192.168.150.136)' can't be established.
ECDSA key fingerprint is SHA256:u6UqgPMi9h2ewnS2IZQiVcyfOjvf49GyECXmzbo6DqY.
ECDSA key fingerprint is MD5:6a:0c:55:bb:3b:7a:e6:5b:b9:d0:ba:d6:73:3f:54:24.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.150.136's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh '192.168.150.136'"
and check to make sure that only the key(s) you wanted were added.
[root@heketi ~]# ssh-copy-id -i /etc/heketi/heketi_key.pub 192.168.150.133
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/etc/heketi/heketi_key.pub"
The authenticity of host '192.168.150.133 (192.168.150.133)' can't be established.
ECDSA key fingerprint is SHA256:u6UqgPMi9h2ewnS2IZQiVcyfOjvf49GyECXmzbo6DqY.
ECDSA key fingerprint is MD5:6a:0c:55:bb:3b:7a:e6:5b:b9:d0:ba:d6:73:3f:54:24.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.150.133's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh '192.168.150.133'"
and check to make sure that only the key(s) you wanted were added.

3.4 Test logins

Run on the heketi node

[root@heketi ~]# ssh -i /etc/heketi/heketi_key root@192.168.150.143
Last login: Mon Jan 10 03:11:33 2022 from 192.168.150.1

[root@heketi ~]# ssh -i /etc/heketi/heketi_key root@192.168.150.136
Last login: Mon Jan 10 02:21:31 2022 from 192.168.150.1

[root@heketi ~]# ssh -i /etc/heketi/heketi_key root@192.168.150.137
Last login: Mon Jan 10 02:21:36 2022 from 192.168.150.1

[root@heketi ~]# ssh -i /etc/heketi/heketi_key root@192.168.150.133
Last login: Mon Jan 10 02:21:23 2022 from 192.168.150.1
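These checks can also be batched; -o BatchMode=yes makes ssh fail immediately instead of falling back to a password prompt if key authentication is broken (a sketch over the same four IPs):

for ip in 192.168.150.143 192.168.150.136 192.168.150.137 192.168.150.133; do
  ssh -i /etc/heketi/heketi_key -o BatchMode=yes "root@${ip}" hostname
done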

3.5 Edit the configuration file

Run on the heketi node.
The // comments below are annotations only; they must be removed from the actual file, or heketi will not start!!!

[root@heketi ~]# vi /etc/heketi/heketi.json
{
  "_port_comment": "Heketi Server Port Number",
  // change the port to avoid conflicts
  "port": "18888", 
  // default is false (no authentication required); leave unchanged
  "_use_auth": "Enable JWT authorization. Please enable for deployment",
  "use_auth": false,

  "_jwt": "Private keys for access",
  "jwt": {
    "_admin": "Admin has access to all APIs",
    "admin": {
      "key": "My Secret"
    },
    "_user": "User only has access to /volumes endpoint",
    "user": {
      "key": "My Secret"
    }
  },

  "_glusterfs_comment": "GlusterFS Configuration",
  "glusterfs": {
    "_executor_comment": [
      "Execute plugin. Possible choices: mock, ssh",
      "mock: This setting is used for testing and development.",
      "      It will not send commands to any node.",
      "ssh:  This setting will notify Heketi to ssh to the nodes.",
      "      It will need the values in sshexec to be configured.",
      "kubernetes: Communicate with GlusterFS containers over",
      "            Kubernetes exec api."
    ],
    // mock: for testing only; volumes created this way cannot be mounted
    // kubernetes: use when GlusterFS itself is deployed by kubernetes
    // ssh: what this guide uses
    "executor": "ssh",
    "_sshexec_comment": "SSH username and private key file information",
    "sshexec": {
      "keyfile": "/etc/heketi/heketi_key", // 这里修改密钥
      "user": "root",
      "port": "22", //按自己ssh端口修改
      "fstab": "/etc/fstab" //创建的volume挂载位置
    },

    "_kubeexec_comment": "Kubernetes configuration",
    "kubeexec": {
      "host" :"https://kubernetes.host:8443",
      "cert" : "/path/to/crt.file",
      "insecure": false,
      "user": "kubernetes username",
      "password": "password for kubernetes user",
      "namespace": "OpenShift project or Kubernetes namespace",
      "fstab": "Optional: Specify fstab file on node.  Default is /etc/fstab"
    },

    "_db_comment": "Database file name",
    "db": "/var/lib/heketi/heketi.db", // 存储位置

    "_loglevel_comment": [
      "Set log level. Choices are:",
      "  none, critical, error, warning, info, debug",
      "Default is warning"
    ],
    "loglevel" : "debug"
  }
}
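Since heketi's parser accepts only strict JSON, it is worth validating the final file after stripping the // comments, before starting the service (python ships with CentOS 7):

python -m json.tool /etc/heketi/heketi.json > /dev/null && echo "heketi.json is valid JSON"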

3.6 Copy the config and key files

Run on the heketi node

[root@heketi ~]# scp /etc/heketi/heketi* kubenode1:/etc/heketi/
heketi.json      100% 1848   827.6KB/s   00:00
heketi_key       100% 1675     1.7MB/s   00:00
heketi_key.pub   100%  393   666.0KB/s   00:00
[root@heketi ~]# scp /etc/heketi/heketi* kubenode2:/etc/heketi/
heketi.json      100% 1848     2.2MB/s   00:00
heketi_key       100% 1675     1.0MB/s   00:00
heketi_key.pub   100%  393   552.1KB/s   00:00
[root@heketi ~]# scp /etc/heketi/heketi* kubemaster:/etc/heketi/
heketi.json      100% 1848     1.6MB/s   00:00
heketi_key       100% 1675     2.2MB/s   00:00
heketi_key.pub   100%  393   825.3KB/s   00:00

3.7 Start heketi

Run on the heketi node

[root@heketi ~]# systemctl restart heketi
[root@heketi ~]# systemctl enable heketi
Created symlink from /etc/systemd/system/multi-user.target.wants/heketi.service to /usr/lib/systemd/system/heketi.service.

[root@heketi ~]# systemctl status heketi
● heketi.service - Heketi Server
   Loaded: loaded (/usr/lib/systemd/system/heketi.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2022-01-10 02:50:13 CST; 13s ago
 Main PID: 2428 (heketi)
   CGroup: /system.slice/heketi.service
           └─2428 /usr/bin/heketi --config=/etc/heketi/heketi.json

Jan 10 02:50:13 heketi systemd[1]: Started Heketi Server.
Jan 10 02:50:13 heketi heketi[2428]: Heketi 9.0.0
Jan 10 02:50:13 heketi heketi[2428]: [heketi] INFO 2022/01/10 02:50:13 Loaded ssh executor
Jan 10 02:50:13 heketi heketi[2428]: [heketi] INFO 2022/01/10 02:50:13 Volumes per cluster limit is set to default value of 1000
Jan 10 02:50:13 heketi heketi[2428]: [heketi] INFO 2022/01/10 02:50:13 GlusterFS Application Loaded
Jan 10 02:50:13 heketi heketi[2428]: [heketi] INFO 2022/01/10 02:50:13 Started Node Health Cache Monitor
Jan 10 02:50:13 heketi heketi[2428]: [heketi] INFO 2022/01/10 02:50:13 Started background pending operations cleaner
Jan 10 02:50:13 heketi heketi[2428]: Listening on port 18888
Jan 10 02:50:23 heketi heketi[2428]: [heketi] INFO 2022/01/10 02:50:23 Starting Node Health Status refresh
Jan 10 02:50:23 heketi heketi[2428]: [heketi] INFO 2022/01/10 02:50:23 Cleaned 0 nodes from health cache

3.8 Start heketi on the remaining nodes

Run on the heketi node

[root@heketi ~]# ssh kubenode1 "chown -R heketi.heketi /etc/heketi/"
[root@heketi ~]# ssh kubenode2 "chown -R heketi.heketi /etc/heketi/"
[root@heketi ~]# ssh kubemaster "chown -R heketi.heketi /etc/heketi/"
[root@heketi ~]# ssh kubemaster "systemctl restart heketi"
[root@heketi ~]# ssh kubenode1 "systemctl restart heketi"
[root@heketi ~]# ssh kubenode2 "systemctl restart heketi"

3.9 Test heketi

Run on the heketi node

[root@heketi ~]# curl http://127.0.0.1:18888/hello
Hello from Heketi
[root@heketi ~]# curl http://kubenode1:18888/hello
Hello from Heketi
[root@heketi ~]# curl http://kubenode2:18888/hello
Hello from Heketi
[root@heketi ~]# curl http://kubemaster:18888/hello
Hello from Heketi


3.10 Configure the environment variable

Run on the heketi node

[root@heketi ~]# export HEKETI_CLI_SERVER=http://192.168.150.143:18888
[root@heketi ~]#  cd /etc && cp profile{,.bak}
[root@heketi etc]# echo "export HEKETI_CLI_SERVER=http://192.168.150.143:18888" >> profile
[root@heketi etc]# cd ~ && source /etc/profile
[root@heketi etc]#  cat profile | grep HEKETI_CLI_SERVER
export HEKETI_CLI_SERVER=http://192.168.150.143:18888

3.11 Write the heketi topology file

Run on the heketi node.
Use IP addresses for the "manage" field under "hostnames"; using hostnames leads to all kinds of pitfalls during initialization.

[root@heketi ~]# vi /etc/heketi/gluster.json
{
    "clusters": [
        {
            "nodes": [
                {
                    "node": {
                        "hostnames": {
                            "manage": [
                                "192.168.150.133"
                            ],
                            "storage": [
                                "192.168.150.133"
                            ]
                        },
                        "zone": 1
                    },
                    "devices": [
                        {
                            "name": "/dev/sdb",
                            "destroydata": false
                        }
                    ]
                },
                {
                    "node": {
                        "hostnames": {
                            "manage": [
                                "192.168.150.136"
                            ],
                            "storage": [
                                "192.168.150.136"
                            ]
                        },
                        "zone": 2
                    },
                    "devices": [
                        {
                            "name": "/dev/sdb",
                            "destroydata": false
                        }
                    ]
                },
                {
                    "node": {
                        "hostnames": {
                            "manage": [
                                "192.168.150.137"
                            ],
                            "storage": [
                                "192.168.150.137"
                            ]
                        },
                        "zone": 1
                    },
                    "devices": [
                        {
                            "name": "/dev/sdb",
                            "destroydata": false
                        }
                    ]
                },
                {
                    "node": {
                        "hostnames": {
                            "manage": [
                                "192.168.150.143"
                            ],
                            "storage": [
                                "192.168.150.143"
                            ]
                        },
                        "zone": 2
                    },
                    "devices": [
                        {
                            "name": "/dev/sdb",
                            "destroydata": false
                        }
                    ]
                }
            ]
        }
    ]
}


3.12 Initialize the cluster

Run on the heketi node

[root@heketi ~]# heketi-cli topology load --json=/etc/heketi/gluster.json
Creating cluster ... ID: c83299b735444e523fbcd238d905ef77
	Allowing file volumes on cluster.
	Allowing block volumes on cluster.
	Creating node 192.168.150.133 ... ID: 3928ca5010f69dcfd1e79428b30e497c
		Adding device /dev/sdb ... OK
	Creating node 192.168.150.136 ... ID: ad54095ca8b64aae3ed3c45423486a69
		Adding device /dev/sdb ... OK
	Creating node 192.168.150.137 ... ID: 5bb5f051dd9ceb27688a7cd1882d3dcd
		Adding device /dev/sdb ... OK
	Creating node 192.168.150.143 ... ID: 33792ff9ee3d484d5fae5955dc2ae715
		Adding device /dev/sdb ... OK
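To double-check what heketi registered, the cluster, nodes, and devices can be listed back with standard heketi-cli subcommands (these use the HEKETI_CLI_SERVER variable from 3.10):

heketi-cli cluster list
heketi-cli node list
heketi-cli topology info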

3.13 Create a test volume

Run on the heketi node

[root@heketi ~]#  heketi-cli volume create --size=2
Name: vol_49429bc2e587b22ff0c907af986eb651
Size: 2
Volume Id: 49429bc2e587b22ff0c907af986eb651
Cluster Id: c83299b735444e523fbcd238d905ef77
Mount: 192.168.150.143:vol_49429bc2e587b22ff0c907af986eb651
Mount Options: backup-volfile-servers=192.168.150.133,192.168.150.137,192.168.150.136
Block: false
Free Size: 0
Reserved Size: 0
Block Hosting Restriction: (none)
Block Volumes: []
Durability Type: replicate
Distribute Count: 1
Replica Count: 3 # three replicas by default

3.14 View the volume with gluster

[root@heketi ~]# gluster volume list
vol_49429bc2e587b22ff0c907af986eb651
[root@heketi ~]# gluster volume info
 
Volume Name: vol_49429bc2e587b22ff0c907af986eb651
Type: Replicate
Volume ID: d70d64a0-8254-496c-8393-e50aa1e5eae5
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 192.168.150.133:/var/lib/heketi/mounts/vg_1d6c19781d852833797846ac08d0d376/brick_216823fdd6a8c657e4bda61830b32e42/brick
Brick2: 192.168.150.143:/var/lib/heketi/mounts/vg_59a06bbff9d630400087d42e03d222c8/brick_659ea2f39458e1fec88423e73ce653e8/brick
Brick3: 192.168.150.137:/var/lib/heketi/mounts/vg_b0b80d45a808502a5ff401b2aebb7ec6/brick_51a9181eb6e790730dee25c845d6a72b/brick
Options Reconfigured:
user.heketi.id: 49429bc2e587b22ff0c907af986eb651
cluster.granular-entry-heal: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

3.15 Delete the volume with heketi

[root@heketi ~]#  heketi-cli volume delete 49429bc2e587b22ff0c907af986eb651
Volume 49429bc2e587b22ff0c907af986eb651 deleted

3.16 Create a two-replica volume

[root@heketi ~]#  heketi-cli volume create --replica=2 --size=2
Name: vol_8d2d1d3c204b43bf10dd3ab9af542d10
Size: 2
Volume Id: 8d2d1d3c204b43bf10dd3ab9af542d10
Cluster Id: c83299b735444e523fbcd238d905ef77
Mount: 192.168.150.143:vol_8d2d1d3c204b43bf10dd3ab9af542d10
Mount Options: backup-volfile-servers=192.168.150.133,192.168.150.137,192.168.150.136
Block: false
Free Size: 0
Reserved Size: 0
Block Hosting Restriction: (none)
Block Volumes: []
Durability Type: replicate
Distribute Count: 1
Replica Count: 2 # two replicas
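Outside of K8S, a heketi-created volume can also be mounted directly with the GlusterFS fuse client, using the Mount string from the output above (a sketch; the mount point is arbitrary):

mount -t glusterfs 192.168.150.143:vol_8d2d1d3c204b43bf10dd3ab9af542d10 /mnt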

4. Create the StorageClass

Run on the K8S master node

4.1 Write the yaml file and apply it

[root@kubemaster data]# vi gluster-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: http://192.168.150.143:18888
  restauthenabled: "false"
  volumetype: "replicate:2"
allowVolumeExpansion: true # this parameter enables dynamic expansion
[root@kubemaster data]# kubectl apply -f gluster-storageclass.yaml
storageclass.storage.k8s.io/gluster created
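Optionally, the class can be marked as the cluster default, so PVCs that specify no class at all land on it (this annotation is the standard upstream mechanism):

kubectl patch storageclass gluster -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'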

4.2 View the StorageClass

Run on the K8S master node

[root@kubemaster data]# kubectl get StorageClass
NAME              PROVISIONER               AGE
gluster   kubernetes.io/glusterfs   63s
[root@kubemaster data]# kubectl describe StorageClass
Name:            gluster
IsDefaultClass:  No
Annotations:     kubectl.kubernetes.io/last-applied-configuration={"allowVolumeExpansion":true,"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"gluster"},"parameters":{"restauthenabled":"false","resturl":"http://192.168.150.143:18888","volumetype":"replicate:2"},"provisioner":"kubernetes.io/glusterfs"}

Provisioner:           kubernetes.io/glusterfs
Parameters:            restauthenabled=false,resturl=http://192.168.150.143:18888,volumetype=replicate:2
AllowVolumeExpansion:  True
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     Immediate
Events:                <none>

4.3 Create a test pvc

Run on the K8S master node

[root@kubemaster data]# vi test-pvc.yaml 
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-pvc
  namespace: default
  annotations:
    volume.beta.kubernetes.io/storage-class: "gluster"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
[root@kubemaster data]# kubectl apply -f test-pvc.yaml
persistentvolumeclaim/test-pvc created
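The volume.beta.kubernetes.io/storage-class annotation is the legacy spelling; on V1.16 the same claim can be written with the spec.storageClassName field instead (an equivalent sketch):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-pvc
  namespace: default
spec:
  storageClassName: gluster
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi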

4.4 View the test pvc and pv

[root@kubemaster data]# kubectl get pvc
NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-pvc   Bound    pvc-eff26402-0d54-43ed-9742-87b13157dc95   10Gi       RWX            gluster        3s
[root@kubemaster data]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM              STORAGECLASS   REASON   AGE
pvc-eff26402-0d54-43ed-9742-87b13157dc95   10Gi       RWX            Delete           Bound    default/test-pvc   gluster                 4s
[root@kubemaster data]# kubectl describe pv/pvc-eff26402-0d54-43ed-9742-87b13157dc95
Name:            pvc-eff26402-0d54-43ed-9742-87b13157dc95
Labels:          <none>
Annotations:     Description: Gluster-Internal: Dynamically provisioned PV
                 gluster.kubernetes.io/heketi-volume-id: 00e9aee29f6773febbc21f473a12c615
                 gluster.org/type: file
                 kubernetes.io/createdby: heketi-dynamic-provisioner
                 pv.beta.kubernetes.io/gid: 2000
                 pv.kubernetes.io/bound-by-controller: yes
                 pv.kubernetes.io/provisioned-by: kubernetes.io/glusterfs
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    gluster
Status:          Bound
Claim:           default/test-pvc
Reclaim Policy:  Delete
Access Modes:    RWX
VolumeMode:      Filesystem
Capacity:        10Gi
Node Affinity:   <none>
Message:         
Source:
    Type:                Glusterfs (a Glusterfs mount on the host that shares a pod's lifetime)
    EndpointsName:       glusterfs-dynamic-eff26402-0d54-43ed-9742-87b13157dc95
    EndpointsNamespace:  default
    Path:                vol_00e9aee29f6773febbc21f473a12c615
    ReadOnly:            false
Events:                  <none>

[root@kubemaster data]# kubectl describe pvc/test-pvc
Name:          test-pvc
Namespace:     default
StorageClass:  gluster
Status:        Bound
Volume:        pvc-eff26402-0d54-43ed-9742-87b13157dc95
Labels:        <none>
Annotations:   kubectl.kubernetes.io/last-applied-configuration:
                 {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{"volume.beta.kubernetes.io/storage-class":"gluster"},"name":"...
               pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-class: gluster
               volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/glusterfs
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      10Gi
Access Modes:  RWX
VolumeMode:    Filesystem
Mounted By:    <none>
Events:
  Type    Reason                 Age    From                         Message
  ----    ------                 ----   ----                         -------
  Normal  ProvisioningSucceeded  3m11s  persistentvolume-controller  Successfully provisioned volume pvc-eff26402-0d54-43ed-9742-87b13157dc95 using kubernetes.io/glusterfs

4.5 Test using the pvc

[root@kubemaster data]# vi test-pvc-busybox.yaml 
kind: Pod
apiVersion: v1
metadata:
  name: test-pvc-pod
  namespace: default
spec:
  containers:
  - name: test-pvc-pod
    image: busybox
    imagePullPolicy: IfNotPresent #Always #Never
    command: ["/bin/sh","-c"]
    args: ['while true;do sleep 3600; done']
    volumeMounts:
    - name: test-pvc-data
      mountPath: "/test-pvc"
  volumes:
  - name: test-pvc-data
    persistentVolumeClaim:
      claimName: test-pvc

[root@kubemaster data]# kubectl get pod
NAME           READY   STATUS    RESTARTS   AGE
test-pvc-pod   1/1     Running   0          100s

The volume is mounted successfully inside the pod:

[root@kubemaster data]# kubectl exec -it pod/test-pvc-pod /bin/sh
/ # ls
bin       dev       etc       home      proc      root      sys       test-pvc  tmp       usr       var
/ # df -h test-pvc/
Filesystem                Size      Used Available Use% Mounted on
192.168.150.136:vol_00e9aee29f6773febbc21f473a12c615
                         10.0G    135.6M      9.9G   1% /test-pvc
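A quick write-and-read round trip from the master confirms the mount is actually writable (a sketch; the file name is arbitrary):

kubectl exec test-pvc-pod -- sh -c 'echo hello > /test-pvc/hello.txt && cat /test-pvc/hello.txt'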

4.6 View the heketi-provisioned volume with gluster

Run on the heketi node

[root@heketi ~]# gluster volume info vol_00e9aee29f6773febbc21f473a12c615
 
Volume Name: vol_00e9aee29f6773febbc21f473a12c615
Type: Replicate
Volume ID: 9a0e0bbc-ec00-46d2-96a6-1655a8cfbc8b
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.150.136:/var/lib/heketi/mounts/vg_ddb6a45f4970a13fa08b05a6bec0688f/brick_9447c7c74bbd28e6fefdd71f435ba175/brick
Brick2: 192.168.150.137:/var/lib/heketi/mounts/vg_b0b80d45a808502a5ff401b2aebb7ec6/brick_dd194ff99fe2b7964294578de8e6397e/brick
Options Reconfigured:
user.heketi.id: 00e9aee29f6773febbc21f473a12c615
cluster.granular-entry-heal: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

4.7 Dynamically expand the pvc

As of K8S V1.16.2, PVCs can only be expanded; shrinking is not supported.
Run on the K8S master node

[root@kubemaster data]# vi test-pvc.yaml 
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-pvc
  namespace: default
  annotations:
    volume.beta.kubernetes.io/storage-class: "gluster"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi # changed to 100Gi
[root@kubemaster data]# kubectl apply -f test-pvc.yaml
persistentvolumeclaim/test-pvc configured
[root@kubemaster data]# kubectl get pv,pvc
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM              STORAGECLASS   REASON   AGE
persistentvolume/pvc-eff26402-0d54-43ed-9742-87b13157dc95   100Gi      RWX            Delete           Bound    default/test-pvc   gluster                 23m

NAME                             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/test-pvc   Bound    pvc-eff26402-0d54-43ed-9742-87b13157dc95   100Gi      RWX            gluster        23m
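The same expansion can also be requested without editing the yaml file, as a one-line patch (equivalent to the apply above):

kubectl patch pvc test-pvc -p '{"spec":{"resources":{"requests":{"storage":"100Gi"}}}}'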

Run on the heketi node.
As shown below, the expansion added two new 90G bricks (a second distribute pair):

[root@heketi /]# gluster volume info vol_00e9aee29f6773febbc21f473a12c615
 
Volume Name: vol_00e9aee29f6773febbc21f473a12c615
Type: Distributed-Replicate
Volume ID: 9a0e0bbc-ec00-46d2-96a6-1655a8cfbc8b
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: 192.168.150.136:/var/lib/heketi/mounts/vg_ddb6a45f4970a13fa08b05a6bec0688f/brick_9447c7c74bbd28e6fefdd71f435ba175/brick
Brick2: 192.168.150.137:/var/lib/heketi/mounts/vg_b0b80d45a808502a5ff401b2aebb7ec6/brick_dd194ff99fe2b7964294578de8e6397e/brick
Brick3: 192.168.150.133:/var/lib/heketi/mounts/vg_1d6c19781d852833797846ac08d0d376/brick_57636a9924cc2237899ba4bc447073b4/brick
Brick4: 192.168.150.143:/var/lib/heketi/mounts/vg_59a06bbff9d630400087d42e03d222c8/brick_92e87484dcb03fca742928e68160f7c5/brick
Options Reconfigured:
user.heketi.id: 00e9aee29f6773febbc21f473a12c615
cluster.granular-entry-heal: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off

[root@heketi /]# df -h /var/lib/heketi/mounts/vg_59a06bbff9d630400087d42e03d222c8/brick_92e87484dcb03fca742928e68160f7c5/brick
Filesystem                                                                               Size  Used Avail Use% Mounted on
/dev/mapper/vg_59a06bbff9d630400087d42e03d222c8-brick_92e87484dcb03fca742928e68160f7c5   90G   34M   90G    1% /var/lib/heketi/mounts/vg_59a06bbff9d630400087d42e03d222c8/brick_92e87484dcb03fca742928e68160f7c5
