Kubernetes实录 - Part 1 - Cluster Deployment and Configuration (23): Integrated deployment of GlusterFS + Heketi in Kubernetes to provide persistent storage PVs (integrated mode)

For the complete table of contents of the Kubernetes实录 series, see: Kubernetes实录-目录

Related posts:

The previous two posts, "Kubernetes使用glusterfs提供持久存储PV(手动模式)" and "Kubernetes使用glusterfs+Heketi提供持久存储PV(自动模式)", deployed GlusterFS and Heketi independently, outside of Kubernetes. This post records deploying them in container mode, integrated with Kubernetes.

1. Kubernetes environment

1.1 Kubernetes environment information

Hostname             IP address     OS           Role            Notes
ejucsmaster-shqs-1   10.99.12.201   CentOS 7.8   proxy, master   glusterfs /dev/sd{b…e}
ejucsmaster-shqs-2   10.99.12.202   CentOS 7.8   proxy, master   glusterfs /dev/sd{b…e}
ejucsmaster-shqs-3   10.99.12.203   CentOS 7.8   proxy, master   glusterfs /dev/sd{b…e}
ejucsnode-shqs-1     10.99.12.204   CentOS 7.8   worker          glusterfs /dev/sd{b…e}
ejucsnode-shqs-2     10.99.12.205   CentOS 7.8   worker          glusterfs /dev/sd{b…e}
ejucsnode-shqs-3     10.99.12.206   CentOS 7.8   worker          glusterfs /dev/sd{b…e}

For the Kubernetes deployment itself, refer to the other posts in this series via the links above.
Note: if you only want to configure a single GlusterFS cluster integrated with Kubernetes, the configuration is the same: deploy GlusterFS to the relevant nodes, configure Heketi, then edit the topology.json definition and load it.

1.2 Raw disks used by GlusterFS

Scope: master nodes and worker nodes have the same disk layout

fdisk -l
	Disk /dev/sdb: 599.6 GB, 599550590976 bytes, 1170997248 sectors
	Disk /dev/sdc: 599.6 GB, 599550590976 bytes, 1170997248 sectors
	Disk /dev/sdd: 599.6 GB, 599550590976 bytes, 1170997248 sectors
	Disk /dev/sde: 599.6 GB, 599550590976 bytes, 1170997248 sectors

1.3 GlusterFS client preparation on the Kubernetes nodes

1.3.1 Load kernel modules

Scope: all Kubernetes nodes

touch /etc/sysconfig/modules/custom.modules
vi /etc/sysconfig/modules/custom.modules
modprobe dm_thin_pool
modprobe dm_snapshot
modprobe dm_mirror

chmod +x /etc/sysconfig/modules/custom.modules
source /etc/sysconfig/modules/custom.modules
lsmod |grep dm

Note: the dm_snapshot kernel module was not found and did not load successfully; a corresponding package may be missing [to be resolved]. A quick availability check is sketched below.
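A minimal check, assuming a stock CentOS 7 kernel, to see whether dm_snapshot exists on disk or was built into the kernel:

modinfo dm_snapshot | head -n 3                       # errors out if no module file is shipped
grep CONFIG_DM_SNAPSHOT /boot/config-$(uname -r)      # =m built as a module, =y built into the kernel
lsmod | grep -E 'dm_(thin_pool|snapshot|mirror)'      # what is currently loaded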

1.3.2 Install glusterfs-fuse on the Kubernetes nodes for mounting storage volumes

Here the latest version (7) is installed directly; a specific version can also be pinned (see the sketch after the commands below).

Scope: all Kubernetes nodes

yum install centos-release-gluster -y
yum install -y glusterfs-fuse
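
To pin a version instead of taking the latest, the available builds can be listed first; the version string below is only an illustration, use whatever your mirror actually offers:

yum list glusterfs-fuse --showduplicates     # show every version provided by the enabled repos
yum install -y glusterfs-fuse-7.9            # hypothetical example version; adjust to the listing
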
1.3.3 Install the heketi-cli tool on the Kubernetes master nodes

The heketi-cli tool can be installed anywhere. To make it convenient for administrators to manage and configure GlusterFS + Heketi in Kubernetes, it is installed directly on the master nodes where kubectl is normally used to operate the cluster (for convenience it was installed on all master nodes).
Scope: Kubernetes master nodes
The Heketi deployed inside the Kubernetes cluster later is version 9, so heketi-cli version 9 is used here as well.

yum install  heketi-client -y
rpm -qa |grep heketi-client
	heketi-client-9.0.0-1.el7.x86_64

1.4 Kubernetes namespace convention for GlusterFS + Heketi

Kubernetes supports multiple namespaces; there is a special namespace called kube-system, in which cluster services generally run.
The convention here is that all of the cluster's own service components, as well as any third-party components or services created later to manage the cluster or serve the cluster as a whole, are created and managed in this namespace.

The official GlusterFS + Heketi on Kubernetes documents on GitHub do not specify a namespace; the default namespace, default, does not meet our requirement. There are two ways to handle this: switch the working namespace, or explicitly specify the namespace in the configuration files. This document uniformly takes the approach of editing the downloaded files to explicitly set the namespace to kube-system (the switch-context alternative is sketched after the snippet below).
Add to the metadata section of every resource object:

kind: xxxx
apiVersion: apps/v1
metadata:
  name: xxxxxxxxxx
  namespace: kube-system
...
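
The alternative mentioned above, switching the working namespace instead of editing every file, would look roughly like this (a sketch; it only changes the default namespace of the current kubectl context):

kubectl config set-context --current --namespace=kube-system    # make kube-system the default namespace
kubectl config view --minify | grep namespace                   # confirm the change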

1.5 Download the deployment files

GitHub location: https://github.com/heketi/heketi/blob/master/extras/kubernetes/

wget https://raw.githubusercontent.com/heketi/heketi/v9.0.0/extras/kubernetes/glusterfs-daemonset.json
wget https://raw.githubusercontent.com/heketi/heketi/v9.0.0/extras/kubernetes/heketi-bootstrap.json
wget https://raw.githubusercontent.com/heketi/heketi/v9.0.0/extras/kubernetes/heketi-deployment.json
wget https://raw.githubusercontent.com/heketi/heketi/v9.0.0/extras/kubernetes/heketi-service-account.json
wget https://raw.githubusercontent.com/heketi/heketi/v9.0.0/extras/kubernetes/heketi-start.sh
wget https://raw.githubusercontent.com/heketi/heketi/v9.0.0/extras/kubernetes/heketi.json
wget https://raw.githubusercontent.com/heketi/heketi/v9.0.0/extras/kubernetes/topology-sample.json

Note: the downloaded files are all JSON; in practice I convert them to YAML (I manage everything uniformly in YAML format; a conversion sketch follows).
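One way to convert the downloaded JSON files to YAML (a sketch; it assumes Python with PyYAML is installed on the machine, tools such as yq work just as well):

python -c 'import sys, json, yaml; yaml.safe_dump(json.load(sys.stdin), sys.stdout, default_flow_style=False)' \
  < glusterfs-daemonset.json > glusterfs-daemonset.yaml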
Because I need to configure two GlusterFS groups, I organized the files into the following directory layout:

# tree glusterfs/
glusterfs/
├── 1_glusterfs-system   # glusterfs-system cluster deployed on the master nodes, used for management/system components
│   ├── glusterfs-daemonset.yaml
│   └── topology-sample.json
├── 2_glusterfs-public    # glusterfs-public cluster deployed on the worker nodes, used for actual business systems
│   ├── glusterfs-daemonset.yaml
│   └── topology-sample.json
├── 3_heketi    # tool component that manages the GlusterFS clusters
│   ├── heketi-bootstrap.yaml
│   ├── heketi-deployment.yaml
│   ├── heketi.json
│   ├── heketi-service-account.yaml
│   └── heketi-storage.json
├── 4_storageclass-glusterfs.yaml     # defines the StorageClasses used in Kubernetes for the two GlusterFS clusters
└── 5_demo    # test and verification
    ├── public-nginx.yaml
    ├── public-pvc.yaml
    ├── sys-nginx.yaml
    └── system-pvc.yaml

2. Deploy GlusterFS integrated inside the Kubernetes cluster

Add labels to the Kubernetes nodes; they are used as nodeSelector filters for the DaemonSets:

kubectl label node ejucsmaster-shqs-1 storagenode=glusterfs-system
kubectl label node ejucsmaster-shqs-2 storagenode=glusterfs-system
kubectl label node ejucsmaster-shqs-3 storagenode=glusterfs-system

kubectl label node ejucsnode-shqs-1 storagenode=glusterfs-public
kubectl label node ejucsnode-shqs-2 storagenode=glusterfs-public
kubectl label node ejucsnode-shqs-3 storagenode=glusterfs-public

kubectl get node --show-labels
NAME                 STATUS   ROLES    AGE   VERSION   LABELS
ejucsmaster-shqs-1   Ready    master   17d   v1.18.5   kubernetes.io/hostname=ejucsmaster-shqs-1,kubernetes.io/os=linux,node-role.kubernetes.io/master=,storagenode=glusterfs-system
ejucsmaster-shqs-2   Ready    master   17d   v1.18.5   kubernetes.io/hostname=ejucsmaster-shqs-2,kubernetes.io/os=linux,node-role.kubernetes.io/master=,storagenode=glusterfs-system
ejucsmaster-shqs-3   Ready    master   17d   v1.18.5   kubernetes.io/hostname=ejucsmaster-shqs-3,kubernetes.io/os=linux,node-role.kubernetes.io/master=,storagenode=glusterfs-system
ejucsnode-shqs-1     Ready    node     17d   v1.18.5   kubernetes.io/hostname=ejucsnode-shqs-1,kubernetes.io/os=linux,node-role.kubernetes.io/node=,storagenode=glusterfs-public
ejucsnode-shqs-2     Ready    node     17d   v1.18.5   kubernetes.io/hostname=ejucsnode-shqs-2,kubernetes.io/os=linux,node-role.kubernetes.io/node=,storagenode=glusterfs-public
ejucsnode-shqs-3     Ready    node     17d   v1.18.5   kubernetes.io/hostname=ejucsnode-shqs-3,kubernetes.io/os=linux,node-role.kubernetes.io/node=,storagenode=glusterfs-public

2.1 Modify glusterfs-daemonset.json to meet the custom requirements

Requirement 1: use the kube-system namespace
Requirement 2: run as two DaemonSet groups, one on the 3 master nodes and one on the worker nodes (a single group covering all nodes would also work). Because master nodes do not accept scheduling by default (this can be changed), tolerations need to be set.
Requirement 3: use hostNetwork as the network mode
Requirement 4: the default list of volumes mounted into the glusterfs pod lacks /run/udev, which causes pvcreate to fail (not sure whether others have hit this), so an extra volume is added

  • glusterfs-system
# cat glusterfs/1_glusterfs-system/glusterfs-daemonset.yaml 

kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: glusterfs-system
  namespace: kube-system
  labels:
    glusterfs: glusterfs-system-deployment
  annotations:
    description: glusterfs system cluster
    tags: glusterfs
spec:
  selector:
    matchLabels:
      glusterfs-node: glusterfs-system-daemonset
  template:
    metadata:
      name: glusterfs-system
      labels:
        glusterfs-node: glusterfs-system-daemonset
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: "Equal"
        value: ""
        effect: NoSchedule
      nodeSelector:
        storagenode: glusterfs-system
      hostNetwork: true
      containers:
      - image: gluster/gluster-centos:latest
        imagePullPolicy: Always
        name: glusterfs
        volumeMounts:
        - name: glusterfs-heketi
          mountPath: "/var/lib/heketi"
        - name: glusterfs-run
          mountPath: "/run"
        - name: glusterfs-lvm
          mountPath: "/run/lvm"
        - name: glusterfs-udev
          mountPath: "/run/udev"
        - name: glusterfs-etc
          mountPath: "/etc/glusterfs"
        - name: glusterfs-logs
          mountPath: "/var/log/glusterfs"
        - name: glusterfs-config
          mountPath: "/var/lib/glusterd"
        - name: glusterfs-dev
          mountPath: "/dev"
        - name: glusterfs-cgroup
          mountPath: "/sys/fs/cgroup"
        securityContext:
          capabilities: {}
          privileged: true
        readinessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 60
          exec:
            command:
            - "/bin/bash"
            - "-c"
            - systemctl status glusterd.service
        livenessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 60
          exec:
            command:
            - "/bin/bash"
            - "-c"
            - systemctl status glusterd.service
      volumes:
      - name: glusterfs-heketi
        hostPath:
          path: "/var/lib/heketi"
      - name: glusterfs-run
      - name: glusterfs-lvm
        hostPath:
          path: "/run/lvm"
      - name: glusterfs-etc
        hostPath:
          path: "/etc/glusterfs"
      - name: glusterfs-logs
        hostPath:
          path: "/var/log/glusterfs"
      - name: glusterfs-config
        hostPath:
          path: "/var/lib/glusterd"
      - name: glusterfs-dev
        hostPath:
          path: "/dev"
      - name: glusterfs-cgroup
        hostPath:
          path: "/sys/fs/cgroup"
      - name: glusterfs-udev
        hostPath:
          path: "/run/udev"
  • glusterfs-public
# cat glusterfs/2_glusterfs-public/glusterfs-daemonset.yaml 

kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: glusterfs-public
  namespace: kube-system
  labels:
    glusterfs: glusterfs-public-deployment
  annotations:
    description: glusterfs public cluster
    tags: glusterfs
spec:
  selector:
    matchLabels:
      glusterfs-node: glusterfs-public-daemonset
  template:
    metadata:
      name: glusterfs-public
      labels:
        glusterfs-node: glusterfs-public-daemonset
    spec:
      nodeSelector:
        storagenode: glusterfs-public
      hostNetwork: true
      containers:
      - image: gluster/gluster-centos:latest
        imagePullPolicy: Always
        name: glusterfs
        volumeMounts:
        - name: glusterfs-heketi
          mountPath: "/var/lib/heketi"
        - name: glusterfs-run
          mountPath: "/run"
        - name: glusterfs-lvm
          mountPath: "/run/lvm"
        - name: glusterfs-udev
          mountPath: "/run/udev"
        - name: glusterfs-etc
          mountPath: "/etc/glusterfs"
        - name: glusterfs-logs
          mountPath: "/var/log/glusterfs"
        - name: glusterfs-config
          mountPath: "/var/lib/glusterd"
        - name: glusterfs-dev
          mountPath: "/dev"
        - name: glusterfs-cgroup
          mountPath: "/sys/fs/cgroup"
        securityContext:
          capabilities: {}
          privileged: true
        readinessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 60
          exec:
            command:
            - "/bin/bash"
            - "-c"
            - systemctl status glusterd.service
        livenessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 60
          exec:
            command:
            - "/bin/bash"
            - "-c"
            - systemctl status glusterd.service
      volumes:
      - name: glusterfs-heketi
        hostPath:
          path: "/var/lib/heketi"
      - name: glusterfs-run
      - name: glusterfs-lvm
        hostPath:
          path: "/run/lvm"
      - name: glusterfs-etc
        hostPath:
          path: "/etc/glusterfs"
      - name: glusterfs-logs
        hostPath:
          path: "/var/log/glusterfs"
      - name: glusterfs-config
        hostPath:
          path: "/var/lib/glusterd"
      - name: glusterfs-dev
        hostPath:
          path: "/dev"
      - name: glusterfs-cgroup
        hostPath:
          path: "/sys/fs/cgroup"
      - name: glusterfs-udev
        hostPath:
          path: "/run/udev"

2.2 Deploy the GlusterFS server inside the Kubernetes cluster

Two groups are configured here. If you want only one group, a single glusterfs-daemonset.yaml is enough (the only difference is the label filter); a sketch of the single-group variant is given below.
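A sketch of the single-group variant: give all six nodes the same label value (the value "glusterfs" below is hypothetical) and keep one DaemonSet whose nodeSelector matches it; the master-node tolerations from the glusterfs-system manifest must stay in that single DaemonSet.

kubectl label node ejucsmaster-shqs-1 ejucsmaster-shqs-2 ejucsmaster-shqs-3 \
  ejucsnode-shqs-1 ejucsnode-shqs-2 ejucsnode-shqs-3 storagenode=glusterfs --overwrite
# and in the single glusterfs-daemonset.yaml:
#   nodeSelector:
#     storagenode: glusterfs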

kubectl apply -f glusterfs/1_glusterfs-system/glusterfs-daemonset.yaml 
kubectl apply -f glusterfs/2_glusterfs-public/glusterfs-daemonset.yaml
kubectl get ds -n kube-system
	NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                     AGE
	glusterfs-public             3         3         3       3            3           storagenode=glusterfs-public      2d14h
	glusterfs-system             3         3         3       3            3           storagenode=glusterfs-system      2d14h

kubectl get pods -n kube-system -o wide
NAME                                         READY   STATUS    RESTARTS   AGE     IP                NODE
glusterfs-public-fgl7c                       1/1     Running   0          2d15h   10.99.12.205      ejucsnode-shqs-2   
glusterfs-public-fxljn                       1/1     Running   0          2d15h   10.99.12.204      ejucsnode-shqs-1   
glusterfs-public-qk7nk                       1/1     Running   0          2d15h   10.99.12.206      ejucsnode-shqs-3   
glusterfs-system-lhwgz                       1/1     Running   0          2d14h   10.99.12.202      ejucsmaster-shqs-2 
glusterfs-system-nrgqz                       1/1     Running   0          2d14h   10.99.12.203      ejucsmaster-shqs-3 
glusterfs-system-pngsc                       1/1     Running   0          2d14h   10.99.12.201      ejucsmaster-shqs-1

kubectl exec -it glusterfs-system-lhwgz  bash -n kube-system
[root@ejucsmaster-shqs-2 /]# ip add
	# hostNetwork mode is in use, so the network inside the pod is identical to the host's.
	...
	6: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
	    inet 10.99.12.202/24 brd 10.99.12.255 scope global bond0
	7: bond1: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
	    inet 10.99.13.202/24 brd 10.99.13.255 scope global bond1

[root@ejucsmaster-shqs-2 /]# fdisk -l
	Disk /dev/sdb: 599.6 GB, 599550590976 bytes, 1170997248 sectors
	Disk /dev/sdc: 599.6 GB, 599550590976 bytes, 1170997248 sectors
	Disk /dev/sdd: 599.6 GB, 599550590976 bytes, 1170997248 sectors
	Disk /dev/sde: 599.6 GB, 599550590976 bytes, 1170997248 sectors

3. Deploy the Heketi server inside the Kubernetes cluster

3.1 Create the Heketi service account

  • Explicitly specify the namespace in the configuration file

    # cat glusterfs/3_heketi/heketi-service-account.yaml 
    
    kind: ServiceAccount
    apiVersion: v1
    metadata:
      name: heketi-service-account
      namespace: "kube-system"
    
  • Create the ServiceAccount

    kubectl apply -f glusterfs/3_heketi/heketi-service-account.yaml 
    	serviceaccount/heketi-service-account created
    
    kubectl get serviceaccount  -n kube-system
    	NAME                                 SECRETS   AGE
    	heketi-service-account               1         2d21h
    
  • Grant permissions to the service account
    Bind this service account to the permissions needed to control the GlusterFS pods; this is done by creating a cluster role binding for the service account (a quick verification sketch follows the command below).

    # The namespace authorized here is kube-system; the GlusterFS pods that Heketi operates on are in this namespace, otherwise this role cannot reach GlusterFS
    kubectl create clusterrolebinding heketi-gluster-admin --clusterrole=edit --serviceaccount=kube-system:heketi-service-account
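
    A quick check that the binding exists and points at the right service account (read-only kubectl queries):

    kubectl get clusterrolebinding heketi-gluster-admin -o wide
    kubectl describe clusterrolebinding heketi-gluster-admin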
    

3.2 Create a Secret that stores the Heketi service configuration

The heketi.json file downloaded from GitHub has authentication disabled in its default parameters. To enable authentication, modify that file; see: GlusterFS操作记录(5) GlusterFS+Heketi配置(独立部署). The default configuration is used here (an excerpt of the relevant fields is sketched after the commands below).

  • In heketi.json, the value of glusterfs/executor is kubernetes
  • The Secret must be in the same namespace (kube-system) as the GlusterFS pods
kubectl create secret generic heketi-config-secret --from-file=glusterfs/3_heketi/heketi.json   -n kube-system
	secret/heketi-config-secret created

kubectl get secrets -n kube-system
	heketi-config-secret                             Opaque                                1      22m
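
For reference, the fields of heketi.json that matter for this setup look roughly like the excerpt below (a trimmed sketch based on the upstream sample file; the admin/user keys are placeholders you would only set when enabling authentication):

{
  "port": "8080",
  "use_auth": false,
  "jwt": {
    "admin": { "key": "CHANGE_ME_ADMIN_KEY" },
    "user":  { "key": "CHANGE_ME_USER_KEY" }
  },
  "glusterfs": {
    "executor": "kubernetes",
    "db": "/var/lib/heketi/heketi.db"
  }
}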

3.3 Deploy heketi-bootstrap

3.3.1 Modify heketi-bootstrap.yaml to ensure the namespace is kube-system

Two resources need the change: the Service and the Deployment.

# cat glusterfs/3_heketi/heketi-bootstrap.yaml 

kind: List
apiVersion: v1
items:
- kind: Service
  apiVersion: v1
  metadata:
    name: deploy-heketi
    namespace: kube-system
    labels:
      glusterfs: heketi-service
      deploy-heketi: support
    annotations:
      description: Exposes Heketi Service
  spec:
    selector:
      name: deploy-heketi
    ports:
    - name: deploy-heketi
      port: 8080
      targetPort: 8080
- kind: Deployment
  apiVersion: apps/v1
  metadata:
    name: deploy-heketi
    namespace: kube-system
    labels:
      glusterfs: heketi-deployment
      deploy-heketi: deployment
    annotations:
      description: Defines how to deploy Heketi
  spec:
    selector:
      matchLabels:
        name: deploy-heketi
        glusterfs: heketi-pod
        deploy-heketi: pod
    replicas: 1
    template:
      metadata:
        name: deploy-heketi
        labels:
          name: deploy-heketi
          glusterfs: heketi-pod
          deploy-heketi: pod
      spec:
        serviceAccountName: heketi-service-account
        containers:
        - image: heketi/heketi:9
          imagePullPolicy: Always
          name: deploy-heketi
          env:
          - name: HEKETI_EXECUTOR
            value: kubernetes
          - name: HEKETI_DB_PATH
            value: "/var/lib/heketi/heketi.db"
          - name: HEKETI_FSTAB
            value: "/var/lib/heketi/fstab"
          - name: HEKETI_SNAPSHOT_LIMIT
            value: '14'
          - name: HEKETI_KUBE_GLUSTER_DAEMONSET
            value: 'y'
          ports:
          - containerPort: 8080
          volumeMounts:
          - name: db
            mountPath: "/var/lib/heketi"
          - name: config
            mountPath: "/etc/heketi"
          readinessProbe:
            timeoutSeconds: 3
            initialDelaySeconds: 3
            httpGet:
              path: "/hello"
              port: 8080
          livenessProbe:
            timeoutSeconds: 3
            initialDelaySeconds: 30
            httpGet:
              path: "/hello"
              port: 8080
        volumes:
        - name: db
        - name: config
          secret:
            secretName: heketi-config-secret

3.3.2 Deploy

kubectl apply -f glusterfs/3_heketi/heketi-bootstrap.yaml
	service/deploy-heketi created
	deployment.extensions/deploy-heketi created

kubectl get svc -n kube-system
	NAME                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)           AGE
	deploy-heketi             ClusterIP   10.109.97.157    <none>        8080/TCP          38s

kubectl get deployment -n kube-system
	NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
	deploy-heketi          1/1     1            1           64s

# verify
curl http://10.109.97.157:8080/hello
	Hello from Heketi

# Use the heketi-cli command-line tool to give Heketi the information about the GlusterFS clusters to manage; heketi-cli finds its server through the HEKETI_CLI_SERVER variable (run inside the Kubernetes cluster, e.g. on a master node)
export HEKETI_CLI_SERVER=http://10.109.97.157:8080 

# heketi-cli cluster list
	Clusters:

3.4 Configure the GlusterFS clusters through heketi-cli using topology files

The topology-sample.json downloaded from GitHub must be adjusted to match your own cluster; the main changes are:

  • manage is the hostname of the corresponding node
  • storage is the IP address on the network used by the GlusterFS storage volumes (in my case this is a separate network from the management network)
  • devices is the list of raw devices, configured according to the actual disks

Modified file contents: cat topology-sample.json

  • glusterfs-system
# cat glusterfs/1_glusterfs-system/topology-sample.json 
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "ejucsmaster-shqs-1"
              ],
              "storage": [
                "10.99.13.201"
              ]
            },
            "zone": 1
          },
          "devices": [
            {
              "name": "/dev/sdb",
              "destroydata": true
            },
            {
              "name": "/dev/sdc",
              "destroydata": true
            },
            {
              "name": "/dev/sdd",
              "destroydata": true
            },
            {
              "name": "/dev/sde",
              "destroydata": true
            }
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "ejucsmaster-shqs-2"
              ],
              "storage": [
                "10.99.13.202"
              ]
            },
            "zone": 2
          },
          "devices": [
            {
              "name": "/dev/sdb",
              "destroydata": true
            },
            {
              "name": "/dev/sdc",
              "destroydata": true
            },
            {
              "name": "/dev/sdd",
              "destroydata": true
            },
            {
              "name": "/dev/sde",
              "destroydata": true
            }
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "ejucsmaster-shqs-3"
              ],
              "storage": [
                "10.99.12.203"
              ]
            },
            "zone": 3
          },
          "devices": [
            {
              "name": "/dev/sdb",
              "destroydata": true
            },
            {
              "name": "/dev/sdc",
              "destroydata": true
            },
            {
              "name": "/dev/sdd",
              "destroydata": true
            },
            {
              "name": "/dev/sde",
              "destroydata": true
            }
          ]
        }
      ]
    }
  ]
}
  • glusterfs-public
cat glusterfs/2_glusterfs-public/topology-sample.json 
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "ejucsnode-shqs-1"
              ],
              "storage": [
                "10.99.13.204"
              ]
            },
            "zone": 1
          },
          "devices": [
            {
              "name": "/dev/sdb",
              "destroydata": true
            },
            {
              "name": "/dev/sdc",
              "destroydata": true
            },
            {
              "name": "/dev/sdd",
              "destroydata": true
            },
            {
              "name": "/dev/sde",
              "destroydata": true
            }
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "ejucsnode-shqs-2"
              ],
              "storage": [
                "10.99.13.205"
              ]
            },
            "zone": 2
          },
          "devices": [
            {
              "name": "/dev/sdb",
              "destroydata": true
            },
            {
              "name": "/dev/sdc",
              "destroydata": true
            },
            {
              "name": "/dev/sdd",
              "destroydata": true
            },
            {
              "name": "/dev/sde",
              "destroydata": true
            }
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "ejucsnode-shqs-3"
              ],
              "storage": [
                "10.99.13.206"
              ]
            },
            "zone": 3
          },
          "devices": [
            {
              "name": "/dev/sdb",
              "destroydata": true
            },
            {
              "name": "/dev/sdc",
              "destroydata": true
            },
            {
              "name": "/dev/sdd",
              "destroydata": true
            },
            {
              "name": "/dev/sde",
              "destroydata": true
            }
          ]
        }
      ]
    }
  ]
}
# glusterfs-system
heketi-cli topology load --json=glusterfs/1_glusterfs-system/topology-sample.json
	Creating cluster ... ID: 7cc8b15572ae28cdac0e497c13ca87ae
	        Allowing file volumes on cluster.
	        Allowing block volumes on cluster.
	        Creating node ejucsmaster-shqs-1 ... ID: 56b8f82e6010668ff692535f2dc0e114
	                Adding device /dev/sdb ... OK
	                Adding device /dev/sdc ... OK
	                Adding device /dev/sdd ... OK
	                Adding device /dev/sde ... OK
	        Creating node ejucsmaster-shqs-2 ... ID: 4233c0daac3612ec2d616f883064c256
	                Adding device /dev/sdb ... OK
	                Adding device /dev/sdc ... OK
	                Adding device /dev/sdd ... OK
	                Adding device /dev/sde ... OK
	        Creating node ejucsmaster-shqs-3 ... ID: 8fa67b4bf7a1868630b740fc38309fa1
	                Adding device /dev/sdb ... OK
	                Adding device /dev/sdc ... OK
	                Adding device /dev/sdd ... OK
	                Adding device /dev/sde ... OK

# glusterfs-public  [this could be merged with the file above into one file; they are handled separately here]
heketi-cli topology load --json=glusterfs/2_glusterfs-public/topology-sample.json
	Creating cluster ... ID: e13cb6456e43121d2a85ad0d613b2ad7
	        Allowing file volumes on cluster.
	        Allowing block volumes on cluster.
	        Creating node ejucsnode-shqs-1 ... ID: 709f48cf3dc391b8768bae40b67ef036
	                Adding device /dev/sdb ... OK
	                Adding device /dev/sdc ... OK
	                Adding device /dev/sdd ... OK
	                Adding device /dev/sde ... OK
	        Creating node ejucsnode-shqs-2 ... ID: dbb384d6757b90e11bcb6b175c100e64
	                Adding device /dev/sdb ... OK
	                Adding device /dev/sdc ... OK
	                Adding device /dev/sdd ... OK
	                Adding device /dev/sde ... OK
	        Creating node ejucsnode-shqs-3 ... ID: 2d3c07c38565233f8bf7d069e7495732
	                Adding device /dev/sdb ... OK
	                Adding device /dev/sdc ... OK
	                Adding device /dev/sdd ... OK
	                Adding device /dev/sde ... OK



# heketi-cli cluster list
Clusters:
Id:7cc8b15572ae28cdac0e497c13ca87ae [file][block]
Id:e13cb6456e43121d2a85ad0d613b2ad7 [file][block]


heketi-cli topology info
# output omitted

3.5 Verification (1) - this step can be skipped and done as the final check after everything is configured

At this point the GlusterFS + Heketi service is actually already usable, but the Heketi database is stored ephemerally and disappears with the pod. We verify first, and afterwards configure a GlusterFS volume to store the Heketi database.

  • Create a StorageClass
    Since authentication is not enabled, all authentication-related parameters are commented out
    cat glusterfs-storageclass.yaml
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: glusterfs-storage-class-2
    provisioner: kubernetes.io/glusterfs
    allowVolumeExpansion: true
    reclaimPolicy: Delete
    parameters:
      resturl: "http://10.109.97.157:8080"
      clusterid: "e13cb6456e43121d2a85ad0d613b2ad7"
      volumetype: "replicate:3"
      gidMax: "50000"
      gidMin: "40000"
      #restauthenabled: "true"
      #restuser: "admin"
      #restuserkey: "admin_secret"
      #secretNamespace: "kube-system"
      #secretName: "heketi-secret"
    
    kubectl apply -f glusterfs-storageclass.yaml 
    	storageclass.storage.k8s.io/glusterfs-storage-class-2 created
    
    kubectl get sc 
    	NAME                        PROVISIONER               AGE
    	glusterfs-storage-class-2   kubernetes.io/glusterfs   31s
    
  • Create a PVC
    cat glusterfs-pvc.yaml 
    
    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: gluster1
      namespace: default
    spec:
      storageClassName: glusterfs-storage-class-2
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 2Gi
    
    kubectl apply -f glusterfs-pvc.yaml
    	persistentvolumeclaim/gluster1 created
    
    kubectl get pvc
    	NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                AGE
    	gluster1   Bound    pvc-b52ffe6f-2eb5-11e9-8a0e-1418776411a1   2Gi        RWX            glusterfs-storage-class-2   23s
    
    kubectl get pv
    	NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM              STORAGECLASS                REASON   AGE
    	pvc-b52ffe6f-2eb5-11e9-8a0e-1418776411a1   2Gi        RWX            Delete           Bound    default/gluster1   glusterfs-storage-class-2            31s
    
  • Create a pod that uses the volume through this PVC
cat nginx.yaml 

kind: Pod
apiVersion: v1
metadata:
  name: nginx-glusterfs
  labels:
    name: nginx-glusterfs
spec:
  containers:
  - name: nginx-glusterfs
    image: nginx
    ports:
    - name: web
      containerPort: 80
    volumeMounts:
    - name: gluster-vol1
      mountPath: /usr/share/nginx/html
  volumes:
  - name: gluster-vol1
    persistentVolumeClaim:
      claimName: gluster1
kubectl apply -f nginx.yaml 
	pod/nginx-glusterfs created

kubectl get pods
	NAME              READY   STATUS    RESTARTS   AGE     IP             NODE               NOMINATED NODE   READINESS GATES
	nginx-glusterfs   1/1     Running   0          2m28s   192.168.4.16   ejucsnode-shqs-2   <none>           <none>

kubectl exec -it nginx-glusterfs bash
root@nginx-glusterfs:/# df -h
	Filesystem                                         Size  Used Avail Use% Mounted on
	overlay                                            526G  7.4G  519G   2% /
	tmpfs                                               64M     0   64M   0% /dev
	tmpfs                                               63G     0   63G   0% /sys/fs/cgroup
	/dev/sda5                                          526G  7.4G  519G   2% /etc/hosts
	shm                                                 64M     0   64M   0% /dev/shm
	10.99.13.201:vol_43db1521d836d1a1720f3809e0c5dd0a  2.0G   53M  2.0G   3% /usr/share/nginx/html
	tmpfs                                               63G   12K   63G   1% /run/secrets/kubernetes.io/serviceaccount
	tmpfs                                               63G     0   63G   0% /proc/acpi
	tmpfs                                               63G     0   63G   0% /proc/scsi
	tmpfs                                               63G     0   63G   0% /sys/firmware

root@nginx-glusterfs:/# echo "hello GlusterFS" >> /usr/share/nginx/html/index.html
root@nginx-glusterfs:/# exit

curl http://192.168.4.16 
hello GlusterFS

kubectl exec -it glusterfs-7vljr bash -n kube-system
	[root@ejucsmaster-shqs-2 /]# ls /var/lib/heketi/mounts/vg_d8eec2ad379ecb8a494a5f85e5e73b0f/brick_54ea5fa3dda9c9634578ffb37399816d/brick/ 
	index.html
	[root@ejucsmaster-shqs-2 /]# cat /var/lib/heketi/mounts/vg_d8eec2ad379ecb8a494a5f85e5e73b0f/brick_54ea5fa3dda9c9634578ffb37399816d/brick/index.html 
	hello GlusterFS
	[root@ejucsmaster-shqs-2 /]# 

Delete the test data

kubectl delete -f nginx.yaml
kubectl delete -f glusterfs-pvc.yaml
kubectl delete -f glusterfs-storageclass.yaml

3.6 Create the Heketi storage volume

heketi-cli setup-openshift-heketi-storage
Saving heketi-storage.json

# operate explicitly in the kube-system namespace
kubectl apply -f heketi-storage.json -n kube-system
	secret/heketi-storage-secret created
	endpoints/heketi-storage-endpoints created
	service/heketi-storage-endpoints created
	job.batch/heketi-storage-copy-job created

# wait a moment, then check the job (or use kubectl wait, sketched below)
kubectl get jobs -n kube-system
NAME                      COMPLETIONS   DURATION   AGE
heketi-storage-copy-job   1/1           2s         78s
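
Instead of polling, the copy job can also be waited on directly (a sketch using kubectl's built-in wait):

kubectl wait --for=condition=complete job/heketi-storage-copy-job -n kube-system --timeout=300s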

After the job completes, delete the components related to the bootstrap Heketi instance:

kubectl delete all,service,jobs,deployment,secret --selector="deploy-heketi" -n kube-system
	pod "deploy-heketi-77745f8c4-24wl5" deleted
	service "deploy-heketi" deleted
	deployment.apps "deploy-heketi" deleted
	replicaset.apps "deploy-heketi-77745f8c4" deleted
	job.batch "heketi-storage-copy-job" deleted
	secret "heketi-storage-secret" deleted

3.7 Create the persistent Heketi service inside the Kubernetes cluster

# cat glusterfs/3_heketi/heketi-deployment.yaml 
---
kind: List
apiVersion: v1
items:
- kind: Secret
  apiVersion: v1
  metadata:
    name: heketi-db-backup
    namespace: kube-system
    labels:
      glusterfs: heketi-db
      heketi: db
  data: {}
  type: Opaque
- kind: Service
  apiVersion: v1
  metadata:
    name: heketi
    namespace: kube-system
    labels:
      glusterfs: heketi-service
      deploy-heketi: support
    annotations:
      description: Exposes Heketi Service
  spec:
    selector:
      name: heketi
    ports:
    - name: heketi
      port: 8080
      targetPort: 8080
- kind: Deployment
  apiVersion: apps/v1
  metadata:
    namespace: kube-system
    name: heketi
    labels:
      glusterfs: heketi-deployment
    annotations:
      description: Defines how to deploy Heketi
  spec:
    selector:
      matchLabels:
        name: heketi
        glusterfs: heketi-pod
    replicas: 1
    template:
      metadata:
        name: heketi
        labels:
          name: heketi
          glusterfs: heketi-pod
      spec:
        serviceAccountName: heketi-service-account
        containers:
        - image: heketi/heketi:9
          imagePullPolicy: Always
          name: heketi
          env:
          - name: HEKETI_EXECUTOR
            value: kubernetes
          - name: HEKETI_DB_PATH
            value: "/var/lib/heketi/heketi.db"
          - name: HEKETI_FSTAB
            value: "/var/lib/heketi/fstab"
          - name: HEKETI_SNAPSHOT_LIMIT
            value: '14'
          - name: HEKETI_KUBE_GLUSTER_DAEMONSET
            value: 'y'
          ports:
          - containerPort: 8080
          volumeMounts:
          - mountPath: "/backupdb"
            name: heketi-db-secret
          - name: db
            mountPath: "/var/lib/heketi"
          - name: config
            mountPath: "/etc/heketi"
          readinessProbe:
            timeoutSeconds: 3
            initialDelaySeconds: 3
            httpGet:
              path: "/hello"
              port: 8080
          livenessProbe:
            timeoutSeconds: 3
            initialDelaySeconds: 30
            httpGet:
              path: "/hello"
              port: 8080
        volumes:
        - name: db
          glusterfs:
            endpoints: heketi-storage-endpoints
            path: heketidbstorage
        - name: heketi-db-secret
          secret:
            secretName: heketi-db-backup
        - name: config
          secret:
            secretName: heketi-config-secret

Deploy the configuration:

kubectl apply -f glusterfs/3_heketi/heketi-deployment.yaml -n kube-system
	secret/heketi-db-backup created
	service/heketi created
	deployment.extensions/heketi created

kubectl get deploy -n  kube-system
	NAME                   READY   UP-TO-DATE   AVAILABLE   AGE
	heketi                    1/1     1            1           2d14h


kubectl get svc -n kube-system
	NAME                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)           AGE
	heketi                     ClusterIP   10.97.146.26   <none>        8080/TCP                  2d14h

# add as a system-wide environment variable
vi  /etc/profile
...
export HEKETI_CLI_SERVER=http://10.97.146.26:8080

# take effect immediately in the current shell
export HEKETI_CLI_SERVER=http://10.97.146.26:8080

heketi-cli volume list
Id:86f96b263fe64b1c4278f572e09c0d14    Cluster:7cc8b15572ae28cdac0e497c13ca87ae    Name:heketidbstorage

The Heketi database now uses a GlusterFS volume, so it is not reset when the Heketi pod restarts (a quick check is sketched below).
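
A quick way to confirm the database really lives on the GlusterFS volume (a sketch; it only inspects the running heketi pod, selected via its name=heketi label):

HEKETI_POD=$(kubectl get pods -n kube-system -l name=heketi -o jsonpath='{.items[0].metadata.name}')
kubectl exec -n kube-system "$HEKETI_POD" -- df -h /var/lib/heketi          # should show the heketidbstorage gluster mount
kubectl exec -n kube-system "$HEKETI_POD" -- ls -l /var/lib/heketi/heketi.db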

  • Create the StorageClasses for actual use
# cat glusterfs/4_storageclass-glusterfs.yaml 
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-system
provisioner: kubernetes.io/glusterfs
allowVolumeExpansion: true
reclaimPolicy: Retain
parameters:
  resturl: "http://10.97.146.26:8080"
  clusterid: "7cc8b15572ae28cdac0e497c13ca87ae"
  volumetype: "replicate:3"
  gidMax: "9000"
  gidMin: "3000"
  #restauthenabled: "true"
  #restuser: "admin"
  #restuserkey: "admin_secret"
  #secretNamespace: "kube-system"
  #secretName: "heketi-secret"

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs-public
provisioner: kubernetes.io/glusterfs
allowVolumeExpansion: true
reclaimPolicy: Retain
parameters:
  resturl: "http://10.97.146.26:8080"
  clusterid: "e13cb6456e43121d2a85ad0d613b2ad7"
  volumetype: "replicate:3"
  gidMax: "9000"
  gidMin: "3000"
  #restauthenabled: "true"
  #restuser: "admin"
  #restuserkey: "admin_secret"
  #secretNamespace: "kube-system"
  #secretName: "heketi-secret"
kubectl apply -f glusterfs/4_storageclass-glusterfs.yaml 
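
A quick verification that both classes exist before they are referenced from PVCs:

kubectl get sc
kubectl describe sc glusterfs-public | grep -E 'Provisioner|Parameters'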

3.8 Verification (2)

Similar to 3.5 Verification (1).

# tree glusterfs/5_demo/
glusterfs/5_demo/
├── public-nginx.yaml
├── public-pvc.yaml
├── sys-nginx.yaml
└── system-pvc.yaml
# cat glusterfs/5_demo/public-pvc.yaml 
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: public-pvc-1
  namespace: default
spec:
  storageClassName: glusterfs-public
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi

# cat glusterfs/5_demo/public-nginx.yaml 
kind: Pod
apiVersion: v1
metadata:
  name: public-nginx
  labels:
    name: public-nginx
spec:
  containers:
  - name: public-nginx
    image: nginx
    ports:
    - name: web
      containerPort: 80
    volumeMounts:
    - name: public-pvc-1
      mountPath: /usr/share/nginx/html
  volumes:
  - name: public-pvc-1
    persistentVolumeClaim:
      claimName: public-pvc-1
# namespace = default
kubectl apply -f glusterfs/5_demo/public-pvc.yaml 
kubectl apply -f glusterfs/5_demo/public-nginx.yaml 

kubectl get pods -o wide
	NAME              READY   STATUS    RESTARTS   AGE     IP             NODE               NOMINATED NODE   READINESS GATES
	public-nginx   1/1     Running   0          2m25s   192.168.4.18   ejucsnode-shqs-2   <none>           <none>
kubectl exec -it public-nginx  bash
	echo "hello Glusterfs" >> /usr/share/nginx/html/index.html

curl 192.168.4.18
	hello Glusterfs
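
To clean up the demo resources afterwards (mirroring the cleanup in 3.5; note these StorageClasses use reclaimPolicy Retain, so released PVs stay behind until removed manually):

kubectl delete -f glusterfs/5_demo/public-nginx.yaml
kubectl delete -f glusterfs/5_demo/public-pvc.yaml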

4. Conclusion

This completes the configuration; usage scenarios will be added later as they are set up.
