Deploying the GlusterFS Distributed File System on a Kubernetes Cluster

About GlusterFS and Heketi

Together, GlusterFS and Heketi provide a powerful, flexible, and easy-to-manage distributed storage solution that suits workloads of almost any scale. GlusterFS delivers a high-performance, highly reliable distributed file system, while Heketi simplifies GlusterFS deployment and management, making it much easier to build and operate the storage infrastructure.

GlusterFS is an open-source distributed file system designed to provide scalable, high-performance, and highly reliable storage. It aggregates multiple storage servers into a single unified file system and offers data redundancy, load balancing, and elastic scaling.

Key features of GlusterFS include:

  1. Scalability: GlusterFS scales out to hundreds or even thousands of servers, supporting petabyte-scale storage.

  2. High performance: parallel I/O, caching, and load balancing allow high-concurrency, high-throughput data access.

  3. Data redundancy: GlusterFS supports several redundancy schemes, including replicated, striped, and distributed-striped volumes, to keep data safe and reliable.

  4. Transparency: a single unified namespace lets clients access the whole file system without caring which node actually stores the data.

  5. Elastic scaling: storage nodes can be added or removed on the fly, so capacity can be expanded or shrunk dynamically.

  6. Flexibility: GlusterFS can be accessed through multiple protocols, including the native FUSE client, NFS, and CIFS/SMB, so it integrates smoothly with a wide range of applications and operating systems.

Heketi is a management service for GlusterFS that simplifies its deployment, management, and maintenance. It exposes a RESTful API through which storage volumes can be created, deleted, and resized, and storage resources can be allocated dynamically.

Key capabilities of Heketi include:

  1. Automated management: Heketi automates GlusterFS cluster operations such as creating, deleting, and resizing volumes.

  2. Dynamic resource allocation: storage is allocated on demand, maximizing resource utilization.

  3. Simplified deployment: the heketi-cli command-line tool and the REST API streamline GlusterFS deployment and management (see the sketch after this list).

  4. Failure recovery: Heketi can detect and handle storage node failures to keep data reliable and available.

  5. Integration: Heketi integrates with management tools and automation platforms such as Kubernetes and OpenShift, enabling automated provisioning and scheduling of storage resources.
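
As a taste of that workflow, here is a hedged sketch of typical heketi-cli calls against the REST API (the server URL and the admin credentials are placeholders, and exact flags can differ slightly between Heketi versions):

# Point heketi-cli at the Heketi REST endpoint (placeholder URL and credentials)
export HEKETI_CLI_SERVER=http://localhost:8080

# List the registered clusters and existing volumes
heketi-cli cluster list --user admin --secret admin123
heketi-cli volume list --user admin --secret admin123

# Create a 10 GiB, 3-way replicated volume, then delete it again by ID
heketi-cli volume create --size=10 --replica=3 --user admin --secret admin123
heketi-cli volume delete <volume-id> --user admin --secret admin123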

Preparation Before Deployment

First, you need a working Kubernetes cluster. Because Heketi requires GlusterFS to have at least 3 nodes, the Kubernetes cluster should ideally have 4 or more nodes (3 will also work). I prepared 6 nodes; all of them must have time synchronization configured so that their clocks stay consistent:

Type              Node name                  Disk layout                                     Label
master node       linshi-k8s-52              /dev/sda1 /, /dev/sda1 /var                     -
regular node      linshi-k8s-54              /dev/sda1 /, /dev/sda1 /var                     label=commnode
regular node      linshi-k8s-57              /dev/sda1 /, /dev/sda1 /var                     label=commnode
GlusterFS node    linshi-k8s-clusterfs-55    /dev/sda1 /, /dev/sdb raw disk (unformatted)    storagenode=glusterfs
GlusterFS node    linshi-k8s-clusterfs-56    /dev/sda1 /, /dev/sdb raw disk (unformatted)    storagenode=glusterfs
GlusterFS node    linshi-k8s-clusterfs-58    /dev/sda1 /, /dev/sdb raw disk (unformatted)    storagenode=glusterfs

To use GlusterFS, the GlusterFS client must be installed on every node that will consume GlusterFS volumes (the regular worker nodes need it too):

yum -y install glusterfs glusterfs-fuse

The GlusterFS management service containers must run in privileged mode, so privileged mode has to be enabled on the Kubernetes API server:

 ~]# systemctl cat kube-apiserver
 # /usr/lib/systemd/system/kube-apiserver.service
 ExecStart=/opt/kubernetes/server/bin/kube-apiserver \
     --allow-privileged=true  

The nodes that will run the GlusterFS management service must be labeled, for example with storagenode=glusterfs:

kubectl label node linshi-k8s-clusterfs-55 storagenode=glusterfs
kubectl label node linshi-k8s-clusterfs-56 storagenode=glusterfs
kubectl label node linshi-k8s-clusterfs-58 storagenode=glusterfs
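
To confirm the labels were applied, a quick check (assuming the node names above) is:

kubectl get nodes -l storagenode=glusterfs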

Creating the GlusterFS Management Service Containers

The GlusterFS management service is deployed as a DaemonSet, i.e., one GlusterFS management pod runs on every node that carries the storagenode=glusterfs label. glusterfs-daemonset.yaml is as follows:

---
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: glusterfs
  labels:
    glusterfs: daemonset
  annotations:
    description: GlusterFS DaemonSet
    tags: glusterfs
spec:
  selector:
    matchLabels:
      glusterfs-node: pod
  template:
    metadata:
      name: glusterfs
      labels:
        glusterfs-node: pod
    spec:
      nodeSelector:
        storagenode: glusterfs
      hostNetwork: true
      containers:
      - image: 192.168.XXXX/library/gluster-centos
        imagePullPolicy: IfNotPresent
        name: glusterfs
        volumeMounts:
        - name: glusterfs-heketi
          mountPath: "/var/lib/heketi"
        - name: glusterfs-run
          mountPath: "/run"
        - name: glusterfs-lvm
          mountPath: "/run/lvm"
        - name: glusterfs-etc
          mountPath: "/etc/glusterfs"
        - name: glusterfs-logs
          mountPath: "/var/log/glusterfs"
        - name: glusterfs-config
          mountPath: "/var/lib/glusterd"
        - name: glusterfs-dev
          mountPath: "/dev"
        - name: glusterfs-misc
          mountPath: "/var/lib/misc/glusterfsd"
        - name: glusterfs-cgroup
          mountPath: "/sys/fs/cgroup"
          readOnly: true
        - name: glusterfs-ssl
          mountPath: "/etc/ssl"
          readOnly: true
        securityContext:
          capabilities: {}
          privileged: true
        readinessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 40
          exec:
            command:
            - "/bin/bash"
            - "-c"
            - systemctl status glusterd.service
          periodSeconds: 25
          successThreshold: 1
          failureThreshold: 15
        livenessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 40
          exec:
            command:
            - "/bin/bash"
            - "-c"
            - systemctl status glusterd.service
          periodSeconds: 25
          successThreshold: 1
          failureThreshold: 15
      volumes:
      - name: glusterfs-heketi
        hostPath:
          path: "/var/lib/heketi"
      - name: glusterfs-run
        emptyDir: {}
      - name: glusterfs-lvm
        hostPath:
          path: "/run/lvm"
      - name: glusterfs-etc
        hostPath:
          path: "/etc/glusterfs"
      - name: glusterfs-logs
        hostPath:
          path: "/var/log/glusterfs"
      - name: glusterfs-config
        hostPath:
          path: "/var/lib/glusterd"
      - name: glusterfs-dev
        hostPath:
          path: "/dev"
      - name: glusterfs-misc
        hostPath:
          path: "/var/lib/misc/glusterfsd"
      - name: glusterfs-cgroup
        hostPath:
          path: "/sys/fs/cgroup"
      - name: glusterfs-ssl
        hostPath:
          path: "/etc/ssl"

Pull the gluster-centos image, tag it for the private registry, and apply the DaemonSet:

 ~]# docker pull gluster/gluster-centos:latest
 ~]# docker tag gluster/gluster-centos 192.168.XXXX/library/gluster-centos
 ~]# kubectl apply -f glusterfs-daemonset.yaml 
 ~]# kubectl get po -o wide
 NAME                             READY   STATUS    RESTARTS   AGE     IP               NODE                      NOMINATED NODE   READINESS GATES
 glusterfs-2lzr7                  1/1     Running   0          2d19h   192.XXXX.56   linshi-k8s-clusterfs-56   <none>           <none>
 glusterfs-6qzdc                  1/1     Running   0          2d19h   192.XXXX.55   linshi-k8s-clusterfs-55   <none>           <none>
 glusterfs-lr9ff                  1/1     Running   0          2d19h   192.XXXX.58   linshi-k8s-clusterfs-58   <none>           <none>
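
As a quick sanity check (a hedged example reusing one of the pod names from the output above), glusterd should be active inside each pod:

 ~]# kubectl exec -it glusterfs-2lzr7 -- systemctl status glusterd.service
 ~]# kubectl exec -it glusterfs-2lzr7 -- gluster --version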

Creating the Heketi Service

Before deploying Heketi it must be granted permissions, otherwise it cannot access the GlusterFS management pods. First, create a ServiceAccount object (heketi-service-account.yaml):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: heketi-service-account

Bind the required permissions to heketi-service-account:

 ~]# kubectl create clusterrole heketi-gluster-role --verb=get,list,watch,create \
     --resource=pods,pods/status,pods/exec
 ~]# kubectl create clusterrolebinding heketi-gluster-admin \
     --clusterrole=heketi-gluster-role --serviceaccount=default:heketi-service-account
 ~]# kubectl create -f heketi-service-account.yaml 
 ~]# kubectl get sa|grep heketi
 NAME                     SECRETS   AGE
 heketi-service-account   1         3d1h
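
A hedged way to verify that the binding took effect is kubectl auth can-i with service-account impersonation (it should answer "yes"):

 ~]# kubectl auth can-i create pods/exec --as=system:serviceaccount:default:heketi-service-account
 yes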

Deploy the Heketi service (heketi-deployment-svc.yaml):

Change the Heketi service's HEKETI_ADMIN_KEY environment variable to your own value (do not keep the default).

kind: Deployment
apiVersion: apps/v1
metadata:
  name: deploy-heketi
  labels:
    glusterfs: heketi-deployment
    deploy-heketi: heketi-deployment
  annotations:
    description: Defines how to deploy Heketi
spec:
  selector:
    matchLabels:
      name: deploy-heketi
  replicas: 1
  template:
    metadata:
      name: deploy-heketi
      labels:
        name: deploy-heketi
        glusterfs: heketi-pod
    spec:
      serviceAccountName: heketi-service-account
      containers:
      - image: 192.168.XXXX/library/heketi
        name: deploy-heketi
        env:
        - name: HEKETI_EXECUTOR
          value: kubernetes
        - name: HEKETI_FSTAB
          value: "/var/lib/heketi/fstab"
        - name: HEKETI_SNAPSHOT_LIMIT
          value: '14'
        - name: HEKETI_CLI_SERVER
          value: "http://localhost:8080"
        - name: HEKETI_ADMIN_KEY
          value: "admin123"
        - name: HEKETI_KUBE_GLUSTER_DAEMONSET
          value: "y"
        ports:
        - containerPort: 8080
        volumeMounts:
        - name: db
          mountPath: "/var/lib/heketi"
        readinessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 3
          httpGet:
            path: "/hello"
            port: 8080
        livenessProbe:
          timeoutSeconds: 3
          initialDelaySeconds: 30
          httpGet:
            path: "/hello"
            port: 8080
      volumes:
      - name: db
        hostPath:
          path: "/heketi-data"
---
kind: Service
apiVersion: v1
metadata:
  name: deploy-heketi
  labels:
    glusterfs: heketi-service
    deploy-heketi: support
  annotations:
    description: Exposes Heketi Service
spec:
  type: NodePort
  selector:
    name: deploy-heketi
  ports:
  - name: deploy-heketi
    port: 8080
    targetPort: 8080
    nodePort: 38080

Apply the manifest and confirm the Heketi pod is running:

 ~]# kubectl apply -f heketi-deployment-svc.yaml 
 ~]# kubectl get po |grep heketi
 deploy-heketi-64558d59d7-vz56v   1/1     Running   0          2d22h
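
To confirm Heketi is reachable through the NodePort, the unauthenticated /hello endpoint can be queried (a hedged check; substitute a real node IP for the placeholder). It should return HTTP 200 with a short greeting:

 ~]# curl http://192.XXXX.54:38080/hello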

Setting Up the GlusterFS Cluster for Heketi

Before Heketi can manage the GlusterFS cluster, it must be given the cluster's layout. This is done with a topology.json file that defines the GlusterFS cluster.

Heketi requires the GlusterFS cluster to have at least 3 nodes.

The topology.json configuration file:

clusters.nodes.node.hostnames.manage: the node name or hostname

clusters.nodes.node.hostnames.storage: the node IP address

clusters.nodes.node.devices: the raw block devices to use; several devices can be listed, and Heketi automatically creates the PVs, VGs, and LVs on them.

{ 
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "linshi-k8s-clusterfs-55"
              ],
              "storage": [
                "192.XXXX.55"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdb"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "linshi-k8s-clusterfs-56"
              ],
              "storage": [
                "192.XXXX.56"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdb"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "linshi-k8s-clusterfs-58"
              ],
              "storage": [
                "192.XXXX.58"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdb"
          ]
        } 
      ]
    }
  ]
}
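
The topology.json file must be available inside the Heketi pod before it can be loaded; one hedged way to get it there (reusing the pod name from above) is kubectl cp:

 ~]# kubectl cp topology.json deploy-heketi-64558d59d7-vz56v:/var/lib/heketi/topology.json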

Exec into the Heketi container and use the heketi-cli command-line tool to load the GlusterFS cluster configuration:

~]# kubectl exec -it deploy-heketi-64558d59d7-vz56v -- /bin/bash
~]# export HEKETI_CLI_SERVER=http://192.XXXX.54:38080
~]# cd /var/lib/heketi/
~]# heketi-cli topology load --json=topology.json --user admin --secret admin123
Creating cluster ... ID: 145a1f5fd81b8299deb2f885efeefa4c
        Allowing file volumes on cluster.
        Allowing block volumes on cluster.
        Creating node linshi-k8s-clusterfs-55 ... ID: dded86116e8092e7034193b17db89e7c
                Adding device /dev/sdb ... OK
        Creating node linshi-k8s-clusterfs-56 ... ID: b87c03ec7f0879979b46cdaaf52b7b11
                Adding device /dev/sdb ... OK
        Creating node linshi-k8s-clusterfs-58 ... ID: 8c49ae684daf6806e0fd8d3e71e1768b
                Adding device /dev/sdb ... OK

After this step, Heketi has created the GlusterFS cluster and has successfully created a PV and a VG on the /dev/sdb disk of each GlusterFS node.

~]# pvs
  PV         VG                                  Fmt  Attr PSize   PFree  
  /dev/sda2  centos_linshi-k8s-clusterfs-55      lvm2 a--  <29.00g      0 
  /dev/sdb   vg_49f62450516d6cfd0280d994b40b70ca lvm2 a--   39.87g <38.86g
~]# vgs
  VG                                  #PV #LV #SN Attr   VSize   VFree  
  centos_linshi-k8s-clusterfs-55        1   1   0 wz--n- <29.00g      0 
  vg_49f62450516d6cfd0280d994b40b70ca   1   2   0 wz--n-  39.87g <38.86g

The following command shows Heketi's topology, with detailed node and device information including disk size and free space. At this point no Volumes or Bricks have been created yet.

~]# heketi-cli topology info --user admin --secret admin123 

Cluster Id: 145a1f5fd81b8299deb2f885efeefa4c

    File:  true
    Block: true

    Volumes:

    Nodes:

        Node Id: 8c49ae684daf6806e0fd8d3e71e1768b
        State: online
        Cluster Id: 145a1f5fd81b8299deb2f885efeefa4c
        Zone: 1
        Management Hostnames: linshi-k8s-clusterfs-58
        Storage Hostnames: 192.XXXX.58
        Devices:
                Id:7679bedee4b8208721f5d3bc71c31948   State:online    Size (GiB):39      Used (GiB):0       Free (GiB):39      
                        Known Paths: /dev/sdb
                        Bricks:

        Node Id: b87c03ec7f0879979b46cdaaf52b7b11
        State: online
        Cluster Id: 145a1f5fd81b8299deb2f885efeefa4c
        Zone: 1
        Management Hostnames: linshi-k8s-clusterfs-56
        Storage Hostnames: 192.XXXX.56
        Devices:
                Id:2e3aaa5d446899a751d086005e57a361   State:online    Size (GiB):39      Used (GiB):0       Free (GiB):39      
                        Known Paths: /dev/sdb
                        Bricks:

        Node Id: dded86116e8092e7034193b17db89e7c
        State: online
        Cluster Id: 145a1f5fd81b8299deb2f885efeefa4c
        Zone: 1
        Management Hostnames: linshi-k8s-clusterfs-55
        Storage Hostnames: 192.XXXX.55
        Devices:
                Id:6d1ce90e0ff01dc926f630a955f90ccc   State:online    Size (GiB):39      Used (GiB):0       Free (GiB):39      
                        Known Paths: /dev/sdb
                        Bricks:

Defining a StorageClass

Create the Secret object:

## 1. Create the Secret manifest
cat > glusterfs_secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
  namespace: default
data:
  # base64 encoded password. E.g.: echo -n "admin123" | base64
  key: YWRtaW4xMjM=
type: kubernetes.io/glusterfs
EOF

## 2. Apply it
kubectl apply -f glusterfs_secret.yaml

storageclass-gluster-heketi.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-heketi
provisioner: kubernetes.io/glusterfs
reclaimPolicy: Delete
allowVolumeExpansion: true
parameters:
  resturl: "http://192.XXXX.54:38080"
  restauthenabled: "true"
  restuser: "admin"
  secretNamespace: "default"
  secretName: "heketi-secret"

~]# kubectl apply -f storageclass-gluster-heketi.yaml 
~]# kubectl get sc
NAME             PROVISIONER               RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
gluster-heketi   kubernetes.io/glusterfs   Delete          Immediate           true                   2d23h
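
The kubernetes.io/glusterfs provisioner also accepts an optional volumetype parameter that controls the GlusterFS volume layout. A hedged example of an extra entry that could be added under parameters in storageclass-gluster-heketi.yaml (replicate:3 means 3-way replication; none produces a plain distributed volume):

  # optional addition under parameters:
  volumetype: "replicate:3"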

Defining a PVC

The definition below requests 1Gi of storage and uses the gluster-heketi StorageClass. As soon as the PVC is created, it triggers Heketi to do the following:

1. Create the Bricks and then the Volume on the GlusterFS cluster.

2. Once the PVC is provisioned, the PV is created automatically.

3. The PV's Endpoints and Path are likewise wired up to GlusterFS automatically by Heketi.

pvc-gluster-heketi.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-gluster-heketi
spec:
  storageClassName: gluster-heketi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

~]# kubectl apply -f pvc-gluster-heketi.yaml 
~]# kubectl get pvc
NAME                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
pvc-gluster-heketi   Bound    pvc-b8cf1063-122f-436d-ac5a-ff5862d66846   1Gi        RWO            gluster-heketi   2d23h

Note: the PVC sat in Pending for quite a while before it became Bound (see the FAQ below if it never binds).

~]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                        STORAGECLASS     REASON   AGE
pvc-b8cf1063-122f-436d-ac5a-ff5862d66846   1Gi        RWO            Delete           Bound    default/pvc-gluster-heketi   gluster-heketi            2d23h
~]# kubectl describe pv pvc-b8cf1063-122f-436d-ac5a-ff5862d66846 
Name:            pvc-b8cf1063-122f-436d-ac5a-ff5862d66846
Labels:          <none>
Annotations:     Description: Gluster-Internal: Dynamically provisioned PV
                 gluster.kubernetes.io/heketi-volume-id: 20bd5715aed57741c866152bc1a00835
                 gluster.org/type: file
                 kubernetes.io/createdby: heketi-dynamic-provisioner
                 pv.beta.kubernetes.io/gid: 2000
                 pv.kubernetes.io/bound-by-controller: yes
                 pv.kubernetes.io/provisioned-by: kubernetes.io/glusterfs
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:    gluster-heketi
Status:          Bound
Claim:           default/pvc-gluster-heketi
Reclaim Policy:  Delete
Access Modes:    RWO
VolumeMode:      Filesystem
Capacity:        1Gi
Node Affinity:   <none>
Message:         
Source:
    Type:                Glusterfs (a Glusterfs mount on the host that shares a pod's lifetime)
    EndpointsName:       glusterfs-dynamic-b8cf1063-122f-436d-ac5a-ff5862d66846
    EndpointsNamespace:  default
    Path:                vol_20bd5715aed57741c866152bc1a00835
    ReadOnly:            false
Events:                  <none>
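
At this point the backing GlusterFS volume exists. A hedged cross-check from inside the Heketi pod and one of the GlusterFS pods (pod names and the volume ID reused from the output above):

~]# kubectl exec -it deploy-heketi-64558d59d7-vz56v -- heketi-cli volume list --user admin --secret admin123
~]# kubectl exec -it glusterfs-2lzr7 -- gluster volume info vol_20bd5715aed57741c866152bc1a00835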

Defining a Pod That Uses the PVC

Define a Pod named centos-demo that uses the PVC pvc-gluster-heketi, mounted inside the container at /host/gluster.

apiVersion: v1
kind: Pod
metadata:
  name: centos-demo
  labels:
    type: os-demo 
    version: 1.0.0
  namespace: 'default'
spec:
  restartPolicy: OnFailure
  nodeName: linshi-k8s-54
  containers:
  - name: centos
    image: 192.XXXX.101/library/centos
    imagePullPolicy: IfNotPresent
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 200m
        memory: 256Mi
    securityContext:
      privileged: true
    volumeMounts:
    - name: "proc-dir"
      mountPath: "/host/proc"
    - name: "opt-dir"
      mountPath: "/host/opt"
    - name: "etc-dir"
      mountPath: "/host/etc"
    - name: gluster-volume
      mountPath: "/host/gluster"
  volumes:
  - name: "proc-dir"
    hostPath:
      path: /proc
  - name: "opt-dir"
    hostPath:
      path: /opt
  - name: "etc-dir"
    hostPath:
      path: /etc
  - name: gluster-volume
    persistentVolumeClaim:
      claimName: pvc-gluster-heketi

~]# kubectl apply -f centos-demo.yaml
~]# kubectl exec -it centos-demo -- /bin/bash
/]# df -hT
Filesystem                                          Type            Size  Used Avail Use% Mounted on
overlay                                             overlay          39G  2.3G   37G   6% /
tmpfs                                               tmpfs            64M     0   64M   0% /dev
tmpfs                                               tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/mapper/centos_linshi--k8s--54-root             xfs              20G  7.5G   13G  38% /host/etc
192.XXXX.56:vol_20bd5715aed57741c866152bc1a00835 fuse.glusterfs 1014M   43M  972M   5% /host/gluster
/dev/mapper/centos_linshi--k8s--54-var              xfs              39G  2.3G   37G   6% /etc/hosts
shm                                                 tmpfs            64M     0   64M   0% /dev/shm
tmpfs                                               tmpfs           256M   12K  256M   1% /run/secrets/kubernetes.io/serviceaccount
[root@centos-demo /]# cd /host/gluster/
[root@centos-demo gluster]# touch test-file
[root@centos-demo gluster]# echo "hello-glusterfs">test-file 
[root@centos-demo gluster]# cat test-file 
hello-glusterfs
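
As a hedged follow-up check, the file written above should also be visible on the volume's bricks. Heketi mounts its bricks under /var/lib/heketi/mounts by default, so from one of the GlusterFS pods (name reused from earlier output; the exact brick path differs per deployment):

~]# kubectl exec -it glusterfs-2lzr7 -- gluster volume status vol_20bd5715aed57741c866152bc1a00835
~]# kubectl exec -it glusterfs-2lzr7 -- find /var/lib/heketi/mounts -name test-file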

FAQ

Heketi topology load fails with a missing-token error

Running heketi-cli topology load --json=topology.json fails with:

Error: Unable to get topology information: Invalid JWT token: Token missing iss claim

Access to Heketi requires authentication, so the command must include the credentials:

heketi-cli topology load --json=topology.json --user admin --secret admin123

Heketi topology load fails to create the nodes

heketi-cli topology load --json=topology.json --user admin --secret admin123
Creating cluster ... ID: 18a2b447ded26ac0b169efc2d3703150
        Allowing file volumes on cluster.
        Allowing block volumes on cluster.
        Creating node linshi-k8s-clusterfs-55 ... Unable to create node: New Node doesn't have glusterd running
        Creating node linshi-k8s-clusterfs-56 ... Unable to create node: New Node doesn't have glusterd running
        Creating node linshi-k8s-clusterfs-58 ... Unable to create node: New Node doesn't have glusterd running

This happens because heketi-service-account lacks the required permissions; grant them with:

kubectl create clusterrolebinding heketi-gluster-admin --clusterrole=edit --serviceaccount=default:heketi-service-account

The PVC stays in Pending

# kubectl get persistentvolumeclaim -o wide
NAME                 STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS     AGE   VOLUMEMODE
pvc-gluster-heketi   Pending                                      gluster-heketi   8s    Filesystem

The PVC events showed:

Operation for "provision-default/pvc-gluster-heketi[2ee4f999-95fd-4982-a36d-88ec0c7c35ba]" failed. No retries permitted until 2024-04-15 14:06:21.357227113 +0800 CST m=+252629.514399927 (durationBeforeRetry 500ms). Error: failed to create volume: failed to create volume: see kube-controller-manager.log for details

Meanwhile, heketi-cli topology info --user admin --secret admin123 showed Bricks stuck in a creating state. In the end I deleted Heketi's data directory, recreated the GlusterFS service containers and the Heketi service container, reloaded the cluster from topology.json, recreated the StorageClass, and after that the PV could be created.

Creating the Pod that mounts the GlusterFS volume with kubectl apply -f centos-demo.yaml fails

The GlusterFS mount fails with: unknown filesystem type 'glusterfs'. The node that centos-demo was scheduled to did not have the GlusterFS client installed; install it with yum -y install glusterfs glusterfs-fuse.