Creating the Storage Pool and File System
To use CephFS, the MDS service (the metadata server daemon) must be running. This daemon manages the metadata for files stored on CephFS and coordinates access to the Ceph storage cluster. Therefore, to use the CephFS interface, at least one MDS instance must be deployed in the storage cluster.
1. Install ceph-mds
```bash
[root@ceph-1 ~]# yum -y install ceph-mds

# Start ceph-mds
[root@ceph-1 ~]# systemctl start ceph-mds@ceph-1.service
[root@ceph-1 ~]# systemctl status ceph-mds@ceph-1
● ceph-mds@ceph-1.service - Ceph metadata server daemon
   Loaded: loaded (/usr/lib/systemd/system/ceph-mds@.service; disabled; vendor preset: disabled)
   Active: active (running) since Wed 2023-06-28 17:03:38 CST; 7s ago
 Main PID: 19708 (ceph-mds)
   CGroup: /system.slice/system-ceph\x2dmds.slice/ceph-mds@ceph-1.service
           └─19708 /usr/bin/ceph-mds -f --cluster ceph --id ceph-1 --setuser ceph --setgroup ceph

Jun 28 17:03:38 ceph-1 systemd[1]: Started Ceph metadata server daemon.
Jun 28 17:03:38 ceph-1 ceph-mds[19708]: starting mds.ceph-1 at
```
2. Check the cluster and ceph-mds status
```bash
# Check the cluster status
[root@ceph-1 ~]# ceph -s
  cluster:
    id:     bca9e238-6c4b-4d0f-ab32-6ed4d780d886
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph-1,ceph-2,ceph-3 (age 23m)
    mgr: ceph-1(active, since 3d), standbys: ceph-2, ceph-3
    mds: 1 up:standby
    osd: 9 osds: 9 up (since 2d), 9 in (since 2w)

  data:
    pools:   11 pools, 417 pgs
    objects: 24.36k objects, 18 GiB
    usage:   46 GiB used, 39 GiB / 86 GiB avail
    pgs:     417 active+clean

# Check the ceph-mds status
[root@ceph-1 ~]# ceph mds stat
 1 up:standby
```
ceph-mds state reference:
- `up:active`: the MDS instance is running normally and actively serving metadata.
- `up:standby`: the MDS instance is running normally but on standby, ready to take over the metadata service when needed.
- `up:replay`: the MDS instance is replaying its journal, which typically happens right after an MDS starts.
- `down`: the MDS instance is currently down.
3. Create the pools for CephFS
- Before using CephFS, a file system must first be created in the cluster, with a metadata pool and a data pool assigned to it. For this test we create a file system named `cephfs`, using `k8s-cephfs-metadata` as the metadata pool and `k8s-cephfs` as the data pool.
```bash
# Create the pools
[root@ceph-1 ~]# ceph osd pool create k8s-cephfs 32
pool 'k8s-cephfs' created
[root@ceph-1 ~]# ceph osd pool create k8s-cephfs-metadata 32
pool 'k8s-cephfs-metadata' created

# Create the file system
[root@ceph-1 ceph]# ceph fs new cephfs k8s-cephfs-metadata k8s-cephfs
new fs with metadata pool 18 and data pool 17

# List the file systems
[root@ceph-1 ceph]# ceph fs ls
name: cephfs, metadata pool: k8s-cephfs-metadata, data pools: [k8s-cephfs ]

# Check the CephFS status
[root@ceph-1 ceph]# ceph fs status
cephfs - 0 clients
======
RANK  STATE    MDS       ACTIVITY     DNS    INOS
 0    active  ceph-1   Reqs:    0 /s    10     13
        POOL           TYPE     USED  AVAIL
k8s-cephfs-metadata  metadata  1024k  7974M
    k8s-cephfs         data       0   7974M
MDS version: ceph version 15.2.8 (bdf3eebcd22d7d0b3dd4d5501bee5bac354d5b55) octopus (stable)
```
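Before wiring the file system into Kubernetes, a quick sanity check from a cluster node can be worthwhile. The following is a minimal sketch using the kernel CephFS client; it assumes the admin keyring is available on this node, and the mount point is illustrative:

```bash
# Mount the new file system with the kernel client, write a file, unmount
[root@ceph-1 ~]# mkdir -p /mnt/cephfs-test
[root@ceph-1 ~]# mount -t ceph 100.86.13.9:6789:/ /mnt/cephfs-test \
    -o name=admin,secret=$(ceph auth get-key client.admin)
[root@ceph-1 ~]# echo ok > /mnt/cephfs-test/ping && cat /mnt/cephfs-test/ping
ok
[root@ceph-1 ~]# umount /mnt/cephfs-test
```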
3.1. Deleting a CephFS file system
- If the file system was created incorrectly, it can be deleted with the following steps.
1. Fail the MDS daemons:
```bash
[root@ceph-1 ceph]# ceph fs fail cephfs
cephfs marked not joinable; MDS cannot join the cluster. All MDS ranks marked failed.
```
- This command marks all MDS daemons as failed so that the file system is allowed to be deleted.
2. Confirm the MDS daemons have failed:
```bash
[root@ceph-1 ceph]# ceph fs status
cephfs - 0 clients
======
RANK  STATE   MDS  ACTIVITY  DNS  INOS
 0    failed
        POOL           TYPE     USED  AVAIL
    k8s-cephfs       metadata  1024k  7979M
k8s-cephfs-metadata    data       0   7979M
STANDBY MDS
   ceph-1
MDS version: ceph version 15.2.8 (bdf3eebcd22d7d0b3dd4d5501bee5bac354d5b55) octopus (stable)
```
- Run this command to view the file system status and confirm that every MDS daemon is in the `failed` state.
3. Remove the Ceph file system:
```bash
[root@ceph-1 ceph]# ceph fs rm cephfs --yes-i-really-mean-it
```
- This command removes the specified Ceph file system. Note that its metadata and data pools are not deleted automatically; they are cleaned up in the next step.
4. Check for and clean up leftover pools:
```bash
[root@ceph-1 ceph]# ceph osd pool ls detail
pool 1 'device_health_metrics' replicated size 2 min_size 1 crush_rule 0 object_hash rjenkins pg_num 1 pgp_num 1 autoscale_mode off last_change 375 flags hashpspool stripe_width 0 pg_num_min 1 application mgr_devicehealth
pool 2 '.rgw.root' replicated size 2 min_size 1 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode off last_change 282 flags hashpspool stripe_width 0 application rgw
pool 3 'sh-puxi.rgw.control' replicated size 2 min_size 1 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode off last_change 282 flags hashpspool stripe_width 0
pool 4 'sh-puxi.rgw.log' replicated size 2 min_size 1 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode off last_change 282 flags hashpspool stripe_width 0
pool 5 'sh-puxi.rgw.meta' replicated size 2 min_size 1 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode off last_change 282 flags hashpspool stripe_width 0
pool 6 'sh-puxi.rgw.buckets.index' replicated size 2 min_size 1 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode off last_change 282 flags hashpspool stripe_width 0
pool 7 'sh-puxi.rgw.buckets.data' replicated size 2 min_size 1 crush_rule 0 object_hash rjenkins pg_num 64 pgp_num 64 autoscale_mode off last_change 282 flags hashpspool stripe_width 0 application rgw
pool 8 'sh-puxi.rgw.buckets.non-ec' replicated size 2 min_size 1 crush_rule 0 object_hash rjenkins pg_num 8 pgp_num 8 autoscale_mode off last_change 282 flags hashpspool stripe_width 0
pool 9 'volumes' replicated size 2 min_size 1 crush_rule 0 object_hash rjenkins pg_num 256 pgp_num 256 autoscale_mode off last_change 387 flags hashpspool,selfmanaged_snaps stripe_width 0 application cephfs
pool 13 'k8s-volumes' replicated size 2 min_size 1 crush_rule 0 object_hash rjenkins pg_num 64 pgp_num 64 autoscale_mode off last_change 404 flags hashpspool,selfmanaged_snaps stripe_width 0 application rbd
pool 15 'k8s-cephfs' replicated size 2 min_size 1 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode off last_change 425 flags hashpspool stripe_width 0 pg_autoscale_bias 4 pg_num_min 16 recovery_priority 5 application cephfs
pool 16 'k8s-cephfs-metadata' replicated size 2 min_size 1 crush_rule 0 object_hash rjenkins pg_num 32 pgp_num 32 autoscale_mode off last_change 425 flags hashpspool stripe_width 0 application cephfs
```
- Run this command to list the details of every pool and locate the wrongly created ones (here `k8s-cephfs` and `k8s-cephfs-metadata`) so they can be removed.
- Delete the pools (the pool name must be typed twice, together with `--yes-i-really-really-mean-it`, as a safety check):
```bash
[root@ceph-1 ceph]# ceph osd pool delete k8s-cephfs k8s-cephfs --yes-i-really-really-mean-it
pool 'k8s-cephfs' removed
[root@ceph-1 ceph]# ceph osd pool delete k8s-cephfs-metadata k8s-cephfs-metadata --yes-i-really-really-mean-it
pool 'k8s-cephfs-metadata' removed
```
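If the delete commands above are rejected with `Error EPERM`, the monitors are refusing pool deletion, which is the default safeguard. On Octopus this can be lifted temporarily through the centralized config and restored afterwards; a minimal sketch:

```bash
# Allow pool deletion, remove the pool, then restore the safeguard
[root@ceph-1 ceph]# ceph config set mon mon_allow_pool_delete true
[root@ceph-1 ceph]# ceph osd pool delete k8s-cephfs k8s-cephfs --yes-i-really-really-mean-it
[root@ceph-1 ceph]# ceph config set mon mon_allow_pool_delete false
```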
4. Create and authorize a user for CephFS
```bash
[root@ceph-1 ceph]# ceph auth get-or-create client.k8s-cephfs mon 'allow r' mds 'allow' osd 'allow rwx pool=k8s-cephfs, allow rwx pool=k8s-cephfs-metadata' -o /etc/ceph/ceph.client.k8s-cephfs-user.keyring
[root@ceph-1 ceph]# ceph auth get client.k8s-cephfs
exported keyring for client.k8s-cephfs
[client.k8s-cephfs]
        key = AQDbApxkkTtyHxAAKRf08sul0MOhDx1pGVHKJg==
        caps mds = "allow"
        caps mon = "allow r"
        caps osd = "allow rwx pool=k8s-cephfs, allow rwx pool=k8s-cephfs-metadata"
```
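When only the raw key is needed later (for example when building the Kubernetes Secret in step 8), `ceph auth print-key` prints it without the keyring wrapper:

```bash
# Print just the key of the client.k8s-cephfs user
[root@ceph-1 ceph]# ceph auth print-key client.k8s-cephfs
AQDbApxkkTtyHxAAKRf08sul0MOhDx1pGVHKJg==
```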
5. Get the Ceph cluster monitor information
- What we mainly need from this output are the fsid and the monitor node addresses.
```bash
[root@ceph-1 ceph]# ceph mon dump
dumped monmap epoch 2
epoch 2
fsid bca9e238-6c4b-4d0f-ab32-6ed4d780d886
last_changed 2023-08-03T16:28:09.828402+0800
created 2023-08-03T16:27:20.841546+0800
min_mon_release 15 (octopus)
0: [v2:100.86.13.9:3300/0,v1:100.86.13.9:6789/0] mon.ceph-1
1: [v2:100.86.13.95:3300/0,v1:100.86.13.95:6789/0] mon.ceph-2
2: [v2:100.86.13.243:3300/0,v1:100.86.13.243:6789/0] mon.ceph-3
```
6. Install ceph-common on the k8s cluster
- ceph-common must be installed on every k8s node.
```bash
[root@k8s-master ~]# yum -y install ceph-common
```
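A quick check that the client package matches the cluster release (Octopus here) can save debugging later. The output below is illustrative and assumes the yum repository ships the same 15.2.x build as the cluster:

```bash
# Verify the installed client version against the cluster release
[root@k8s-master ~]# ceph --version
ceph version 15.2.8 (bdf3eebcd22d7d0b3dd4d5501bee5bac354d5b55) octopus (stable)
```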
7. Copy the Ceph configuration to the k8s cluster
- The Ceph configuration files must be copied to every k8s node.
```bash
[root@ceph-1 ~]# scp -r /etc/ceph root@100.86.13.200:/etc
```
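The command above only covers a single node; a small loop (the node IPs below are hypothetical) distributes the configuration to the rest of the cluster:

```bash
# Copy /etc/ceph to every k8s node; replace the IPs with your own node list
for node in 100.86.13.200 100.86.13.201 100.86.13.202; do
    scp -r /etc/ceph root@${node}:/etc
done
```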
7.1. Run a ceph command on a k8s node to verify access
```bash
[root@k8s-master ~]# ceph -s
  cluster:
    id:     bca9e238-6c4b-4d0f-ab32-6ed4d780d886
    health: HEALTH_OK

  services:
    mon: 3 daemons, quorum ceph-1,ceph-2,ceph-3 (age 3d)
    mgr: ceph-1(active, since 3d), standbys: ceph-2, ceph-3
    mds: cephfs:1 {0=ceph-1=up:active}
    osd: 9 osds: 9 up (since 3d), 9 in (since 3d)

  data:
    pools:   11 pools, 417 pgs
    objects: 31 objects, 2.2 KiB
    usage:   9.1 GiB used, 81 GiB / 90 GiB avail
    pgs:     417 active+clean
```
8. Store the key of the Ceph user k8s-cephfs in a k8s Secret
8.1. Base64-encode the key
```bash
[root@k8s-master ~]# echo "AQDbApxkkTtyHxAAKRf08sul0MOhDx1pGVHKJg==" | base64
QVFEYkFweGtrVHR5SHhBQUtSZjA4c3VsME1PaER4MXBHVkhLSmc9PQo=
```
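Note that plain `echo` appends a trailing newline, which ends up inside the encoded value. The newline-free form is safer if a consumer uses the value verbatim:

```bash
# Encode without the trailing newline that plain `echo` would add
[root@k8s-master ~]# echo -n "AQDbApxkkTtyHxAAKRf08sul0MOhDx1pGVHKJg==" | base64
QVFEYkFweGtrVHR5SHhBQUtSZjA4c3VsME1PaER4MXBHVkhLSmc9PQ==
```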
8.2. Create the Secret
Notes:
- `type: "kubernetes.io/rbd"` — the Secret type must be exactly this value.
- The `namespace` must match the namespace of the Pods that will use the Secret.
```bash
[root@k8s-master ~]# cat k8s-cephfs-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: k8s-cephfs-keyring
  namespace: default
type: "kubernetes.io/rbd"
data:
  key: "QVFEYkFweGtrVHR5SHhBQUtSZjA4c3VsME1PaER4MXBHVkhLSmc9PQo="
```
8.3. Apply the manifest and view the Secret
```bash
[root@k8s-master ~]# kubectl apply -f k8s-cephfs-secret.yaml
secret/k8s-cephfs-keyring created
[root@k8s-master ~]# kubectl get secrets
NAME                  TYPE                                  DATA   AGE
default-token-rhzrf   kubernetes.io/service-account-token   3      2d20h
k8s-cephfs-keyring    kubernetes.io/rbd                     1      6s
```
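As a quick round-trip check, the stored key can be decoded back and compared against the Ceph keyring from step 4:

```bash
# Decode the key held in the Secret; it should match `ceph auth print-key`
[root@k8s-master ~]# kubectl get secret k8s-cephfs-keyring -o jsonpath='{.data.key}' | base64 -d
AQDbApxkkTtyHxAAKRf08sul0MOhDx1pGVHKJg==
```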
9. Verify usability
9.1. Create a static PV
9.1.1. Create the PV
```bash
[root@k8s-master ~]# cat nginx-pv-2g.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: storageclass-pv-2g
spec:
  accessModes: ["ReadWriteOnce"]
  capacity:
    storage: 2Gi
  persistentVolumeReclaimPolicy: Retain
  rbd:
    monitors: ["100.86.13.9:6789", "100.86.13.95:6789", "100.86.13.243:6789"]
    pool: k8s-cephfs
    image: storageclass-pv-2g
    user: k8s-cephfs
    secretRef:
      name: k8s-cephfs-keyring
    fsType: xfs
```
9.1.2. Create the PVC
```bash
[root@k8s-master ~]# cat nginx-pvc-2g.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: storageclass-pv-2g
  namespace: default
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 2Gi
```
9.1.3. Create the Pod
```bash
[root@k8s-master ~]# cat nginx-cephfs.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-cephfs
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: web
      mountPath: /usr/share/nginx/html/
  volumes:
  - name: web
    persistentVolumeClaim:
      claimName: storageclass-pv-2g
      readOnly: false
```
9.1.4. Create the RBD image
The static PV above references an image named storageclass-pv-2g, so it must be created by hand. The object-map, fast-diff, and deep-flatten features are disabled because the kernel RBD client cannot map images that have them enabled.
```bash
[root@k8s-master ~]# rbd create storageclass-pv-2g -s 2G -p k8s-cephfs
[root@k8s-master ~]# rbd feature disable k8s-cephfs/storageclass-pv-2g object-map fast-diff deep-flatten
[root@k8s-master ~]# rbd -p k8s-cephfs ls
kubernetes-dynamic-pvc-5b3c7b71-3501-11ee-8df3-c6250ce21846
kubernetes-dynamic-pvc-5c42d493-3502-11ee-8df3-c6250ce21846
pods-volumes
storageclass-pv-2g
```
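To confirm which features remain on the image, `rbd info` can be used. The output below is illustrative; exclusive-lock may also remain enabled, which reasonably recent kernels support:

```bash
# Inspect the image to confirm the kernel-incompatible features are gone
[root@k8s-master ~]# rbd info k8s-cephfs/storageclass-pv-2g | grep features
        features: layering, exclusive-lock
```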
9.1.5. Start and verify
```bash
[root@k8s-master ~]# kubectl apply -f nginx-pv-2g.yaml
persistentvolume/storageclass-pv-2g created
[root@k8s-master ~]# kubectl apply -f nginx-pvc-2g.yaml
persistentvolumeclaim/storageclass-pv-2g created
[root@k8s-master ~]# kubectl apply -f nginx-cephfs.yaml
pod/nginx-cephfs created
[root@k8s-master ~]# kubectl exec -it nginx-cephfs /bin/bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
root@nginx-cephfs:/# mount | grep rbd
/dev/rbd1 on /usr/share/nginx/html type xfs (rw,relatime,attr2,inode64,sunit=8192,swidth=8192,noquota)
root@nginx-cephfs:/# echo 'hello 2023-08-07' > /usr/share/nginx/html/index.html
root@nginx-cephfs:/# curl 127.0.0.1:80
hello 2023-08-07
```
9.2. Create a dynamic PV
9.2.1. Install git, clone the external-storage repository, and deploy rbd-provisioner
- external-storage is a GitHub repository maintained by the Kubernetes community; it provides external storage provisioners that enable dynamic provisioning in Kubernetes.
```bash
[root@k8s-master ~]# yum -y install git
[root@k8s-master ~]# git clone https://github.com/kubernetes-incubator/external-storage.git
Cloning into 'external-storage'...
remote: Enumerating objects: 64319, done.
remote: Total 64319 (delta 0), reused 0 (delta 0), pack-reused 64319
Receiving objects: 100% (64319/64319), 113.79 MiB | 6.05 MiB/s, done.
Resolving deltas: 100% (29663/29663), done.
[root@k8s-master ~]# cd external-storage/ceph/rbd/deploy/rbac/
[root@k8s-master rbac]# ll
total 24
-rw-r--r-- 1 root root 275 Aug  7 16:22 clusterrolebinding.yaml
-rw-r--r-- 1 root root 743 Aug  7 16:22 clusterrole.yaml
-rw-r--r-- 1 root root 484 Aug  7 16:22 deployment.yaml
-rw-r--r-- 1 root root 255 Aug  7 16:22 rolebinding.yaml
-rw-r--r-- 1 root root 260 Aug  7 16:22 role.yaml
-rw-r--r-- 1 root root  70 Aug  7 16:22 serviceaccount.yaml
[root@k8s-master rbac]# cat rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rbd-provisioner
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: rbd-provisioner
subjects:
- kind: ServiceAccount
  name: rbd-provisioner
  namespace: default
[root@k8s-master rbac]# cat clusterrolebinding.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
subjects:
  - kind: ServiceAccount
    name: rbd-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: rbd-provisioner
  apiGroup: rbac.authorization.k8s.io
[root@k8s-master rbac]# cat clusterrole.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rbd-provisioner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["kube-dns","coredns"]
    verbs: ["list", "get"]
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
[root@k8s-master rbac]# cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rbd-provisioner
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rbd-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: rbd-provisioner
    spec:
      containers:
      - name: rbd-provisioner
        image: "quay.io/external_storage/rbd-provisioner:latest"
        env:
        - name: PROVISIONER_NAME
          value: ceph.com/rbd
      serviceAccount: rbd-provisioner
```
Apply all the YAML files in the rbac directory:
```bash
[root@k8s-master deploy]# pwd
/root/external-storage/ceph/rbd/deploy
[root@k8s-master deploy]# kubectl apply -f rbac/
clusterrole.rbac.authorization.k8s.io/rbd-provisioner created
clusterrolebinding.rbac.authorization.k8s.io/rbd-provisioner created
deployment.apps/rbd-provisioner created
role.rbac.authorization.k8s.io/rbd-provisioner created
rolebinding.rbac.authorization.k8s.io/rbd-provisioner created
serviceaccount/rbd-provisioner created

# If the image pull is slow, you can pull the image manually
[root@k8s-master deploy]# docker pull quay.io/external_storage/rbd-provisioner:latest
latest: Pulling from external_storage/rbd-provisioner
256b176beaff: Pull complete
b4ecb0f03fba: Pull complete
0ce433cb7726: Pull complete
Digest: sha256:94fd36b8625141b62ff1addfa914d45f7b39619e55891bad0294263ecd2ce09a
Status: Downloaded newer image for quay.io/external_storage/rbd-provisioner:latest
quay.io/external_storage/rbd-provisioner:latest
```
Check that the provisioner is running:
```bash
[root@k8s-master ~]# kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
pod-use-scret-keyring              1/1     Running   0          71m
rbd-provisioner-76f6bc6669-dhctf   1/1     Running   0          99s
```
9.2.2. Deploy the StorageClass
Parameter notes:
- provisioner: the storage provisioner used to dynamically provision PVs; in this example `ceph.com/rbd` is served by the rbd-provisioner deployed above.
- reclaimPolicy: what happens to the data persisted in a PV once the PVC bound through this StorageClass is deleted; `Retain` keeps the PV's data instead of cleaning it up automatically.
- parameters: configuration passed to the storage provisioner (`ceph.com/rbd`):
  - monitors: the Ceph cluster monitor addresses, comma-separated. Monitors are the core components of a Ceph cluster, maintaining cluster state and data placement; three addresses are listed here.
  - adminId: the Ceph user used to create RBD images in the pool; it must have the appropriate permissions.
  - adminSecretName: the name of the Secret holding the admin user's credentials for connecting to the Ceph cluster.
  - adminSecretNamespace: the namespace containing that admin Secret.
  - pool: the Ceph pool in which RBD images are created.
  - userId: the Ceph user used when mapping RBD images for mounting.
  - userSecretName: the name of the Secret holding that user's credentials for mounting RBD images.
  - userSecretNamespace: the namespace containing that user Secret.
  - fsType: the file system type used when an RBD image is mounted into a Pod; here xfs.
  - imageFormat: the format of the created RBD images; here "2".
  - imageFeatures: the features enabled on created RBD images; here only "layering".
```bash
[root@k8s-master ~]# cat kubesphere-storageclass-cephfs.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: kubesphere-storageclass
provisioner: ceph.com/rbd
reclaimPolicy: Retain
parameters:
  monitors: 100.86.13.9:6789,100.86.13.95:6789,100.86.13.243:6789
  adminId: k8s-cephfs
  adminSecretName: k8s-cephfs-keyring
  adminSecretNamespace: default
  pool: k8s-cephfs
  userId: k8s-cephfs
  userSecretName: k8s-cephfs-keyring
  userSecretNamespace: default
  fsType: xfs
  imageFormat: "2"
  imageFeatures: "layering"
[root@k8s-master ~]# kubectl apply -f kubesphere-storageclass-cephfs.yaml
storageclass.storage.k8s.io/kubesphere-storageclass created
[root@k8s-master ~]# kubectl get sc
NAME                      PROVISIONER    RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
kubesphere-storageclass   ceph.com/rbd   Retain          Immediate           false                  6m18s
```
9.2.3. Create a PVC to verify that provisioning works
```bash
[root@k8s-master ~]# cat storageclass-pvc02-5g.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: storageclass-pvc02-5g
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 5Gi
  storageClassName: kubesphere-storageclass
[root@k8s-master ~]# kubectl apply -f storageclass-pvc02-5g.yaml
persistentvolumeclaim/storageclass-pvc02-5g created
[root@k8s-master ~]# kubectl get pvc
NAME                    STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS              AGE
storageclass-pvc02-5g   Bound    pvc-08aacf47-8f63-4836-939d-5a18674a1b3b   5Gi        RWO            kubesphere-storageclass   5s
```
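To exercise the dynamically provisioned volume end to end, a throwaway Pod can mount the claim; the Pod name and command here are illustrative:

```yaml
# busybox-sc-test.yaml — hypothetical pod that writes to the dynamic PVC
apiVersion: v1
kind: Pod
metadata:
  name: busybox-sc-test
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "echo ok > /data/ok && sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: storageclass-pvc02-5g
```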
9.3. Troubleshooting notes
9.3.1. PVC stuck in Pending state
```bash
[root@k8s-master ~]# kubectl get pvc
NAME                    STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS              AGE
storageclass-pvc02-5g   Pending                                      kubesphere-storageclass   59s
```
1. Check the PVC for error details:
- Nothing useful was found here.
```bash
[root@k8s-master ~]# kubectl describe pvc storageclass-pvc02-5g
Name:          storageclass-pvc02-5g
Namespace:     default
StorageClass:  kubesphere-storageclass
Status:        Pending
Volume:
Labels:        <none>
Annotations:   volume.beta.kubernetes.io/storage-provisioner: ceph.com/rbd
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Used By:       <none>
Events:
  Type    Reason                Age               From                         Message
  ----    ------                ---               ----                         -------
  Normal  ExternalProvisioning  3s (x2 over 13s)  persistentvolume-controller  waiting for a volume to be created, either by external provisioner "ceph.com/rbd" or manually created by system administrator
```
2. Check the rbd-provisioner logs:
- The logs contain the error: `selfLink was empty, can't make reference`.
```bash
[root@k8s-master ~]# kubectl logs rbd-provisioner-76f6bc6669-dhctf rbd-provisioner
...
I0807 08:40:21.519837       1 controller.go:987] provision "default/storageclass-pvc02-5g" class "kubesphere-storageclass": started
E0807 08:40:21.522716       1 controller.go:1004] provision "default/storageclass-pvc02-5g" class "kubesphere-storageclass": unexpected error getting claim reference: selfLink was empty, can't make reference
I0807 08:42:08.609146       1 controller.go:987] provision "default/storageclass-pvc02-5g" class "kubesphere-storageclass": started
E0807 08:42:08.611614       1 controller.go:1004] provision "default/storageclass-pvc02-5g" class "kubesphere-storageclass": unexpected error getting claim reference: selfLink was empty, can't make reference
```
3. Solution:
- This problem can occur on Kubernetes 1.20.x and later (the version used here is v1.21.14). Add the flag `--feature-gates=RemoveSelfLink=false` to `/etc/kubernetes/manifests/kube-apiserver.yaml`, wait for kube-apiserver to restart, then create the PVC again; this time it is provisioned successfully. A sketch of the manifest change follows.