【K8S】poststarthook/rbac/bootstrap-roles failed: not finished

1. Symptom: fetching the resource status returns nothing

kubectl get csr 

[root@K8S1 work]# kubectl get csr
No resources found

-- Dump the logs for inspection
journalctl -u kube-apiserver.service --no-pager > 1.log 
journalctl -u kubelet.service --no-pager > 2.log 

vi 2.log 
Jul 20 17:26:39 K8S1 kubelet[59786]: I0720 17:26:39.360809   59786 certificate_manager.go:270] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jul 20 17:26:39 K8S1 kubelet[59786]: E0720 17:26:39.364491   59786 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Unauthorized
Jul 20 17:26:40 K8S1 kubelet[59786]: E0720 17:26:40.167353   59786 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 20 17:26:45 K8S1 kubelet[59786]: E0720 17:26:45.169147   59786 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 20 17:26:50 K8S1 kubelet[59786]: E0720 17:26:50.170266   59786 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 20 17:26:55 K8S1 kubelet[59786]: E0720 17:26:55.171148   59786 kubelet.go:2349] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"



vi 1.log
Jul 20 13:43:46 K8S1 kube-apiserver[49383]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
Jul 20 13:43:46 K8S1 kube-apiserver[49383]: I0720 13:43:46.687771   49383 healthz.go:257] poststarthook/rbac/bootstrap-roles,poststarthook/scheduling/bootstrap-system-priority-classes check failed: readyz
Jul 20 13:43:46 K8S1 kube-apiserver[49383]: [-]poststarthook/rbac/bootstrap-roles failed: not finished
Jul 20 13:43:46 K8S1 kube-apiserver[49383]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: not finished
Jul 20 13:43:46 K8S1 kube-apiserver[49383]: I0720 13:43:46.729020   49383 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
Jul 20 13:43:46 K8S1 kube-apiserver[49383]: I0720 13:43:46.736063   49383 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
Jul 20 13:43:46 K8S1 kube-apiserver[49383]: I0720 13:43:46.736091   49383 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
Jul 20 13:43:46 K8S1 kube-apiserver[49383]: I0720 13:43:46.783838   49383 healthz.go:257] poststarthook/rbac/bootstrap-roles check failed: readyz
Jul 20 13:43:46 K8S1 kube-apiserver[49383]: [-]poststarthook/rbac/bootstrap-roles failed: not finished



2. Root cause analysis

The kubelet log shows "cannot create certificate signing request: Unauthorized": the bootstrap token the kubelet presents is not accepted by the apiserver (typically because the token in bootstrap.secret.yaml is missing, mistyped, or expired), so the CSR is never created and `kubectl get csr` stays empty.

vi bootstrap.secret.yaml

Generate a random token-id and token-secret and write them into bootstrap.secret.yaml:
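For reference, a bootstrap-token Secret in the upstream format looks like the sketch below. The Secret name suffix and `token-id` must both be the generated 6-character id; `bab759` here matches the requestor seen later in the CSR output, and the `token-secret` value is a placeholder for the generated 16-character secret:

```yaml
apiVersion: v1
kind: Secret
metadata:
  # Name must be "bootstrap-token-<token-id>"
  name: bootstrap-token-bab759
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  token-id: bab759
  token-secret: <generated-16-char-secret>   # placeholder, fill in the generated value
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
  auth-extra-groups: system:bootstrappers:default-node-token
```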

TOKEN_ID=$(head -c 30 /dev/urandom | od -An -t x | tr -dc a-f3-9 | cut -c 3-8)
TOKEN_SECRET=$(head -c 16 /dev/urandom | md5sum | head -c 16)

echo $TOKEN_ID $TOKEN_SECRET
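The two values combine into the full bootstrap token `<token-id>.<token-secret>`, which must match the format `[a-z0-9]{6}.[a-z0-9]{16}`. A quick sanity check (a sketch reusing the same generation commands as above) before writing the values into bootstrap.secret.yaml:

```shell
# Generate the token pair the same way as above and verify the combined
# token matches the bootstrap-token format [a-z0-9]{6}.[a-z0-9]{16}.
TOKEN_ID=$(head -c 30 /dev/urandom | od -An -t x | tr -dc a-f3-9 | cut -c 3-8)
TOKEN_SECRET=$(head -c 16 /dev/urandom | md5sum | head -c 16)
BOOTSTRAP_TOKEN="${TOKEN_ID}.${TOKEN_SECRET}"

# grep -Eq exits 0 only when the token is well-formed.
if echo "$BOOTSTRAP_TOKEN" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$'; then
  echo "token format OK"
else
  echo "token format INVALID" >&2
fi
```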

3. Reapply the configuration

kubectl delete -f bootstrap.secret.yaml

kubectl create -f bootstrap.secret.yaml

4. Re-check the cluster state

[root@K8S1 work]# kubectl get csr 
NAME        AGE   SIGNERNAME                                    REQUESTOR                 REQUESTEDDURATION   CONDITION
csr-lsj5m   0s    kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:bab759   <none>              Approved,Issued

The kubelet now successfully submits a CSR with the new bootstrap token, and the certificate is approved and issued.
