[kubernetes] Fixing a namespace stuck in the Terminating state

Problem background

After deploying rook-ceph on k8s, I wanted to delete the cluster and redeploy it, but used the wrong method, which left the rook-ceph namespace stuck in the Terminating state. kubectl get all -n rook-ceph showed no resources remaining, yet the namespace still could not be deleted.

Methods tried

  1. Deleting with the --force --grace-period=0 flags; this did not work:
    kubectl delete namespace rook-ceph --force --grace-period=0
  2. Using kubectl edit namespaces rook-ceph to remove the finalizer value; the edit was reported as saved, but kubectl get ns still showed the rook-ceph namespace, and its status was still Terminating. (A namespace's spec.finalizers can only be changed through the finalize subresource, which is why edits made this way do not stick.)
    Reference: https://segmentfault.com/a/1190000016924414

The method that finally worked

NAMESPACE=rook-ceph
kubectl proxy &
kubectl get namespace $NAMESPACE -o json | jq '.spec = {"finalizers":[]}' > temp.json
curl -k -H "Content-Type: application/json" -X PUT --data-binary @temp.json 127.0.0.1:8001/api/v1/namespaces/$NAMESPACE/finalize

After running the commands above, kubectl get ns showed that the rook-ceph namespace was gone. Finally, don't forget to kill the kubectl proxy process left running in the background.
References
https://stackoverflow.com/questions/52369247/namespace-stucked-as-terminating-how-i-removed-it
https://github.com/kubernetes/kubernetes/issues/60807#issuecomment-466278490

Problem analysis

A namespace usually cannot be deleted because some resource still lives in it, and that resource may itself be in a broken state.
When deleting the operator with kubectl delete -f operator.yaml, everything was reported as deleted, but the command hung. Deleting the resources one by one, the command kubectl delete crd clusters.ceph.rook.io also hung. Listing what remained showed that cluster.ceph.rook.io was still occupying the rook-ceph namespace:

[root@k8s-master rook]# kubectl api-resources --namespaced=true -o name | xargs -n 1 kubectl get --show-kind --ignore-not-found -n rook-ceph
Error from server (MethodNotAllowed): the server does not allow this method on the requested resource
Error from server (MethodNotAllowed): the server does not allow this method on the requested resource
NAME                             AGE
cluster.ceph.rook.io/rook-ceph   1d

Edit the CRD with kubectl edit crd clusters.ceph.rook.io:

# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apiextensions.k8s.io/v1beta1","kind":"CustomResourceDefinition","metadata":{"annotations":{},"name":"clusters.ceph.rook.io","namespace":""},"spec":{"group":"ceph.rook.io","names":{"kind":"Cluster","listKind":"ClusterList","plural":"clusters","shortNames":["rcc"],"singular":"cluster"},"scope":"Namespaced","version":"v1beta1"}}
  creationTimestamp: 2019-02-21T02:00:26Z
  deletionGracePeriodSeconds: 0
  deletionTimestamp: 2019-02-22T09:01:44Z
  finalizers:
  - customresourcecleanup.apiextensions.k8s.io
  generation: 1
  name: clusters.ceph.rook.io
  resourceVersion: "6630441"
  selfLink: /apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions/clusters.ceph.rook.io
  uid: 74fdfb7b-357c-11e9-9add-fa163efae430
spec:
  additionalPrinterColumns:
  - JSONPath: .metadata.creationTimestamp
    description: |-
      CreationTimestamp is a timestamp representing the server time when this object was created. It is not guaranteed to be set in happens-before order across separate operations. Clients may not set this value. It is represented in RFC3339 form and is in UTC.

      Populated by the system. Read-only. Null for lists. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
    name: Age
    type: date
  group: ceph.rook.io
  names:
    kind: Cluster
    listKind: ClusterList
    plural: clusters
    shortNames:
    - rcc
    singular: cluster
  scope: Namespaced
  version: v1beta1
  versions:
  - name: v1beta1
    served: true
    storage: true
status:
  acceptedNames:
    kind: Cluster
    listKind: ClusterList
    plural: clusters
    shortNames:
    - rcc
    singular: cluster
  conditions:
  - lastTransitionTime: 2019-02-21T02:00:26Z
    message: no conflicts found
    reason: NoConflicts
    status: "True"
    type: NamesAccepted
  - lastTransitionTime: null
    message: the initial names have been accepted
    reason: InitialNamesAccepted
    status: "True"
    type: Established
  - lastTransitionTime: 2019-02-21T07:28:49Z
    message: CustomResource deletion is in progress
    reason: InstanceDeletionInProgress
    status: "True"
    type: Terminating
  storedVersions:
  - v1beta1

As shown above, cluster.ceph.rook.io is also in Terminating. Delete the finalizers entries and save; cluster.ceph.rook.io is then removed, and the rook-ceph namespace is deleted automatically as well.
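Instead of kubectl edit, the CRD's finalizers can also be cleared non-interactively with a merge patch. Unlike a namespace's spec.finalizers, object metadata can be patched directly; a merge patch replaces the whole list, so an empty list removes every finalizer:

```shell
# Remove the customresourcecleanup finalizer so the stuck CRD
# (and the custom resources blocking the namespace) can be deleted.
kubectl patch crd clusters.ceph.rook.io --type=merge \
  -p '{"metadata":{"finalizers":[]}}'
```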

The correct way to tear down a rook cluster

https://github.com/rook/rook/blob/master/Documentation/ceph-teardown.md
