Summary of K8s Drain Errors (Ignoring DaemonSet-Managed Pods, MySQL Cluster Drain Error, MongoDB Cluster Drain Error)

【Successful Drain Walkthrough】

Initial state

[root@DoM01 ~]# kubectl get node
NAME    STATUS   ROLES    AGE    VERSION
dom01   Ready    master   579d   v1.15.2
dom02   Ready    master   579d   v1.15.2
dom03   Ready    master   579d   v1.15.2
don01   Ready    <none>   579d   v1.15.2
don02   Ready    <none>   579d   v1.15.2
don03   Ready    <none>   579d   v1.15.2
don04   Ready    <none>   579d   v1.15.2
don05   Ready    <none>   579d   v1.15.2
don06   Ready    <none>   349d   v1.15.2
don07   Ready    <none>   292d   v1.15.2
don08   Ready    <none>   292d   v1.15.2

Drain command

kubectl drain don02 --ignore-daemonsets --delete-local-data
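
Here --ignore-daemonsets tells drain to skip DaemonSet-managed pods (the DaemonSet controller would recreate them on the node anyway), and --delete-local-data allows evicting pods that use emptyDir volumes, whose data is lost along with the pod. If you want to preview the eviction before running it for real, kubectl drain also accepts a --dry-run flag (a sketch; flag availability depends on the kubectl version):

kubectl drain don02 --ignore-daemonsets --delete-local-data --dry-run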

Output of a successful drain

[root@DoM01 mongodb]# kubectl drain don02 --ignore-daemonsets --delete-local-data
node/don02 already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-admin/k8s-mon-daemonset-ggz45, kube-system/kube-flannel-ds-amd64-lbt5l, kube-system/kube-proxy-w2dfg
node/don02 drained

Result after draining

[root@DoM01 mongodb]# kubectl get node
NAME    STATUS                     ROLES    AGE    VERSION
dom01   Ready                      master   580d   v1.15.2
dom02   Ready                      master   580d   v1.15.2
dom03   Ready                      master   580d   v1.15.2
don01   Ready                      <none>   580d   v1.15.2
don02   Ready,SchedulingDisabled   <none>   580d   v1.15.2
don03   Ready                      <none>   580d   v1.15.2
don04   Ready                      <none>   580d   v1.15.2
don05   Ready                      <none>   580d   v1.15.2
don06   Ready                      <none>   350d   v1.15.2
don07   Ready                      <none>   293d   v1.15.2
don08   Ready                      <none>   293d   v1.15.2
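
Note the Ready,SchedulingDisabled status on don02: drain first cordons the node (marks it unschedulable) and only then evicts its pods. If you just want to keep new pods off a node without evicting the existing ones, cordoning alone is enough:

kubectl cordon don02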

Restoring the node

[root@DoM01 mongodb]# kubectl uncordon don02
node/don02 uncordoned
[root@DoM01 mongodb]# kubectl get node
NAME    STATUS   ROLES    AGE    VERSION
dom01   Ready    master   580d   v1.15.2
dom02   Ready    master   580d   v1.15.2
dom03   Ready    master   580d   v1.15.2
don01   Ready    <none>   580d   v1.15.2
don02   Ready    <none>   580d   v1.15.2
don03   Ready    <none>   580d   v1.15.2
don04   Ready    <none>   580d   v1.15.2
don05   Ready    <none>   580d   v1.15.2
don06   Ready    <none>   350d   v1.15.2
don07   Ready    <none>   293d   v1.15.2
don08   Ready    <none>   293d   v1.15.2

【FAQ】

1. Ignoring DaemonSet-managed Pods

Syntax

kubectl drain don02 --ignore-daemonsets

Error example

[root@DoM01 ~]# kubectl drain don02
node/don02 cordoned
error: unable to drain node "don02", aborting command...

There are pending nodes to be drained:
 don02
cannot delete DaemonSet-managed Pods (use --ignore-daemonsets to ignore): kube-admin/k8s-mon-daemonset-ggz45, kube-system/kube-flannel-ds-amd64-lbt5l, kube-system/kube-proxy-w2dfg
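
Before draining, you can check which pods on the node are managed by a DaemonSet by listing everything scheduled there (the field selector below is standard kubectl; don02 is the node from this cluster):

kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=don02

Pods whose controller is a DaemonSet would be recreated on the node even if deleted, which is why drain refuses to touch them without --ignore-daemonsets.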

2. Deleting local data

Syntax

kubectl drain don02 --delete-local-data

Error example (evicting a MySQL cluster pod fails)

[root@DoM01 ~]# kubectl drain don02
node/don02 already cordoned
error: unable to drain node "don02", aborting command...

There are pending nodes to be drained:
 don02
error: cannot delete Pods with local storage (use --delete-local-data to override): mysql/mysqlha-2
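
The pod blocking the drain here, mysql/mysqlha-2, uses local storage, which for drain purposes means emptyDir volumes. To confirm which volumes are affected before overriding, you can inspect the pod spec (a sketch; the grep just surfaces the emptyDir entries):

kubectl get pod mysqlha-2 -n mysql -o yaml | grep -B2 -A2 emptyDir

Keep in mind that --delete-local-data permanently deletes the emptyDir data along with the pod.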

3. MongoDB cluster drain error

【Error】

[root@DoM01 ~]# kubectl drain don02 --ignore-daemonsets --delete-local-data
node/don02 already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-admin/k8s-mon-daemonset-ggz45, kube-system/kube-flannel-ds-amd64-lbt5l, kube-system/kube-proxy-w2dfg
……
error when evicting pod "mongodb-secondary-0" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.
evicting pod "mongodb-secondary-0"

【Analysis】

The error says the eviction would violate the pod's disruption budget (PDB), which means our deletion of "mongodb-secondary-0" is not allowed. Let's look at the Helm chart settings.

replicaSet:
  ## Whether to create a MongoDB replica set for high availability or not
  enabled: true
  useHostnames: true

  ## Name of the replica set
  ##
  name: rs0

  ## Key used for replica set authentication
  ##
  # key: key

  ## Number of replicas per each node type
  ##
  replicas:
    secondary: 2
    arbiter: 1

  ## Pod Disruption Budget
  ## ref: https://kubernetes.io/docs/concepts/workloads/pods/disruptions/
  pdb:
    enabled: true
    minAvailable:
      secondary: 2
      arbiter: 1

As shown above, the chart runs 2 secondary nodes and 1 arbiter, and the PDB requires at least 2 secondaries and 1 arbiter to remain available.
Sure enough, evicting either secondary would leave only 1 of the required 2, so the eviction is rejected.
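
You can also confirm this against the live PodDisruptionBudget object (pdb is the built-in short name for poddisruptionbudgets):

kubectl get pdb -n mongodb

With 2 secondaries running and minAvailable set to 2, the ALLOWED DISRUPTIONS column for the secondary PDB should read 0, which is exactly why the eviction keeps retrying.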

Let's look at the pods in the namespace:

[root@DoM01 mongodb]# kubectl get pod -n mongodb
NAME                  READY   STATUS    RESTARTS   AGE
mongodb-arbiter-0     1/1     Running   0          63d
mongodb-primary-0     2/2     Running   4          41d
mongodb-secondary-0   2/2     Running   0          123d
mongodb-secondary-1   2/2     Running   1          181d

This matches the Helm settings.

【Solution】

Modify the values.yml file as follows, lowering minAvailable for secondary from 2 to 1:

  ## Number of replicas per each node type
  ##
  replicas:
    secondary: 2
    arbiter: 1

  ## Pod Disruption Budget
  ## ref: https://kubernetes.io/docs/concepts/workloads/pods/disruptions/
  pdb:
    enabled: true
    minAvailable:
      secondary: 1
      arbiter: 1
Update the Helm release:
[root@DoM01 mongodb]# helm upgrade mongodb -n mongodb ./
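
After the upgrade, the secondary PDB should allow one disruption (minAvailable dropped to 1 with 2 replicas running), and the drain can be retried:

kubectl get pdb -n mongodb
kubectl drain don02 --ignore-daemonsets --delete-local-data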
