k8s: deleting and re-adding a node, and cleaning up its network

Delete the node

On the master (control-plane) node, delete manager.node. Note that in this cluster the host named worker is the one carrying the master role, which is why the prompts below show worker.

[root@worker ~]# kubectl get nodes
NAME           STATUS     ROLES    AGE     VERSION
manager.node   NotReady   <none>   6h36m   v1.17.0
master.node    Ready      <none>   6h46m   v1.17.0
worker.node    Ready      master   21h     v1.17.0
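
If the node were still healthy and running workloads, it would normally be drained before deletion so its pods get evicted gracefully. A minimal sketch (manager.node is already NotReady here, so the drain may be unnecessary or may time out; --delete-local-data is the v1.17-era spelling of the flag):

kubectl drain manager.node --ignore-daemonsets --delete-local-data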

Note: the command below uses the plural nodes; kubectl accepts node, nodes, or the short name no interchangeably.

[root@worker ~]# kubectl delete nodes manager.node
node "manager.node" deleted
[root@worker ~]# kubectl get nodes
NAME          STATUS   ROLES    AGE     VERSION
master.node   Ready    <none>   6h46m   v1.17.0
worker.node   Ready    master   21h     v1.17.0

Wipe the cluster state on the removed node

[root@manager network-scripts]# kubeadm reset
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
W1231 16:23:44.293553   27107 reset.go:99] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get node registration: failed to get corresponding node: nodes "manager.node" not found
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W1231 16:23:55.799045   27107 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
W1231 16:24:20.825890   27107 cleanupnode.go:65] [reset] The kubelet service could not be stopped by kubeadm: [exit status 1]
W1231 16:24:20.825934   27107 cleanupnode.go:66] [reset] Please ensure kubelet is stopped manually
[reset] Unmounting mounted directories in "/var/lib/kubelet"
W1231 16:24:20.936289   27107 cleanupnode.go:81] [reset] Failed to remove containers: exit status 1
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
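
The reset output above leaves iptables and IPVS cleanup to you. A minimal sketch, assuming the host has no other firewall rules worth keeping (flushing is destructive):

iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm --clear    # only needed if kube-proxy ran in IPVS mode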

Clean up the network state

ifconfig cni0 down          # take the CNI bridge down
ip link delete cni0         # and remove it
ifconfig flannel.1 down     # take the flannel VXLAN interface down
ip link delete flannel.1    # and remove it
ifconfig docker0 down       # docker0 is recreated when Docker restarts
rm -rf /var/lib/cni/        # remove CNI state
rm -rf /etc/cni/net.d       # remove CNI config, as the reset output advises
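
Restarting Docker afterward recreates the docker0 bridge taken down above. This step is optional, and kubeadm join will start the kubelet itself:

systemctl restart docker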

Rejoin the cluster

[root@manager ~]# kubeadm join XX.XX.XX.52:6443 --token 43umr8.df94e49pkj7fyv90 --discovery-token-ca-cert-hash sha256:9858fb015dd519696df382e675f3614630b2d3e7f2e6a83086bef1884bb0a0e2
W1231 17:19:10.669867    2933 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.03.0-ce. Latest validated version: 19.03
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

List the existing bootstrap tokens

[root@manager ~]# kubeadm token list
TOKEN                     TTL         EXPIRES                     USAGES                   DESCRIPTION                                                EXTRA GROUPS
43umr8.df94e49pkj7fyv90   56m         2019-12-31T18:32:36+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token

Regenerate the token

Bootstrap tokens expire (24-hour TTL by default; the listing above shows 56 minutes left), so an expired token must be replaced:

[root@manager ~]# kubeadm token create
W1231 17:37:34.108407    7422 validation.go:28] Cannot validate kube-proxy config - no validator is available
W1231 17:37:34.108440    7422 validation.go:28] Cannot validate kubelet config - no validator is available
pzq7je.9osdlv2t5t42mg5a

If you no longer have the value for --discovery-token-ca-cert-hash, it can be regenerated from the cluster's CA certificate:

[root@manager ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
9858fb015dd519696df382e675f3614630b2d3e7f2e6a83086bef1884bb0a0e2
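
The new token and the hash above then slot into the same kubeadm join command used earlier. Alternatively, a single command on the control plane prints a ready-made join command:

kubeadm join XX.XX.XX.52:6443 --token pzq7je.9osdlv2t5t42mg5a --discovery-token-ca-cert-hash sha256:9858fb015dd519696df382e675f3614630b2d3e7f2e6a83086bef1884bb0a0e2

# or, in one step:
kubeadm token create --print-join-command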

Check the node list again

[root@worker coredns]# kubectl get nodes
NAME           STATUS   ROLES    AGE     VERSION
manager.node   Ready    <none>   103s    v1.17.0
master.node    Ready    <none>   7h44m   v1.17.0
worker.node    Ready    master   22h     v1.17.0
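
As a final check, it can be worth confirming that the kube-system pods scheduled on the rejoined node (flannel and kube-proxy here) are running, for example:

kubectl get pods -n kube-system -o wide | grep manager.node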