Evicting a k8s node and rejoining it to the cluster
On the master
First, check the node status:
[root@k8s-master01:5 ~]# kubectl get nodes
NAME                             STATUS     ROLES    AGE      VERSION
k8s-master01.amngrvmm.dc01.scf   Ready      master   3y187d   v1.17.6
k8s-master02.amngrvmm.dc01.scf   NotReady   master   3y187d   v1.17.6
k8s-master03.amngrvmm.dc01.scf   Ready      master   3y187d   v1.17.6
k8s-node01.amngrvmm.dc01.scf     Ready      <none>   3y187d   v1.17.6
k8s-node02.amngrvmm.dc01.scf     NotReady   <none>   3y187d   v1.17.6
k8s-node03.amngrvmm.dc01.scf     NotReady   <none>   3y187d   v1.17.6
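To pull the NotReady names out of a listing like that programmatically, a small awk filter works. The helper name below is hypothetical, not part of the original steps:

```shell
# not_ready_nodes: read "kubectl get nodes" output on stdin and print the
# names of nodes whose STATUS column is anything other than Ready.
# The header line (NR == 1) is skipped.
not_ready_nodes() {
  awk 'NR > 1 && $2 != "Ready" { print $1 }'
}

# Typical use on a master:
# kubectl get nodes | not_ready_nodes
```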
Drain the pods off the node:
[root@k8s-master01:5 ~]# kubectl drain k8s-node02.amngrvmm.dc01.scf --delete-local-data --force --ignore-daemonsets
Delete the node:
[root@k8s-master01:13 ~]# kubectl delete node k8s-node02.amngrvmm.dc01.scf
node "k8s-node02.amngrvmm.dc01.scf" deleted
Check the token:
[root@k8s-master01:15 ~]# kubeadm token list
If the token has expired, the join will fail with Unauthorized; just generate a new token.
[root@k8s-master01:15 ~]# kubeadm token create --print-join-command
W1206 15:27:52.862032 16809 validation.go:28] Cannot validate kube-proxy config - no validator is available
W1206 15:27:52.862139 16809 validation.go:28] Cannot validate kubelet config - no validator is available
kubeadm join k8s-vip.amngrvmm.dc01.scf:8443 --token ql40bh.49w0n45y09qcno2t --discovery-token-ca-cert-hash sha256:8b40ee4a7317c1ad71b4707ff528219d46a1db88468a5c91cfc7856dba77a17d
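If only the hash part of the join command needs to be re-derived (or double-checked), it can be recomputed from the cluster CA certificate with openssl; this is the standard kubeadm recipe, wrapped here in a small helper function for convenience:

```shell
# ca_cert_hash: print the sha256 value kubeadm expects after
# "--discovery-token-ca-cert-hash sha256:", computed from a CA cert file.
ca_cert_hash() {
  openssl x509 -pubkey -in "$1" \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
}

# On a master, the default kubeadm CA path is /etc/kubernetes/pki/ca.crt:
# ca_cert_hash /etc/kubernetes/pki/ca.crt
```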
On the node
[root@k8s-node02:1 ~]# kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W1206 15:26:30.817616 4553 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
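The reset output above lists the cleanup kubeadm does not perform itself: CNI config, iptables rules, IPVS tables, and the stale kubeconfig. A sketch collecting those manual steps is below. It defaults to dry-run (set DRY_RUN=0 to actually execute), and the iptables/IPVS flushes should only be run if nothing else on the host depends on them:

```shell
# Manual cleanup that "kubeadm reset" skips, per its own output.
# DRY_RUN=1 (the default) prints each command instead of running it.
DRY_RUN="${DRY_RUN:-1}"

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

run rm -rf /etc/cni/net.d        # CNI configuration
run iptables -F                  # flush iptables rules (filter table)
run iptables -t nat -F           # flush NAT rules left by kube-proxy
run ipvsadm --clear              # only if kube-proxy runs in IPVS mode
run rm -f "$HOME/.kube/config"   # stale kubeconfig mentioned by reset
```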
Rejoin the cluster:
[root@k8s-node02:2 ~]# kubeadm join k8s-vip.amngrvmm.dc01.scf:8443 --token m9maek.vpzyk0iy0rgxze6f --discovery-token-ca-cert-hash sha256:8b40ee4a7317c1ad71b4707ff528219d46a1db88468a5c91cfc7856dba77a17d
W1206 15:28:00.806806 5131 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[WARNING FileExisting-tc]: tc not found in system path
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
Finally, check the nodes again:
[root@k8s-master01:16 ~]# kubectl get nodes
NAME                             STATUS   ROLES    AGE      VERSION
k8s-master01.amngrvmm.dc01.scf   Ready    master   3y187d   v1.17.6
k8s-master02.amngrvmm.dc01.scf   Ready    master   3y187d   v1.17.6
k8s-master03.amngrvmm.dc01.scf   Ready    master   3y187d   v1.17.6
k8s-node01.amngrvmm.dc01.scf     Ready    <none>   3y187d   v1.17.6
k8s-node02.amngrvmm.dc01.scf     Ready    <none>   89s      v1.17.6
k8s-node03.amngrvmm.dc01.scf     Ready    <none>   83s      v1.17.6
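Rather than polling `kubectl get nodes` by hand after the join, `kubectl wait` can block until the node reports Ready. A minimal wrapper (the function name and default timeout are assumptions, not from the original workflow):

```shell
# wait_node_ready NODE [TIMEOUT]: block until the named node reports the
# Ready condition, using "kubectl wait" (default timeout 120s).
wait_node_ready() {
  kubectl wait --for=condition=Ready "node/$1" --timeout="${2:-120s}"
}

# e.g. on a master, right after the node rejoins:
# wait_node_ready k8s-node02.amngrvmm.dc01.scf 180s
```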