k8s: kube-flannel-ds stuck in Init:0/1 or Init:ImagePullBackOff on a newly registered node, or a node stuck in NotReady

Reposted from: https://www.cnblogs.com/liuyi778/p/12771259.html

 

1. Error symptoms

1.1 Node status


[root@master ~]# kubectl get nodes

NAME     STATUS     ROLES    AGE    VERSION

master   Ready      master   2d2h   v1.18.2

node1    NotReady   <none>   31m    v1.18.2

[root@master ~]#
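Before changing anything, it is worth asking the apiserver why the node is NotReady. The exact wording varies by version, but with no CNI plugin running the Ready condition typically carries a message along the lines of "network plugin is not ready: cni config uninitialized":

# On the master: show conditions and recent events for the NotReady node
kubectl describe node node1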

1.2 Component status


1.2.1 Check from the master node


[root@master ~]# kubectl get pod -n kube-system -o wide

NAME                             READY   STATUS     RESTARTS   AGE    IP           NODE     NOMINATED NODE   READINESS GATES

coredns-7ff77c879f-78sl5         1/1     Running    2          2d2h   10.244.0.6   master   <none>           <none>

coredns-7ff77c879f-pv744         1/1     Running    2          2d2h   10.244.0.7   master   <none>           <none>

etcd-master                      1/1     Running    2          2d2h   10.1.1.11    master   <none>           <none>

kube-apiserver-master            1/1     Running    2          2d2h   10.1.1.11    master   <none>           <none>

kube-controller-manager-master   1/1     Running    2          2d2h   10.1.1.11    master   <none>           <none>

kube-flannel-ds-amd64-h5skl      1/1     Running    2          2d2h   10.1.1.11    master   <none>           <none>

kube-flannel-ds-amd64-mg4n5      0/1     Init:0/1   0          31m    10.1.1.13    node1    <none>           <none>

kube-proxy-j7np7                 1/1     Running    0          31m    10.1.1.13    node1    <none>           <none>

kube-proxy-x7s46                 1/1     Running    2          2d2h   10.1.1.11    master   <none>           <none>

kube-scheduler-master            1/1     Running    2          2d2h   10.1.1.11    master   <none>           <none>
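The flannel pod on node1 is stuck before its init container ever completed. Describing the pod shows the init container's state and, in the Init:ImagePullBackOff case, the image pull failures under Events:

# Inspect the stuck flannel pod, including init containers and events
kubectl describe pod kube-flannel-ds-amd64-mg4n5 -n kube-system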


1.2.2 Check the running containers on node1


root@node1:~# docker ps -a

CONTAINER ID        IMAGE                                                COMMAND                  CREATED             STATUS              PORTS               NAMES

76fee67569a2        registry.aliyuncs.com/google_containers/kube-proxy   "/usr/local/bin/kube…"   33 minutes ago      Up 33 minutes                           k8s_kube-proxy_kube-proxy-j7np7_kube-system_9c28dac9-f5f5-460e-93b3-d8679d0867e2_0

2c7fa6fa86a3        registry.aliyuncs.com/google_containers/pause:3.2    "/pause"                 33 minutes ago      Up 33 minutes                           k8s_POD_kube-proxy-j7np7_kube-system_9c28dac9-f5f5-460e-93b3-d8679d0867e2_0

0d570648b79f        registry.aliyuncs.com/google_containers/pause:3.2    "/pause"                 33 minutes ago      Up 33 minutes                           k8s_POD_kube-flannel-ds-amd64-mg4n5_kube-system_c7496136-fe22-438d-8267-9d69f705311e_0

root@node1:~#
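Only the pause sandbox and kube-proxy containers exist on node1; the flannel image was never pulled. To rule out a registry or network problem, you can try pulling it by hand. The image reference below is an assumption based on the stock kube-flannel.yml of the v0.12 era; check the manifest you actually deployed for the exact name and tag:

# Pull the flannel image manually to test registry reachability
# (image/tag assumed; verify against your own kube-flannel.yml)
docker pull quay.io/coreos/flannel:v0.12.0-amd64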

  

 

1.3 Check the hosts file configuration (if your hosts file is already correct, skip this step, but the later steps still need to be followed; personally tested to fix the issue. Note left by zhoulidong, 2021-01-05)


1.3.1 master node


[root@master ~]# cat /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4

::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

10.1.1.11 master

10.1.1.11 master

  

Notice that the mapping for node1 is missing.


1.3.2 node1


root@node1:~# cat /etc/hosts

127.0.0.1   localhost

127.0.1.1   node1

 

# The following lines are desirable for IPv6 capable hosts

::1     localhost ip6-localhost ip6-loopback

ff02::1 ip6-allnodes

ff02::2 ip6-allrouters

Likewise, node1 has no host mapping for the master node.


1.3.3 Update the host mappings

On the master node:


[root@master ~]# echo '10.1.1.13 node1' >> /etc/hosts

[root@master ~]# cat /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4

::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

10.1.1.11 master

10.1.1.11 master

10.1.1.13 node1

On node1:


root@node1:~# echo -e "10.1.1.11 master Master\n10.1.1.13 node1 Node1" >> /etc/hosts

root@node1:~# cat /etc/hosts

127.0.0.1   localhost

127.0.1.1   node1

 

# The following lines are desirable for IPv6 capable hosts

::1     localhost ip6-localhost ip6-loopback

ff02::1 ip6-allnodes

ff02::2 ip6-allrouters

10.1.1.11 master Master

10.1.1.13 node1 Node1
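With more than a couple of machines, editing /etc/hosts by hand on every node invites drift. A minimal sketch for pushing the same entries everywhere, assuming passwordless root SSH is already set up:

# Append the cluster name mappings on each node, skipping nodes that already have them
for h in node1; do
  ssh root@$h "grep -q '10.1.1.11 master' /etc/hosts || \
    printf '10.1.1.11 master\n10.1.1.13 node1\n' >> /etc/hosts"
done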

1.4 Host connectivity check


1.4.1 master node


[root@master ~]# ping node1 -c 5

PING node1 (10.1.1.13) 56(84) bytes of data.

64 bytes from node1 (10.1.1.13): icmp_seq=1 ttl=64 time=0.331 ms

64 bytes from node1 (10.1.1.13): icmp_seq=2 ttl=64 time=0.330 ms

64 bytes from node1 (10.1.1.13): icmp_seq=3 ttl=64 time=0.468 ms

64 bytes from node1 (10.1.1.13): icmp_seq=4 ttl=64 time=0.614 ms

64 bytes from node1 (10.1.1.13): icmp_seq=5 ttl=64 time=0.469 ms

 

--- node1 ping statistics ---

5 packets transmitted, 5 received, 0% packet loss, time 4002ms

rtt min/avg/max/mdev = 0.330/0.442/0.614/0.107 ms


1.4.2 node1


root@node1:~# ping master -c 5

PING master (10.1.1.11) 56(84) bytes of data.

64 bytes from master (10.1.1.11): icmp_seq=1 ttl=64 time=0.479 ms

64 bytes from master (10.1.1.11): icmp_seq=2 ttl=64 time=0.262 ms

64 bytes from master (10.1.1.11): icmp_seq=3 ttl=64 time=0.249 ms

64 bytes from master (10.1.1.11): icmp_seq=4 ttl=64 time=0.428 ms

64 bytes from master (10.1.1.11): icmp_seq=5 ttl=64 time=0.308 ms

 

--- master ping statistics ---

5 packets transmitted, 5 received, 0% packet loss, time 94ms

rtt min/avg/max/mdev = 0.249/0.345/0.479/0.092 ms
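ICMP working both ways is necessary but not sufficient: the kubelet on node1 must also reach the apiserver on TCP port 6443. On kubeadm defaults the /healthz endpoint is readable anonymously, so a bare curl usually answers "ok"; if your cluster locks that down, a plain TCP probe still tells you whether the port is open:

# Run on node1: confirm the apiserver port is reachable
curl -k https://10.1.1.11:6443/healthz
# or, with no curl installed (bash built-in TCP test):
timeout 2 bash -c '</dev/tcp/10.1.1.11/6443' && echo 'port 6443 open'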

2. Restart the Kubernetes services

2.1 Restart services on all nodes


On the master node:


[root@master ~]# systemctl restart kubelet docker

[root@master ~]# kubectl get nodes

The connection to the server 10.1.1.11:6443 was refused - did you specify the right host or port?

[root@master ~]# kubectl get nodes

NAME     STATUS     ROLES    AGE    VERSION

master   Ready      master   2d2h   v1.18.2

node1    NotReady   <none>   45m    v1.18.2

The momentary "connection refused" is expected: restarting Docker also takes down the static kube-apiserver pod, which needs a few seconds to come back up.

On node1:


root@node1:~# systemctl restart kubelet docker

root@node1:~# docker ps -a

CONTAINER ID        IMAGE                                               COMMAND                  CREATED              STATUS                          PORTS               NAMES

9a8f714be9f6        0d40868643c6                                        "/usr/local/bin/kube…"   About a minute ago   Up About a minute                                   k8s_kube-proxy_kube-proxy-j7np7_kube-system_9c28dac9-f5f5-460e-93b3-d8679d0867e2_2

aceb8ae3a07b        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 About a minute ago   Up About a minute                                   k8s_POD_kube-proxy-j7np7_kube-system_9c28dac9-f5f5-460e-93b3-d8679d0867e2_2

dd608fbcc5f5        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 About a minute ago   Up About a minute                                   k8s_POD_kube-flannel-ds-amd64-mg4n5_kube-system_c7496136-fe22-438d-8267-9d69f705311e_0

e9b073aa917e        0d40868643c6                                        "/usr/local/bin/kube…"   2 minutes ago        Exited (2) About a minute ago                       k8s_kube-proxy_kube-proxy-j7np7_kube-system_9c28dac9-f5f5-460e-93b3-d8679d0867e2_1

71d69c4dccc5        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 2 minutes ago        Exited (0) About a minute ago                       k8s_POD_kube-proxy-j7np7_kube-system_9c28dac9-f5f5-460e-93b3-d8679d0867e2_1
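If the flannel pod still refuses to start after the restart, the kubelet journal on node1 is the next place to look; CNI setup and image pull failures are logged there:

# Show the 50 most recent kubelet log lines on node1
journalctl -u kubelet --no-pager -n 50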

2.2 Delete node1 and re-add it


2.2.1 Delete the node


[root@master ~]# kubectl delete node node1

node "node1" deleted
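Note that kubectl delete node removes the Node object immediately. On a cluster carrying real workloads, the cleaner sequence is to drain first; the sketch below uses the v1.18-era flag name (later releases renamed it to --delete-emptydir-data):

# Optional: evict workloads cleanly before removing the node
kubectl drain node1 --ignore-daemonsets --delete-local-data
kubectl delete node node1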


2.2.2 Generate the join command


[root@master ~]# kubeadm token create --print-join-command

W0425 01:02:19.391867   62603 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]

kubeadm join 10.1.1.11:6443 --token 757a06.wnp34zge3cdcqag6     --discovery-token-ca-cert-hash sha256:b1ab3a019f671de99e3af0d9fd023078ad64941a3b8cd56c2a65624f0a218642
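Bootstrap tokens created this way expire after 24 hours by default, so it is safer to mint a fresh one than to reuse an old join command. Existing tokens and their TTLs can be checked with:

# List current bootstrap tokens and their expiration times
kubeadm token list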


2.2.3 Delete all containers (on node1)


root@node1:~# docker ps -qa | xargs docker rm -f

5e71e6e988d8

5c2ff662e72b

9a8f714be9f6

aceb8ae3a07b

dd608fbcc5f5


2.2.4 Rejoin


root@node1:~# kubeadm join 10.1.1.11:6443 --token 757a06.wnp34zge3cdcqag6     --discovery-token-ca-cert-hash sha256:b1ab3a019f671de99e3af0d9fd023078ad64941a3b8cd56c2a65624f0a218642

W0425 01:03:08.461617   22573 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.

[preflight] Running pre-flight checks

    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/

error execution phase preflight: [preflight] Some fatal errors occurred:

    [ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists

    [ERROR Port-10250]: Port 10250 is in use

    [ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists

[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

To see the stack trace of this error execute with --v=5 or higher
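All three fatal errors point at leftovers from the node's first registration: the old kubelet kubeconfig, the kubelet still holding port 10250, and the previously downloaded cluster CA. As an aside, kubeadm ships a single command intended to undo a prior join, which should clear the same state in one shot; it also wipes /etc/kubernetes and the CNI configuration, so use it deliberately:

# Alternative one-shot cleanup on node1 before rejoining
kubeadm reset -f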

The next sections clear these errors one by one.

2.3 Fix the failed rejoin


2.3.1 Delete the old kubelet config file


root@node1:~# rm -f /etc/kubernetes/kubelet.conf


2.3.2 Restart the kubelet and Docker services

This restart matters: with kubelet.conf removed, the kubelet cannot come up, so port 10250 (the second preflight error) is freed as well.


root@node1:~# systemctl restart docker kubelet

root@node1:~#


2.3.3 Delete the old CA file


root@node1:~# rm -f /etc/kubernetes/pki/ca.crt

root@node1:~#

2.4 Rejoin the cluster


On node1:


root@node1:~# kubeadm join 10.1.1.11:6443 --token 757a06.wnp34zge3cdcqag6     --discovery-token-ca-cert-hash sha256:b1ab3a019f671de99e3af0d9fd023078ad64941a3b8cd56c2a65624f0a218642

W0425 01:09:45.778629   23773 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.

[preflight] Running pre-flight checks

    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/

[preflight] Reading configuration from the cluster...

[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'

[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace

[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"

[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"

[kubelet-start] Starting the kubelet

[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

 

This node has joined the cluster:

* Certificate signing request was sent to apiserver and a response was received.

* The Kubelet was informed of the new secure connection details.

 

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.


On the master node:


[root@master ~]# kubectl get nodes

NAME     STATUS   ROLES    AGE    VERSION

master   Ready    master   2d2h   v1.18.2

node1    Ready    <none>   38s    v1.18.2
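Finally, confirm that the flannel pod on node1 made it past its init container this time. The label selector below assumes the stock kube-flannel.yml labels (app=flannel):

# Verify flannel is Running on node1 (label assumed from the stock manifest)
kubectl get pods -n kube-system -l app=flannel -o wide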

 

At this point: success!
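One loose end: the join output still warned that Docker uses the cgroupfs driver while systemd is recommended. The usual fix is Docker's daemon.json, sketched below; merge it with any existing settings rather than overwriting, and make sure the kubelet's cgroupDriver (in /var/lib/kubelet/config.yaml) is also set to systemd so the two agree:

# /etc/docker/daemon.json  (merge with existing contents)
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
# then apply:
# systemctl restart docker kubelet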

 
