Troubleshooting Kubernetes CoreDNS Pods Stuck in CrashLoopBackOff

# kubectl get all -n kube-system 
NAME                                        READY   STATUS             RESTARTS   AGE
pod/canal-6z88v                             2/2     Running            0          20h
pod/canal-bnh7s                             2/2     Running            0          20h
pod/coredns-5c98db65d4-2jthp                0/1     CrashLoopBackOff   246        20h
pod/coredns-5c98db65d4-sgl8j                0/1     CrashLoopBackOff   246        20h
pod/etcd-k8smaster                          1/1     Running            0          20h
pod/kube-apiserver-k8smaster                1/1     Running            0          20h
pod/kube-controller-manager-k8smaster       1/1     Running            0          20h
pod/kube-proxy-4xqdk                        1/1     Running            0          20h
pod/kube-proxy-nnkdq                        1/1     Running            0          20h
pod/kube-scheduler-k8smaster                1/1     Running            0          20h
pod/kubernetes-dashboard-7d75c474bb-hr6jw   1/1     Running            0          27s

The CoreDNS pods are in CrashLoopBackOff.

Inspect one of them for details:


# kubectl describe -n kube-system pod/coredns-5c98db65d4-2jthp
Name:                 coredns-5c98db65d4-2jthp
Namespace:            kube-system
Priority:             2000000000
Priority Class Name:  system-cluster-critical
Node:                 k8smaster/192.168.2.201
Start Time:           Thu, 27 Jun 2019 13:42:52 +0800
Labels:               k8s-app=kube-dns
                      pod-template-hash=5c98db65d4
Annotations:          cni.projectcalico.org/podIP: 10.244.0.3/32
Status:               Running
IP:                   10.244.0.3
Controlled By:        ReplicaSet/coredns-5c98db65d4
Containers:
  coredns:
    Container ID:  docker://ea8a124037c22ff43866d53c4444a7b70f3433c53605159cd3c466273c2b7d7c
    Image:         k8s.gcr.io/coredns:1.3.1
    Image ID:      docker-pullable://k8s.gcr.io/coredns@sha256:02382353821b12c21b062c59184e227e001079bb13ebd01f9d3270ba0fcbf1e4
    Ports:         53/UDP, 53/TCP, 9153/TCP
    Host Ports:    0/UDP, 0/TCP, 0/TCP
    Args:
      -conf
      /etc/coredns/Corefile
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Fri, 28 Jun 2019 10:19:41 +0800
      Finished:     Fri, 28 Jun 2019 10:19:41 +0800
    Ready:          False
    Restart Count:  246
    Limits:
      memory:  170Mi
    Requests:
      cpu:        100m
      memory:     70Mi
    Liveness:     http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
    Readiness:    http-get http://:8080/health delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /etc/coredns from config-volume (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from coredns-token-4x4qh (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  config-volume:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      coredns
    Optional:  false
  coredns-token-4x4qh:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  coredns-token-4x4qh
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  beta.kubernetes.io/os=linux
Tolerations:     CriticalAddonsOnly
                 node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason   Age                   From                Message
  ----     ------   ----                  ----                -------
  Warning  BackOff  51s (x5879 over 20h)  kubelet, k8smaster  Back-off restarting failed container

Root cause: CoreDNS has detected a forwarding loop. By default (the kubeadm-generated Corefile), CoreDNS forwards unresolved queries to the upstream servers listed in the node's /etc/resolv.conf; if that file points back at a local resolver, the query loops back into CoreDNS. When CoreDNS detects the loop it exits, Kubernetes restarts the Pod, and the Pod ends up in CrashLoopBackOff.
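The loop condition can be spotted from the resolver file itself: any loopback nameserver means the host's DNS points back at the local machine. A minimal sketch of the check, using a sample string rather than the live /etc/resolv.conf:

```shell
# Sample resolv.conf contents; on a real node, read /etc/resolv.conf instead.
sample='nameserver 127.0.1.1'

# Any 127.x.x.x nameserver means the host resolver is local -- a loop risk
# for CoreDNS pods that inherit this file as their upstream.
if echo "$sample" | grep -Eq '^nameserver[[:space:]]+127\.'; then
    echo "loop risk: loopback nameserver found"
fi
```

On a healthy node the check prints nothing; here it flags 127.0.1.1.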

Check the node's /etc/resolv.conf:

# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
#     DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 127.0.1.1

The nameserver 127.0.1.1 is a loopback address pointing at the local stub resolver, which is what creates the loop. Change the file to use real upstream DNS servers:

# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
#     DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
#nameserver 127.0.1.1
nameserver 8.8.8.8
nameserver 8.8.4.4
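An alternative that leaves the node's /etc/resolv.conf untouched is to give CoreDNS explicit upstreams in its Corefile (edit with `kubectl -n kube-system edit configmap coredns`). A sketch of the relevant stanza only; the other directives in your Corefile will differ:

```
.:53 {
    errors
    health
    # forward . /etc/resolv.conf    <- default: inherits the node's resolvers
    forward . 8.8.8.8 8.8.4.4
    cache 30
    loop
    reload
}
```

With the reload plugin enabled, CoreDNS picks up the change on its own; otherwise delete the coredns pods (label k8s-app=kube-dns, as shown in the describe output above) so they restart with the new config.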

Then restart the services so the CoreDNS containers pick up the new resolv.conf:

# systemctl daemon-reload
# systemctl restart docker
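Note that the header of this file warns that resolvconf will overwrite manual edits on the next network event. On hosts running systemd-resolved, a more durable fix is to point the kubelet at the loopback-free resolver list that systemd-resolved maintains, then restart the kubelet. A sketch, assuming a kubeadm-style kubelet config at /var/lib/kubelet/config.yaml:

```
# /var/lib/kubelet/config.yaml (KubeletConfiguration)
# systemd-resolved keeps the real upstream servers in this file,
# so pods no longer inherit the 127.x.x.x stub resolver:
resolvConf: /run/systemd/resolve/resolv.conf
```

Followed by `systemctl restart kubelet`.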

Check again; the CoreDNS pods are now Running and Ready:

# kubectl get -n kube-system pod
NAME                                    READY   STATUS    RESTARTS   AGE
canal-6z88v                             2/2     Running   0          23h
canal-bnh7s                             2/2     Running   4          23h
coredns-5c98db65d4-2jthp                1/1     Running   276        23h
coredns-5c98db65d4-sgl8j                1/1     Running   276        23h
etcd-k8smaster                          1/1     Running   2          23h
kube-apiserver-k8smaster                1/1     Running   2          23h
kube-controller-manager-k8smaster       1/1     Running   4          23h
kube-proxy-4xqdk                        1/1     Running   2          23h
kube-proxy-nnkdq                        1/1     Running   0          23h
kube-scheduler-k8smaster                1/1     Running   2          23h
kubernetes-dashboard-7d75c474bb-hr6jw   1/1     Running   0          154m
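To confirm nothing is still crashing, you can filter the STATUS column of the listing. A minimal sketch, using a captured sample in place of live `kubectl get -n kube-system pod` output:

```shell
# Sample output; on a live cluster, pipe `kubectl get -n kube-system pod` instead.
kubectl_output='NAME READY STATUS RESTARTS AGE
coredns-5c98db65d4-2jthp 1/1 Running 276 23h
coredns-5c98db65d4-sgl8j 1/1 Running 276 23h'

# Print the name of any pod whose STATUS is not Running;
# if none are found, report that everything is healthy.
echo "$kubectl_output" | awk 'NR > 1 && $3 != "Running" { print $1; bad = 1 }
                             END { if (!bad) print "all pods Running" }'
```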

 
