kubeadm: kube-apiserver stuck in Exited and never comes up (troubleshooting notes)

[root@k8s-master01 log]# crictl ps -a
CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
b7af23a98302e       fce326961ae2d       16 seconds ago       Running             etcd                      29                  16fc6f83a01d2       etcd-k8s-master01
1d0efa5c0c12d       a31e1d84401e6       About a minute ago   Exited              kube-apiserver            1693                dfc4c0a0c3e03       kube-apiserver-k8s-master01
275f08ddab851       fce326961ae2d       6 minutes ago        Exited              etcd                      28                  16fc6f83a01d2       etcd-k8s-master01
b11025cbc4661       5d7c5dfd3ba18       6 hours ago          Running             kube-controller-manager   27                  0ff05b544ff48       kube-controller-manager-k8s-master01
4db4688c2687f       dafd8ad70b156       30 hours ago         Running             kube-scheduler            25                  5f4d13cedf450       kube-scheduler-k8s-master01
b311bf0e66852       54637cb36d4a1       7 days ago           Running             calico-node               0                   ff2f4ac3783bb       calico-node-2zqhn
108695e1af006       a1a5060fe43dc       9 days ago           Running             kuboard                   2                   7bee3baf06a62       kuboard-cc79974cd-t9jth
536a8cdfb0a9b       115053965e86b       9 days ago           Running             metrics-scraper           2                   046881f3feea3       metrics-scraper-7f4896c5d7-6w6ld
c91c3382c9c9d       556768f31eb1d       9 days ago           Running             kube-proxy                6                   ce658d774a03b       kube-proxy-gsv75
[root@k8s-master01 log]#

Check the logs:

cat /var/log/messages | grep kube-apiserver | grep -i error
Feb 16 12:08:16 k8s-master01 kubelet: E0216 12:08:16.192310    8996 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-k8s-master01_kube-system(d6b90b54ef1ec678fb3557edd6baf627)\"" pod="kube-system/kube-apiserver-k8s-master01" podUID=d6b90b54ef1ec678fb3557edd6baf627
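
On a systemd host the same kubelet errors can also be read from the journal instead of /var/log/messages (a sketch, assuming the kubelet runs as a systemd unit named kubelet):

    journalctl -u kubelet --no-pager | grep -iE 'kube-apiserver.*(error|back-off)'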

Because kube-apiserver itself is down, the usual troubleshooting commands (kubectl describe, kubectl logs, and the like) are all unusable, since they talk to the API server.

Since the cluster runs its containers under containerd.service, crictl logs still works: it reads container logs directly from the runtime, no API server required.
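
For reference, a minimal way to jump straight to the failing container's logs (crictl ps accepts a --name filter and crictl logs accepts --tail; the container ID below is a placeholder):

    crictl ps -a --name kube-apiserver    # every attempt, including Exited ones
    crictl logs --tail 50 <CONTAINER_ID>  # last 50 lines of a given attempt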

[root@k8s-master01 log]# crictl  ps -a
CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
b3c010e0082be       a31e1d84401e6       39 seconds ago      Exited              kube-apiserver            1701                dfc4c0a0c3e03       kube-apiserver-k8s-master01
401eba6ca6507       fce326961ae2d       2 minutes ago       Running             etcd                      36                  16fc6f83a01d2       etcd-k8s-master01
b11025cbc4661       5d7c5dfd3ba18       7 hours ago         Running             kube-controller-manager   27                  0ff05b544ff48       kube-controller-manager-k8s-master01
4db4688c2687f       dafd8ad70b156       31 hours ago        Running             kube-scheduler            25                  5f4d13cedf450       kube-scheduler-k8s-master01
b311bf0e66852       54637cb36d4a1       7 days ago          Running             calico-node               0                   ff2f4ac3783bb       calico-node-2zqhn
108695e1af006       a1a5060fe43dc       9 days ago          Running             kuboard                   2                   7bee3baf06a62       kuboard-cc79974cd-t9jth
536a8cdfb0a9b       115053965e86b       9 days ago          Running             metrics-scraper           2                   046881f3feea3       metrics-scraper-7f4896c5d7-6w6ld
c91c3382c9c9d       556768f31eb1d       9 days ago          Running             kube-proxy                6                   ce658d774a03b       kube-proxy-gsv75
[root@k8s-master01 log]# crictl  logs b3c010e0082be
I0216 04:41:58.479545       1 server.go:555] external host was not specified, using 192.168.40.240
I0216 04:41:58.480162       1 server.go:163] Version: v1.26.0
I0216 04:41:58.480188       1 server.go:165] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0216 04:41:58.662208       1 shared_informer.go:273] Waiting for caches to sync for node_authorizer
I0216 04:41:58.662957       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0216 04:41:58.662977       1 plugins.go:161] Loaded 12 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
E0216 04:42:18.665556       1 run.go:74] "command failed" err="context deadline exceeded"
W0216 04:42:18.665571       1 logging.go:59] [core] [Channel #1 SubChannel #2] grpc: addrConn.createTransport failed to connect to {
  "Addr": "127.0.0.1:2379",
  "ServerName": "127.0.0.1",
  "Attributes": null,
  "BalancerAttributes": null,
  "Type": 0,
  "Metadata": null
}. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
[root@k8s-master01 log]#

So the API server is failing to reach etcd on client port 2379: the TLS handshake to 127.0.0.1:2379 times out.
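
Before reading the etcd container logs, two quick local checks confirm whether anything is listening and whether etcd reports itself healthy. A sketch: 2381 is the kubeadm default --listen-metrics-urls port (visible in the etcd manifest later in this post), and the certificate paths assume the standard kubeadm layout:

    ss -tlnp | grep -E '2379|2380'          # is etcd listening at all?
    curl -s http://127.0.0.1:2381/health    # plain-HTTP health endpoint served by etcd
    ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
      --cacert=/etc/kubernetes/pki/etcd/ca.crt \
      --cert=/etc/kubernetes/pki/etcd/server.crt \
      --key=/etc/kubernetes/pki/etcd/server.key \
      endpoint health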

crictl logs 860b2d24e75c3   # the etcd container's ID
{"level":"info","ts":"2023-02-16T04:48:39.238Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e7a1aee40b9b8621 [logterm: 107, index: 5657524] sent MsgPreVote request to 9331fa9e272a9c3f at term 107"}
{"level":"warn","ts":"2023-02-16T04:48:40.151Z","caller":"etcdserver/server.go:2075","msg":"failed to publish local member to cluster through raft","local-member-id":"e7a1aee40b9b8621","local-member-attributes":"{Name:k8s-master01 ClientURLs:[https://192.168.40.240:2379]}","request-path":"/0/members/e7a1aee40b9b8621/attributes","publish-timeout":"7s","error":"etcdserver: request timed out"}
{"level":"info","ts":"2023-02-16T04:48:40.438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e7a1aee40b9b8621 is starting a new election at term 107"}
{"level":"info","ts":"2023-02-16T04:48:40.438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e7a1aee40b9b8621 became pre-candidate at term 107"}
{"level":"info","ts":"2023-02-16T04:48:40.438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e7a1aee40b9b8621 received MsgPreVoteResp from e7a1aee40b9b8621 at term 107"}
{"level":"info","ts":"2023-02-16T04:48:40.438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e7a1aee40b9b8621 [logterm: 107, index: 5657524] sent MsgPreVote request to 5db16f545fa302b7 at term 107"}
{"level":"info","ts":"2023-02-16T04:48:40.438Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e7a1aee40b9b8621 [logterm: 107, index: 5657524] sent MsgPreVote request to 9331fa9e272a9c3f at term 107"}
{"level":"warn","ts":"2023-02-16T04:48:40.447Z","caller":"etcdhttp/metrics.go:173","msg":"serving /health false; no leader"}
{"level":"warn","ts":"2023-02-16T04:48:40.448Z","caller":"etcdhttp/metrics.go:86","msg":"/health error","output":"{\"health\":\"false\",\"reason\":\"RAFT NO LEADER\"}","status-code":503}
.......
{"level":"warn","ts":"2023-02-16T06:55:15.139Z","caller":"etcdserver/server.go:2075","msg":"failed to publish local member to cluster through raft","local-member-id":"e7a1aee40b9b8621","local-member-attributes":"{Name:k8s-master01 ClientURLs:[https://192.168.40.240:2379]}","request-path":"/0/members/e7a1aee40b9b8621/attributes","publish-timeout":"7s","error":"etcdserver: request timed out"}
[root@k8s-master01 log]#

From these logs the problem looks internal to etcd: the member keeps starting pre-vote elections at term 107 and never finds a leader.
Reference:
http://www.caotama.com/1864029.html
The takeaway there: the other etcd members need to come up first. A multi-member raft cluster cannot elect a leader from a single member, so this node only finishes starting and serves clients once it can reach its peers and the cluster regains quorum.
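
To see which members are actually reachable, run the health check against each member in turn. A sketch: the three client endpoints are derived from the --initial-cluster flag in the master03 etcd manifest shown below (client port 2379 assumed alongside each peer address), with kubeadm-standard certificate paths:

    # A 3-member raft cluster needs at least 2 healthy members to elect a leader.
    for ep in https://192.168.40.240:2379 https://172.19.217.32:2379 https://172.19.217.2:2379; do
      ETCDCTL_API=3 etcdctl --endpoints=$ep \
        --cacert=/etc/kubernetes/pki/etcd/ca.crt \
        --cert=/etc/kubernetes/pki/etcd/server.crt \
        --key=/etc/kubernetes/pki/etcd/server.key \
        endpoint health
    done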

Checking the standby node turned up the cause: the host has multiple NICs and its IP is assigned dynamically.

[root@k8s-master03 manifests]# crictl ps -a
CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
a62d091cb77d6       a31e1d84401e6       24 seconds ago      Exited              kube-apiserver            92                  f32dfab0c6d61       kube-apiserver-k8s-master03
580675496bfae       fce326961ae2d       21 minutes ago      Exited              etcd                      88                  40fa395fc5f61       etcd-k8s-master03
d68f4e7158815       5d7c5dfd3ba18       7 hours ago         Running             kube-controller-manager   29                  e46f277014232       kube-controller-manager-k8s-master03
7e42849ff064e       dafd8ad70b156       7 hours ago         Running             kube-scheduler            28                  95d3e74185619       kube-scheduler-k8s-master03
69a554f8f249e       5d7c5dfd3ba18       3 days ago          Exited              kube-controller-manager   28                  d435a4f2a2550       kube-controller-manager-k8s-master03
927c7330986d4       dafd8ad70b156       3 days ago          Exited              kube-scheduler            27                  6bd230494824d       kube-scheduler-k8s-master03
2b61030c4f724       5185b96f0becf       13 days ago         Exited              coredns                   1                   417cff0be932c       coredns-567c556887-gqxc2
e62bfc4d92bf2       54637cb36d4a1       13 days ago         Exited              calico-node               2                   65cb679894ec0       calico-node-4qgnf
e419b9d4ed335       54637cb36d4a1       13 days ago         Exited              mount-bpffs               0                   65cb679894ec0       calico-node-4qgnf
bce9d5d9b2faa       628dd70880410       13 days ago         Exited              install-cni               0                   65cb679894ec0       calico-node-4qgnf
1508b3b2344e8       556768f31eb1d       13 days ago         Exited              kube-proxy                2                   4d7e52b0ee4b5       kube-proxy-qn6lx
ee1622dfb7d36       628dd70880410       13 days ago         Exited              upgrade-ipam              2                   65cb679894ec0       calico-node-4qgnf
[root@k8s-master03 manifests]# 
[root@k8s-master03 manifests]# 
[root@k8s-master03 manifests]# crictl logs 580675496bfae
{"level":"info","ts":"2023-02-16T08:48:38.698Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.19.217.2:2379","--cert-file=/etc/kubernetes/pki/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/eted","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.19.217.2:2380","--initial-cluster=k8s-master03=https://172.19.217.2:2380,k8s-master02=https://172.19.217.32:2380,k8s-master01=https://192.168.40.240:2380","--initial-cluster-state=existing","--key-file=/etc/kubernetes/pki/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.19.217.2:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.19.217.2:2380","--name=k8s-master03","--peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/etc/kubernetes/pki/etcd/peer.key","--peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt","--snapshot-count=10000","--trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt"]}
{"level":"info","ts":"2023-02-16T08:48:38.698Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/eted","dir-type":"member"}
{"level":"info","ts":"2023-02-16T08:48:38.698Z","caller":"embed/etcd.go:124","msg":"configuring peer listeners","listen-peer-urls":["https://172.19.217.2:2380"]}
{"level":"info","ts":"2023-02-16T08:48:38.698Z","caller":"embed/etcd.go:484","msg":"starting with peer TLS","tls-info":"cert = /etc/kubernetes/pki/etcd/peer.crt, key = /etc/kubernetes/pki/etcd/peer.key, client-cert=, client-key=, trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2023-02-16T08:48:38.698Z","caller":"embed/etcd.go:373","msg":"closing etcd server","name":"k8s-master03","data-dir":"/var/lib/eted","advertise-peer-urls":["https://172.19.217.2:2380"],"advertise-client-urls":["https://172.19.217.2:2379"]}
{"level":"info","ts":"2023-02-16T08:48:38.698Z","caller":"embed/etcd.go:375","msg":"closed etcd server","name":"k8s-master03","data-dir":"/var/lib/eted","advertise-peer-urls":["https://172.19.217.2:2380"],"advertise-client-urls":["https://172.19.217.2:2379"]}
{"level":"fatal","ts":"2023-02-16T08:48:38.698Z","caller":"etcdmain/etcd.go:204","msg":"discovery failed","error":"listen tcp 172.19.217.2:2380: bind: cannot assign requested address","stacktrace":"go.etcd.io/etcd/server/v3/etcdmain.startEtcdOrProxyV2\n\tgo.etcd.io/etcd/server/v3/etcdmain/etcd.go:204\ngo.etcd.io/etcd/server/v3/etcdmain.Main\n\tgo.etcd.io/etcd/server/v3/etcdmain/main.go:40\nmain.main\n\tgo.etcd.io/etcd/server/v3/main.go:32\nruntime.main\n\truntime/proc.go:225"}
[root@k8s-master03 manifests]# 

The fatal line is the bind failure: etcd is configured to listen on https://172.19.217.2:2380, but that address is no longer assigned to any interface. The host has multiple NICs with dynamically assigned IPs; eth1 is dedicated to SSH access, while eth0's DHCP address had drifted away from 172.19.217.2. Setting eth0 statically to 172.19.217.2 restored the address etcd expects.

With etcd on the standby node back up, the cluster regained quorum and kube-apiserver could start again. Fixed.
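
One way to verify and apply that fix on a RHEL/CentOS-style host managed by NetworkManager (a sketch: the connection name eth0, the /24 prefix, and the gateway 172.19.217.1 are assumptions, adjust to your network):

    ip -4 addr show eth0    # confirm 172.19.217.2 is currently missing
    nmcli con mod eth0 ipv4.method manual \
        ipv4.addresses 172.19.217.2/24 ipv4.gateway 172.19.217.1
    nmcli con up eth0
    ip -4 addr show eth0    # the address etcd binds should now be present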

