K8s Cluster Setup: Collected Errors

1

etcd

Aug 10 14:12:32 k8master-1 etcd[23435]: {"level":"warn","ts":"2022-08-10T14:12:32.069+0800","caller":"rafthttp/http.go:500","msg":"request cluster ID mismatch","local-member-id":"44ec88b2ad8081e","local-member-cluster-id":"ced548654624706f","local-member-server-version":"3.5.0","local-member-server-minimum-cluster-version":"3.0.0","remote-peer-server-name":"1d412b7cdf0f5787","remote-peer-server-version":"3.5.0","remote-peer-server-minimum-cluster-version":"3.0.0","remote-peer-cluster-id":"8c96ad28e090da8f"}

kube-apiserver

E0810 14:15:31.208449   22888 controller.go:223] unable to sync kubernetes service: etcdserver: requested lease not found
E0810 14:15:41.208772   22888 controller.go:223] unable to sync kubernetes service: etcdserver: requested lease not found

Troubleshooting:

[root@k8master-1 work]#  /app/k8s/bin/etcdctl --cacert=/etc/kubernetes/cert/ca.pem  --cert=/etc/etcd/cert/etcd.pem --key=/etc/etcd/cert/etcd-key.pem --endpoints=https://192.168.159.156:2379,https://192.168.159.158:2379,https://192.168.159.159:2379 member list  -w table
+------------------+---------+------------+------------------------------+------------------------------+------------+
|        ID        | STATUS  |    NAME    |          PEER ADDRS          |         CLIENT ADDRS         | IS LEARNER |
+------------------+---------+------------+------------------------------+------------------------------+------------+
|  44ec88b2ad8081e | started | k8master-1 | https://192.168.159.156:2380 |                              |      false |
|  7d173c333430d55 | started | k8worker-2 | https://192.168.159.159:2380 | https://192.168.159.159:2379 |      false |
| 1d412b7cdf0f5787 | started | k8worker-1 | https://192.168.159.158:2380 | https://192.168.159.158:2379 |      false |
+------------------+---------+------------+------------------------------+------------------------------+------------+

[root@k8master-1 work]#  /app/k8s/bin/etcdctl --cacert=/etc/kubernetes/cert/ca.pem  --cert=/etc/etcd/cert/etcd.pem --key=/etc/etcd/cert/etcd-key.pem --endpoints=https://192.168.159.156:2379,https://192.168.159.158:2379,https://192.168.159.159:2379 endpoint status  -w table
+------------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|           ENDPOINT           |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+------------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://192.168.159.156:2379 |  44ec88b2ad8081e |   3.5.0 |  741 kB |      true |      false |        14 |       6986 |               6986 |        |
| https://192.168.159.158:2379 | 1d412b7cdf0f5787 |   3.5.0 |  1.3 MB |      true |      false |        17 |      40171 |              40171 |        |
| https://192.168.159.159:2379 |  7d173c333430d55 |   3.5.0 |  1.3 MB |     false |      false |        17 |      40171 |              40171 |        |
+------------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
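One way to spot this state mechanically is to count `true` values in the IS LEADER column; a healthy cluster has exactly one. A minimal sketch, assuming the `-w table` layout shown above (IS LEADER is the fifth data column):

```shell
# Sketch: count leaders in `etcdctl endpoint status -w table`
# output read on stdin. Anything other than 1 is a problem.
count_leaders() {
  awk -F'|' '/^[|] *https/ { gsub(/ /, "", $6); if ($6 == "true") n++ } END { print n+0 }'
}
```

Against the table above this prints 2, i.e. a split cluster.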

If two members show IS LEADER = true and the logs contain "request cluster ID mismatch", the mismatched member is carrying stale cluster state. On that member, delete:
/app/k8s/etcd/work/*
/app/k8s/etcd/wal/*
then restart the service.

Fix:

systemctl stop etcd.service
systemctl status etcd.service
rm -f /app/k8s/etcd/work/*
rm -f /app/k8s/etcd/wal/*
systemctl start etcd.service
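The commands above can be sketched as one guarded function. Paths and the unit name follow this cluster's layout (/app/k8s/etcd/work, /app/k8s/etcd/wal); the is-active check is an added safety assumption, not part of the original steps:

```shell
# Sketch: wipe a desynced member's state and restart it.
# Usage: reset_etcd_member <data-dir> <wal-dir>
reset_etcd_member() {
  local data_dir="$1" wal_dir="$2"
  systemctl stop etcd.service
  # Refuse to wipe state while the process is still running.
  if systemctl is-active --quiet etcd.service; then
    echo "etcd.service still active; aborting" >&2
    return 1
  fi
  rm -f "${data_dir}"/* "${wal_dir}"/*
  systemctl start etcd.service
}

# Run only on the member whose cluster ID mismatches:
# reset_etcd_member /app/k8s/etcd/work /app/k8s/etcd/wal
```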

Normal logs after recovery:

Aug 10 15:04:35 k8worker-2 etcd[56620]: {"level":"info","ts":"2022-08-10T15:04:35.319+0800","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2245}
Aug 10 15:04:35 k8worker-2 etcd[56620]: {"level":"info","ts":"2022-08-10T15:04:35.319+0800","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":2245,"took":"63.833µs"}
Aug 10 15:09:35 k8worker-2 etcd[56620]: {"level":"info","ts":"2022-08-10T15:09:35.326+0800","caller":"mvcc/index.go:214","msg":"compact tree index","revision":2247}
Aug 10 15:09:35 k8worker-2 etcd[56620]: {"level":"info","ts":"2022-08-10T15:09:35.327+0800","caller":"mvcc/kvstore_compaction.go:57","msg":"finished scheduled compaction","compact-revision":2247,"took":"46.555µs"}

2

etcd

{"level":"fatal","ts":"2022-08-10T15:03:50.046+0800","caller":"etcdmain/etcd.go:203","msg":"discovery failed","error":"cannot fetch cluster info from peer urls: could not retrieve cluster information from the given URLs","stacktrace":"go.etcd.io/etcd/server/v3/etcdmain.startEtcdOrProxyV2\n\t/tmp/etcd-release-3.5.0/etcd/release/etcd/server/etcdmain/etcd.go:203\ngo.etcd.io/etcd/server/v3/etcdmain.Main\n\t/tmp/etcd-release-3.5.0/etcd/release/etcd/server/etcdmain/main.go:40\nmain.main\n\t/tmp/etcd-release-3.5.0/etcd/release/etcd/server/main.go:32\nruntime.main\n\t/home/remote/sbatsche/.gvm/gos/go1.16.3/src/runtime/proc.go:225"}
Aug 10 15:03:50 k8master-1 systemd[1]: etcd.service: main process exited, code=exited, status=1/FAILURE
Aug 10 15:03:50 k8master-1 systemd[1]: Failed to start Etcd Server.
Aug 10 15:03:50 k8master-1 systemd[1]: Unit etcd.service entered failed state.
Aug 10 15:03:50 k8master-1 systemd[1]: etcd.service failed.

The other etcd members had not been started yet; once they are started, this member comes up normally.
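Before restarting, it helps to list the peers this member expects so that each one's etcd.service can be confirmed running. A sketch, assuming the systemd unit passes `--initial-cluster` on its ExecStart line (that flag layout is an assumption about this setup):

```shell
# Sketch: print the peer URLs named in --initial-cluster
# inside a systemd unit file.
# Usage: list_initial_peers <unit-file>
list_initial_peers() {
  grep -o -- '--initial-cluster=[^ ]*' "$1" \
    | cut -d= -f2- \
    | tr ',' '\n' \
    | cut -d= -f2
}

# list_initial_peers /etc/systemd/system/etcd.service
```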

3

kube-controller-manager

Aug 10 15:23:01 k8master-1 kube-controller-manager[35641]: unable to load configmap based request-header-client-ca-file: Get "https://192.168.159.156:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": x509: certificate signed by unknown authority

Troubleshooting:

[root@k8master-1 work]# cat /etc/systemd/system/kube-controller-manager.service |grep pem
  --client-ca-file=/etc/kubernetes/cert/ca.pem \
  --cluster-signing-cert-file=/etc/kubernetes/cert/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/cert/ca-key.pem \
  --root-ca-file=/etc/kubernetes/cert/ca.pem \
  --service-account-private-key-file=/etc/kubernetes/cert/apiserver-key.pem \
  --tls-cert-file=/etc/kubernetes/cert/kube-controller-manager.pem \
  --tls-private-key-file=/etc/kubernetes/cert/kube-controller-manager-key.pem \
[root@k8master-1 work]# cfssl certinfo -cert /etc/kubernetes/cert/ca.pem
{
   
  "subject": 