[k8s 1.18.3 bug] Kubernetes fails to start after a power outage

Problem description

After a power outage, a single-node Kubernetes 1.18.3 cluster came back up, but kubectl could no longer reach the apiserver:

# kubectl get pods
The connection to the server 192.168.145.10:6443 was refused - did you specify the right host or port?
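
Before digging into containers, a quick sanity check confirms that kubelet itself is running and that nothing is listening on the apiserver port (a sketch, assuming a systemd-managed kubeadm node):

# systemctl status kubelet
# ss -ltnp | grep 6443

If kubelet is active but port 6443 has no listener, the static pods are being restarted yet the apiserver still is not up, which points at the control-plane containers themselves.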

Locating the problem

Check the containers running on the master:

# docker ps
CONTAINER ID        IMAGE                                                           COMMAND                  CREATED             STATUS              PORTS               NAMES
d5d2357ea65e        registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2   "/pause"                 27 minutes ago      Up 27 minutes                           k8s_POD_etcd-192.168.145.10_kube-system_21189a7ddc51b638176c56716eb91256_2
57b6c19b6bd6        da26705ccb4b                                                    "kube-controller-man…"   10 hours ago        Up 10 hours                             k8s_kube-controller-manager_kube-controller-manager-192.168.145.10_kube-system_e8215a6985e0ba96a78073d5dcf09c37_4
dc5eee2a1626        76216c34ed0c                                                    "kube-scheduler --au…"   10 hours ago        Up 10 hours                             k8s_kube-scheduler_kube-scheduler-192.168.145.10_kube-system_c63a370803ea358d14eb11f27c64756f_2
f80fc375a286        registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2   "/pause"                 10 hours ago        Up 10 hours                             k8s_POD_kube-scheduler-192.168.145.10_kube-system_c63a370803ea358d14eb11f27c64756f_2
9c05c9c22574        registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2   "/pause"                 10 hours ago        Up 10 hours                             k8s_POD_kube-controller-manager-192.168.145.10_kube-system_e8215a6985e0ba96a78073d5dcf09c37_2
e6dc2d8d799b        registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2   "/pause"                 10 hours ago        Up 10 hours                             k8s_POD_kube-apiserver-192.168.145.10_kube-system_b50742472f0e509111661bf79af9b13a_2

Neither kube-apiserver nor etcd is running, which narrows the problem down nicely. The next step is to check the logs of the etcd container, which is now in the Exited state.
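
An exited container does not appear in plain docker ps output, but docker ps -a lists it; this is how the container ID used below can be found:

# docker ps -a --filter "status=exited" | grep etcd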

# docker logs --tail=1000 242f749c0b65
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-06-26 03:43:48.883399 I | etcdmain: etcd Version: 3.4.3
2020-06-26 03:43:48.883427 I | etcdmain: Git SHA: 3cf2f69b5
2020-06-26 03:43:48.883429 I | etcdmain: Go Version: go1.12.12
2020-06-26 03:43:48.883431 I | etcdmain: Go OS/Arch: linux/amd64
2020-06-26 03:43:48.883434 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
2020-06-26 03:43:48.883481 N | etcdmain: the server is already initialized as member before, starting as etcd member...
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-06-26 03:43:48.883509 I | embed: peerTLS: cert = /etc/kubernetes/pki/etcd/peer.crt, key = /etc/kubernetes/pki/etcd/peer.key, trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true, crl-file = 
2020-06-26 03:43:48.884075 I | embed: name = 192.168.145.10
2020-06-26 03:43:48.884092 I | embed: data dir = /var/lib/etcd
2020-06-26 03:43:48.884095 I | embed: member dir = /var/lib/etcd/member
2020-06-26 03:43:48.884097 I | embed: heartbeat = 100ms
2020-06-26 03:43:48.884099 I | embed: election = 1000ms
2020-06-26 03:43:48.884101 I | embed: snapshot count = 10000
2020-06-26 03:43:48.884127 I | embed: advertise client URLs = https://192.168.145.10:2379
2020-06-26 03:43:48.884131 I | embed: initial advertise peer URLs = https://192.168.145.10:2380
2020-06-26 03:43:48.884134 I | embed: initial cluster = 
2020-06-26 03:43:48.885537 I | etcdserver: recovered store from snapshot at index 250026
2020-06-26 03:43:48.887606 C | etcdserver: recovering backend from snapshot error: failed to find database snapshot file (snap: snapshot file doesn't exist)
panic: recovering backend from snapshot error: failed to find database snapshot file (snap: snapshot file doesn't exist)
        panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x20 pc=0xc2cc4e]

goroutine 1 [running]:
go.etcd.io/etcd/etcdserver.NewServer.func1(0xc000246f50, 0xc000244f48)
        /tmp/etcd-release-3.4.3/etcd/release/etcd/etcdserver/server.go:335 +0x3e
panic(0xed6960, 0xc00026ea40)
        /usr/local/go/src/runtime/panic.go:522 +0x1b5
github.com/coreos/pkg/capnslog.(*PackageLogger).Panicf(0xc00013dda0, 0x10aeaf5, 0x2a, 0xc000245018, 0x1, 0x1)
        /home/ec2-user/go/pkg/mod/github.com/coreos/pkg@v0.0.0-20160727233714-3ac0863d7acf/capnslog/pkg_logger.go:75 +0x135
go.etcd.io/etcd/etcdserver.NewServer(0x7ffdd25a9e70, 0xe, 0x0, 0x0, 0x0, 0x0, 0xc0001aaf80, 0x1, 0x1, 0xc0001ab100, ...)
        /tmp/etcd-release-3.4.3/etcd/release/etcd/etcdserver/server.go:456 +0x42f7
go.etcd.io/etcd/embed.StartEtcd(0xc000208000, 0xc000208580, 0x0, 0x0)
        /tmp/etcd-release-3.4.3/etcd/release/etcd/embed/etcd.go:211 +0x9d0
go.etcd.io/etcd/etcdmain.startEtcd(0xc000208000, 0x108423e, 0x6, 0x1, 0xc00015f1d0)
        /tmp/etcd-release-3.4.3/etcd/release/etcd/etcdmain/etcd.go:302 +0x40
go.etcd.io/etcd/etcdmain.startEtcdOrProxyV2()
        /tmp/etcd-release-3.4.3/etcd/release/etcd/etcdmain/etcd.go:144 +0x2f71
go.etcd.io/etcd/etcdmain.Main()
        /tmp/etcd-release-3.4.3/etcd/release/etcd/etcdmain/main.go:46 +0x38
main.main()
        /tmp/etcd-release-3.4.3/etcd/release/etcd/main.go:28 +0x20

The startup failure is clearly caused by an exception while etcd runs its recovery process:

2020-06-26 03:43:48.887606 C | etcdserver: recovering backend from snapshot error: failed to find database snapshot file (snap: snapshot file doesn't exist)
panic: recovering backend from snapshot error: failed to find database snapshot file (snap: snapshot file doesn't exist)
        panic: runtime error: invalid memory address or nil pointer dereference

Analyzing the etcd failure

To make troubleshooting easier, download the matching etcd release and start etcd directly on the master host:

  ./etcd \
  --advertise-client-urls=https://192.168.145.10:2379 \
  --cert-file=/etc/kubernetes/pki/etcd/server.crt \
  --client-cert-auth=true \
  --data-dir=/var/lib/etcd \
  --initial-advertise-peer-urls=https://192.168.145.10:2380 \
  --initial-cluster=192.168.145.10=https://192.168.145.10:2380 \
  --key-file=/etc/kubernetes/pki/etcd/server.key \
  --listen-client-urls=https://127.0.0.1:2379,https://192.168.145.10:2379 \
  --listen-metrics-urls=http://127.0.0.1:2381 \
  --listen-peer-urls=https://192.168.145.10:2380 \
  --name=192.168.145.10 \
  --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt \
  --peer-client-cert-auth=true \
  --peer-key-file=/etc/kubernetes/pki/etcd/peer.key \
  --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt \
  --snapshot-count=10000 \
  --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt \
  --log-level='debug' \
  --logger=zap 

Two flags are added to the original command line:

--log-level='debug' raises the log level to debug, which makes troubleshooting easier.

--logger=zap switches to the zap logger, which emits structured JSON and surfaces more detail in each log entry.

The output is as follows:

{"level":"info","ts":"2020-06-26T12:05:00.053+0800","caller":"etcdmain/etcd.go:134","msg":"server has been already initialized","data-dir":"/var/lib/etcd","dir-type":"member"}
{"level":"info","ts":"2020-06-26T12:05:00.053+0800","caller":"embed/etcd.go:117","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.145.10:2380"]}
{"level":"info","ts":"2020-06-26T12:05:00.053+0800","caller":"embed/etcd.go:465","msg":"starting with peer TLS","tls-info":"cert = /etc/kubernetes/pki/etcd/peer.crt, key = /etc/kubernetes/pki/etcd/peer.key, trusted-ca = /etc/kubernetes/pki/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
{"level":"info","ts":"2020-06-26T12:05:00.053+0800","caller":"embed/etcd.go:127","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.145.10:2379"]}
{"level":"info","ts":"2020-06-26T12:05:00.053+0800","caller":"embed/etcd.go:299","msg":"starting an etcd server","etcd-version":"3.4.3","git-sha":"3cf2f69b5","go-version":"go1.12.12","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"192.168.145.10","data-dir":"/var/lib/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.145.10:2380"],"listen-peer-urls":["https://192.168.145.10:2380"],"advertise-client-urls":["https://192.168.145.10:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.145.10:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-size-bytes":2147483648,"pre-vote":false,"initial-corrupt-check":false,"corrupt-check-time-interval":"0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":""}
{"level":"info","ts":"2020-06-26T12:05:00.054+0800","caller":"etcdserver/backend.go:79","msg":"opened backend db","path":"/var/lib/etcd/member/snap/db","took":"555.779µs"}
{"level":"info","ts":"2020-06-26T12:05:00.055+0800","caller":"etcdserver/server.go:443","msg":"recovered v2 store from snapshot","snapshot-index":250026,"snapshot-size":"7.5 kB"}
{"level":"warn","ts":"2020-06-26T12:05:00.058+0800","caller":"snap/db.go:92","msg":"failed to find [SNAPSHOT-INDEX].snap.db","snapshot-index":250026,"snapshot-file-path":"/var/lib/etcd/member/snap/000000000003d0aa.snap.db","error":"snap: snapshot file doesn't exist"}
{"level":"panic","ts":"2020-06-26T12:05:00.058+0800","caller":"etcdserver/server.go:454","msg":"failed to recover v3 backend from snapshot","error":"failed to find database snapshot file (snap: snapshot file doesn't exist)","stacktrace":"go.etcd.io/etcd/etcdserver.NewServer\n\t/tmp/etcd-release-3.4.3/etcd/release/etcd/etcdserver/server.go:454\ngo.etcd.io/etcd/embed.StartEtcd\n\t/tmp/etcd-release-3.4.3/etcd/release/etcd/embed/etcd.go:211\ngo.etcd.io/etcd/etcdmain.startEtcd\n\t/tmp/etcd-release-3.4.3/etcd/release/etcd/etcdmain/etcd.go:302\ngo.etcd.io/etcd/etcdmain.startEtcdOrProxyV2\n\t/tmp/etcd-release-3.4.3/etcd/release/etcd/etcdmain/etcd.go:144\ngo.etcd.io/etcd/etcdmain.Main\n\t/tmp/etcd-release-3.4.3/etcd/release/etcd/etcdmain/main.go:46\nmain.main\n\t/tmp/etcd-release-3.4.3/etcd/release/etcd/main.go:28\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:200"}
panic: failed to recover v3 backend from snapshot
        panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x20 pc=0xc2cc4e]

goroutine 1 [running]:
go.etcd.io/etcd/etcdserver.NewServer.func1(0xc000392f50, 0xc000390f48)
        /tmp/etcd-release-3.4.3/etcd/release/etcd/etcdserver/server.go:335 +0x3e
panic(0xed6960, 0xc000048050)
        /usr/local/go/src/runtime/panic.go:522 +0x1b5
go.uber.org/zap/zapcore.(*CheckedEntry).Write(0xc0000d0c60, 0xc0000c2340, 0x1, 0x1)
        /home/ec2-user/go/pkg/mod/go.uber.org/zap@v1.10.0/zapcore/entry.go:229 +0x546
go.uber.org/zap.(*Logger).Panic(0xc00007fa40, 0x10ae243, 0x2a, 0xc0000c2340, 0x1, 0x1)
        /home/ec2-user/go/pkg/mod/go.uber.org/zap@v1.10.0/logger.go:225 +0x7f
go.etcd.io/etcd/etcdserver.NewServer(0x7ffd9142473d, 0xe, 0x0, 0x0, 0x0, 0x0, 0xc00016ac00, 0x1, 0x1, 0xc00016ad80, ...)
        /tmp/etcd-release-3.4.3/etcd/release/etcd/etcdserver/server.go:454 +0x3c85
go.etcd.io/etcd/embed.StartEtcd(0xc000200000, 0xc000200580, 0x0, 0x0)
        /tmp/etcd-release-3.4.3/etcd/release/etcd/embed/etcd.go:211 +0x9d0
go.etcd.io/etcd/etcdmain.startEtcd(0xc000200000, 0x108423e, 0x6, 0xc00016b401, 0x2)
        /tmp/etcd-release-3.4.3/etcd/release/etcd/etcdmain/etcd.go:302 +0x40
go.etcd.io/etcd/etcdmain.startEtcdOrProxyV2()
        /tmp/etcd-release-3.4.3/etcd/release/etcd/etcdmain/etcd.go:144 +0x2f71
go.etcd.io/etcd/etcdmain.Main()
        /tmp/etcd-release-3.4.3/etcd/release/etcd/etcdmain/main.go:46 +0x38
main.main()
        /tmp/etcd-release-3.4.3/etcd/release/etcd/main.go:28 +0x20

This pinpoints the error:

{"level":"warn","ts":"2020-06-26T12:05:00.058+0800","caller":"snap/db.go:92","msg":"failed to find [SNAPSHOT-INDEX].snap.db","snapshot-index":250026,"snapshot-file-path":"/var/lib/etcd/member/snap/000000000003d0aa.snap.db","error":"snap: snapshot file doesn't exist"}

However, the /var/lib/etcd/ directory tree looks like this:

.
└── member
    ├── snap
    │   ├── 0000000000000003-0000000000033466.snap
    │   ├── 0000000000000003-0000000000035b77.snap
    │   ├── 0000000000000003-0000000000038288.snap
    │   ├── 0000000000000003-000000000003a999.snap
    │   ├── 0000000000000003-000000000003d0aa.snap
    │   └── db
    └── wal
        ├── 0000000000000000-0000000000000000.wal
        ├── 0000000000000001-0000000000014a00.wal
        ├── 0000000000000002-000000000002a59b.wal
        └── 0.tmp 
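
The logged snapshot index explains exactly which file etcd wants: 250026 in decimal is 3d0aa in hex, so recovery looks for 000000000003d0aa.snap.db next to the 0000000000000003-000000000003d0aa.snap file that does exist. A one-liner confirms the mapping:

# printf '%016x\n' 250026
000000000003d0aa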

The .snap.db file etcd needs for recovery is clearly absent from the snap directory. I suspect this is a bug...

etcd "failed to find [SNAPSHOT-INDEX].snap.db": searching for a solution

When in doubt, search the web first.

There are indeed hits, such as "k8s实践9:一次失败的kubernetes集群崩溃处理记录" and a few similar posts, but their approach is to wipe the etcd data files and restart the service, which is absolutely not an option in production...

A side rant about Baidu: searching for the exact error string got me nowhere, for the Nth time... Switching to Bing finally turned up issues in both the kubernetes and etcd GitHub repositories describing this exact problem, which I also followed:

https://github.com/etcd-io/etcd/issues/11949

https://github.com/kubernetes/kubernetes/issues/88574

Neither issue is resolved at the time of writing... I will keep updating this post as a fix emerges; consider this a placeholder.

Running a database without backups is reckless

When a failure cannot be resolved quickly, the first priority is to bring the production cluster back, so restore from backup right away.

A quick refresher on the etcd backup commands:

./etcdctl snapshot save /tmp/snapshot.db.20200628 \
 --cacert=/etc/kubernetes/pki/etcd/ca.crt \
 --cert=/etc/kubernetes/pki/etcd/server.crt \
 --key=/etc/kubernetes/pki/etcd/server.key \
 --endpoints="https://192.168.145.10:2379" 
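
It is worth sanity-checking the snapshot file before relying on it. Assuming etcdctl 3.4, where API v3 is the default, snapshot status reads the file locally without contacting the cluster:

./etcdctl snapshot status /tmp/snapshot.db.20200628 --write-out=table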

# Back up the key list as well, to verify the keys are intact after a restore
./etcdctl get / --prefix --keys-only \
 --cacert=/etc/kubernetes/pki/etcd/ca.crt \
 --cert=/etc/kubernetes/pki/etcd/server.crt \
 --key=/etc/kubernetes/pki/etcd/server.key \
 --endpoints="https://192.168.145.10:2379" \
 | grep -v "^$" > /tmp/snapshot.num_list.20200628.txt

Restore

At this point etcd is stopped.

1. Set aside the damaged etcd data files:

mv /var/lib/etcd /var/lib/etcd.2020-06-27

2. Restore from the saved snapshot:

./etcdctl snapshot restore snapshot.db.20200628 --data-dir /var/lib/etcd \
 --cacert=/etc/kubernetes/pki/etcd/ca.crt \
 --cert=/etc/kubernetes/pki/etcd/server.crt \
 --key=/etc/kubernetes/pki/etcd/server.key \
 --endpoints="https://192.168.145.10:2379"

3. Start etcd:

./etcd \
--advertise-client-urls=https://192.168.145.10:2379 \
--cert-file=/etc/kubernetes/pki/etcd/server.crt \
--client-cert-auth=true \
--data-dir=/var/lib/etcd \
--initial-advertise-peer-urls=https://192.168.145.10:2380 \
--initial-cluster=192.168.145.10=https://192.168.145.10:2380 \
--key-file=/etc/kubernetes/pki/etcd/server.key \
--listen-client-urls=https://127.0.0.1:2379,https://192.168.145.10:2379 \
--listen-metrics-urls=http://127.0.0.1:2381 \
--listen-peer-urls=https://192.168.145.10:2380 \
--name=192.168.145.10 \
--peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt \
--peer-client-cert-auth=true \
--peer-key-file=/etc/kubernetes/pki/etcd/peer.key \
--peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt \
--snapshot-count=10000 \
--trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt \
--log-level='debug' \
--logger=zap

Startup succeeded.
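
With etcd back up, the key list saved during the backup can be diffed against the restored data to confirm nothing was lost (a sketch; /tmp/restored.num_list.txt is an illustrative path):

./etcdctl get / --prefix --keys-only \
 --cacert=/etc/kubernetes/pki/etcd/ca.crt \
 --cert=/etc/kubernetes/pki/etcd/server.crt \
 --key=/etc/kubernetes/pki/etcd/server.key \
 --endpoints="https://192.168.145.10:2379" \
 | grep -v "^$" > /tmp/restored.num_list.txt

diff /tmp/snapshot.num_list.20200628.txt /tmp/restored.num_list.txt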

Repairing the damaged etcd data files

(To be filled in once the upstream issues above are resolved.)


