Environment

  • OS: Amazon Linux 2
  • Kubernetes: v1.23.0
  • Docker: 20.10.7
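
These versions can be confirmed directly on the host (a quick sketch using standard version commands, assuming the packages are already installed):

cat /etc/os-release | grep PRETTY_NAME   # Amazon Linux 2
kubeadm version -o short                 # v1.23.0
kubelet --version                        # Kubernetes v1.23.0
docker --version                         # Docker version 20.10.7, ...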

Steps to reproduce

1. Install kubelet, kubectl and kubeadm via yum or rpm (a fuller install sketch follows this step), then run:

systemctl enable --now kubelet
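
For reference, a minimal install sequence looks roughly like this (a sketch assuming the Kubernetes yum repository is already configured on the host; the explicit 1.23.0 version pins and the "kubernetes" repo id passed to --disableexcludes are assumptions, adjust them to your setup):

# assumes /etc/yum.repos.d/kubernetes.repo already exists and its repo id is "kubernetes"
sudo yum install -y kubelet-1.23.0 kubeadm-1.23.0 kubectl-1.23.0 --disableexcludes=kubernetes
sudo systemctl enable --now kubelet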

2. Run sudo kubeadm init --config=kubeadm.yaml. The initialization ultimately fails, reporting that the kubelet health check is not passing (see k8serr02.png).
Checking the system log /var/log/messages shows:

server.go:302] "Failed to run kubelet" err="failed to run Kubelet: misconfiguration: kubelet cgroup driver: \"systemd\" is different from docker cgroup driver: \"cgroupfs\""
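
The same message can also be read from the kubelet unit's journal, which is often quicker than digging through /var/log/messages (standard systemd tooling, nothing specific to this setup):

# filter the kubelet log for the cgroup driver complaint
journalctl -u kubelet --no-pager | grep "cgroup driver"
# or follow it live while kubeadm init is running
journalctl -u kubelet -f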

Solution

1. Reset the kubeadm state left behind by the failed initialization:

echo y|kubeadm reset
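
Equivalently, kubeadm reset accepts -f/--force, which skips the confirmation prompt without the echo pipe:

# non-interactive reset of everything kubeadm init set up on this node
sudo kubeadm reset -f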

2. Fix docker: all that is needed is to add "exec-opts": ["native.cgroupdriver=systemd"] to /etc/docker/daemon.json. My original docker configuration from earlier in this article is included below for reference.

mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "insecure-registries": [
    "172.21.19.137:5000"
  ],
  "registry-mirrors" : [
    "https://8xpk5wnt.mirror.aliyuncs.com"
  ]
}
EOF
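
A malformed daemon.json will prevent docker from starting at all, so it is worth validating the JSON before the restart in step 4 (a sketch assuming python3, or alternatively jq, is available on the host):

# prints the parsed JSON on success, or a syntax error with a line number
python3 -m json.tool /etc/docker/daemon.json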

3. Fix the kubelet configuration:

cat > /var/lib/kubelet/config.yaml <<EOF
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF
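
Editing /var/lib/kubelet/config.yaml by hand is only a stopgap, because (as noted further down) kubeadm init regenerates that file. A more durable option is to declare the driver in the kubeadm config itself, by appending a KubeletConfiguration document to kubeadm.yaml (a sketch; it assumes kubeadm.yaml already contains your existing InitConfiguration/ClusterConfiguration documents):

cat >> kubeadm.yaml <<EOF
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF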

4. Restart docker and kubelet:

systemctl daemon-reload
systemctl restart docker
systemctl restart kubelet
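
To confirm that both services actually came back up after the restart (plain systemctl usage):

systemctl is-active docker kubelet
# expect "active" for both; before kubeadm init succeeds, kubelet may still be
# restarting in a loop, see the PS at the end of this post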

5. Check whether docker info|grep "Cgroup Driver" now outputs Cgroup Driver: systemd:

[root@k8s-master ~]# docker info|grep "Cgroup Driver"
 Cgroup Driver: cgroupfs
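
If grep still reports cgroupfs after the restart, docker has not loaded the new daemon.json. The driver can also be read via docker's Go-template output, which avoids the quoting around grep:

docker info --format '{{.CgroupDriver}}'
# should print: systemd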

Note: when I ran kubeadm init again, I found that kubeadm had written the cgroupDriver configuration into /var/lib/kubelet/kubeadm-flags.env:

[root@k8s-master ~]# cat /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS="--network-plugin=cni --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.6"

Checking /var/lib/kubelet/config.yaml afterwards, I found it had already been overwritten by the new configuration.
Also, during this whole process I kept hitting the kubelet health-check failure; it only cleared up after I reset kubeadm and deleted ~/.kube from the home directory of the user running the commands (see the sketch below).
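
For reference, that cleanup amounts to the following (a sketch; ~/.kube is the kubeconfig directory of whichever user ran the earlier kubeadm/kubectl commands):

sudo kubeadm reset -f
rm -rf ~/.kube
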
PS: interestingly, after kubelet was enabled to start automatically it kept restarting periodically, still complaining that docker's cgroup driver differed from kubelet's; once kubeadm init succeeded, this no longer happened.