K8s Initialization Errors

Initialization command

kubeadm init \
 --apiserver-advertise-address=192.168.40.128 \
 --image-repository registry.aliyuncs.com/google_containers \
 --kubernetes-version v1.22.1 \
 --service-cidr=10.2.0.0/16 \
 --pod-network-cidr=10.244.0.0/16
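kubeadm builds each control-plane image reference by joining the --image-repository prefix with the component name and its pinned tag. A minimal sketch of that composition, using the repository from the command above and the coredns name/tag from the error that follows:

```shell
# How kubeadm composes an image reference: <repository>/<name>:<tag>.
# The values below match the init command above and the coredns error below.
repo="registry.aliyuncs.com/google_containers"
name="coredns"
tag="v1.8.4"
ref="${repo}/${name}:${tag}"
echo "$ref"   # prints registry.aliyuncs.com/google_containers/coredns:v1.8.4
```

To see the full list of references kubeadm will try to pull before running init, `kubeadm config images list --image-repository registry.aliyuncs.com/google_containers` prints them.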

Error message

error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR ImagePull]: failed to pull image registry.aliyuncs.com/google_containers/coredns:v1.8.4: output: Error response from daemon: manifest for registry.aliyuncs.com/google_containers/coredns:v1.8.4 not found: manifest unknown: manifest unknown
, error: exit status 1

Solution

Run the following commands before initializing:

docker pull coredns/coredns:1.8.4
docker tag coredns/coredns:1.8.4 registry.aliyuncs.com/google_containers/coredns:v1.8.4
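This works because kubeadm's preflight check only needs the expected reference to resolve on the node: the image is pulled under the name it is actually published as, then retagged to the name kubeadm looks for. A print-only helper sketch of the same pattern, reusable for any other image the mirror is missing (the names below are the ones from this article):

```shell
# Sketch: print the pull/tag commands needed to make an image available
# under the name kubeadm expects. It only prints, so the commands can be
# reviewed before running.
retag_cmds() {
  src="$1"   # name the image is actually published under
  dst="$2"   # name kubeadm's preflight check looks for
  echo "docker pull $src"
  echo "docker tag $src $dst"
}
retag_cmds "coredns/coredns:1.8.4" \
           "registry.aliyuncs.com/google_containers/coredns:v1.8.4"
```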

After fixing the issue above, a new error appeared:

[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

        Unfortunately, an error has occurred:
                timed out waiting for the condition

        This error is likely caused by:
                - The kubelet is not running
                - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                - 'systemctl status kubelet'
                - 'journalctl -xeu kubelet'

        Additionally, a control plane component may have crashed or exited when started by the container runtime.
        To troubleshoot, list all containers using your preferred container runtimes CLI.

        Here is one example how you may list all Kubernetes containers running in docker:
                - 'docker ps -a | grep kube | grep -v pause'
                Once you have found the failing container, you can inspect its logs with:
                - 'docker logs CONTAINERID'

error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

Following the kubelet errors above, checking the /var/log/messages log turned up the following:

Aug 23 18:17:21 localhost kubelet: E0823 18:17:21.059626    9490 server.go:294] "Failed to run kubelet" err="failed to run Kubelet: misconfiguration: kubelet cgroup driver: \"systemd\" is different from docker cgroup driver: \"cgroupfs\""
Aug 23 18:17:21 localhost systemd: kubelet.service: main process exited, code=exited, status=1/FAILURE
Aug 23 18:17:21 localhost systemd: Unit kubelet.service entered failed state.
Aug 23 18:17:21 localhost systemd: kubelet.service failed.

So the kubelet failed to start because of a cgroup driver mismatch: the kubelet's cgroup driver is systemd, while docker's cgroup driver is cgroupfs.
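The check the kubelet performs at startup amounts to a string comparison between the two reported drivers. A self-contained sketch, with the sample lines simulated to match the log output in this article (on a real node they would come from `docker info` and the kubelet config):

```shell
# Sketch of the kubelet's startup check: the two driver strings must be
# equal. The sample lines below are simulated to match this article's logs.
docker_line="Cgroup Driver: cgroupfs"    # as printed by `docker info`
kubelet_line="cgroupDriver: systemd"     # as set in the kubelet config
docker_driver="${docker_line#*: }"       # -> cgroupfs
kubelet_driver="${kubelet_line#*: }"     # -> systemd
if [ "$docker_driver" = "$kubelet_driver" ]; then
  echo "match"
else
  echo "mismatch: kubelet=$kubelet_driver docker=$docker_driver"
fi
```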

Solution

Set both docker and the control-plane kubelet to the systemd driver (the officially recommended approach).
Since the Kubernetes version used here is fairly new, this article only records the fix for the version in use (v1.22.x); for other versions, consult the official documentation.

Reset the kubeadm configuration left over from the failed initialization:

echo y|kubeadm reset

For docker, just add "exec-opts": ["native.cgroupdriver=systemd"] to /etc/docker/daemon.json.
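The resulting file could be written as below. This is a sketch that assumes daemon.json does not exist yet or is empty; if it already has other keys (for example registry mirrors), merge the "exec-opts" entry into the existing JSON instead of overwriting the file.

```shell
# Sketch: write /etc/docker/daemon.json with the systemd cgroup driver.
# Assumption: the file has no other settings; merge instead if it does.
cat > /etc/docker/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
```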
For the kubelet:

cat > /var/lib/kubelet/config.yaml <<EOF
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
EOF
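After writing the file, the driver value it carries can be confirmed. In this sketch the file content is inlined so the check is self-contained; on the node you would read /var/lib/kubelet/config.yaml instead.

```shell
# Sketch: extract the cgroupDriver value from a kubelet config like the
# one written above (content inlined here for a self-contained check).
cfg="apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd"
driver=$(printf '%s\n' "$cfg" | awk '/^cgroupDriver:/ {print $2}')
echo "$driver"   # prints systemd
```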

Restart docker and the kubelet:

systemctl daemon-reload
systemctl restart docker
systemctl restart kubelet

Check that docker info | grep "Cgroup Driver" now outputs Cgroup Driver: systemd.
Run the initialization again; it now completes successfully.
