Solving kubeadm init image-pull failures on a Kubernetes cluster, without re-tagging images
The workaround commonly found online is to pull version-specific images from a mirror and re-tag them to the names kubeadm expects. That works, but it is tedious. This article shows a much simpler way to fix the problem with a single command.
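For reference, here is roughly what the tedious re-tag workaround looks like: pull each image from a mirror, then re-tag it to the k8s.gcr.io name kubeadm expects. This is a hedged sketch, not a prescribed procedure; the mirror_name helper is illustrative, and the docker commands are printed as a dry run rather than executed:

```shell
# mirror_name: map a k8s.gcr.io image reference to its Aliyun-mirror
# equivalent, e.g. k8s.gcr.io/kube-apiserver:v1.18.1 ->
# registry.aliyuncs.com/google_containers/kube-apiserver:v1.18.1
mirror_name() {
  printf '%s\n' "$1" | sed 's#^k8s\.gcr\.io#registry.aliyuncs.com/google_containers#'
}

# Print the pull + re-tag commands (a dry run; pipe to sh to execute).
# This is exactly the per-image busywork the one-flag fix below avoids.
for img in k8s.gcr.io/kube-apiserver:v1.18.1 k8s.gcr.io/pause:3.2; do
  echo "docker pull $(mirror_name "$img")"
  echo "docker tag $(mirror_name "$img") $img"
done
```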
First attempt at kubeadm init
The first kubeadm init on the master node often fails with image-pull errors, even with the Aliyun yum mirror configured, as shown below:
[root@localhost yum.repos.d]# kubeadm init --kubernetes-version=v1.18.1 --apiserver-advertise-address 192.168.0.102 --pod-network-cidr=10.244.0.0/16
W0418 07:42:11.016578 13232 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.1
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.18.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.18.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.18.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.18.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.2: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/etcd:3.4.3-0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.6.7: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
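The root cause of every one of these errors is the same: k8s.gcr.io is simply unreachable from the host, so the pulls time out. A quick hedged sanity check (the 5-second timeout is an arbitrary choice):

```shell
# Can this host reach k8s.gcr.io at all? On hosts without access to
# Google's registry this times out, which matches the
# "Client.Timeout exceeded" errors kubeadm reports above.
if curl -m 5 -sI https://k8s.gcr.io/v2/ >/dev/null 2>&1; then
  echo "k8s.gcr.io is reachable"
else
  echo "k8s.gcr.io is unreachable; pull from a mirror via --image-repository"
fi
```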
The simplest solution
- Check the kubeadm version:
[root@localhost yum.repos.d]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.1", GitCommit:"7879fc12a63337efff607952a323df90cdc7a335", GitTreeState:"clean", BuildDate:"2020-04-08T17:36:32Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
[root@localhost yum.repos.d]#
If the version does not match, init will fail to find the images even with the Aliyun mirror. For example, I had installed v1.18.1 but did not pin the version at init time, so kubeadm looked for the then-default v1.18.2. The Aliyun mirror had not yet synced that release, so the images could not be found and init errored out.
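You can see exactly which images a given kubeadm version needs, and optionally pre-pull them from the mirror before running init. Both subcommands and flags exist in kubeadm v1.18; the block below is guarded so it is a no-op on hosts without kubeadm:

```shell
# List (and optionally pre-pull) the control-plane images for the pinned
# version, sourced from the Aliyun mirror instead of k8s.gcr.io.
# Pinning --kubernetes-version avoids the "mirror hasn't synced the
# newest release yet" failure described above.
if command -v kubeadm >/dev/null 2>&1; then
  kubeadm config images list \
    --kubernetes-version=v1.18.1 \
    --image-repository registry.aliyuncs.com/google_containers
  # Pre-pull so kubeadm init's preflight image check passes instantly:
  # kubeadm config images pull \
  #   --kubernetes-version=v1.18.1 \
  #   --image-repository registry.aliyuncs.com/google_containers
else
  echo "kubeadm not installed on this host"
fi
```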
- Run kubeadm init with --image-repository pointing at the Aliyun mirror:
kubeadm init --kubernetes-version=v1.18.1 --image-repository registry.aliyuncs.com/google_containers --apiserver-advertise-address 192.168.0.102 --pod-network-cidr=10.244.0.0/16
- Initialization succeeds:
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.0.102:6443 --token zthn0z.ck1vb1o2yq0e3m7a \
    --discovery-token-ca-cert-hash sha256:2dc47a545835a853d8c81131ad48e6a6ec06b2fc82daa955fffb13b85cbc5a78
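For the pod network step the output mentions, flannel is a common choice that matches the --pod-network-cidr=10.244.0.0/16 used above. The manifest URL below is flannel's commonly referenced one; verify it against flannel's current documentation before applying. The block is guarded so it does nothing when no cluster is reachable:

```shell
# Deploy flannel as the pod network, then watch the node go Ready.
# Guarded: skipped entirely when kubectl or a reachable cluster is absent.
if command -v kubectl >/dev/null 2>&1 && kubectl cluster-info >/dev/null 2>&1; then
  kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
  kubectl get nodes   # nodes report Ready once the flannel pods are running
else
  echo "no reachable cluster; skipping pod network deployment"
fi
```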