How to fix the kubeadm init E0613 error

Running kubeadm init ends with the following output:

root@k8s-master:~# kubeadm init --config kubeadm.yaml
[init] Using Kubernetes version: v1.27.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0613 15:49:09.857712    2140 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
W0613 15:49:09.950790    2140 checks.go:835] detected that the sandbox image "registry.k8s.io/pause:3.8" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9" as the CRI sandbox image.
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.244.0.1 192.168.100.11]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.100.11 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.100.11 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
W0613 15:49:11.550029    2140 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.2, falling back to the nearest etcd version (3.5.7-0)
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
        - 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
        Once you have found the failing container, you can inspect its logs with:
        - 'crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
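
Two things in this output already point at the root cause: the W0613 warnings show that the container runtime's sandbox image (registry.k8s.io/pause:3.8) is not the one kubeadm recommends, and the preflight hint suggests pulling the images ahead of time. Doing that pull as a separate step surfaces registry connectivity problems before init gets as far as wait-control-plane. A sketch, reusing the same kubeadm.yaml as above:

# Pre-pull all control-plane images; a failure here exposes registry
# problems immediately instead of after the 4-minute control-plane timeout
kubeadm config images pull --config kubeadm.yaml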

Checking the kubelet logs with journalctl -fu kubelet
yields the following errors:

root@k8s-master:~# journalctl -fu kubelet
-- Logs begin at Mon 2023-06-12 08:06:00 PDT. --
Jun 13 16:12:18 k8s-master kubelet[3249]: E0613 16:12:18.042275    3249 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.100.11:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.100.11:6443: connect: connection refused
Jun 13 16:12:20 k8s-master kubelet[3249]: E0613 16:12:20.212909    3249 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.100.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/k8s-master?timeout=10s\": dial tcp 192.168.100.11:6443: connect: connection refused" interval="7s"
Jun 13 16:12:20 k8s-master kubelet[3249]: I0613 16:12:20.316818    3249 kubelet_node_status.go:70] "Attempting to register node" node="k8s-master"
Jun 13 16:12:20 k8s-master kubelet[3249]: E0613 16:12:20.317040    3249 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://192.168.100.11:6443/api/v1/nodes\": dial tcp 192.168.100.11:6443: connect: connection refused" node="k8s-master"
Jun 13 16:12:22 k8s-master kubelet[3249]: E0613 16:12:22.207794    3249 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"k8s-master.17685ad421f431a6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"k8s-master", UID:"k8s-master", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"k8s-master"}, FirstTimestamp:time.Date(2023, time.June, 13, 16, 12, 7, 599468966, time.Local), LastTimestamp:time.Date(2023, time.June, 13, 16, 12, 7, 599468966, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://192.168.100.11:6443/api/v1/namespaces/default/events": dial tcp 192.168.100.11:6443: connect: connection refused'(may retry after sleeping)
Jun 13 16:12:22 k8s-master kubelet[3249]: E0613 16:12:22.533307    3249 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://192.168.100.11:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 192.168.100.11:6443: connect: connection refused
Jun 13 16:12:25 k8s-master kubelet[3249]: W0613 16:12:25.203243    3249 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://192.168.100.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.100.11:6443: connect: connection refused
Jun 13 16:12:25 k8s-master kubelet[3249]: E0613 16:12:25.203284    3249 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.100.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.100.11:6443: connect: connection refused
Jun 13 16:12:25 k8s-master kubelet[3249]: W0613 16:12:25.389947    3249 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://192.168.100.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dk8s-master&limit=500&resourceVersion=0": dial tcp 192.168.100.11:6443: connect: connection refused
Jun 13 16:12:25 k8s-master kubelet[3249]: E0613 16:12:25.390267    3249 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.100.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dk8s-master&limit=500&resourceVersion=0": dial tcp 192.168.100.11:6443: connect: connection refused
Jun 13 16:12:26 k8s-master kubelet[3249]: W0613 16:12:26.527013    3249 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://192.168.100.11:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.100.11:6443: connect: connection refused
Jun 13 16:12:26 k8s-master kubelet[3249]: E0613 16:12:26.527497    3249 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.100.11:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.100.11:6443: connect: connection refused
Jun 13 16:12:27 k8s-master kubelet[3249]: E0613 16:12:27.214258    3249 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://192.168.100.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/k8s-master?timeout=10s\": dial tcp 192.168.100.11:6443: connect: connection refused" interval="7s"
Jun 13 16:12:27 k8s-master kubelet[3249]: I0613 16:12:27.317885    3249 kubelet_node_status.go:70] "Attempting to register node" node="k8s-master"
Jun 13 16:12:27 k8s-master kubelet[3249]: E0613 16:12:27.318259    3249 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://192.168.100.11:6443/api/v1/nodes\": dial tcp 192.168.100.11:6443: connect: connection refused" node="k8s-master"
Jun 13 16:12:27 k8s-master kubelet[3249]: E0613 16:12:27.661789    3249 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"k8s-master\" not found"
Jun 13 16:12:27 k8s-master kubelet[3249]: W0613 16:12:27.748609    3249 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://192.168.100.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.100.11:6443: connect: connection refused
Jun 13 16:12:27 k8s-master kubelet[3249]: E0613 16:12:27.748651    3249 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://192.168.100.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.100.11:6443: connect: connection refused
Jun 13 16:12:30 k8s-master kubelet[3249]: E0613 16:12:30.029682    3249 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to get sandbox image \"registry.k8s.io/pause:3.8\": failed to pull image \"registry.k8s.io/pause:3.8\": failed to pull and unpack image \"registry.k8s.io/pause:3.8\": failed to resolve reference \"registry.k8s.io/pause:3.8\": failed to do request: Head \"https://us-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.8\": dial tcp 64.233.188.82:443: connect: connection refused"
Jun 13 16:12:30 k8s-master kubelet[3249]: E0613 16:12:30.029756    3249 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to get sandbox image \"registry.k8s.io/pause:3.8\": failed to pull image \"registry.k8s.io/pause:3.8\": failed to pull and unpack image \"registry.k8s.io/pause:3.8\": failed to resolve reference \"registry.k8s.io/pause:3.8\": failed to do request: Head \"https://us-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.8\": dial tcp 64.233.188.82:443: connect: connection refused" pod="kube-system/etcd-k8s-master"
Jun 13 16:12:30 k8s-master kubelet[3249]: E0613 16:12:30.029771    3249 kuberuntime_manager.go:1122] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to get sandbox image \"registry.k8s.io/pause:3.8\": failed to pull image \"registry.k8s.io/pause:3.8\": failed to pull and unpack image \"registry.k8s.io/pause:3.8\": failed to resolve reference \"registry.k8s.io/pause:3.8\": failed to do request: Head \"https://us-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.8\": dial tcp 64.233.188.82:443: connect: connection refused" pod="kube-system/etcd-k8s-master"
Jun 13 16:12:30 k8s-master kubelet[3249]: E0613 16:12:30.029808    3249 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"etcd-k8s-master_kube-system(a74c497ebf377fb2f6522c84c0ca39f8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"etcd-k8s-master_kube-system(a74c497ebf377fb2f6522c84c0ca39f8)\\\": rpc error: code = Unknown desc = failed to get sandbox image \\\"registry.k8s.io/pause:3.8\\\": failed to pull image \\\"registry.k8s.io/pause:3.8\\\": failed to pull and unpack image \\\"registry.k8s.io/pause:3.8\\\": failed to resolve reference \\\"registry.k8s.io/pause:3.8\\\": failed to do request: Head \\\"https://us-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.8\\\": dial tcp 64.233.188.82:443: connect: connection refused\"" pod="kube-system/etcd-k8s-master" podUID=a74c497ebf377fb2f6522c84c0ca39f8
Jun 13 16:12:30 k8s-master kubelet[3249]: E0613 16:12:30.030571    3249 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to get sandbox image \"registry.k8s.io/pause:3.8\": failed to pull image \"registry.k8s.io/pause:3.8\": failed to pull and unpack image \"registry.k8s.io/pause:3.8\": failed to resolve reference \"registry.k8s.io/pause:3.8\": failed to do request: Head \"https://us-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.8\": dial tcp 64.233.188.82:443: connect: connection refused"
Jun 13 16:12:30 k8s-master kubelet[3249]: E0613 16:12:30.030612    3249 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to get sandbox image \"registry.k8s.io/pause:3.8\": failed to pull image \"registry.k8s.io/pause:3.8\": failed to pull and unpack image \"registry.k8s.io/pause:3.8\": failed to resolve reference \"registry.k8s.io/pause:3.8\": failed to do request: Head \"https://us-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.8\": dial tcp 64.233.188.82:443: connect: connection refused" pod="kube-system/kube-controller-manager-k8s-master"
Jun 13 16:12:30 k8s-master kubelet[3249]: E0613 16:12:30.030851    3249 kuberuntime_manager.go:1122] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to get sandbox image \"registry.k8s.io/pause:3.8\": failed to pull image \"registry.k8s.io/pause:3.8\": failed to pull and unpack image \"registry.k8s.io/pause:3.8\": failed to resolve reference \"registry.k8s.io/pause:3.8\": failed to do request: Head \"https://us-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.8\": dial tcp 64.233.188.82:443: connect: connection refused" pod="kube-system/kube-controller-manager-k8s-master"
Jun 13 16:12:30 k8s-master kubelet[3249]: E0613 16:12:30.030936    3249 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"kube-controller-manager-k8s-master_kube-system(cdff31bc68b093ee28472f5a6a758127)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"kube-controller-manager-k8s-master_kube-system(cdff31bc68b093ee28472f5a6a758127)\\\": rpc error: code = Unknown desc = failed to get sandbox image \\\"registry.k8s.io/pause:3.8\\\": failed to pull image \\\"registry.k8s.io/pause:3.8\\\": failed to pull and unpack image \\\"registry.k8s.io/pause:3.8\\\": failed to resolve reference \\\"registry.k8s.io/pause:3.8\\\": failed to do request: Head \\\"https://us-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.8\\\": dial tcp 64.233.188.82:443: connect: connection refused\"" pod="kube-system/kube-controller-manager-k8s-master" podUID=cdff31bc68b093ee28472f5a6a758127
Jun 13 16:12:30 k8s-master kubelet[3249]: E0613 16:12:30.031059    3249 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to get sandbox image \"registry.k8s.io/pause:3.8\": failed to pull image \"registry.k8s.io/pause:3.8\": failed to pull and unpack image \"registry.k8s.io/pause:3.8\": failed to resolve reference \"registry.k8s.io/pause:3.8\": failed to do request: Head \"https://us-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.8\": dial tcp 64.233.188.82:443: connect: connection refused"
Jun 13 16:12:30 k8s-master kubelet[3249]: E0613 16:12:30.031161    3249 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to get sandbox image \"registry.k8s.io/pause:3.8\": failed to pull image \"registry.k8s.io/pause:3.8\": failed to pull and unpack image \"registry.k8s.io/pause:3.8\": failed to resolve reference \"registry.k8s.io/pause:3.8\": failed to do request: Head \"https://us-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.8\": dial tcp 64.233.188.82:443: connect: connection refused" pod="kube-system/kube-scheduler-k8s-master"

At this point, no CRI containers have started at all.
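
You can confirm this with the crictl invocation that the kubeadm error message above suggests; on this host it prints nothing:

crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock ps -a | grep kube | grep -v pause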

So dig further into the containerd logs:
journalctl -u containerd

Jun 13 12:55:57 k8s-master containerd[960]: time="2023-06-13T12:55:57.704761243-07:00" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-k8s-master,Uid:a52f0d45d8e29be24f1c01726895212b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to get sandbox image \"registry.k8s.io/pause:3.8\": failed to pull image \"registry.k8s.io/pause:3.8\": failed to pull and unpack image \"registry.k8s.io/pause:3.8\": failed to resolve reference \"registry.k8s.io/pause:3.8\": failed to do request: Head \"https://us-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.8\": dial tcp 142.251.8.82:443: connect: connection refused"
Jun 13 12:55:57 k8s-master containerd[960]: time="2023-06-13T12:55:57.704919337-07:00" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jun 13 12:56:02 k8s-master containerd[960]: time="2023-06-13T12:56:02.736893788-07:00" level=info msg="trying next host" error="failed to do request: Head \"https://us-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.8\": dial tcp 142.251.8.82:443: connect: connection refused" host=registry.k8s.io
Jun 13 12:56:02 k8s-master containerd[960]: time="2023-06-13T12:56:02.737437325-07:00" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:etcd-k8s-master,Uid:39611cf291fbfb4ad6b41d9771848214,Namespace:kube-system,Attempt:0,} failed, error" error="failed to get sandbox image \"registry.k8s.io/pause:3.8\": failed to pull image \"registry.k8s.io/pause:3.8\": failed to pull and unpack image \"registry.k8s.io/pause:3.8\": failed to resolve reference \"registry.k8s.io/pause:3.8\": failed to do request: Head \"https://us-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.8\": dial tcp 142.251.8.82:443: connect: connection refused"
Jun 13 12:56:02 k8s-master containerd[960]: time="2023-06-13T12:56:02.737502761-07:00" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jun 13 12:56:09 k8s-master containerd[960]: time="2023-06-13T12:56:09.704517199-07:00" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-k8s-master,Uid:a52f0d45d8e29be24f1c01726895212b,Namespace:kube-system,Attempt:0,}"
Jun 13 12:56:13 k8s-master containerd[960]: time="2023-06-13T12:56:13.739076556-07:00" level=info msg="trying next host" error="failed to do request: Head \"https://us-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.8\": dial tcp 142.251.8.82:443: connect: connection refused" host=registry.k8s.io
Jun 13 12:56:13 k8s-master containerd[960]: time="2023-06-13T12:56:13.739917002-07:00" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-k8s-master,Uid:ce9fd6dc323b96fb0140ede93fb43bc2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to get sandbox image \"registry.k8s.io/pause:3.8\": failed to pull image \"registry.k8s.io/pause:3.8\": failed to pull and unpack image \"registry.k8s.io/pause:3.8\": failed to resolve reference \"registry.k8s.io/pause:3.8\": failed to do request: Head \"https://us-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.8\": dial tcp 142.251.8.82:443: connect: connection refused"
Jun 13 12:56:13 k8s-master containerd[960]: time="2023-06-13T12:56:13.739918962-07:00" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jun 13 12:56:15 k8s-master containerd[960]: time="2023-06-13T12:56:15.972465420-07:00" level=info msg="trying next host" error="failed to do request: Head \"https://europe-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.8\": dial tcp 142.251.8.82:443: connect: connection refused" host=registry.k8s.io
Jun 13 12:56:15 k8s-master containerd[960]: time="2023-06-13T12:56:15.973055472-07:00" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-k8s-master,Uid:95a27a9cb5c43394fb0fb640d7f5a4c4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to get sandbox image \"registry.k8s.io/pause:3.8\": failed to pull image \"registry.k8s.io/pause:3.8\": failed to pull and unpack image \"registry.k8s.io/pause:3.8\": failed to resolve reference \"registry.k8s.io/pause:3.8\": failed to do request: Head \"https://europe-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.8\": dial tcp 142.251.8.82:443: connect: connection refused"
Jun 13 12:56:15 k8s-master containerd[960]: time="2023-06-13T12:56:15.973127470-07:00" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jun 13 12:56:16 k8s-master containerd[960]: time="2023-06-13T12:56:16.704884786-07:00" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:etcd-k8s-master,Uid:39611cf291fbfb4ad6b41d9771848214,Namespace:kube-system,Attempt:0,}"
Jun 13 12:56:27 k8s-master containerd[960]: time="2023-06-13T12:56:27.705647696-07:00" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-k8s-master,Uid:ce9fd6dc323b96fb0140ede93fb43bc2,Namespace:kube-system,Attempt:0,}"
Jun 13 12:56:30 k8s-master containerd[960]: time="2023-06-13T12:56:30.704958057-07:00" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-k8s-master,Uid:95a27a9cb5c43394fb0fb640d7f5a4c4,Namespace:kube-system,Attempt:0,}"
Jun 13 12:56:31 k8s-master containerd[960]: time="2023-06-13T12:56:31.739393384-07:00" level=info msg="trying next host" error="failed to do request: Head \"https://us-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.8\": dial tcp 142.251.8.82:443: connect: connection refused" host=registry.k8s.io
Jun 13 12:56:31 k8s-master containerd[960]: time="2023-06-13T12:56:31.740726060-07:00" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-k8s-master,Uid:a52f0d45d8e29be24f1c01726895212b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to get sandbox image \"registry.k8s.io/pause:3.8\": failed to pull image \"registry.k8s.io/pause:3.8\": failed to pull and unpack image \"registry.k8s.io/pause:3.8\": failed to resolve reference \"registry.k8s.io/pause:3.8\": failed to do request: Head \"https://us-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/pause/manifests/3.8\": dial tcp 142.251.8.82:443: connect: connection refused"
Jun 13 12:56:31 k8s-master containerd[960]: time="2023-06-13T12:56:31.740908149-07:00" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"

Note the recurring message: failed to get sandbox image "registry.k8s.io/pause:3.8". containerd cannot reach registry.k8s.io from this host, so it never manages to pull the pause sandbox image. Without a sandbox, none of the control-plane static pods can start, which is why every request to https://192.168.100.11:6443 in the kubelet log above gets connection refused.
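
To check which sandbox image your containerd is configured with (assuming the default config file location):

grep sandbox_image /etc/containerd/config.toml

On this host it should still show registry.k8s.io/pause:3.8, matching the sandbox image that the kubeadm warning detected.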

My temporary workaround
After pulling the image from a domestic (Aliyun) mirror, rename it with

ctr -n k8s.io i tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.8 registry.k8s.io/pause:3.8

so that the pause sandbox can run.
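
For completeness: the ctr tag call only works if the mirror image is already present locally, so pull it first, and afterwards verify that both names exist. A sketch, assuming the Aliyun mirror publishes a pause:3.8 tag:

# 1. Pull the pause image from the reachable Aliyun mirror
ctr -n k8s.io i pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.8
# 2. Retag it as shown above, then verify both names are present
ctr -n k8s.io i ls | grep pause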

See https://blog.csdn.net/Bruce1114/article/details/124636325 for details.
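
A more permanent fix is to point containerd's sandbox_image at a registry you can actually reach (the kubeadm warning above even recommends the Aliyun pause:3.9) and restart the runtime. A sketch, assuming containerd's default config file at /etc/containerd/config.toml contains a sandbox_image line:

# Swap the unreachable registry.k8s.io sandbox image for the Aliyun mirror
sed -i 's#sandbox_image = .*#sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.9"#' /etc/containerd/config.toml
systemctl restart containerd

After that, a kubeadm reset followed by a fresh kubeadm init should no longer hang at the wait-control-plane phase.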
