Fixing the kubelet startup failure during a KubeSphere installation

The KubeSphere installation got stuck at the final stage.
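
The cluster is being created with KubeKey. The exact command is not shown in the log; a typical invocation looks like the following (an assumption — config-sample.yaml stands in for the actual cluster spec used here):

./kk create cluster -f config-sample.yaml

The install log at the point where it stalls: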

fs.inotify.max_user_instances = 524288
kernel.pid_max = 65535
16:49:40 CST stdout: [ipd-cloud-bdp07.py]
setenforce: SELinux is disabled
Disabled
net.ipv4.ip_forward = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
kernel.panic = 1
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.default.arp_ignore = 1
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.ip_no_pmtu_disc = 1
kernel.printk = 3 4 1 7
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.ip_local_port_range = 10000 65535
net.ipv4.tcp_max_tw_buckets = 180000
net.ipv4.tcp_fin_timeout = 15
net.ipv4.tcp_tw_recycle = 0
kernel.core_pattern = |/usr/local/bin/core_filter -P %P -p %p -e %e -t %t -s 102400
net.core.somaxconn = 4096
net.ipv4.tcp_max_syn_backlog = 4096
net.core.netdev_max_backlog = 4096
vm.vfs_cache_pressure = 200
vm.min_free_kbytes = 409600
vm.swappiness = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
fs.inotify.max_user_instances = 524288
kernel.pid_max = 65535
16:49:42 CST stdout: [ipd-cloud-bdp09.py]
setenforce: SELinux is disabled
Disabled
net.ipv4.ip_forward = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
kernel.panic = 1
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.default.arp_ignore = 1
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.ip_no_pmtu_disc = 1
kernel.printk = 3 4 1 7
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.ip_local_port_range = 10000 65535
net.ipv4.tcp_max_tw_buckets = 180000
net.ipv4.tcp_fin_timeout = 15
net.ipv4.tcp_tw_recycle = 0
kernel.core_pattern = |/usr/local/bin/core_filter -P %P -p %p -e %e -t %t -s 102400
net.core.somaxconn = 4096
net.ipv4.tcp_max_syn_backlog = 4096
net.core.netdev_max_backlog = 4096
vm.vfs_cache_pressure = 200
vm.min_free_kbytes = 409600
vm.swappiness = 1
net.ipv4.ip_local_reserved_ports = 30000-32767
vm.max_map_count = 262144
fs.inotify.max_user_instances = 524288
kernel.pid_max = 65535
16:49:42 CST success: [ipd-cloud-dmp01.ys]
16:49:42 CST success: [ipd-cloud-bdp08.py]
16:49:42 CST success: [ipd-cloud-bdp07.py]
16:49:42 CST success: [ipd-cloud-bdp09.py]
16:49:42 CST [ConfigureOSModule] configure the ntp server for each node
16:49:42 CST skipped: [ipd-cloud-dmp01.ys]
16:49:42 CST skipped: [ipd-cloud-bdp08.py]
16:49:42 CST skipped: [ipd-cloud-bdp07.py]
16:49:42 CST skipped: [ipd-cloud-bdp09.py]
16:49:42 CST [KubernetesStatusModule] Get kubernetes cluster status
16:49:42 CST success: [ipd-cloud-bdp07.py]
16:49:42 CST [InstallContainerModule] Sync docker binaries
16:49:42 CST skipped: [ipd-cloud-bdp07.py]
16:49:42 CST skipped: [ipd-cloud-bdp08.py]
16:49:42 CST skipped: [ipd-cloud-bdp09.py]
16:49:42 CST [InstallContainerModule] Generate containerd service
16:49:42 CST skipped: [ipd-cloud-bdp08.py]
16:49:42 CST skipped: [ipd-cloud-bdp09.py]
16:49:42 CST skipped: [ipd-cloud-bdp07.py]
16:49:42 CST [InstallContainerModule] Enable containerd
16:49:42 CST skipped: [ipd-cloud-bdp08.py]
16:49:42 CST skipped: [ipd-cloud-bdp09.py]
16:49:42 CST skipped: [ipd-cloud-bdp07.py]
16:49:42 CST [InstallContainerModule] Generate docker service
16:49:43 CST skipped: [ipd-cloud-bdp09.py]
16:49:43 CST skipped: [ipd-cloud-bdp08.py]
16:49:43 CST skipped: [ipd-cloud-bdp07.py]
16:49:43 CST [InstallContainerModule] Generate docker config
16:49:43 CST skipped: [ipd-cloud-bdp09.py]
16:49:43 CST skipped: [ipd-cloud-bdp08.py]
16:49:43 CST skipped: [ipd-cloud-bdp07.py]
16:49:43 CST [InstallContainerModule] Enable docker
16:49:43 CST skipped: [ipd-cloud-bdp08.py]
16:49:43 CST skipped: [ipd-cloud-bdp09.py]
16:49:43 CST skipped: [ipd-cloud-bdp07.py]
16:49:43 CST [InstallContainerModule] Add auths to container runtime
16:49:43 CST success: [ipd-cloud-bdp09.py]
16:49:43 CST success: [ipd-cloud-bdp08.py]
16:49:43 CST success: [ipd-cloud-dmp01.ys]
16:49:43 CST success: [ipd-cloud-bdp07.py]
16:49:43 CST [PullModule] Start to pull images on all nodes
16:49:43 CST message: [ipd-cloud-bdp07.py]
downloading image: dockerhub.kubekey.local/kubesphere/pause:3.4.1
16:49:43 CST message: [ipd-cloud-bdp08.py]
downloading image: dockerhub.kubekey.local/kubesphere/pause:3.4.1
16:49:43 CST message: [ipd-cloud-bdp09.py]
downloading image: dockerhub.kubekey.local/kubesphere/pause:3.4.1
16:49:44 CST message: [ipd-cloud-bdp08.py]
downloading image: dockerhub.kubekey.local/kubesphere/kube-proxy:v1.21.5
16:49:44 CST message: [ipd-cloud-bdp09.py]
downloading image: dockerhub.kubekey.local/kubesphere/kube-proxy:v1.21.5
16:49:44 CST message: [ipd-cloud-bdp07.py]
downloading image: dockerhub.kubekey.local/kubesphere/kube-apiserver:v1.21.5
16:49:44 CST message: [ipd-cloud-bdp08.py]
downloading image: dockerhub.kubekey.local/coredns/coredns:1.8.0
16:49:44 CST message: [ipd-cloud-bdp09.py]
downloading image: dockerhub.kubekey.local/coredns/coredns:1.8.0
16:49:44 CST message: [ipd-cloud-bdp07.py]
downloading image: dockerhub.kubekey.local/kubesphere/kube-controller-manager:v1.21.5
16:49:44 CST message: [ipd-cloud-bdp08.py]
downloading image: dockerhub.kubekey.local/kubesphere/k8s-dns-node-cache:1.15.12
16:49:44 CST message: [ipd-cloud-bdp09.py]
downloading image: dockerhub.kubekey.local/kubesphere/k8s-dns-node-cache:1.15.12
16:49:44 CST message: [ipd-cloud-bdp07.py]
downloading image: dockerhub.kubekey.local/kubesphere/kube-scheduler:v1.21.5
16:49:44 CST message: [ipd-cloud-bdp08.py]
downloading image: dockerhub.kubekey.local/calico/kube-controllers:v3.20.0
16:49:44 CST message: [ipd-cloud-bdp09.py]
downloading image: dockerhub.kubekey.local/calico/kube-controllers:v3.20.0
16:49:44 CST message: [ipd-cloud-bdp07.py]
downloading image: dockerhub.kubekey.local/kubesphere/kube-proxy:v1.21.5
16:49:44 CST message: [ipd-cloud-bdp08.py]
downloading image: dockerhub.kubekey.local/calico/cni:v3.20.0
16:49:44 CST message: [ipd-cloud-bdp09.py]
downloading image: dockerhub.kubekey.local/calico/cni:v3.20.0
16:49:44 CST message: [ipd-cloud-bdp07.py]
downloading image: dockerhub.kubekey.local/coredns/coredns:1.8.0
16:49:45 CST message: [ipd-cloud-bdp08.py]
downloading image: dockerhub.kubekey.local/calico/node:v3.20.0
16:49:45 CST message: [ipd-cloud-bdp09.py]
downloading image: dockerhub.kubekey.local/calico/node:v3.20.0
16:49:45 CST message: [ipd-cloud-bdp07.py]
downloading image: dockerhub.kubekey.local/kubesphere/k8s-dns-node-cache:1.15.12
16:49:45 CST message: [ipd-cloud-bdp08.py]
downloading image: dockerhub.kubekey.local/calico/pod2daemon-flexvol:v3.20.0
16:49:45 CST message: [ipd-cloud-bdp09.py]
downloading image: dockerhub.kubekey.local/calico/pod2daemon-flexvol:v3.20.0
16:49:45 CST message: [ipd-cloud-bdp07.py]
downloading image: dockerhub.kubekey.local/calico/kube-controllers:v3.20.0
16:49:45 CST message: [ipd-cloud-bdp07.py]
downloading image: dockerhub.kubekey.local/calico/cni:v3.20.0
16:49:45 CST message: [ipd-cloud-bdp07.py]
downloading image: dockerhub.kubekey.local/calico/node:v3.20.0
16:49:45 CST message: [ipd-cloud-bdp07.py]
downloading image: dockerhub.kubekey.local/calico/pod2daemon-flexvol:v3.20.0
16:49:46 CST success: [ipd-cloud-dmp01.ys]
16:49:46 CST success: [ipd-cloud-bdp08.py]
16:49:46 CST success: [ipd-cloud-bdp09.py]
16:49:46 CST success: [ipd-cloud-bdp07.py]
16:49:46 CST [ETCDPreCheckModule] Get etcd status
16:49:46 CST stdout: [ipd-cloud-bdp07.py]
ETCD_NAME=etcd-ipd-cloud-bdp07.py
16:49:46 CST stdout: [ipd-cloud-bdp08.py]
ETCD_NAME=etcd-ipd-cloud-bdp08.py
16:49:46 CST stdout: [ipd-cloud-bdp09.py]
ETCD_NAME=etcd-ipd-cloud-bdp09.py
16:49:46 CST success: [ipd-cloud-bdp07.py]
16:49:46 CST success: [ipd-cloud-bdp08.py]
16:49:46 CST success: [ipd-cloud-bdp09.py]
16:49:46 CST [CertsModule] Fetcd etcd certs
16:49:47 CST success: [ipd-cloud-bdp07.py]
16:49:47 CST skipped: [ipd-cloud-bdp08.py]
16:49:47 CST skipped: [ipd-cloud-bdp09.py]
16:49:47 CST [CertsModule] Generate etcd Certs
[certs] Using existing ca certificate authority
[certs] Using existing admin-ipd-cloud-bdp07.py certificate and key on disk
[certs] Using existing member-ipd-cloud-bdp07.py certificate and key on disk
[certs] Using existing node-ipd-cloud-bdp07.py certificate and key on disk
[certs] Using existing admin-ipd-cloud-bdp08.py certificate and key on disk
[certs] Using existing member-ipd-cloud-bdp08.py certificate and key on disk
[certs] Using existing admin-ipd-cloud-bdp09.py certificate and key on disk
[certs] Using existing member-ipd-cloud-bdp09.py certificate and key on disk
16:49:47 CST success: [LocalHost]
16:49:47 CST [CertsModule] Synchronize certs file
16:49:56 CST success: [ipd-cloud-bdp08.py]
16:49:56 CST success: [ipd-cloud-bdp09.py]
16:49:56 CST success: [ipd-cloud-bdp07.py]
16:49:56 CST [CertsModule] Synchronize certs file to master
16:49:56 CST skipped: [ipd-cloud-bdp07.py]
16:49:56 CST [InstallETCDBinaryModule] Install etcd using binary
16:49:58 CST success: [ipd-cloud-bdp08.py]
16:49:58 CST success: [ipd-cloud-bdp07.py]
16:49:58 CST success: [ipd-cloud-bdp09.py]
16:49:58 CST [InstallETCDBinaryModule] Generate etcd service
16:49:58 CST success: [ipd-cloud-bdp08.py]
16:49:58 CST success: [ipd-cloud-bdp09.py]
16:49:58 CST success: [ipd-cloud-bdp07.py]
16:49:58 CST [InstallETCDBinaryModule] Generate access address
16:49:58 CST skipped: [ipd-cloud-bdp09.py]
16:49:58 CST skipped: [ipd-cloud-bdp08.py]
16:49:58 CST success: [ipd-cloud-bdp07.py]
16:49:58 CST [ETCDConfigureModule] Health check on exist etcd
16:49:58 CST success: [ipd-cloud-bdp07.py]
16:49:58 CST success: [ipd-cloud-bdp09.py]
16:49:58 CST success: [ipd-cloud-bdp08.py]
16:49:58 CST [ETCDConfigureModule] Generate etcd.env config on new etcd
16:49:58 CST skipped: [ipd-cloud-bdp07.py]
16:49:58 CST skipped: [ipd-cloud-bdp08.py]
16:49:58 CST skipped: [ipd-cloud-bdp09.py]
16:49:58 CST [ETCDConfigureModule] Join etcd member
16:49:58 CST skipped: [ipd-cloud-bdp07.py]
16:49:58 CST skipped: [ipd-cloud-bdp08.py]
16:49:58 CST skipped: [ipd-cloud-bdp09.py]
16:49:58 CST [ETCDConfigureModule] Restart etcd
16:49:58 CST skipped: [ipd-cloud-bdp09.py]
16:49:58 CST skipped: [ipd-cloud-bdp07.py]
16:49:58 CST skipped: [ipd-cloud-bdp08.py]
16:49:58 CST [ETCDConfigureModule] Health check on new etcd
16:49:58 CST skipped: [ipd-cloud-bdp09.py]
16:49:58 CST skipped: [ipd-cloud-bdp07.py]
16:49:58 CST skipped: [ipd-cloud-bdp08.py]
16:49:58 CST [ETCDConfigureModule] Check etcd member
16:49:58 CST skipped: [ipd-cloud-bdp07.py]
16:49:58 CST skipped: [ipd-cloud-bdp09.py]
16:49:58 CST skipped: [ipd-cloud-bdp08.py]
16:49:58 CST [ETCDConfigureModule] Refresh etcd.env config on all etcd
16:50:00 CST success: [ipd-cloud-bdp07.py]
16:50:00 CST success: [ipd-cloud-bdp08.py]
16:50:00 CST success: [ipd-cloud-bdp09.py]
16:50:00 CST [ETCDConfigureModule] Health check on all etcd
16:50:00 CST success: [ipd-cloud-bdp09.py]
16:50:00 CST success: [ipd-cloud-bdp08.py]
16:50:00 CST success: [ipd-cloud-bdp07.py]
16:50:00 CST [ETCDBackupModule] Backup etcd data regularly
16:50:07 CST success: [ipd-cloud-bdp09.py]
16:50:07 CST success: [ipd-cloud-bdp08.py]
16:50:07 CST success: [ipd-cloud-bdp07.py]
16:50:07 CST [InstallKubeBinariesModule] Synchronize kubernetes binaries
16:50:24 CST success: [ipd-cloud-bdp08.py]
16:50:24 CST success: [ipd-cloud-bdp07.py]
16:50:24 CST success: [ipd-cloud-bdp09.py]
16:50:24 CST [InstallKubeBinariesModule] Synchronize kubelet
16:50:24 CST success: [ipd-cloud-bdp09.py]
16:50:24 CST success: [ipd-cloud-bdp08.py]
16:50:24 CST success: [ipd-cloud-bdp07.py]
16:50:24 CST [InstallKubeBinariesModule] Generate kubelet service
16:50:25 CST success: [ipd-cloud-bdp08.py]
16:50:25 CST success: [ipd-cloud-bdp09.py]
16:50:25 CST success: [ipd-cloud-bdp07.py]
16:50:25 CST [InstallKubeBinariesModule] Enable kubelet service
16:50:28 CST success: [ipd-cloud-bdp09.py]
16:50:28 CST success: [ipd-cloud-bdp08.py]
16:50:28 CST success: [ipd-cloud-bdp07.py]
16:50:28 CST [InstallKubeBinariesModule] Generate kubelet env
16:50:28 CST success: [ipd-cloud-bdp08.py]
16:50:28 CST success: [ipd-cloud-bdp09.py]
16:50:28 CST success: [ipd-cloud-bdp07.py]
16:50:28 CST [InitKubernetesModule] Generate kubeadm config
16:50:29 CST success: [ipd-cloud-bdp07.py]
16:50:29 CST [InitKubernetesModule] Init cluster using kubeadm
16:52:30 CST stdout: [ipd-cloud-bdp07.py]
W0608 16:50:29.813807   62695 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.21.5
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [ipd-cloud-bdp07.py ipd-cloud-bdp07.py.cluster.local ipd-cloud-bdp08.py ipd-cloud-bdp08.py.cluster.local ipd-cloud-bdp09.py ipd-cloud-bdp09.py.cluster.local ipd-cloud-dmp01.ys ipd-cloud-dmp01.ys.cluster.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost] and IPs [10.233.0.1 10.86.68.66 127.0.0.1 10.89.235.12 10.86.68.67 10.86.67.11]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.

	Unfortunately, an error has occurred:
		timed out waiting for the condition

	This error is likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'

	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.

	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'

error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
16:52:31 CST stdout: [ipd-cloud-bdp07.py]
[reset] Reading configuration from the cluster...
[reset] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0608 16:52:31.336774    6748 reset.go:99] [reset] Unable to fetch the kubeadm-config ConfigMap from cluster: failed to get config map: Get "https://lb.kubesphere.local:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config?timeout=10s": dial tcp 10.86.68.66:6443: connect: connection refused
[preflight] Running pre-flight checks
W0608 16:52:31.337046    6748 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]

The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d

The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.

If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.

The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
16:52:31 CST message: [ipd-cloud-bdp07.py]
init kubernetes cluster failed: Failed to exec command: sudo -E /bin/bash -c "/usr/local/bin/kubeadm init --config=/etc/kubernetes/kubeadm-config.yaml --ignore-preflight-errors=FileExisting-crictl"
W0608 16:50:29.813807   62695 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [10.233.0.10]; the provided value is: [169.254.25.10]
[init] Using Kubernetes version: v1.21.5
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [ipd-cloud-bdp07.py ipd-cloud-bdp07.py.cluster.local ipd-cloud-bdp08.py ipd-cloud-bdp08.py.cluster.local ipd-cloud-bdp09.py ipd-cloud-bdp09.py.cluster.local ipd-cloud-dmp01.ys ipd-cloud-dmp01.ys.cluster.local kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb.kubesphere.local localhost] and IPs [10.233.0.1 10.86.68.66 127.0.0.1 10.89.235.12 10.86.68.67 10.86.67.11]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] External etcd mode: Skipping etcd/ca certificate authority generation
[certs] External etcd mode: Skipping etcd/server certificate generation
[certs] External etcd mode: Skipping etcd/peer certificate generation
[certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation
[certs] External etcd mode: Skipping apiserver-etcd-client certificate generation
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.

	Unfortunately, an error has occurred:
		timed out waiting for the condition

	This error is likely caused by:

		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'

	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	To troubleshoot, list all containers using your preferred container runtimes CLI.

	Here is one example how you may list all Kubernetes containers running in docker:
		- 'docker ps -a | grep kube | grep -v pause'
		Once you have found the failing container, you can inspect its logs with:
		- 'docker logs CONTAINERID'

error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher: Process exited with status 1
16:52:31 CST retry: [ipd-cloud-bdp07.py]

Next we troubleshoot on the failing master node (ipd-cloud-bdp07.py), following kubeadm's own suggestion to inspect the kubelet journal:

journalctl -xeu kubelet

Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Flag --network-plugin has been deprecated, will be removed along with dockershim.
I0608 18:48:47.760571   55200 server.go:440] "Kubelet version" kubeletVersion="v1.21.5"
I0608 18:48:47.761117   55200 server.go:851] "Client rotation is on, will bootstrap in background"
I0608 18:48:47.764361   55200 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
I0608 18:48:47.765791   55200 dynamic_cafile_content.go:167] Starting client-ca-bundle::/etc/kubernetes/pki/ca.crt
I0608 18:48:47.765801   55200 container_manager_linux.go:991] "CPUAccounting not enabled for process" pid=55200
I0608 18:48:47.765951   55200 container_manager_linux.go:994] "MemoryAccounting not enabled for process" pid=55200
I0608 18:48:47.843172   55200 server.go:660] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
I0608 18:48:47.844266   55200 container_manager_linux.go:278] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
I0608 18:48:47.844603   55200 container_manager_linux.go:283] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[cpu:{i:{value:200 scale:-3} d:{Dec:<nil>} s:200m Format:DecimalSI} memory:{i:{value:262144000 scale:0} d:{Dec:<nil>} s:250Mi Format:BinarySI}] SystemReserved:map[cpu:{i:{value:200 scale:-3} d:{Dec:<nil>} s:200m Format:DecimalSI} memory:{i:{value:262144000 scale:0} d:{Dec:<nil>} s:250Mi Format:BinarySI}] HardEvictionThresholds:[{Signal:pid.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:memory.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:1000 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
I0608 18:48:47.844698   55200 topology_manager.go:120] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
I0608 18:48:47.844740   55200 container_manager_linux.go:314] "Initializing Topology Manager" policy="none" scope="container"
I0608 18:48:47.844768   55200 container_manager_linux.go:319] "Creating device plugin manager" devicePluginEnabled=true
I0608 18:48:47.844915   55200 kubelet.go:307] "Using dockershim is deprecated, please consider using a full-fledged CRI implementation"
I0608 18:48:47.844983   55200 client.go:78] "Connecting to docker on the dockerEndpoint" endpoint="unix:///var/run/docker.sock"
I0608 18:48:47.845033   55200 client.go:97] "Start docker client with request timeout" timeout="2m0s"
I0608 18:48:47.852243   55200 docker_service.go:566] "Hairpin mode is set but kubenet is not enabled, falling back to HairpinVeth" hairpinMode=promiscuous-bridge
I0608 18:48:47.852498   55200 docker_service.go:242] "Hairpin mode is set" hairpinMode=hairpin-veth
I0608 18:48:47.852687   55200 cni.go:239] "Unable to update cni config" err="no networks found in /etc/cni/net.d"
I0608 18:48:47.856616   55200 cni.go:239] "Unable to update cni config" err="no networks found in /etc/cni/net.d"
I0608 18:48:47.856706   55200 docker_service.go:257] "Docker cri networking managed by the network plugin" networkPluginName="cni"
I0608 18:48:47.856783   55200 cni.go:239] "Unable to update cni config" err="no networks found in /etc/cni/net.d"
I0608 18:48:47.866067   55200 docker_service.go:264] "Docker Info" dockerInfo=&{ID:ZDBP:HN4W:2242:4Q3N:6RF3:I7BD:H6XJ:6YDC:7233:5YGE:HUZB:TWZV Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:11 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff false] [userxattr false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:26 OomKillDisable:true NGoroutines:34 SystemTime:2022-06-08T18:48:47.857369822+08:00 LoggingDriver:json-file CgroupDriver:systemd CgroupVersion:1 NEventsListener:0 KernelVersion:3.10.0-514.16.1.el7.x86_64 OperatingSystem:CentOS Linux 7 (Core) OSVersion:7 OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc0017700e0 NCPU:40 MemTotal:134170812416 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ipd-cloud-bdp07.py Labels:[] ExperimentalBuild:false ServerVersion:20.10.8 ClusterStore: ClusterAdvertise: Runtimes:map[io.containerd.runc.v2:{Path:runc Args:[] Shim:<nil>} io.containerd.runtime.v1.linux:{Path:runc Args:[] Shim:<nil>} runc:{Path:runc Args:[] Shim:<nil>}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:e25210fe30a0a703442421b0f60afac609f950a3 Expected:e25210fe30a0a703442421b0f60afac609f950a3} RuncCommit:{ID:v1.0.1-0-g4144b638 Expected:v1.0.1-0-g4144b638} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense:Community Engine DefaultAddressPools:[] Warnings:[]}
I0608 18:48:47.866120   55200 docker_service.go:277] "Setting cgroupDriver" cgroupDriver="systemd"
I0608 18:48:47.874861   55200 remote_runtime.go:62] parsed scheme: ""
I0608 18:48:47.875008   55200 remote_runtime.go:62] scheme "" not registered, fallback to default scheme
I0608 18:48:47.875056   55200 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock  <nil> 0 <nil>}] <nil> <nil>}
I0608 18:48:47.875092   55200 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0608 18:48:47.875185   55200 remote_image.go:50] parsed scheme: ""
I0608 18:48:47.875209   55200 remote_image.go:50] scheme "" not registered, fallback to default scheme
I0608 18:48:47.875230   55200 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock  <nil> 0 <nil>}] <nil> <nil>}
I0608 18:48:47.875247   55200 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0608 18:48:47.875381   55200 kubelet.go:404] "Attempting to sync node with API server"
I0608 18:48:47.875416   55200 kubelet.go:272] "Adding static pod path" path="/etc/kubernetes/manifests"
I0608 18:48:47.875487   55200 kubelet.go:283] "Adding apiserver pod source"
I0608 18:48:47.875529   55200 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
E0608 18:48:47.877013   55200 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://lb.kubesphere.local:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.86.68.66:6443: connect: connection refused
E0608 18:48:47.877032   55200 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://lb.kubesphere.local:6443/api/v1/nodes?fieldSelector=metadata.name%3Dipd-cloud-bdp07.py&limit=500&resourceVersion=0": dial tcp 10.86.68.66:6443: connect: connection refused
I0608 18:48:47.883161   55200 kuberuntime_manager.go:222] "Container runtime initialized" containerRuntime="docker" version="20.10.8" apiVersion="1.41.0"
E0608 18:48:48.760575   55200 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://lb.kubesphere.local:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.86.68.66:6443: connect: connection refused
E0608 18:48:49.301103   55200 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://lb.kubesphere.local:6443/api/v1/nodes?fieldSelector=metadata.name%3Dipd-cloud-bdp07.py&limit=500&resourceVersion=0": dial tcp 10.86.68.66:6443: connect: connection refused
E0608 18:48:51.239066   55200 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://lb.kubesphere.local:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.86.68.66:6443: connect: connection refused
E0608 18:48:51.860426   55200 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://lb.kubesphere.local:6443/api/v1/nodes?fieldSelector=metadata.name%3Dipd-cloud-bdp07.py&limit=500&resourceVersion=0": dial tcp 10.86.68.66:6443: connect: connection refused
I0608 18:48:52.857136   55200 cni.go:239] "Unable to update cni config" err="no networks found in /etc/cni/net.d"
E0608 18:48:54.155000   55200 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
	For verbose messaging see aws.Config.CredentialsChainVerboseErrors
I0608 18:48:54.155688   55200 server.go:1190] "Started kubelet"
I0608 18:48:54.155780   55200 server.go:149] "Starting to listen" address="0.0.0.0" port=10250
E0608 18:48:54.155791   55200 kubelet.go:1306] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache"
E0608 18:48:54.156355   55200 event.go:273] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ipd-cloud-bdp07.py.16f69f8b6565438a", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ipd-cloud-bdp07.py", UID:"ipd-cloud-bdp07.py", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ipd-cloud-bdp07.py"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc0a03d858946e78a, ext:6489781931, loc:(*time.Location)(0x74f4aa0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc0a03d858946e78a, ext:6489781931, loc:(*time.Location)(0x74f4aa0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://lb.kubesphere.local:6443/api/v1/namespaces/default/events": dial tcp 10.86.68.66:6443: connect: connection refused'(may retry after sleeping)
I0608 18:48:54.156925   55200 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
I0608 18:48:54.157702   55200 volume_manager.go:271] "Starting Kubelet Volume Manager"
I0608 18:48:54.157784   55200 desired_state_of_world_populator.go:141] "Desired state populator starts to run"
E0608 18:48:54.158594   55200 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: Get "https://lb.kubesphere.local:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ipd-cloud-bdp07.py?timeout=10s": dial tcp 10.86.68.66:6443: connect: connection refused
E0608 18:48:54.159611   55200 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://lb.kubesphere.local:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.86.68.66:6443: connect: connection refused
I0608 18:48:54.164982   55200 server.go:409] "Adding debug handlers to kubelet server"
E0608 18:48:54.166377   55200 kubelet.go:2211] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
I0608 18:48:54.166507   55200 client.go:86] parsed scheme: "unix"
I0608 18:48:54.166527   55200 client.go:86] scheme "unix" not registered, fallback to default scheme
I0608 18:48:54.166632   55200 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}
I0608 18:48:54.166654   55200 clientconn.go:948] ClientConn switching balancer to "pick_first"
I0608 18:48:54.182659   55200 kubelet_network_linux.go:56] "Initialized protocol iptables rules." protocol=IPv4
E0608 18:48:54.186322   55200 kubelet_network_linux.go:79] "Failed to ensure that nat chain exists KUBE-MARK-DROP chain" err="error creating chain \"KUBE-MARK-DROP\": exit status 3: ip6tables v1.4.21: can't initialize ip6tables table `nat': Address family not supported by protocol\nPerhaps ip6tables or your kernel needs to be upgraded.\n"
I0608 18:48:54.186421   55200 kubelet_network_linux.go:64] "Failed to initialize protocol iptables rules; some functionality may be missing." protocol=IPv6
I0608 18:48:54.186458   55200 status_manager.go:157] "Starting to sync pod status with apiserver"
I0608 18:48:54.186484   55200 kubelet.go:1846] "Starting kubelet main sync loop"
E0608 18:48:54.186560   55200 kubelet.go:1870] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
E0608 18:48:54.187307   55200 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://lb.kubesphere.local:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.86.68.66:6443: connect: connection refused
E0608 18:48:54.257594   55200 kubelet.go:2291] "Error getting node" err="node \"ipd-cloud-bdp07.py\" not found"
I0608 18:48:54.278718   55200 kubelet_node_status.go:71] "Attempting to register node" node="ipd-cloud-bdp07.py"
E0608 18:48:54.279300   55200 kubelet_node_status.go:93] "Unable to register node with API server" err="Post \"https://lb.kubesphere.local:6443/api/v1/nodes\": dial tcp 10.86.68.66:6443: connect: connection refused" node="ipd-cloud-bdp07.py"
E0608 18:48:54.286649   55200 kubelet.go:1870] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
E0608 18:48:54.358153   55200 kubelet.go:2291] "Error getting node" err="node \"ipd-cloud-bdp07.py\" not found"
E0608 18:48:54.359784   55200 controller.go:144] failed to ensure lease exists, will retry in 400ms, error: Get "https://lb.kubesphere.local:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ipd-cloud-bdp07.py?timeout=10s": dial tcp 10.86.68.66:6443: connect: connection refused
E0608 18:48:54.459526   55200 kubelet.go:2291] "Error getting node" err="node \"ipd-cloud-bdp07.py\" not found"
E0608 18:48:54.487124   55200 kubelet.go:1870] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
I0608 18:48:54.501547   55200 kubelet_node_status.go:71] "Attempting to register node" node="ipd-cloud-bdp07.py"
E0608 18:48:54.502106   55200 kubelet_node_status.go:93] "Unable to register node with API server" err="Post \"https://lb.kubesphere.local:6443/api/v1/nodes\": dial tcp 10.86.68.66:6443: connect: connection refused" node="ipd-cloud-bdp07.py"
E0608 18:48:54.560111   55200 kubelet.go:2291] "Error getting node" err="node \"ipd-cloud-bdp07.py\" not found"
I0608 18:48:54.576687   55200 cpu_manager.go:199] "Starting CPU manager" policy="none"
I0608 18:48:54.576725   55200 cpu_manager.go:200] "Reconciling" reconcilePeriod="10s"
I0608 18:48:54.576752   55200 state_mem.go:36] "Initialized new in-memory state store"
I0608 18:48:54.576928   55200 state_mem.go:88] "Updated default CPUSet" cpuSet=""
I0608 18:48:54.576952   55200 state_mem.go:96] "Updated CPUSet assignments" assignments=map[]
I0608 18:48:54.576973   55200 policy_none.go:44] "None policy: Start"
E0608 18:48:54.579262   55200 node_container_manager_linux.go:57] "Failed to create cgroup" err="Unit type slice does not support transient units." cgroupName=[kubepods]
E0608 18:48:54.579305   55200 kubelet.go:1384] "Failed to start ContainerManager" err="Unit type slice does not support transient units."
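
The repeated "connection refused" errors against lb.kubesphere.local:6443 are only a symptom: the kube-apiserver static pod never comes up because the kubelet itself dies. The decisive lines are the last two, where the kubelet fails to create the kubepods cgroup and then fails to start its ContainerManager with "Unit type slice does not support transient units." A quick way to pull just those lines out of a long journal (a sketch; it assumes the kubelet unit installed by KubeKey is simply named kubelet):

journalctl -u kubelet --no-pager | grep -E 'Failed to (create cgroup|start ContainerManager)'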

Solution

The same failure is reported, together with a fix, in the Kubernetes issue tracker: https://github.com/kubernetes/kubernetes/issues/76820
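
A minimal remediation sketch, assuming (as the Docker info above suggests: CentOS 7, kernel 3.10.0-514, systemd cgroup driver) that the root cause is an outdated systemd that cannot create transient slice units; the exact steps in the linked issue may differ, and switching Docker and kubelet to the cgroupfs cgroup driver is sometimes reported as an alternative workaround:

# Bring systemd up to a version that can create the transient kubepods slice,
# then restart the runtime stack on the affected node.
yum update -y systemd
systemctl daemon-reexec        # pick up the updated systemd (a full reboot also works)
systemctl restart docker
systemctl restart kubelet

Once 'systemctl status kubelet' shows the service staying active and 'curl -s http://localhost:10248/healthz' returns ok, re-run the KubeKey installation; kubeadm init should then get past the wait-control-plane phase.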
