Kubernetes learning notes: Minikube experiments

Goal: learn how to run a sample application on Kubernetes using Minikube.

Install minikube and kubectl

Install minikube: follow the official "minikube start | minikube" guide (https://minikube.sigs.k8s.io/docs/start/).

Walkthrough of the process and the problems I hit:

1. Install minikube. The installation itself went fine, but `minikube start` failed. Following the error message I installed cri-dockerd; the next `minikube start` failed again, this time with a timeout. Googling pointed to a proxy problem, so I set proxy environment variables on the Linux host, without success: it kept timing out. Takeaway: learn to read logs, and learn to Google.

 
## Install
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-latest.x86_64.rpm
sudo rpm -Uvh minikube-latest.x86_64.rpm
## Start
minikube start
## Problems on start
minikube start
* minikube v1.32.0 on Centos 7.9.2009
* Using the none driver based on existing profile

X Requested memory allocation (1819MB) is less than the recommended minimum 1900MB. Deployments may fail.

* Starting control plane node minikube in cluster minikube
* Restarting existing none bare metal machine for "minikube" ...

* Exiting due to NOT_FOUND_CRI_DOCKERD: 

* Suggestion: 

    The none driver with Kubernetes v1.24+ and the docker container-runtime requires cri-dockerd.
    
    Please install cri-dockerd using these instructions:
    
    https://github.com/Mirantis/cri-dockerd
## Install cri-dockerd
wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.8/cri-dockerd-0.3.8-3.el7.src.rpm   ## (wget didn't work; downloaded the RPM manually)
 rpm -ivh cri-dockerd-0.3.8-3.el7.x86_64.rpm
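After installing the RPM, the cri-dockerd units likely still need to be enabled; a minimal sketch, assuming the package ships the usual cri-docker.service and cri-docker.socket units:

## make the cri-dockerd socket available for minikube/kubeadm to find
sudo systemctl daemon-reload
sudo systemctl enable cri-docker.socket
sudo systemctl start cri-docker.socket
systemctl status cri-docker.service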
 
## Restart minikube; it errors again
  minikube start
* minikube v1.32.0 on Centos 7.9.2009
* Using the none driver based on existing profile

X Requested memory allocation (1819MB) is less than the recommended minimum 1900MB. Deployments may fail.

* Starting control plane node minikube in cluster minikube
* Restarting existing none bare metal machine for "minikube" ...
* OS release is CentOS Linux 7 (Core)
E1220 16:02:16.423705  129636 start.go:421] unable to disable preinstalled bridge CNI(s): failed to disable all bridge cni configs in "/etc/cni/net.d": sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;: exit status 1
stdout:

stderr:
find: ‘/etc/cni/net.d’: No such file or directory
* Preparing Kubernetes v1.28.3 on Docker 24.0.1 ...
! initialization failed, will try again: kubeadm init timed out in 10 minutes

## A link about the proxy issue
https://github.com/kubernetes/kubeadm/issues/182


## A few commands I tried
sudo systemctl set-environment HTTP_PROXY=127.0.0.1:1080
sudo systemctl set-environment HTTPS_PROXY=127.0.0.1:1080
sudo systemctl restart containerd.service
sudo kubeadm config images pull
minikube start
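Worth noting: the later "connection refused" errors show that nothing was actually listening on 127.0.0.1:1080 in this VM. A sanity check worth running before wiring a proxy into systemd (substitute your real proxy address):

## confirm the proxy answers at all
curl -v -x http://127.0.0.1:1080 https://registry.k8s.io/v2/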

## Same error again


minikube start
* minikube v1.32.0 on Centos 7.9.2009
* Using the none driver based on existing profile
* Starting control plane node minikube in cluster minikube
* Restarting existing none bare metal machine for "minikube" ...
* OS release is CentOS Linux 7 (Core)
E1220 17:59:52.170484    3957 start.go:421] unable to disable preinstalled bridge CNI(s): failed to disable all bridge cni configs in "/etc/cni/net.d": sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;: exit status 1
stdout:

stderr:
find: ‘/etc/cni/net.d’: No such file or directory
* Preparing Kubernetes v1.28.3 on Docker 24.0.1 ...
! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": exit status 1
stdout:
[init] Using Kubernetes version: v1.28.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'

stderr:
	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR ImagePull]: failed to pull image registry.k8s.io/kube-apiserver:v1.28.3: output: E1220 17:59:55.167028    4628 remote_image.go:171] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://registry.k8s.io/v2/\": proxyconnect tcp: dial tcp 127.0.0.1:1080: connect: connection refused" image="registry.k8s.io/kube-apiserver:v1.28.3"
time="2023-12-20T17:59:55+08:00" level=fatal msg="pulling image: rpc error: code = Unknown desc = Error response from daemon: Get \"https://registry.k8s.io/v2/\": proxyconnect tcp: dial tcp 127.0.0.1:1080: connect: connection refused"
, error: exit status 1
	[... the same proxyconnect tcp 127.0.0.1:1080 "connection refused" ImagePull error repeats for kube-controller-manager, kube-scheduler, kube-proxy, pause:3.9, etcd:3.5.9-0, and coredns:v1.10.1 ...]
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

* 
X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": exit status 1
stdout / stderr: [identical to the preflight output above: the Swap warning plus proxyconnect tcp 127.0.0.1:1080 "connection refused" ImagePull failures for every control-plane image]

* 
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯

X Exiting due to GUEST_START: failed to start node: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": exit status 1
stdout / stderr: [the same preflight ImagePull errors repeated once more]


## Keep going
 sudo systemctl set-environment HTTP_PROXY=127.0.0.1:1080
[root@aubin aubin]# sudo systemctl set-environment HTTPS_PROXY=127.0.0.1:1080
[root@aubin aubin]# sudo systemctl restart containerd.service
[root@aubin aubin]# sudo kubeadm config images pull
Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock
To see the stack trace of this error execute with --v=5 or higher

## Reference link
https://github.com/kubernetes/kubeadm/issues/1495
## Keep going
 sudo kubeadm config images pull --criSocket=/var/run/cri-dockerd.sock
 sudo kubeadm config images pull --criSocket=/var/run/cri-dockerd.sock
unknown flag: --criSocket
To see the stack trace of this error execute with --v=5 or higher
[root@aubin aubin]#  sudo kubeadm config images pull criSocket=/var/run/cri-dockerd.sock
unknown command "criSocket=/var/run/cri-dockerd.sock" for "kubeadm config images pull"
To see the stack trace of this error execute with --v=5 or higher
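For the record, the flag kubeadm actually accepts is the hyphenated --cri-socket, and it expects a full unix:// URI, so the command should presumably have been:

sudo kubeadm config images pull --cri-socket unix:///var/run/cri-dockerd.sock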
Links consulted:
## about the proxy issue
https://github.com/kubernetes/kubeadm/issues/182
## about the multiple-CRI-endpoints error
https://github.com/kubernetes/kubeadm/issues/1495

2. Start over: rolled the Linux VM back to an earlier snapshot. Reinstalled minikube and started it with `minikube start --driver=none`; that didn't work either, and reading the official docs later it turns out the none driver isn't meant for beginners. Problems hit: timeouts, and having to choose between unix:///var/run/containerd/containerd.sock and unix:///var/run/cri-dockerd.sock. Still no solution.

## Tear it down and retry
[root@aubin aubin]#    minikube  delete
* Uninstalling Kubernetes v1.28.3 using kubeadm ...
* Deleting "minikube" in none ...
* Removed all traces of the "minikube" cluster.
[root@aubin aubin]# minikube  start
* minikube v1.32.0 on Centos 7.9.2009
* Automatically selected the docker driver. Other choices: none, ssh
* The "docker" driver should not be used with root privileges. If you wish to continue as root, use --force.
* If you are running minikube within a VM, consider using --driver=none:
*   https://minikube.sigs.k8s.io/docs/reference/drivers/none/

## Error
minikube  start  --driver=none
* minikube v1.32.0 on Centos 7.9.2009
* Using the none driver based on user configuration
* Starting control plane node minikube in cluster minikube
* Running on localhost (CPUs=2, Memory=3770MB, Disk=10230MB) ...
* OS release is CentOS Linux 7 (Core)
E1221 15:38:25.412518   32931 start.go:421] unable to disable preinstalled bridge CNI(s): failed to disable all bridge cni configs in "/etc/cni/net.d": sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;: exit status 1
stdout:

stderr:
find: ‘/etc/cni/net.d’: No such file or directory
* Preparing Kubernetes v1.28.3 on Docker 24.0.1 ...
! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": exit status 1
stdout:
[init] Using Kubernetes version: v1.28.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'

stderr:
	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR ImagePull]: failed to pull image registry.k8s.io/kube-apiserver:v1.28.3: output: E1221 15:38:29.235129   33613 remote_image.go:171] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: Get \"https://registry.k8s.io/v2/\": proxyconnect tcp: dial tcp 127.0.0.1:1080: connect: connection refused" image="registry.k8s.io/kube-apiserver:v1.28.3"
time="2023-12-21T15:38:29+08:00" level=fatal msg="pulling image: rpc error: code = Unknown desc = Error response from daemon: Get \"https://registry.k8s.io/v2/\": proxyconnect tcp: dial tcp 127.0.0.1:1080: connect: connection refused"
, error: exit status 1
	[... identical proxyconnect tcp 127.0.0.1:1080 "connection refused" errors for the remaining control-plane images ...]
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher


## The internet went down and I couldn't search for answers, so I went off to study Vbird's Linux guide instead
## The internet is back

## Image pulls still failing; it's network-related. The answers found via Google say to set up a proxy
-----------------------------------------------------------------
You can set a proxy for processes running under your shell by setting HTTP_PROXY and HTTPS_PROXY in /etc/environment or by exporting them directly. kubeadm downloads images through the containerd service, so you need to set the systemd environment:

sudo systemctl set-environment HTTP_PROXY=127.0.0.1:1080
sudo systemctl set-environment HTTPS_PROXY=127.0.0.1:1080
sudo systemctl restart containerd.service

Then you can pull the images with:

sudo kubeadm config images pull
----------------------------------------------------------------------------
## Keep going
sudo kubeadm config images pull
Found multiple CRI endpoints on the host. Please define which one do you wish to use by setting the 'criSocket' field in the kubeadm configuration file: unix:///var/run/containerd/containerd.sock, unix:///var/run/cri-dockerd.sock

## Find kubeadm's config file: kubeadm init --config /var/tmp/minikube/kubeadm.yaml
Editing the config file by hand didn't get me anywhere; for reference, a sketch of where the criSocket field lives is below.
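The criSocket field the error message talks about sits under nodeRegistration in an InitConfiguration. A minimal sketch (assumptions: kubeadm v1.28's v1beta3 API; the kubeadm-cri.yaml file name is mine):

cat <<'EOF' > kubeadm-cri.yaml
# pin the CRI endpoint so kubeadm stops complaining about multiple sockets
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
EOF
sudo kubeadm config images pull --config kubeadm-cri.yaml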

Stopped containerd, to leave only one CRI endpoint:
systemctl stop containerd
In hindsight a bad move: Docker itself runs on top of containerd, which is why the next attempt times out dialing containerd.sock.


## Still erroring

sudo kubeadm config images pull 
I1221 16:45:56.570633   38374 version.go:256] remote version is much newer: v1.29.0; falling back to: stable-1.28
W1221 16:46:06.574698   38374 version.go:104] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.28.txt": Get "https://cdn.dl.k8s.io/release/stable-1.28.txt": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
W1221 16:46:06.574722   38374 version.go:105] falling back to the local client version: v1.28.2
failed to pull image "registry.k8s.io/kube-apiserver:v1.28.2": output: E1221 16:46:06.699498   38395 remote_image.go:171] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: error creating temporary lease: connection error: desc = \"transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout\": unavailable" image="registry.k8s.io/kube-apiserver:v1.28.2"
time="2023-12-21T16:46:06+08:00" level=fatal msg="pulling image: rpc error: code = Unknown desc = Error response from daemon: error creating temporary lease: connection error: desc = \"transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout\": unavailable"
, error: exit status 1
To see the stack trace of this error execute with --v=5 or higher


## Turned off the physical host's firewall (the experiment runs inside a VM); retried
sudo kubeadm config images pull 
I1221 16:55:53.631111   38626 version.go:256] remote version is much newer: v1.29.0; falling back to: stable-1.28
failed to pull image "registry.k8s.io/kube-apiserver:v1.28.5": output: E1221 16:55:57.851321   38654 remote_image.go:171] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: error creating temporary lease: connection error: desc = \"transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout\": unavailable" image="registry.k8s.io/kube-apiserver:v1.28.5"
time="2023-12-21T16:55:57+08:00" level=fatal msg="pulling image: rpc error: code = Unknown desc = Error response from daemon: error creating temporary lease: connection error: desc = \"transport: Error while dialing dial unix:///run/containerd/containerd.sock: timeout\": unavailable"
, error: exit status 1

## Forget it. Back to step one: reinstall minikube

Download minikube and cri-dockerd, then start minikube again.

3. Start over yet again, this time with `minikube start --driver=docker`, and minikube came up. Along the way I had set up a proxy on the physical machine. Then on to the exercises: some worked, some failed. `minikube addons enable ingress` failed, so back to Google to find out why.

Debugging steps:

A. List the pods: kubectl get pods -A. Some pods were not Ready and showed errors.

B. Describe the failing pod: kubectl describe pod -n ingress-nginx ingress-nginx-controller-7c6974c4d8-gzfd8. One of the errors: dial tcp 74.125.23.82:443: i/o timeout

Suspected a network cause, probably the GFW. Searched online and fiddled around, but didn't solve it.

## The next day I noticed the error "firewalld is active". Rolled back to a snapshot: firewalld was indeed running. A link with a fix:
https://github.com/kubernetes/kubeadm/issues/312

## Took another look at my command, minikube start --driver=none: https://minikube.sigs.k8s.io/docs/drivers/none/
## The docs recommend the none driver only for advanced users; most users should use the docker driver

## Fine. minikube delete, start over
## https://minikube.sigs.k8s.io/docs/drivers/docker/
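A quick way to confirm the firewalld suspicion and, on a throwaway lab VM only, take it out of the picture:

## stopping firewalld is a lab shortcut, not a real fix; see the kubeadm issue above for the proper ports
systemctl is-active firewalld
sudo systemctl stop firewalld
sudo systemctl disable firewalld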
 
## Use the docker driver
minikube start --driver=docker
* minikube v1.32.0 on Centos 7.9.2009
* Using the docker driver based on user configuration
* The "docker" driver should not be used with root privileges. If you wish to continue as root, use --force.
* If you are running minikube within a VM, consider using --driver=none:
*   https://minikube.sigs.k8s.io/docs/reference/drivers/none/

X Exiting due to DRV_AS_ROOT: The "docker" driver should not be used with root privileges.
## Relevant fix: https://github.com/kubernetes/minikube/issues/7903


## Turns out yesterday's network problem was solved by configuring the network proxy.
Fix: right-click the network icon in the lower-right tray, open the network settings, and set the proxy server IP to 127.0.0.1 and the port to 7890, same as Clash.


[root@aubin ~]# minikube start --driver=docker --force
minikube start --driver=docker --force
* minikube v1.32.0 on Centos 7.9.2009
! minikube skips various validations when --force is supplied; this may lead to unexpected behavior
* Using the docker driver based on existing profile
* The "docker" driver should not be used with root privileges. If you wish to continue as root, use --force.
* If you are running minikube within a VM, consider using --driver=none:
*   https://minikube.sigs.k8s.io/docs/reference/drivers/none/
* Tip: To remove this root owned cluster, run: sudo minikube delete

X Requested memory allocation (1819MB) is less than the recommended minimum 1900MB. Deployments may fail.


X The requested memory allocation of 1819MiB does not leave room for system overhead (total system memory: 1819MiB). You may face stability issues.
* Suggestion: Start minikube with less memory allocated: 'minikube start --memory=1819mb'

* Starting control plane node minikube in cluster minikube
* Pulling base image ...
! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.42, but successfully downloaded docker.io/kicbase/stable:v0.0.42 as a fallback image
* docker "minikube" container is missing, will recreate.
* Creating docker container (CPUs=2, Memory=1819MB) ...
* Preparing Kubernetes v1.28.3 on Docker 24.0.7 ...
  - Generating certificates and keys ...
  - Booting up control plane ...
  - Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Verifying Kubernetes components...
* Enabled addons: default-storageclass, storage-provisioner
* Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
## It's finally up
minikube config set driver docker
! These changes will take effect upon a minikube delete and then a minikube start

## Interacting with the cluster
kubectl get po -A
NAMESPACE     NAME                               READY   STATUS    RESTARTS      AGE
kube-system   coredns-5dd5756b68-dqvrr           1/1     Running   0             13m
kube-system   etcd-minikube                      1/1     Running   0             14m
kube-system   kube-apiserver-minikube            1/1     Running   0             14m
kube-system   kube-controller-manager-minikube   1/1     Running   0             14m
kube-system   kube-proxy-5jkk2                   1/1     Running   0             13m
kube-system   kube-scheduler-minikube            1/1     Running   0             14m
kube-system   storage-provisioner                1/1     Running   1 (13m ago)   14m
## minikube bundles the Kubernetes Dashboard
[root@aubin ~]# minikube dashboard
* Enabling dashboard ...
  - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
  - Using image docker.io/kubernetesui/dashboard:v2.7.0
* Some dashboard features require the metrics-server addon. To enable all features please run:
	minikube addons enable metrics-server	
* Verifying dashboard health ...
* Launching proxy ...
* Verifying proxy health ...
http://127.0.0.1:44832/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/

^C
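## Tip: minikube dashboard keeps a proxy running in the foreground (hence the ^C above); minikube dashboard --url just prints the URL without opening a browser, which is handier on a headless VM.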
## Deploy applications
#### Service
[root@aubin ~]# kubectl create deployment hello-minikube --image=kicbase/echo-server:1.0
deployment.apps/hello-minikube created
[root@aubin ~]# kubectl expose deployment hello-minikube --type=NodePort --port=8080
service/hello-minikube exposed
[root@aubin ~]# kubectl get services hello-minikube
NAME             TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
hello-minikube   NodePort   10.105.228.220   <none>        8080:32603/TCP   11s
[root@aubin ~]# kubectl port-forward service/hello-minikube 7080:8080
Forwarding from 127.0.0.1:7080 -> 8080
Forwarding from [::1]:7080 -> 8080
The application is now reachable at http://localhost:7080/.
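A quick check from another shell (the echo-server simply reflects the request back):

curl http://localhost:7080/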

#### LoadBalancer
[root@aubin ~]# kubectl create deployment balanced --image=kicbase/echo-server:1.0
deployment.apps/balanced created
[root@aubin ~]# kubectl expose deployment balanced --type=LoadBalancer --port=8080
service/balanced exposed
###### In another window, start the tunnel to create a routable IP for the "balanced" deployment:
[root@aubin ~]# minikube tunnel
Status:	
	machine: minikube
	pid: 187481
	route: 10.96.0.0/12 -> 192.168.49.2
	minikube: Running
	services: [balanced]
    errors: 
		minikube: no errors
		router: no errors
		loadbalancer emulator: no errors
[root@aubin ~]# kubectl get services balanced
NAME       TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE
balanced   LoadBalancer   10.110.78.208   10.110.78.208   8080:30936/TCP   3m55s
The service is now available at <EXTERNAL-IP>:8080.
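With the tunnel still running, the EXTERNAL-IP above is directly curlable (using the address from my run):

curl http://10.110.78.208:8080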
#### Ingress
###### Enable the ingress addon
[root@aubin ~]# minikube addons enable ingress


## Great, another problem
minikube addons enable ingress
* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
  - Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
* Verifying ingress addon...

X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: context deadline exceeded]
* 
## Related issue
https://github.com/kubernetes/minikube/issues/13872

[root@aubin ~]# kubectl get pods -A
NAMESPACE              NAME                                         READY   STATUS              RESTARTS      AGE
default                balanced-dc9897bb7-whl8n                     1/1     Running             0             24m
default                hello-minikube-7f54cff968-dxj6v              1/1     Running             0             4h17m
ingress-nginx          ingress-nginx-admission-create-g5h7g         0/1     ImagePullBackOff    0             15m
ingress-nginx          ingress-nginx-admission-patch-fk2vg          0/1     ImagePullBackOff    0             15m
ingress-nginx          ingress-nginx-controller-7c6974c4d8-gzfd8    0/1     ContainerCreating   0             15m
kube-system            coredns-5dd5756b68-dqvrr                     1/1     Running             0             4h52m
kube-system            etcd-minikube                                1/1     Running             0             4h52m
kube-system            kube-apiserver-minikube                      1/1     Running             0             4h52m
kube-system            kube-controller-manager-minikube             1/1     Running             0             4h52m
kube-system            kube-proxy-5jkk2                             1/1     Running             0             4h52m
kube-system            kube-scheduler-minikube                      1/1     Running             0             4h52m
kube-system            storage-provisioner                          1/1     Running             4 (19m ago)   4h52m
kubernetes-dashboard   dashboard-metrics-scraper-7fd5cb4ddc-jw4nr   1/1     Running             0             4h28m
kubernetes-dashboard   kubernetes-dashboard-8694d4445c-4shnm        1/1     Running             0             4h28m
[root@aubin ~]# kubectl describe pod -n ingress-nginx ingress-nginx-controller-7c6974c4d8-gzfd8
Events:
  Type     Reason       Age                 From               Message
  ----     ------       ----                ----               -------
  Normal   Scheduled    19m                 default-scheduler  Successfully assigned ingress-nginx/ingress-nginx-controller-7c6974c4d8-gzfd8 to minikube
  Warning  FailedMount  67s (x17 over 19m)  kubelet            MountVolume.SetUp failed for volume "webhook-cert" : secret "ingress-nginx-admission" not found
[root@aubin ~]# kubectl describe pod -n ingress-nginx ingress-nginx-admission-patch-fk2vg
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  33m                    default-scheduler  Successfully assigned ingress-nginx/ingress-nginx-admission-patch-fk2vg to minikube
  Warning  Failed     32m                    kubelet            Failed to pull image "registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0@sha256:a7943503b45d552785aa3b5e457f169a5661fb94d82b8a3373bcd9ebaf9aac80": Error response from daemon: Get "https://us-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/ingress-nginx/kube-webhook-certgen/manifests/sha256:a7943503b45d552785aa3b5e457f169a5661fb94d82b8a3373bcd9ebaf9aac80": dial tcp 142.251.170.82:443: i/o timeout
  Normal   Pulling    29m (x4 over 33m)      kubelet            Pulling image "registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0@sha256:a7943503b45d552785aa3b5e457f169a5661fb94d82b8a3373bcd9ebaf9aac80"
  Warning  Failed     29m (x4 over 32m)      kubelet            Error: ErrImagePull
  Warning  Failed     28m (x7 over 32m)      kubelet            Error: ImagePullBackOff
  Warning  Failed     18m (x5 over 31m)      kubelet            Failed to pull image "registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0@sha256:a7943503b45d552785aa3b5e457f169a5661fb94d82b8a3373bcd9ebaf9aac80": Error response from daemon: Get "https://us-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/ingress-nginx/kube-webhook-certgen/manifests/sha256:a7943503b45d552785aa3b5e457f169a5661fb94d82b8a3373bcd9ebaf9aac80": dial tcp 74.125.23.82:443: i/o timeout
  Normal   BackOff    3m42s (x103 over 32m)  kubelet            Back-off pulling image "registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0@sha256:a7943503b45d552785aa3b5e457f169a5661fb94d82b8a3373bcd9ebaf9aac80"
[root@aubin ~]#  docker pull registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0@sha256:a7943503b45d552785aa3b5e457f169a5661fb94d82b8a3373bcd9ebaf9aac80
Error response from daemon: Get "https://us-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/ingress-nginx/kube-webhook-certgen/manifests/sha256:a7943503b45d552785aa3b5e457f169a5661fb94d82b8a3373bcd9ebaf9aac80": dial tcp 142.250.157.82:443: i/o timeout

## Delete it and start over; the network settings should help this time, right?
[root@aubin ~]# minikube delete
* Deleting "minikube" in docker ...
* Deleting container "minikube" ...
* Removing /root/.minikube/machines/minikube ...
* Removed all traces of the "minikube" cluster.
[root@aubin ~]#  export NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.59.0/24,192.168.49.0/24,192.168.39.0/24
[root@aubin ~]# minikube start --driver=docker --force
* minikube v1.32.0 on Centos 7.9.2009
! minikube skips various validations when --force is supplied; this may lead to unexpected behavior
* Using the docker driver based on user configuration
* The "docker" driver should not be used with root privileges. If you wish to continue as root, use --force.
* If you are running minikube within a VM, consider using --driver=none:
*   https://minikube.sigs.k8s.io/docs/reference/drivers/none/

X Requested memory allocation (1819MB) is less than the recommended minimum 1900MB. Deployments may fail.


X The requested memory allocation of 1819MiB does not leave room for system overhead (total system memory: 1819MiB). You may face stability issues.
* Suggestion: Start minikube with less memory allocated: 'minikube start --memory=1819mb'

* Using Docker driver with root privileges
* Starting control plane node minikube in cluster minikube
* Pulling base image ...
! minikube was unable to download gcr.io/k8s-minikube/kicbase:v0.0.42, but successfully downloaded docker.io/kicbase/stable:v0.0.42 as a fallback image
* Creating docker container (CPUs=2, Memory=1819MB) ...
* Found network options:
  - NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.59.0/24,192.168.49.0/24,192.168.39.0/24
* Preparing Kubernetes v1.28.3 on Docker 24.0.7 ...
  - env NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.59.0/24,192.168.49.0/24,192.168.39.0/24
  - Generating certificates and keys ...
  - Booting up control plane ...
  - Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: default-storageclass, storage-provisioner
* Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default


## Image pulls failing because of the GFW
https://github.com/kubernetes/minikube/issues/10544

[root@aubin ~]# minikube start \
>     --image-mirror-country=cn \
>     --image-repository=registry.cn-hangzhou.aliyuncs.com/google_containers \
>     --force \
>     --addons ingress
* minikube v1.32.0 on Centos 7.9.2009
! minikube skips various validations when --force is supplied; this may lead to unexpected behavior
* Using the docker driver based on existing profile
* The "docker" driver should not be used with root privileges. If you wish to continue as root, use --force.
* If you are running minikube within a VM, consider using --driver=none:
*   https://minikube.sigs.k8s.io/docs/reference/drivers/none/
* Tip: To remove this root owned cluster, run: sudo minikube delete

X Requested memory allocation (1819MB) is less than the recommended minimum 1900MB. Deployments may fail.


X The requested memory allocation of 1819MiB does not leave room for system overhead (total system memory: 1819MiB). You may face stability issues.
* Suggestion: Start minikube with less memory allocated: 'minikube start --memory=1819mb'

* Starting control plane node minikube in cluster minikube
* Pulling base image ...
* Updating the running docker "minikube" container ...
* Found network options:
  - NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.59.0/24,192.168.49.0/24,192.168.39.0/24
* Preparing Kubernetes v1.28.3 on Docker 24.0.7 ...
  - env NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.59.0/24,192.168.49.0/24,192.168.39.0/24
* Verifying Kubernetes components...
  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
  - Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
* Verifying ingress addon...
! Enabling 'ingress' returned an error: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: context deadline exceeded]
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
## Success, out of nowhere... no wait, celebrated too soon: the ingress addon still failed
## Relevant fix: https://github.com/kubernetes/minikube/issues/7903

4. Kept looking for a fix. Both the physical machine and the VM could ping registry.k8s.io, yet minikube's image pulls still timed out, so I suspected something between minikube and the VPN. I started setting the HTTP_PROXY, HTTPS_PROXY and NO_PROXY variables. This failed a few times; the values finally worked once I dropped the http:// and https:// prefixes, and the images pulled successfully. The explanation I found online: these network variables have to be passed through to minikube so that it can recognize the proxy.

## I found that my physical machine can ping registry.k8s.io

## Suspect the problem sits between minikube (in the VM) and the physical machine's network
## Yet another page of official docs:
https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/
[root@aubin ~]# export HTTP_PROXY=http://127.0.0.1:7890
[root@aubin ~]# export HTTPS_PROXY=https://127.0.0.1:7890
[root@aubin ~]# export NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.59.0/24,192.168.49.0/24,192.168.39.0/24
export HTTP_PROXY=http://192.168.18.13:7890
export HTTPS_PROXY=https://192.168.18.13:7890
vim ~/.bashrc

## In the morning, both the physical machine and the VM could ping registry.k8s.io; the same exports as above went into ~/.bashrc
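To confirm the variables actually reach new shells and minikube, a quick check (minikube echoes the values back as "Found network options:" when it picks them up):

source ~/.bashrc
env | grep -i _proxy
minikube start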

## Peak suffering
## Rolled back to the snapshot that already had minikube installed
## Set HTTP_PROXY, HTTPS_PROXY and NO_PROXY
Reinstalled minikube.

Probably something VPN-related again; another error, the image pull failed:
Error response from daemon: Get "https://registry-1.docker.io/v2/": proxyconnect tcp: EOF

## Finally, success

[root@aubin ~]# minikube addons enable ingress
* ingress is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
  - Using image registry.k8s.io/ingress-nginx/controller:v1.9.4
  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
  - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
* Verifying ingress addon...
* The 'ingress' addon is enabled

#### The ingress addon is up
#### Apply the example resources
[root@aubin ~]# kubectl apply -f https://storage.googleapis.com/minikube-site-examples/ingress-example.yaml
pod/foo-app created
service/foo-service created
pod/bar-app created
service/bar-service created
ingress.networking.k8s.io/example-ingress created
[root@aubin ~]# kubectl get ingress
NAME              CLASS   HOSTS   ADDRESS        PORTS   AGE
example-ingress   nginx   *       192.168.49.2   80      2m4s
###### Verify
[root@aubin ~]# curl 192.168.49.2/foo
Request served by foo-app

HTTP/1.1 GET /foo

Host: 192.168.49.2
Accept: */*
User-Agent: curl/7.29.0
X-Forwarded-For: 192.168.49.1
X-Forwarded-Host: 192.168.49.2
X-Forwarded-Port: 80
X-Forwarded-Proto: http
X-Forwarded-Scheme: http
X-Real-Ip: 192.168.49.1
X-Request-Id: a223d46224350d7853ef7a9aeb6e6668
X-Scheme: http
[root@aubin ~]# curl 192.168.49.2/bar
Request served by bar-app

HTTP/1.1 GET /bar

Host: 192.168.49.2
Accept: */*
User-Agent: curl/7.29.0
X-Forwarded-For: 192.168.49.1
X-Forwarded-Host: 192.168.49.2
X-Forwarded-Port: 80
X-Forwarded-Proto: http
X-Forwarded-Scheme: http
X-Real-Ip: 192.168.49.1
X-Request-Id: abc7e068e1f6d217ac1434cb197af9d7
X-Scheme: http



## Managing the cluster
[root@aubin ~]# minikube  pause
* Pausing node minikube ... 
* Paused 18 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator
[root@aubin ~]# minikube unpause
* Unpausing node minikube ... 
* Unpaused 18 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator
## Change the default memory limit (takes effect after a restart):
[root@aubin ~]# minikube config set memory 9001
! These changes will take effect upon a minikube delete and then a minikube start
[root@aubin ~]# minikube addons list
|-----------------------------|----------|--------------|--------------------------------|
|         ADDON NAME          | PROFILE  |    STATUS    |           MAINTAINER           |
|-----------------------------|----------|--------------|--------------------------------|
| ambassador                  | minikube | disabled     | 3rd party (Ambassador)         |
| auto-pause                  | minikube | disabled     | minikube                       |
| cloud-spanner               | minikube | disabled     | Google                         |
| csi-hostpath-driver         | minikube | disabled     | Kubernetes                     |
| dashboard                   | minikube | disabled     | Kubernetes                     |
| default-storageclass        | minikube | enabled ✅   | Kubernetes                     |
| efk                         | minikube | disabled     | 3rd party (Elastic)            |
| freshpod                    | minikube | disabled     | Google                         |
| gcp-auth                    | minikube | disabled     | Google                         |
| gvisor                      | minikube | disabled     | minikube                       |
| headlamp                    | minikube | disabled     | 3rd party (kinvolk.io)         |
| helm-tiller                 | minikube | disabled     | 3rd party (Helm)               |
| inaccel                     | minikube | disabled     | 3rd party (InAccel             |
|                             |          |              | [info@inaccel.com])            |
| ingress                     | minikube | enabled ✅   | Kubernetes                     |
| ingress-dns                 | minikube | disabled     | minikube                       |
| inspektor-gadget            | minikube | disabled     | 3rd party                      |
|                             |          |              | (inspektor-gadget.io)          |
| istio                       | minikube | disabled     | 3rd party (Istio)              |
| istio-provisioner           | minikube | disabled     | 3rd party (Istio)              |
| kong                        | minikube | disabled     | 3rd party (Kong HQ)            |
| kubeflow                    | minikube | disabled     | 3rd party                      |
| kubevirt                    | minikube | disabled     | 3rd party (KubeVirt)           |
| logviewer                   | minikube | disabled     | 3rd party (unknown)            |
| metallb                     | minikube | disabled     | 3rd party (MetalLB)            |
| metrics-server              | minikube | disabled     | Kubernetes                     |
| nvidia-device-plugin        | minikube | disabled     | 3rd party (NVIDIA)             |
| nvidia-driver-installer     | minikube | disabled     | 3rd party (Nvidia)             |
| nvidia-gpu-device-plugin    | minikube | disabled     | 3rd party (Nvidia)             |
| olm                         | minikube | disabled     | 3rd party (Operator Framework) |
| pod-security-policy         | minikube | disabled     | 3rd party (unknown)            |
| portainer                   | minikube | disabled     | 3rd party (Portainer.io)       |
| registry                    | minikube | disabled     | minikube                       |
| registry-aliases            | minikube | disabled     | 3rd party (unknown)            |
| registry-creds              | minikube | disabled     | 3rd party (UPMC Enterprises)   |
| storage-provisioner         | minikube | enabled ✅   | minikube                       |
| storage-provisioner-gluster | minikube | disabled     | 3rd party (Gluster)            |
| storage-provisioner-rancher | minikube | disabled     | 3rd party (Rancher)            |
| volumesnapshots             | minikube | disabled     | Kubernetes                     |
|-----------------------------|----------|--------------|--------------------------------|

#### Create a second cluster running an older Kubernetes release:
[root@aubin ~]# minikube start -p aged --kubernetes-version=v1.16.1 --driver=docker  --force 
* [aged] minikube v1.32.0 on Centos 7.9.2009
! minikube skips various validations when --force is supplied; this may lead to unexpected behavior
* Using the docker driver based on user configuration
* The "docker" driver should not be used with root privileges. If you wish to continue as root, use --force.
* If you are running minikube within a VM, consider using --driver=none:
*   https://minikube.sigs.k8s.io/docs/reference/drivers/none/
* Using Docker driver with root privileges
* Starting control plane node aged in cluster aged
* Pulling base image ...
* Creating docker container (CPUs=2, Memory=1800MB) ...
* Found network options:
  - HTTP_PROXY=192.168.18.13:7890
! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.58.2).
* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
  - HTTPS_PROXY=192.168.18.13:7890
  - NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.59.0/24,192.168.49.0/24,192.168.39.0/24
* Preparing Kubernetes v1.16.1 on Docker 24.0.7 ...
  - env HTTP_PROXY=192.168.18.13:7890
  - env HTTPS_PROXY=192.168.18.13:7890
  - env NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.59.0/24,192.168.49.0/24,192.168.39.0/24
    > kubectl.sha1:  41 B / 41 B [---------------------------] 100.00% ? p/s 0s
    > kubeadm.sha1:  41 B / 41 B [---------------------------] 100.00% ? p/s 0s
    > kubelet.sha1:  41 B / 41 B [---------------------------] 100.00% ? p/s 0s
    > kubeadm:  42.20 MiB / 42.20 MiB [-------------] 100.00% 6.30 MiB p/s 6.9s
    > kubectl:  44.52 MiB / 44.52 MiB [-------------] 100.00% 5.39 MiB p/s 8.5s
    > kubelet:  117.43 MiB / 117.43 MiB [------------] 100.00% 5.66 MiB p/s 21s
  - Generating certificates and keys ...
  - Booting up control plane ...
  - Configuring RBAC rules ...
  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Verifying Kubernetes components...
* Enabled addons: storage-provisioner, default-storageclass

! /usr/bin/kubectl is version 1.28.1, which may have incompatibilities with Kubernetes 1.16.1.
  - Want kubectl v1.16.1? Try 'minikube kubectl -- get pods -A'
* Done! kubectl is now configured to use "aged" cluster and "default" namespace by default
#### Delete all minikube clusters:
[root@aubin ~]# minikube delete --all
* Deleting "aged" in docker ...
* Removing /root/.minikube/machines/aged ...
* Removed all traces of the "aged" cluster.
* Deleting "minikube" in docker ...
* Removing /root/.minikube/machines/minikube ...
* Removed all traces of the "minikube" cluster.
* Successfully deleted all profiles
 
Recap

Clash was on, and the proxy was enabled on the physical machine.

In the VM the variables were set in ~/.bashrc; drop the https:// and http:// prefixes:

export NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.59.0/24,192.168.49.0/24,192.168.39.0/24
export HTTP_PROXY=192.168.18.13:7890
export HTTPS_PROXY=192.168.18.13:7890

Reference: "docker: Error response from daemon: Get https://registry-1.docker.io/v2/: proxyconnect tcp: EOF" - Stack Overflow

Then restart Docker, delete the minikube cluster, and run minikube start again. No more image-pull errors, for now.
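One more piece that matters with the docker driver: the Docker daemon ignores shell exports, so for it to pull kicbase and other images through the proxy the usual approach is a systemd drop-in. A sketch, assuming the same Clash endpoint as above (Docker's own docs use the http:// form here):

## create a proxy drop-in for the Docker daemon, then reload and restart it
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTP_PROXY=http://192.168.18.13:7890"
Environment="HTTPS_PROXY=http://192.168.18.13:7890"
Environment="NO_PROXY=localhost,127.0.0.1,192.168.49.2"
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker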

PS: these notes are pretty rough; I'll come back and tidy them up later.
