Installing a Kubernetes 1.20 master on CentOS 8

Goal

Install Kubernetes 1.20 on CentOS 8.

Name      Version                                Notes
OS        CentOS 8
docker    docker-ce-19.03.14-3.el8.x86_64
          docker-ce-cli-19.03.14-3.el8.x86_64
          containerd.io-1.4.3-3.1                Docker 20 is not yet recommended with kubelet 1.20
kubelet   cri-tools-1.13.0-0.x86_64
          kubelet-1.20.1-0.x86_64
          kubernetes-cni-0.8.7-0.x86_64
          kubeadm-1.20.1-0.x86_64
          kubectl-1.20.1-0.x86_64

CentOS 8

Fixing the yum error "Failed to set locale, defaulting to C.UTF-8"

Fix:

localectl set-locale LANG=zh_CN.UTF-8

If you then hit "-bash: warning: setlocale: LC_CTYPE: cannot change locale (zh_CN.utf8): No such file or directory":

Fix:

# locale -a  | grep zh_CN.utf8
zh_CN.utf8
# vim /etc/locale.conf    (edit this file)
LANG="zh_CN.UTF-8"
# vim /etc/profile
LANG="zh_CN.UTF-8"
LC_ALL=C
export LC_ALL  LANG
Reboot to apply the fix.
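After the reboot, you can confirm the locale with localectl (trimmed sample output):

# localectl status
   System Locale: LANG=zh_CN.UTF-8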

Package preparation

yum -y install yum-utils device-mapper-persistent-data lvm2 libtool-ltdl libcgroup langpacks-en glibc-all-langpacks tc

Installing docker

Package repository

https://download.docker.com/linux/centos/

Add the repository

# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# yum-config-manager --enable docker-ce-nightly
# yum-config-manager --disable docker-ce-test

Install

yum install -y docker-ce-19.03.14-3.el8 docker-ce-cli-19.03.14-3.el8 containerd.io-1.4.3-3.1.el8

Configure containerd

mkdir -p /etc/containerd
containerd config default | tee /etc/containerd/config.toml
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/g' /etc/containerd/config.toml
systemctl restart containerd
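A quick check that the sed actually flipped the cgroup setting:

# grep SystemdCgroup /etc/containerd/config.toml
            SystemdCgroup = true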

Configure docker

/etc/docker/daemon.json

{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}

Start docker

mkdir -p /etc/systemd/system/docker.service.d
systemctl daemon-reload
systemctl restart docker
systemctl enable docker
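Once docker is back up, confirm it picked up the systemd cgroup driver from daemon.json:

# docker info | grep -i cgroup
 Cgroup Driver: systemd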

Newer Docker versions use containerd.service

systemctl restart containerd.service
systemctl enable containerd.service

How to configure a proxy
vim /usr/lib/systemd/system/containerd.service

[Service]
Environment="HTTP_PROXY=http://proxy_ip:port"
Environment="HTTPS_PROXY=http://proxy_ip:port"

vim /usr/lib/systemd/system/docker.service

[Service]
Environment="HTTP_PROXY=http://proxy_ip:port"
Environment="HTTPS_PROXY=http://proxy_ip:port"

Restart the services

systemctl restart containerd.service
systemctl restart docker

Installing kubernetes

Package repository

[kubernetes]
name=kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
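One way to drop the repo in place, assuming the conventional path /etc/yum.repos.d/kubernetes.repo:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
EOF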

Listing available versions with yum

yum list kubectl --showduplicates | sort -r

Install the packages

yum install -y kubelet-1.20.1 kubeadm-1.20.1 kubectl-1.20.1

Pin the versions explicitly; a bare "yum install kubelet kubeadm kubectl" installs whatever is latest in the repo rather than 1.20.

Possible errors

# yum install -y kubectl kubeadm kubernetes-cni --downloadonly --downloaddir=.
Last metadata expiration check: 1:32:36 ago on Wed 23 Dec 2020 12:01:31.
Modular dependency problems:

 Problem 1: conflicting requests
  - nothing provides module(perl:5.26) needed by module perl-IO-Socket-SSL:2.066:8030020200715230104:1e4bbb35-0.x86_64
 Problem 2: conflicting requests
  - nothing provides module(perl:5.26) needed by module perl-libwww-perl:6.34:8030020200716155257:b967a9a2-0.x86_64
No match for argument: kubernetes-cni
Error: Unable to find a match: kubernetes-cni

Fix

yum module reset perl-IO-Socket-SSL perl-libwww-perl

How to download packages

Two ways to download without installing:

# yum install -y kubectl kubeadm kubernetes-cni --downloadonly --downloaddir=.
# yumdownloader -y kubectl kubeadm kubernetes-cni 

Configuring the master

swapoff

sed -i /swap/d  /etc/fstab
swapoff -a

vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
Environment="KUBELET_EXTRA_ARGS=--fail-swap-on=false"

Load kernel modules

echo -e "overlay\nbr_netfilter" > /etc/modules-load.d/containerd.conf
modprobe overlay
modprobe br_netfilter
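Confirm both modules are loaded:

# lsmod | grep -E 'overlay|br_netfilter'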

Kernel tuning

/etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
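The file is only read at boot, so apply it now and spot-check one value:

sysctl --system
sysctl net.ipv4.ip_forward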

cgroup

/etc/default/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=cgroupfs"

The kubelet cgroup driver must match the container runtime's. With the daemon.json above (native.cgroupdriver=systemd) this should read --cgroup-driver=systemd, which is also what the kubeadm-generated config.yaml below ends up using.

Note

Do not start kubelet manually after installing the packages; kubeadm starts the service for the first time during initialization.
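It is still worth enabling (not starting) the service so it comes back after a reboot; kubeadm preflight warns if this is skipped:

systemctl enable kubelet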

Configure the Alibaba Cloud registry

# docker login add8qc2u.mirror.aliyuncs.com --username=yourname@mail.com
Password: password
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

Pre-pull the images

Note: the coredns, etcd, pause and dashboard tags below come from an older release; kubeadm v1.20.1 actually expects coredns:1.7.0, etcd:3.4.13-0 and pause:3.2, so verify with kubeadm config images list --kubernetes-version v1.20.1 first.

docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.20.1
docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.20.1
docker pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.20.1
docker pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.20.1
docker pull registry.aliyuncs.com/google_containers/coredns:1.2.6
docker pull registry.aliyuncs.com/google_containers/etcd:3.2.24
docker pull  registry.aliyuncs.com/google_containers/pause:3.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1

Retag the images to k8s.gcr.io

docker tag registry.aliyuncs.com/google_containers/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6
docker tag registry.aliyuncs.com/google_containers/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.1 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
docker tag registry.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.20.1   k8s.gcr.io/kube-proxy:v1.20.1
docker tag registry.aliyuncs.com/google_containers/kube-controller-manager:v1.20.1  k8s.gcr.io/kube-controller-manager:v1.20.1
docker tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.20.1 k8s.gcr.io/kube-scheduler:v1.20.1
docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.20.1   k8s.gcr.io/kube-apiserver:v1.20.1
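The same retagging as a loop, covering the images pulled from registry.aliyuncs.com/google_containers (the dashboard image comes from a different path, so it is tagged separately above):

for img in kube-apiserver:v1.20.1 kube-proxy:v1.20.1 kube-controller-manager:v1.20.1 kube-scheduler:v1.20.1 coredns:1.2.6 etcd:3.2.24 pause:3.1; do
    docker tag registry.aliyuncs.com/google_containers/$img k8s.gcr.io/$img
done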

Check the images (the sample below was captured later with kubeadm v1.24.8 installed, hence the 1.24 image list)

# kubeadm config images list
W0109 10:50:54.208796   28689 version.go:103] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get "https://dl.k8s.io/release/stable-1.txt": x509: certificate signed by unknown authority
W0109 10:50:54.208851   28689 version.go:104] falling back to the local client version: v1.24.8
k8s.gcr.io/kube-apiserver:v1.24.8
k8s.gcr.io/kube-controller-manager:v1.24.8
k8s.gcr.io/kube-scheduler:v1.24.8
k8s.gcr.io/kube-proxy:v1.24.8
k8s.gcr.io/pause:3.7
k8s.gcr.io/etcd:3.5.5-0
k8s.gcr.io/coredns/coredns:v1.8.6
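Instead of pulling and retagging by hand, kubeadm can fetch everything for the target version in one step:

kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.20.1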

Initializing kubernetes

Flag notes

--kubernetes-version v1.20.1 pins the rpm and image versions installed above
--pod-network-cidr=10.244.0.0/16 the address range handed out to pods (flannel's default network)
--image-repository registry.aliyuncs.com/google_containers overrides the default public registry

Note:

If the network configuration later changes or was wrong to begin with, kube-controller-manager will fail to start.
The network range can be redefined in /etc/kubernetes/manifests/kube-controller-manager.yaml.
After redefining it, the kube-controller-manager pod in -n kube-system must be deleted for the change to take effect.
Add --v=9 for more verbose logging.

#  kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.20.1 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.20.1
[preflight] Running pre-flight checks
        [WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Using existing apiserver-kubelet-client certificate and key on disk
[certs] Using existing front-proxy-ca certificate authority
[certs] Using existing front-proxy-client certificate and key on disk
[certs] Using existing etcd/ca certificate authority
[certs] Using existing etcd/server certificate and key on disk
[certs] Using existing etcd/peer certificate and key on disk
[certs] Using existing etcd/healthcheck-client certificate and key on disk
[certs] Using existing apiserver-etcd-client certificate and key on disk
[certs] Using the existing "sa" key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 14.501732 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.20" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node ns-yun-020040.vclound.com as control-plane by adding the labels "node-role.kubernetes.io/master=''" and "node-role.kubernetes.io/control-plane='' (deprecated)"
[mark-control-plane] Marking the node ns-yun-020040.vclound.com as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 6vdumb.px6saoy0o7v0exvs
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.189.20.40:6443 --token 6vdumb.px6saoy0o7v0exvs \
    --discovery-token-ca-cert-hash sha256:4f0ad49ff7fc66e4383b7734b1435b3aa0785061d6be8ea3a9efe298d6a39164
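If the join command is lost, a fresh token together with the full join line can be regenerated on the master at any time:

kubeadm token create --print-join-command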

Error example

The following error occurred during initialization:

I0109 11:21:28.148232    2692 round_trippers.go:553] GET https://10.189.20.64:6443/healthz?timeout=10s  in 0 milliseconds
I0109 11:21:28.148245    2692 round_trippers.go:570] HTTP Statistics: DNSLookup 0 ms Dial 0 ms TLSHandshake 0 ms Duration 0 ms
I0109 11:21:28.148256    2692 round_trippers.go:577] Response Headers:
I0109 11:21:28.647772    2692 round_trippers.go:466] curl -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.24.8 (linux/amd64) kubernetes/fdc7750" 'https://10.189.20.64:6443/healthz?timeout=10s'
I0109 11:21:28.648044    2692 round_trippers.go:508] HTTP Trace: Dial to tcp:10.189.20.64:6443 failed: dial tcp 10.189.20.64:6443: connect: connection refused
I0109 11:21:28.648079    2692 round_trippers.go:553] GET https://10.189.20.64:6443/healthz?timeout=10s  in 0 milliseconds
I0109 11:21:28.648095    2692 round_trippers.go:570] HTTP Statistics: DNSLookup 0 ms Dial 0 ms TLSHandshake 0 ms Duration 0 ms
I0109 11:21:28.648106    2692 round_trippers.go:577] Response Headers:
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

Unfortunately, an error has occurred:
        timed out waiting for the condition

This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
        - 'systemctl status kubelet'
        - 'journalctl -xeu kubelet'

The kubelet could not start during initialization.
/var/log/messages showed that starting kube-apiserver required pulling the image registry.k8s.io/pause:3.6, which kept kube-apiserver down; after opening the firewall the initialization succeeded.

Kubernetes configuration

After initialization, /var/lib/kubelet/config.yaml is filled in automatically:

apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 0s
    cacheUnauthorizedTTL: 0s
cgroupDriver: systemd
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
cpuManagerReconcilePeriod: 0s
evictionPressureTransitionPeriod: 0s
fileCheckFrequency: 0s
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 0s
imageMinimumGCAge: 0s
kind: KubeletConfiguration
logging: {}
nodeStatusReportFrequency: 0s
nodeStatusUpdateFrequency: 0s
rotateCertificates: true
runtimeRequestTimeout: 0s
shutdownGracePeriod: 0s
shutdownGracePeriodCriticalPods: 0s
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 0s
syncFrequency: 0s
volumeStatsAggPeriod: 0s

Create the user kubeconfig

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
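A quick sanity check that kubectl can reach the cluster; the node stays NotReady until a CNI plugin is installed (sample output):

# kubectl get nodes
NAME                        STATUS     ROLES                  AGE   VERSION
ns-yun-020040.vclound.com   NotReady   control-plane,master   1m    v1.20.1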

Health checks

Check the kubelet service

# systemctl  status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Wed 2020-12-23 16:37:08 CST; 46min ago
     Docs: https://kubernetes.io/docs/
 Main PID: 56132 (kubelet)
    Tasks: 35 (limit: 823857)
   Memory: 80.1M
   CGroup: /system.slice/kubelet.service
           └─56132 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernet

In /var/log/messages you will find that the network plugin has not started yet:

Dec 23 16:43:37 ns-yun-020040 kubelet[56132]: E1223 16:43:37.323455   56132 kubelet.go:2160] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Dec 23 16:43:39 ns-yun-020040 kubelet[56132]: W1223 16:43:39.174458   56132 cni.go:239] Unable to update cni config: no networks found in /etc/cni/net.d

As a result the DNS pods fail to start:

# kubectl get pods --all-namespaces
NAMESPACE     NAME                                                READY   STATUS    RESTARTS   AGE
kube-system   coredns-7f89b7bc75-ffmc4                            0/1     Pending   0          8m20s
kube-system   coredns-7f89b7bc75-wq8t4                            0/1     Pending   0          8m20s
kube-system   etcd-ns-yun-020040.vclound.com                      1/1     Running   0          8m27s
kube-system   kube-apiserver-ns-yun-020040.vclound.com            1/1     Running   0          8m27s
kube-system   kube-controller-manager-ns-yun-020040.vclound.com   1/1     Running   0          8m27s
kube-system   kube-proxy-chl82                                    1/1     Running   0          8m20s
kube-system   kube-scheduler-ns-yun-020040.vclound.com            1/1     Running   0          8m27s

Fix: apply the flannel configuration below.

Network configuration

flannel

Download the manifest

wget  https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Edit

  net-conf.json: |
    {
      "Network": "10.189.21.0/24",  <- 对应初始化时候提供给 pod 使用网络即可
      "Backend": {
        "Type": "vxlan"
      }
    }

Apply the network

# kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

Pod check: kubernetes automatically creates the flannel pod, see the status below.
The Init status means the pod is still initializing.

# kubectl get pods --all-namespaces
NAMESPACE     NAME                                                READY   STATUS     RESTARTS   AGE
kube-system   coredns-7f89b7bc75-ffmc4                            0/1     Pending    0          18m
kube-system   coredns-7f89b7bc75-wq8t4                            0/1     Pending    0          18m
kube-system   etcd-ns-yun-020040.vclound.com                      1/1     Running    0          18m
kube-system   kube-apiserver-ns-yun-020040.vclound.com            1/1     Running    0          18m
kube-system   kube-controller-manager-ns-yun-020040.vclound.com   1/1     Running    0          18m
kube-system   kube-flannel-ds-8bnfw                               0/1     Init:0/1   0          25s
kube-system   kube-proxy-chl82                                    1/1     Running    0          18m
kube-system   kube-scheduler-ns-yun-020040.vclound.com            1/1     Running    0          18m

Describing the pod shows that kubernetes is pulling the image:

# kubectl -n kube-system describe pod kube-flannel-ds-8bnfw
....
...
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  47s   default-scheduler  Successfully assigned kube-system/kube-flannel-ds-8bnfw to ns-yun-020040.vclound.com
  Normal  Pulling    44s   kubelet            Pulling image "quay.io/coreos/flannel:v0.13.1-rc1"

Once the image has been pulled, the network is usable:

# kubectl get pods --all-namespaces
NAMESPACE     NAME                                                READY   STATUS    RESTARTS   AGE
kube-system   coredns-7f89b7bc75-ffmc4                            0/1     Pending   0          19m
kube-system   coredns-7f89b7bc75-wq8t4                            0/1     Pending   0          19m
kube-system   etcd-ns-yun-020040.vclound.com                      1/1     Running   0          19m
kube-system   kube-apiserver-ns-yun-020040.vclound.com            1/1     Running   0          19m
kube-system   kube-controller-manager-ns-yun-020040.vclound.com   1/1     Running   0          19m
kube-system   kube-flannel-ds-8bnfw                               1/1     Running   0          76s
kube-system   kube-proxy-chl82                                    1/1     Running   0          19m
kube-system   kube-scheduler-ns-yun-020040.vclound.com            1/1     Running   0          19m

With the network up, DNS works as well:

# kubectl get pods --all-namespaces
NAMESPACE     NAME                                                READY   STATUS    RESTARTS   AGE
kube-system   coredns-7f89b7bc75-ffmc4                            1/1     Running   0          20m
kube-system   coredns-7f89b7bc75-wq8t4                            1/1     Running   0          20m
kube-system   etcd-ns-yun-020040.vclound.com                      1/1     Running   0          20m
kube-system   kube-apiserver-ns-yun-020040.vclound.com            1/1     Running   0          20m
kube-system   kube-controller-manager-ns-yun-020040.vclound.com   1/1     Running   0          20m
kube-system   kube-flannel-ds-8bnfw                               1/1     Running   0          2m26s
kube-system   kube-proxy-chl82                                    1/1     Running   0          20m
kube-system   kube-scheduler-ns-yun-020040.vclound.com            1/1     Running   0          20m

This completes the basic single-node kubernetes master setup.

Special notes:

Pitfalls with kubelet 1.24

1.24 normally uses crictl for container management.
crictl must be configured, otherwise it reports the errors shown below.
1.24 can only be installed on CentOS 8 or 9 (installation fails on CentOS 7).
1.24 has dropped Docker (dockershim was removed), so no docker rpms are needed, only containerd.io.
Images must be imported with ctr, and mind the namespace (see the sketch after this list).
The /etc/systemd/system/kubelet.service.d/10-kubeadm.conf drop-in above is no longer needed.
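A sketch of the ctr import mentioned in the list above: kubelet and crictl only see images in the k8s.io namespace, so the -n flag is essential (pause.tar is a hypothetical tarball, e.g. produced by docker save on another machine):

# ctr -n k8s.io images import pause.tar
# ctr -n k8s.io images ls | grep pause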

crictl configuration

# crictl images | grep pause
WARN[0000] image connect using default endpoints: [unix:///var/run/dockershim.sock unix:///run/containerd/containerd.sock unix:///run/crio/crio.sock unix:///var/run/cri-dockerd.sock]. As the default settings are now deprecated, you should set the endpoint instead.
ERRO[0000] unable to determine image API version: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing dial unix /var/run/dockershim.sock: connect: no such file or directory"
k8s.gcr.io/pause                                                  3.7                 221177c6082a8       311kB
registry.aliyuncs.com/google_containers/pause                     3.7                 221177c6082a8       311kB

Configuration

/etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 2
debug: false
pull-image-on-create: false
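With both endpoints pointing at containerd, the same crictl command runs without the deprecation warning:

# crictl images | grep pause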

Reference log: initializing 1.24.8

# kubeadm init --kubernetes-version=stable --pod-network-cidr=10.244.0.0/16 --v=9 | tee /tmp/kubenetes.log
I0110 19:37:45.421373   45478 initconfiguration.go:117] detected and using CRI socket: unix:///var/run/containerd/containerd.sock
I0110 19:37:45.421616   45478 interface.go:432] Looking for default routes with IPv4 addresses
I0110 19:37:45.421630   45478 interface.go:437] Default route transits interface "bond0.20"
I0110 19:37:45.422182   45478 interface.go:209] Interface bond0.20 is up
I0110 19:37:45.422257   45478 interface.go:257] Interface "bond0.20" has 2 addresses :[10.189.20.65/24 fe80::ee38:8fff:fe79:2726/64].
I0110 19:37:45.422285   45478 interface.go:224] Checking addr  10.189.20.65/24.
I0110 19:37:45.422302   45478 interface.go:231] IP found 10.189.20.65
I0110 19:37:45.422312   45478 interface.go:263] Found valid IPv4 address 10.189.20.65 for interface "bond0.20".
I0110 19:37:45.422324   45478 interface.go:443] Found active IP 10.189.20.65
I0110 19:37:45.422357   45478 kubelet.go:218] the value of KubeletConfiguration.cgroupDriver is empty; setting it to "systemd"
I0110 19:37:45.432613   45478 version.go:186] fetching Kubernetes version from URL: https://dl.k8s.io/release/stable.txt
I0110 19:37:45.971002   45478 version.go:255] remote version is much newer: v1.26.0; falling back to: stable-1.24
I0110 19:37:45.971049   45478 version.go:186] fetching Kubernetes version from URL: https://dl.k8s.io/release/stable-1.24.txt
[init] Using Kubernetes version: v1.24.9
[preflight] Running pre-flight checks
I0110 19:37:46.586352   45478 checks.go:570] validating Kubernetes and kubeadm version
I0110 19:37:46.586378   45478 checks.go:170] validating if the firewall is enabled and active
I0110 19:37:46.603927   45478 checks.go:205] validating availability of port 6443
I0110 19:37:46.604074   45478 checks.go:205] validating availability of port 10259
I0110 19:37:46.604108   45478 checks.go:205] validating availability of port 10257
I0110 19:37:46.604143   45478 checks.go:282] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
I0110 19:37:46.604158   45478 checks.go:282] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
I0110 19:37:46.604170   45478 checks.go:282] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
I0110 19:37:46.604188   45478 checks.go:282] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
I0110 19:37:46.604199   45478 checks.go:432] validating if the connectivity type is via proxy or direct
        [WARNING HTTPProxy]: Connection to "https://10.189.20.65" uses proxy "http://10.199.196.187:40404". If that is not intended, adjust your proxy settings
I0110 19:37:46.604248   45478 checks.go:471] validating http connectivity to first IP address in the CIDR
        [WARNING HTTPProxyCIDR]: connection to "10.96.0.0/12" uses proxy "http://10.199.196.187:40404". This may lead to malfunctional cluster setup. Make sure that Pod and Services IP ranges specified correctly as exceptions in proxy configuration
I0110 19:37:46.604284   45478 checks.go:471] validating http connectivity to first IP address in the CIDR
        [WARNING HTTPProxyCIDR]: connection to "10.244.0.0/16" uses proxy "http://10.199.196.187:40404". This may lead to malfunctional cluster setup. Make sure that Pod and Services IP ranges specified correctly as exceptions in proxy configuration
I0110 19:37:46.604313   45478 checks.go:106] validating the container runtime
I0110 19:37:46.628552   45478 checks.go:331] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I0110 19:37:46.628623   45478 checks.go:331] validating the contents of file /proc/sys/net/ipv4/ip_forward
I0110 19:37:46.628653   45478 checks.go:646] validating whether swap is enabled or not
I0110 19:37:46.628689   45478 checks.go:372] validating the presence of executable crictl
I0110 19:37:46.628719   45478 checks.go:372] validating the presence of executable conntrack
I0110 19:37:46.628736   45478 checks.go:372] validating the presence of executable ip
I0110 19:37:46.628751   45478 checks.go:372] validating the presence of executable iptables
I0110 19:37:46.628770   45478 checks.go:372] validating the presence of executable mount
I0110 19:37:46.628787   45478 checks.go:372] validating the presence of executable nsenter
I0110 19:37:46.628804   45478 checks.go:372] validating the presence of executable ebtables
I0110 19:37:46.628819   45478 checks.go:372] validating the presence of executable ethtool
I0110 19:37:46.628834   45478 checks.go:372] validating the presence of executable socat
I0110 19:37:46.628851   45478 checks.go:372] validating the presence of executable tc
I0110 19:37:46.628866   45478 checks.go:372] validating the presence of executable touch
I0110 19:37:46.628882   45478 checks.go:518] running all checks
I0110 19:37:46.637898   45478 checks.go:403] checking whether the given node name is valid and reachable using net.LookupHost
I0110 19:37:46.638171   45478 checks.go:612] validating kubelet version
I0110 19:37:46.691474   45478 checks.go:132] validating if the "kubelet" service is enabled and active
        [WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
I0110 19:37:46.712764   45478 checks.go:205] validating availability of port 10250
I0110 19:37:46.712841   45478 checks.go:205] validating availability of port 2379
I0110 19:37:46.712879   45478 checks.go:205] validating availability of port 2380
I0110 19:37:46.712932   45478 checks.go:245] validating the existence and emptiness of directory /var/lib/etcd
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I0110 19:37:46.713084   45478 checks.go:834] using image pull policy: IfNotPresent
I0110 19:37:46.737468   45478 checks.go:843] image exists: k8s.gcr.io/kube-apiserver:v1.24.9
I0110 19:37:46.760526   45478 checks.go:843] image exists: k8s.gcr.io/kube-controller-manager:v1.24.9
I0110 19:37:46.783995   45478 checks.go:843] image exists: k8s.gcr.io/kube-scheduler:v1.24.9
I0110 19:37:46.806702   45478 checks.go:843] image exists: k8s.gcr.io/kube-proxy:v1.24.9
I0110 19:37:46.830232   45478 checks.go:851] pulling: k8s.gcr.io/pause:3.7
I0110 19:37:48.396242   45478 checks.go:843] image exists: k8s.gcr.io/etcd:3.5.5-0
I0110 19:37:48.419868   45478 checks.go:843] image exists: k8s.gcr.io/coredns/coredns:v1.8.6
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I0110 19:37:48.419937   45478 certs.go:112] creating a new certificate authority for ca
[certs] Generating "ca" certificate and key
I0110 19:37:48.537341   45478 certs.go:522] validating certificate period for ca certificate
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local ns-yun-020065.vclound.com] and IPs [10.96.0.1 10.189.20.65]
[certs] Generating "apiserver-kubelet-client" certificate and key
I0110 19:37:48.786799   45478 certs.go:112] creating a new certificate authority for front-proxy-ca
[certs] Generating "front-proxy-ca" certificate and key
I0110 19:37:49.396479   45478 certs.go:522] validating certificate period for front-proxy-ca certificate
[certs] Generating "front-proxy-client" certificate and key
I0110 19:37:49.971475   45478 certs.go:112] creating a new certificate authority for etcd-ca
[certs] Generating "etcd/ca" certificate and key
I0110 19:37:50.081920   45478 certs.go:522] validating certificate period for etcd/ca certificate
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost ns-yun-020065.vclound.com] and IPs [10.189.20.65 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost ns-yun-020065.vclound.com] and IPs [10.189.20.65 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
I0110 19:37:51.230511   45478 certs.go:78] creating new public/private key files for signing service account users
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I0110 19:37:51.327552   45478 kubeconfig.go:103] creating kubeconfig file for admin.conf
[kubeconfig] Writing "admin.conf" kubeconfig file
I0110 19:37:51.546153   45478 kubeconfig.go:103] creating kubeconfig file for kubelet.conf
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I0110 19:37:51.993205   45478 kubeconfig.go:103] creating kubeconfig file for controller-manager.conf
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
I0110 19:37:52.407803   45478 kubeconfig.go:103] creating kubeconfig file for scheduler.conf
[kubeconfig] Writing "scheduler.conf" kubeconfig file
I0110 19:37:52.674353   45478 kubelet.go:65] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
I0110 19:37:52.899760   45478 manifests.go:99] [control-plane] getting StaticPodSpecs
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I0110 19:37:52.900033   45478 certs.go:522] validating certificate period for CA certificate
I0110 19:37:52.900154   45478 manifests.go:125] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I0110 19:37:52.900170   45478 manifests.go:125] [control-plane] adding volume "etc-pki" for component "kube-apiserver"
I0110 19:37:52.900187   45478 manifests.go:125] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I0110 19:37:52.903121   45478 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
I0110 19:37:52.903144   45478 manifests.go:99] [control-plane] getting StaticPodSpecs
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I0110 19:37:52.903398   45478 manifests.go:125] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I0110 19:37:52.903414   45478 manifests.go:125] [control-plane] adding volume "etc-pki" for component "kube-controller-manager"
I0110 19:37:52.903424   45478 manifests.go:125] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I0110 19:37:52.903434   45478 manifests.go:125] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I0110 19:37:52.903444   45478 manifests.go:125] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I0110 19:37:52.904350   45478 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[control-plane] Creating static Pod manifest for "kube-scheduler"
I0110 19:37:52.904373   45478 manifests.go:99] [control-plane] getting StaticPodSpecs
I0110 19:37:52.904629   45478 manifests.go:125] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I0110 19:37:52.905238   45478 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I0110 19:37:52.906024   45478 local.go:65] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
I0110 19:37:52.906041   45478 waitcontrolplane.go:83] [wait-control-plane] Waiting for the API server to be healthy
I0110 19:37:52.907986   45478 loader.go:372] Config loaded from file:  /etc/kubernetes/admin.conf
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
I0110 19:37:52.908814   45478 round_trippers.go:466] curl -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.24.8 (linux/amd64) kubernetes/fdc7750" 'https://10.189.20.65:6443/healthz?timeout=10s'
I0110 19:37:52.909272   45478 round_trippers.go:510] HTTP Trace: Dial to tcp:10.199.196.187:40404 succeed
I0110 19:37:52.913372   45478 round_trippers.go:553] GET https://10.189.20.65:6443/healthz?timeout=10s  in 4 milliseconds
I0110 19:37:52.913405   45478 round_trippers.go:570] HTTP Statistics: DNSLookup 0 ms Dial 0 ms TLSHandshake 0 ms Duration 4 ms
I0110 19:37:52.913427   45478 round_trippers.go:577] Response Headers:
I0110 19:37:53.413803   45478 round_trippers.go:466] curl -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.24.8 (linux/amd64) kubernetes/fdc7750" 'https://10.189.20.65:6443/healthz?timeout=10s'
I0110 19:37:53.414255   45478 round_trippers.go:510] HTTP Trace: Dial to tcp:10.199.196.187:40404 succeed
I0110 19:37:53.417547   45478 round_trippers.go:553] GET https://10.189.20.65:6443/healthz?timeout=10s  in 3 milliseconds
I0110 19:37:53.417574   45478 round_trippers.go:570] HTTP Statistics: DNSLookup 0 ms Dial 0 ms TLSHandshake 0 ms Duration 3 ms
I0110 19:37:53.417586   45478 round_trippers.go:577] Response Headers:
I0110 19:37:53.914401   45478 round_trippers.go:466] curl -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.24.8 (linux/amd64) kubernetes/fdc7750" 'https://10.189.20.65:6443/healthz?timeout=10s'
I0110 19:37:53.915166   45478 round_trippers.go:510] HTTP Trace: Dial to tcp:10.199.196.187:40404 succeed
I0110 19:37:53.919058   45478 round_trippers.go:553] GET https://10.189.20.65:6443/healthz?timeout=10s  in 4 milliseconds
I0110 19:37:53.919081   45478 round_trippers.go:570] HTTP Statistics: DNSLookup 0 ms Dial 0 ms TLSHandshake 0 ms Duration 4 ms
I0110 19:37:53.919092   45478 round_trippers.go:577] Response Headers:
I0110 19:37:54.413653   45478 round_trippers.go:466] curl -v -XGET  -H "Accept: application/json, */*" -H "User-Agent: kubeadm/v1.24.8 (linux/amd64) kubernetes/fdc7750" 'https://10.189.20.65:6443/healthz?timeout=10s'
I0110 19:37:54.414227   45478 round_trippers.go:510] HTTP Trace: Dial to tcp:10.199.196.187:40404 succeed
I0110 19:37:58.229998   45478 round_trippers.go:553] GET https://10.189.20.65:6443/healthz?timeout=10s 500 Internal Server Error in 3816 milliseconds
I0110 19:37:58.230037   45478 round_trippers.go:570] HTTP Statistics: DNSLookup 0 ms Dial 0 ms TLSHandshake 3805 ms ServerProcessing 6 ms Duration 3816 ms
I0110 19:37:58.230093   45478 round_trippers.go:577] Response Headers:
I0110 19:37:58.230120   45478 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid:
I0110 19:37:58.230136   45478 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid:
I0110 19:37:58.230148   45478 round_trippers.go:580]     Content-Length: 1263
I0110 19:37:58.230158   45478 round_trippers.go:580]     Date: Tue, 10 Jan 2023 11:37:58 GMT
I0110 19:37:58.230187   45478 round_trippers.go:580]     Audit-Id: d80eb279-2d6e-4c1b-ad70-a8a119c1876c
I0110 19:37:58.230240   45478 round_trippers.go:580]     Cache-Control: no-cache, private
I0110 19:37:58.230266   45478 round_trippers.go:580]     Content-Type: text/plain; charset=utf-8
I0110 19:37:58.230286   45478 round_trippers.go:580]     X-Content-Type-Options: nosniff
I0110 19:37:58.230353   45478 request.go:1154] Response Body: [+]ping ok
[+]log ok
[+]etcd ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[-]poststarthook/start-apiextensions-controllers failed: reason withheld
[-]poststarthook/crd-informer-synced failed: reason withheld
[-]poststarthook/bootstrap-controller failed: reason withheld
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
...