Kubernetes [k8s] with containerd: installation guide for 1.27.3 - 1.28.0

Environment: CentOS 7.9

Node IP addresses:

master1   192.168.153.131
node1     192.168.153.132
node2     192.168.153.133
master2   192.168.153.134
master3   192.168.153.135

Prerequisites:

# Set the hostname on every machine
hostnamectl set-hostname master1
hostnamectl set-hostname node1

# On every machine, add the internal IP to hostname mappings
vi /etc/hosts

For example:
192.168.1.1 master1
192.168.1.3 node1

# Update the system
yum update
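Before moving on, it helps to confirm that hostname resolution actually works (a quick check, not part of the original steps):

# Each name should resolve to the IP you put in /etc/hosts
getent hosts master1
getent hosts node1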

Install the NTP time-sync service

1. Install chrony
# chrony ships with CentOS 7 by default; the chronyd daemon is /usr/sbin/chronyd and the chronyc command-line utility is /usr/bin/chronyc
yum install chrony
# Start it
systemctl start chronyd
systemctl status chronyd
systemctl enable chronyd
2. Configure the time server (the node the others sync from, e.g. 192.168.1.1)
# Already installed by default on CentOS 7.x; check the status
systemctl status chronyd
# Comment out the other "server" lines and add Aliyun's public NTP server
vim /etc/chrony.conf
Add:
server ntp.aliyun.com iburst
allow 0.0.0.0/0
local stratum 10
# Restart chronyd
systemctl restart chronyd
# List the time sources and check sync progress
chronyc sources -v
3. Configure the clients (all other nodes)
# Comment out the other "server" lines and point at the internal NTP server
vim /etc/chrony.conf
server 192.168.1.1 iburst
# Restart chronyd
systemctl restart chronyd
# List the time sources and check sync progress
chronyc sources -v
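To confirm a node is genuinely synchronized rather than just listing its sources, chronyc tracking shows the live offset (a quick verification, not in the original steps):

# "Leap status: Normal" plus a small "System time" offset means the clock is in sync
chronyc tracking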

Installation steps:

1. Disable the firewall

# Note: if you do not disable the firewall, you must open every port Kubernetes needs instead (see the sketch below)
sudo systemctl stop firewalld.service
sudo systemctl disable firewalld.service
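If you prefer to keep firewalld running, the ports below are the standard ones from the upstream Kubernetes documentation (a sketch; your CNI may need more, e.g. Calico's BGP uses 179/tcp):

# Control-plane node
sudo firewall-cmd --permanent --add-port=6443/tcp        # kube-apiserver
sudo firewall-cmd --permanent --add-port=2379-2380/tcp   # etcd
sudo firewall-cmd --permanent --add-port=10250/tcp       # kubelet
sudo firewall-cmd --permanent --add-port=10257/tcp       # kube-controller-manager
sudo firewall-cmd --permanent --add-port=10259/tcp       # kube-scheduler
# Worker node
sudo firewall-cmd --permanent --add-port=10250/tcp       # kubelet
sudo firewall-cmd --permanent --add-port=30000-32767/tcp # NodePort services
sudo firewall-cmd --reload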

2. Disable swap

sudo swapoff -a
sudo sed -i 's/.*swap.*/#&/' /etc/fstab
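Verify that swap is really gone; kubelet refuses to start while swap is active unless explicitly configured to tolerate it (a quick check, not in the original steps):

# Both should show no active swap
swapon --show
free -h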

3. Disable SELinux

getenforce
cat /etc/selinux/config
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
cat /etc/selinux/config

4. Use the Aliyun yum repository

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
# Enable this repository
enabled=1
# Whether to check GPG signatures on packages
gpgcheck=0
# Whether to check GPG signatures on repository metadata
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
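To confirm the repository is reachable and see which patch releases are available before pinning versions in step 7 (a quick check, not in the original steps):

yum makecache
yum list kubelet --showduplicates | tail -n 5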

5. Configure kernel bridge parameters

# br_netfilter must be loaded first, otherwise the net.bridge.* keys do not exist (see error 2 below)
sudo modprobe br_netfilter

# Set the required sysctl parameters; they persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
vm.swappiness = 0
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF

# Apply the sysctl parameters without rebooting
sudo sysctl --system
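Verify that the module is loaded and the parameters took effect (a quick check, not in the original steps):

lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward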

6. Install containerd

Newer Kubernetes releases (1.24.0 and above) recommend containerd. Kubernetes talks to containerd directly over CRI rather than relaying through extra layers as it did with Docker, so performance is better.

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

sudo yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum install -y containerd.io containerd

sudo systemctl stop containerd.service

sudo containerd config default > /etc/containerd/config.toml
sudo sed -i "s#registry.k8s.io/pause#registry.cn-hangzhou.aliyuncs.com/google_containers/pause#g" /etc/containerd/config.toml

# Edit /etc/containerd/config.toml and remove cri from disabled_plugins
cp /etc/containerd/config.toml /etc/containerd/config.toml.bak
vi /etc/containerd/config.toml
sudo sed -i "s#SystemdCgroup = false#SystemdCgroup = true#g" /etc/containerd/config.toml

sudo systemctl enable --now containerd.service
sudo systemctl status containerd.service

sudo modprobe br_netfilter
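Confirm containerd is healthy and its CRI plugin is enabled; a cri entry left in disabled_plugins is a common reason kubeadm fails later (a quick verification, not in the original steps):

containerd --version
# The io.containerd.grpc.v1 cri plugin should be listed with STATUS "ok"
sudo ctr plugins ls | grep cri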

7. Install Kubernetes (kubelet, kubeadm, kubectl)

sudo yum install -y kubelet-1.27.3-0 kubeadm-1.27.3-0 kubectl-1.27.3-0 --disableexcludes=kubernetes --nogpgcheck
sudo systemctl daemon-reload
sudo systemctl restart kubelet
sudo systemctl enable kubelet
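Verify the pinned versions installed correctly (a quick check, not in the original steps):

kubeadm version -o short
kubelet --version
kubectl version --client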

8. Initialize the Kubernetes master node

kubeadm init --image-repository=registry.aliyuncs.com/google_containers

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Let the master schedule regular workloads instead of acting only as a control plane
kubectl taint node master1 node-role.kubernetes.io/control-plane-
kubectl label node master1 kubernetes.io/role=master

sudo crictl config runtime-endpoint unix:///var/run/containerd/containerd.sock
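If you plan to add the extra masters later in this guide, pass a stable --control-plane-endpoint at init time; errors 5 and 6 in the troubleshooting section below are exactly what happens when it is skipped. A sketch, assuming the HA VIP 192.168.26.222:16443 used later in this article:

kubeadm init \
  --image-repository=registry.aliyuncs.com/google_containers \
  --kubernetes-version=v1.27.3 \
  --control-plane-endpoint=192.168.26.222:16443 \
  --upload-certs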

Add Kubernetes worker nodes

On each worker node, run the join command printed by kubeadm init in the previous step:

kubeadm join 192.168.1.1:6443 --token token.fake --discovery-token-ca-cert-hash sha256:fake

## If the token has expired, generate a fresh join command on the master:
kubeadm token create --print-join-command

9. Install the Calico network plugin

wget --no-check-certificate https://projectcalico.docs.tigera.io/archive/v3.25/manifests/calico.yaml
# Edit the calico.yaml file
vim calico.yaml
# Add the following right below - name: CLUSTER_TYPE
- name: CLUSTER_TYPE
  value: "k8s,bgp"
  # The lines below are the new content
- name: IP_AUTODETECTION_METHOD
  value: "interface=<NIC name>"
# For example: - name: IP_AUTODETECTION_METHOD
# For example:   value: "interface=eth0"  (wildcards work too, e.g. interface="eth.*|en.*")

kubectl apply -f calico.yaml
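Watch the rollout until every calico-node pod is Running; nodes stay NotReady until the CNI is up (a quick check, not in the original steps):

kubectl -n kube-system get pods -l k8s-app=calico-node -w
kubectl get nodes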

Check the cluster status

kubectl cluster-info
kubectl get nodes
kubectl get pods -A -o wide

Common commands:

# List images
ctr image list
# or
crictl images

# Pull an image. containerd separates non-k8s containers (default namespace) from k8s containers (k8s.io namespace). Always add --all-platforms
ctr i pull --all-platforms registry.xxxxx/pause:3.2
ctr -n k8s.io i pull --all-platforms registry.aliyuncs.com/google_containers/pause:3.2
# Or, for a registry (e.g. Harbor) that requires a login:
ctr i pull --user user:passwd --all-platforms registry.aliyuncs.com/google_containers/pause:3.2
# Or (not recommended, crictl has no --all-platforms):
crictl pull --creds user:passwd registry.aliyuncs.com/google_containers/pause:3.2

# Tag an image
ctr -n k8s.io i tag registry.xxxxx/pause:3.2 k8s.gcr.io/pause:3.2
# or force-overwrite an existing tag
ctr -n k8s.io i tag --force registry.xxxxx/pause:3.2 k8s.gcr.io/pause:3.2

# Delete an image tag
ctr -n k8s.io i rm registry.xxxxx/pause:3.2

# Push an image
ctr i push --all-platforms --user user:passwd registry.xxxxx/pause:3.2

# Export / save an image
ctr -n=k8s.io i export kube-apiserver:v1.28.0.tar xxxxx.com/kube-apiserver:v1.28.0 --all-platforms

# Import an image
ctr -n=k8s.io i import kube-apiserver:v1.28.0.tar

Join the master2 node

1. On the existing master, upload the certificates shared by control-plane nodes to etcd (as done when initializing the master):
[root@master1 ~]# kubeadm init phase upload-certs --upload-certs
I0326 22:08:37.656047  130527 version.go:256] remote version is much newer: v1.29.3; falling back to: stable-1.27
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
27e913069579a62212108092eda2fbbe115dba409793e0a83401c5b439bf37f1
2. Then generate the join command:
[root@master01 ~]# kubeadm token create --print-join-command
kubeadm join 192.168.26.222:16443 --token xd1v1g.pgd7ql6gdhojr3gu --discovery-token-ca-cert-hash sha256:41b6a4f7cf1e374cba12770671176f024e9ea9db31051a21f21e4edab961d754 --control-plane --certificate-key 27e913069579a62212108092eda2fbbe115dba409793e0a83401c5b439bf37f1

# Run the combined join command on master2:
[root@master02 ~]# kubeadm join 192.168.26.222:16443 --token kwrdra.06ip9206udqm7ojc --discovery-token-ca-cert-hash sha256:41b6a4f7cf1e374cba12770671176f024e9ea9db31051a21f21e4edab961d754 --control-plane --certificate-key 1b0dc9596a9e70d39874e799b935a23a26b295650579a23c4e4f23c138784aee

[root@master2 ~]# kubectl cluster-info
Kubernetes control plane is running at https://192.168.153.131:6443
CoreDNS is running at https://192.168.153.131:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

# If the first master was initialized without a VIP (controlPlaneEndpoint), the join fails with:
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
error execution phase preflight: 
One or more conditions for hosting a new control plane instance is not satisfied.

unable to add a new control plane instance to a cluster that doesn't have a stable controlPlaneEndpoint address

Please ensure that:
* The cluster has a stable controlPlaneEndpoint address.
* The certificates that must be shared among control plane instances are provided.

# Fix:
1. On the primary master, inspect kubeadm-config:
[root@master01 ~]# kubectl -n kube-system get cm kubeadm-config -oyaml|grep controlPlaneEndpoint
If controlPlaneEndpoint does not appear, it was not set at init time.

2. Edit kubeadm-config and add controlPlaneEndpoint:
kubectl -n kube-system edit cm kubeadm-config

3. Insert the new line at this position:
[root@master01 ~]# kubectl -n kube-system get cm kubeadm-config -oyaml|grep -C 3 controlPlaneEndpoint
    apiVersion: kubeadm.k8s.io/v1beta3
    certificatesDir: /etc/kubernetes/pki
    clusterName: kubernetes
    controlPlaneEndpoint: 192.168.26.222:16443 # the HA VIP
    controllerManager: {}
    dns: {}
    etcd:

4. Run the kubeadm join command on the new master again.

# Follow the printed post-join instructions (create the .kube directory, copy the config, etc.)

# Joining master3 follows exactly the same steps as master2.

[root@master3 ~]# kubectl get nodes
NAME      STATUS   ROLES                  AGE     VERSION
master1   Ready    control-plane,master   20d     v1.27.3
master2   Ready    control-plane,master   5d17h   v1.27.3
master3   Ready    control-plane,master   6m3s    v1.27.3
node1     Ready    <none>                 17d     v1.27.3
node2     Ready    <none>                 17d     v1.27.3

Troubleshooting:

# Error 1: hostname "master1" could not be reached

[root@master1 ~]# kubeadm init --image-repository=registry.aliyuncs.com/google_containers
[WARNING Hostname]: hostname "master1" could not be reached
[WARNING Hostname]: hostname "master1": lookup master1 on 192.168.153.2:53: no such host

# Fix: edit /etc/hosts (vim /etc/hosts) and make sure every hostname maps to the correct IP:

192.168.153.131 master1
192.168.153.132 node1
192.168.153.133 node2

# Error 2: [ERROR FileContent--proc-sys-net-bridge-bridge-nf-call-iptables]: /proc/sys/net/bridge/bridge-nf-call-iptables does not exist

[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=... To see the stack trace of this error execute with --v=5 or higher

# Fix: add the following to /etc/sysctl.conf:

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1

# If sysctl then reports "cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory", the br_netfilter module is not loaded; load it, verify, and re-apply:

[root@master1 ~]# modprobe br_netfilter
[root@master1 ~]# lsmod | grep br_netfilter
[root@master1 ~]# sysctl -p
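modprobe does not persist across reboots. To load br_netfilter automatically at boot, the standard systemd mechanism is a modules-load drop-in (a small addition, not in the original steps):

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF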

# Error 3: a worker node reports: The connection to the server localhost:8080 was refused - did you specify the right host or port?

[root@node2 ~]# kubectl get nodes
E0318 21:48:42.708435   99161 memcache.go:265] couldn't get current server API group list: Get "http://localhost:8080/api?timeout=32s": dial tcp [::1]:8080: connect: connection refused

# Fix: the admin kubeconfig was never copied to this node.

Copy /etc/kubernetes/admin.conf from the master to this machine, then point KUBECONFIG at it:
1. Copy the config file
scp root@<master address>:/etc/kubernetes/admin.conf /etc/kubernetes/
2. Add the environment variable
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
3. Load it
source ~/.bash_profile

4. Check again

[root@node2 ~]# kubectl get nodes
NAME      STATUS   ROLES                  AGE   VERSION
master1   Ready    control-plane,master   59m   v1.27.3
node1     Ready    <none>                 44m   v1.27.3
node2     Ready    <none>                 43m   v1.27.3
Note: if this error appears on the master node itself, re-run step 8 to initialize the master.

# Error 4: a node is stuck in NotReady

Delete the node

# First drain the pods off the node to be deleted (assume it is named k8s-node1); run on the master
kubectl drain k8s-node1 --delete-local-data --force --ignore-daemonsets
# (on newer kubectl releases the flag is --delete-emptydir-data)
# Then delete the node (run on the master)
kubectl delete node k8s-node1
# Finally, on the deleted node itself:
kubeadm reset
# and remove the leftover kubeconfig
rm -rf $HOME/.kube

Add the node back

Adding a new node needs a token and the CA cert hash from the original master:

# Create a token (on the master)
kubeadm token create    # valid for 24 hours
# To create a non-expiring token: kubeadm token create --ttl 0
# List tokens
kubeadm token list
# Compute the CA cert hash (on the master)
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
# Join the cluster (on the new node)
kubeadm join $ip:$port --token $token_value --discovery-token-ca-cert-hash sha256:$hash_value
# where
# $ip is the master's IP address
# $port is the API server port, usually 6443
# $token_value is the token created above
# $hash_value is the hash computed above

Restart a node

# 1) Master node: restart kubelet directly on it
systemctl restart kubelet
# 2) Worker node:
# 2.1 Mark the node unschedulable (on the master)
kubectl cordon nodename
# 2.2 Drain its pods (on the master)
kubectl drain nodename --delete-local-data --force --ignore-daemonsets
# 2.3 Restart kubelet (on the node itself)
systemctl restart kubelet
# 2.4 Make the node schedulable again (on the master)
kubectl uncordon nodename

# Error 5: the first attempt to join the new master2 node fails as follows:

[root@master2 ~]# kubeadm join 192.168.153.131:6443 --token afdm9h.gdw6m1d074uh0th7 --discovery-token-ca-cert-hash sha256:c72322f45c1a482870bae2153e47ff09245eb9a50bd19b86662293eec862513e   --control-plane
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
error execution phase preflight: 
One or more conditions for hosting a new control plane instance is not satisfied.

unable to add a new control plane instance to a cluster that doesn't have a stable controlPlaneEndpoint address

Please ensure that:
* The cluster has a stable controlPlaneEndpoint address.
* The certificates that must be shared among control plane instances are provided.

Fix:

Inspect kubeadm-config:
kubectl -n kube-system get cm kubeadm-config -oyaml
If controlPlaneEndpoint is missing, add it:
kubectl -n kube-system edit cm kubeadm-config
It belongs roughly here:

kind: ClusterConfiguration
kubernetesVersion: v1.18.0
controlPlaneEndpoint: 192.168.153.134:6443

Then re-run the kubeadm join command on the node being promoted to master.

# Error 6: master2 reports: error execution phase preflight: One or more conditions for hosting a new control plane instance is not satisfied.

[root@master2 ~]# kubeadm join 192.168.153.131:6443 --token afdm9h.gdw6m1d074uh0th7 --discovery-token-ca-cert-hash sha256:c72322f45c1a482870bae2153e47ff09245eb9a50bd19b86662293eec862513e   --control-plane
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
error execution phase preflight: 
One or more conditions for hosting a new control plane instance is not satisfied.

[failure loading certificate for CA: couldn't load the certificate file /etc/kubernetes/pki/ca.crt: open /etc/kubernetes/pki/ca.crt: no such file or directory, failure loading key for service account: couldn't load the private key file /etc/kubernetes/pki/sa.key: open /etc/kubernetes/pki/sa.key: no such file or directory, failure loading certificate for front-proxy CA: couldn't load the certificate file /etc/kubernetes/pki/front-proxy-ca.crt: open /etc/kubernetes/pki/front-proxy-ca.crt: no such file or directory, failure loading certificate for etcd CA: couldn't load the certificate file /etc/kubernetes/pki/etcd/ca.crt: open /etc/kubernetes/pki/etcd/ca.crt: no such file or directory]

Please ensure that:
* The cluster has a stable controlPlaneEndpoint address.
* The certificates that must be shared among control plane instances are provided.

To see the stack trace of this error execute with --v=5 or higher

Fix: the join command above was missing --certificate-key, so the shared certificates could not be fetched. Regenerate the full join command on master1 and join again:

[root@master2 ~]# kubeadm join 192.168.153.131:6443 --token bhysu1.tiuusulhx0ej6y0w --discovery-token-ca-cert-hash sha256:c72322f45c1a482870bae2153e47ff09245eb9a50bd19b86662293eec862513e --control-plane --certificate-key 71da9034de63d4b45ed3097b938a35dbfac6ffa8a87c8cce793e9850cb11f827
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0402 04:30:27.065137   75148 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [192.160.0.10]; the provided value is: [10.96.0.10]
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
W0402 04:30:27.136126   75148 checks.go:835] detected that the sandbox image "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6" of the container runtime is inconsistent with that used by kubeadm. It is recommended that using "registry.aliyuncs.com/google_containers/pause:3.9" as the CRI sandbox image.

# Error 7: master2 is stuck in NotReady

[root@master2 ~]# kubectl get nodes
NAME      STATUS     ROLES                  AGE     VERSION
master1   Ready      control-plane,master   14d     v1.27.3
master2   NotReady   control-plane          5m53s   v1.27.3
node1     Ready      <none>                 11d     v1.27.3
node2     Ready      <none>                 11d     v1.27.3

Fix:

# kubectl get pods -n kube-system -o wide shows the calico-node-vq5kh pod failing on its image
[root@master2 ~]# kubectl get pods -n kube-system -o wide
NAME                                       READY   STATUS                  RESTARTS       AGE   IP                NODE      NOMINATED NODE   READINESS GATES
calico-kube-controllers-6c99c8747f-tf5vm   1/1     Running                 2 (11d ago)    13d   172.16.137.74     master1   <none>           <none>
calico-node-2xsnv                          1/1     Running                 0              11d   192.168.153.132   node1     <none>           <none>
calico-node-thj8t                          1/1     Running                 0              11d   192.168.153.133   node2     <none>           <none>
calico-node-vq5kh                          0/1     Init:ImagePullBackOff   0              30m   192.168.153.134   master2   <none>           <none>
# Identify the missing image
[root@master2 ~]# kubectl describe pod calico-node-vq5kh  -n kube-system
Name:                 calico-node-vq5kh
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Service Account:      calico-node
Node:                 master2/192.168.153.134
Start Time:           Tue, 02 Apr 2024 04:30:32 +0800
Labels:               controller-revision-hash=5579898cc4
                      k8s-app=calico-node
                      pod-template-generation=1
Annotations:          <none>
Status:               Pending
IP:                   192.168.153.134
IPs:
  IP:           192.168.153.134
Controlled By:  DaemonSet/calico-node
Init Containers:
  upgrade-ipam:
    Container ID:  
    Image:         docker.io/calico/cni:v3.25.0
# Pull it with ctr; the error below means the system clock is wrong, so correct the time first
[root@master2 ~]# ctr pull docker.io/calico/cni:v3.25.0
No help topic for 'pull'
[root@master2 ~]# ctr i pull --all-platforms docker.io/calico/cni:v3.25.0
docker.io/calico/cni:v3.25.0: resolving      |--------------------------------------| 
elapsed: 0.5 s                total:   0.0 B (0.0 B/s)                                         
INFO[0000] trying next host                              error="failed to do request: Head \"https://registry-1.docker.io/v2/calico/cni/manifests/v3.25.0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2024-04-02T05:02:55+08:00 is before 2024-04-04T00:00:00Z" host=registry-1.docker.io
ctr: failed to resolve reference "docker.io/calico/cni:v3.25.0": failed to do request: Head "https://registry-1.docker.io/v2/calico/cni/manifests/v3.25.0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2024-04-02T05:02:55+08:00 is before 2024-04-04T00:00:00Z
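Since the pull fails only because the clock is behind, force an immediate time correction and retry (a sketch reusing the chrony setup from earlier; note the k8s.io namespace so kubelet can see the image):

systemctl restart chronyd
# Step the clock at once instead of slewing gradually
chronyc makestep
date
# Retry the pull into the k8s.io namespace
ctr -n k8s.io i pull --all-platforms docker.io/calico/cni:v3.25.0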

# Error 8: master2 and master3 are missing the master role marker

[root@master2 ~]# kubectl get nodes
NAME      STATUS   ROLES                  AGE     VERSION
master1   Ready    control-plane,master   18d     v1.27.3
master2   Ready    control-plane          3d19h   v1.27.3
node1     Ready    <none>                 15d     v1.27.3
node2     Ready    <none>                 15d     v1.27.3

Fix: master2 is not yet labeled for scheduling as a master; add the label and remove the control-plane taint with the following commands:

[root@master2 ~]# kubectl label node master2 kubernetes.io/role=master
node/master2 labeled
# note: the command above targets master2
[root@master2 ~]# kubectl taint node master2 node-role.kubernetes.io/control-plane-
node/master2 untainted
# note: the command above targets master2
[root@master2 ~]# 
# Finally, check the status
[root@master2 ~]# kubectl get nodes
NAME      STATUS   ROLES                  AGE     VERSION
master1   Ready    control-plane,master   18d     v1.27.3
master2   Ready    control-plane,master   4d12h   v1.27.3
node1     Ready    <none>                 15d     v1.27.3
node2     Ready    <none>                 15d     v1.27.3

# Error 9: error execution phase preflight: couldn't validate the identity of the API Server: could not find a JWS signature in the cluster-info ConfigMap for token ID "bhysu1"

[root@master3 ~]# kubeadm join 192.168.153.131:6443 --token bhysu1.tiuusulhx0ej6y0w --discovery-token-ca-cert-hash sha256:c72322f45c1a482870bae2153e47ff09245eb9a50bd19b86662293eec862513e --control-plane --certificate-key 71da9034de63d4b45ed3097b938a35dbfac6ffa8a87c8cce793e9850cb11f827
[preflight] Running pre-flight checks
error execution phase preflight: couldn't validate the identity of the API Server: could not find a JWS signature in the cluster-info ConfigMap for token ID "bhysu1"
To see the stack trace of this error execute with --v=5 or higher

Fix: 1) check that time is synchronized across the servers; 2) on master1 check whether the certificates have expired and regenerate them if so (see the check commands after the transcript below).

[root@master3 ~]# kubeadm join 192.168.153.131:6443 --token lga4o4.adlb7f3zs5ddyz2m --discovery-token-ca-cert-hash sha256:c72322f45c1a482870bae2153e47ff09245eb9a50bd19b86662293eec862513e --control-plane --certificate-key 170346a2a4200082d389afe60f53577a0d3db6abf16691cbb21c43813108ba2e
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0407 22:21:33.892468   23976 utils.go:69] The recommended value for "clusterDNS" in "KubeletConfiguration" is: [192.160.0.10]; the provided value is: [10.96.0.10]
# The remaining output shows the normal success steps; finish by creating the kubeconfig files and directories as instructed
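To check certificate lifetimes on master1, kubeadm has dedicated subcommands (standard in 1.27; renewal is only needed when check-expiration reports expired entries):

kubeadm certs check-expiration
# If anything has expired:
kubeadm certs renew all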