Installing Kubernetes 1.24 with Containerd

1. This article uses client commands for working with containerd images and containers, namely crictl and ctr.
ctr is the client command shipped with containerd itself, while crictl is the CRI client command that Kubernetes provides for talking to containerd.
For ctr and crictl usage, see these references: Site 1 - common ctr and crictl commands; Site 2 - common ctr commands
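As a quick illustration of the difference, both commands below simply list images (note that ctr needs the k8s.io namespace spelled out, while crictl is already scoped to it):

# containerd's own client; Kubernetes keeps its images in the k8s.io namespace
ctr -n k8s.io images ls
# the CRI client provided for Kubernetes
crictl images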

2. Offline resources used in this article can be downloaded from: https://pan.baidu.com/s/1s96dd3Tjvg4omQUVs5EDJA?pwd=ue09
Extraction code: ue09

1 Basic environment

1.1 Operating system

OS        Spec             Hostname        IP
centos7   2 cores / 2 GB   lb-k8s-master   192.168.25.51
centos7   1 core / 1 GB    lb-k8s-node1    192.168.25.52

1.2 Software

Software     Version
K8S          1.24.2
Containerd   1.5.5

2 Base environment preparation (run on all nodes)

2.1 Map hostnames to IPs

[root@lb-k8s-master ~]# vim /etc/hosts
#Append the following at the end of the file
192.168.25.51 lb-k8s-master
192.168.25.52 lb-k8s-node1
192.168.25.53 lb-k8s-node2
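A quick check that the mappings resolve (optional):

# Each name should resolve to its mapped address, and the peer should answer
getent hosts lb-k8s-master lb-k8s-node1
ping -c 1 lb-k8s-node1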

2.2 Disable the firewall

#Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

2.3 Disable SELinux

#Disable SELinux
# Temporarily
setenforce 0
# Permanently
sudo sed -i "s/^SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
# Or edit /etc/selinux/config and set:
SELINUX=disabled
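Verify the result (optional):

# Prints Permissive right after setenforce 0, and Disabled after a reboot
getenforce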

2.4 Disable swap

# Check the swap status
sudo free -m
# Disable temporarily
sudo swapoff -a
# Disable permanently: comment out the swap entry in /etc/fstab
sudo sed -i 's/.*swap.*/#&/' /etc/fstab
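Verify that swap is gone (optional):

# The Swap line in free should read 0 total; swapon -s prints nothing when swap is off
free -m
swapon -s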

2.5 Let iptables see bridged traffic

# Enabling IPv4 forwarding of bridged traffic requires the br_netfilter kernel module, so load it
modprobe br_netfilter
#Edit /etc/sysctl.d/k8s.conf
vi /etc/sysctl.d/k8s.conf
#Add the following to the file:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
#Apply the configuration
sysctl -p /etc/sysctl.d/k8s.conf
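Note that modprobe does not persist across reboots. As a small addition beyond the original steps, a modules-load.d entry makes the module load automatically at boot:

cat <<EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF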

2.6 Configure the Kubernetes yum repository

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
        http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Build the yum cache (the installs below pass --disableexcludes=kubernetes to lift excludes for this repo only)
yum makecache fast
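To confirm the repository is usable and carries the target version (optional):

# 1.24.2 should appear among the available kubeadm versions
yum list kubeadm --showduplicates --disableexcludes=kubernetes | tail -n 5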

3 Install Containerd (run on all nodes)

3.1 Download & install

#Containerd releases for all versions: https://github.com/containerd/containerd/releases
#Download the tarball (online method)
wget https://github.com/containerd/containerd/releases/download/v1.5.5/cri-containerd-cni-1.5.5-linux-amd64.tar.gz

#Extract the tarball directly into the system directories
tar -C / -xzf cri-containerd-cni-1.5.5-linux-amd64.tar.gz

#Then add /usr/local/bin and /usr/local/sbin to the PATH variable by appending the line below to ~/.bashrc
export PATH=$PATH:/usr/local/bin:/usr/local/sbin

#Then run the following to make it take effect immediately
source ~/.bashrc

3.2 Edit the config file (configure a registry mirror)

mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
vi /etc/containerd/config.toml
# Change the following two settings
	SystemdCgroup = true
	sandbox_image = "registry.aliyuncs.com/k8sxio/pause:3.5"
#Find the first line below, then add lines 2 and 3; they tell containerd to pull docker.io images from the given mirror endpoint
   [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
     [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
       endpoint = ["https://d8b3zdiw.mirror.aliyuncs.com"]
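For reference, the two settings above can also be changed non-interactively. This is a sketch that assumes the stock containerd 1.5.5 defaults (SystemdCgroup = false and the k8s.gcr.io pause image); verify against your own config.toml before running it:

# Flip the cgroup driver to systemd
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
# Swap the pause image for the mirror copy
sed -i 's#k8s.gcr.io/pause:3.5#registry.aliyuncs.com/k8sxio/pause:3.5#' /etc/containerd/config.toml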
                       

(Screenshot of the registry mirror configuration omitted.)

3.3 Start containerd

systemctl enable containerd --now
# Verify the installation succeeded
ctr version
# Verify the service started
systemctl status containerd
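Optionally, point crictl at containerd's socket so later crictl calls need no extra flags (crictl reads /etc/crictl.yaml):

cat <<EOF > /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
EOF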


4 Install the Kubernetes control plane (run on the master node)

4.1 Install the Kubernetes components

yum install -y kubeadm-1.24.2 kubelet-1.24.2 kubectl-1.24.2 --disableexcludes=kubernetes
kubeadm version
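Optionally, pre-pull the control-plane images so the init step in 4.3 does not wait on downloads (same repository and version flags as used there):

kubeadm config images pull \
      --image-repository registry.aliyuncs.com/google_containers \
      --kubernetes-version v1.24.2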

4.2 Start kubelet

# Enable kubelet at boot and start it
systemctl enable --now kubelet

Note: if systemctl status kubelet shows that kubelet is not Active, that is normal at this point; kubelet only becomes Active after kubeadm init succeeds later on.

4.3 Initialize the cluster

kubeadm init \
      --apiserver-advertise-address=192.168.25.51 \
      --image-repository registry.aliyuncs.com/google_containers \
      --kubernetes-version v1.24.2 \
      --service-cidr=10.96.0.0/12 \
      --pod-network-cidr=10.244.0.0/16 \
      --ignore-preflight-errors=all \
      | tee kubeadm-init.log

Log of the command above:

[root@lb-k8s-master ~]# kubeadm init \
>       --apiserver-advertise-address=192.168.25.51 \
>       --image-repository registry.aliyuncs.com/google_containers \
>       --kubernetes-version v1.24.2 \
>       --service-cidr=10.96.0.0/12 \
>       --pod-network-cidr=10.244.0.0/16 \
>       --ignore-preflight-errors=all \
>       | tee kubeadm-init.log
        [WARNING NumCPU]: the number of available CPUs 1 is less than the required 2
[init] Using Kubernetes version: v1.24.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local lb-k8s-master] and IPs [10.96.0.1 192.168.25.51]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [lb-k8s-master localhost] and IPs [192.168.25.51 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [lb-k8s-master localhost] and IPs [192.168.25.51 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 9.502917 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node lb-k8s-master as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node lb-k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: p2d7oh.y9b2cufhiaw5sny6
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.25.51:6443 --token p2d7oh.y9b2cufhiaw5sny6 \
        --discovery-token-ca-cert-hash sha256:2e305c09b11f2c73635e93e34bf3a69f6d2285f26e8fc0db5f9c44f883d7ca71 

4.4 Configure the kubectl client

Run the commands suggested at the end of the 4.3 init log:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check the result:
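For example, kubectl should now answer without an explicit kubeconfig:

# Both should reach the API server as the admin user
kubectl get nodes
kubectl cluster-info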

5 Install Kubernetes worker nodes (run on worker nodes)

5.1 Install the Kubernetes components

yum install -y kubeadm-1.24.2 kubelet-1.24.2 --disableexcludes=kubernetes

5.2 Start kubelet

# Enable kubelet at boot and start it
systemctl enable --now kubelet

Note: if systemctl status kubelet shows that kubelet is not Active, that is normal at this point; kubelet only becomes Active after kubeadm join succeeds later on.

5.3 Join the cluster

kubeadm join 192.168.25.51:6443 --token p2d7oh.y9b2cufhiaw5sny6 \
        --discovery-token-ca-cert-hash sha256:2e305c09b11f2c73635e93e34bf3a69f6d2285f26e8fc0db5f9c44f883d7ca71 

Log of the command above:

[root@lb-k8s-node1 ~]# kubeadm join 192.168.25.51:6443 --token p2d7oh.y9b2cufhiaw5sny6 \
>         --discovery-token-ca-cert-hash sha256:2e305c09b11f2c73635e93e34bf3a69f6d2285f26e8fc0db5f9c44f883d7ca71
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

5.4 Manually generating a token when it expires (optional)

Note: the token from section 5.3 has a limited lifetime. If it has expired, run the following on the master node to generate a new one (optional).

Generate the value for --token:

# Option 1: generate a token valid for 24 hours
root@master:~# kubeadm token create
# Option 2: generate a token that never expires
root@master:~# kubeadm token create --ttl 0

Generate the value for --discovery-token-ca-cert-hash:

 openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

Note: prefix the generated value with sha256: to form the complete --discovery-token-ca-cert-hash value.
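Alternatively, kubeadm can emit a complete, ready-to-run join command in one step:

# Prints a full 'kubeadm join ... --token ... --discovery-token-ca-cert-hash ...' line
kubeadm token create --print-join-command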

5.5 Verify

Run kubectl get nodes on the master node:

[root@lb-k8s-master ~]# kubectl get nodes
NAME            STATUS   ROLES           AGE    VERSION
lb-k8s-master   Ready    control-plane   30m    v1.24.2
lb-k8s-node1    Ready    <none>          4m9s   v1.24.2

6 Install the Calico network plugin

6.1 Download the Calico manifest

wget https://docs.projectcalico.org/manifests/calico.yaml

6.2 Edit the Calico configuration

Set CALICO_IPV4POOL_CIDR to our actual pod CIDR:

vim calico.yaml
#Find the line '# - name: CALICO_IPV4POOL_CIDR', uncomment it, and set its value to the --pod-network-cidr passed to kubeadm init above, like so:
            - name: CALICO_IPV4POOL_CIDR
              value: "10.244.0.0/16"

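The edit can be confirmed without reopening the file (optional):

# Should print the uncommented name/value pair
grep -A1 "CALICO_IPV4POOL_CIDR" calico.yaml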

6.3 Notes on the Calico images

Note: network issues may keep you from pulling the Calico images, which would make the Calico rollout fail, so this section provides an offline way to import them.
1. First, check which images calico.yaml needs, for example with the grep below.
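A minimal way to list them:

# Lists the unique image references in the manifest
grep "image:" calico.yaml | sort -u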
This shows that 3 images are needed: docker.io/calico/node:v3.25.0, docker.io/calico/cni:v3.25.0, and docker.io/calico/kube-controllers:v3.25.0.

Where:
docker.io/calico/node:v3.25.0: used by the DaemonSet, runs on every node
docker.io/calico/cni:v3.25.0: used by the DaemonSet's initContainers, runs on every node
docker.io/calico/kube-controllers:v3.25.0: used by the Deployment, scheduled onto a single node

Upload all 3 images to every server, then import them.

6.4 Offline import of the Calico images

Three offline Calico image files are provided here; upload each image to the corresponding servers.

Download link: https://pan.baidu.com/s/1s96dd3Tjvg4omQUVs5EDJA?pwd=ue09
Extraction code: ue09

calico-cni.tar corresponds to docker.io/calico/cni:v3.25.0
calico-node.tar corresponds to docker.io/calico/node:v3.25.0
calico-kube-controllers.tar corresponds to docker.io/calico/kube-controllers:v3.25.0

# List the current images. If crictl is unfamiliar, see the references at the top of this article
crictl images

#Run the import commands. If ctr is unfamiliar, see the references at the top of this article
ctr -n k8s.io images import calico-cni.tar
ctr -n k8s.io images import calico-node.tar
ctr -n k8s.io images import calico-kube-controllers.tar

Taking the control node as an example, check the image import result:
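For example, both clients should now list all three images:

ctr -n k8s.io images ls | grep calico
crictl images | grep calico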

6.5 Create the Calico resources from calico.yaml (run on the master node)

kubectl apply -f calico.yaml

Wait a moment, then check progress (e.g. with kubectl get pods -n kube-system); everything should be created successfully.

7 Install the Metrics monitoring service (run on the master node)

7.1 Overview

Metrics Server reports CPU and memory usage for Pods and Nodes. It is not installed by default:
running a kubectl top command prompts that metrics are not available.


7.2 Download the deployment manifest

# Download the deployment manifest
wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.7.1/components.yaml
# Rename it
mv  components.yaml metrics-server.yml
#Check which images it needs
[root@lb-k8s-master metrics-server]# cat metrics-server.yml  | grep image:
        image: registry.k8s.io/metrics-server/metrics-server:v0.7.1

7.3 Edit the configuration

Allow metrics-server to skip TLS verification when scraping the kubelets.
Add the following line to the container args in metrics-server.yml (a scripted version follows below):

- --kubelet-insecure-tls  # add this line

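A non-interactive sketch of the same edit; it assumes the args list of the upstream v0.7.1 manifest (which contains --metric-resolution=15s with 8-space list indentation), so check the file before relying on it:

# Append the flag right after the --metric-resolution arg
sed -i '/--metric-resolution=15s/a\        - --kubelet-insecure-tls' metrics-server.yml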

7.4 Offline import of the metrics-server image (optional)

The required image is the one shown in 7.2: registry.k8s.io/metrics-server/metrics-server:v0.7.1.

An offline metrics-server image archive is provided in case you cannot pull the image locally:
metrics-server:v0.7.1.tar
Upload this offline image file to every server.

Download link: https://pan.baidu.com/s/1s96dd3Tjvg4omQUVs5EDJA?pwd=ue09
Extraction code: ue09

Upload metrics-server:v0.7.1.tar to the servers, then run:

# Import the image
ctr -n k8s.io images import metrics-server:v0.7.1.tar
# Check the import result
ctr -n k8s.io i ls | grep metri


7.5 Deploy Metrics-Server

kubectl apply -f metrics-server.yml

7.6 Check the result

[root@lb-k8s-master metrics-server]# kubectl get pods -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS       AGE
kube-system   calico-kube-controllers-55fc758c88-n22rl   1/1     Running   12 (67m ago)   10d
kube-system   calico-node-4xjz6                          1/1     Running   6 (67m ago)    10d
kube-system   calico-node-l5tzt                          1/1     Running   1 (67m ago)    10d
kube-system   coredns-74586cf9b6-7mszt                   1/1     Running   16 (67m ago)   10d
kube-system   coredns-74586cf9b6-ztv2n                   1/1     Running   2 (67m ago)    10d
kube-system   etcd-lb-k8s-master                         1/1     Running   2 (67m ago)    10d
kube-system   kube-apiserver-lb-k8s-master               1/1     Running   16 (67m ago)   10d
kube-system   kube-controller-manager-lb-k8s-master      1/1     Running   77 (67m ago)   10d
kube-system   kube-proxy-45ntb                           1/1     Running   1 (67m ago)    10d
kube-system   kube-proxy-vgnbj                           1/1     Running   1 (67m ago)    10d
kube-system   kube-scheduler-lb-k8s-master               1/1     Running   74 (67m ago)   10d
kube-system   metrics-server-669dbbdfc4-jxbjv            1/1     Running   0              32m

[root@lb-k8s-master metrics-server]# kubectl top nodes
NAME            CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
lb-k8s-master   121m         6%     1270Mi          67%       
lb-k8s-node1    163m         16%    555Mi           63% 
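Per-pod metrics work the same way once the server is running:

# CPU and memory usage for every pod in all namespaces
kubectl top pods -A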

