k8s 1.23 single-master cluster: upgrading the containerd-runtime version [dev/test environment]

This article walks through upgrading the master and worker nodes of a Kubernetes cluster step by step: changing the cri-socket setting, choosing the target version, running the upgrade plan, fixing the kubelet network-plugin flag error, and restarting services.

Pre-upgrade status check

[root@node1 modules-load.d]# kubectl get node
NAME    STATUS   ROLES                  AGE     VERSION
node1   Ready    control-plane,master   5h44m   v1.23.17
node2   Ready    <none>                 5h43m   v1.23.17
node3   Ready    <none>                 5h42m   v1.23.17

Prerequisites

#!!! Very important
kubectl edit nodes node1
#Change the kubeadm.alpha.kubernetes.io/cri-socket annotation to containerd's socket file:
kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/containerd/containerd.sock
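Before editing, the annotation's current value can be read non-interactively. A sketch (the node name `node1` matches this cluster; the guard makes the snippet a harmless no-op on a machine without kubectl):

```shell
# Print the cri-socket annotation on node1; dots inside the annotation
# key are escaped with backslashes in kubectl's jsonpath syntax.
if command -v kubectl >/dev/null 2>&1; then
  SOCK=$(kubectl get node node1 \
    -o jsonpath='{.metadata.annotations.kubeadm\.alpha\.kubernetes\.io/cri-socket}')
else
  SOCK="(kubectl not found; run this on a machine with cluster access)"
fi
echo "$SOCK"
```

If the output still shows `/var/run/dockershim.sock`, the edit above is required before upgrading.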

Master node upgrade

[root@node1 modules-load.d]# yum  --showduplicates list kubeadm | grep 1.24
kubeadm.x86_64                       1.24.0-0                        kubernetes 
kubeadm.x86_64                       1.24.1-0                        kubernetes 
kubeadm.x86_64                       1.24.2-0                        kubernetes 
kubeadm.x86_64                       1.24.3-0                        kubernetes 
kubeadm.x86_64                       1.24.4-0                        kubernetes 
kubeadm.x86_64                       1.24.5-0                        kubernetes 
kubeadm.x86_64                       1.24.6-0                        kubernetes 
kubeadm.x86_64                       1.24.7-0                        kubernetes 
kubeadm.x86_64                       1.24.8-0                        kubernetes 
kubeadm.x86_64                       1.24.9-0                        kubernetes 
kubeadm.x86_64                       1.24.10-0                       kubernetes 
kubeadm.x86_64                       1.24.11-0                       kubernetes 
kubeadm.x86_64                       1.24.12-0                       kubernetes 
kubeadm.x86_64                       1.24.13-0                       kubernetes 
kubeadm.x86_64                       1.24.14-0                       kubernetes 
kubeadm.x86_64                       1.24.15-0                       kubernetes 
kubeadm.x86_64                       1.24.16-0                       kubernetes 
kubeadm.x86_64                       1.24.17-0                       kubernetes 

#Upgrade to the target version. Minor releases (1.x) must be upgraded one at a time, in order; a minor version cannot be skipped
yum install kubeadm-1.24.17-0 -y
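It's worth confirming that yum actually installed the target kubeadm before planning the upgrade. A sketch, guarded so it degrades gracefully off-node:

```shell
# Confirm the kubeadm binary now reports the target version.
TARGET="v1.24.17"
if command -v kubeadm >/dev/null 2>&1; then
  CUR=$(kubeadm version -o short)
else
  CUR="(kubeadm not found on this machine)"
fi
echo "target=$TARGET current=$CUR"
```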

#Check the upgrade plan
kubeadm upgrade plan
#Output:
[root@node1 modules-load.d]# kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0228 22:11:12.113344    8010 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/dockershim.sock". Please update your configuration!
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.23.17
[upgrade/versions] kubeadm version: v1.24.17
I0228 22:11:19.835894    8010 version.go:256] remote version is much newer: v1.29.2; falling back to: stable-1.24
[upgrade/versions] Target version: v1.24.17
[upgrade/versions] Latest version in the v1.23 series: v1.23.17

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT        TARGET
kubelet     3 x v1.23.17   v1.24.17

Upgrade to the latest stable version:

COMPONENT                 CURRENT    TARGET
kube-apiserver            v1.23.17   v1.24.17
kube-controller-manager   v1.23.17   v1.24.17
kube-scheduler            v1.23.17   v1.24.17
kube-proxy                v1.23.17   v1.24.17
CoreDNS                   v1.8.6     v1.8.6
etcd                      3.5.6-0    3.5.6-0

You can now apply the upgrade by executing the following command:

        kubeadm upgrade apply v1.24.17

_____________________________________________________________________


The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.

API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no
_____________________________________________________________________


#Apply the upgrade
#!!! Note: the cluster may be unable to process changes while the upgrade is running, but running pods are not affected
kubeadm upgrade apply v1.24.17
#Output:
[root@node1 modules-load.d]# kubeadm upgrade apply v1.24.17
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
W0228 22:30:41.880098   21530 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/containerd/containerd.sock". Please update your configuration!
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.24.17"
[upgrade/versions] Cluster version: v1.23.17
[upgrade/versions] kubeadm version: v1.24.17
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.24.17" (timeout: 5m0s)...
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Current and new manifests of etcd are equal, skipping upgrade
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests299697865"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2024-02-28-22-31-01/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2024-02-28-22-31-01/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2024-02-28-22-31-01/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upgrade/postupgrade] Removing the deprecated label node-role.kubernetes.io/master='' from all control plane Nodes. After this step only the label node-role.kubernetes.io/control-plane='' will be present on control plane Nodes.
[upgrade/postupgrade] Adding the new taint &Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,} to all control plane Nodes. After this step both taints &Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,} and &Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,} should be present on control plane Nodes.
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.24.17". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
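The log above is from a dev/test cluster, so workloads were left in place. The upstream kubeadm procedure additionally recommends draining a node before upgrading its kubelet and uncordoning it afterwards. A sketch (node name is an example; the guard keeps the snippet safe off-cluster):

```shell
# Optional, but recommended upstream: evict workloads before the kubelet
# upgrade, then re-enable scheduling once the node is back.
NODE=node1
if command -v kubectl >/dev/null 2>&1; then
  kubectl drain "$NODE" --ignore-daemonsets
  # ... upgrade kubelet/kubectl and restart the service here ...
  kubectl uncordon "$NODE"
  STATUS="drained and uncordoned $NODE"
else
  STATUS="(kubectl not found; run this on a machine with cluster access)"
fi
echo "$STATUS"
```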

#Upgrade kubelet and kubectl
yum  --showduplicates list kubelet | grep 1.24
yum  --showduplicates list kubectl | grep 1.24
yum install kubelet-1.24.17-0 kubectl-1.24.17-0 -y

#Restart the kubelet service
systemctl daemon-reload
systemctl restart kubelet

#Note!!! After this upgrade, kubelet fails to start with the error below
#node1 kubelet: Error: failed to parse kubelet flag: unknown flag: --network-plugin
#Fix: remove the --network-plugin=cni argument from this file (contents shown after the fix):
[root@node1 kubelet]# cat /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS="--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.5"
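The offending flag can also be stripped with sed. The snippet below demonstrates the edit on a temporary copy so it can be tried safely; on a real node, point `ENV_FILE` at `/var/lib/kubelet/kubeadm-flags.env` (back it up first) and restart kubelet afterwards:

```shell
# Demonstrate removing --network-plugin=cni from kubeadm-flags.env.
# On the node itself: ENV_FILE=/var/lib/kubelet/kubeadm-flags.env
ENV_FILE=$(mktemp)
cat > "$ENV_FILE" <<'EOF'
KUBELET_KUBEADM_ARGS="--network-plugin=cni --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.5"
EOF
sed -i 's/--network-plugin=cni *//' "$ENV_FILE"
cat "$ENV_FILE"
```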


#Master node upgrade complete; check the status
[root@node1 kubelet]# kubectl get node
NAME    STATUS   ROLES           AGE   VERSION
node1   Ready    control-plane   15h   v1.24.17
node2   Ready    <none>          15h   v1.23.17
node3   Ready    <none>          15h   v1.23.17

Worker node upgrade

#Upgrade the components
yum install kubelet-1.24.17-0 kubectl-1.24.17-0 -y
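Strictly speaking, the upstream worker procedure also upgrades the kubeadm package on the worker and runs `kubeadm upgrade node` to refresh the local kubelet configuration before restarting kubelet. In this dev/test walkthrough the package upgrade alone worked, but the extra step is cheap. A sketch (guarded so it only acts on a yum-based cluster node):

```shell
# Upstream worker-upgrade steps, run on the worker node itself.
if command -v yum >/dev/null 2>&1 && command -v kubeadm >/dev/null 2>&1; then
  yum install kubeadm-1.24.17-0 -y
  kubeadm upgrade node   # refreshes the local kubelet config from the cluster
  STATUS="kubeadm upgraded and node config refreshed"
else
  STATUS="(not on a yum-based cluster node; steps shown for reference)"
fi
echo "$STATUS"
```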

#Note!!! The same kubelet startup error appears here
#node1 kubelet: Error: failed to parse kubelet flag: unknown flag: --network-plugin
#Fix: remove the --network-plugin=cni argument from this file (contents shown after the fix):
[root@node1 kubelet]# cat /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS="--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.5"


#Restart the service
systemctl daemon-reload
systemctl restart kubelet

#Upgrade complete; check the status
[root@node1 kubelet]# kubectl get node
NAME    STATUS   ROLES           AGE   VERSION
node1   Ready    control-plane   15h   v1.24.17
node2   Ready    <none>          15h   v1.24.17
node3   Ready    <none>          15h   v1.24.17

Notes

The /var/lib/kubelet/kubeadm-flags.env file must be edited on every node (remove the --network-plugin=cni flag).
