Kubernetes Cluster Upgrade Guide: Upgrading Your Cluster Seamlessly

This article walks through upgrading a Kubernetes cluster from v1.24.1 to v1.25.5 with kubeadm, covering the steps for both the master node and the worker nodes: checking cluster state, removing the package hold, upgrading kubeadm, verifying the upgrade plan, draining nodes, and upgrading the kubelet and kubectl components. Along the way it covers verifying components, handling configs that require a manual upgrade, and uncordoning nodes and restarting the affected services.

Preface

This article demonstrates upgrading a Kubernetes cluster from v1.24.1 to v1.25.5.

1. Helper commands for the upgrade process

(1) List the pods running on a given node.

kubectl get pod -A -o wide | grep <nodename>

(2) View the cluster configuration.

kubectl -n kube-system get cm kubeadm-config -o yaml

(3) List the current cluster nodes.

kubectl get node

2. Upgrading the master node

2.1 Upgrade kubeadm

# Refresh the package index
sudo apt-get update
# List the available versions
apt-cache madison kubeadm

# Remove the hold on the kubeadm package
sudo apt-mark unhold kubeadm
# Install the target version
sudo apt-get install -y kubeadm=1.25.5-00
# Put the package back on hold so it is not upgraded automatically
sudo apt-mark hold kubeadm

# Verify the installed version
kubeadm version
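
On success, kubeadm version should report GitVersion "v1.25.5". For reference, if your nodes run a RHEL-family distro rather than Debian/Ubuntu, the equivalent install step (per the official kubeadm upgrade guide; the -0 package revision may differ in your repo) looks roughly like this:

# the official yum repo config excludes Kubernetes packages from normal updates,
# so --disableexcludes is required for the one-off install
sudo yum install -y kubeadm-1.25.5-0 --disableexcludes=kubernetes
kubeadm version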

2.2 Verify the upgrade plan

(1) Check which versions you can upgrade to, and verify that your current cluster is upgradable.

sudo kubeadm upgrade plan
_____________________________________________________________________

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       TARGET
kubelet     1 x v1.24.1   v1.25.8

Upgrade to the latest stable version:

COMPONENT                 CURRENT   TARGET
kube-apiserver            v1.24.1   v1.25.8
kube-controller-manager   v1.24.1   v1.25.8
kube-scheduler            v1.24.1   v1.25.8
kube-proxy                v1.24.1   v1.25.8
CoreDNS                   v1.8.6    v1.9.3
etcd                      3.5.3-0   3.5.6-0

You can now apply the upgrade by executing the following command:

	kubeadm upgrade apply v1.25.8

Note: Before you can perform this upgrade, you have to update kubeadm to v1.25.8.

_____________________________________________________________________

Note the MANUAL UPGRADE REQUIRED column below:

_____________________________________________________________________


The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.

API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no
_____________________________________________________________________

This column indicates which component configs need a manual upgrade; if it shows yes, you must upgrade that config by hand before proceeding.
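
If a row is marked yes and you want to inspect the corresponding config first, the kubelet component config lives in a ConfigMap in kube-system; the name kubelet-config below matches the one kubeadm writes during the upgrade (see the apply output in 2.3):

kubectl -n kube-system get cm kubelet-config -o yaml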

(2) Show the diff that would be applied to the existing static pod manifests. (The target version is normally given with its v prefix, i.e. v1.25.5; in the capture below it was passed without the prefix, which is why the new image tags lack the v.)

sudo kubeadm upgrade diff 1.25.5
[upgrade/diff] Reading configuration from the cluster...
[upgrade/diff] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
--- /etc/kubernetes/manifests/kube-scheduler.yaml
+++ new manifest
@@ -16,7 +16,7 @@
     - --bind-address=127.0.0.1
     - --kubeconfig=/etc/kubernetes/scheduler.conf
     - --leader-elect=true
-    image: registry.aliyuncs.com/google_containers/kube-scheduler:v1.24.1
+    image: registry.aliyuncs.com/google_containers/kube-scheduler:1.25.5
     imagePullPolicy: IfNotPresent
     livenessProbe:
       failureThreshold: 8
--- /etc/kubernetes/manifests/kube-apiserver.yaml
+++ new manifest
@@ -40,7 +40,7 @@
     - --service-cluster-ip-range=10.96.0.0/12
     - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
     - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
-    image: registry.aliyuncs.com/google_containers/kube-apiserver:v1.24.1
+    image: registry.aliyuncs.com/google_containers/kube-apiserver:1.25.5
     imagePullPolicy: IfNotPresent
     livenessProbe:
       failureThreshold: 8
--- /etc/kubernetes/manifests/kube-controller-manager.yaml
+++ new manifest
@@ -28,7 +28,7 @@
     - --service-account-private-key-file=/etc/kubernetes/pki/sa.key
     - --service-cluster-ip-range=10.96.0.0/12
     - --use-service-account-credentials=true
-    image: registry.aliyuncs.com/google_containers/kube-controller-manager:v1.24.1
+    image: registry.aliyuncs.com/google_containers/kube-controller-manager:1.25.5
     imagePullPolicy: IfNotPresent
     livenessProbe:
       failureThreshold: 8

2.3 Perform the master node upgrade

(1) Upgrade to v1.25.5. This command upgrades only the master (control plane) node:

sudo kubeadm upgrade apply v1.25.5
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.25.5"
[upgrade/versions] Cluster version: v1.24.1
[upgrade/versions] kubeadm version: v1.25.5
[upgrade] Are you sure you want to proceed? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.25.5" (timeout: 5m0s)...
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Renewing etcd-server certificate
[upgrade/staticpods] Renewing etcd-peer certificate
[upgrade/staticpods] Renewing etcd-healthcheck-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-03-19-08-29-54/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests1584419494"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-03-19-08-29-54/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-03-19-08-29-54/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-03-19-08-29-54/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upgrade/postupgrade] Removing the old taint &Taint{Key:node-role.kubernetes.io/master,Value:,Effect:NoSchedule,TimeAdded:<nil>,} from all control plane Nodes. After this step only the &Taint{Key:node-role.kubernetes.io/control-plane,Value:,Effect:NoSchedule,TimeAdded:<nil>,} taint will be present on control plane Nodes.
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.25.5". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.

(2) Drain the node: evict everything except DaemonSet-managed pods so that it is rescheduled onto other nodes, and cordon the node so that nothing new is scheduled onto it.

kubectl drain <nodename> --ignore-daemonsets
$ kubectl drain k8s-master1 --ignore-daemonsets
node/k8s-master1 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-flannel/kube-flannel-ds-nxz4d, kube-system/kube-proxy-pbnk4
evicting pod kube-system/coredns-c676cc86f-twm96
evicting pod kube-system/coredns-c676cc86f-mdgbn
pod/coredns-c676cc86f-mdgbn evicted
pod/coredns-c676cc86f-twm96 evicted
node/k8s-master1 drained

$ kubectl get pod -A
NAMESPACE      NAME                                  READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-nxz4d                 1/1     Running   0          136m
kube-system    coredns-c676cc86f-7stvs               0/1     Pending   0          60s
kube-system    coredns-c676cc86f-vmkgv               0/1     Pending   0          60s
kube-system    etcd-k8s-master1                      1/1     Running   0          11m
kube-system    kube-apiserver-k8s-master1            1/1     Running   0          10m
kube-system    kube-controller-manager-k8s-master1   1/1     Running   0          10m
kube-system    kube-proxy-pbnk4                      1/1     Running   0          9m44s
kube-system    kube-scheduler-k8s-master1            1/1     Running   0          9m58s

$ kubectl get node
NAME          STATUS                     ROLES           AGE    VERSION
k8s-master1   Ready,SchedulingDisabled   control-plane   162m   v1.24.1

(3) Upgrade the kubelet and kubectl components.

sudo apt-mark unhold kubelet kubectl
sudo apt-get install -y kubelet=1.25.5-00 kubectl=1.25.5-00
sudo apt-mark hold kubelet kubectl

(4) Restart the kubelet.

sudo systemctl daemon-reload
sudo systemctl restart kubelet
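
To confirm the kubelet came back up after the restart, check the service state; it should report active (running):

sudo systemctl status kubelet --no-pager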

(5) Uncordon the node, re-enabling scheduling.

kubectl uncordon <nodename>
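
After uncordoning, the master node should report the new version. A quick check (node names and ages will differ in your cluster):

# expect STATUS Ready (no SchedulingDisabled) and VERSION v1.25.5
kubectl get node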

3. Upgrading the worker nodes

(1) Upgrade the kubeadm package on the worker first (same steps as in 2.1), then upgrade the node's kubelet configuration.

sudo kubeadm upgrade node

(2) Drain and cordon the node. Run this command on the master node:

kubectl drain <nodename> --ignore-daemonsets

(3) Upgrade the kubelet and kubectl components.

sudo apt-mark unhold kubelet kubectl
sudo apt-get install -y kubelet=1.25.5-00 kubectl=1.25.5-00
sudo apt-mark hold kubelet kubectl

(4) Restart the kubelet.

sudo systemctl daemon-reload
sudo systemctl restart kubelet

(5) Uncordon the node. Run this command on the master node:

kubectl uncordon <nodename>
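
Once the last worker is uncordoned, verify from the master that every node reports the new version:

# all nodes should now show STATUS Ready and VERSION v1.25.5
kubectl get node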

Summary

Every release upgrade is a little different, so adapt these steps to your specific versions; this is not a one-size-fits-all guide.
The upgrade flow used here:

  1. Upgrade the master (control plane) components.
  2. Upgrade the worker node components: cordon and drain each node, upgrade its components, then uncordon it.

More generally, a Kubernetes cluster upgrade can be broken down into the following steps:

  1. Back up data. Before upgrading, back up the cluster's data, including access-control settings, configuration files, and data volumes (a minimal etcd snapshot sketch follows at the end of this section).

  2. Choose an upgrade strategy. There are broadly two options: a rolling upgrade, which upgrades one node at a time until all nodes are done, and wholesale replacement, which swaps all the old nodes for new ones in one step.

  3. Prepare the new version. Download the new release's binaries and images, for example from the official Kubernetes site, and distribute them to the cluster nodes.

  4. Upgrade the master node. Upgrade the control plane first: replace the old binaries with the new ones and start the new kube-apiserver, kube-controller-manager, and kube-scheduler.

  5. Upgrade the worker nodes. On each node, stop the kubelet and kube-proxy services, replace the binaries with the new versions, then start the new kubelet and kube-proxy.

  6. Verify the upgrade. Use kubectl to check the state of the cluster and its resource objects and confirm that every service is reachable. Also update any Kubernetes objects, such as Deployments and StatefulSets, that rely on API versions deprecated or removed in the new release.

  7. Roll back if necessary. If the upgrade fails or problems appear, roll back to the previous version; the procedure mirrors the upgrade, using the old binaries and images instead.

Kubernetes is a fast-moving open-source project, and keeping clusters upgraded is necessary for both functionality and security. Before you start:

  • Read the upgrade documentation: review the official notes for anything that needs special handling in your target version.
  • Back up, and verify the backup: make sure the backup is actually usable, so it can be restored if something goes wrong.
  • Rehearse the upgrade: run it in a test environment first to confirm that both the process and the upgraded cluster work as expected.

Checks after the upgrade:

  • Verify cluster state: check the status of the Nodes, Pods, and Services.
  • Verify applications: confirm that workloads are running normally.
  • Watch the logs: if a problem shows up, use the logs to track down the cause.

A cluster upgrade needs careful planning and preparation. Work through the steps in order (back up the data, choose a strategy, prepare the new version, upgrade the master, upgrade the workers, verify the result), pay attention to backup usability, upgrade order, and the post-upgrade checks, and the cluster and its applications will come through the upgrade in a stable state.
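
As a concrete illustration of the backup step above, here is a minimal sketch of snapshotting etcd on a kubeadm-provisioned control plane node. It assumes the kubeadm-default certificate paths, a local etcd serving on 127.0.0.1:2379, and that an etcdctl matching your etcd version is installed; the snapshot path /backup/etcd-snapshot.db is only an example.

# take an etcd snapshot, authenticating with the cluster's own etcd server certs
sudo ETCDCTL_API=3 etcdctl snapshot save /backup/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key

# sanity-check that the snapshot file is readable
sudo ETCDCTL_API=3 etcdctl snapshot status /backup/etcd-snapshot.db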
