[root@node1 ~]# kubectl get node
NAME    STATUS   ROLES                  AGE     VERSION
node1   Ready    control-plane,master   3h15m   v1.22.17
node2   Ready    <none>                 3h13m   v1.22.17
node3   Ready    <none>                 3h13m   v1.22.17
Master node upgrade
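The yum steps below assume a Kubernetes package repository is already configured on the node. For releases of this era that was typically the (since-deprecated) Google-hosted repo; the file below is an illustrative sketch only, not a verified configuration:
# /etc/yum.repos.d/kubernetes.repo -- illustrative sketch; the legacy Google repo
# has been deprecated, so verify the correct baseurl for your environment
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1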
[root@node1 ~]# yum --showduplicates list kubeadm | grep 1.23
kubeadm.x86_64   1.23.0-0    kubernetes
kubeadm.x86_64   1.23.1-0    kubernetes
kubeadm.x86_64   1.23.2-0    kubernetes
kubeadm.x86_64   1.23.3-0    kubernetes
kubeadm.x86_64   1.23.4-0    kubernetes
kubeadm.x86_64   1.23.5-0    kubernetes
kubeadm.x86_64   1.23.6-0    kubernetes
kubeadm.x86_64   1.23.7-0    kubernetes
kubeadm.x86_64   1.23.8-0    kubernetes
kubeadm.x86_64   1.23.9-0    kubernetes
kubeadm.x86_64   1.23.10-0   kubernetes
kubeadm.x86_64   1.23.11-0   kubernetes
kubeadm.x86_64   1.23.12-0   kubernetes
kubeadm.x86_64   1.23.13-0   kubernetes
kubeadm.x86_64   1.23.14-0   kubernetes
kubeadm.x86_64   1.23.15-0   kubernetes
kubeadm.x86_64   1.23.16-0   kubernetes
kubeadm.x86_64   1.23.17-0   kubernetes
# Install the kubeadm package for the target version. Minor releases must be upgraded one at a time, in order (e.g. 1.22 -> 1.23 -> 1.24); you cannot skip a minor release.
yum install kubeadm-1.23.17-0 -y
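After installing, confirm that the new kubeadm binary is the one on the PATH:
# Verify the installed kubeadm version (should print v1.23.17)
kubeadm version -o short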
# Preview the upgrade plan
kubeadm upgrade plan
# The output looks like this:
[root@node1 ~]# kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.22.17
[upgrade/versions] kubeadm version: v1.23.17
I0228 19:44:02.426057 22119 version.go:256] remote version is much newer: v1.29.2; falling back to: stable-1.23
[upgrade/versions] Target version: v1.23.17
[upgrade/versions] Latest version in the v1.22 series: v1.22.17
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT        TARGET
kubelet     3 x v1.22.17   v1.23.17
Upgrade to the latest stable version:
COMPONENT                 CURRENT    TARGET
kube-apiserver            v1.22.17   v1.23.17
kube-controller-manager   v1.22.17   v1.23.17
kube-scheduler            v1.22.17   v1.23.17
kube-proxy                v1.22.17   v1.23.17
CoreDNS                   v1.8.4     v1.8.6
etcd                      3.5.6-0    3.5.6-0
You can now apply the upgrade by executing the following command:
kubeadm upgrade apply v1.23.17
_____________________________________________________________________
The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.
API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no
_____________________________________________________________________
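Before applying the upgrade, it is prudent to back up etcd so the control-plane state can be restored if something goes wrong. A minimal sketch, assuming a default kubeadm stacked etcd and that etcdctl is available on the node:
# Take an etcd snapshot (certificate paths are the kubeadm defaults)
ETCDCTL_API=3 etcdctl snapshot save /root/etcd-backup-$(date +%F).db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key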
# Apply the upgrade
# !!! Note: while the upgrade is in progress the cluster may be unable to process changes (the API server restarts), but running Pods are not affected.
kubeadm upgrade apply v1.23.17
# The output looks like this:
[root@node1 ~]# kubeadm upgrade apply v1.23.17
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.23.17"
[upgrade/versions] Cluster version: v1.22.17
[upgrade/versions] kubeadm version: v1.23.17
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.23.17"...
Static pod: kube-apiserver-node1 hash: f34895082fb5088a5fac6d790c649292
Static pod: kube-controller-manager-node1 hash: f71939c82c4c01af0bdc388ddb87ed55
Static pod: kube-scheduler-node1 hash: db3d8203bf99f8af1ef287ca1ba16a39
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-node1 hash: 57baaca143aca45939625125cf9e7f4b
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Current and new manifests of etcd are equal, skipping upgrade
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests3684995289"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2024-02-28-19-45-32/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-node1 hash: f34895082fb5088a5fac6d790c649292
Static pod: kube-apiserver-node1 hash: f34895082fb5088a5fac6d790c649292
Static pod: kube-apiserver-node1 hash: f34895082fb5088a5fac6d790c649292
Static pod: kube-apiserver-node1 hash: 9e6226cb9df678a6ca35242eeb6a39ad
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2024-02-28-19-45-32/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-node1 hash: f71939c82c4c01af0bdc388ddb87ed55
[... the same line repeats while kubeadm waits for the static Pod hash to change ...]
Static pod: kube-controller-manager-node1 hash: 09e9af1b370c12b0af4fc4b118110202
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2024-02-28-19-45-32/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-node1 hash: db3d8203bf99f8af1ef287ca1ba16a39
[... the same line repeats while kubeadm waits for the static Pod hash to change ...]
Static pod: kube-scheduler-node1 hash: 88e4ded417ffd82a7c7d2c025e43d322
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upgrade/postupgrade] Applying label node-role.kubernetes.io/control-plane='' to Nodes with label node-role.kubernetes.io/master='' (deprecated)
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.23.17". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
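With the apply step finished, the control-plane static Pods should already be running the new images; a quick sanity check before touching the kubelets:
# Verify the recreated control-plane Pods and the API server version
kubectl -n kube-system get pods -o wide
kubectl version --short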
# Upgrade kubelet and kubectl
yum --showduplicates list kubelet | grep 1.23
yum --showduplicates list kubectl | grep 1.23
yum install kubelet-1.23.17-0 kubectl-1.23.17-0 -y
# Restart the kubelet service
systemctl daemon-reload
systemctl restart kubelet
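If the node still reports the old version afterwards, inspecting the kubelet directly usually pinpoints the problem:
# Confirm the kubelet binary and service picked up v1.23.17
kubelet --version
systemctl status kubelet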
# The master node upgrade is complete; check the node status
[root@node1 ~]# kubectl get node
NAME    STATUS   ROLES                  AGE     VERSION
node1   Ready    control-plane,master   3h26m   v1.23.17
node2   Ready    <none>                 3h24m   v1.22.17
node3   Ready    <none>                 3h24m   v1.22.17
Worker node upgrade
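The steps below upgrade the worker packages in place, which is fine for a small lab cluster. The full upstream procedure also upgrades kubeadm on the worker, migrates the local kubelet configuration with 'kubeadm upgrade node', and drains the node first so workloads are rescheduled gracefully; a sketch along those lines, using node2 as the example:
# On the master: evict workloads from the worker before upgrading it
kubectl drain node2 --ignore-daemonsets
# On the worker: upgrade kubeadm and migrate the local kubelet configuration
yum install kubeadm-1.23.17-0 -y
kubeadm upgrade node
# On the master, after the worker's kubelet has been upgraded and restarted:
kubectl uncordon node2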
# Upgrade the components (run on each worker node)
yum install kubelet-1.23.17-0 kubectl-1.23.17-0 -y
# Restart the service
systemctl daemon-reload
systemctl restart kubelet
# Upgrade complete; check the status from the master
[root@node1 ~]# kubectl get node
NAME    STATUS   ROLES                  AGE     VERSION
node1   Ready    control-plane,master   3h28m   v1.23.17
node2   Ready    <none>                 3h26m   v1.23.17
node3   Ready    <none>                 3h26m   v1.23.17
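As a final check, confirm that the cluster add-ons rolled by 'kubeadm upgrade apply' (kube-proxy and CoreDNS) are also healthy on the new versions:
# kube-proxy runs as a DaemonSet, CoreDNS as a Deployment, both in kube-system
kubectl -n kube-system get daemonset kube-proxy
kubectl -n kube-system get deployment coredns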