Book source: 《CKA/CKAD应试指南:从Docker到Kubernetes完全攻略》 (CKA/CKAD Exam Guide: The Complete Path from Docker to Kubernetes)
This section demonstrates the steps for upgrading the first master.
Step 1: Check the current version.
##########Hands-on verification##########
[root@vms10 ~]# kubectl get nodes
NAME            STATUS   ROLES                  AGE    VERSION
vms10.rhce.cc   Ready    control-plane,master   116s   v1.20.1
vms11.rhce.cc   Ready    <none>                 15s    v1.20.1
vms12.rhce.cc   Ready    <none>                 26s    v1.20.1
[root@vms10 ~]#
Alternatively, check it with the following command.
##########Hands-on verification##########
[root@vms10 ~]# kubectl version --short
Client Version: v1.20.1
Server Version: v1.20.1
[root@vms10 ~]#
The output shows that v1.20.1 is currently installed; the goal is to upgrade to v1.21.1.
Step 2: Determine which versions of kubeadm are available in the current yum repository.
##########Hands-on verification##########
[root@vms10 ~]# yum list --showduplicates kubeadm --disableexcludes=kubernetes
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
Installed Packages
kubeadm.x86_64 1.20.1-0 @kubernetes
Available Packages
kubeadm.x86_64 1.6.0-0 kubernetes
kubeadm.x86_64 1.6.1-0 kubernetes
kubeadm.x86_64 1.6.2-0 kubernetes
... # a large amount of output omitted
[root@vms10 ~]#
The output shows that the latest kubeadm version available in the yum repository is 1.21.1.
4.2.1 Upgrading kubeadm
Whether you are upgrading a master or a worker, the first step is always to upgrade kubeadm itself.
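The reason kubeadm goes first is that a cluster is upgraded one minor release at a time (v1.20 to v1.21, not v1.20 straight to v1.22). The check below is a small sketch of that rule; `minor` and `can_upgrade` are our own hypothetical helpers, not kubeadm features.

```shell
# Hypothetical helper: extract the minor number from a version string such as v1.21.1.
minor() { echo "$1" | sed 's/^v//' | cut -d. -f2; }

# Succeed only if the target is the same minor release or the next one up.
can_upgrade() {
  cur=$(minor "$1"); tgt=$(minor "$2"); d=$((tgt - cur))
  [ "$d" -ge 0 ] && [ "$d" -le 1 ]
}

can_upgrade v1.20.1 v1.21.1 && echo "v1.20.1 -> v1.21.1: allowed"
can_upgrade v1.20.1 v1.22.0 || echo "v1.20.1 -> v1.22.0: upgrade to v1.21.x first"
```

To jump more than one minor release, repeat the whole upgrade procedure once per intermediate release.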
Step 1: Upgrade kubeadm to 1.21.1.
##########Hands-on verification##########
[root@vms10 ~]# yum install -y kubeadm-1.21.1-0 --disableexcludes=kubernetes
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
Resolving Dependencies
--> Running transaction check
---> Package kubeadm.x86_64 0:1.20.1-0 will be updated
---> Package kubeadm.x86_64 0:1.21.1-0 will be an update
--> Finished Dependency Resolution
Dependencies Resolved
========================================================================================================================================================================================================
Package Arch Version Repository Size
========================================================================================================================================================================================================
Updating:
kubeadm x86_64 1.21.1-0 kubernetes 9.5 M
Transaction Summary
========================================================================================================================================================================================================
Upgrade 1 Package
Total download size: 9.5 M
Downloading packages:
No Presto metadata available for kubernetes
e0511a4d8d070fa4c7bcd2a04217c80774ba11d44e4e0096614288189894f1c5-kubeadm-1.21.1-0.x86_64.rpm | 9.5 MB 00:00:04
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Updating : kubeadm-1.21.1-0.x86_64 1/2
Cleanup : kubeadm-1.20.1-0.x86_64 2/2
Verifying : kubeadm-1.21.1-0.x86_64 1/2
Verifying : kubeadm-1.20.1-0.x86_64 2/2
Updated:
kubeadm.x86_64 0:1.21.1-0
Complete!
[root@vms10 ~]#
Step 2: Verify the kubeadm version.
##########Hands-on verification##########
[root@vms10 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"21", GitVersion:"v1.21.1", GitCommit:"5e58841cce77d4bc13713ad2b91fa0d961e69192", GitTreeState:"clean", BuildDate:"2021-05-12T14:17:27Z", GoVersion:"go1.16.4", Compiler:"gc", Platform:"linux/amd64"}
[root@vms10 ~]#
Step 3: Run kubeadm upgrade plan to check whether the cluster needs to be upgraded and which versions it can be upgraded to.
##########Hands-on verification##########
[root@vms10 ~]# kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.20.1
[upgrade/versions] kubeadm version: v1.21.1
I0504 13:54:43.988452 7658 version.go:254] remote version is much newer: v1.27.1; falling back to: stable-1.21
[upgrade/versions] Target version: v1.21.14
[upgrade/versions] Latest version in the v1.20 series: v1.20.15
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       TARGET
kubelet     3 x v1.20.1   v1.20.15
Upgrade to the latest version in the v1.20 series:
COMPONENT                 CURRENT    TARGET
kube-apiserver            v1.20.1    v1.20.15
kube-controller-manager   v1.20.1    v1.20.15
kube-scheduler            v1.20.1    v1.20.15
kube-proxy                v1.20.1    v1.20.15
CoreDNS                   1.7.0      v1.8.0
etcd                      3.4.13-0   3.4.13-0
You can now apply the upgrade by executing the following command:
kubeadm upgrade apply v1.20.15
_____________________________________________________________________
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       TARGET
kubelet     3 x v1.20.1   v1.21.14
Upgrade to the latest stable version:
COMPONENT                 CURRENT    TARGET
kube-apiserver            v1.20.1    v1.21.14
kube-controller-manager   v1.20.1    v1.21.14
kube-scheduler            v1.20.1    v1.21.14
kube-proxy                v1.20.1    v1.21.14
CoreDNS                   1.7.0      v1.8.0
etcd                      3.4.13-0   3.4.13-0
You can now apply the upgrade by executing the following command:
kubeadm upgrade apply v1.21.14
Note: Before you can perform this upgrade, you have to update kubeadm to v1.21.14.
_____________________________________________________________________
The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.
API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no
_____________________________________________________________________
[root@vms10 ~]#
This command checks whether the cluster can be upgraded and which target versions are available.
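The suggested apply commands in the plan output can also be extracted mechanically, which is handy in scripts. A small sketch: `plan_targets` is our own hypothetical helper (not a kubeadm subcommand), fed a couple of lines copied from the output above.

```shell
# Hypothetical helper: print the target versions that `kubeadm upgrade plan`
# suggests, given saved plan output on stdin.
plan_targets() { grep -o 'kubeadm upgrade apply v[0-9.]*' | awk '{print $NF}'; }

# Example with two lines taken from the plan output above:
plan_targets <<'EOF'
    kubeadm upgrade apply v1.20.15
    kubeadm upgrade apply v1.21.14
EOF
```

In real use you would pipe the live output: `kubeadm upgrade plan | plan_targets`.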
Step 4: Put the master into maintenance mode and evict the pods running on it.
##########Hands-on verification##########
[root@vms10 ~]# kubectl drain vms10.rhce.cc --ignore-daemonsets
node/vms10.rhce.cc cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-proxy-9qmbh
evicting pod kube-system/coredns-7f89b7bc75-psv67
evicting pod kube-system/coredns-7f89b7bc75-5dr7t
... # if the output does not appear right away, just wait a little longer
[root@vms10 ~]#
[root@vms10 ~]# kubectl get nodes
NAME            STATUS                     ROLES                  AGE     VERSION
vms10.rhce.cc   Ready,SchedulingDisabled   control-plane,master   3h47m   v1.20.1
vms11.rhce.cc   Ready                      <none>                 3h45m   v1.20.1
vms12.rhce.cc   Ready                      <none>                 3h46m   v1.20.1
[root@vms10 ~]#
Note: kubectl drain can be executed either before or after the cluster-upgrade command kubeadm upgrade apply; here it is executed before.
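Whether the drain actually cordoned the node can be verified mechanically from the STATUS column of `kubectl get nodes`. A sketch, with `is_cordoned` as our own hypothetical helper name:

```shell
# Hypothetical helper: succeed if the named node shows SchedulingDisabled
# in `kubectl get nodes` output read from stdin.
is_cordoned() {
  awk -v n="$1" '$1 == n && $2 ~ /SchedulingDisabled/ { found = 1 } END { exit !found }'
}

# Example against the output captured above
# (in real use: kubectl get nodes | is_cordoned vms10.rhce.cc):
is_cordoned vms10.rhce.cc <<'EOF' && echo "vms10 is cordoned"
NAME            STATUS                     ROLES                  AGE     VERSION
vms10.rhce.cc   Ready,SchedulingDisabled   control-plane,master   3h47m   v1.20.1
vms11.rhce.cc   Ready                      <none>                 3h45m   v1.20.1
EOF
```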
4.2.2 Upgrading the components on the master of the Kubernetes cluster
Now that kubeadm has been upgraded, the next step is to use kubeadm to upgrade the individual components on the master.
Note that coredns-1.21.tar must be imported in advance.
Step 1: Start upgrading the Kubernetes cluster.
##########Hands-on verification##########
[root@vms10 ~]# kubeadm upgrade apply v1.21.1
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.21.1"
[upgrade/versions] Cluster version: v1.20.1
[upgrade/versions] kubeadm version: v1.21.1
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.21.1"...
Static pod: kube-apiserver-vms10.rhce.cc hash: 1cf9462936e6587c1daf6393c625671e
Static pod: kube-controller-manager-vms10.rhce.cc hash: de64b953177047fd563a18150d6c6070
Static pod: kube-scheduler-vms10.rhce.cc hash: 78404d25f9e940515e51f92dc60988eb
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-vms10.rhce.cc hash: 17ecb81f13425638d8d438dd14984c6e
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Current and new manifests of etcd are equal, skipping upgrade
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests918380393"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-05-04-17-37-37/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-vms10.rhce.cc hash: 1cf9462936e6587c1daf6393c625671e
Static pod: kube-apiserver-vms10.rhce.cc hash: 1cf9462936e6587c1daf6393c625671e
Static pod: kube-apiserver-vms10.rhce.cc hash: 45d8bfb05c06e6b9e4644889092c361a
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-05-04-17-37-37/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-vms10.rhce.cc hash: de64b953177047fd563a18150d6c6070
Static pod: kube-controller-manager-vms10.rhce.cc hash: d69c9bc304051e2fcc0aba1a366e8511
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2023-05-04-17-37-37/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-vms10.rhce.cc hash: 78404d25f9e940515e51f92dc60988eb
Static pod: kube-scheduler-vms10.rhce.cc hash: 57952607cc2b5d4a4ac242954121e925
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upgrade/postupgrade] Applying label node-role.kubernetes.io/control-plane='' to Nodes with label node-role.kubernetes.io/master='' (deprecated)
[upgrade/postupgrade] Applying label node.kubernetes.io/exclude-from-external-load-balancers='' to control plane Nodes
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.21.1". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
[root@vms10 ~]#
Note: if you do not want the etcd component to be upgraded, add the --etcd-upgrade=false option; the full command is kubeadm upgrade apply v1.21.1 --etcd-upgrade=false.
Step 2: After the upgrade finishes, take the master out of maintenance mode.
##########Hands-on verification##########
[root@vms10 ~]# kubectl uncordon vms10.rhce.cc
node/vms10.rhce.cc uncordoned
[root@vms10 ~]#
[root@vms10 ~]# kubectl get nodes
NAME            STATUS   ROLES                  AGE     VERSION
vms10.rhce.cc   Ready    control-plane,master   3h52m   v1.20.1
vms11.rhce.cc   Ready    <none>                 3h50m   v1.20.1
vms12.rhce.cc   Ready    <none>                 3h50m   v1.20.1
[root@vms10 ~]#
The output shows that vms10.rhce.cc still reports v1.20.1, because kubelet and kubectl have not been upgraded yet; that is the next step.
4.2.3 Upgrading kubelet and kubectl on the master
Next, upgrade kubelet and kubectl.
Step 1: Install the v1.21.1 versions of kubelet and kubectl.
##########Hands-on verification##########
[root@vms10 ~]# yum install -y kubelet-1.21.1-0 kubectl-1.21.1-0 --disableexcludes=kubernetes
Loaded plugins: fastestmirror, langpacks
docker-ce | 3.5 kB 00:00:00
epel | 4.7 kB 00:00:00
kubernetes | 2.9 kB 00:00:00
Loading mirror speeds from cached hostfile
Resolving Dependencies
--> Running transaction check
---> Package kubectl.x86_64 0:1.20.1-0 will be updated
---> Package kubectl.x86_64 0:1.21.1-0 will be an update
---> Package kubelet.x86_64 0:1.20.1-0 will be updated
---> Package kubelet.x86_64 0:1.21.1-0 will be an update
--> Finished Dependency Resolution
Dependencies Resolved
========================================================================================================================================================================================================
Package Arch Version Repository Size
========================================================================================================================================================================================================
Updating:
kubectl x86_64 1.21.1-0 kubernetes 9.8 M
kubelet x86_64 1.21.1-0 kubernetes 20 M
Transaction Summary
========================================================================================================================================================================================================
Upgrade 2 Packages
Total download size: 30 M
Downloading packages:
No Presto metadata available for kubernetes
(1/2): 3944a45bec4c99d3489993e3642b63972b62ed0a4ccb04cc7655ce0467fddfef-kubectl-1.21.1-0.x86_64.rpm | 9.8 MB 00:00:02
(2/2): c47efa28c5935ed2ffad234e2b402d937dde16ab072f2f6013c71d39ab526f40-kubelet-1.21.1-0.x86_64.rpm | 20 MB 00:00:04
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total 7.2 MB/s | 30 MB 00:00:04
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Updating : kubelet-1.21.1-0.x86_64 1/4
Updating : kubectl-1.21.1-0.x86_64 2/4
Cleanup : kubectl-1.20.1-0.x86_64 3/4
Cleanup : kubelet-1.20.1-0.x86_64 4/4
Verifying : kubectl-1.21.1-0.x86_64 1/4
Verifying : kubelet-1.21.1-0.x86_64 2/4
Verifying : kubectl-1.20.1-0.x86_64 3/4
Verifying : kubelet-1.20.1-0.x86_64 4/4
Updated:
kubectl.x86_64 0:1.21.1-0 kubelet.x86_64 0:1.21.1-0
Complete!
[root@vms10 ~]#
Restart the kubelet service.
##########Hands-on verification##########
[root@vms10 ~]# systemctl daemon-reload ; systemctl restart kubelet
[root@vms10 ~]#
Step 2: Verify the kubectl version.
##########Hands-on verification##########
[root@vms10 ~]# kubectl version --short
Client Version: v1.21.1
Server Version: v1.21.1
[root@vms10 ~]#
Alternatively, verify with the following command.
##########Hands-on verification##########
[root@vms10 ~]# kubectl get nodes
NAME            STATUS   ROLES                  AGE     VERSION
vms10.rhce.cc   Ready    control-plane,master   3h54m   v1.21.1
vms11.rhce.cc   Ready    <none>                 3h53m   v1.20.1
vms12.rhce.cc   Ready    <none>                 3h53m   v1.20.1
[root@vms10 ~]#
Here you can see that the master has been upgraded to v1.21.1, while the workers have not been upgraded yet.
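When a cluster has many nodes, filtering the VERSION column shows at a glance which nodes still need the upgrade. A sketch, with `nodes_needing_upgrade` as our own hypothetical helper name:

```shell
# Hypothetical helper: list nodes whose VERSION column (last field) does not
# yet match the target version, from `kubectl get nodes` output on stdin.
nodes_needing_upgrade() { awk -v t="$1" 'NR > 1 && $NF != t {print $1}'; }

# Example with the node list shown above:
nodes_needing_upgrade v1.21.1 <<'EOF'
NAME            STATUS   ROLES                  AGE     VERSION
vms10.rhce.cc   Ready    control-plane,master   3h54m   v1.21.1
vms11.rhce.cc   Ready    <none>                 3h53m   v1.20.1
vms12.rhce.cc   Ready    <none>                 3h53m   v1.20.1
EOF
```

In real use, pipe the live output: `kubectl get nodes | nodes_needing_upgrade v1.21.1`.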
If there are other masters in the environment, the steps for upgrading the second master are the same as above; simply replace the command kubeadm upgrade apply v1.21.1 with kubeadm upgrade node.
Note: run kubeadm upgrade node on whichever machine is being upgraded.