Kubernetes CKA Certification Operations Engineer Notes - Kubernetes Cluster Maintenance

1. Adding a Node via Bootstrap Token

TLS Bootstrapping:
In a Kubernetes cluster, the kubelet and kube-proxy components on each Node need to communicate with kube-apiserver. To secure this traffic, HTTPS is used, which means the Node components need client certificates signed by the certificate authority (CA) that kube-apiserver trusts. At scale, issuing these client certificates by hand is a lot of work and also makes growing the cluster more complex.
To simplify the process, Kubernetes introduced the TLS bootstrapping mechanism to issue client certificates automatically, so using it on Nodes is strongly recommended.

Workflow:
(figure: TLS bootstrapping workflow)

The setup procedure follows the instructor's GitHub repository:
https://github.com/lizhenliang/ansible-install-k8s/

1. Prepare the new node environment

Install Docker on the new node in advance.

Copy the Node-related files from an already deployed node to the new node, Node3:

scp -r /opt/kubernetes/ root@192.168.31.64:/opt/
scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.31.64:/usr/lib/systemd/system
scp -r /opt/cni/ root@192.168.31.64:/opt

Delete the kubelet certificates and kubeconfig files:

cd /opt/kubernetes/ssl/
rm kubelet* -f
cd /opt/kubernetes/cfg/
rm kubelet.kubeconfig bootstrap.kubeconfig -f

Note: these files are generated automatically once the certificate request is approved and differ for every Node, so they must be deleted and regenerated.

Change the hostname settings:

vi /opt/kubernetes/cfg/kubelet.conf
--hostname-override=k8s-node3
vi /opt/kubernetes/cfg/kube-proxy-config.yml
hostnameOverride: k8s-node3
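
To double-check that both overrides now point at the new hostname (a small verification of my own, not part of the original steps):

grep -n 'hostname' /opt/kubernetes/cfg/kubelet.conf /opt/kubernetes/cfg/kube-proxy-config.yml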

2. Confirm that Bootstrap Token authentication is enabled

It is enabled by default:

# cat /opt/kubernetes/cfg/kube-apiserver.conf
…
--enable-bootstrap-token-auth=true
…
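
To confirm the flag is actually in effect on the running process rather than just in the file (my own check, not from the original notes):

ps -ef | grep '[k]ube-apiserver' | grep -o 'enable-bootstrap-token-auth=[a-z]*'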

3. Store the Bootstrap Token in a Secret

# vi bootstrap-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  # Name MUST be of form "bootstrap-token-<token id>"
  name: bootstrap-token-07401b
  namespace: kube-system
# Type MUST be 'bootstrap.kubernetes.io/token'
type: bootstrap.kubernetes.io/token
stringData:
  # Human readable description. Optional.
  description: "The default bootstrap token generated by 'kubeadm init'."
  # Token ID and secret. Required.
  token-id: 07401b
  token-secret: f395accd246ae52d
  # Expiration. Optional.
  expiration: "2021-03-10T03:22:11Z"
  # Allowed usages.
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
# kubectl apply -f bootstrap-secret.yaml

Note: expiration is the token's expiry time; set it to a point a few days after the current time.
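
If you prefer a fresh token instead of reusing the one above, one way to generate a valid id/secret pair (my own sketch; any lowercase a-z0-9 values of length 6 and 16 work):

TOKEN_ID=$(openssl rand -hex 3)       # 6 hex characters, e.g. 07401b
TOKEN_SECRET=$(openssl rand -hex 8)   # 16 hex characters, e.g. f395accd246ae52d
echo "${TOKEN_ID}.${TOKEN_SECRET}"    # the full token used later in bootstrap.kubeconfig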

4. Create an RBAC role binding that allows kubelet TLS bootstrap to create CSR requests

# vi rbac.yaml

# enable bootstrapping nodes to create CSR
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: create-csrs-for-bootstrapping
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:node-bootstrapper
  apiGroup: rbac.authorization.k8s.io
# kubectl apply -f rbac.yaml
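
A quick way to verify the binding exists and what it grants (my own check, not part of the original steps):

kubectl get clusterrolebinding create-csrs-for-bootstrapping
kubectl describe clusterrole system:node-bootstrapper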

5. Configure the Bootstrap kubeconfig file for kubelet

Run the following on Node3:

# vi /opt/kubernetes/cfg/bootstrap.kubeconfig
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority: /opt/kubernetes/ssl/ca.pem
    server: https://192.168.31.61:6443
  name: bootstrap
contexts:
- context:
    cluster: bootstrap
    user: kubelet-bootstrap
  name: bootstrap
current-context: bootstrap
preferences: {}
users:
- name: kubelet-bootstrap
  user:
    token: 07401b.f395accd246ae52d
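
An equivalent way to generate this file with kubectl config instead of editing it by hand, using the same server address, CA path, and token as above (a sketch, not a step from the original notes):

KUBE_APISERVER="https://192.168.31.61:6443"
TOKEN="07401b.f395accd246ae52d"
kubectl config set-cluster bootstrap \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem \
  --server=${KUBE_APISERVER} \
  --kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig
kubectl config set-credentials kubelet-bootstrap \
  --token=${TOKEN} \
  --kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig
kubectl config set-context bootstrap \
  --cluster=bootstrap \
  --user=kubelet-bootstrap \
  --kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig
kubectl config use-context bootstrap --kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig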

The kubelet configuration file references these kubeconfig files; this is already configured by default:

# cat /opt/kubernetes/cfg/kubelet.conf
KUBELET_OPTS="--logtostderr=false \
--v=4 \
--log-dir=/opt/kubernetes/logs \
--hostname-override=k8s-node3 \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \

Start kubelet and enable it at boot:

systemctl daemon-reload
systemctl start kubelet 
systemctl enable kubelet 
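
If the node does not register after this, a few places to look (my own suggestion; the kubelet.INFO file name assumes the default klog naming under the --log-dir configured above):

systemctl status kubelet
journalctl -u kubelet -f
tail -f /opt/kubernetes/logs/kubelet.INFO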

6. Approve the certificate request on the Master node

kubectl get csr
kubectl certificate approve xxx

If everything went smoothly, kubectl get node will show that the new node has joined.
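
If several nodes are bootstrapping at the same time, a shortcut to approve every outstanding CSR (my own one-liner, not from the original notes; review the list with kubectl get csr first in production):

kubectl get csr -o name | xargs -r kubectl certificate approve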

Summary of the steps:

  1. Enable Bootstrap Token authentication on kube-apiserver
    --enable-bootstrap-token-auth=true
  2. Store the Bootstrap Token in a Secret
  3. Create an RBAC role binding that allows kubelet TLS bootstrap to create CSR requests
  4. Configure the Bootstrap kubeconfig file for kubelet
  5. kubectl get csr && kubectl certificate approve XXX

2. Etcd Database Backup and Restore

Kubernetes stores all cluster data in the Etcd database in real time; to be safe, always back it up!

kubeadm deployment:

Backup:

ETCDCTL_API=3 etcdctl \
snapshot save snap.db \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/kubernetes/pki/etcd/ca.crt \
--cert=/etc/kubernetes/pki/etcd/server.crt \
--key=/etc/kubernetes/pki/etcd/server.key
[root@k8s-master ~]# yum install etcd
[root@k8s-master ~]# ETCDCTL_API=3 etcdctl snapshot save snap.db --endpoints=https://127.0.0.1:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key
Snapshot saved at snap.db
[root@k8s-master ~]# du -sh snap.db
11M	snap.db
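
Optionally, sanity-check the snapshot before relying on it (my own addition):

ETCDCTL_API=3 etcdctl snapshot status snap.db --write-out=table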

Restore:

1. First stop the kube-apiserver and etcd containers
mv /etc/kubernetes/manifests /etc/kubernetes/manifests.bak
mv /var/lib/etcd/ /var/lib/etcd.bak

2. Restore the snapshot
ETCDCTL_API=3 etcdctl \
snapshot restore snap.db \
--data-dir=/var/lib/etcd

3. Start the kube-apiserver and etcd containers again
mv /etc/kubernetes/manifests.bak /etc/kubernetes/manifests
[root@k8s-master ~]# kubectl get pods
NAME                                     READY   STATUS    RESTARTS   AGE
client1                                  1/1     Running   3          40h
client2                                  1/1     Running   3          40h
configmap-demo-pod                       1/1     Running   3          3d3h
my-pod2                                  1/1     Running   9          3d10h
nfs-client-provisioner-58d675cd5-dx7n4   1/1     Running   3          3d4h
pod-taint                                1/1     Running   8          9d
secret-demo-pod                          1/1     Running   3          3d2h
sh                                       1/1     Running   4          3d4h
test-76846b5956-gftn9                    1/1     Running   3          3d3h
test-76846b5956-r7s9k                    1/1     Running   3          3d3h
test-76846b5956-trpbn                    1/1     Running   3          3d3h
test2-78c4694588-87b9r                   1/1     Running   3          3d5h
web-0                                    1/1     Running   3          3d4h
web-1                                    1/1     Running   3          3d4h
web-2                                    1/1     Running   3          3d4h
web-96d5df5c8-vc9kf                      1/1     Running   2          41h
[root@k8s-master ~]# kubectl get deployment
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
nfs-client-provisioner   1/1     1            1           3d4h
test                     3/3     3            3           3d5h
test2                    1/1     1            1           3d5h
web                      1/1     1            1           41h
[root@k8s-master ~]# kubectl delete test
error: the server doesn't have a resource type "test"
[root@k8s-master ~]# kubectl delete deployment test
deployment.apps "test" deleted
[root@k8s-master ~]# kubectl get pods
NAME                                     READY   STATUS    RESTARTS   AGE
client1                                  1/1     Running   3          41h
client2                                  1/1     Running   3          41h
configmap-demo-pod                       1/1     Running   3          3d3h
my-pod2                                  1/1     Running   9          3d10h
nfs-client-provisioner-58d675cd5-dx7n4   1/1     Running   3          3d4h
pod-taint                                1/1     Running   8          9d
secret-demo-pod                          1/1     Running   3          3d2h
sh                                       1/1     Running   4          3d4h
test2-78c4694588-87b9r                   1/1     Running   3          3d5h
web-0                                    1/1     Running   3          3d4h
web-1                                    1/1     Running   3          3d4h
web-2                                    1/1     Running   3          3d4h
web-96d5df5c8-vc9kf                      1/1     Running   2          41h
[root@k8s-master ~]# cd /etc/kubernetes/manifests/
[root@k8s-master manifests]# ls
etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
[root@k8s-master manifests]# mv etcd.yaml kube-apiserver.yaml /tmp
[root@k8s-master manifests]# ls
kube-controller-manager.yaml  kube-scheduler.yaml
[root@k8s-master manifests]# kubectl get node
The connection to the server 10.0.0.61:6443 was refused - did you specify the right host or port?
[root@k8s-master manifests]# mv /var/lib/etcd/ /var/lib/etcd.bak
[root@k8s-master manifests]# ETCDCTL_API=3 etcdctl \
> snapshot restore snap.db \
> --data-dir=/var/lib/etcd
Error: open snap.db: no such file or directory
[root@k8s-master manifests]# ETCDCTL_API=3 etcdctl snapshot restore /root/snap.db --data-dir=/var/lib/etcd
2021-12-24 15:17:49.580896 I | mvcc: restore compact to 2742712
2021-12-24 15:17:49.612537 I | etcdserver/membership: added member 8e9e05c52164694d [http://localhost:2380] to cluster cdf818194e3a8c32
[root@k8s-master manifests]# ls /var/lib/etcd
member
[root@k8s-master manifests]# ls
kube-controller-manager.yaml  kube-scheduler.yaml
[root@k8s-master manifests]# mv /tmp/kube-apiserver.yaml ./
[root@k8s-master manifests]# mv /etc/e
e2fsck.conf  environment  etcd/        ethertypes   exports      exports.d/   
[root@k8s-master manifests]# mv /tmp/etcd.yaml ./
[root@k8s-master manifests]# ls
etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
[root@k8s-master manifests]# kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   32d   v1.19.0
k8s-node1    Ready    <none>   32d   v1.19.0
k8s-node2    Ready    <none>   32d   v1.19.0
[root@k8s-master manifests]# kubectl get pods
NAME                                     READY   STATUS             RESTARTS   AGE
client1                                  1/1     Running            3          41h
client2                                  1/1     Running            3          41h
configmap-demo-pod                       1/1     Running            3          3d3h
my-pod2                                  1/1     Running            9          3d10h
nfs-client-provisioner-58d675cd5-dx7n4   0/1     CrashLoopBackOff   5          3d4h
pod-taint                                1/1     Running            8          9d
secret-demo-pod                          1/1     Running            3          3d2h
sh                                       1/1     Running            4          3d4h
test-76846b5956-gftn9                    1/1     Running            3          3d4h
test-76846b5956-r7s9k                    1/1     Running            3          3d4h
test-76846b5956-trpbn                    1/1     Running            3          3d4h
test2-78c4694588-87b9r                   1/1     Running            3          3d5h
web-0                                    1/1     Running            3          3d4h
web-1                                    1/1     Running            3          3d4h
web-2                                    1/1     Running            3          3d4h
web-96d5df5c8-vc9kf                      1/1     Running            2          41h

Binary deployment:

Backup:

ETCDCTL_API=3 etcdctl \
snapshot save snap.db \
--endpoints=https://192.168.31.71:2379 \
--cacert=/opt/etcd/ssl/ca.pem \
--cert=/opt/etcd/ssl/server.pem \
--key=/opt/etcd/ssl/server-key.pem
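
To make this backup periodic, a hypothetical cron entry (the /backup directory and the /usr/bin/etcdctl path are my assumptions; note that % must be escaped in crontab):

0 2 * * * ETCDCTL_API=3 /usr/bin/etcdctl snapshot save /backup/etcd-$(date +\%F).db --endpoints=https://192.168.31.71:2379 --cacert=/opt/etcd/ssl/ca.pem --cert=/opt/etcd/ssl/server.pem --key=/opt/etcd/ssl/server-key.pem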

Restore:

1. First stop kube-apiserver and etcd
systemctl stop kube-apiserver
systemctl stop etcd
mv /var/lib/etcd/default.etcd /var/lib/etcd/default.etcd.bak

2. Restore on each etcd node (adjusting --name and --initial-advertise-peer-urls to match that node)
ETCDCTL_API=3 etcdctl snapshot restore snap.db \
--name etcd-1 \
--initial-cluster="etcd-1=https://192.168.31.71:2380,etcd-2=https://192.168.31.72:2380,etcd-3=https://192.168.31.73:2380" \
--initial-cluster-token=etcd-cluster \
--initial-advertise-peer-urls=https://192.168.31.71:2380 \
--data-dir=/var/lib/etcd/default.etcd

3. Start kube-apiserver and etcd again
systemctl start kube-apiserver
systemctl start etcd
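
After all members are back up, a cluster-wide health check (my own verification step, reusing the endpoints and certificates from the backup command):

ETCDCTL_API=3 etcdctl endpoint health \
--endpoints=https://192.168.31.71:2379,https://192.168.31.72:2379,https://192.168.31.73:2379 \
--cacert=/opt/etcd/ssl/ca.pem \
--cert=/opt/etcd/ssl/server.pem \
--key=/opt/etcd/ssl/server-key.pem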

3. Upgrading the K8s Cluster Version with kubeadm

Kubernetes releases a new minor version roughly every three months.

Upgrade strategies:

  • Always stay on the latest version
  • Upgrade every six months, which leaves you one or two minor versions behind the community
  • Upgrade once a year or less often, which leaves you many versions behind

Basic upgrade flow:
(figure: basic upgrade workflow)

Notes:

  • Before upgrading, back up all components and data, for example etcd
  • Never skip minor versions when upgrading, for example jumping from 1.16 straight to 1.19

Upgrading the master (control-plane) node:

1. Find the latest available version
yum list --showduplicates kubeadm --disableexcludes=kubernetes

2. Upgrade kubeadm
yum install -y kubeadm-1.19.3-0 --disableexcludes=kubernetes

3. Drain the pods from the node and mark it unschedulable
kubectl drain k8s-master --ignore-daemonsets

4. Check whether the cluster can be upgraded and get the versions available to upgrade to
kubeadm upgrade plan

5. Apply the upgrade
kubeadm upgrade apply v1.19.3

6. Make the node schedulable again
kubectl uncordon k8s-master

7. Upgrade kubelet and kubectl
yum install -y kubelet-1.19.3-0 kubectl-1.19.3-0 --disableexcludes=kubernetes

8. Restart kubelet
systemctl daemon-reload
systemctl restart kubelet
[root@k8s-master ~]# kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   32d   v1.19.0
k8s-node1    Ready    <none>   32d   v1.19.0
k8s-node2    Ready    <none>   32d   v1.19.0
[root@k8s-master ~]# yum list --showduplicates kubeadm --disableexcludes=kubernetes
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
Installed Packages
kubeadm.x86_64                              1.19.0-0                                @kubernetes
Available Packages
kubeadm.x86_64                              1.6.0-0                                 kubernetes 
kubeadm.x86_64                              1.6.1-0                                 kubernetes 
...
kubeadm.x86_64                              1.23.0-0                                kubernetes 
kubeadm.x86_64                              1.23.1-0                                kubernetes 
[root@k8s-master ~]# yum install -y kubeadm-1.19.3-0 --disableexcludes=kubernetes
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
Resolving Dependencies
--> Running transaction check
---> Package kubeadm.x86_64 0:1.19.0-0 will be updated
...
Updated:
  kubeadm.x86_64 0:1.19.3-0                                                                    

Complete!
[root@k8s-master ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.3", GitCommit:"1e11e4a2108024935ecfcb2912226cedeafd99df", GitTreeState:"clean", BuildDate:"2020-10-14T12:47:53Z", GoVersion:"go1.15.2", Compiler:"gc", Platform:"linux/amd64"}
[root@k8s-master ~]# kubectl drain k8s-master --ignore-daemonsets
node/k8s-master cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-vqzdj, kube-system/kube-proxy-tvzpd
evicting pod kube-system/calico-kube-controllers-97769f7c7-z6npb
evicting pod kube-system/coredns-6d56c8448f-9xlmw
pod/calico-kube-controllers-97769f7c7-z6npb evicted
pod/coredns-6d56c8448f-9xlmw evicted
node/k8s-master evicted
[root@k8s-master ~]# kubectl get node
NAME         STATUS                     ROLES    AGE   VERSION
k8s-master   Ready,SchedulingDisabled   master   32d   v1.19.0
k8s-node1    Ready                      <none>   32d   v1.19.0
k8s-node2    Ready                      <none>   32d   v1.19.0
[root@k8s-master ~]# kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.19.0
[upgrade/versions] kubeadm version: v1.19.3
I1224 15:34:00.807206   32183 version.go:252] remote version is much newer: v1.23.1; falling back to: stable-1.19
[upgrade/versions] Latest stable version: v1.19.16
[upgrade/versions] Latest stable version: v1.19.16
[upgrade/versions] Latest version in the v1.19 series: v1.19.16
[upgrade/versions] Latest version in the v1.19 series: v1.19.16

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
kubelet     3 x v1.19.0   v1.19.16

Upgrade to the latest version in the v1.19 series:

COMPONENT                 CURRENT   AVAILABLE
kube-apiserver            v1.19.0   v1.19.16
kube-controller-manager   v1.19.0   v1.19.16
kube-scheduler            v1.19.0   v1.19.16
kube-proxy                v1.19.0   v1.19.16
CoreDNS                   1.7.0     1.7.0
etcd                      3.4.9-1   3.4.13-0

You can now apply the upgrade by executing the following command:

	kubeadm upgrade apply v1.19.16

Note: Before you can perform this upgrade, you have to update kubeadm to v1.19.16.

_____________________________________________________________________


The table below shows the current state of component configs as understood by this version of kubeadm.
Configs that have a "yes" mark in the "MANUAL UPGRADE REQUIRED" column require manual config upgrade or
resetting to kubeadm defaults before a successful upgrade can be performed. The version to manually
upgrade to is denoted in the "PREFERRED VERSION" column.

API GROUP                 CURRENT VERSION   PREFERRED VERSION   MANUAL UPGRADE REQUIRED
kubeproxy.config.k8s.io   v1alpha1          v1alpha1            no
kubelet.config.k8s.io     v1beta1           v1beta1             no
_____________________________________________________________________

[root@k8s-master ~]# kubeadm upgrade apply v1.19.3
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.19.3"
[upgrade/versions] Cluster version: v1.19.0
[upgrade/versions] kubeadm version: v1.19.3
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Pulling images required for setting up a Kubernetes cluster
[upgrade/prepull] This might take a minute or two, depending on the speed of your internet connection
[upgrade/prepull] You can also perform this action in beforehand using 'kubeadm config images pull'
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.19.3"...
Static pod: kube-apiserver-k8s-master hash: f244e3d7ce5a03e5054570c276b5797b
Static pod: kube-controller-manager-k8s-master hash: 6e29a8125b59b61870471984c2d5269e
Static pod: kube-scheduler-k8s-master hash: 1b7d295cfbd2d0ae820b397802bd43f1
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-k8s-master hash: 8f4507159e35abac64071e6a2432286f
[upgrade/staticpods] Preparing for "etcd" upgrade
[upgrade/staticpods] Renewing etcd-server certificate
[upgrade/staticpods] Renewing etcd-peer certificate
[upgrade/staticpods] Renewing etcd-healthcheck-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-12-24-15-35-23/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: etcd-k8s-master hash: 8f4507159e35abac64071e6a2432286f
Static pod: etcd-k8s-master hash: 8f4507159e35abac64071e6a2432286f
Static pod: etcd-k8s-master hash: e484d1aab5bfe8d0a81920bbe0fdd673
[apiclient] Found 1 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests066461858"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-12-24-15-35-23/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-k8s-master hash: f244e3d7ce5a03e5054570c276b5797b
Static pod: kube-apiserver-k8s-master hash: f244e3d7ce5a03e5054570c276b5797b
Static pod: kube-apiserver-k8s-master hash: f244e3d7ce5a03e5054570c276b5797b
Static pod: kube-apiserver-k8s-master hash: ca7b7b6b3364e357aa1a43c09ea67a3e
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-12-24-15-35-23/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-k8s-master hash: 6e29a8125b59b61870471984c2d5269e
Static pod: kube-controller-manager-k8s-master hash: 8f99a56fb3eeae0c61283d6071bfb1f4
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2021-12-24-15-35-23/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-k8s-master hash: 1b7d295cfbd2d0ae820b397802bd43f1
Static pod: kube-scheduler-k8s-master hash: 285062c53852ebaf796eba8548d69e43
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.19" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.19.3". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
[root@k8s-master ~]# kubectl uncordon k8s-master
node/k8s-master uncordoned
[root@k8s-master ~]# kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   32d   v1.19.0
k8s-node1    Ready    <none>   32d   v1.19.0
k8s-node2    Ready    <none>   32d   v1.19.0
[root@k8s-master ~]# kubectl get pods -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-97769f7c7-c4cg5   1/1     Running   0          4m13s
calico-node-4pwdc                         1/1     Running   14         32d
calico-node-9r6zd                         1/1     Running   14         32d
calico-node-vqzdj                         1/1     Running   14         32d
client1                                   1/1     Running   3          41h
coredns-6d56c8448f-gcgrh                  1/1     Running   14         32d
coredns-6d56c8448f-mdl7c                  1/1     Running   0          4m13s
etcd-k8s-master                           1/1     Running   0          83s
filebeat-5pwh7                            1/1     Running   9          9d
filebeat-pt848                            1/1     Running   9          9d
kube-apiserver-k8s-master                 1/1     Running   0          72s
kube-controller-manager-k8s-master        0/1     Running   0          67s
kube-proxy-87lbj                          1/1     Running   0          28s
kube-proxy-mcdnv                          1/1     Running   0          18s
kube-proxy-mchc9                          1/1     Running   0          8s
kube-scheduler-k8s-master                 0/1     Running   0          65s
metrics-server-84f9866fdf-rz676           1/1     Running   12         3d9h
[root@k8s-master ~]# yum install -y kubelet-1.19.3-0 kubectl-1.19.3-0 --disableexcludes=kubernetes
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
Resolving Dependencies
--> Running transaction check
---> Package kubectl.x86_64 0:1.19.0-0 will be updated
---> Package kubectl.x86_64 0:1.19.3-0 will be an update
---> Package kubelet.x86_64 0:1.19.0-0 will be updated
...
Updated:
  kubectl.x86_64 0:1.19.3-0                      kubelet.x86_64 0:1.19.3-0                     

Complete!
[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl restart kubelet
[root@k8s-master ~]# kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   32d   v1.19.3
k8s-node1    Ready    <none>   32d   v1.19.0
k8s-node2    Ready    <none>   32d   v1.19.0

Upgrading the worker nodes:

1. Upgrade kubeadm
yum install -y kubeadm-1.19.3-0 --disableexcludes=kubernetes

2. Drain the pods from the node and mark it unschedulable
kubectl drain k8s-node1 --ignore-daemonsets

3. Upgrade the kubelet configuration
kubeadm upgrade node

4. Upgrade kubelet and kubectl
yum install -y kubelet-1.19.3-0 kubectl-1.19.3-0 --disableexcludes=kubernetes

5. Restart kubelet
systemctl daemon-reload
systemctl restart kubelet

6. Make the node schedulable again so it comes back online
kubectl uncordon k8s-node1
[root@k8s-master ~]# kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   32d   v1.19.3
k8s-node1    Ready    <none>   32d   v1.19.0
k8s-node2    Ready    <none>   32d   v1.19.0
[root@k8s-master ~]# kubectl drain k8s-node1 --ignore-daemonsets
node/k8s-node1 cordoned
error: unable to drain node "k8s-node1", aborting command...

There are pending nodes to be drained:
 k8s-node1
error: cannot delete Pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet (use --force to override): default/client1, default/client2, default/configmap-demo-pod, default/secret-demo-pod, default/sh
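
The drain above is blocked by standalone pods that are not managed by any controller. If you accept that those pods will simply be deleted, the drain can be forced (my own workaround, not shown in the original transcript; on kubectl 1.20+ the last flag is --delete-emptydir-data):

kubectl drain k8s-node1 --ignore-daemonsets --force --delete-local-data
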
[root@k8s-master ~]# kubectl get node
NAME         STATUS                     ROLES    AGE   VERSION
k8s-master   Ready                      master   32d   v1.19.3
k8s-node1    Ready,SchedulingDisabled   <none>   32d   v1.19.0
k8s-node2    Ready                      <none>   32d   v1.19.0

[root@k8s-node1 ~]# yum install -y kubeadm-1.19.3-0 --disableexcludes=kubernetes
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
base                                                                    | 3.6 kB  
docker-ce-stable                                                        | 3.5 kB  
epel                                                                    | 4.7 kB  
extras                                                                  | 2.9 kB           
...
Updated:
  kubeadm.x86_64 0:1.19.3-0                                                       

Complete!
[root@k8s-node1 ~]# kubeadm upgrade node
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks
[preflight] Skipping prepull. Not a control plane node.
[upgrade] Skipping phase. Not a control plane node.
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
[root@k8s-node1 ~]# yum install -y kubelet-1.19.3-0 kubectl-1.19.3-0 --disableexcludes=kubernetes
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * extras: mirrors.aliyun.com
 * updates: mirrors.aliyun.com
Resolving Dependencies
--> Running transaction check
---> Package kubectl.x86_64 0:1.19.0-0 will be updated                                         
...
Updated:
  kubectl.x86_64 0:1.19.3-0                      kubelet.x86_64 0:1.19.3-0        

Complete!
[root@k8s-node1 ~]# systemctl daemon-reload
[root@k8s-node1 ~]# systemctl restart kubelet

[root@k8s-master ~]# kubectl uncordon k8s-node1
node/k8s-node1 uncordoned
[root@k8s-master ~]# kubectl get node
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   32d   v1.19.3
k8s-node1    Ready    <none>   32d   v1.19.3
k8s-node2    Ready    <none>   32d   v1.19.0

Homework:

1. Add a Node using the Bootstrap Token method (binary deployment)

See the steps above.

2. Etcd database backup and restore (kubeadm)

See the steps above.

3. Upgrade the K8s cluster version with kubeadm

See the steps above.
