Upgrading Kubernetes with kubeadm

Date: 2019-03-06 16:50:10

Notes:

The example below upgrades a cluster from 1.11.2 to 1.11.7.

The same procedure applies to upgrades to the next minor version, such as 1.11.x to 1.12.x.

Skipping a minor version, e.g. upgrading directly from 1.11 to 1.13, is not supported.

  • Environment:

    1. OS: Ubuntu 18.04 LTS
    2. K8s deployment: control plane deployed as static Pods
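
Before touching anything, it is prudent to snapshot the control-plane state so that a failed upgrade can also be recovered by hand. A minimal sketch, assuming the default kubeadm paths for manifests, certificates, and the local etcd data directory:

	# Back up static Pod manifests, certs, and the etcd data dir (default paths)
	cp -a /etc/kubernetes /etc/kubernetes.bak-$(date +%F)
	cp -a /var/lib/etcd /var/lib/etcd.bak-$(date +%F)

kubeadm also backs up the old manifests on its own during the upgrade (see the kubeadm-backup-manifests-* paths in the logs below), but an explicit copy costs nothing.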

Step 1. Check the available upgrades with kubeadm upgrade plan

[root@zxg kubernetes_ubuntu]# kubeadm upgrade plan
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.11.2
[upgrade/versions] kubeadm version: v1.11.2

[upgrade/versions] Latest stable version: v1.13.3
[upgrade/versions] Latest version in the v1.11 series: v1.11.7
[upgrade/versions] WARNING: No recommended etcd for requested kubernetes version (v1.13.3)

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     1 x v1.11.2   v1.11.7

Upgrade to the latest version in the v1.11 series:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.11.2   v1.11.7
Controller Manager   v1.11.2   v1.11.7
Scheduler            v1.11.2   v1.11.7
Kube Proxy           v1.11.2   v1.11.7
CoreDNS              1.1.3     1.1.3
Etcd                 3.2.18    3.2.18

You can now apply the upgrade by executing the following command:

	kubeadm upgrade apply v1.11.7

Note: Before you can perform this upgrade, you have to update kubeadm to v1.11.7.

_____________________________________________________________________

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     1 x v1.11.2   v1.13.3

Upgrade to the latest stable version:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.11.2   v1.13.3
Controller Manager   v1.11.2   v1.13.3
Scheduler            v1.11.2   v1.13.3
Kube Proxy           v1.11.2   v1.13.3
CoreDNS              1.1.3     1.1.3
Etcd                 3.2.18    N/A

You can now apply the upgrade by executing the following command:

	kubeadm upgrade apply v1.13.3

Note: Before you can perform this upgrade, you have to update kubeadm to v1.13.3.

_____________________________________________________________________
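
The plan offers two targets: the latest patch release in the current series (v1.11.7) and the latest stable release (v1.13.3). Per the notes above, we take the in-series upgrade to v1.11.7. Before proceeding it is also worth confirming the cluster is healthy:

	# All nodes should be Ready and all kube-system Pods Running
	kubectl get nodes
	kubectl get pods -n kube-system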

Step 2. As the plan output indicates, first upgrade kubeadm/kubelet/kubectl

1. Download the kubeadm/kubelet/kubectl packages [must be done on a machine with Internet access]

# apt download kubeadm=1.11.7-00 kubectl=1.11.7-00 kubelet=1.11.7-00
# ls
kubeadm_1.11.7-00_amd64.deb  kubectl_1.11.7-00_amd64.deb  kubelet_1.11.7-00_amd64.deb

2. Upgrade kubeadm/kubelet/kubectl to 1.11.7

# ls
kubeadm_1.11.7-00_amd64.deb  kubectl_1.11.7-00_amd64.deb  kubelet_1.11.7-00_amd64.deb
# dpkg -i *.deb

Note: the two steps above can be combined into a single command: apt install kubeadm=1.11.7-00 kubectl=1.11.7-00 kubelet=1.11.7-00
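
If the packages are pinned with apt-mark hold (a common practice to keep unattended upgrades from bumping the kubelet), release the pin first and restore it afterwards. A sketch, assuming an apt-based host with the Kubernetes repository configured:

	# Temporarily release the version pin, upgrade, then pin again
	apt-mark unhold kubeadm kubelet kubectl
	apt install -y kubeadm=1.11.7-00 kubelet=1.11.7-00 kubectl=1.11.7-00
	apt-mark hold kubeadm kubelet kubectl
	kubeadm version   # should now report v1.11.7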

Step 3. Download the images required for the target version

New images needed for the 1.11.2 to 1.11.7 upgrade:

- kube-apiserver
- kube-controller-manager
- kube-scheduler
- kube-proxy

Download them and push them to the private registry:

# PULL [requires Internet access]
docker pull mirrorgooglecontainers/kube-apiserver:v1.11.7
docker pull mirrorgooglecontainers/kube-controller-manager:v1.11.7
docker pull mirrorgooglecontainers/kube-scheduler:v1.11.7
docker pull mirrorgooglecontainers/kube-proxy:v1.11.7

# TAG
docker tag mirrorgooglecontainers/kube-controller-manager:v1.11.7 myharbor.io/google_containers/kube-controller-manager:v1.11.7
docker tag mirrorgooglecontainers/kube-apiserver:v1.11.7          myharbor.io/google_containers/kube-apiserver:v1.11.7
docker tag mirrorgooglecontainers/kube-scheduler:v1.11.7          myharbor.io/google_containers/kube-scheduler:v1.11.7                                                                                                  
docker tag mirrorgooglecontainers/kube-proxy:v1.11.7              myharbor.io/google_containers/kube-proxy:v1.11.7

# PUSH
docker push myharbor.io/google_containers/kube-controller-manager:v1.11.7
docker push myharbor.io/google_containers/kube-apiserver:v1.11.7
docker push myharbor.io/google_containers/kube-scheduler:v1.11.7
docker push myharbor.io/google_containers/kube-proxy:v1.11.7
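
The repetitive pull/tag/push sequence above can be collapsed into a loop. A sketch, reusing the registry name myharbor.io/google_containers from this article (substitute your own registry):

	#!/usr/bin/env bash
	# Mirror the v1.11.7 control-plane images into a private registry.
	set -euo pipefail
	VERSION=v1.11.7
	SRC=mirrorgooglecontainers           # public mirror of the k8s.gcr.io images
	DST=myharbor.io/google_containers    # private registry used in this article
	for img in kube-apiserver kube-controller-manager kube-scheduler kube-proxy; do
	    docker pull "${SRC}/${img}:${VERSION}"
	    docker tag  "${SRC}/${img}:${VERSION}" "${DST}/${img}:${VERSION}"
	    docker push "${DST}/${img}:${VERSION}"
	done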

Step 4. Prepare the upgrade configuration file: kubeadm.yaml

The kubeadm.yaml file contains the following:

	apiVersion: kubeadm.k8s.io/v1alpha2
	kind: MasterConfiguration
	api:
	  advertiseAddress: 0.0.0.0
	kubernetesVersion: v1.11.7
	imageRepository: myharbor.io/google_containers
	networking:
	  podSubnet: 10.244.0.0/16
	nodeRegistration:
	  criSocket: /var/run/containerd/containerd.sock
	kubeProxy:
	  config:
	    # Choose the kube-proxy mode here: ipvs or iptables
	    mode: iptables
	    # mode: ipvs

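Before the real run, the upgrade can be previewed: kubeadm upgrade apply accepts a --dry-run flag that prints the intended actions without changing cluster state. A quick sanity check:

	# Preview the upgrade; no cluster state is modified
	kubeadm upgrade apply v1.11.7 --config=kubeadm.yaml --dry-run
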
Step 5. Run the upgrade: kubeadm upgrade apply --config=kubeadm.yaml

[root@zxg ~]# kubeadm upgrade apply --config=kubeadm.yaml 
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration options from a file: kubeadm.yaml
[upgrade/apply] Respecting the --cri-socket flag that is set with higher priority than the config file.
[upgrade/version] You have chosen to change the cluster version to "v1.11.7"
[upgrade/versions] Cluster version: v1.11.2
[upgrade/versions] kubeadm version: v1.11.7
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.11.7"...
Static pod: kube-apiserver-zxg hash: 900aed5b6bf2b21c48ba41b186a4fad8
Static pod: kube-controller-manager-zxg hash: d79931295557377101ffdf8bcd6a5fe4
Static pod: kube-scheduler-zxg hash: 189c3f1bedf26f48a89c00f78bd0ff5f
Static pod: etcd-zxg hash: a48d824ef555c4f250e97d579ca533c6
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests607648932/etcd.yaml"
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-02-20-14-07-41/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: etcd-zxg hash: a48d824ef555c4f250e97d579ca533c6
[upgrade/etcd] Failed to upgrade etcd: couldn't upgrade control plane. kubeadm has tried to recover everything into the earlier state. Errors faced: [timed out waiting for the condition]
[upgrade/etcd] Waiting for previous etcd to become available
[util/etcd] Waiting 0s for initial delay
[util/etcd] Attempting to see if all cluster endpoints are available 1/10
[upgrade/etcd] Etcd was rolled back and is now available
[upgrade/apply] FATAL: fatal error when trying to upgrade the etcd cluster: couldn't upgrade control plane. kubeadm has tried to recover everything into the earlier state. Errors faced: [timed out waiting for the condition], rolled the state back to pre-upgrade state

Note: the first attempt timed out waiting for the new etcd static Pod to come up, and kubeadm rolled the cluster back to its pre-upgrade state. Simply re-running the command (this time with the target version passed explicitly) succeeded:

[root@zxg ~]# kubeadm upgrade apply v1.11.7 --config=kubeadm.yaml 
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration options from a file: kubeadm.yaml
[upgrade/apply] Respecting the --cri-socket flag that is set with higher priority than the config file.
[upgrade/version] You have chosen to change the cluster version to "v1.11.7"
[upgrade/versions] Cluster version: v1.11.2
[upgrade/versions] kubeadm version: v1.11.7
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.11.7"...
Static pod: kube-apiserver-zxg hash: 900aed5b6bf2b21c48ba41b186a4fad8
Static pod: kube-controller-manager-zxg hash: d79931295557377101ffdf8bcd6a5fe4
Static pod: kube-scheduler-zxg hash: 189c3f1bedf26f48a89c00f78bd0ff5f
Static pod: etcd-zxg hash: a48d824ef555c4f250e97d579ca533c6
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests114868796/etcd.yaml"
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-02-20-14-18-33/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: etcd-zxg hash: a48d824ef555c4f250e97d579ca533c6
Static pod: etcd-zxg hash: a48d824ef555c4f250e97d579ca533c6
Static pod: etcd-zxg hash: a48d824ef555c4f250e97d579ca533c6
Static pod: etcd-zxg hash: a48d824ef555c4f250e97d579ca533c6
Static pod: etcd-zxg hash: a48d824ef555c4f250e97d579ca533c6
Static pod: etcd-zxg hash: a48d824ef555c4f250e97d579ca533c6
Static pod: etcd-zxg hash: a48d824ef555c4f250e97d579ca533c6
Static pod: etcd-zxg hash: a48d824ef555c4f250e97d579ca533c6
Static pod: etcd-zxg hash: a48d824ef555c4f250e97d579ca533c6
Static pod: etcd-zxg hash: a48d824ef555c4f250e97d579ca533c6
Static pod: etcd-zxg hash: a48d824ef555c4f250e97d579ca533c6
Static pod: etcd-zxg hash: a48d824ef555c4f250e97d579ca533c6
Static pod: etcd-zxg hash: a48d824ef555c4f250e97d579ca533c6
Static pod: etcd-zxg hash: a48d824ef555c4f250e97d579ca533c6
Static pod: etcd-zxg hash: a48d824ef555c4f250e97d579ca533c6
Static pod: etcd-zxg hash: 04a72a21ab41a100dc08c81c92d03385
[apiclient] Found 1 Pods for label selector component=etcd
[apiclient] Found 0 Pods for label selector component=etcd
[apiclient] Found 1 Pods for label selector component=etcd
[apiclient] Found 0 Pods for label selector component=etcd
[apiclient] Found 1 Pods for label selector component=etcd
[apiclient] Found 0 Pods for label selector component=etcd
[apiclient] Found 1 Pods for label selector component=etcd
[apiclient] Found 0 Pods for label selector component=etcd
[apiclient] Found 1 Pods for label selector component=etcd
[apiclient] Found 0 Pods for label selector component=etcd
[apiclient] Found 1 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[util/etcd] Waiting 0s for initial delay
[util/etcd] Attempting to see if all cluster endpoints are available 1/10
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests114868796"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests114868796/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests114868796/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests114868796/kube-scheduler.yaml"
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-02-20-14-18-33/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-zxg hash: 900aed5b6bf2b21c48ba41b186a4fad8
Static pod: kube-apiserver-zxg hash: 900aed5b6bf2b21c48ba41b186a4fad8
Static pod: kube-apiserver-zxg hash: 900aed5b6bf2b21c48ba41b186a4fad8
Static pod: kube-apiserver-zxg hash: 0187d94c3eb4fb4ba5b7448e1105b494
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[apiclient] Found 0 Pods for label selector component=kube-apiserver
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[apiclient] Found 0 Pods for label selector component=kube-apiserver
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[apiclient] Found 0 Pods for label selector component=kube-apiserver
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[apiclient] Found 0 Pods for label selector component=kube-apiserver
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-02-20-14-18-33/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-zxg hash: d79931295557377101ffdf8bcd6a5fe4
Static pod: kube-controller-manager-zxg hash: 3cb7a61cf22074ef031bfa7b8c35fd22
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-02-20-14-18-33/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-zxg hash: 189c3f1bedf26f48a89c00f78bd0ff5f
Static pod: kube-scheduler-zxg hash: d35136372fb9df1046abc7cedbb7e02c
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "zxg" as an annotation
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.11.7". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
[root@zxg ~]# 
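
The kubelet package was already upgraded to 1.11.7 in Step 2, so all that remains is to restart it and verify the node reports the new version. A sketch of the post-upgrade check:

	# Restart the kubelet so it runs the new binary and re-reads
	# the /var/lib/kubelet/config.yaml written during the upgrade
	systemctl daemon-reload
	systemctl restart kubelet

	# Verify: the node VERSION column should now show v1.11.7 and
	# all kube-system Pods should be Running
	kubectl get nodes
	kubectl get pods -n kube-system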
