root@kubeadm-master1:/data# kubeadm init --config kubeadm-init.yaml
W0725 11:17:50.474894 21297 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0725 11:17:50.475019 21297 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.2
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubeadm-master1.haostack.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.haostack.com] and IPs [172.26.0.1 172.16.62.18 172.16.62.200]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kubeadm-master1.haostack.com localhost] and IPs [172.16.62.18 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kubeadm-master1.haostack.com localhost] and IPs [172.16.62.18 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0725 11:18:02.869938 21297 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0725 11:18:02.872886 21297 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 38.530507 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node kubeadm-master1.haostack.com as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node kubeadm-master1.haostack.com as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[kubelet-check] Initial timeout of 40s passed.
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join 172.16.62.200:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:28ec6232cef24f6339d3ebc5051f492f301b41610f561aba7b061763c02078fb \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.16.62.200:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:28ec6232cef24f6339d3ebc5051f492f301b41610f561aba7b061763c02078fb
root@kubeadm-master1:/data#
1.8.3 Initialization complete
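For reference, a minimal kubeadm-init.yaml consistent with the log above might look like the sketch below. The author's actual file isn't reproduced here, so every field value is inferred from the output (certificate SANs, join commands, flannel config, image list) rather than copied:

apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
bootstrapTokens:
- token: "abcdef.0123456789abcdef"        # the token seen in the join commands
localAPIEndpoint:
  advertiseAddress: 172.16.62.18          # this master's IP, per the cert SANs
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.17.2
controlPlaneEndpoint: "172.16.62.200:6443"    # the VIP used in the join commands
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
networking:
  dnsDomain: haostack.com          # matches kubernetes.default.svc.haostack.com
  serviceSubnet: 172.26.0.0/16     # matches the 172.26.0.1 service IP in the cert
  podSubnet: 10.10.0.0/16          # matches FLANNEL_NETWORK in subnet.env below

1.8.4 Create the kubeconfig file
Follow the steps printed at the end of the init output: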
root@kubeadm-master1:/data# mkdir -p $HOME/.kube
root@kubeadm-master1:/data# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@kubeadm-master1:/data# sudo chown $(id -u):$(id -g) $HOME/.kube/config
root@kubeadm-master1:/data#
#Check node status (NotReady is expected until a CNI plugin is deployed)
root@kubeadm-master1:~/.kube# kubectl get node
NAME                           STATUS     ROLES    AGE     VERSION
kubeadm-master1.haostack.com   NotReady   master   4m20s   v1.17.2
root@kubeadm-master1:~/.kube#
root@kubeadm-master1:/data# kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
root@kubeadm-master1:/data#
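Before checking the node again, you can watch the flannel DaemonSet roll out. Assuming the stock kube-flannel.yml labels:

kubectl get pods -n kube-system -l app=flannel -o wide

Only the DaemonSet matching the node architecture (kube-flannel-ds-amd64 on these hosts) actually schedules pods; the arm/arm64/ppc64le/s390x variants stay at 0 desired.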
#Verify the master node status; it should now be Ready
root@kubeadm-master1:/data# kubectl get node
NAME                           STATUS   ROLES    AGE   VERSION
kubeadm-master1.haostack.com   Ready    master   43m   v1.17.2
root@kubeadm-master1:/data#
1.9.2 Configure subnet.env
If a flannel error is reported, manually create a subnet.env file under /run/flannel according to the error message:
root@node3:/run/flannel# more subnet.env
FLANNEL_NETWORK=10.10.0.0/16
FLANNEL_SUBNET=10.10.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
root@node3:/run/flannel#
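A quick way to create the file is a here-document. Note that FLANNEL_SUBNET is per-node (each node is allocated its own /24 out of FLANNEL_NETWORK), so take the values from that node's flannel logs rather than reusing 10.10.0.1/24 everywhere:

mkdir -p /run/flannel
cat > /run/flannel/subnet.env <<'EOF'
FLANNEL_NETWORK=10.10.0.0/16
FLANNEL_SUBNET=10.10.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true
EOF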
1.10 Generate a certificate key on the current master for adding new control-plane nodes
The key is important; save it. kubeadm deletes the uploaded "kubeadm-certs" Secret after two hours, so re-run this phase if the key has expired before you join a new master.
root@kubeadm-master1:/run/flannel# kubeadm init phase upload-certs --upload-certs
W0725 11:28:23.366212 26849 version.go:101] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled (Client.Timeout exceeded while awaiting headers)
W0725 11:28:23.366365 26849 version.go:102] falling back to the local client version: v1.17.2
W0725 11:28:23.366695 26849 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0725 11:28:23.366731 26849 validation.go:28] Cannot validate kubelet config - no validator is available
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
77909a9f1316feb9cb61547e030e87b3b3c341dbcd306e2a4c286e492dd250aa
root@kubeadm-master1:/run/flannel#
root@kubeadm-master3:/var/lib/etcd# kubeadm join 172.16.62.200:6443 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:28ec6232cef24f6339d3ebc5051f492f301b41610f561aba7b061763c02078fb \
> --control-plane --certificate-key 77909a9f1316feb9cb61547e030e87b3b3c341dbcd306e2a4c286e492dd250aa
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[download-certs] Downloading the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kubeadm-master3.haostack.com localhost] and IPs [172.16.62.20 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kubeadm-master3.haostack.com localhost] and IPs [172.16.62.20 127.0.0.1 ::1]
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubeadm-master3.haostack.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.haostack.com] and IPs [172.26.0.1 172.16.62.20 172.16.62.200]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
W0725 11:43:52.749258 20536 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0725 11:43:52.759086 20536 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0725 11:43:52.760517 20536 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
{"level":"warn","ts":"2020-07-25T11:44:20.889+0800","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"passthrough:///https://172.16.62.20:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node kubeadm-master3.haostack.com as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node kubeadm-master3.haostack.com as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
This node has joined the cluster and a new control plane instance was created:
* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.
To start administering your cluster from this node, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Run 'kubectl get nodes' to see this node join the cluster
1.10.3 Verify master status (kubeadm-master2 was joined the same way as master3 above)
root@kubeadm-master1:/data# kubectl get node
NAME                           STATUS   ROLES    AGE     VERSION
kubeadm-master1.haostack.com   Ready    master   28m     v1.17.2
kubeadm-master2.haostack.com   Ready    master   6m31s   v1.17.2
kubeadm-master3.haostack.com   Ready    master   3m7s    v1.17.2
root@kubeadm-master1:/data#
1.10.4 Verify Kubernetes cluster status
root@kubeadm-master1:/data# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
root@kubeadm-master1:/data#
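Note that kubectl get cs only reports the etcd endpoint the local apiserver points at (https://127.0.0.1:2379 in a stacked kubeadm setup), which is why a single etcd-0 shows up despite three members. To list all members, one option is to exec into an etcd static pod (pod name assumed to follow kubeadm's etcd-<nodename> pattern):

kubectl -n kube-system exec etcd-kubeadm-master1.haostack.com -- etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  member list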
1.10.5 Verify CSR status
root@kubeadm-master1:/data# kubectl get csr
NAME        AGE     REQUESTOR                 CONDITION
csr-7chml   6m18s   system:bootstrap:abcdef   Approved,Issued
csr-n7mz2   2m55s   system:bootstrap:abcdef   Approved,Issued
#Join the worker nodes (run on each node)
root@node3:/# kubeadm join 172.16.62.200:6443 --token abcdef.0123456789abcdef \
> --discovery-token-ca-cert-hash sha256:28ec6232cef24f6339d3ebc5051f492f301b41610f561aba7b061763c02078fb
W0725 11:51:59.807442 32501 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
root@node3:/#
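The bootstrap token used above has a default TTL of 24 hours. If more workers are added later, print a fresh join command from any master:

kubeadm token create --print-join-command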
#The node's images were pulled from the Aliyun mirror registry
root@node3:~# docker image ls
REPOSITORY                                                       TAG             IMAGE ID       CREATED        SIZE
quay.io/coreos/flannel                                           v0.12.0-amd64   4e9f801d2217   4 months ago   52.8MB
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy   v1.17.2         cba2a99699bd   6 months ago   116MB
registry.cn-hangzhou.aliyuncs.com/google_containers/pause        3.1             da86e6ba6ca1   2 years ago    742kB
root@node3:~#
1.11.3 Verify node status; all nodes should be Ready
root@kubeadm-master1:/data# kubectl get node
NAME                           STATUS   ROLES    AGE     VERSION
kubeadm-master1.haostack.com   Ready    master   36m     v1.17.2
kubeadm-master2.haostack.com   Ready    master   14m     v1.17.2
kubeadm-master3.haostack.com   Ready    master   11m     v1.17.2
node1.haostack.com             Ready    <none>   3m22s   v1.17.2
node2.haostack.com             Ready    <none>   3m21s   v1.17.2
node3.haostack.com             Ready    <none>   3m18s   v1.17.2
#Deploy kubernetes-dashboard and create an admin user
root@kubeadm-master1:/data/dashboard# kubectl apply -f dashboard-2.0.0-rc6.yml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
root@kubeadm-master1:/data/dashboard# kubectl apply -f admin-user.yml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
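The admin-user.yml applied here isn't reproduced in this log; a typical version (a sketch, assuming the conventional dashboard admin setup) creates a ServiceAccount bound to the cluster-admin ClusterRole:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

With that in place, the dashboard login token can be read from the ServiceAccount's secret:

kubectl -n kubernetes-dashboard describe secret \
  $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')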
#Check which upgrades are available
root@kubeadm-master1:/# kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.17.2
[upgrade/versions] kubeadm version: v1.17.2
I0725 12:47:05.570227 27423 version.go:251] remote version is much newer: v1.18.6; falling back to: stable-1.17
[upgrade/versions] Latest stable version: v1.17.9
[upgrade/versions] Latest version in the v1.17 series: v1.17.9
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     6 x v1.17.2   v1.17.9
Upgrade to the latest version in the v1.17 series:
COMPONENT            CURRENT   AVAILABLE
API Server           v1.17.2   v1.17.9
Controller Manager   v1.17.2   v1.17.9
Scheduler            v1.17.2   v1.17.9
Kube Proxy           v1.17.2   v1.17.9
CoreDNS              1.6.5     1.6.5
Etcd                 3.4.3     3.4.3-0
You can now apply the upgrade by executing the following command:
kubeadm upgrade apply v1.17.9
Note: Before you can perform this upgrade, you have to update kubeadm to v1.17.9.
_____________________________________________________________________
root@kubeadm-master1:/#
3.4 Upgrade kubeadm
All master and node hosts must be upgraded.
3.4.1 Installation
Install command: apt-get install kubeadm=1.17.4-00
#Install on all 3 nodes
root@kubeadm-master2:/etc/kubernetes# apt-get install kubeadm=1.17.4-00
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be DOWNGRADED:
kubeadm
0 upgraded, 0 newly installed, 1 downgraded, 0 to remove and 58 not upgraded.
Need to get 8,064 kB of archives.
After this operation, 4,096 B disk space will be freed.
Do you want to continue? [Y/n] Y
Get:1 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 kubeadm amd64 1.17.4-00 [8,064 kB]
Fetched 8,064 kB in 0s (20.7 MB/s)
dpkg: warning: downgrading kubeadm from 1.17.9-00 to 1.17.4-00
(Reading database ... 102935 files and directories currently installed.)
Preparing to unpack .../kubeadm_1.17.4-00_amd64.deb ...
Unpacking kubeadm (1.17.4-00) over (1.17.9-00) ...
Setting up kubeadm (1.17.4-00) ...
root@kubeadm-master2:/etc/kubernetes#
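Since the Aliyun repository carries newer 1.17.x packages than the pinned 1.17.4-00 (the dpkg warning above shows 1.17.9-00 was already installed), it may be worth holding the packages so a routine apt-get upgrade doesn't move them unexpectedly:

apt-mark hold kubeadm kubelet kubectl
#...and before the next planned upgrade:
apt-mark unhold kubeadm kubelet kubectl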
#Master upgrade process
root@kubeadm-master1:/data/scripts# kubeadm upgrade apply v1.17.4
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/version] You have chosen to change the cluster version to "v1.17.4"
[upgrade/versions] Cluster version: v1.17.4
[upgrade/versions] kubeadm version: v1.17.9
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-etcd
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.17.4"...
Static pod: kube-apiserver-kubeadm-master1.haostack.com hash: b5e3b0092320eb9795f14c21b091fb3f
Static pod: kube-controller-manager-kubeadm-master1.haostack.com hash: bd130f2488b3f5237c32145a1856eaa6
Static pod: kube-scheduler-kubeadm-master1.haostack.com hash: 92e6721871b55a1e38e273dab9a560bc
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/etcd] Non fatal issue encountered during upgrade: the desired etcd version for this Kubernetes version "v1.17.4" is "3.4.3-0", but the current etcd version is "3.4.3". Won't downgrade etcd, instead just continue
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests458386756"
W0725 17:37:49.327094 346 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-07-25-17-37-47/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-kubeadm-master1.haostack.com hash: b5e3b0092320eb9795f14c21b091fb3f
Static pod: kube-apiserver-kubeadm-master1.haostack.com hash: 9a6e25f7f4f72df425fb7b61b7dc9c94
[apiclient] Found 3 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-07-25-17-37-47/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-kubeadm-master1.haostack.com hash: bd130f2488b3f5237c32145a1856eaa6
Static pod: kube-controller-manager-kubeadm-master1.haostack.com hash: 0c38f39e720ba377cf3e5bb783dab2b8
[apiclient] Found 3 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-07-25-17-37-47/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-kubeadm-master1.haostack.com hash: 92e6721871b55a1e38e273dab9a560bc
Static pod: kube-scheduler-kubeadm-master1.haostack.com hash: 51b669b8290157fc8f29eab8a4b7da81
[apiclient] Found 3 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons]: Migrating CoreDNS Corefile
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.17.4". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
root@kubeadm-master1:/data/scripts#
#Node upgrade process
root@node3:/# kubeadm upgrade node --kubelet-version 1.17.4
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Skipping phase. Not a control plane node.
[upgrade] Using kubelet config version 1.17.4, while kubernetes-version is v1.17.4
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.17" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.
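The final hint refers to the usual package step on each host. On these Ubuntu nodes it would look roughly like the following (draining is optional in a lab but recommended in production):

kubectl drain node3.haostack.com --ignore-daemonsets    # from a master
apt-get install -y kubelet=1.17.4-00 kubectl=1.17.4-00  # on the node itself
systemctl daemon-reload && systemctl restart kubelet
kubectl uncordon node3.haostack.com                     # from a master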
Use haproxy and keepalived to build a highly available reverse proxy that forwards traffic to the business Pods running in the Kubernetes cluster.
haproxy configuration:
listen k8s-pod-nginx
    bind 172.16.62.201:80
    mode tcp
    balance roundrobin
    server node1 172.16.62.31:30007 check inter 3s fall 3 rise 5
    server node2 172.16.62.32:30007 check inter 3s fall 3 rise 5
    server node3 172.16.62.33:30007 check inter 3s fall 3 rise 5
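The matching keepalived configuration isn't shown in the original. A minimal sketch that floats the 172.16.62.201 VIP used by the listen block above could look like this; the interface name, virtual_router_id, and priority are assumptions to adapt per host:

vrrp_instance VI_NGINX {
    state MASTER                # BACKUP with a lower priority on the standby
    interface eth0              # adjust to the host's NIC name
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.16.62.201 dev eth0 label eth0:1
    }
}

Once haproxy and keepalived are both up, curl http://172.16.62.201/ should reach the nginx Pods through whichever nodes pass the health checks.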