Kubernetes High-Availability Cluster Deployment


Preface

Kubernetes version: 1.18.0
OS: CentOS 7
[root@master1 k8s]# cat /etc/redhat-release
CentOS Linux release 7.9.2009 (Core)

I. Environment Preparation

1. Server Plan

Role     Hostname   IP
Master   master1    192.168.10.11
Master   master2    192.168.10.12
Master   master3    192.168.10.13
Worker   node1      192.168.10.14
Worker   node2      192.168.10.15
Worker   node3      192.168.10.16
VIP      -          192.168.10.100 (apiserver reached through haproxy on port 6444)

2. Environment Setup

Configure all of the following on every node.

  1. Make sure every node can reach the Internet and has a working YUM repository (omitted).

  2. Synchronize time: pick one host as the time source and let the other nodes sync to it.

[root@master1 k8s]# yum install chrony ntpdate -y
[root@master1 k8s]# vim /etc/chrony.conf
# Allow NTP client access from local network.
allow 192.168.0.0/16  # uncomment this line
# Serve time even if not synchronized to a time source.
local stratum 10    # uncomment this line

[root@master1 k8s]# systemctl restart chronyd
[root@master1 k8s]# chronyc sources     # any reachable source will do

^? stratum2-1.ntp.mow01.ru.>    0   7    0    -     +0ns[   +0ns] +/-    0ns
^? docker01.rondie.nl           0   7    0    -     +0ns[   +0ns] +/-    0ns
^? makaki.miuku.net             2   6    1   27  +1568ms[+1568ms] +/-  104ms

[root@master1 k8s]# ntpdate docker01.rondie.nl  # repeat until it succeeds, then point the other nodes at this host (sketch below)
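A minimal sketch for the other nodes, assuming they sync to master1 (192.168.10.11) configured above:

yum install -y chrony
sed -i 's/^server /#server /' /etc/chrony.conf          # comment out the default pool servers
echo 'server 192.168.10.11 iburst' >> /etc/chrony.conf
systemctl restart chronyd && systemctl enable chronyd
chronyc sources                                         # master1 should appear as the only source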
  3. Disable the firewall and SELinux
systemctl stop firewalld
systemctl disable firewalld
setenforce 0                                                           # SELinux off for the current boot
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config    # and across reboots
  4. Set the hostnames and name resolution
[root@master1 k8s]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.10.11 master1
192.168.10.12 master2
192.168.10.13 master3
192.168.10.14 node1
192.168.10.15 node2
192.168.10.16 node3
  5. Disable the swap partition
 swapoff -a
 sed -i 's/.*swap/#&/' /etc/fstab
  6. Configure kernel parameters
[root@master1 k8s]# vim /etc/sysctl.d/kubernetes.conf 
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
vm.swappiness=0

[root@master1 ~]# sysctl --system # a few errors may appear here; see the note below
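If sysctl --system complains about the net.bridge.* keys, it is usually because the br_netfilter module is not loaded yet; loading it and re-applying normally clears the errors:

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf   # load it again after a reboot
sysctl --system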
  7. Load the ipvs modules
[root@master1 k8s]#  vim /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4

[root@master1 k8s]#  chmod +x /etc/sysconfig/modules/ipvs.modules
[root@master1 k8s]#  /etc/sysconfig/modules/ipvs.modules
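You can confirm the modules are loaded with lsmod, for example:

lsmod | egrep 'ip_vs|nf_conntrack_ipv4'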

II. Deployment Procedure

1. Install and configure Docker (all nodes)

[root@master1 k8s]#  curl http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -o /etc/yum.repos.d/docker-ce.repo 
[root@master1 k8s]#  yum -y install docker-ce-19.03.15   # install Docker; the 19.03 series is recommended

[root@master1 k8s]#  mkdir /etc/docker
[root@master1 k8s]#  vim /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ],
  "registry-mirrors": ["https://pf5f57i3.mirror.aliyuncs.com"]
}

systemctl start docker      # start docker
systemctl enable docker
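To check that Docker picked up the daemon.json settings (kubeadm expects the systemd cgroup driver), something like:

docker info 2>/dev/null | egrep -i 'cgroup driver|storage driver'
# should report: Cgroup Driver: systemd  and  Storage Driver: overlay2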

2. Install the Kubernetes packages (all nodes)

Configure the YUM repository, then install the packages:
[root@master1 k8s]#  curl https://gitee.com/leedon21/k8s/raw/master/kubernetes.repo -o /etc/yum.repos.d/kubernetes.repo
[root@master1 k8s]#  yum install -y kubeadm-1.18.0-0 kubelet-1.18.0-0 kubectl-1.18.0-0 ipvsadm

Run yum list --showduplicates | egrep kubeadm to see which versions are available; kubeadm, kubelet and kubectl must be kept at the same version.
systemctl enable kubelet    # enable kubelet at boot; kubeadm will start it during init
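It is worth confirming that every node ended up with the same version, for example:

kubeadm version -o short        # should print v1.18.0
kubelet --version
kubectl version --client --short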

3. Install the load balancer and high-availability components (all Master nodes)

Background:
The Kubernetes master nodes run the following components:
- kube-apiserver
- kube-scheduler
- kube-controller-manager
kube-scheduler and kube-controller-manager run in a clustered mode: leader election picks one active instance while the others stand by. kube-apiserver can run as multiple independent instances, but the other components need a single, highly available address to reach it.
This deployment uses keepalived + haproxy to provide that address: keepalived manages the VIP through which kube-apiserver is exposed, and haproxy listens on the VIP, health-checks all kube-apiserver instances and load-balances across them. kube-apiserver itself listens on 6443, so to avoid a conflict haproxy listens on a different port, 6444 in this lab.
keepalived periodically checks the local haproxy process; if haproxy is unhealthy, the VIP fails over to another master. All components reach kube-apiserver through port 6444 on the VIP.
Here we use the Wise2C images (see https://github.com/wise2c-devops for details). You can of course also configure haproxy by hand for apiserver load balancing and keepalived for haproxy high availability.

1) Create the startup scripts for haproxy and keepalived

[root@master1 k8s]#   vim haproxy.sh
#!/bin/bash
MasterIP1=192.168.10.11
MasterIP2=192.168.10.12
MasterIP3=192.168.10.13
MasterPort=6443                   # apiserver port
docker run -d --restart=always --name haproxy-k8s -p 6444:6444 \
          -e MasterIP1=$MasterIP1 \
          -e MasterIP2=$MasterIP2 \
          -e MasterIP3=$MasterIP3 \
          -e MasterPort=$MasterPort  wise2c/haproxy-k8s
[root@master1 k8s]# vim keepalived.sh
#!/bin/bash
VIRTUAL_IP=192.168.10.100         # VIP
INTERFACE=ens33                   # network interface name
NETMASK_BIT=24
CHECK_PORT=6444                   # haproxy port
RID=10
VRID=160
MCAST_GROUP=224.0.0.18
docker run -itd --restart=always --name=keepalived-k8s \
           --net=host --cap-add=NET_ADMIN \
           -e VIRTUAL_IP=$VIRTUAL_IP \
           -e INTERFACE=$INTERFACE \
           -e NETMASK_BIT=$NETMASK_BIT \
           -e CHECK_PORT=$CHECK_PORT \
           -e RID=$RID -e VRID=$VRID \
           -e MCAST_GROUP=$MCAST_GROUP  wise2c/keepalived-k8s

2) Run both scripts on every Master node to start the two containers

[root@master1 k8s]# sh haproxy.sh
[root@master1 k8s]# sh keepalived.sh

Tests:
1) On every master, check that both containers (haproxy, keepalived) are running.
2) On every master, check that port 6444 is listening.
3) On the node holding the VIP, stop the haproxy or keepalived container and verify that the VIP fails over to another master (example commands below).
Reference: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/
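A sketch of the corresponding checks (interface name and VIP as configured in keepalived.sh; the failover relies on the container's health check of port 6444):

docker ps | egrep 'haproxy-k8s|keepalived-k8s'      # 1) both containers running
ss -lntp | grep 6444                                # 2) haproxy listening on 6444
ip addr show ens33 | grep 192.168.10.100            # 3) find the node currently holding the VIP
docker stop haproxy-k8s                             #    stop haproxy on that node ...
ip addr show ens33 | grep 192.168.10.100            #    ... the VIP should disappear here and show up on another master
docker start haproxy-k8s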

4. Initialize Master1 (on the Master1 node only)

Create the initialization configuration file on Master1 and adjust it for your environment.

[root@master1 ~]# mkdir k8s
[root@master1 ~]# cd k8s/
[root@master1 k8s]# kubeadm config print init-defaults > init.yml

[root@master1 k8s]# vim init.yml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.10.11         # change to this node's IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "192.168.10.100:6444"   # VIP:PORT
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers  # use a China-local mirror of the image registry
kind: ClusterConfiguration
kubernetesVersion: v1.18.0    # Kubernetes version
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
  podSubnet: 10.244.0.0/16    # pod subnet; must match the Flannel configuration
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
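Optionally, the required images can be pulled ahead of time so the init step itself goes faster (this reads the imageRepository from init.yml):

[root@master1 k8s]# kubeadm config images pull --config init.yml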

Initialize Master1. This command takes a while to complete:

[root@master1 k8s]# kubeadm init --config=init.yml --upload-certs |tee kubeadm-init.log 
W0330 13:01:44.434476    4560 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.10.11 192.168.10.100]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master1 localhost] and IPs [192.168.10.11 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master1 localhost] and IPs [192.168.10.11 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
W0330 13:01:50.324799    4560 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0330 13:01:50.329031    4560 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 39.597752 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-check] Initial timeout of 40s passed.
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
b696a3afc89a8c60e130028d66be172c348ee80c789fcec6f79f759142eea6b8
[mark-control-plane] Marking the node master1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy
​
Your Kubernetes control-plane has initialized successfully!
​
To start using your cluster, you need to run the following as a regular user:
​
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
​
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/
​
You can now join any number of the control-plane node running the following command on each as root:
​
  kubeadm join 192.168.10.100:6444 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:673d71fd341c79d3a013993c546bbf529f8626506f8d14fc69f0be376956e56f \
    --control-plane --certificate-key b696a3afc89a8c60e130028d66be172c348ee80c789fcec6f79f759142eea6b8
​
Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
​
Then you can join any number of worker nodes by running the following on each as root:
​
kubeadm join 192.168.10.100:6444 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:673d71fd341c79d3a013993c546bbf529f8626506f8d14fc69f0be376956e56f
Notes:
kubeadm init performs roughly the following steps:
[init]: initialize using the specified version
[preflight]: run pre-flight checks and pull the required Docker images
[kubelet-start]: generate the kubelet configuration file /var/lib/kubelet/config.yaml; without it kubelet cannot start, which is why kubelet fails to start before init
[certs]: generate the certificates Kubernetes uses and store them in /etc/kubernetes/pki
[kubeconfig]: generate the kubeconfig files in /etc/kubernetes, which the components use to talk to each other
[control-plane]: install the Master components from the YAML manifests under /etc/kubernetes/manifests
[etcd]: install etcd from /etc/kubernetes/manifests/etcd.yaml
[wait-control-plane]: wait for the control-plane components started by kubelet to come up
[apiclient]: check that the Master components are healthy
[upload-config]: store the configuration used in a ConfigMap
[kubelet]: configure the cluster's kubelets via a ConfigMap
[patchnode]: record CNI information on the Node as annotations
[mark-control-plane]: label the node with the master role and add the NoSchedule taint, so ordinary Pods are not scheduled onto Masters by default
[bootstrap-token]: generate the bootstrap token; keep it, it is needed later when adding nodes with kubeadm join
[addons]: install the CoreDNS and kube-proxy add-ons

5. Configure kubectl (on the Master1 node only)

kubectl must be configured before it can be used; there are two ways (pick one).

Method 1: use a kubeconfig file
 mkdir -p $HOME/.kube
 cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
 chown $(id -u):$(id -g) $HOME/.kube/config
 
Method 2: use an environment variable
echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> ~/.bashrc
source ~/.bashrc
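Optionally (purely for convenience), bash completion for kubectl can be enabled:

yum install -y bash-completion
echo 'source <(kubectl completion bash)' >> ~/.bashrc
source ~/.bashrc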

Once kubectl is configured, kubectl commands work:
[root@master1 k8s]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   
scheduler            Healthy   ok              
​
[root@master1 k8s]# kubectl get no
NAME      STATUS     ROLES    AGE     VERSION
master1   NotReady   master   2m32s   v1.18.0
​
[root@master1 k8s]# kubectl get po -n kube-system
NAME                              READY   STATUS    RESTARTS   AGE
coredns-7ff77c879f-965nt          0/1     Pending   0          2m24s
coredns-7ff77c879f-qmsb2          0/1     Pending   0          2m24s
etcd-master1                      1/1     Running   1          2m38s
kube-apiserver-master1            1/1     Running   1          2m38s
kube-controller-manager-master1   1/1     Running   1          2m38s
kube-proxy-q847x                  1/1     Running   1          2m24s
kube-scheduler-master1            1/1     Running   1          2m38s

Note: because no network plugin has been installed yet, the coredns Pods stay Pending and the node stays NotReady. On newer Kubernetes releases the scheduler and controller-manager health checks may also report Unhealthy, as shown below:

[root@master-1 k8s]# kubectl get cs
NAME                 STATUS      MESSAGE                                                                                     ERROR
controller-manager   Unhealthy   Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: connect: connection refused
scheduler            Unhealthy   Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused
etcd-0               Healthy     {"health":"true"}

Fix: edit the following two files and comment out the line "- --port=0", then wait a moment:
[root@master1 k8s]# vim /etc/kubernetes/manifests/kube-controller-manager.yaml
[root@master1 k8s]# vim /etc/kubernetes/manifests/kube-scheduler.yaml
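A sketch of making the same edit with sed instead of vim (assuming the flag appears exactly as "- --port=0"; the kubelet reloads changed static Pod manifests automatically):

sed -i 's/^\([[:space:]]*\)- --port=0/\1#- --port=0/' /etc/kubernetes/manifests/kube-controller-manager.yaml
sed -i 's/^\([[:space:]]*\)- --port=0/\1#- --port=0/' /etc/kubernetes/manifests/kube-scheduler.yaml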

6. Deploy the network plugin (on the Master1 node only)

Kubernetes supports several network solutions; here we use the common flannel option.
The latest kube-flannel.yml is at https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml (after downloading it, remember to adjust the image references inside).

[root@master1 k8s]# kubectl apply -f  https://gitee.com/leedon21/k8s/raw/master/kube-flannel.yml
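If you prefer to start from the upstream manifest instead of the mirrored copy above, a sketch of downloading and checking it first (the grep patterns are just illustrative):

curl -o kube-flannel.yml https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
grep -n 'image:' kube-flannel.yml        # swap these for a registry you can reach, if needed
grep -n '"Network"' kube-flannel.yml     # should match the podSubnet above: 10.244.0.0/16
kubectl apply -f kube-flannel.yml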
After a short while, check the node and Pod status again; everything is up and all the core components are running.

[root@master1 k8s]# kubectl get po -n kube-system
NAME                              READY   STATUS    RESTARTS   AGE
coredns-7ff77c879f-965nt          1/1     Running   0          24m
coredns-7ff77c879f-qmsb2          1/1     Running   0          24m
etcd-master1                      1/1     Running   1          25m
kube-apiserver-master1            1/1     Running   1          25m
kube-controller-manager-master1   1/1     Running   1          25m
kube-flannel-ds-amd64-vvj65       1/1     Running   0          48s
kube-proxy-q847x                  1/1     Running   1          24m
kube-scheduler-master1            1/1     Running   1          25m
​
[root@master1 k8s]# kubectl get no
NAME      STATUS   ROLES    AGE   VERSION
master1   Ready    master   26m   v1.18.0


7. Join the remaining Master nodes

Run the control-plane join command on the other two masters:

[root@master2 ~]# kubeadm join 192.168.10.100:6444 --token abcdef.0123456789abcdef \
>     --discovery-token-ca-cert-hash sha256:673d71fd341c79d3a013993c546bbf529f8626506f8d14fc69f0be376956e56f \
>     --control-plane --certificate-key b696a3afc89a8c60e130028d66be172c348ee80c789fcec6f79f759142eea6b8
[root@master3 ~]# kubeadm join 192.168.10.100:6444 --token abcdef.0123456789abcdef \
>     --discovery-token-ca-cert-hash sha256:673d71fd341c79d3a013993c546bbf529f8626506f8d14fc69f0be376956e56f \
>     --control-plane --certificate-key b696a3afc89a8c60e130028d66be172c348ee80c789fcec6f79f759142eea6b8
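If the bootstrap token (24h TTL) or the uploaded certificates (deleted after two hours) have expired by the time you get here, fresh ones can be generated on Master1, for example:

[root@master1 k8s]# kubeadm token create --print-join-command          # prints a new worker join command
[root@master1 k8s]# kubeadm init phase upload-certs --upload-certs     # prints a new value for --certificate-key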

Wait a moment and check again; all three masters come up.

[root@master1 k8s]# kubectl get no
NAME      STATUS   ROLES    AGE    VERSION
master1   Ready    master   12m    v1.18.0
master2   Ready    master   10m    v1.18.0
master3   Ready    master   3m3s   v1.18.0
​
[root@master1 k8s]# kubectl get po -n kube-system
NAME                              READY   STATUS    RESTARTS   AGE
coredns-7ff77c879f-r4mqp          1/1     Running   0          12m
coredns-7ff77c879f-w9nsh          1/1     Running   0          12m
etcd-master1                      1/1     Running   0          13m
etcd-master2                      1/1     Running   0          9m39s
etcd-master3                      1/1     Running   0          3m51s
kube-apiserver-master1            1/1     Running   0          13m
kube-apiserver-master2            1/1     Running   0          11m
kube-apiserver-master3            1/1     Running   0          3m50s
kube-controller-manager-master1   1/1     Running   1          13m
kube-controller-manager-master2   1/1     Running   1          9m43s
kube-controller-manager-master3   1/1     Running   0          3m50s
kube-flannel-ds-amd64-9wn57       1/1     Running   1          11m
kube-flannel-ds-amd64-ktxpl       1/1     Running   0          12m
kube-flannel-ds-amd64-qhttx       1/1     Running   0          8m59s
kube-proxy-6hlql                  1/1     Running   0          11m
kube-proxy-jbx8r                  1/1     Running   0          12m
kube-proxy-l9782                  1/1     Running   0          8m59s
kube-scheduler-master1            1/1     Running   2          13m
kube-scheduler-master2            1/1     Running   2          9m43s
kube-scheduler-master3            1/1     Running   0          3m49s

8. Join the Worker nodes (run on every worker node)

[root@node1 ~]# kubeadm join 192.168.10.100:6444 --token abcdef.0123456789abcdef \
>     --discovery-token-ca-cert-hash sha256:673d71fd341c79d3a013993c546bbf529f8626506f8d14fc69f0be376956e56f
[root@node2 ~]# kubeadm join 192.168.10.100:6444 --token abcdef.0123456789abcdef \
>     --discovery-token-ca-cert-hash sha256:673d71fd341c79d3a013993c546bbf529f8626506f8d14fc69f0be376956e56f
[root@node3 ~]# kubeadm join 192.168.10.100:6444 --token abcdef.0123456789abcdef \
>     --discovery-token-ca-cert-hash sha256:673d71fd341c79d3a013993c546bbf529f8626506f8d14fc69f0be376956e56f
[root@master1 k8s]# kubectl get no
NAME      STATUS   ROLES    AGE     VERSION
master1   Ready    master   22m     v1.18.0
master2   Ready    master   20m     v1.18.0
master3   Ready    master   13m     v1.18.0
node1     Ready    <none>   4m46s   v1.18.0
node2     Ready    <none>   3m24s   v1.18.0
node3     Ready    <none>   2m50s   v1.18.0
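The worker ROLES show as <none>; if you want them labeled, the label is purely cosmetic and can be added by hand, for example:

[root@master1 k8s]# kubectl label node node1 node2 node3 node-role.kubernetes.io/worker=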

At this point the cluster deployment is complete.

III. Troubleshooting

If something goes wrong during installation, on either a Master or a worker node you can run kubeadm reset to reset that node:

[root@node2 ~]# kubeadm reset
[reset] WARNING: Changes made to this host by 'kubeadm init' or 'kubeadm join' will be reverted.
[reset] Are you sure you want to proceed? [y/N]: y
[preflight] Running pre-flight checks
W0321 22:54:01.292739    7918 removeetcdmember.go:79] [reset] No kubeadm config, using etcd pod spec to get data directory
[reset] No etcd config found. Assuming external etcd
[reset] Please, manually reset etcd to prevent further issues
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /var/lib/dockershim /var/run/kubernetes /var/lib/cni]
The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
​
The reset process does not reset or clean up iptables rules or IPVS tables.
If you wish to reset iptables, you must do so manually by using the "iptables" command.
​
If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
to reset your system's IPVS tables.
​
The reset process does not clean your kubeconfig files and you must remove them manually.
Please, check the contents of the $HOME/.kube/config file.
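As the output says, a few things must be cleaned up by hand after a reset; a sketch for the node being reset:

rm -rf /etc/cni/net.d                    # CNI configuration
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X    # iptables rules
ipvsadm --clear                          # IPVS tables (kube-proxy runs in ipvs mode here)
rm -rf $HOME/.kube/config                # stale kubeconfig, if one was created on this node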