Installing a Highly Available Kubernetes 1.18.9 Cluster with kubeadm


Virtual Machine Environment

IP             OS version         Role
10.211.55.58   CentOS 7.8.2003    k8s-m1
10.211.55.59   CentOS 7.8.2003    k8s-m2
10.211.55.60   CentOS 7.8.2003    k8s-m3
10.211.55.61   CentOS 7.8.2003    k8s-w1

Avoiding Unnecessary Trouble

  • Proxy

  The host machine runs ShadowsocksX and the VMs route their traffic through the host's proxy; without it the Kubernetes installation will not succeed (offline packages are provided below). Enable the proxy as follows:

# System-wide proxy
$ cat >> /etc/profile << EOF

export http_proxy=http://192.168.1.234:1087
export https_proxy=http://192.168.1.234:1087
EOF

$ source /etc/profile

# Proxy configuration for Docker image pulls
$ mkdir -p /lib/systemd/system/docker.service.d
$ cat >> /lib/systemd/system/docker.service.d/socks5-proxy.conf << EOF
[Service]
Environment="ALL_PROXY=socks5://192.168.1.234:1086"
EOF
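
The drop-in only takes effect after systemd reloads its unit files and Docker is restarted:

$ systemctl daemon-reload && systemctl restart docker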
  • Character set / locale

  See this blog post.

  • Upgrade the kernel

  See this blog post.

  • Install common tools
$ yum -y install wget vim net-tools telnet bind-utils
  • All dependencies for this post:

  Installation packages

  • Other setup
### Run the same operations on all machines
# The MAC address and product_uuid must be unique on each machine
$ cat /sys/class/net/eth0/address
$ cat /sys/class/dmi/id/product_uuid

# Edit /etc/hosts
$ cat >> /etc/hosts << EOF

10.211.55.58 k8s-m1
10.211.55.59 k8s-m2
10.211.55.60 k8s-m3
10.211.55.61 k8s-w1
EOF

# Disable the firewall and SELinux
$ systemctl disable firewalld && systemctl stop firewalld && setenforce 0
$ sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

# Turn off swap. Since 1.8, Kubernetes requires swap to be disabled; otherwise the kubelet will not start.
# swappiness controls how eagerly the kernel uses swap: 0 means use physical memory as much as possible before touching swap, while 100 means swap aggressively and move data from memory into swap promptly. The Linux default is 60.
$ swapoff -a
$ sed -i.bak '/swap/s/^/#/' /etc/fstab
# /dev/mapper/centos-swap swap                    swap    defaults        0 0

# Load the configured kernel modules at boot
$ cat > /etc/rc.sysinit << "EOF"
#!/bin/bash
for file in /etc/sysconfig/modules/*.modules ; do [ -x $file ] && $file; done
EOF

# The flannel network requires the br_netfilter module
$ cat > /etc/sysconfig/modules/br_netfilter.modules << EOF
modprobe br_netfilter
EOF

$ chmod 755 /etc/sysconfig/modules/br_netfilter.modules \
  && bash /etc/sysconfig/modules/br_netfilter.modules

# Prefer physical memory over swap, and enable bridged traffic filtering and IP forwarding
$ cat <<EOF >  /etc/sysctl.d/k8s.conf
vm.swappiness                       = 0
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-iptables  = 1
EOF

$ sysctl --system

If you prefer not to disable swap, start kubelet with --fail-swap-on=false. The flag can be set through KUBELET_EXTRA_ARGS in /etc/sysconfig/kubelet, as sketched below.
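
A minimal sketch of that file, if you really do want to keep swap enabled (not recommended), followed by a kubelet restart:

# /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS=--fail-swap-on=false

$ systemctl restart kubelet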

  • Prerequisites for enabling ipvs in kube-proxy

  ipvs is already part of the mainline kernel, so before kube-proxy can run in ipvs mode the following kernel modules must be loaded:

Module               Note
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4    renamed to nf_conntrack since kernel 4.19.1

$ cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
$ chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

  You also need to make sure the ipset package is installed on every node (yum -y install ipset). To make it easier to inspect the ipvs proxy rules, it is worth installing the management tool ipvsadm as well (yum -y install ipvsadm). If these prerequisites are not met, kube-proxy will fall back to iptables mode even when its configuration enables ipvs mode.
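
A quick sketch of installing both packages and checking the loaded modules (ipvsadm only shows kube-proxy's virtual servers once ipvs mode is actually in use):

$ yum -y install ipset ipvsadm
$ lsmod | grep -e ip_vs -e nf_conntrack
$ ipvsadm -Ln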

Install Docker

  • Remove old versions
$ sudo yum remove docker \
                  docker-client \
                  docker-client-latest \
                  docker-common \
                  docker-latest \
                  docker-latest-logrotate \
                  docker-logrotate \
                  docker-engine
  • Add the stable yum repository
$ sudo yum install -y yum-utils \
  device-mapper-persistent-data \
  lvm2

$ sudo yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
  • Install
$ yum install -y \
  containerd.io-1.2.13 \
  docker-ce-19.03.11 \
  docker-ce-cli-19.03.11
  • Start
$ systemctl enable docker && systemctl start docker

Change the Docker cgroup driver to systemd

  The CRI installation guide points out that, for Linux distributions that use systemd as the init system, using systemd as Docker's cgroup driver makes nodes more stable under resource pressure, so change Docker's cgroup driver to systemd on every node.

  • Configure

$ mkdir -p /etc/docker
# For image pull acceleration in mainland China, add "registry-mirrors": ["https://tpzm7vxj.mirror.aliyuncs.com"] to the JSON below (JSON itself does not allow comments).
$ cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

$ systemctl daemon-reload && systemctl restart docker
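
To confirm that the new cgroup driver is in effect:

$ docker info | grep -i 'cgroup driver'
 Cgroup Driver: systemd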

Install HAProxy

HAProxy is installed mainly to load-balance access between cluster members: Master <-> Master and Master <-> Worker.

# Install
$ yum -y install haproxy-1.5.18

$ cat /etc/haproxy/haproxy.cfg
defaults
    mode tcp
    option                  dontlognull
    option http-server-close
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

frontend kube-api-https_frontend
  bind 0.0.0.0:8443
  mode tcp
  default_backend kube-api-https_backend

# Load-balance across the Master nodes via the local HAProxy
backend kube-api-https_backend
  balance roundrobin
  mode tcp
  stick-table type ip size 200k expire 30m
  stick on src
  server k8s-m1 10.211.55.58:6443 maxconn 1024  weight 3 check inter 1500 rise 2 fall 3
  server k8s-m2 10.211.55.59:6443 maxconn 1024  weight 3 check inter 1500 rise 2 fall 3
  server k8s-m3 10.211.55.60:6443 maxconn 1024  weight 3 check inter 1500 rise 2 fall 3

$ systemctl enable haproxy && systemctl start haproxy
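
The /etc/hosts entries added later point the api-server name at each machine's own internal IP, which implies HAProxy is expected to run locally, with this same configuration, on every node that uses that alias. A quick check that the frontend is listening:

$ systemctl status haproxy --no-pager | grep Active
$ ss -lntp | grep 8443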

Deploying Kubernetes with kubeadm

Install kubeadm and kubelet

  • Add the official yum repository:
$ cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF


$ yum install -y kubelet-1.18.9 kubeadm-1.18.9 kubectl-1.18.9

  Once the installation completes, enable and start kubelet:

$ systemctl enable kubelet.service && systemctl start kubelet

kubelet will now restart every few seconds: it is stuck in a crash loop waiting for instructions from kubeadm.

  Running kubelet --help shows that many of its flags are already DEPRECATED. The recommended approach is to point kubelet at a configuration file with --config and put those settings there instead; see the reference. A hypothetical excerpt of such a file is sketched below.
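
A hypothetical excerpt of such a --config file (the KubeletConfiguration API, kubelet.config.k8s.io/v1beta1); kubeadm later generates /var/lib/kubelet/config.yaml with content along these lines:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
failSwapOn: true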

Initialize Master 1 with kubeadm

  • Add entries to /etc/hosts
$ cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

# The api-server alias points at the current machine's own internal IP; if this machine's internal IP is 10.211.55.59, put api-server on the same line as k8s-m2
10.211.55.58 k8s-m1 api-server
10.211.55.59 k8s-m2
10.211.55.60 k8s-m3
10.211.55.61 k8s-w1
  • Exchange SSH keys between the Master machines
# Master 1 is used as the example here
$ ssh-keygen -t rsa -b 4096 -C 'k8s-m1'
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:Y/zgVDPFPAQ3VzunjANJhixCusrG3JI+mdr4jqGtvRc k8s-m1
The key's randomart image is:
+---[RSA 4096]----+
|   ..  . .+== ...|
|   .. . oo ++o  .|
|  .  . .  *  . o.|
|   .   . . + o .o|
|  .     S   o o  |
|+.o E  + +   .   |
|oBo. .  . .      |
|+@. .            |
|B=Oo             |
+----[SHA256]-----+

$ ssh-copy-id -i root@k8s-m2
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'k8s-m2 (10.211.55.59)' can't be established.
ECDSA key fingerprint is SHA256:iLgFlxdAWV28zPtpjO0FUk371pMrHuClWkZBtfV0qGQ.
ECDSA key fingerprint is MD5:c8:3c:69:7c:ae:4c:4d:d3:18:b3:08:5e:37:a8:39:5e.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@k8s-m2's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@k8s-m2'"
and check to make sure that only the key(s) you wanted were added.
  • Master 1 init configuration file
$ cat kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.9
apiServer:
  certSANs:
  - k8s-m1
  - k8s-m2
  - k8s-m3
  - 10.211.55.58
  - 10.211.55.59
  - 10.211.55.50
  - api-server
# Set to the HAProxy load-balanced endpoint and port
controlPlaneEndpoint: "api-server:8443"
networking:
  podSubnet: "10.244.0.0/16"
  1. kubernetesVersion: the Kubernetes version to install
  2. controlPlaneEndpoint: the control-plane endpoint, i.e. the address through which the api-server is reached
  3. podSubnet: the Pod network CIDR
  • Initialize (remember to turn the proxy off before running this)
$ kubeadm init --config=kubeadm-config.yaml
W0921 18:15:07.832615    6414 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.9
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-m1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local api-server k8s-m1 k8s-m2 k8s-m3 api-server] and IPs [10.96.0.1 10.211.55.58 10.211.55.58 10.211.55.59 10.211.55.50]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-m1 localhost] and IPs [10.211.55.58 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-m1 localhost] and IPs [10.211.55.58 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0921 18:15:11.411263    6414 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0921 18:15:11.411966    6414 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 20.013651 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-m1 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-m1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: im447v.6lzaa4qtt4vp0i9a
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join api-server:8443 --token im447v.6lzaa4qtt4vp0i9a \
    --discovery-token-ca-cert-hash sha256:a5fbb3faf6d72c24236b781ddafa40f9fbd296c56da71c02ffbfac2388b05bc2 \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join api-server:8443 --token im447v.6lzaa4qtt4vp0i9a \
    --discovery-token-ca-cert-hash sha256:a5fbb3faf6d72c24236b781ddafa40f9fbd296c56da71c02ffbfac2388b05bc2

The init step pulls the Docker images from Google's registries and will fail without a working proxy. Alternatively, load the images offline with docker load -i <package>; the download is linked at the top of this post.
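
If you need the offline route, a sketch of preparing the images on a machine that does have access and loading them on an offline node (the tarball name is just an example):

$ kubeadm config images list --kubernetes-version v1.18.9
$ kubeadm config images pull --kubernetes-version v1.18.9
$ docker save $(kubeadm config images list --kubernetes-version v1.18.9) -o k8s-v1.18.9-images.tar

# On the offline node:
$ docker load -i k8s-v1.18.9-images.tar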

  If initialization fails, you can reset and try again:

$ kubeadm reset
$ rm -rf $HOME/.kube/config
  • Key information from the output
# The control plane was initialized successfully
Your Kubernetes control-plane has initialized successfully!

# A non-root user needs to run the commands below to use the cluster. For root you can instead set an environment variable: `echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile`
To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

# A Pod network must be deployed before the cluster can function properly
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

# Joining additional Master nodes
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join api-server:8443 --token im447v.6lzaa4qtt4vp0i9a \
    --discovery-token-ca-cert-hash sha256:a5fbb3faf6d72c24236b781ddafa40f9fbd296c56da71c02ffbfac2388b05bc2 \
    --control-plane

# Joining Worker nodes
Then you can join any number of worker nodes by running the following on each as root:

kubeadm join api-server:8443 --token im447v.6lzaa4qtt4vp0i9a \
    --discovery-token-ca-cert-hash sha256:a5fbb3faf6d72c24236b781ddafa40f9fbd296c56da71c02ffbfac2388b05bc2

  1. [kubelet-start] generates the kubelet configuration file /var/lib/kubelet/config.yaml
  2. [certificates] generates the various certificates
  3. [kubeconfig] generates the kubeconfig files
  4. [bootstraptoken] generates the bootstrap token; record it, as it is needed later when adding nodes with kubeadm join (it can also be regenerated, see the sketch right after this list)
  5. Configure kubectl access to the cluster for your user
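
If the token from step 4 expires (24 hours by default) or gets lost, a fresh worker join command can be printed from a master at any time:

$ kubeadm token create --print-join-command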
  • Set an environment variable so the current root user can access the cluster
$ echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
$ source ~/.bash_profile

  A non-root user can use the following commands instead:

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

  At this point the cluster is not yet healthy; a network plugin still needs to be installed.

# The Unhealthy statuses below are explained here (a workaround is sketched after this output): https://github.com/kubernetes/kubeadm/issues/2279
$ kubectl get cs
NAME                 STATUS      MESSAGE                                                                                     ERROR
scheduler            Unhealthy   Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused
controller-manager   Unhealthy   Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: connect: connection refused
etcd-0               Healthy     {"health":"true"}

# The node is NotReady until a network plugin is installed
$ kubectl get nodes
NAME     STATUS     ROLES    AGE   VERSION
k8s-m1   NotReady   master   14m   v1.18.9
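
The Unhealthy scheduler and controller-manager entries are largely cosmetic: kubectl get cs probes the insecure ports 10251/10252, which the 1.18 static pod manifests disable with --port=0 (see the issue linked above). If you want those checks to report Healthy, one workaround is to drop that flag from the manifests on the master; the kubelet re-creates the static pods automatically:

$ sed -i '/--port=0/d' /etc/kubernetes/manifests/kube-scheduler.yaml
$ sed -i '/--port=0/d' /etc/kubernetes/manifests/kube-controller-manager.yaml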

Install the Network Plugin - Flannel

  Apply the Flannel manifest:

# https://github.com/coreos/flannel/raw/master/Documentation/kube-flannel.yml
$ kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

✨ If a node has more than one network interface, see the related issue: you currently need to pass --iface in kube-flannel.yml to name the host's internal-network interface, otherwise DNS resolution may fail. Download kube-flannel.yml locally and add --iface=<iface-name> to the flanneld startup arguments:

......
containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=eth1
......

  Check the node and Pod status again and make sure everything is Ready/Running:

$ kubectl get nodes -o wide
NAME     STATUS   ROLES    AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
k8s-m1   Ready    master   46m   v1.18.9   10.211.55.58   <none>        CentOS Linux 7 (Core)   4.4.236-1.el7.elrepo.x86_64   docker://19.3.11

$ kubectl get pods -o wide -n kube-system
NAME                             READY   STATUS    RESTARTS   AGE   IP             NODE     NOMINATED NODE   READINESS GATES
coredns-66bff467f8-cwb2d         1/1     Running   0          46m   10.244.0.3     k8s-m1   <none>           <none>
coredns-66bff467f8-qtf2w         1/1     Running   0          46m   10.244.0.2     k8s-m1   <none>           <none>
etcd-k8s-m1                      1/1     Running   2          46m   10.211.55.58   k8s-m1   <none>           <none>
kube-apiserver-k8s-m1            1/1     Running   2          46m   10.211.55.58   k8s-m1   <none>           <none>
kube-controller-manager-k8s-m1   1/1     Running   2          46m   10.211.55.58   k8s-m1   <none>           <none>
kube-flannel-ds-86pbq            1/1     Running   0          29m   10.211.55.58   k8s-m1   <none>           <none>
kube-proxy-gjqqn                 1/1     Running   2          46m   10.211.55.58   k8s-m1   <none>           <none>
kube-scheduler-k8s-m1            1/1     Running   2          46m   10.211.55.58   k8s-m1   <none>           <none>

Let Master Nodes Take Workloads

  In a cluster initialized with kubeadm, Pods are not scheduled onto Master nodes for security reasons, i.e. the Masters carry no workloads. This is because each master node is tainted with node-role.kubernetes.io/master:NoSchedule:

$ kubectl describe node k8s-m1 | grep Taint
Taints:             node-role.kubernetes.io/master:NoSchedule

  If you want a Master node to take workloads, simply remove the taint:

$ kubectl taint nodes k8s-m1 node-role.kubernetes.io/master-
node/k8s-m1 untainted

Test DNS

$ kubectl run curl --image=radial/busyboxplus:curl -it
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
If you don't see a command prompt, try pressing enter.

$ nslookup kubernetes.default
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
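
Since the run generator used above is deprecated, an equivalent one-shot check (a bare pod that is cleaned up afterwards) could look like this:

$ kubectl run dns-test --rm -it --restart=Never --image=radial/busyboxplus:curl -- nslookup kubernetes.default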

Join the Other Master Nodes to the Cluster

  • Copy the certificates from Master 1 to the other Masters
$ cat cert-main-master.sh
USER=root
CONTROL_PLANE_IPS="k8s-m2 k8s-m3"
for host in ${CONTROL_PLANE_IPS}; do
    scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:
    scp /etc/kubernetes/pki/ca.key "${USER}"@$host:
    scp /etc/kubernetes/pki/sa.key "${USER}"@$host:
    scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:
    scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:
    scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:
    scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:etcd-ca.crt
    scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:etcd-ca.key
done

$ ./cert-main-master.sh
ca.crt                                                                                                                                                                                                    100% 1025     1.4MB/s   00:00
ca.key                                                                                                                                                                                                    100% 1679     2.0MB/s   00:00
sa.key                                                                                                                                                                                                    100% 1679     2.2MB/s   00:00
sa.pub                                                                                                                                                                                                    100%  451   659.6KB/s   00:00
front-proxy-ca.crt                                                                                                                                                                                        100% 1038     1.5MB/s   00:00
front-proxy-ca.key                                                                                                                                                                                        100% 1679     2.4MB/s   00:00
ca.crt                                                                                                                                                                                                    100% 1017     1.3MB/s   00:00
ca.key                                                                                                                                                                                                    100% 1675     2.2MB/s   00:00
ca.crt                                                                                                                                                                                                    100% 1025     1.6MB/s   00:00
ca.key                                                                                                                                                                                                    100% 1679     2.0MB/s   00:00
sa.key                                                                                                                                                                                                    100% 1679     2.1MB/s   00:00
sa.pub                                                                                                                                                                                                    100%  451   694.4KB/s   00:00
front-proxy-ca.crt                                                                                                                                                                                        100% 1038     1.5MB/s   00:00
front-proxy-ca.key                                                                                                                                                                                        100% 1679     2.0MB/s   00:00
ca.crt                                                                                                                                                                                                    100% 1017     1.4MB/s   00:00
ca.key                                                                                                                                                                                                    100% 1675     1.7MB/s   00:00
  • On the other Masters, move the certificates into the expected directories (Master 2 is used as the example below)
$ cat cert-other-master.sh
USER=root
mkdir -p /etc/kubernetes/pki/etcd
mv /${USER}/ca.crt /etc/kubernetes/pki/
mv /${USER}/ca.key /etc/kubernetes/pki/
mv /${USER}/sa.pub /etc/kubernetes/pki/
mv /${USER}/sa.key /etc/kubernetes/pki/
mv /${USER}/front-proxy-ca.crt /etc/kubernetes/pki/
mv /${USER}/front-proxy-ca.key /etc/kubernetes/pki/
mv /${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
mv /${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key

$ ./cert-other-master.sh
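
As an alternative to copying the certificates by hand, kubeadm can distribute them through the cluster itself; a sketch, run from Master 1 (the token, hash and printed certificate key below are placeholders to be filled in from your own cluster):

$ kubeadm init phase upload-certs --upload-certs
$ kubeadm join api-server:8443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane --certificate-key <certificate-key>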
  • Join the cluster
$ kubeadm join api-server:8443 --token im447v.6lzaa4qtt4vp0i9a --discovery-token-ca-cert-hash sha256:a5fbb3faf6d72c24236b781ddafa40f9fbd296c56da71c02ffbfac2388b05bc2 --control-plane
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-m2 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local api-server k8s-m1 k8s-m2 k8s-m3 api-server] and IPs [10.96.0.1 10.211.55.59 10.211.55.58 10.211.55.59 10.211.55.50]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-m2 localhost] and IPs [10.211.55.59 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-m2 localhost] and IPs [10.211.55.59 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
W0921 19:17:18.165293    6789 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0921 19:17:18.171557    6789 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0921 19:17:18.172419    6789 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Creating static Pod manifest for "etcd"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
{"level":"warn","ts":"2020-09-21T19:17:30.996+0800","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"passthrough:///https://10.211.55.59:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node k8s-m2 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-m2 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.
$ kubectl get nodes
NAME     STATUS     ROLES    AGE     VERSION
k8s-m1   Ready      master   64m     v1.18.9
k8s-m2   NotReady   master   2m52s   v1.18.9
k8s-m3   NotReady   master   53s     v1.18.9

$ kubectl get pods -n kube-system -o wide
NAMESPACE     NAME                             READY   STATUS     RESTARTS   AGE     IP             NODE     NOMINATED NODE   READINESS GATES
default       curl                             1/1     Running    1          13m     10.244.0.4     k8s-m1   <none>           <none>
kube-system   coredns-66bff467f8-cwb2d         1/1     Running    0          67m     10.244.0.3     k8s-m1   <none>           <none>
kube-system   coredns-66bff467f8-qtf2w         1/1     Running    0          67m     10.244.0.2     k8s-m1   <none>           <none>
kube-system   etcd-k8s-m1                      1/1     Running    2          67m     10.211.55.58   k8s-m1   <none>           <none>
kube-system   etcd-k8s-m2                      1/1     Running    0          5m33s   10.211.55.59   k8s-m2   <none>           <none>
kube-system   etcd-k8s-m3                      1/1     Running    0          3m36s   10.211.55.60   k8s-m3   <none>           <none>
kube-system   kube-apiserver-k8s-m1            1/1     Running    2          67m     10.211.55.58   k8s-m1   <none>           <none>
kube-system   kube-apiserver-k8s-m2            1/1     Running    0          5m34s   10.211.55.59   k8s-m2   <none>           <none>
kube-system   kube-apiserver-k8s-m3            1/1     Running    0          3m38s   10.211.55.60   k8s-m3   <none>           <none>
kube-system   kube-controller-manager-k8s-m1   1/1     Running    3          67m     10.211.55.58   k8s-m1   <none>           <none>
kube-system   kube-controller-manager-k8s-m2   1/1     Running    0          5m33s   10.211.55.59   k8s-m2   <none>           <none>
kube-system   kube-controller-manager-k8s-m3   1/1     Running    0          3m38s   10.211.55.60   k8s-m3   <none>           <none>
kube-system   kube-flannel-ds-86pbq            1/1     Running    0          50m     10.211.55.58   k8s-m1   <none>           <none>
kube-system   kube-flannel-ds-87q7w            1/1     Running    0          5m39s   10.211.55.59   k8s-m2   <none>           <none>
kube-system   kube-flannel-ds-b9mxc            0/1     Init:0/1   0          3m40s   10.211.55.60   k8s-m3   <none>           <none>
kube-system   kube-proxy-d9wps                 1/1     Running    0          5m39s   10.211.55.59   k8s-m2   <none>           <none>
kube-system   kube-proxy-fvbw7                 1/1     Running    0          3m40s   10.211.55.60   k8s-m3   <none>           <none>
kube-system   kube-proxy-gjqqn                 1/1     Running    2          67m     10.211.55.58   k8s-m1   <none>           <none>
kube-system   kube-scheduler-k8s-m1            1/1     Running    3          67m     10.211.55.58   k8s-m1   <none>           <none>
kube-system   kube-scheduler-k8s-m2            1/1     Running    0          5m38s   10.211.55.59   k8s-m2   <none>           <none>
kube-system   kube-scheduler-k8s-m3            1/1     Running    0          3m38s   10.211.55.60   k8s-m3   <none>           <none>

  The NotReady nodes turn Ready once their initialization finishes.

Join the Worker Node

$ kubeadm join api-server:8443 --token im447v.6lzaa4qtt4vp0i9a \
>     --discovery-token-ca-cert-hash sha256:a5fbb3faf6d72c24236b781ddafa40f9fbd296c56da71c02ffbfac2388b05bc2
W0921 19:22:11.296038    6506 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

# Check from any Master
$ kubectl get pods -n kube-system -o wide
NAME                             READY   STATUS     RESTARTS   AGE     IP             NODE     NOMINATED NODE   READINESS GATES
coredns-66bff467f8-cwb2d         1/1     Running    0          68m     10.244.0.3     k8s-m1   <none>           <none>
coredns-66bff467f8-qtf2w         1/1     Running    0          68m     10.244.0.2     k8s-m1   <none>           <none>
etcd-k8s-m1                      1/1     Running    2          68m     10.211.55.58   k8s-m1   <none>           <none>
etcd-k8s-m2                      1/1     Running    0          6m52s   10.211.55.59   k8s-m2   <none>           <none>
etcd-k8s-m3                      1/1     Running    0          4m55s   10.211.55.60   k8s-m3   <none>           <none>
kube-apiserver-k8s-m1            1/1     Running    2          68m     10.211.55.58   k8s-m1   <none>           <none>
kube-apiserver-k8s-m2            1/1     Running    0          6m53s   10.211.55.59   k8s-m2   <none>           <none>
kube-apiserver-k8s-m3            1/1     Running    0          4m57s   10.211.55.60   k8s-m3   <none>           <none>
kube-controller-manager-k8s-m1   1/1     Running    3          68m     10.211.55.58   k8s-m1   <none>           <none>
kube-controller-manager-k8s-m2   1/1     Running    0          6m52s   10.211.55.59   k8s-m2   <none>           <none>
kube-controller-manager-k8s-m3   1/1     Running    0          4m57s   10.211.55.60   k8s-m3   <none>           <none>
kube-flannel-ds-4rxn4            0/1     Init:0/1   0          2m4s    10.211.55.61   k8s-w1   <none>           <none>
kube-flannel-ds-86pbq            1/1     Running    0          51m     10.211.55.58   k8s-m1   <none>           <none>
kube-flannel-ds-87q7w            1/1     Running    0          6m58s   10.211.55.59   k8s-m2   <none>           <none>
kube-flannel-ds-b9mxc            0/1     Init:0/1   0          4m59s   10.211.55.60   k8s-m3   <none>           <none>
kube-proxy-d9wps                 1/1     Running    0          6m58s   10.211.55.59   k8s-m2   <none>           <none>
kube-proxy-fvbw7                 1/1     Running    0          4m59s   10.211.55.60   k8s-m3   <none>           <none>
kube-proxy-gjqqn                 1/1     Running    2          68m     10.211.55.58   k8s-m1   <none>           <none>
kube-proxy-m6bcp                 1/1     Running    0          2m4s    10.211.55.61   k8s-w1   <none>           <none>
kube-scheduler-k8s-m1            1/1     Running    3          68m     10.211.55.58   k8s-m1   <none>           <none>
kube-scheduler-k8s-m2            1/1     Running    0          6m57s   10.211.55.59   k8s-m2   <none>           <none>
kube-scheduler-k8s-m3            1/1     Running    0          4m57s   10.211.55.60   k8s-m3   <none>           <none>
  • Final state
$ kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
k8s-m1   Ready    master   79m   v1.18.9
k8s-m2   Ready    master   17m   v1.18.9
k8s-m3   Ready    master   15m   v1.18.9
k8s-w1   Ready    <none>   12m   v1.18.9

$ kubectl get pods -n kube-system -o wide
NAME                             READY   STATUS    RESTARTS   AGE   IP             NODE     NOMINATED NODE   READINESS GATES
coredns-66bff467f8-cwb2d         1/1     Running   0          78m   10.244.0.3     k8s-m1   <none>           <none>
coredns-66bff467f8-qtf2w         1/1     Running   0          78m   10.244.0.2     k8s-m1   <none>           <none>
etcd-k8s-m1                      1/1     Running   2          78m   10.211.55.58   k8s-m1   <none>           <none>
etcd-k8s-m2                      1/1     Running   0          16m   10.211.55.59   k8s-m2   <none>           <none>
etcd-k8s-m3                      1/1     Running   0          14m   10.211.55.60   k8s-m3   <none>           <none>
kube-apiserver-k8s-m1            1/1     Running   2          78m   10.211.55.58   k8s-m1   <none>           <none>
kube-apiserver-k8s-m2            1/1     Running   0          16m   10.211.55.59   k8s-m2   <none>           <none>
kube-apiserver-k8s-m3            1/1     Running   0          14m   10.211.55.60   k8s-m3   <none>           <none>
kube-controller-manager-k8s-m1   1/1     Running   3          78m   10.211.55.58   k8s-m1   <none>           <none>
kube-controller-manager-k8s-m2   1/1     Running   0          16m   10.211.55.59   k8s-m2   <none>           <none>
kube-controller-manager-k8s-m3   1/1     Running   0          14m   10.211.55.60   k8s-m3   <none>           <none>
kube-flannel-ds-4rxn4            1/1     Running   0          11m   10.211.55.61   k8s-w1   <none>           <none>
kube-flannel-ds-86pbq            1/1     Running   0          61m   10.211.55.58   k8s-m1   <none>           <none>
kube-flannel-ds-87q7w            1/1     Running   0          16m   10.211.55.59   k8s-m2   <none>           <none>
kube-flannel-ds-b9mxc            1/1     Running   0          14m   10.211.55.60   k8s-m3   <none>           <none>
kube-proxy-d9wps                 1/1     Running   0          16m   10.211.55.59   k8s-m2   <none>           <none>
kube-proxy-fvbw7                 1/1     Running   0          14m   10.211.55.60   k8s-m3   <none>           <none>
kube-proxy-gjqqn                 1/1     Running   2          78m   10.211.55.58   k8s-m1   <none>           <none>
kube-proxy-m6bcp                 1/1     Running   0          11m   10.211.55.61   k8s-w1   <none>           <none>
kube-scheduler-k8s-m1            1/1     Running   3          78m   10.211.55.58   k8s-m1   <none>           <none>
kube-scheduler-k8s-m2            1/1     Running   0          16m   10.211.55.59   k8s-m2   <none>           <none>
kube-scheduler-k8s-m3            1/1     Running   0          14m   10.211.55.60   k8s-m3   <none>           <none>

Removing a Node

  • List the nodes
$ kubectl get node
  • Cordon the node (mark it unschedulable). The node is still in a normal working state, so new resources could otherwise still be scheduled onto it; mark it unschedulable first:
$ kubectl cordon $node_name
  • Drain the node, moving its workloads elsewhere. The cluster no longer schedules new resources onto it, but existing workloads are still running there, so evict them to other nodes:
$ kubectl drain $node_name --delete-local-data --ignore-daemonsets
  • Delete the node. Once nothing is scheduled on it any more, it can be removed from the cluster (see the worked example below):
$ kubectl delete node $node_name
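
For example, removing the worker k8s-w1 from this cluster would look like this (running kubeadm reset on the node afterwards cleans up its local state):

$ kubectl cordon k8s-w1
$ kubectl drain k8s-w1 --delete-local-data --ignore-daemonsets
$ kubectl delete node k8s-w1

# On k8s-w1 itself:
$ kubeadm reset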

Tools

Command Completion

Enables tab completion for kubectl commands.

$ yum -y install bash-completion
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: my.mirrors.thegigabit.com
 * extras: my.mirrors.thegigabit.com
 * updates: mirror.titansi.com.my
Resolving Dependencies
---> Package bash-completion.noarch 1:2.1-8.el7 will be installed
--> Finished Dependency Resolution
...
Installed:
  bash-completion.noarch 1:2.1-8.el7

Complete!

$ source /etc/profile.d/bash_completion.sh
$ echo "source <(kubectl completion bash)" >> ~/.bash_profile
$ source ~/.bash_profile

Kuboard, a handy management UI

Usage details
