Adding a New Worker Node to an Existing Kubernetes Cluster


A three-node Kubernetes cluster (one control-plane/master node and two worker nodes) was set up previously, and a new worker node now needs to be added. To do so, configure passwordless SSH access between the master node and the new worker node, disable the swap partition on the new node, and enable the bridge-nf-call settings. Then configure the docker-ce and Kubernetes yum repositories, install kubeadm, kubectl, kubelet, and the other packages at the same version as the existing cluster, and finally run the kubeadm join command.
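
The passwordless SSH access mentioned above can be set up with ssh-keygen and ssh-copy-id (a sketch, run on the master node; it assumes the new node is reachable by the hostname c7u6s8 and that no key pair exists yet):

ssh-keygen -t rsa -b 4096
ssh-copy-id root@c7u6s8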

1. Preparing the New Worker Node Environment

The new node needs its yum repositories configured and the required packages installed, its swap partition disabled, and the bridge-nf-call settings enabled.

The role of each node is shown in the following table:

Hostname   IP Address       Role
c7u6s5     192.168.122.24   Control-Plane, Master
c7u6s6     192.168.122.25   Worker
c7u6s7     192.168.122.26   Worker
c7u6s8     192.168.122.27   New Worker

1.1. Configure the yum Repositories and Install the Matching Package Versions

The worker node needs docker as well as kubeadm, kubelet (kubectl is pulled in automatically as a dependency), kubernetes-cni, and related packages.
The docker yum repository is configured as follows:

[root@c7u6s8:~]# cat /etc/yum.repos.d/docker-ce.repo 
[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/$releasever/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/gpg

[docker-ce-stable-debuginfo]
name=Docker CE Stable - Debuginfo $basearch
baseurl=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/$releasever/debug-$basearch/stable
enabled=0
gpgcheck=1
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/gpg

[docker-ce-stable-source]
name=Docker CE Stable - Sources
baseurl=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/$releasever/source/stable
enabled=0
gpgcheck=1
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/gpg

[docker-ce-test]
name=Docker CE Test - $basearch
baseurl=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/$releasever/$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/gpg

[docker-ce-test-debuginfo]
name=Docker CE Test - Debuginfo $basearch
baseurl=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/$releasever/debug-$basearch/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/gpg

[docker-ce-test-source]
name=Docker CE Test - Sources
baseurl=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/$releasever/source/test
enabled=0
gpgcheck=1
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/gpg

[docker-ce-nightly]
name=Docker CE Nightly - $basearch
baseurl=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/$releasever/$basearch/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/gpg

[docker-ce-nightly-debuginfo]
name=Docker CE Nightly - Debuginfo $basearch
baseurl=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/$releasever/debug-$basearch/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/gpg

[docker-ce-nightly-source]
name=Docker CE Nightly - Sources
baseurl=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/$releasever/source/nightly
enabled=0
gpgcheck=1
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/docker-ce/linux/centos/gpg
[root@c7u6s8:~]# 

The configuration above uses the Tsinghua University software mirror.

The Kubernetes packages come from the Alibaba mirror site, configured as follows:

[root@c7u6s8:~]# cat /etc/yum.repos.d/kubernetes.repo 
[kubernetes]
name=Kubernetes
#baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
[root@c7u6s8:~]# 

With the yum repositories in place, run yum clean all; yum makecache to rebuild the local cache. After that, the required packages can be installed.
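
For example (a minimal sequence; yum repolist simply confirms that both new repositories are active):

yum clean all && yum makecache
yum repolist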

List the Kubernetes packages offered by the current yum repositories:

[root@c7u6s8:yum.repos.d]# yum list | egrep '^kube'
kubeadm.x86_64                              1.22.0-0                   kubernetes
kubectl.x86_64                              1.22.0-0                   kubernetes
kubelet.x86_64                              1.22.0-0                   kubernetes
kubernetes.x86_64                           1.5.2-0.7.git269f928.el7   extras   
kubernetes-client.x86_64                    1.5.2-0.7.git269f928.el7   extras   
kubernetes-cni.x86_64                       0.8.7-0                    kubernetes
kubernetes-master.x86_64                    1.5.2-0.7.git269f928.el7   extras   
kubernetes-node.x86_64                      1.5.2-0.7.git269f928.el7   extras   
[root@c7u6s8:yum.repos.d]# 

The output shows that the repository currently offers version 1.22.0 by default. Now check the package versions already deployed in the existing cluster:

[root@c7u6s5:ReplicaSet]# rpm -qa | egrep '^kube'
kubernetes-cni-0.8.7-0.x86_64
kubelet-1.21.3-0.x86_64
kubectl-1.21.3-0.x86_64
kubeadm-1.21.3-0.x86_64
[root@c7u6s5:ReplicaSet]# 

The existing cluster runs package version 1.21.3, so the version must be pinned explicitly during installation:

[root@c7u6s8:yum.repos.d]# yum install -y kubeadm-1.21.3 kubelet-1.21.3 kubectl-1.21.3
Loaded plugins: fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: mirrors.huaweicloud.com
 * extras: mirrors.huaweicloud.com
 * updates: mirrors.bupt.edu.cn
Resolving Dependencies
--> Running transaction check
---> Package kubeadm.x86_64 0:1.21.3-0 will be installed
--> Processing Dependency: kubernetes-cni >= 0.8.6 for package: kubeadm-1.21.3-0.x86_64
---> Package kubectl.x86_64 0:1.21.3-0 will be installed
---> Package kubelet.x86_64 0:1.21.3-0 will be installed
--> Running transaction check
---> Package kubernetes-cni.x86_64 0:0.8.7-0 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

==================================================================================================
 Package                    Arch               Version               Repository              Size
==================================================================================================
Installing:
 kubeadm                    x86_64             1.21.3-0              kubernetes             9.1 M
 kubectl                    x86_64             1.21.3-0              kubernetes             9.5 M
 kubelet                    x86_64             1.21.3-0              kubernetes              20 M
Installing for dependencies:
 kubernetes-cni             x86_64             0.8.7-0               kubernetes              19 M

Transaction Summary
==================================================================================================
Install  3 Packages (+1 Dependent package)

Total download size: 57 M
Installed size: 255 M
Downloading packages:
(1/4): 23f7e018d7380fc0c11f0a12b7fda8ced07b1c04c4ba1c5f5cd24cd4bdfb304d-ku | 9.1 MB  00:00:04     
(2/4): 7e38e980f058e3e43f121c2ba73d60156083d09be0acc2e5581372136ce11a1c-ku |  20 MB  00:00:02     
(3/4): db7cb5cb0b3f6875f54d10f02e625573988e3e91fd4fc5eef0b1876bb18604ad-ku |  19 MB  00:00:02     
(4/4): b04e5387f5522079ac30ee300657212246b14279e2ca4b58415c7bf1f8c8a8f5-ku | 9.5 MB  00:00:11     
--------------------------------------------------------------------------------------------------
Total                                                             4.9 MB/s |  57 MB  00:00:11     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : kubernetes-cni-0.8.7-0.x86_64                                                  1/4 
  Installing : kubelet-1.21.3-0.x86_64                                                        2/4 
  Installing : kubectl-1.21.3-0.x86_64                                                        3/4 
  Installing : kubeadm-1.21.3-0.x86_64                                                        4/4 
  Verifying  : kubectl-1.21.3-0.x86_64                                                        1/4 
  Verifying  : kubeadm-1.21.3-0.x86_64                                                        2/4 
  Verifying  : kubelet-1.21.3-0.x86_64                                                        3/4 
  Verifying  : kubernetes-cni-0.8.7-0.x86_64                                                  4/4 

Installed:
  kubeadm.x86_64 0:1.21.3-0       kubectl.x86_64 0:1.21.3-0       kubelet.x86_64 0:1.21.3-0      

Dependency Installed:
  kubernetes-cni.x86_64 0:0.8.7-0                                                                 

Complete!
[root@c7u6s8:yum.repos.d]# 
[root@c7u6s8:yum.repos.d]# yum install -y docker-ce docker-ce-cli

This completes the installation of kubeadm-1.21.3, kubelet-1.21.3, kubectl-1.21.3, and the docker-ce packages. Verify as follows:

[root@c7u6s8:~]# rpm -qa | egrep '^docker|^kube'
docker-ce-rootless-extras-20.10.8-3.el7.x86_64
kubernetes-cni-0.8.7-0.x86_64
kubelet-1.21.3-0.x86_64
docker-ce-20.10.8-3.el7.x86_64
kubeadm-1.21.3-0.x86_64
docker-scan-plugin-0.8.0-3.el7.x86_64
kubectl-1.21.3-0.x86_64
docker-ce-cli-20.10.8-3.el7.x86_64
[root@c7u6s8:~]#

After the packages are installed, start the docker service, but do not start the kubelet service yet.
Start the docker service as follows:

[root@c7u6s8:yum.repos.d]# systemctl enable --now docker
[root@c7u6s8:yum.repos.d]# systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2021-08-20 00:33:35 CST; 9h ago
     Docs: https://docs.docker.com
 Main PID: 3809 (dockerd)
    Tasks: 9
   Memory: 42.0M
   CGroup: /system.slice/docker.service
           └─3809 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

Aug 20 00:45:18 c7u6s8 dockerd[3809]: time="2021-08-20T00:45:18.542162934+08:00" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"h...g headers)"
Aug 20 00:45:43 c7u6s8 dockerd[3809]: time="2021-08-20T00:45:43.534478801+08:00" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": ...g headers)"
Aug 20 00:45:43 c7u6s8 dockerd[3809]: time="2021-08-20T00:45:43.534613502+08:00" level=info msg="Attempting next endpoint for pull after error: Get \"https://k...g headers)"
Aug 20 00:45:43 c7u6s8 dockerd[3809]: time="2021-08-20T00:45:43.541330005+08:00" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"h...g headers)"
Aug 20 00:45:45 c7u6s8 dockerd[3809]: time="2021-08-20T00:45:45.521463502+08:00" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": ...g headers)"
Aug 20 00:45:45 c7u6s8 dockerd[3809]: time="2021-08-20T00:45:45.521564890+08:00" level=info msg="Attempting next endpoint for pull after error: Get \"https://k...g headers)"
Aug 20 00:45:45 c7u6s8 dockerd[3809]: time="2021-08-20T00:45:45.527504682+08:00" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"h...g headers)"
Aug 20 00:46:09 c7u6s8 dockerd[3809]: time="2021-08-20T00:46:09.487046625+08:00" level=warning msg="Error getting v2 registry: Get \"https://k8s.gcr.io/v2/\": ...g headers)"
Aug 20 00:46:09 c7u6s8 dockerd[3809]: time="2021-08-20T00:46:09.487153865+08:00" level=info msg="Attempting next endpoint for pull after error: Get \"https://k...g headers)"
Aug 20 00:46:09 c7u6s8 dockerd[3809]: time="2021-08-20T00:46:09.494480392+08:00" level=error msg="Handler for POST /v1.41/images/create returned error: Get \"h...g headers)"
Hint: Some lines were ellipsized, use -l to show in full.
[root@c7u6s8:yum.repos.d]# 

If the kubelet service happens to be running, stop it with systemctl stop kubelet, and run systemctl enable kubelet so that it starts automatically at boot.
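
A minimal sketch of the two operations (systemctl enable --now is deliberately avoided here, since kubelet must stay stopped until kubeadm join starts it):

systemctl stop kubelet
systemctl enable kubelet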

Next, complete the remaining configuration on the new worker node.

1.2. Disable Swap and Enable the bridge-nf-call Settings

With the packages installed, the next step is to enable the bridge-nf-call netfilter settings. Check their current values:

[root@c7u6s8:yum.repos.d]# sysctl -a | egrep 'nf-call'
net.bridge.bridge-nf-call-arptables = 0
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
[root@c7u6s8:yum.repos.d]# 

All three settings default to 0. They can be changed at runtime with, for example, sysctl net.bridge.bridge-nf-call-arptables=1, but such a change is lost at the next reboot. Instead, set them in a configuration file and then load that file so the change also applies to the running system. The file is /lib/sysctl.d/00-system.conf; modify it as follows:

[root@c7u6s8:yum.repos.d]# vim /lib/sysctl.d/00-system.conf 
[root@c7u6s8:yum.repos.d]# cat /lib/sysctl.d/00-system.conf
# Kernel sysctl configuration file
#
# For binary values, 0 is disabled, 1 is enabled.  See sysctl(8) and
# sysctl.conf(5) for more details.

# Disable netfilter on bridges.
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1
[root@c7u6s8:yum.repos.d]# 

The file change alone only takes effect at the next boot. To apply it to the running system, execute:

[root@c7u6s8:yum.repos.d]# sysctl -p /lib/sysctl.d/00-system.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1
[root@c7u6s8:yum.repos.d]# 

The -p option tells sysctl to load settings from the specified file. Verify that the settings are now active:

[root@c7u6s8:yum.repos.d]# sysctl -a | egrep 'bridge-nf-call'
net.bridge.bridge-nf-call-arptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
[root@c7u6s8:yum.repos.d]# 

All three settings are now 1.
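
Note that these net.bridge.* keys only exist while the br_netfilter kernel module is loaded. If sysctl reports unknown keys, load the module and make it persistent first (a sketch; the file name under /etc/modules-load.d/ is arbitrary):

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf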

The ip_forward setting should also be enabled. Installing docker and starting its service normally takes care of this automatically, but it is worth checking just in case:

[root@c7u6s8:yum.repos.d]# sysctl -a | egrep 'ip_forward'
net.ipv4.ip_forward = 1
net.ipv4.ip_forward_use_pmtu = 0
[root@c7u6s8:yum.repos.d]# 

The output confirms that ip_forward is indeed enabled.
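
Had it been 0, it could be enabled and persisted in the same way as the bridge settings (a sketch; the file name is arbitrary):

echo 'net.ipv4.ip_forward = 1' > /etc/sysctl.d/99-ip-forward.conf
sysctl -p /etc/sysctl.d/99-ip-forward.conf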

Next, disable swap. Run swapoff -a to turn off all swap devices in the running system:

[root@c7u6s8:~]# swapoff -a
[root@c7u6s8:~]# 

This change is not persisted anywhere, so swap would be re-enabled at the next reboot. To prevent that, comment out the swap line in /etc/fstab:

[root@c7u6s8:~]# vim /etc/fstab 
[root@c7u6s8:~]# cat /etc/fstab
#
# /etc/fstab
# Created by anaconda on Fri Jul  9 02:17:29 2021
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/vg0-root    /                       xfs     defaults        0 0
UUID=d9a6975d-a76c-4f79-bed8-1a7d6dc66618 /boot                   ext4    defaults        1 2
#UUID=c0ca8ae3-ff82-45cd-9b2f-3fd760dda065 swap                    swap    defaults        0 0
/dev/sr1    /media/iso    iso9660    defaults,loop    0 0
[root@c7u6s8:~]# 

The line containing swap is now commented out.
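
If you prefer a non-interactive edit over vim, a sed one-liner can comment out the swap entry (a sketch; verify the pattern against your own /etc/fstab before running it with -i):

sed -ri '/^[^#].*\sswap\s/s/^/#/' /etc/fstab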

At this point, the preparation of the new worker node is essentially complete.

2. Generate a New Token on the Master Node

With the new worker node prepared, the next step is to assemble the join command. On the master node, use kubeadm token to inspect the join tokens. A token is kept for 24 hours by default; after that it expires and is deleted, which means the token from the initial deployment is long gone and a new one must be created.

On the control-plane (master) node, list the existing tokens and create a new one if none is left:

[root@c7u6s5:~]# kubeadm token list
[root@c7u6s5:~]# kubeadm token create --help

This command will create a bootstrap token for you.
You can specify the usages for this token, the "time to live" and an optional human friendly description.

The [token] is the actual token to write.
This should be a securely generated random token of the form "[a-z0-9]{6}.[a-z0-9]{16}".
If no [token] is given, kubeadm will generate a random token instead.

Usage:
  kubeadm token create [token]

Flags:
      --certificate-key string   When used together with '--print-join-command', print the full 'kubeadm join' flag needed to join the cluster as a control-plane. To create a new certificate key you must use 'kubeadm init phase upload-certs --upload-certs'.
      --config string            Path to a kubeadm configuration file.
      --description string       A human friendly description of how this token is used.
      --groups strings           Extra groups that this token will authenticate as when used for authentication. Must match "\\Asystem:bootstrappers:[a-z0-9:-]{0,255}[a-z0-9]\\z" (default [system:bootstrappers:kubeadm:default-node-token])
  -h, --help                     help for create
      --print-join-command       Instead of printing only the token, print the full 'kubeadm join' flag needed to join the cluster using the token.
      --ttl duration             The duration before the token is automatically deleted (e.g. 1s, 2m, 3h). If set to '0', the token will never expire (default 24h0m0s)
      --usages strings           Describes the ways in which this token can be used. You can pass --usages multiple times or provide a comma separated list of options. Valid options: [signing,authentication] (default [signing,authentication])

Global Flags:
      --add-dir-header           If true, adds the file directory to the header of the log messages
      --dry-run                  Whether to enable dry-run mode or not
      --kubeconfig string        The kubeconfig file to use when talking to the cluster. If the flag is not set, a set of standard locations can be searched for an existing kubeconfig file. (default "/etc/kubernetes/admin.conf")
      --log-file string          If non-empty, use this log file
      --log-file-max-size uint   Defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0, the maximum file size is unlimited. (default 1800)
      --one-output               If true, only write logs to their native severity level (vs also writing to each lower severity level)
      --rootfs string            [EXPERIMENTAL] The path to the 'real' host root filesystem.
      --skip-headers             If true, avoid header prefixes in the log messages
      --skip-log-headers         If true, avoid headers when opening log files
  -v, --v Level                  number for the log level verbosity
[root@c7u6s5:~]# 
[root@c7u6s5:~]# kubeadm token create --print-join-command
kubeadm join 192.168.122.24:6443 --token hj1sax.moy397qh7e6ba298 --discovery-token-ca-cert-hash sha256:accd7731cb8fa8061f4b6cf3996d81329bab29c610110a8d75bd130c112bf3ac 
[root@c7u6s5:~]# 

This generates the token for adding a new worker node. The command printed by kubeadm token create --print-join-command is exactly what the new worker needs to run to join the cluster.
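
The --ttl flag from the help output above can be combined with --print-join-command if the token should live longer than the default 24 hours, for example:

kubeadm token create --ttl 48h --print-join-command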

3. Join the New Worker Node to the Cluster

With the join command in hand, check the state of the kubelet service on the new node before running it. If kubelet is already running, it must be stopped first; otherwise kubeadm join reports that port 10250 is in use and the join fails, as shown below:

[root@c7u6s8:yum.repos.d]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Fri 2021-08-20 00:39:19 CST; 9h ago
     Docs: https://kubernetes.io/docs/
 Main PID: 4841 (kubelet)
    Tasks: 15
   Memory: 71.2M
   CGroup: /system.slice/kubelet.service
           └─4841 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/confi...
           
Aug 20 09:56:56 c7u6s8 kubelet[4841]: E0820 09:56:56.578085    4841 kubelet_node_status.go:470] "Error updating node status, will retry" err="error getting nod... not found"
Aug 20 09:56:56 c7u6s8 kubelet[4841]: W0820 09:56:56.578290    4841 reflector.go:441] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:66: watch of *v1.Pod en...ms received
Aug 20 09:56:56 c7u6s8 kubelet[4841]: E0820 09:56:56.583164    4841 kubelet_node_status.go:470] "Error updating node status, will retry" err="error getting nod... not found"
Aug 20 09:56:56 c7u6s8 kubelet[4841]: E0820 09:56:56.588517    4841 kubelet_node_status.go:470] "Error updating node status, will retry" err="error getting nod... not found"
Aug 20 09:56:56 c7u6s8 kubelet[4841]: E0820 09:56:56.588544    4841 kubelet_node_status.go:457] "Unable to update node status" err="update node status exceeds retry count"
Aug 20 09:56:56 c7u6s8 kubelet[4841]: I0820 09:56:56.864382    4841 cni.go:239] "Unable to update cni config" err="no networks found in /etc/cni/net.d"
Aug 20 09:56:58 c7u6s8 kubelet[4841]: E0820 09:56:58.487577    4841 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="n...de="c7u6s8"
Aug 20 09:57:00 c7u6s8 kubelet[4841]: E0820 09:57:00.371812    4841 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/ku.../manifests"
Aug 20 09:57:01 c7u6s8 kubelet[4841]: E0820 09:57:01.028179    4841 kubelet.go:2332] "Container runtime network not ready" networkReady="NetworkReady=false rea...nitialized"
Aug 20 09:57:01 c7u6s8 kubelet[4841]: I0820 09:57:01.864701    4841 cni.go:239] "Unable to update cni config" err="no networks found in /etc/cni/net.d"
Warning: kubelet.service changed on disk. Run 'systemctl daemon-reload' to reload units.
Hint: Some lines were ellipsized, use -l to show in full.

[root@c7u6s8:yum.repos.d]# 
[root@c7u6s8:yum.repos.d]# kubeadm join 192.168.122.24:6443 --token hj1sax.moy397qh7e6ba298 --discovery-token-ca-cert-hash sha256:accd7731cb8fa8061f4b6cf3996d81329bab29c610110a8d75bd130c112bf3ac 
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR Port-10250]: Port 10250 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher
[root@c7u6s8:yum.repos.d]# 

As the output shows, kubelet was running, so the join attempt failed with an error that port 10250 is already in use.

After stopping the kubelet service, retry the join:

[root@c7u6s8:yum.repos.d]# systemctl stop kubelet
Warning: kubelet.service changed on disk. Run 'systemctl daemon-reload' to reload units.
[root@c7u6s8:yum.repos.d]# systemctl daemon-reload
[root@c7u6s8:yum.repos.d]# systemctl stop kubelet
[root@c7u6s8:yum.repos.d]# kubeadm join 192.168.122.24:6443 --token hj1sax.moy397qh7e6ba298 --discovery-token-ca-cert-hash sha256:accd7731cb8fa8061f4b6cf3996d81329bab29c610110a8d75bd130c112bf3ac 
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

[root@c7u6s8:yum.repos.d]# 

The join command has completed. Check the kubelet service status:

[root@c7u6s8:~]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/kubelet.service.d
           └─10-kubeadm.conf
   Active: active (running) since Fri 2021-08-20 09:57:42 CST; 53min ago
     Docs: https://kubernetes.io/docs/
 Main PID: 30853 (kubelet)
    Tasks: 14
   Memory: 96.3M
   CGroup: /system.slice/kubelet.service
           └─30853 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/conf...

Aug 20 10:51:25 c7u6s8 kubelet[30853]: E0820 10:51:25.221889   30853 file_linux.go:60] "Unable to read config path" err="path does not exist, ignoring" path="/.../manifests"
Aug 20 10:51:26 c7u6s8 kubelet[30853]: E0820 10:51:26.222868   30853 file_linux.go:60] "Unable to read config path" err="path does not exist, ignoring" path="/.../manifests"
Aug 20 10:51:27 c7u6s8 kubelet[30853]: E0820 10:51:27.223981   30853 file_linux.go:60] "Unable to read config path" err="path does not exist, ignoring" path="/.../manifests"
Aug 20 10:51:28 c7u6s8 kubelet[30853]: E0820 10:51:28.225219   30853 file_linux.go:60] "Unable to read config path" err="path does not exist, ignoring" path="/.../manifests"
Aug 20 10:51:29 c7u6s8 kubelet[30853]: E0820 10:51:29.226075   30853 file_linux.go:60] "Unable to read config path" err="path does not exist, ignoring" path="/.../manifests"
Aug 20 10:51:30 c7u6s8 kubelet[30853]: E0820 10:51:30.226890   30853 file_linux.go:60] "Unable to read config path" err="path does not exist, ignoring" path="/.../manifests"
Aug 20 10:51:31 c7u6s8 kubelet[30853]: E0820 10:51:31.227268   30853 file_linux.go:60] "Unable to read config path" err="path does not exist, ignoring" path="/.../manifests"
Aug 20 10:51:32 c7u6s8 kubelet[30853]: E0820 10:51:32.227898   30853 file_linux.go:60] "Unable to read config path" err="path does not exist, ignoring" path="/.../manifests"
Aug 20 10:51:33 c7u6s8 kubelet[30853]: E0820 10:51:33.228518   30853 file_linux.go:60] "Unable to read config path" err="path does not exist, ignoring" path="/.../manifests"
Aug 20 10:51:34 c7u6s8 kubelet[30853]: E0820 10:51:34.229035   30853 file_linux.go:60] "Unable to read config path" err="path does not exist, ignoring" path="/.../manifests"
Hint: Some lines were ellipsized, use -l to show in full.
[root@c7u6s8:~]# 

The join has finished and the kubelet service is now running.

Check the node status on the master node:

[root@c7u6s5:~]# kubectl get nodes
NAME     STATUS     ROLES                  AGE   VERSION
c7u6s5   Ready      control-plane,master   16d   v1.21.3
c7u6s6   Ready      <none>                 16d   v1.21.3
c7u6s7   Ready      <none>                 16d   v1.21.3
c7u6s8   NotReady   <none>                 24m   v1.21.3
[root@c7u6s5:~]# 

The new node is still not Ready. The likely cause is that the required images cannot be pulled; describe the node for details:

[root@c7u6s5:~]# kubectl describe node/c7u6s8
Name:               c7u6s8
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=c7u6s8
                    kubernetes.io/os=linux
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Fri, 20 Aug 2021 09:57:49 +0800
Taints:             node.kubernetes.io/not-ready:NoExecute
                    node.kubernetes.io/not-ready:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  c7u6s8
  AcquireTime:     <unset>
  RenewTime:       Fri, 20 Aug 2021 10:23:21 +0800
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Fri, 20 Aug 2021 10:23:05 +0800   Fri, 20 Aug 2021 09:57:49 +0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Fri, 20 Aug 2021 10:23:05 +0800   Fri, 20 Aug 2021 09:57:49 +0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Fri, 20 Aug 2021 10:23:05 +0800   Fri, 20 Aug 2021 09:57:49 +0800   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Fri, 20 Aug 2021 10:23:05 +0800   Fri, 20 Aug 2021 09:57:49 +0800   KubeletNotReady              container runtime network not ready: NetworkReady
=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
  InternalIP:  192.168.122.27
  Hostname:    c7u6s8
Capacity:
  cpu:                3
  ephemeral-storage:  39298308Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             2046780Ki
  pods:               110
Allocatable:
  cpu:                3
  ephemeral-storage:  36217320593
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             1944380Ki
  pods:               110
System Info:
  Machine ID:                 7dad32e0b1c648b1b5b845074144df25
  System UUID:                7DAD32E0-B1C6-48B1-B5B8-45074144DF25
  Boot ID:                    104ea76f-dfb1-4ffd-9cdd-9f04622e8e62
  Kernel Version:             3.10.0-957.el7.x86_64
  OS Image:                   CentOS Linux 7 (Core)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://20.10.8
  Kubelet Version:            v1.21.3
  Kube-Proxy Version:         v1.21.3
PodCIDR:                      10.244.3.0/24
PodCIDRs:                     10.244.3.0/24
Non-terminated Pods:          (2 in total)
  Namespace                   Name                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
  ---------                   ----                 ------------  ----------  ---------------  -------------  ---
  calico-system               calico-node-4gz7s    0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
  kube-system                 kube-proxy-2gdw6     0 (0%)        0 (0%)      0 (0%)           0 (0%)         25m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests  Limits
  --------           --------  ------
  cpu                0 (0%)    0 (0%)
  memory             0 (0%)    0 (0%)
  ephemeral-storage  0 (0%)    0 (0%)
  hugepages-1Gi      0 (0%)    0 (0%)
  hugepages-2Mi      0 (0%)    0 (0%)
Events:
  Type    Reason                   Age                From     Message
  ----    ------                   ----               ----     -------
  Normal  Starting                 25m                kubelet  Starting kubelet.
  Normal  NodeAllocatableEnforced  25m                kubelet  Updated Node Allocatable limit across pods
  Normal  NodeHasSufficientMemory  25m (x2 over 25m)  kubelet  Node c7u6s8 status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    25m (x2 over 25m)  kubelet  Node c7u6s8 status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     25m (x2 over 25m)  kubelet  Node c7u6s8 status is now: NodeHasSufficientPID
[root@c7u6s5:~]# 

The output above indicates that the node's network plugin is not ready.

Check the images present on the new worker node c7u6s8:

[root@c7u6s8:yum.repos.d]# docker images
REPOSITORY   TAG       IMAGE ID   CREATED   SIZE
[root@c7u6s8:yum.repos.d]# docker images -a
REPOSITORY   TAG       IMAGE ID   CREATED   SIZE
[root@c7u6s8:yum.repos.d]# 

As the output shows, no images are present. To fix this, archive the images from an existing worker node and copy them to c7u6s8; here the images on the worker node c7u6s7 are used:

[root@c7u6s7:~]# docker images
REPOSITORY                                 TAG       IMAGE ID       CREATED        SIZE
calico/node                                v3.20.0   5ef66b403f4f   2 weeks ago    170MB
calico/pod2daemon-flexvol                  v3.20.0   5991877ebc11   2 weeks ago    21.7MB
calico/cni                                 v3.20.0   4945b742b8e6   2 weeks ago    146MB
calico/typha                               v3.20.0   593c2f7340d8   2 weeks ago    59.4MB
k8s.gcr.io/kube-apiserver                  v1.21.3   3d174f00aa39   5 weeks ago    126MB
k8s.gcr.io/kube-proxy                      v1.21.3   adb2816ea823   5 weeks ago    103MB
kubernetesui/dashboard                     v2.3.1    e1482a24335a   2 months ago   220MB
k8s.gcr.io/metrics-server/metrics-server   v0.5.0    1c655933b9c5   2 months ago   63.5MB
k8s.gcr.io/pause                           3.4.1     0f8457a4c2ec   7 months ago   683kB
mindnhand/kubia                            v1        402429bcb758   7 months ago   660MB
You have new mail in /var/spool/mail/root
[root@c7u6s7:~]# docker images | egrep -v 'kubia|dashboard'
REPOSITORY                                 TAG       IMAGE ID       CREATED        SIZE
calico/node                                v3.20.0   5ef66b403f4f   2 weeks ago    170MB
calico/pod2daemon-flexvol                  v3.20.0   5991877ebc11   2 weeks ago    21.7MB
calico/cni                                 v3.20.0   4945b742b8e6   2 weeks ago    146MB
calico/typha                               v3.20.0   593c2f7340d8   2 weeks ago    59.4MB
k8s.gcr.io/kube-apiserver                  v1.21.3   3d174f00aa39   5 weeks ago    126MB
k8s.gcr.io/kube-proxy                      v1.21.3   adb2816ea823   5 weeks ago    103MB
k8s.gcr.io/metrics-server/metrics-server   v0.5.0    1c655933b9c5   2 months ago   63.5MB
k8s.gcr.io/pause                           3.4.1     0f8457a4c2ec   7 months ago   683kB
[root@c7u6s7:~]# docker images | egrep -v 'kubia|dashboard' | gawk 'BEGIN{OFS=":"} {print $1,$2}'
REPOSITORY:TAG
calico/node:v3.20.0
calico/pod2daemon-flexvol:v3.20.0
calico/cni:v3.20.0
calico/typha:v3.20.0
k8s.gcr.io/kube-apiserver:v1.21.3
k8s.gcr.io/kube-proxy:v1.21.3
k8s.gcr.io/metrics-server/metrics-server:v0.5.0
k8s.gcr.io/pause:3.4.1
[root@c7u6s7:~]# docker images | egrep -v 'kubia|dashboard' | gawk 'BEGIN{OFS=":"} {if(NR>1) print $1,$2}'
calico/node:v3.20.0
calico/pod2daemon-flexvol:v3.20.0
calico/cni:v3.20.0
calico/typha:v3.20.0
k8s.gcr.io/kube-apiserver:v1.21.3
k8s.gcr.io/kube-proxy:v1.21.3
k8s.gcr.io/metrics-server/metrics-server:v0.5.0
k8s.gcr.io/pause:3.4.1
[root@c7u6s7:~]# 
[root@c7u6s7:~]# docker images | egrep -v 'kubia|dashboard' | gawk 'BEGIN{OFS=":"} {if(NR>1) print $1,$2}' | sed -re ':label;N;s/\n/ /;t label'
calico/node:v3.20.0 calico/pod2daemon-flexvol:v3.20.0 calico/cni:v3.20.0 calico/typha:v3.20.0 k8s.gcr.io/kube-apiserver:v1.21.3 k8s.gcr.io/kube-proxy:v1.21.3 k8s.gcr.io/metrics-server/metrics-server:v0.5.0 k8s.gcr.io/pause:3.4.1
[root@c7u6s7:~]# 
[root@c7u6s7:~]# dockerimages=`docker images | egrep -v 'kubia|dashboard' | gawk 'BEGIN{OFS=":"} {if(NR>1) print $1,$2}' | sed -re ':label;N;s/\n/ /;t label'`
[root@c7u6s7:~]# docker image save -o k8s_worker_images.tar $dockerimages
[root@c7u6s7:~]# du -sh k8s_worker_images.tar 
666M	k8s_worker_images.tar
[root@c7u6s7:~]# ls -lh k8s_worker_images.tar
-rw------- 1 root root 666M Aug 20 10:44 k8s_worker_images.tar
[root@c7u6s7:~]# 
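
As an aside, on recent Docker releases the gawk/sed pipeline above can be replaced by the --format flag of docker images, which prints one repository:tag pair per line (a sketch producing the same list):

dockerimages=$(docker images --format '{{.Repository}}:{{.Tag}}' | egrep -v 'kubia|dashboard' | tr '\n' ' ')
docker image save -o k8s_worker_images.tar $dockerimages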

The images on the worker node c7u6s7 are now archived. Next, copy the archive to c7u6s8:

[root@c7u6s7:~]# scp -o StrictHostKeyChecking=no k8s_worker_images.tar c7u6s8:~
Warning: Permanently added 'c7u6s8,192.168.122.27' (ECDSA) to the list of known hosts.
k8s_worker_images.tar                                    100%  665MB 199.9MB/s   00:03    
[root@c7u6s7:~]# 

Without the -o StrictHostKeyChecking=no option, scp prompts as follows:

[root@c7u6s7:~]# scp k8s_worker_images.tar c7u6s8:~
The authenticity of host 'c7u6s8 (192.168.122.27)' can't be established.
ECDSA key fingerprint is SHA256:LmIBHAiGsAf0TtVrBu0m7gL7NtJfaRjZ5ZHccPt3lq4.
ECDSA key fingerprint is MD5:00:3b:f6:a3:5f:16:2c:8d:e3:ef:9f:a4:34:52:51:a4.
Are you sure you want to continue connecting (yes/no)? yes^C

The prompt means that c7u6s8 has never communicated with c7u6s7 over SSH before, and asks whether to add its host key to the ~/.ssh/known_hosts file on c7u6s7. Without the option, scp asks interactively; with it, the key of c7u6s8 is added to ~/.ssh/known_hosts automatically and no prompt appears.
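
Alternatively, the host key can be collected ahead of time with ssh-keyscan, so that neither the interactive prompt nor the -o option is needed (a sketch, run on c7u6s7):

ssh-keyscan c7u6s8 >> ~/.ssh/known_hosts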

Next, load the images into docker on c7u6s8:

[root@c7u6s8:~]# ls -lh k8s_worker_images.tar 
-rw------- 1 root root 666M Aug 20 10:46 k8s_worker_images.tar
[root@c7u6s8:~]# 
[root@c7u6s8:~]# docker load -i k8s_worker_images.tar
8166bd0cc212: Loading layer [==================================================>]  13.82kB/13.82kB
11fadb7be88c: Loading layer [==================================================>]   2.55MB/2.55MB
24ef203d9945: Loading layer [==================================================>]  5.629MB/5.629MB
51daf30faa61: Loading layer [==================================================>]  5.629MB/5.629MB
a1872b327b11: Loading layer [==================================================>]   2.55MB/2.55MB
276223362284: Loading layer [==================================================>]  5.632kB/5.632kB
983a914ea8a0: Loading layer [==================================================>]  5.378MB/5.378MB
Loaded image: calico/pod2daemon-flexvol:v3.20.0
e11f5b02839b: Loading layer [==================================================>]  88.58kB/88.58kB
0228b1d1048b: Loading layer [==================================================>]  13.82kB/13.82kB
bf90ef53f235: Loading layer [==================================================>]  145.9MB/145.9MB
Loaded image: calico/cni:v3.20.0
c0dc5afd9d3e: Loading layer [==================================================>]    873kB/873kB
410741085cea: Loading layer [==================================================>]  99.84kB/99.84kB
fb824ec46f96: Loading layer [==================================================>]  58.46MB/58.46MB
a2306f25cef0: Loading layer [==================================================>]  3.584kB/3.584kB
db44ffcc288c: Loading layer [==================================================>]   2.56kB/2.56kB
Loaded image: calico/typha:v3.20.0
16679402dc20: Loading layer [==================================================>]  3.062MB/3.062MB
3d63edbd1075: Loading layer [==================================================>]   1.71MB/1.71MB
79365e8cbfcb: Loading layer [==================================================>]  122.1MB/122.1MB
Loaded image: k8s.gcr.io/kube-apiserver:v1.21.3
48b90c7688a2: Loading layer [==================================================>]  61.99MB/61.99MB
8fe09c1d10f0: Loading layer [==================================================>]  43.14MB/43.14MB
Loaded image: k8s.gcr.io/kube-proxy:v1.21.3
417cb9b79ade: Loading layer [==================================================>]  3.062MB/3.062MB
d8de84e4db30: Loading layer [==================================================>]   61.7MB/61.7MB
Loaded image: k8s.gcr.io/metrics-server/metrics-server:v0.5.0
915e8870f7d1: Loading layer [==================================================>]  684.5kB/684.5kB
Loaded image: k8s.gcr.io/pause:3.4.1
fb65ae368a61: Loading layer [==================================================>]  172.5MB/172.5MB
8f9549b40c18: Loading layer [==================================================>]  13.82kB/13.82kB
Loaded image: calico/node:v3.20.0
[root@c7u6s8:~]# 

[root@c7u6s8:~]# docker images
REPOSITORY                                 TAG       IMAGE ID       CREATED        SIZE
calico/node                                v3.20.0   5ef66b403f4f   2 weeks ago    170MB
calico/pod2daemon-flexvol                  v3.20.0   5991877ebc11   2 weeks ago    21.7MB
calico/cni                                 v3.20.0   4945b742b8e6   2 weeks ago    146MB
calico/typha                               v3.20.0   593c2f7340d8   2 weeks ago    59.4MB
k8s.gcr.io/kube-apiserver                  v1.21.3   3d174f00aa39   5 weeks ago    126MB
k8s.gcr.io/kube-proxy                      v1.21.3   adb2816ea823   5 weeks ago    103MB
k8s.gcr.io/metrics-server/metrics-server   v0.5.0    1c655933b9c5   2 months ago   63.5MB
k8s.gcr.io/pause                           3.4.1     0f8457a4c2ec   7 months ago   683kB
[root@c7u6s8:~]# 

The image import is complete. Now check the container status on the node:

[root@c7u6s8:~]# docker container ls 
CONTAINER ID   IMAGE                    COMMAND                  CREATED          STATUS          PORTS     NAMES
55113aedbb46   5ef66b403f4f             "start_runit"            20 seconds ago   Up 20 seconds             k8s_calico-node_calico-node-4gz7s_calico-system_3927f4a7-e00a-46d9-884b-9d90dec10fba_0
5c28f772693a   adb2816ea823             "/usr/local/bin/kube…"   21 seconds ago   Up 21 seconds             k8s_kube-proxy_kube-proxy-2gdw6_kube-system_137de7a0-368f-489f-a350-a89dc82f8df4_0
e985dae1b095   k8s.gcr.io/pause:3.4.1   "/pause"                 22 seconds ago   Up 21 seconds             k8s_POD_calico-node-4gz7s_calico-system_3927f4a7-e00a-46d9-884b-9d90dec10fba_0
1360fd248dfc   k8s.gcr.io/pause:3.4.1   "/pause"                 22 seconds ago   Up 21 seconds             k8s_POD_kube-proxy-2gdw6_kube-system_137de7a0-368f-489f-a350-a89dc82f8df4_0
[root@c7u6s8:~]# 

The output shows that the relevant containers are up, including the calico network plugin and the kube-proxy network proxy.

Back on the control-plane (master) node, check the node status again:

[root@c7u6s5:~]# kubectl get nodes
NAME     STATUS   ROLES                  AGE   VERSION
c7u6s5   Ready    control-plane,master   16d   v1.21.3
c7u6s6   Ready    <none>                 16d   v1.21.3
c7u6s7   Ready    <none>                 16d   v1.21.3
c7u6s8   Ready    <none>                 52m   v1.21.3
[root@c7u6s5:~]# 

The newly added worker node c7u6s8 is now Ready. The node addition is complete.

4. Verify the New Worker Node

Test that pods can run on the new node:

[root@c7u6s5:~]# kubectl get po -o wide
NAME                            READY   STATUS    RESTARTS   AGE     IP               NODE     NOMINATED NODE   READINESS GATES
kubia-deploy-5d9966957d-2bh5c   1/1     Running   6          3d21h   10.244.66.154    c7u6s6   <none>           <none>
kubia-deploy-5d9966957d-hwmtt   1/1     Running   0          20h     10.244.66.161    c7u6s6   <none>           <none>
kubia-deploy-5d9966957d-kgh7h   1/1     Running   6          3d21h   10.244.66.157    c7u6s6   <none>           <none>
kubia-deploy-5d9966957d-x6vn8   1/1     Running   6          3d21h   10.244.66.156    c7u6s6   <none>           <none>
kubia-rs-selector-5ml4d         1/1     Running   0          19h     10.244.227.166   c7u6s7   <none>           <none>
kubia-rs-selector-6s5bm         1/1     Running   0          19h     10.244.227.164   c7u6s7   <none>           <none>
kubia-rs-selector-9d2tk         1/1     Running   0          19h     10.244.227.165   c7u6s7   <none>           <none>
kubia-rs-selector-d72n2         1/1     Running   0          19h     10.244.227.163   c7u6s7   <none>           <none>
[root@c7u6s5:~]# kubectl scale rs/kubia-rs-selector --replicas=8
replicaset.apps/kubia-rs-selector scaled
[root@c7u6s5:~]# kubectl get po -o wide
NAME                            READY   STATUS              RESTARTS   AGE     IP               NODE     NOMINATED NODE   READINESS GATES
kubia-deploy-5d9966957d-2bh5c   1/1     Running             6          3d21h   10.244.66.154    c7u6s6   <none>           <none>
kubia-deploy-5d9966957d-hwmtt   1/1     Running             0          20h     10.244.66.161    c7u6s6   <none>           <none>
kubia-deploy-5d9966957d-kgh7h   1/1     Running             6          3d21h   10.244.66.157    c7u6s6   <none>           <none>
kubia-deploy-5d9966957d-x6vn8   1/1     Running             6          3d21h   10.244.66.156    c7u6s6   <none>           <none>
kubia-rs-selector-5ml4d         1/1     Running             0          19h     10.244.227.166   c7u6s7   <none>           <none>
kubia-rs-selector-6s5bm         1/1     Running             0          19h     10.244.227.164   c7u6s7   <none>           <none>
kubia-rs-selector-9d2tk         1/1     Running             0          19h     10.244.227.165   c7u6s7   <none>           <none>
kubia-rs-selector-cgcgb         0/1     ContainerCreating   0          6s      <none>           c7u6s8   <none>           <none>
kubia-rs-selector-d72n2         1/1     Running             0          19h     10.244.227.163   c7u6s7   <none>           <none>
kubia-rs-selector-rbzsx         0/1     ContainerCreating   0          6s      <none>           c7u6s8   <none>           <none>
kubia-rs-selector-ws45w         1/1     Running             0          6s      10.244.227.167   c7u6s7   <none>           <none>
kubia-rs-selector-wzwrg         0/1     ContainerCreating   0          6s      <none>           c7u6s8   <none>           <none>
[root@c7u6s5:~]# 

Wait for the image pulls to finish, then check the pod status again:

[root@c7u6s5:ReplicaSet]# kubectl get po -o wide
NAME                            READY   STATUS    RESTARTS   AGE     IP               NODE     NOMINATED NODE   READINESS GATES
kubia-deploy-5d9966957d-2bh5c   1/1     Running   6          3d21h   10.244.66.154    c7u6s6   <none>           <none>
kubia-deploy-5d9966957d-hwmtt   1/1     Running   0          21h     10.244.66.161    c7u6s6   <none>           <none>
kubia-deploy-5d9966957d-kgh7h   1/1     Running   6          3d22h   10.244.66.157    c7u6s6   <none>           <none>
kubia-deploy-5d9966957d-x6vn8   1/1     Running   6          3d22h   10.244.66.156    c7u6s6   <none>           <none>
kubia-rs-selector-5ml4d         1/1     Running   0          19h     10.244.227.166   c7u6s7   <none>           <none>
kubia-rs-selector-6s5bm         1/1     Running   0          19h     10.244.227.164   c7u6s7   <none>           <none>
kubia-rs-selector-9d2tk         1/1     Running   0          19h     10.244.227.165   c7u6s7   <none>           <none>
kubia-rs-selector-cgcgb         1/1     Running   0          9m23s   10.244.141.131   c7u6s8   <none>           <none>
kubia-rs-selector-d72n2         1/1     Running   0          19h     10.244.227.163   c7u6s7   <none>           <none>
kubia-rs-selector-rbzsx         1/1     Running   0          9m23s   10.244.141.129   c7u6s8   <none>           <none>
kubia-rs-selector-ws45w         1/1     Running   0          9m23s   10.244.227.167   c7u6s7   <none>           <none>
kubia-rs-selector-wzwrg         1/1     Running   0          9m23s   10.244.141.130   c7u6s8   <none>           <none>
[root@c7u6s5:ReplicaSet]# 

As the output shows, pods run normally on the newly added worker node c7u6s8.
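
Once the verification is done, the ReplicaSet can be scaled back to its original four replicas with the same command used above:

kubectl scale rs/kubia-rs-selector --replicas=4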

