Setting Up an Automated Deployment Environment with K8S (1): Installing Kubernetes


1. Environment Checks

At least two servers with 2 CPU cores and 4 GB of RAM each
The CPU must be x86 architecture
CentOS 7.8

[Note:] Run the environment checks on all nodes.

1.1 Identify the primary network interface: enp3s0

[root@nb1 /]# ip route show
default via 192.168.1.1 dev enp3s0 proto dhcp metric 100 
192.168.1.0/24 dev enp3s0 proto kernel scope link src 192.168.1.127 metric 100 
192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 

1.2 Check the primary interface's IP address: 192.168.1.127

[root@nb1 /]# ip address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 2c:4d:54:66:f5:93 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.127/24 brd 192.168.1.255 scope global noprefixroute dynamic enp3s0
       valid_lft 7100sec preferred_lft 7100sec
    inet6 fe80::7f66:1456:11db:5607/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
    link/ether 52:54:00:2f:85:bf brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
       valid_lft forever preferred_lft forever
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN group default qlen 1000
    link/ether 52:54:00:2f:85:bf brd ff:ff:ff:ff:ff:ff

1.3 Check CPU information

[root@nb001 ~]# cat /proc/cpuinfo | grep name | cut -f2 -d: | uniq -c   
      4  Intel(R) Xeon(R) Platinum 8369B CPU @ 2.70GHz

1.4 Check the CentOS version

[root@nb001 ~]# cat  /etc/redhat-release
CentOS Linux release 7.8.2003 (Core)

Or use this command instead:

[root@nb001 ~]#  lsb_release -a   
LSB Version:	:core-4.1-amd64:core-4.1-noarch
Distributor ID:	CentOS
Description:	CentOS Linux release 7.8.2003 (Core)
Release:	7.8.2003
Codename:	Core

1.5 Check memory information

total: total installed memory

[root@nb001 ~]# free -h
              total        used        free      shared  buff/cache   available
Mem:            15G        8.7G        184M         50M        6.4G        6.2G
Swap:            0B          0B          0B
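
If you prefer to run all of these checks in one pass on each node, the following sketch bundles them (an optional convenience, assuming a CentOS 7 host and a root shell; the thresholds mirror the 2-core / 4 GB requirement listed above):

#!/bin/bash
# quick pre-flight check for a candidate K8S node
ip route show | awk '/^default/ {print "primary interface:", $5}'
hostname -I | awk '{print "primary IP:", $1}'
echo "cpu cores: $(nproc)"                              # must be >= 2
cat /etc/redhat-release                                 # expect CentOS 7.x
free -m | awk '/^Mem:/ {print "memory (MB):", $2}'      # should be >= 4096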

2. Installing K8S

2.1 Set the registry mirror and install kubelet/kubeadm/kubectl

[Note:] Run on all nodes.

[root@nb1 /]# export REGISTRY_MIRROR=https://registry.cn-hangzhou.aliyuncs.com
[root@nb1 /]# curl -sSL https://kuboard.cn/install-script/v1.21.x/install_kubelet.sh | sh -s 1.21.4
overlay
br_netfilter
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
* Applying /usr/lib/sysctl.d/00-system.conf ...
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
kernel.yama.ptrace_scope = 0
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /usr/lib/sysctl.d/60-libvirtd.conf ...
fs.aio-max-nr = 1048576
* Applying /etc/sysctl.d/99-kubernetes-cri.conf ...
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.conf ...
已加载插件:fastestmirror, langpacks
参数 containerd.io 没有匹配
不删除任何软件包
已加载插件:fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: mirrors.163.com
 * extras: mirrors.163.com
 * updates: mirrors.aliyun.com
base                                                                                                                   | 3.6 kB  00:00:00     
extras                                                                                                                 | 2.9 kB  00:00:00     
updates                                                                                                                | 2.9 kB  00:00:00     
正在解决依赖关系
--> 正在检查事务
---> 软件包 device-mapper-persistent-data.x86_64.0.0.8.5-2.el7 将被 升级
---> 软件包 device-mapper-persistent-data.x86_64.0.0.8.5-3.el7_9.2 将被 更新
---> 软件包 lvm2.x86_64.7.2.02.186-7.el7 将被 升级
---> 软件包 lvm2.x86_64.7.2.02.187-6.el7_9.5 将被 更新
--> 正在处理依赖关系 lvm2-libs = 7:2.02.187-6.el7_9.5,它被软件包 7:lvm2-2.02.187-6.el7_9.5.x86_64 需要
---> 软件包 yum-utils.noarch.0.1.1.31-53.el7 将被 升级
---> 软件包 yum-utils.noarch.0.1.1.31-54.el7_8 将被 更新
--> 正在检查事务
---> 软件包 lvm2-libs.x86_64.7.2.02.186-7.el7 将被 升级
---> 软件包 lvm2-libs.x86_64.7.2.02.187-6.el7_9.5 将被 更新
--> 正在处理依赖关系 device-mapper-event = 7:1.02.170-6.el7_9.5,它被软件包 7:lvm2-libs-2.02.187-6.el7_9.5.x86_64 需要
--> 正在检查事务
---> 软件包 device-mapper-event.x86_64.7.1.02.164-7.el7 将被 升级
---> 软件包 device-mapper-event.x86_64.7.1.02.170-6.el7_9.5 将被 更新
--> 正在处理依赖关系 device-mapper-event-libs = 7:1.02.170-6.el7_9.5,它被软件包 7:device-mapper-event-1.02.170-6.el7_9.5.x86_64 需要
--> 正在处理依赖关系 device-mapper = 7:1.02.170-6.el7_9.5,它被软件包 7:device-mapper-event-1.02.170-6.el7_9.5.x86_64 需要
--> 正在检查事务
---> 软件包 device-mapper.x86_64.7.1.02.164-7.el7 将被 升级
--> 正在处理依赖关系 device-mapper = 7:1.02.164-7.el7,它被软件包 7:device-mapper-libs-1.02.164-7.el7.x86_64 需要
---> 软件包 device-mapper.x86_64.7.1.02.170-6.el7_9.5 将被 更新
---> 软件包 device-mapper-event-libs.x86_64.7.1.02.164-7.el7 将被 升级
---> 软件包 device-mapper-event-libs.x86_64.7.1.02.170-6.el7_9.5 将被 更新
--> 正在检查事务
---> 软件包 device-mapper-libs.x86_64.7.1.02.164-7.el7 将被 升级
---> 软件包 device-mapper-libs.x86_64.7.1.02.170-6.el7_9.5 将被 更新
--> 解决依赖关系完成

依赖关系解决

==============================================================================================================================================
 Package                                        架构                    版本                                   源                        大小
==============================================================================================================================================
正在更新:
 device-mapper-persistent-data                  x86_64                  0.8.5-3.el7_9.2                        updates                  423 k
 lvm2                                           x86_64                  7:2.02.187-6.el7_9.5                   updates                  1.3 M
 yum-utils                                      noarch                  1.1.31-54.el7_8                        base                     122 k
为依赖而更新:
 device-mapper                                  x86_64                  7:1.02.170-6.el7_9.5                   updates                  297 k
 device-mapper-event                            x86_64                  7:1.02.170-6.el7_9.5                   updates                  192 k
 device-mapper-event-libs                       x86_64                  7:1.02.170-6.el7_9.5                   updates                  192 k
 device-mapper-libs                             x86_64                  7:1.02.170-6.el7_9.5                   updates                  325 k
 lvm2-libs                                      x86_64                  7:2.02.187-6.el7_9.5                   updates                  1.1 M

事务概要
==============================================================================================================================================
升级  3 软件包 (+5 依赖软件包)

总计:3.9 M
Downloading packages:
警告:/var/cache/yum/x86_64/7/updates/packages/device-mapper-event-1.02.170-6.el7_9.5.x86_64.rpm: 头V3 RSA/SHA256 Signature, 密钥 ID f4a80eb5: NOKEY
从 file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7 检索密钥
导入 GPG key 0xF4A80EB5:
 用户ID     : "CentOS-7 Key (CentOS 7 Official Signing Key) <security@centos.org>"
 指纹       : 6341 ab27 53d7 8a78 a7c2 7bb1 24c6 a8a7 f4a8 0eb5
 软件包     : centos-release-7-8.2003.0.el7.centos.x86_64 (@anaconda)
 来自       : /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  正在更新    : 7:device-mapper-libs-1.02.170-6.el7_9.5.x86_64                                                                           1/16 
  正在更新    : 7:device-mapper-1.02.170-6.el7_9.5.x86_64                                                                                2/16 
  正在更新    : 7:device-mapper-event-libs-1.02.170-6.el7_9.5.x86_64                                                                     3/16 
  正在更新    : 7:device-mapper-event-1.02.170-6.el7_9.5.x86_64                                                                          4/16 
  正在更新    : 7:lvm2-libs-2.02.187-6.el7_9.5.x86_64                                                                                    5/16 
  正在更新    : device-mapper-persistent-data-0.8.5-3.el7_9.2.x86_64                                                                     6/16 
  正在更新    : 7:lvm2-2.02.187-6.el7_9.5.x86_64                                                                                         7/16 
  正在更新    : yum-utils-1.1.31-54.el7_8.noarch                                                                                         8/16 
  清理        : 7:lvm2-2.02.186-7.el7.x86_64                                                                                             9/16 
  清理        : yum-utils-1.1.31-53.el7.noarch                                                                                          10/16 
  清理        : 7:lvm2-libs-2.02.186-7.el7.x86_64                                                                                       11/16 
  清理        : 7:device-mapper-event-1.02.164-7.el7.x86_64                                                                             12/16 
  清理        : 7:device-mapper-event-libs-1.02.164-7.el7.x86_64                                                                        13/16 
  清理        : 7:device-mapper-1.02.164-7.el7.x86_64                                                                                   14/16 
  清理        : 7:device-mapper-libs-1.02.164-7.el7.x86_64                                                                              15/16 
  清理        : device-mapper-persistent-data-0.8.5-2.el7.x86_64                                                                        16/16 
  验证中      : 7:device-mapper-event-1.02.170-6.el7_9.5.x86_64                                                                          1/16 
  验证中      : device-mapper-persistent-data-0.8.5-3.el7_9.2.x86_64                                                                     2/16 
  验证中      : 7:device-mapper-1.02.170-6.el7_9.5.x86_64                                                                                3/16 
  验证中      : 7:lvm2-libs-2.02.187-6.el7_9.5.x86_64                                                                                    4/16 
  验证中      : 7:lvm2-2.02.187-6.el7_9.5.x86_64                                                                                         5/16 
  验证中      : 7:device-mapper-libs-1.02.170-6.el7_9.5.x86_64                                                                           6/16 
  验证中      : yum-utils-1.1.31-54.el7_8.noarch                                                                                         7/16 
  验证中      : 7:device-mapper-event-libs-1.02.170-6.el7_9.5.x86_64                                                                     8/16 
  验证中      : 7:device-mapper-event-libs-1.02.164-7.el7.x86_64                                                                         9/16 
  验证中      : device-mapper-persistent-data-0.8.5-2.el7.x86_64                                                                        10/16 
  验证中      : 7:device-mapper-libs-1.02.164-7.el7.x86_64                                                                              11/16 
  验证中      : 7:lvm2-2.02.186-7.el7.x86_64                                                                                            12/16 
  验证中      : yum-utils-1.1.31-53.el7.noarch                                                                                          13/16 
  验证中      : 7:device-mapper-1.02.164-7.el7.x86_64                                                                                   14/16 
  验证中      : 7:device-mapper-event-1.02.164-7.el7.x86_64                                                                             15/16 
  验证中      : 7:lvm2-libs-2.02.186-7.el7.x86_64                                                                                       16/16 

更新完毕:
  device-mapper-persistent-data.x86_64 0:0.8.5-3.el7_9.2       lvm2.x86_64 7:2.02.187-6.el7_9.5       yum-utils.noarch 0:1.1.31-54.el7_8      

作为依赖被升级:
  device-mapper.x86_64 7:1.02.170-6.el7_9.5                                device-mapper-event.x86_64 7:1.02.170-6.el7_9.5                    
  device-mapper-event-libs.x86_64 7:1.02.170-6.el7_9.5                     device-mapper-libs.x86_64 7:1.02.170-6.el7_9.5                     
  lvm2-libs.x86_64 7:2.02.187-6.el7_9.5                                   

完毕!
已加载插件:fastestmirror, langpacks
adding repo from: https://download.docker.com/linux/centos/docker-ce.repo
grabbing file https://download.docker.com/linux/centos/docker-ce.repo to /etc/yum.repos.d/docker-ce.repo
repo saved to /etc/yum.repos.d/docker-ce.repo
已加载插件:fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: mirrors.163.com
 * extras: mirrors.163.com
 * updates: mirrors.aliyun.com
docker-ce-stable                                                                                                       | 3.5 kB  00:00:00     
(1/2): docker-ce-stable/7/x86_64/updateinfo                                                                            |   55 B  00:00:00     
(2/2): docker-ce-stable/7/x86_64/primary_db                                                                            |  63 kB  00:00:00     
正在解决依赖关系
--> 正在检查事务
---> 软件包 containerd.io.x86_64.0.1.4.3-3.2.el7 将被 安装
--> 正在处理依赖关系 container-selinux >= 2:2.74,它被软件包 containerd.io-1.4.3-3.2.el7.x86_64 需要
--> 正在检查事务
---> 软件包 container-selinux.noarch.2.2.119.2-1.911c772.el7_8 将被 安装
--> 解决依赖关系完成

依赖关系解决

==============================================================================================================================================
 Package                           架构                   版本                                         源                                大小
==============================================================================================================================================
正在安装:
 containerd.io                     x86_64                 1.4.3-3.2.el7                                docker-ce-stable                  33 M
为依赖而安装:
 container-selinux                 noarch                 2:2.119.2-1.911c772.el7_8                    extras                            40 k

事务概要
==============================================================================================================================================
安装  1 软件包 (+1 依赖软件包)

总下载量:33 M
安装大小:128 M
Downloading packages:
(1/2): container-selinux-2.119.2-1.911c772.el7_8.noarch.rpm                                                            |  40 kB  00:00:00     
warning: /var/cache/yum/x86_64/7/docker-ce-stable/packages/containerd.io-1.4.3-3.2.el7.x86_64.rpm: Header V4 RSA/SHA512 Signature, key ID 621e9f35: NOKEY
containerd.io-1.4.3-3.2.el7.x86_64.rpm 的公钥尚未安装
(2/2): containerd.io-1.4.3-3.2.el7.x86_64.rpm                                                                          |  33 MB  00:00:11     
----------------------------------------------------------------------------------------------------------------------------------------------
总计                                                                                                          2.9 MB/s |  33 MB  00:00:11     
从 https://download.docker.com/linux/centos/gpg 检索密钥
导入 GPG key 0x621E9F35:
 用户ID     : "Docker Release (CE rpm) <docker@docker.com>"
 指纹       : 060a 61c5 1b55 8a7f 742b 77aa c52f eb6b 621e 9f35
 来自       : https://download.docker.com/linux/centos/gpg
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  正在安装    : 2:container-selinux-2.119.2-1.911c772.el7_8.noarch                                                                        1/2 
  正在安装    : containerd.io-1.4.3-3.2.el7.x86_64                                                                                        2/2 
  验证中      : containerd.io-1.4.3-3.2.el7.x86_64                                                                                        1/2 
  验证中      : 2:container-selinux-2.119.2-1.911c772.el7_8.noarch                                                                        2/2 

已安装:
  containerd.io.x86_64 0:1.4.3-3.2.el7                                                                                                        

作为依赖被安装:
  container-selinux.noarch 2:2.119.2-1.911c772.el7_8                                                                                          

完毕!
Created symlink from /etc/systemd/system/multi-user.target.wants/containerd.service to /usr/lib/systemd/system/containerd.service.
已加载插件:fastestmirror, langpacks
/var/run/yum.pid 已被锁定,PID 为 12609 的另一个程序正在运行。
Another app is currently holding the yum lock; waiting for it to exit...
  另一个应用程序是:PackageKit
    内存: 50 M RSS (470 MB VSZ)
    已启动: Tue Aug 24 05:24:17 2021 - 00:28之前
    状态  :睡眠中,进程ID:12609
Another app is currently holding the yum lock; waiting for it to exit...
  另一个应用程序是:PackageKit
    内存: 52 M RSS (472 MB VSZ)
    已启动: Tue Aug 24 05:24:17 2021 - 00:30之前
    状态  :睡眠中,进程ID:12609
Another app is currently holding the yum lock; waiting for it to exit...
  另一个应用程序是:PackageKit
    内存: 52 M RSS (472 MB VSZ)
    已启动: Tue Aug 24 05:24:17 2021 - 00:32之前
    状态  :睡眠中,进程ID:12609
Loading mirror speeds from cached hostfile
 * base: mirrors.163.com
 * extras: mirrors.163.com
 * updates: mirrors.aliyun.com
正在解决依赖关系
--> 正在检查事务
---> 软件包 nfs-utils.x86_64.1.1.3.0-0.66.el7 将被 升级
---> 软件包 nfs-utils.x86_64.1.1.3.0-0.68.el7.1 将被 更新
--> 解决依赖关系完成

依赖关系解决

==============================================================================================================================================
 Package                         架构                         版本                                        源                             大小
==============================================================================================================================================
正在更新:
 nfs-utils                       x86_64                       1:1.3.0-0.68.el7.1                          updates                       412 k

事务概要
==============================================================================================================================================
升级  1 软件包

总计:412 k
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  正在更新    : 1:nfs-utils-1.3.0-0.68.el7.1.x86_64                                                                                       1/2 
  清理        : 1:nfs-utils-1.3.0-0.66.el7.x86_64                                                                                         2/2 
  验证中      : 1:nfs-utils-1.3.0-0.68.el7.1.x86_64                                                                                       1/2 
  验证中      : 1:nfs-utils-1.3.0-0.66.el7.x86_64                                                                                         2/2 

更新完毕:
  nfs-utils.x86_64 1:1.3.0-0.68.el7.1                                                                                                         

完毕!
已加载插件:fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: mirrors.163.com
 * extras: mirrors.163.com
 * updates: mirrors.aliyun.com
软件包 wget-1.14-18.el7_6.1.x86_64 已安装并且是最新版本
无须任何处理
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
已加载插件:fastestmirror, langpacks
参数 kubelet 没有匹配
参数 kubeadm 没有匹配
参数 kubectl 没有匹配
不删除任何软件包
已加载插件:fastestmirror, langpacks
Loading mirror speeds from cached hostfile
 * base: mirrors.163.com
 * extras: mirrors.163.com
 * updates: mirrors.aliyun.com
kubernetes                                                                                                             | 1.4 kB  00:00:00     
kubernetes/primary                                                                                                     |  95 kB  00:00:02     
kubernetes                                                                                                                            702/702
正在解决依赖关系
--> 正在检查事务
---> 软件包 kubeadm.x86_64.0.1.21.4-0 将被 安装
--> 正在处理依赖关系 kubernetes-cni >= 0.8.6,它被软件包 kubeadm-1.21.4-0.x86_64 需要
--> 正在处理依赖关系 cri-tools >= 1.13.0,它被软件包 kubeadm-1.21.4-0.x86_64 需要
---> 软件包 kubectl.x86_64.0.1.21.4-0 将被 安装
---> 软件包 kubelet.x86_64.0.1.21.4-0 将被 安装
--> 正在处理依赖关系 socat,它被软件包 kubelet-1.21.4-0.x86_64 需要
--> 正在处理依赖关系 conntrack,它被软件包 kubelet-1.21.4-0.x86_64 需要
--> 正在检查事务
---> 软件包 conntrack-tools.x86_64.0.1.4.4-7.el7 将被 安装
--> 正在处理依赖关系 libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.1)(64bit),它被软件包 conntrack-tools-1.4.4-7.el7.x86_64 需要
--> 正在处理依赖关系 libnetfilter_cttimeout.so.1(LIBNETFILTER_CTTIMEOUT_1.0)(64bit),它被软件包 conntrack-tools-1.4.4-7.el7.x86_64 需要
--> 正在处理依赖关系 libnetfilter_cthelper.so.0(LIBNETFILTER_CTHELPER_1.0)(64bit),它被软件包 conntrack-tools-1.4.4-7.el7.x86_64 需要
--> 正在处理依赖关系 libnetfilter_queue.so.1()(64bit),它被软件包 conntrack-tools-1.4.4-7.el7.x86_64 需要
--> 正在处理依赖关系 libnetfilter_cttimeout.so.1()(64bit),它被软件包 conntrack-tools-1.4.4-7.el7.x86_64 需要
--> 正在处理依赖关系 libnetfilter_cthelper.so.0()(64bit),它被软件包 conntrack-tools-1.4.4-7.el7.x86_64 需要
---> 软件包 cri-tools.x86_64.0.1.13.0-0 将被 安装
---> 软件包 kubernetes-cni.x86_64.0.0.8.7-0 将被 安装
---> 软件包 socat.x86_64.0.1.7.3.2-2.el7 将被 安装
--> 正在检查事务
---> 软件包 libnetfilter_cthelper.x86_64.0.1.0.0-11.el7 将被 安装
---> 软件包 libnetfilter_cttimeout.x86_64.0.1.0.0-7.el7 将被 安装
---> 软件包 libnetfilter_queue.x86_64.0.1.0.2-2.el7_2 将被 安装
--> 解决依赖关系完成

依赖关系解决

==============================================================================================================================================
 Package                                    架构                       版本                              源                              大小
==============================================================================================================================================
正在安装:
 kubeadm                                    x86_64                     1.21.4-0                          kubernetes                     9.1 M
 kubectl                                    x86_64                     1.21.4-0                          kubernetes                     9.5 M
 kubelet                                    x86_64                     1.21.4-0                          kubernetes                      20 M
为依赖而安装:
 conntrack-tools                            x86_64                     1.4.4-7.el7                       base                           187 k
 cri-tools                                  x86_64                     1.13.0-0                          kubernetes                     5.1 M
 kubernetes-cni                             x86_64                     0.8.7-0                           kubernetes                      19 M
 libnetfilter_cthelper                      x86_64                     1.0.0-11.el7                      base                            18 k
 libnetfilter_cttimeout                     x86_64                     1.0.0-7.el7                       base                            18 k
 libnetfilter_queue                         x86_64                     1.0.2-2.el7_2                     base                            23 k
 socat                                      x86_64                     1.7.3.2-2.el7                     base                           290 k

事务概要
==============================================================================================================================================
安装  3 软件包 (+7 依赖软件包)

总下载量:63 M
安装大小:277 M
Downloading packages:
(1/10): conntrack-tools-1.4.4-7.el7.x86_64.rpm                                                                         | 187 kB  00:00:00     
(2/10): 61c56c520cec529ff02ca33f37f190d23253acff6e84bd695cc045cdd4f52b2e-kubeadm-1.21.4-0.x86_64.rpm                   | 9.1 MB  00:00:04     
(3/10): 14bfe6e75a9efc8eca3f638eb22c7e2ce759c67f95b43b16fae4ebabde1549f3-cri-tools-1.13.0-0.x86_64.rpm                 | 5.1 MB  00:00:04     
(4/10): 79fd1783b89fa952f30b4b9be846114b359f79295d8fa3b4d0393ea667a3e327-kubectl-1.21.4-0.x86_64.rpm                   | 9.5 MB  00:00:04     
(5/10): libnetfilter_cthelper-1.0.0-11.el7.x86_64.rpm                                                                  |  18 kB  00:00:00     
(6/10): libnetfilter_queue-1.0.2-2.el7_2.x86_64.rpm                                                                    |  23 kB  00:00:00     
(7/10): libnetfilter_cttimeout-1.0.0-7.el7.x86_64.rpm                                                                  |  18 kB  00:00:00     
(8/10): socat-1.7.3.2-2.el7.x86_64.rpm                                                                                 | 290 kB  00:00:00     
(9/10): db7cb5cb0b3f6875f54d10f02e625573988e3e91fd4fc5eef0b1876bb18604ad-kubernetes-cni-0.8.7-0.x86_64.rpm             |  19 MB  00:00:08     
(10/10): 3b8df60c34c5148407fab5a1ed5c4ee1c8d11039dc6cb6485bc366e7efc704a4-kubelet-1.21.4-0.x86_64.rpm                  |  20 MB  00:00:14     
----------------------------------------------------------------------------------------------------------------------------------------------
总计                                                                                                          3.2 MB/s |  63 MB  00:00:19     
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  正在安装    : libnetfilter_cthelper-1.0.0-11.el7.x86_64                                                                                1/10 
  正在安装    : socat-1.7.3.2-2.el7.x86_64                                                                                               2/10 
  正在安装    : libnetfilter_cttimeout-1.0.0-7.el7.x86_64                                                                                3/10 
  正在安装    : cri-tools-1.13.0-0.x86_64                                                                                                4/10 
  正在安装    : kubectl-1.21.4-0.x86_64                                                                                                  5/10 
  正在安装    : libnetfilter_queue-1.0.2-2.el7_2.x86_64                                                                                  6/10 
  正在安装    : conntrack-tools-1.4.4-7.el7.x86_64                                                                                       7/10 
  正在安装    : kubernetes-cni-0.8.7-0.x86_64                                                                                            8/10 
  正在安装    : kubelet-1.21.4-0.x86_64                                                                                                  9/10 
  正在安装    : kubeadm-1.21.4-0.x86_64                                                                                                 10/10 
  验证中      : conntrack-tools-1.4.4-7.el7.x86_64                                                                                       1/10 
  验证中      : kubernetes-cni-0.8.7-0.x86_64                                                                                            2/10 
  验证中      : libnetfilter_queue-1.0.2-2.el7_2.x86_64                                                                                  3/10 
  验证中      : kubectl-1.21.4-0.x86_64                                                                                                  4/10 
  验证中      : kubeadm-1.21.4-0.x86_64                                                                                                  5/10 
  验证中      : cri-tools-1.13.0-0.x86_64                                                                                                6/10 
  验证中      : libnetfilter_cttimeout-1.0.0-7.el7.x86_64                                                                                7/10 
  验证中      : socat-1.7.3.2-2.el7.x86_64                                                                                               8/10 
  验证中      : libnetfilter_cthelper-1.0.0-11.el7.x86_64                                                                                9/10 
  验证中      : kubelet-1.21.4-0.x86_64                                                                                                 10/10 

已安装:
  kubeadm.x86_64 0:1.21.4-0                      kubectl.x86_64 0:1.21.4-0                      kubelet.x86_64 0:1.21.4-0                     

作为依赖被安装:
  conntrack-tools.x86_64 0:1.4.4-7.el7           cri-tools.x86_64 0:1.13.0-0                    kubernetes-cni.x86_64 0:0.8.7-0             
  libnetfilter_cthelper.x86_64 0:1.0.0-11.el7    libnetfilter_cttimeout.x86_64 0:1.0.0-7.el7    libnetfilter_queue.x86_64 0:1.0.2-2.el7_2   
  socat.x86_64 0:1.7.3.2-2.el7                  

完毕!
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
containerd containerd.io 1.4.3 269548fa27e0089a8b8278fc4fc781d7f65a939b
Kubernetes v1.21.4
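
The last two lines above are printed by the install script and already report the containerd and Kubernetes versions. To double-check by hand (optional), you can query each component directly:

kubeadm version -o short            # expect v1.21.4
kubectl version --client --short    # expect v1.21.4
kubelet --version                   # expect Kubernetes v1.21.4
systemctl is-active containerd      # expect: active
systemctl is-enabled kubelet        # should report enabled; kubelet itself keeps restarting until kubeadm init/join runs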

2.2 Initialize the master node

Execute the following steps in order.

* About the environment variables used during initialization:
APISERVER_NAME must not be the master's hostname
APISERVER_NAME may only contain lowercase letters, digits, and dots; it must not contain hyphens
The POD_SUBNET range must not overlap with the subnet the master/worker nodes sit on. Its value is a CIDR; if you are not yet familiar with CIDR notation, simply run export POD_SUBNET=10.100.0.0/16 as-is, without modification
2.2.1 Run only on the master node
# Replace 192.168.1.127 with the master node's actual IP (use the internal IP)
# The export command only takes effect in the current shell session (temporary variable)
[root@nb1 /]# export MASTER_IP=192.168.1.127
# Replace nb1.apiserver with the dnsName you want
[root@nb1 /]# export APISERVER_NAME=nb1.apiserver
# The subnet used by Kubernetes pods; it is created by Kubernetes after installation and does not need to exist in your physical network beforehand
[root@nb1 /]# export POD_SUBNET=10.100.0.0/16
# Write MASTER_IP and APISERVER_NAME into the hosts file
[root@nb1 /]# echo "${MASTER_IP}    ${APISERVER_NAME}" >> /etc/hosts
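
Before moving on, it is worth confirming that APISERVER_NAME now resolves to the master's IP through /etc/hosts (a quick sanity check; getent reads /etc/hosts, and the ping is optional since ICMP may be filtered):

[root@nb1 /]# getent hosts ${APISERVER_NAME}      # should print: 192.168.1.127   nb1.apiserver
[root@nb1 /]# ping -c 1 ${APISERVER_NAME}         # optional reachability check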
2.2.2 Run the master initialization command
[root@nb1 /]# curl -sSL https://kuboard.cn/install-script/v1.21.x/init_master.sh | sh -s 1.21.4

抓取镜像,请稍候...
[config/images] Pulled registry.aliyuncs.com/k8sxio/kube-apiserver:v1.21.4
[config/images] Pulled registry.aliyuncs.com/k8sxio/kube-controller-manager:v1.21.4
[config/images] Pulled registry.aliyuncs.com/k8sxio/kube-scheduler:v1.21.4
[config/images] Pulled registry.aliyuncs.com/k8sxio/kube-proxy:v1.21.4
[config/images] Pulled registry.aliyuncs.com/k8sxio/pause:3.4.1
[config/images] Pulled registry.aliyuncs.com/k8sxio/etcd:3.4.13-0
failed to pull image "swr.cn-east-2.myhuaweicloud.com/coredns:1.8.0": output: time="2021-08-24T05:38:49+08:00" level=fatal msg="pulling image failed: rpc error: code = NotFound desc = failed to pull and unpack image \"swr.cn-east-2.myhuaweicloud.com/coredns:1.8.0\": failed to resolve reference \"swr.cn-east-2.myhuaweicloud.com/coredns:1.8.0\": swr.cn-east-2.myhuaweicloud.com/coredns:1.8.0: not found"
, error: exit status 1
To see the stack trace of this error execute with --v=5 or higher

The error above can be fixed by appending the /coredns argument to the original command and re-running the initialization:
curl -sSL https://kuboard.cn/install-script/v1.21.x/init_master.sh | sh -s 1.21.4 /coredns

[root@nb1 /]# curl -sSL https://kuboard.cn/install-script/v1.21.x/init_master.sh | sh -s 1.21.4 /coredns

抓取镜像,请稍候...
[config/images] Pulled registry.aliyuncs.com/k8sxio/kube-apiserver:v1.21.4
[config/images] Pulled registry.aliyuncs.com/k8sxio/kube-controller-manager:v1.21.4
[config/images] Pulled registry.aliyuncs.com/k8sxio/kube-scheduler:v1.21.4
[config/images] Pulled registry.aliyuncs.com/k8sxio/kube-proxy:v1.21.4
[config/images] Pulled registry.aliyuncs.com/k8sxio/pause:3.4.1
[config/images] Pulled registry.aliyuncs.com/k8sxio/etcd:3.4.13-0
[config/images] Pulled swr.cn-east-2.myhuaweicloud.com/coredns/coredns:1.8.0

初始化 Master 节点
[init] Using Kubernetes version: v1.21.4
[preflight] Running pre-flight checks
	[WARNING Hostname]: hostname "nb1" could not be reached
	[WARNING Hostname]: hostname "nb1": lookup nb1 on 192.168.1.1:53: no such host
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local nb1 nb1.apiserver] and IPs [10.96.0.1 192.168.1.127]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost nb1] and IPs [192.168.1.127 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost nb1] and IPs [192.168.1.127 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 31.536666 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
662c2fbaabad9eb9227b10b475b4fd94d336e041083e005b27ee6f5732d104fc
[mark-control-plane] Marking the node nb1 as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node nb1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: x584cq.ipy0ql4lpoqiftck
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join nb1.apiserver:6443 --token x584cq.ipy0ql4lpoqiftck \
	--discovery-token-ca-cert-hash sha256:5c69de55860c96cc0b8a5c5d5590a400ef6387ec24c1bdddc4d32cdc6d8dbf32 \
	--control-plane --certificate-key 662c2fbaabad9eb9227b10b475b4fd94d336e041083e005b27ee6f5732d104fc

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join nb1.apiserver:6443 --token x584cq.ipy0ql4lpoqiftck \
	--discovery-token-ca-cert-hash sha256:5c69de55860c96cc0b8a5c5d5590a400ef6387ec24c1bdddc4d32cdc6d8dbf32 
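
The next step runs kubectl as root directly, so in this walkthrough the init script has evidently taken care of the kubeconfig. If kubectl on the master ever complains that it cannot find a configuration or that the connection is refused, apply the steps printed in the output above (shown for a regular user; as root you can simply export KUBECONFIG=/etc/kubernetes/admin.conf):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes    # should list nb1; it may stay NotReady until the network plugin is installed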
2.2.3 Check the master initialization result

Run only on the master node.

# Run the following command and wait 3-10 minutes, until all pods are in the Running state
watch kubectl get pod -n kube-system -o wide
Every 2.0s: kubectl get pod -n kube-system -o wide                                                                    Tue Aug 24 06:40:54 2021

NAME                          READY   STATUS    RESTARTS   AGE   IP              NODE     NOMINATED NODE   READINESS GATES
coredns-7d75679df-bwtjn       0/1     Pending   0          25m   <none>          <none>   <none>           <none>
coredns-7d75679df-ljwfv       0/1     Pending   0          58m   <none>          <none>   <none>           <none>
etcd-nb1                      1/1     Running   0          58m   192.168.1.127   nb1	  <none>           <none>
kube-apiserver-nb1            1/1     Running   0          58m   192.168.1.127   nb1	  <none>           <none>
kube-controller-manager-nb1   1/1     Running   0          58m   192.168.1.127   nb1	  <none>           <none>
kube-proxy-g4ds9              1/1     Running   0          58m   192.168.1.127   nb1	  <none>           <none>
kube-scheduler-nb1            1/1     Running   0          58m   192.168.1.127   nb1	  <none>           <none>

Note: in the output above, the coredns pods remain in the Pending state. This is expected at this stage: coredns cannot start until a network plugin is available. Continue with the next step; once the network plugin has been installed, coredns will start normally.

2.2.4 Install the Calico network plugin

export POD_SUBNET=10.100.0.0/16
kubectl apply -f https://kuboard.cn/install-script/v1.21.x/calico-operator.yaml
wget https://kuboard.cn/install-script/v1.21.x/calico-custom-resources.yaml
sed -i "s#192.168.0.0/16#${POD_SUBNET}#" calico-custom-resources.yaml
kubectl apply -f calico-custom-resources.yaml

[root@nb1 /]# export POD_SUBNET=10.100.0.0/16
[root@nb1 /]# kubectl apply -f https://kuboard.cn/install-script/v1.21.x/calico-operator.yaml
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created
customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created
namespace/tigera-operator created
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/tigera-operator created
serviceaccount/tigera-operator created
clusterrole.rbac.authorization.k8s.io/tigera-operator created
clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created
deployment.apps/tigera-operator created

[root@nb1 /]# wget https://kuboard.cn/install-script/v1.21.x/calico-custom-resources.yaml
--2021-08-24 06:44:36--  https://kuboard.cn/install-script/v1.21.x/calico-custom-resources.yaml
正在解析主机 kuboard.cn (kuboard.cn)... 119.3.92.138, 122.112.240.69
正在连接 kuboard.cn (kuboard.cn)|119.3.92.138|:443... 已连接。
已发出 HTTP 请求,正在等待回应... 200 OK
长度:620 [application/octet-stream]
正在保存至: “calico-custom-resources.yaml”

100%[====================================================================================================>] 620         --.-K/s 用时 0s      

2021-08-24 06:44:37 (32.7 MB/s) - 已保存 “calico-custom-resources.yaml” [620/620])

[root@nb1 /]# sed -i "s#192.168.0.0/16#${POD_SUBNET}#" calico-custom-resources.yaml
[root@nb1 /]# kubectl apply -f calico-custom-resources.yaml
installation.operator.tigera.io/default created
[root@nb1 /]# 

After installing the network plugin, check the status again: coredns has moved to the Running state.
watch kubectl get pod -n kube-system -o wide

Every 2.0s: kubectl get pod -n kube-system -o wide                                                                                                                                         Tue Aug 24 06:48:14 2021

NAME                          READY   STATUS    RESTARTS   AGE   IP              NODE   NOMINATED NODE   READINESS GATES
coredns-7d75679df-bwtjn       1/1     Running   0          33m   10.100.198.66   nb1    <none>           <none>
coredns-7d75679df-ljwfv       1/1     Running   0          65m   10.100.198.67   nb1    <none>           <none>
etcd-nb1                      1/1     Running   0          65m   192.168.1.127   nb1    <none>           <none>
kube-apiserver-nb1            1/1     Running   0          65m   192.168.1.127   nb1    <none>           <none>
kube-controller-manager-nb1   1/1     Running   0          65m   192.168.1.127   nb1    <none>           <none>
kube-proxy-g4ds9              1/1     Running   0          65m   192.168.1.127   nb1    <none>           <none>
kube-scheduler-nb1            1/1     Running   0          65m   192.168.1.127   nb1    <none>           <none>
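
Calico's own pods do not run in kube-system; the tigera operator installed above creates its own namespaces for them. If you also want to follow their startup (optional), you can watch:

watch kubectl get pods -n calico-system -o wide    # calico-node, calico-typha, calico-kube-controllers
kubectl get pods -n tigera-operator                # the operator Deployment created by calico-operator.yaml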

2.3 Initialize the worker nodes

2.3.1 Obtain a token on the master node

This produces the kubeadm join command with its parameters.
The token is valid for 2 hours; within that window, you can use it to initialize any number of worker nodes. As shown below:
Run only on the master node.

[root@nb1 /]# kubeadm token create --print-join-command
kubeadm join nb1.apiserver:6443 --token ro3iwb.n0p1vcfo3ljw336y --discovery-token-ca-cert-hash sha256:5c69de55860c96cc0b8a5c5d5590a400ef6387ec24c1bdddc4d32cdc6d8dbf32 
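
If you later need to check which bootstrap tokens exist and when they expire (for example, before re-using a join command generated earlier), you can list them on the master:

[root@nb1 /]# kubeadm token list    # shows TOKEN, TTL and EXPIRES for each bootstrap token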

Note: the steps marked above as "run on all nodes" (section 2.1) must be completed on each worker node before it joins the cluster.

2.3.2 Join a worker node to the cluster
[root@nb2 ~]# kubeadm join nb1.apiserver:6443 --token ro3iwb.n0p1vcfo3ljw336y --discovery-token-ca-cert-hash sha256:5c69de55860c96cc0b8a5c5d5590a400ef6387ec24c1bdddc4d32cdc6d8dbf32 
# The warning below appeared because the IP from export MASTER_IP=192.168.1.127 had been written as the worker node's IP (...128); changing the /etc/hosts mapping back to 127 fixes it (see the sketch after this output).
[preflight] Running pre-flight checks
	[WARNING Hostname]: hostname "nb2" could not be reached
	[WARNING Hostname]: hostname "nb2": lookup nb2 on 192.168.1.1:53: no such host
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
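
As the comment at the top of this block notes, the worker's /etc/hosts entry for the API server name must point at the master's IP (192.168.1.127), not at the worker's own IP. A minimal check/fix sketch, run on the worker, using the addresses from this walkthrough:

grep nb1.apiserver /etc/hosts      # verify the current mapping
# if it shows the wrong address, correct it in place, e.g.:
sed -i 's/^192.168.1.128\(.*nb1.apiserver\)/192.168.1.127\1/' /etc/hosts
# the line should end up reading:  192.168.1.127    nb1.apiserver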
2.3.3 Test access to the master node

(A 403 Forbidden response for the anonymous user is expected here; it simply confirms that the worker can reach the API server on port 6443.)
[root@nb2 ~]# curl -ik https://nb1.apiserver:6443
HTTP/1.1 403 Forbidden
Cache-Control: no-cache, private
Content-Type: application/json
X-Content-Type-Options: nosniff
X-Kubernetes-Pf-Flowschema-Uid: 0ef06e5f-e3ad-4a47-a91c-f59f5382afb4
X-Kubernetes-Pf-Prioritylevel-Uid: ac885347-bea7-47d8-a2aa-863ede1c180f
Date: Mon, 23 Aug 2021 23:21:01 GMT
Content-Length: 233

{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
    
  },
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {
    
  },
  "code": 403
}
2.3.4 Check the worker node initialization result

Run only on the master node. The new node may initially show NotReady; wait a moment, check again, and it should become Ready.

[root@nb1 ~]# kubectl get nodes -o wide
NAME   STATUS   ROLES                  AGE     VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION           CONTAINER-RUNTIME
nb1    Ready    control-plane,master   105m    v1.21.4   192.168.1.127   <none>        CentOS Linux 7 (Core)   3.10.0-1127.el7.x86_64   containerd://1.4.3
nb2    Ready    <none>                 6m30s   v1.21.4   192.168.1.128   <none>        CentOS Linux 7 (Core)   3.10.0-1127.el7.x86_64   containerd://1.4.3

At this point, a Kubernetes cluster with a single master node and a single worker node (if resources allow, you can add more workers) has been fully installed.
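
As an optional smoke test (not part of the original walkthrough), you can schedule a throwaway deployment on the master and confirm that its pod is placed on the worker and reaches Running:

kubectl create deployment hello --image=nginx:alpine
kubectl get pods -o wide           # the hello pod should land on nb2 and become Running
kubectl delete deployment hello    # clean up afterwards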

The follow-up articles in this series are:

Setting Up an Automated Deployment Environment with K8S (2): Installing the K8S Management Tool Kuboard V3

Setting Up an Automated Deployment Environment with K8S (3): Downloading, Installing, and Starting Jenkins

Setting Up an Automated Deployment Environment with K8S (4): Installing and Using the Jenkins Multibranch Pipeline with Blue Ocean

Reference:
https://www.kuboard.cn/install/install-k8s.html

END
