Deploying a Kubernetes v1.23.1 cluster with kubeadm (VMware environment)

I. Base environment and software versions

Operating system: CentOS 8.5 (kernel 4.18.0-384.el8.x86_64)
Container engine: Docker 20.10.11
Kubernetes: v1.23.1

Notes:
(1) Checking the Kubernetes version
Latest stable version: https://storage.googleapis.com/kubernetes-release/release/stable.txt
Release notes: https://kubernetes.io/docs/setup/release/notes
(2) Machine requirements: each machine needs 2 GB of RAM or more, 2 CPUs or more, and at least 20 GB of disk space.

 kubeadm is the tool published by the Kubernetes community for quickly bootstrapping a Kubernetes cluster. It can bring up a cluster with just two commands:

# Create a master (control-plane) node
$ kubeadm init
# Join a worker node to the cluster
$ kubeadm join <master IP>:<port>

Kubernetes cluster host information:

IP address      Hostnames                        Role
192.168.0.20    k8s-master01, k8s-master01.io    master
192.168.0.23    k8s-node01, k8s-node01.io        node
192.168.0.24    k8s-node02, k8s-node02.io        node

II. Preparation before installation

  1. Host time synchronization
    If every host can reach the Internet, simply enable the chronyd service on each of them.
    If the hosts cannot reach the Internet, configure the master as a chrony server and have the other nodes sync time from it (see the sketch after the commands below).
    sudo systemctl start chronyd.service
    sudo systemctl enable chronyd.service
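    A minimal sketch of syncing the nodes from the master, assuming the addresses from the host table above and the stock CentOS 8 chrony.conf (which has a single default "pool ... iburst" line); adjust to your environment:
    # On the master (acting as the chrony server): allow clients from the node subnet
    echo "allow 192.168.0.0/24" | sudo tee -a /etc/chrony.conf
    sudo systemctl restart chronyd.service
    # On each node: sync from the master instead of the public pool
    sudo sed -i 's/^pool .*/server 192.168.0.20 iburst/' /etc/chrony.conf
    sudo systemctl restart chronyd.service
    chronyc sources -v    # verify the master shows up as a reachable source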
    
  2. Firewall settings on each node
    kube-proxy on every node relies on iptables or IPVS to implement Service objects. To keep connectivity simple, disable the iptables-related firewall services on each host.
    sudo ufw disable && sudo ufw status
    
    # ufw ships with Ubuntu and is a simple front end for netfilter rules; on CentOS, disable firewalld instead
    
    # Additional commands
    # Check firewall status:           sudo systemctl status firewalld
    # Stop the firewall:               sudo systemctl stop firewalld.service
    # Start the firewall:              sudo systemctl start firewalld.service
    # Disable start on boot:           sudo systemctl disable firewalld.service
    # Enable start on boot:            sudo systemctl enable firewalld.service
    
  3. Disable swap
    Swap lives on disk and performs poorly; to keep it from degrading Kubernetes scheduling and the behaviour of orchestrated applications, disable the swap device. (Alternatively, append --ignore-preflight-errors=Swap to the kubeadm command when joining the cluster to skip the swap check.)
    sudo swapoff -a
    
    # To disable swap permanently,
    # comment out every line in /etc/fstab whose filesystem type is swap (see the sketch below).
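    A one-liner sketch that comments out the swap entries in /etc/fstab (back the file up first and verify the result before rebooting):
    sudo cp /etc/fstab /etc/fstab.bak
    sudo sed -ri '/^[^#].*\sswap\s/s/^/#/' /etc/fstab
    grep swap /etc/fstab    # the swap line(s) should now start with '#'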
  4. Ensure a unique hostname, MAC address, and product_uuid on every node
    Get the MAC addresses of the network interfaces: ip link or ifconfig -a
    Check the product_uuid: sudo cat /sys/class/dmi/id/product_uuid
  5. Configure hosts entries on every host
    # k8s-adm-api.io is the control-plane endpoint name used later by kubeadm init
    
    192.168.0.20     k8s-master01 k8s-master01.io k8s-adm-api.io
    192.168.0.23     k8s-node01 k8s-node01.io
    192.168.0.24     k8s-node02 k8s-node02.io
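    A sketch of appending these entries to /etc/hosts in one step (assuming the addresses above; adjust to your environment):
    printf '%s\n' \
      '192.168.0.20     k8s-master01 k8s-master01.io k8s-adm-api.io' \
      '192.168.0.23     k8s-node01 k8s-node01.io' \
      '192.168.0.24     k8s-node02 k8s-node02.io' | sudo tee -a /etc/hosts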

     
  6. Configure the Kubernetes package repository (Aliyun mirror)
    cat <<EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
    enabled=1
    gpgcheck=1
    repo_gpgcheck=1
    gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF

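    To confirm the repository is usable before installing, you can refresh the cache and list it; a sketch (output will differ per system):
    sudo yum clean all
    sudo yum makecache
    yum repolist | grep -i kubernetes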

  7. Let iptables see bridged traffic
    Make sure the br_netfilter module is loaded; check it with lsmod | grep br_netfilter:
    [root@k8s-master01 etc]# lsmod | grep br_netfilter
    br_netfilter           24576  0
    bridge                200704  1 br_netfilter
    [root@k8s-master01 etc]#

         To load it explicitly, run sudo modprobe br_netfilter:

[root@k8s-master01 etc]# sudo modprobe br_netfilter

         To meet the requirement that iptables on each Linux node can see bridged traffic, make sure net.bridge.bridge-nf-call-iptables is set to 1 in the sysctl configuration, for example:

$ cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

$ cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

$ sudo sysctl --system
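To confirm the settings took effect, an optional check:

$ sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
$ lsmod | grep br_netfilter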

III. Install kubeadm, kubelet, and kubectl

You will install these packages on all machines:

  • kubeadm: the command to bootstrap the cluster.

  • kubelet: the component that runs on every machine in the cluster and does things like starting Pods and containers.

  • kubectl: the command-line utility used to talk to the cluster.

kubeadm will not install or manage kubelet or kubectl for you, so you need to make sure their versions match the Kubernetes control-plane version you want kubeadm to install. Otherwise there is a risk of version skew, which can lead to unexpected, buggy behaviour. One minor-version skew between the kubelet and the control plane is supported, but the kubelet version may never be newer than the API server version.

Installation command

yum install -y kubelet kubeadm kubectl

# A specific version can be installed; the format is PKG_NAME-VERSION-RELEASE
# e.g.: yum install -y kubelet-1.23.1-00 kubeadm-1.23.1-00 kubectl-1.23.1-00
#
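After the installation finishes, you can verify the tool versions before going on; a quick check (output will vary):

kubeadm version -o short
kubectl version --client --short
rpm -q kubelet kubeadm kubectl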

Installation log:

[root@k8s-master01 yum.repos.d]# yum install -y kubelet-1.23.1-00 kubeadm-1.23.1-00 kubectl-1.23.1-00
Kubernetes                                                                                                                              2.7 kB/s | 844  B     00:00
Kubernetes                                                                                                                               15 kB/s | 4.4 kB     00:00
导入 GPG 公钥 0x307EA071:
 Userid: "Rapture Automatic Signing Key (cloud-rapture-signing-key-2021-03-01-08_01_09.pub)"
 指纹: 7F92 E05B 3109 3BEF 5A3C 2D38 FEEA 9169 307E A071
 来自: https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
导入 GPG 公钥 0x836F4BEB:
 Userid: "gLinux Rapture Automatic Signing Key (//depot/google3/production/borg/cloud-rapture/keys/cloud-rapture-pubkeys/cloud-rapture-signing-key-2020-12-03-16_08_05.pub) <glinux-team@google.com>"
 指纹: 59FE 0256 8272 69DC 8157 8F92 8B57 C5C2 836F 4BEB
 来自: https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
导入 GPG 公钥 0xDC6315A3:
 Userid: "Artifact Registry Repository Signer <artifact-registry-repository-signer@google.com>"
 指纹: 35BA A0B3 3E9E B396 F59C A838 C0BA 5CE6 DC63 15A3
 来自: https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
Kubernetes                                                                                                                               11 kB/s | 975  B     00:00
导入 GPG 公钥 0x3E1BA8D5:
 Userid: "Google Cloud Packages RPM Signing Key <gc-team@google.com>"
 指纹: 3749 E1BA 95A8 6CE0 5454 6ED2 F09C 394C 3E1B A8D5
 来自: https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
Kubernetes                                                                                                                              296 kB/s | 136 kB     00:00
依赖关系解决。
========================================================================================================================================================================
 软件包                                           架构                             版本                                      仓库                                  大小
========================================================================================================================================================================
安装:
 kubeadm                                          x86_64                           1.23.1-0                                  kubernetes                           9.0 M
 kubectl                                          x86_64                           1.23.1-0                                  kubernetes                           9.5 M
 kubelet                                          x86_64                           1.23.1-0                                  kubernetes                            21 M
安装依赖关系:
 conntrack-tools                                  x86_64                           1.4.4-10.el8                              baseos                               204 k
 cri-tools                                        x86_64                           1.19.0-0                                  kubernetes                           5.7 M
 kubernetes-cni                                   x86_64                           0.8.7-0                                   kubernetes                            19 M
 libnetfilter_cthelper                            x86_64                           1.0.0-15.el8                              baseos                                24 k
 libnetfilter_cttimeout                           x86_64                           1.0.0-11.el8                              baseos                                24 k
 libnetfilter_queue                               x86_64                           1.0.4-3.el8                               baseos                                31 k
 socat                                            x86_64                           1.7.4.1-1.el8                             appstream                            323 k

事务概要
========================================================================================================================================================================
安装  10 软件包

总下载:64 M
安装大小:287 M
下载软件包:
(1/10): libnetfilter_cthelper-1.0.0-15.el8.x86_64.rpm                                                                                   147 kB/s |  24 kB     00:00
(2/10): libnetfilter_cttimeout-1.0.0-11.el8.x86_64.rpm                                                                                  131 kB/s |  24 kB     00:00
(3/10): conntrack-tools-1.4.4-10.el8.x86_64.rpm                                                                                         559 kB/s | 204 kB     00:00
(4/10): socat-1.7.4.1-1.el8.x86_64.rpm                                                                                                  819 kB/s | 323 kB     00:00
(5/10): libnetfilter_queue-1.0.4-3.el8.x86_64.rpm                                                                                       143 kB/s |  31 kB     00:00
(6/10): 67ffa375b03cea72703fe446ff00963919e8fce913fbc4bb86f06d1475a6bdf9-cri-tools-1.19.0-0.x86_64.rpm                                  3.2 MB/s | 5.7 MB     00:01
(7/10): 8d4a11b0303bf2844b69fc4740c2e2f3b14571c0965534d76589a4940b6fafb6-kubectl-1.23.1-0.x86_64.rpm                                    3.5 MB/s | 9.5 MB     00:02
(8/10): 0ec1322286c077c3dd975de1098d4c938b359fb59d961f0c7ce1b35bdc98a96c-kubeadm-1.23.1-0.x86_64.rpm                                    1.8 MB/s | 9.0 MB     00:05
(9/10): 7a203c8509258e0c79c8c704406b2d8f7d1af8ff93eadaa76b44bb8e9f9cbabd-kubelet-1.23.1-0.x86_64.rpm                                    4.8 MB/s |  21 MB     00:04
(10/10): db7cb5cb0b3f6875f54d10f02e625573988e3e91fd4fc5eef0b1876bb18604ad-kubernetes-cni-0.8.7-0.x86_64.rpm                             5.0 MB/s |  19 MB     00:03
------------------------------------------------------------------------------------------------------------------------------------------------------------------------
总计                                                                                                                                    9.1 MB/s |  64 MB     00:06
Kubernetes                                                                                                                               54 kB/s | 4.4 kB     00:00
导入 GPG 公钥 0x307EA071:
 Userid: "Rapture Automatic Signing Key (cloud-rapture-signing-key-2021-03-01-08_01_09.pub)"
 指纹: 7F92 E05B 3109 3BEF 5A3C 2D38 FEEA 9169 307E A071
 来自: https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
导入公钥成功
导入 GPG 公钥 0x836F4BEB:
 Userid: "gLinux Rapture Automatic Signing Key (//depot/google3/production/borg/cloud-rapture/keys/cloud-rapture-pubkeys/cloud-rapture-signing-key-2020-12-03-16_08_05.pub) <glinux-team@google.com>"
 指纹: 59FE 0256 8272 69DC 8157 8F92 8B57 C5C2 836F 4BEB
 来自: https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
导入公钥成功
导入 GPG 公钥 0xDC6315A3:
 Userid: "Artifact Registry Repository Signer <artifact-registry-repository-signer@google.com>"
 指纹: 35BA A0B3 3E9E B396 F59C A838 C0BA 5CE6 DC63 15A3
 来自: https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
导入公钥成功
Kubernetes                                                                                                                               13 kB/s | 975  B     00:00
导入 GPG 公钥 0x3E1BA8D5:
 Userid: "Google Cloud Packages RPM Signing Key <gc-team@google.com>"
 指纹: 3749 E1BA 95A8 6CE0 5454 6ED2 F09C 394C 3E1B A8D5
 来自: https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
导入公钥成功
运行事务检查
事务检查成功。
运行事务测试
事务测试成功。
运行事务
  准备中  :                                                                                                                                                         1/1
  安装    : kubectl-1.23.1-0.x86_64                                                                                                                                1/10
  安装    : cri-tools-1.19.0-0.x86_64                                                                                                                              2/10
  安装    : libnetfilter_queue-1.0.4-3.el8.x86_64                                                                                                                  3/10
  运行脚本: libnetfilter_queue-1.0.4-3.el8.x86_64                                                                                                                  3/10
  安装    : libnetfilter_cttimeout-1.0.0-11.el8.x86_64                                                                                                             4/10
  运行脚本: libnetfilter_cttimeout-1.0.0-11.el8.x86_64                                                                                                             4/10
  安装    : libnetfilter_cthelper-1.0.0-15.el8.x86_64                                                                                                              5/10
  运行脚本: libnetfilter_cthelper-1.0.0-15.el8.x86_64                                                                                                              5/10
  安装    : conntrack-tools-1.4.4-10.el8.x86_64                                                                                                                    6/10
  运行脚本: conntrack-tools-1.4.4-10.el8.x86_64                                                                                                                    6/10
  安装    : socat-1.7.4.1-1.el8.x86_64                                                                                                                             7/10
  安装    : kubernetes-cni-0.8.7-0.x86_64                                                                                                                          8/10
  安装    : kubelet-1.23.1-0.x86_64                                                                                                                                9/10
  安装    : kubeadm-1.23.1-0.x86_64                                                                                                                               10/10
  运行脚本: kubeadm-1.23.1-0.x86_64                                                                                                                               10/10
  验证    : socat-1.7.4.1-1.el8.x86_64                                                                                                                             1/10
  验证    : conntrack-tools-1.4.4-10.el8.x86_64                                                                                                                    2/10
  验证    : libnetfilter_cthelper-1.0.0-15.el8.x86_64                                                                                                              3/10
  验证    : libnetfilter_cttimeout-1.0.0-11.el8.x86_64                                                                                                             4/10
  验证    : libnetfilter_queue-1.0.4-3.el8.x86_64                                                                                                                  5/10
  验证    : cri-tools-1.19.0-0.x86_64                                                                                                                              6/10
  验证    : kubeadm-1.23.1-0.x86_64                                                                                                                                7/10
  验证    : kubectl-1.23.1-0.x86_64                                                                                                                                8/10
  验证    : kubelet-1.23.1-0.x86_64                                                                                                                                9/10
  验证    : kubernetes-cni-0.8.7-0.x86_64                                                                                                                         10/10

已安装:
  conntrack-tools-1.4.4-10.el8.x86_64      cri-tools-1.19.0-0.x86_64        kubeadm-1.23.1-0.x86_64                      kubectl-1.23.1-0.x86_64
  kubelet-1.23.1-0.x86_64                  kubernetes-cni-0.8.7-0.x86_64    libnetfilter_cthelper-1.0.0-15.el8.x86_64    libnetfilter_cttimeout-1.0.0-11.el8.x86_64
  libnetfilter_queue-1.0.4-3.el8.x86_64    socat-1.7.4.1-1.el8.x86_64

完毕!
[root@k8s-master01 yum.repos.d]#

Configure SELinux


        Set SELinux to permissive mode by running setenforce 0 and the sed command below, which effectively disables it. This is required to allow containers to access the host filesystem, as needed by pod networks, for example. You have to do this until SELinux support is improved in the kubelet.
        If you know how to configure SELinux you may keep it enabled, but it may require settings that kubeadm does not support.

# Set SELinux in permissive mode (effectively disabling it)
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

Enable kubelet at boot

sudo systemctl enable --now kubelet
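At this point the kubelet will restart every few seconds in a crash loop; that is expected, because it is waiting for kubeadm init (or join) to tell it what to do. You can still confirm the unit is active and enabled:

systemctl is-enabled kubelet
systemctl status kubelet --no-pager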

IV. Initialize the Kubernetes control plane (k8s-master)

1. Run the initialization command

sudo kubeadm init \
--image-repository=registry.aliyuncs.com/google_containers \
--kubernetes-version=v1.23.1 \
--control-plane-endpoint=k8s-adm-api.io \
--apiserver-advertise-address=192.168.0.20 \
--apiserver-bind-port=6443 \
--pod-network-cidr=10.244.0.0/16 \
--service-cidr=10.96.0.0/12 \
--token-ttl=0
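Optionally, the control-plane images can be pulled ahead of time so init does not stall on downloads (the init log below also mentions this option); a sketch using the same mirror and version as the init command:

sudo kubeadm config images pull \
  --image-repository=registry.aliyuncs.com/google_containers \
  --kubernetes-version=v1.23.1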

init log:

[root@k8s-master01 ~]# sudo kubeadm init \
> --v=5 \
> --image-repository=registry.aliyuncs.com/google_containers \
> --kubernetes-version=v1.23.1 \
> --control-plane-endpoint=k8s-adm-api.io \
> --apiserver-advertise-address=192.168.0.20 \
> --apiserver-bind-port=6443 \
> --pod-network-cidr=10.244.0.0/16 \
> --service-cidr=10.96.0.0/12 \
> --token-ttl=0
I1220 23:18:06.419118   15328 initconfiguration.go:117] detected and using CRI socket: /var/run/dockershim.sock
I1220 23:18:06.419241   15328 kubelet.go:217] the value of KubeletConfiguration.cgroupDriver is empty; setting it to "systemd"
[init] Using Kubernetes version: v1.23.1
[preflight] Running pre-flight checks
I1220 23:18:06.428273   15328 checks.go:578] validating Kubernetes and kubeadm version
I1220 23:18:06.428334   15328 checks.go:171] validating if the firewall is enabled and active
        [WARNING Firewalld]: firewalld is active, please ensure ports [6443 10250] are open or your cluster may not function correctly
I1220 23:18:06.442178   15328 checks.go:206] validating availability of port 6443
I1220 23:18:06.442599   15328 checks.go:206] validating availability of port 10259
I1220 23:18:06.442660   15328 checks.go:206] validating availability of port 10257
I1220 23:18:06.442738   15328 checks.go:283] validating the existence of file /etc/kubernetes/manifests/kube-apiserver.yaml
I1220 23:18:06.442799   15328 checks.go:283] validating the existence of file /etc/kubernetes/manifests/kube-controller-manager.yaml
I1220 23:18:06.442828   15328 checks.go:283] validating the existence of file /etc/kubernetes/manifests/kube-scheduler.yaml
I1220 23:18:06.442838   15328 checks.go:283] validating the existence of file /etc/kubernetes/manifests/etcd.yaml
I1220 23:18:06.442926   15328 checks.go:433] validating if the connectivity type is via proxy or direct
I1220 23:18:06.442977   15328 checks.go:472] validating http connectivity to first IP address in the CIDR
I1220 23:18:06.443000   15328 checks.go:472] validating http connectivity to first IP address in the CIDR
I1220 23:18:06.443015   15328 checks.go:107] validating the container runtime
I1220 23:18:06.527837   15328 checks.go:133] validating if the "docker" service is enabled and active
I1220 23:18:06.545668   15328 checks.go:332] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I1220 23:18:06.545725   15328 checks.go:332] validating the contents of file /proc/sys/net/ipv4/ip_forward
I1220 23:18:06.545753   15328 checks.go:654] validating whether swap is enabled or not
I1220 23:18:06.545912   15328 checks.go:373] validating the presence of executable conntrack
I1220 23:18:06.546054   15328 checks.go:373] validating the presence of executable ip
I1220 23:18:06.546747   15328 checks.go:373] validating the presence of executable iptables
I1220 23:18:06.546776   15328 checks.go:373] validating the presence of executable mount
I1220 23:18:06.546949   15328 checks.go:373] validating the presence of executable nsenter
I1220 23:18:06.546962   15328 checks.go:373] validating the presence of executable ebtables
I1220 23:18:06.546968   15328 checks.go:373] validating the presence of executable ethtool
I1220 23:18:06.546973   15328 checks.go:373] validating the presence of executable socat
I1220 23:18:06.546981   15328 checks.go:373] validating the presence of executable tc
        [WARNING FileExisting-tc]: tc not found in system path
I1220 23:18:06.547023   15328 checks.go:373] validating the presence of executable touch
I1220 23:18:06.547030   15328 checks.go:521] running all checks
I1220 23:18:06.637063   15328 checks.go:404] checking whether the given node name is valid and reachable using net.LookupHost
I1220 23:18:06.637325   15328 checks.go:620] validating kubelet version
I1220 23:18:06.695523   15328 checks.go:133] validating if the "kubelet" service is enabled and active
I1220 23:18:06.710099   15328 checks.go:206] validating availability of port 10250
I1220 23:18:06.710243   15328 checks.go:206] validating availability of port 2379
I1220 23:18:06.710308   15328 checks.go:206] validating availability of port 2380
I1220 23:18:06.710345   15328 checks.go:246] validating the existence and emptiness of directory /var/lib/etcd
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
I1220 23:18:06.710561   15328 checks.go:842] using image pull policy: IfNotPresent
I1220 23:18:06.737010   15328 checks.go:851] image exists: registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.1
I1220 23:18:06.764295   15328 checks.go:851] image exists: registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.1
I1220 23:18:06.790066   15328 checks.go:851] image exists: registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.1
I1220 23:18:06.817259   15328 checks.go:851] image exists: registry.aliyuncs.com/google_containers/kube-proxy:v1.23.1
I1220 23:18:06.843945   15328 checks.go:851] image exists: registry.aliyuncs.com/google_containers/pause:3.6
I1220 23:18:06.873264   15328 checks.go:851] image exists: registry.aliyuncs.com/google_containers/etcd:3.5.1-0
I1220 23:18:06.905264   15328 checks.go:851] image exists: registry.aliyuncs.com/google_containers/coredns:v1.8.6
[certs] Using certificateDir folder "/etc/kubernetes/pki"
I1220 23:18:06.905323   15328 certs.go:112] creating a new certificate authority for ca
[certs] Generating "ca" certificate and key
I1220 23:18:07.015229   15328 certs.go:522] validating certificate period for ca certificate
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-adm-api.io k8s-master01.io kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.0.20]
[certs] Generating "apiserver-kubelet-client" certificate and key
I1220 23:18:07.358130   15328 certs.go:112] creating a new certificate authority for front-proxy-ca
[certs] Generating "front-proxy-ca" certificate and key
I1220 23:18:07.518964   15328 certs.go:522] validating certificate period for front-proxy-ca certificate
[certs] Generating "front-proxy-client" certificate and key
I1220 23:18:07.635612   15328 certs.go:112] creating a new certificate authority for etcd-ca
[certs] Generating "etcd/ca" certificate and key
I1220 23:18:07.755153   15328 certs.go:522] validating certificate period for etcd/ca certificate
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master01.io localhost] and IPs [192.168.0.20 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master01.io localhost] and IPs [192.168.0.20 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
I1220 23:18:08.506659   15328 certs.go:78] creating new public/private key files for signing service account users
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
I1220 23:18:08.663092   15328 kubeconfig.go:103] creating kubeconfig file for admin.conf
[kubeconfig] Writing "admin.conf" kubeconfig file
I1220 23:18:08.879687   15328 kubeconfig.go:103] creating kubeconfig file for kubelet.conf
[kubeconfig] Writing "kubelet.conf" kubeconfig file
I1220 23:18:09.051775   15328 kubeconfig.go:103] creating kubeconfig file for controller-manager.conf
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
I1220 23:18:09.316807   15328 kubeconfig.go:103] creating kubeconfig file for scheduler.conf
[kubeconfig] Writing "scheduler.conf" kubeconfig file
I1220 23:18:09.582787   15328 kubelet.go:65] Stopping the kubelet
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
I1220 23:18:09.692257   15328 manifests.go:99] [control-plane] getting StaticPodSpecs
I1220 23:18:09.692544   15328 certs.go:522] validating certificate period for CA certificate
I1220 23:18:09.692597   15328 manifests.go:125] [control-plane] adding volume "ca-certs" for component "kube-apiserver"
I1220 23:18:09.692603   15328 manifests.go:125] [control-plane] adding volume "etc-pki" for component "kube-apiserver"
I1220 23:18:09.692606   15328 manifests.go:125] [control-plane] adding volume "k8s-certs" for component "kube-apiserver"
I1220 23:18:09.695274   15328 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-apiserver" to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
I1220 23:18:09.695309   15328 manifests.go:99] [control-plane] getting StaticPodSpecs
I1220 23:18:09.695535   15328 manifests.go:125] [control-plane] adding volume "ca-certs" for component "kube-controller-manager"
I1220 23:18:09.695557   15328 manifests.go:125] [control-plane] adding volume "etc-pki" for component "kube-controller-manager"
I1220 23:18:09.695562   15328 manifests.go:125] [control-plane] adding volume "flexvolume-dir" for component "kube-controller-manager"
I1220 23:18:09.695566   15328 manifests.go:125] [control-plane] adding volume "k8s-certs" for component "kube-controller-manager"
I1220 23:18:09.695569   15328 manifests.go:125] [control-plane] adding volume "kubeconfig" for component "kube-controller-manager"
I1220 23:18:09.696364   15328 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-controller-manager" to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[control-plane] Creating static Pod manifest for "kube-scheduler"
I1220 23:18:09.696392   15328 manifests.go:99] [control-plane] getting StaticPodSpecs
I1220 23:18:09.696613   15328 manifests.go:125] [control-plane] adding volume "kubeconfig" for component "kube-scheduler"
I1220 23:18:09.697604   15328 manifests.go:154] [control-plane] wrote static Pod manifest for component "kube-scheduler" to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
I1220 23:18:09.698508   15328 local.go:65] [etcd] wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
I1220 23:18:09.698530   15328 waitcontrolplane.go:91] [wait-control-plane] Waiting for the API server to be healthy
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 5.002542 seconds
I1220 23:18:14.702263   15328 uploadconfig.go:110] [upload-config] Uploading the kubeadm ClusterConfiguration to a ConfigMap
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
I1220 23:18:14.711566   15328 uploadconfig.go:124] [upload-config] Uploading the kubelet component config to a ConfigMap
[kubelet] Creating a ConfigMap "kubelet-config-1.23" in namespace kube-system with the configuration for the kubelets in the cluster
NOTE: The "kubelet-config-1.23" naming of the kubelet ConfigMap is deprecated. Once the UnversionedKubeletConfigMap feature gate graduates to Beta the default name will become just "kubelet-config". Kubeadm upgrade will handle this transition transparently.
I1220 23:18:14.718565   15328 uploadconfig.go:129] [upload-config] Preserving the CRISocket information for the control-plane node
I1220 23:18:14.718607   15328 patchnode.go:31] [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-master01.io" as an annotation
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master01.io as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node k8s-master01.io as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: v1ky9a.webpm4umuek8lbkp
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
I1220 23:18:15.746847   15328 clusterinfo.go:47] [bootstrap-token] loading admin kubeconfig
I1220 23:18:15.747285   15328 clusterinfo.go:58] [bootstrap-token] copying the cluster from admin.conf to the bootstrap kubeconfig
I1220 23:18:15.747528   15328 clusterinfo.go:70] [bootstrap-token] creating/updating ConfigMap in kube-public namespace
I1220 23:18:15.749937   15328 clusterinfo.go:84] creating the RBAC rules for exposing the cluster-info ConfigMap in the kube-public namespace
I1220 23:18:15.754257   15328 kubeletfinalize.go:90] [kubelet-finalize] Assuming that kubelet client certificate rotation is enabled: found "/var/lib/kubelet/pki/kubelet-client-current.pem"
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
I1220 23:18:15.754932   15328 kubeletfinalize.go:134] [kubelet-finalize] Restarting the kubelet to enable client certificate rotation
[addons] Applied essential addon: CoreDNS
I1220 23:18:16.138991   15328 request.go:597] Waited for 196.96535ms due to client-side throttling, not priority and fairness, request: POST:https://k8s-adm-api.io:6443/apis/rbac.authorization.k8s.io/v1/namespaces/kube-system/rolebindings?timeout=10s
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join k8s-adm-api.io:6443 --token v1ky9a.webpm4umuek8lbkp \
        --discovery-token-ca-cert-hash sha256:8942b421b0806c82d6ace8116b547cc23e38f194912d2d31005f777ffa73f79e \
        --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join k8s-adm-api.io:6443 --token v1ky9a.webpm4umuek8lbkp \
        --discovery-token-ca-cert-hash sha256:8942b421b0806c82d6ace8116b547cc23e38f194912d2d31005f777ffa73f79e
[root@k8s-master01 ~]#

Issues encountered during init:
Note: whenever an error shows up while installing Kubernetes, after attempting a fix it is best to run kubeadm reset -f once to clear kubeadm's state before verifying whether the error is gone; otherwise the previous failed attempt may mask the fix or trigger new errors.

1. kubeadm init fails with "dial tcp 127.0.0.1:10248: connect: connection refused"

[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

        Unfortunately, an error has occurred:
                timed out waiting for the condition

        This error is likely caused by:
                - The kubelet is not running
                - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                - 'systemctl status kubelet'
                - 'journalctl -xeu kubelet'

        Additionally, a control plane component may have crashed or exited when started by the container runtime.
        To troubleshoot, list all containers using your preferred container runtimes CLI.

        Here is one example how you may list all Kubernetes containers running in docker:
                - 'docker ps -a | grep kube | grep -v pause'
                Once you have found the failing container, you can inspect its logs with:
                - 'docker logs CONTAINERID'

couldn't initialize a Kubernetes cluster

 Solution: this is a cgroup driver mismatch. kubeadm configures the kubelet to use the systemd cgroup driver by default, while Docker defaults to cgroupfs; the two must match. Change Docker's cgroup driver by creating the configuration file /etc/docker/daemon.json and adding the following line:

# add the line "exec-opts": ["native.cgroupdriver=systemd"]
sudo mkdir -p /etc/docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

# reload the systemd configuration
sudo systemctl daemon-reload
# restart Docker so the change takes effect
sudo systemctl restart docker
# restart the kubelet
sudo systemctl restart kubelet
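After the restart, confirm the driver actually changed, then clear the failed first attempt before running kubeadm init again; a sketch:

docker info --format '{{.CgroupDriver}}'   # should now print: systemd
sudo kubeadm reset -f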

2. Configure the kubectl command-line tool

Running kubectl get nodes at this point is refused, because the .kube directory and config file have not been set up yet; that config file contains the certificates, tokens, and other credentials used to authenticate to the cluster.
As the kubeadm init output above suggests, run the following commands:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

3. Install a Pod network add-on (Flannel)

        You must deploy a Container Network Interface (CNI) based Pod network add-on so that your Pods can communicate with each other. Cluster DNS (CoreDNS) will not start up before a network is installed.
As the kubeadm init output suggests, run "kubectl apply -f [podnetwork].yaml" with one of the options listed at https://kubernetes.io/docs/concepts/cluster-administration/addons/

        Before Flannel is installed, the nodes are in the NotReady state, for example:

# before the Flannel network is configured, the node status is NotReady
[root@k8s-master01 ~]# kubectl get nodes
NAME              STATUS   ROLES                  AGE     VERSION
k8s-master01.io   NotReady    control-plane,master   3d21h   v1.23.1
[root@k8s-master01 ~]#

   According to the https://github.com/flannel-io/flannel project page, the following command is required:
For Kubernetes v1.17+: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Note: if this command cannot reach the manifest because of network problems, fetch kube-flannel.yml from the GitHub project and apply it from a local file (a download sketch follows the failed attempt below):

# record of the failed remote apply
[root@k8s-master01 ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
The connection to the server raw.githubusercontent.com was refused - did you specify the right host or port?
[root@k8s-master01 ~]#
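A sketch of fetching the manifest on a machine that can reach GitHub and copying it to the master (the /home/temp path matches the local apply below; adjust as needed):

# on a host with GitHub access
curl -fsSLo kube-flannel.yml https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# copy it to the master, then apply it from the local path
scp kube-flannel.yml root@k8s-master01:/home/temp/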


# kube-flannel.yml obtained from the https://github.com/flannel-io/flannel project and applied from a local file:
[root@k8s-master01 flannel]# kubectl apply -f /home/temp/kube-flannel.yml
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/psp.flannel.unprivileged configured
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
[root@k8s-master01 flannel]#

# restart the kubelet and check the nodes
[root@k8s-master01 flannel]# sudo systemctl restart kubelet
[root@k8s-master01 flannel]# kubectl get nodes
NAME              STATUS   ROLES                  AGE     VERSION
k8s-master01.io   Ready    control-plane,master   3d21h   v1.23.1
[root@k8s-master01 flannel]#

[root@k8s-master01 ~]# kubectl get pods -n kube-system | grep flannel
kube-flannel-ds-fnvrg                     1/1     Running   4 (114s ago)   86m
[root@k8s-master01 ~]#

4. Other checks (verify the master services are running)
First, make sure the kubelet connects to the apiserver normally. Running netstat -antpl | grep 6443 shows the kubelet's established connection to the apiserver at 192.168.0.20:6443:

# check the API server listening port and the kubelet connections on this node
[root@k8s-master01 ~]# ss -antulp | grep :6443
tcp   LISTEN 0      128                *:6443             *:*    users:(("kube-apiserver",pid=12644,fd=7))
[root@k8s-master01 ~]# netstat -antpl  |grep kubelet
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      12876/kubelet
tcp        0      0 127.0.0.1:38319         0.0.0.0:*               LISTEN      12876/kubelet
tcp        0      0 192.168.0.20:44842      192.168.0.20:6443       ESTABLISHED 12876/kubelet
tcp6       0      0 :::10250                :::*                    LISTEN      12876/kubelet
[root@k8s-master01 ~]#

# list the pods running in the kube-system namespace
[root@k8s-master01 ~]# kubectl get pods -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
coredns-6d8c4cb4d-5l9m2                   1/1     Running   0          9h
coredns-6d8c4cb4d-s4cxs                   1/1     Running   0          9h
etcd-k8s-master01.io                      1/1     Running   9          9h
kube-apiserver-k8s-master01.io            1/1     Running   10         9h
kube-controller-manager-k8s-master01.io   1/1     Running   9          9h
kube-flannel-ds-27txv                     1/1     Running   0          32m
kube-flannel-ds-2czhb                     1/1     Running   0          9h
kube-proxy-4xrdb                          1/1     Running   0          9h
kube-proxy-pcnx2                          1/1     Running   0          32m
kube-scheduler-k8s-master01.io            1/1     Running   9          9h
[root@k8s-master01 ~]#

# check the status of the cluster components
[root@k8s-master01 ~]# kubectl get ComponentStatus
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true","reason":""}
[root@k8s-master01 ~]#

 Next, check that the apiserver can be reached. If it cannot, adding worker nodes will fail with connection errors such as:

I1225 07:36:37.023939    4870 token.go:217] [discovery] Failed to request cluster-info, will try again: Get "https://k8s-adm-api.io:6443/api/v1/namespaces/kube-public/configmaps/cluster-info?timeout=10s": dial tcp 192.168.0.20:6443: connect: no route to host

If it cannot be reached, the firewall is the likely cause: open the required ports or disable the firewall.

  • Telnet to port 6443
    # check that the master's port 6443 is reachable
    [root@k8s-master01 ~]# telnet 192.168.0.20 6443
    Trying 192.168.0.20...
    Connected to 192.168.0.20.
    Escape character is '^]'.
  • Configure the server firewall
### Stop/disable firewalld (or start/enable it again)
$ sudo systemctl stop firewalld       # stop it now
$ sudo systemctl disable firewalld    # keep it off across reboots
$ sudo systemctl start firewalld      # start it again
$ sudo systemctl enable firewalld     # start it on boot again

### Adding ports on the firewall
# check whether a port is open
$ sudo firewall-cmd --query-port=<port>/tcp
# list the open ports
$ sudo firewall-cmd --zone=public --list-ports
# list the ports being listened on
$ sudo netstat -lntp
# open a single port
$ sudo firewall-cmd --add-port=<port>/<protocol> --permanent
# open a range of ports
$ sudo firewall-cmd --zone=public --add-port=<port_min>-<port_max>/<protocol> --permanent
$ sudo firewall-cmd --reload
$ sudo firewall-cmd --list-all
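For example, to keep firewalld running on the master while opening the ports the kubeadm preflight warning mentioned (6443 for the API server, 10250 for the kubelet); a sketch:

$ sudo firewall-cmd --add-port=6443/tcp --permanent
$ sudo firewall-cmd --add-port=10250/tcp --permanent
$ sudo firewall-cmd --reload
$ sudo firewall-cmd --list-ports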

[root@wenjay ~]# sudo firewall-cmd --add-port=5000/tcp --permanent
success

[root@wenjay ~]# sudo firewall-cmd --query-port=5000/tcp
yes
[root@wenjay ~]#

[root@wenjay ~]# sudo firewall-cmd --list-all
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: ens160
  sources:
  services: cockpit dhcpv6-client ssh
  ports: 5000/tcp
  protocols:
  forward: no
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
[root@wenjay ~]#
# view the kubeadm configuration
[root@k8s-master01 ~]# kubectl -n kube-system get cm kubeadm-config -o yaml
apiVersion: v1
data:
  ClusterConfiguration: |
    apiServer:
      extraArgs:
        authorization-mode: Node,RBAC
      timeoutForControlPlane: 4m0s
    apiVersion: kubeadm.k8s.io/v1beta3
    certificatesDir: /etc/kubernetes/pki
    clusterName: kubernetes
    controlPlaneEndpoint: k8s-adm-api.io:6443
    controllerManager: {}
    dns: {}
    etcd:
      local:
        dataDir: /var/lib/etcd
    imageRepository: registry.aliyuncs.com/google_containers
    kind: ClusterConfiguration
    kubernetesVersion: v1.23.1
    networking:
      dnsDomain: cluster.local
      podSubnet: 10.244.0.0/16
      serviceSubnet: 10.96.0.0/12
    scheduler: {}
kind: ConfigMap
metadata:
  creationTimestamp: "2021-12-24T14:37:29Z"
  name: kubeadm-config
  namespace: kube-system
  resourceVersion: "212"
  uid: 04d5eaaf-121e-463b-b201-3c3c58e232e6
[root@k8s-master01 ~]#

V. Initialize the Kubernetes worker nodes and join them to the cluster (k8s-node)

When a node joins a cluster initialized with kubeadm, mutual trust must be established. The process breaks down into discovery (having the joining node trust the Kubernetes control plane) and TLS bootstrap (having the control plane trust the joining node). There are two main ways to perform the join. The first uses a shared token together with the IP address of the API server. The second provides a file, a subset of a standard kubeconfig file. When using shared-token discovery, you should also pass --discovery-token-ca-cert-hash to validate the public key of the root certificate authority (CA) presented by the control plane. The value takes the form "<hash-type>:<hex-encoded-value>", where the only supported hash type is "sha256".
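If the original token has expired (the default lifetime is 24 hours, unless --token-ttl=0 was passed to kubeadm init as it was above) or the join command has been lost, it can be regenerated on the control-plane node; a sketch:

# print a fresh, complete join command
sudo kubeadm token create --print-join-command

# or recompute the discovery hash from the cluster CA by hand
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'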

1. Join the node to the cluster

To join with the shared token and the API server address, simply copy the kubeadm join command printed by kubeadm init and run it on the node:

# join the cluster using the shared token and the API server address
[root@k8s-node01 ~]# kubeadm join k8s-adm-api.io:6443 --v=5 --token uncf6j.0j1hcjwijo3nba1o \
>         --discovery-token-ca-cert-hash sha256:4fa45f19983e94f03235539754efa3c2b6268e487c1f5b42c7a06a365825a49f

 join log

[root@k8s-node01 ~]# kubeadm join k8s-adm-api.io:6443 --v=5 --token uncf6j.0j1hcjwijo3nba1o \
>         --discovery-token-ca-cert-hash sha256:4fa45f19983e94f03235539754efa3c2b6268e487c1f5b42c7a06a365825a49f
I1225 09:40:43.207372   22631 join.go:413] [preflight] found NodeName empty; using OS hostname as NodeName
I1225 09:40:43.207485   22631 initconfiguration.go:117] detected and using CRI socket: /var/run/dockershim.sock
[preflight] Running pre-flight checks
I1225 09:40:43.207542   22631 preflight.go:92] [preflight] Running general checks
I1225 09:40:43.207586   22631 checks.go:283] validating the existence of file /etc/kubernetes/kubelet.conf
I1225 09:40:43.207607   22631 checks.go:283] validating the existence of file /etc/kubernetes/bootstrap-kubelet.conf
I1225 09:40:43.207614   22631 checks.go:107] validating the container runtime
I1225 09:40:43.294867   22631 checks.go:133] validating if the "docker" service is enabled and active
I1225 09:40:43.312080   22631 checks.go:332] validating the contents of file /proc/sys/net/bridge/bridge-nf-call-iptables
I1225 09:40:43.312145   22631 checks.go:332] validating the contents of file /proc/sys/net/ipv4/ip_forward
I1225 09:40:43.312180   22631 checks.go:654] validating whether swap is enabled or not
I1225 09:40:43.312226   22631 checks.go:373] validating the presence of executable conntrack
I1225 09:40:43.312241   22631 checks.go:373] validating the presence of executable ip
I1225 09:40:43.312250   22631 checks.go:373] validating the presence of executable iptables
I1225 09:40:43.312281   22631 checks.go:373] validating the presence of executable mount
I1225 09:40:43.312305   22631 checks.go:373] validating the presence of executable nsenter
I1225 09:40:43.312319   22631 checks.go:373] validating the presence of executable ebtables
I1225 09:40:43.312326   22631 checks.go:373] validating the presence of executable ethtool
I1225 09:40:43.312332   22631 checks.go:373] validating the presence of executable socat
I1225 09:40:43.312339   22631 checks.go:373] validating the presence of executable tc
        [WARNING FileExisting-tc]: tc not found in system path
I1225 09:40:43.312400   22631 checks.go:373] validating the presence of executable touch
I1225 09:40:43.312416   22631 checks.go:521] running all checks
I1225 09:40:43.392597   22631 checks.go:404] checking whether the given node name is valid and reachable using net.LookupHost
I1225 09:40:43.392706   22631 checks.go:620] validating kubelet version
I1225 09:40:43.443304   22631 checks.go:133] validating if the "kubelet" service is enabled and active
I1225 09:40:43.454595   22631 checks.go:206] validating availability of port 10250
I1225 09:40:43.454737   22631 checks.go:283] validating the existence of file /etc/kubernetes/pki/ca.crt
I1225 09:40:43.454785   22631 checks.go:433] validating if the connectivity type is via proxy or direct
I1225 09:40:43.454839   22631 join.go:530] [preflight] Discovering cluster-info
I1225 09:40:43.454866   22631 token.go:80] [discovery] Created cluster-info discovery client, requesting info from "k8s-adm-api.io:6443"
I1225 09:40:43.464496   22631 token.go:118] [discovery] Requesting info from "k8s-adm-api.io:6443" again to validate TLS against the pinned public key
I1225 09:40:43.471566   22631 token.go:135] [discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "k8s-adm-api.io:6443"
I1225 09:40:43.471610   22631 discovery.go:52] [discovery] Using provided TLSBootstrapToken as authentication credentials for the join process
I1225 09:40:43.471625   22631 join.go:544] [preflight] Fetching init configuration
I1225 09:40:43.471629   22631 join.go:590] [preflight] Retrieving KubeConfig objects
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
I1225 09:40:43.475669   22631 kubelet.go:91] attempting to download the KubeletConfiguration from the new format location (UnversionedKubeletConfigMap=true)
I1225 09:40:43.808582   22631 kubelet.go:94] attempting to download the KubeletConfiguration from the DEPRECATED location (UnversionedKubeletConfigMap=false)
I1225 09:40:43.811661   22631 interface.go:432] Looking for default routes with IPv4 addresses
I1225 09:40:43.811685   22631 interface.go:437] Default route transits interface "ens160"
I1225 09:40:43.811963   22631 interface.go:209] Interface ens160 is up
I1225 09:40:43.812020   22631 interface.go:257] Interface "ens160" has 2 addresses :[192.168.0.23/24 fe80::20c:29ff:fed7:893/64].
I1225 09:40:43.812050   22631 interface.go:224] Checking addr  192.168.0.23/24.
I1225 09:40:43.812056   22631 interface.go:231] IP found 192.168.0.23
I1225 09:40:43.812061   22631 interface.go:263] Found valid IPv4 address 192.168.0.23 for interface "ens160".
I1225 09:40:43.812065   22631 interface.go:443] Found active IP 192.168.0.23
I1225 09:40:43.818447   22631 preflight.go:103] [preflight] Running configuration dependant checks
I1225 09:40:43.818481   22631 controlplaneprepare.go:220] [download-certs] Skipping certs download
I1225 09:40:43.818489   22631 kubelet.go:119] [kubelet-start] writing bootstrap kubelet config file at /etc/kubernetes/bootstrap-kubelet.conf
I1225 09:40:43.819106   22631 kubelet.go:134] [kubelet-start] writing CA certificate at /etc/kubernetes/pki/ca.crt
I1225 09:40:43.819637   22631 kubelet.go:155] [kubelet-start] Checking for an existing Node in the cluster with name "k8s-node01.io" and status "Ready"
I1225 09:40:43.822079   22631 kubelet.go:170] [kubelet-start] Stopping the kubelet
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
I1225 09:40:48.943687   22631 cert_rotation.go:137] Starting client certificate rotation controller
I1225 09:40:48.945629   22631 kubelet.go:218] [kubelet-start] preserving the crisocket information for the node
I1225 09:40:48.945652   22631 patchnode.go:31] [patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-node01.io" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

2. Configure the kubectl command-line tool

Copy admin.conf from the master node to this node (a copy sketch follows the commands below), then run:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
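A sketch of fetching the file straight from the master over SSH instead (assuming root SSH access to k8s-master01):

  mkdir -p $HOME/.kube
  scp root@k8s-master01:/etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config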

3. Install the Pod network add-on (Flannel)

Install the Flannel component and configure the Pod network in the same way as on the master node.

4. Check the node after installation

Check the result of the node join; a line like "k8s-node01.io   Ready   <none>   15s   v1.23.1" in the output below means the node joined successfully.

[root@k8s-node01 ~]# kubectl get nodes
NAME              STATUS   ROLES                  AGE   VERSION
k8s-master01.io   Ready    control-plane,master   11h   v1.23.1
k8s-node01.io     Ready    <none>                 15s   v1.23.1
[root@k8s-node01 ~]#

5. Remove a node from the cluster

On the master node, run kubectl drain <node name> followed by kubectl delete node <node name> (a cleanup sketch for the removed machine follows at the end of this subsection).

# node status before removal
[root@k8s-master01 ~]# kubectl get nodes
NAME              STATUS   ROLES                  AGE     VERSION
k8s-master01.io   Ready    control-plane,master   11h     v1.23.1
k8s-node01.io     Ready    <none>                 46m     v1.23.1
k8s-node02.io     Ready    <none>                 3m26s   v1.23.1
[root@k8s-master01 ~]#

# drain node k8s-node02.io
[root@k8s-master01 ~]# kubectl drain k8s-node02.io --delete-local-data --force --ignore-daemonsets
Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
node/k8s-node02.io cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-flannel-ds-mpfgh, kube-system/kube-proxy-wsb7b
node/k8s-node02.io drained
[root@k8s-master01 ~]# kubectl get nodes
NAME              STATUS                     ROLES                  AGE    VERSION
k8s-master01.io   Ready                      control-plane,master   11h    v1.23.1
k8s-node01.io     Ready                      <none>                 47m    v1.23.1
k8s-node02.io     Ready,SchedulingDisabled   <none>                 4m2s   v1.23.1
[root@k8s-master01 ~]#

# delete the node
[root@k8s-master01 ~]# kubectl delete node k8s-node02.io
node "k8s-node02.io" deleted
[root@k8s-master01 ~]#

[root@k8s-master01 ~]# kubectl get nodes
NAME              STATUS   ROLES                  AGE   VERSION
k8s-master01.io   Ready    control-plane,master   11h   v1.23.1
k8s-node01.io     Ready    <none>                 53m   v1.23.1
[root@k8s-master01 ~]#
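After the node has been deleted from the API server, it is also common to clean up kubeadm state on the removed machine itself before reusing or rejoining it; a sketch (run on k8s-node02):

sudo kubeadm reset -f
# kubeadm reset does not clean up iptables/IPVS rules or the CNI configuration; optionally:
sudo rm -rf /etc/cni/net.d
sudo iptables -F && sudo iptables -t nat -F && sudo iptables -t mangle -F && sudo iptables -X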

VI. Web UI (Dashboard)

Dashboard is a web-based Kubernetes user interface. You can use it to deploy containerized applications to the cluster, troubleshoot them, and manage cluster resources. It gives you an overview of the applications running in the cluster and lets you create or modify Kubernetes resources (Deployments, Jobs, DaemonSets, and so on). For example, you can scale a Deployment, trigger a rolling update, restart a Pod, or deploy a new application with a wizard.

Dashboard also shows the state of the resources in the Kubernetes cluster and any errors.

1. Deploy the Dashboard UI

Deploy the Dashboard with recommended.yaml from the official Kubernetes dashboard project. If the remote apply fails because of network problems, the workaround is to fetch the YAML file (latest version v2.4.0) directly from the GitHub project and apply it locally.
https://github.com/kubernetes/dashboard/blob/v2.4.0/aio/deploy/recommended.yaml

# following the official Kubernetes installation instructions
[root@k8s-master01 ~]# kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml
The connection to the server raw.githubusercontent.com was refused - did you specify the right host or port?

# apply the already-downloaded recommended.yaml from a local path
[root@k8s-master01 ~]# kubectl apply -f /home/temp/dashboard/recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
[root@k8s-master01 ~]#

Check the Dashboard Service and Pod status; Running means the Dashboard started normally.

# kubernetes-dashboard is the Dashboard's namespace
[root@k8s-master01 ~]# kubectl get svc,pods  -n kubernetes-dashboard
NAME                                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
service/dashboard-metrics-scraper   ClusterIP   10.105.200.180   <none>        8000/TCP        27d
service/kubernetes-dashboard        NodePort    10.111.231.249   <none>        443:31848/TCP   27d

NAME                                             READY   STATUS    RESTARTS   AGE
pod/dashboard-metrics-scraper-799d786dbf-7m7gl   1/1     Running   22         27d
pod/kubernetes-dashboard-6b6b86c4c5-jxvmb        1/1     Running   24         27d
[root@k8s-master01 ~]#

Check the cluster endpoints

[root@k8s-master01 ~]# kubectl cluster-info
Kubernetes control plane is running at https://k8s-adm-api.io:6443
CoreDNS is running at https://k8s-adm-api.io:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@k8s-master01 ~]#

2. Access the Dashboard UI

To protect your cluster data, the Dashboard is deployed with a minimal RBAC configuration by default. It currently only supports logging in with a Bearer token.

You can reach the Dashboard through the kubectl proxy command shown below, but this way it is only accessible from the machine where the command is run.
Proxy URL (kubectl proxy listens on 127.0.0.1:8001): http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/

# kubectl proxy
[root@k8s-master01 ~]# kubectl proxy
Starting to serve on 127.0.0.1:8001

The Dashboard can be reached in several ways, for example kubectl proxy, kubectl port-forward, a node port, an Ingress, or the API server. The Service object created by default is of type ClusterIP, which is only reachable from clients inside the cluster. To reach the Dashboard from a browser outside the cluster, change the Service type to NodePort and access it through the assigned port.

# change the Service object's type to NodePort
# supported values: "ClusterIP", "ExternalName", "LoadBalancer", "NodePort"
[root@k8s-master01 ~]# kubectl patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}' -n kubernetes-dashboard
service/kubernetes-dashboard patched
[root@k8s-master01 ~]#

If nodePort is not explicitly specified, the Service controller assigns a random port; use the commands below to find the assigned port so the Dashboard can be reached from a browser outside the cluster.

[root@k8s-master01 ~]# kubectl get services/kubernetes-dashboard -n kubernetes-dashboard
NAME                   TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
kubernetes-dashboard   NodePort   10.111.231.249   <none>        443:31848/TCP   71m
[root@k8s-master01 ~]#

[root@k8s-master01 system]# kubectl get services --all-namespaces
NAMESPACE              NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
default                kubernetes                  ClusterIP   10.96.0.1        <none>        443/TCP                  16h
kube-system            kube-dns                    ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP   16h
kubernetes-dashboard   dashboard-metrics-scraper   ClusterIP   10.105.200.180   <none>        8000/TCP                 3h19m
kubernetes-dashboard   kubernetes-dashboard        NodePort    10.111.231.249   <none>        443:31848/TCP            3h19m


[root@k8s-master01 system]# netstat -nlp | grep 31848
tcp        0      0 0.0.0.0:31848           0.0.0.0:*               LISTEN      45760/kube-proxy
[root@k8s-master01 system]#

Once the port is known, log in to the Dashboard at https://k8s-master01.io:31848/login

 3. Obtain a login token

Create a manifest file dashboard-adminuser.yaml with the following content, then apply it with kubectl create -f dashboard-adminuser.yaml:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

 To get the admin-user token: first, find the Secret named admin-user-token-<****> in the kubernetes-dashboard namespace; second, read the token field from that Secret (admin-user-token-ccnwf here).

[root@k8s-master01 ~]# kubectl get secret -n kubernetes-dashboard
NAME                               TYPE                                  DATA   AGE
admin-user-token-ccnwf             kubernetes.io/service-account-token   3      13d
default-token-95m94                kubernetes.io/service-account-token   3      27d
kubernetes-dashboard-certs         Opaque                                0      27d
kubernetes-dashboard-csrf          Opaque                                1      27d
kubernetes-dashboard-key-holder    Opaque                                2      27d
kubernetes-dashboard-token-wq25m   kubernetes.io/service-account-token   3      27d
[root@k8s-master01 ~]# 
[root@k8s-master01 ~]#
# read the token
[root@k8s-master01 ~]# kubectl describe secret admin-user-token-ccnwf -n kubernetes-dashboard
Name:         admin-user-token-ccnwf
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 53806aa5-6175-4cd1-a700-1d65fd582365

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1099 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IkFQakx3M2p0YU53RENiUl8xMFAxZ3JlakFIYmtsMFFFSUJhZ1h5d1JaVncifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWNjbndmIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiI1MzgwNmFhNS02MTc1LTRjZDEtYTcwMC0xZDY1ZmQ1ODIzNjUiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZXJuZXRlcy1kYXNoYm9hcmQ6YWRtaW4tdXNlciJ9.Myersn2R1J1J54Ps4o_ZBCriZZGOP_fPsxywi2QkrpUJ_o5GyDB1JKxnWia3iFHXBb7-zyQ5G7NyC50ixrrEfY_mlvLW-o4fhlwwSSl9CMHoVneafaUKbHWZvvNIz-DagQgqHtMqSwPia67rUvTL1mL1f1UG9maGtOKfm5rv6T0753ScY4YlVEYN8MAPRSgTfIFzxJSGzc5X5XGqiAuG1zAR8osl0GwNCTdGqgYR9i56QtHRIA2ezvBOJHJv8XX31x9YDBWIoTNTKfe-JIbWrn3IRwb-PG4hWWMKVc3I_2JC0B59AorzkiYKSjOh9vu-e_VIwqYAU3Llwq8fN8jUXg
[root@k8s-master01 ~]#
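The two steps can also be collapsed into a single command that prints just the decoded token; a sketch, assuming the ServiceAccount is named admin-user as in the manifest above:

kubectl -n kubernetes-dashboard get secret \
  $(kubectl -n kubernetes-dashboard get sa admin-user -o jsonpath='{.secrets[0].name}') \
  -o jsonpath='{.data.token}' | base64 -d && echo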

Select Token as the sign-in method, paste the token obtained above, and click Sign in to open the Dashboard.
