Kubernetes 1.26.3 High-Availability Cluster


0、Server Information

The servers are Tencent Cloud instances (pay-as-you-go, traffic-based billing; the public IPs are converted to elastic IPs, so the machines can be shut down after use and only fixed resources such as disks are billed).

| Server name | IP | Description | Components |
| --- | --- | --- | --- |
| Kubernetes1 | 124.223.218.159 | master1 | etcd, apiserver, controller-manager, scheduler, kubelet, kube-proxy |
| Kubernetes2 | 124.222.44.181 | master2 | etcd, apiserver, controller-manager, scheduler, kubelet, kube-proxy |
| Kubernetes3 | 124.223.197.142 | master3 | etcd, apiserver, controller-manager, scheduler, kubelet, kube-proxy |
| Kubernetes4 | 124.222.142.13 | node1 | kubelet, kube-proxy |
| Kubernetes5 | 124.223.208.10 | node2 | kubelet, kube-proxy |
| Kubernetes6 | 124.221.179.182 | node3 | kubelet, kube-proxy |
| Kubernetes7 | 49.234.50.98 | load balancer | nginx |

| Network segment | Description |
| --- | --- |
| 10.19.0.0/16 | machine intranet |
| 10.96.0.0/16 | Service network |
| 192.168.0.0/16 | Pod network |

1、Environment Preparation

Unless otherwise noted, every step must be executed on all machines.

1.1、Host configuration

hostnamectl set-hostname "Kubernetes1" --static
echo "127.0.0.1   $(hostname)" >> /etc/hosts
/etc/init.d/network restart
cat>>/etc/hosts<< EOF
10.19.0.5  Kubernetes1
10.19.0.9  Kubernetes2
10.19.0.11 Kubernetes3
10.19.0.2  Kubernetes4
10.19.0.4  Kubernetes5
10.19.0.12 Kubernetes6
EOF
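To confirm the entries took effect, a quick reachability check can be run from any node; this is a minimal sketch (adjust the host list if yours differs):

# ping every cluster host once by name
for h in Kubernetes1 Kubernetes2 Kubernetes3 Kubernetes4 Kubernetes5 Kubernetes6; do
    ping -c 1 -W 1 $h > /dev/null && echo "$h reachable" || echo "$h FAILED"
done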

1.2、Passwordless SSH

# Generate an SSH key pair
[root@Kubernetes1 ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:5T64EqJDkJMHYMXBs4OqQGWKjtPbF5yvTfh0V3Ynpso root@Kubernetes1
The key's randomart image is:
+---[RSA 2048]----+
|o.+o.            |
|o  *             |
|.+= o     .      |
|*+.o     o       |
|==  .. .S .   = o|
|=.o . =. o   = o.|
|oo + ..++ + o    |
|. + . o=.+ +     |
|   . ..o+ E      |
+----[SHA256]-----+
# Copy id_rsa.pub to the target machines; after that they can be reached without a password (the first copy prompts for the password)
[root@Kubernetes1 ~]# for i in Kubernetes2 Kubernetes3 Kubernetes4 Kubernetes5 Kubernetes6;do ssh-copy-id -i .ssh/id_rsa.pub $i;done
······
# Test passwordless login
[root@Kubernetes1 ~]# ssh root@Kubernetes2
Last login: Tue May  9 22:54:58 2023 from 183.195.73.137
[root@Kubernetes2 ~]# hostname
Kubernetes2
[root@Kubernetes2 ~]# exit
logout
Connection to Kubernetes2 closed.
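With key-based login working, later "run on every machine" steps can be scripted from Kubernetes1; the loop below is a sketch of that pattern using the hostnames configured above:

# run a command on every other node over SSH
for i in Kubernetes2 Kubernetes3 Kubernetes4 Kubernetes5 Kubernetes6; do
    ssh root@$i "hostname; uname -r"
done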

1.3、Kernel upgrade

# Check the kernel (3.10 is known to be unstable in large clusters; upgrade to 4.19+)
[root@Kubernetes1 ~]# uname -sr
Linux 3.10.0-1160.88.1.el7.x86_64
# Update packages, excluding the kernel
[root@Kubernetes1 ~]# yum update -y --exclude=kernel*
······
# Import the public key of the ELRepo repository
[root@Kubernetes1 ~]# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
[root@Kubernetes1 ~]# rpm -Uvh https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm
Retrieving https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:elrepo-release-7.0-6.el7.elrepo  ################################# [100%]
# Install the fastest-mirror plugin
[root@Kubernetes1 ~]# yum install -y yum-plugin-fastestmirror
······
# List the kernel versions currently available for upgrade
[root@Kubernetes1 ~]# yum --disablerepo="*" --enablerepo="elrepo-kernel" list available
······
# Install the new kernel (this takes a while)
[root@Kubernetes1 ~]# yum --enablerepo=elrepo-kernel install -y kernel-lt
······
# List the kernels installed on the system
[root@Kubernetes1 ~]# awk -F\' '$1=="menuentry " {print i++ " : " $2}' /etc/grub2.cfg
0 : CentOS Linux 7 Rescue 95a46ad0ee7f4772b6251edf4514b358 (5.4.242-1.el7.elrepo.x86_64)
1 : CentOS Linux (5.4.242-1.el7.elrepo.x86_64) 7 (Core)
2 : CentOS Linux (3.10.0-1160.88.1.el7.x86_64) 7 (Core)
3 : CentOS Linux (0-rescue-ba63ad6a0c7246dd8b30c727aae0e195) 7 (Core)
# Regenerate the GRUB configuration
[root@Kubernetes1 ~]# grub2-mkconfig -o /boot/grub2/grub.cfg
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-5.4.242-1.el7.elrepo.x86_64
Found initrd image: /boot/initramfs-5.4.242-1.el7.elrepo.x86_64.img
Found linux image: /boot/vmlinuz-3.10.0-1160.88.1.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-1160.88.1.el7.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-95a46ad0ee7f4772b6251edf4514b358
Found initrd image: /boot/initramfs-0-rescue-95a46ad0ee7f4772b6251edf4514b358.img
Found linux image: /boot/vmlinuz-0-rescue-ba63ad6a0c7246dd8b30c727aae0e195
Found initrd image: /boot/initramfs-0-rescue-ba63ad6a0c7246dd8b30c727aae0e195.img
done
# List the installed kernels again (this output was captured on Kubernetes6)
[root@Kubernetes6 ~]# awk -F\' '$1=="menuentry " {print i++ " : " $2}' /etc/grub2.cfg
0 : CentOS Linux (5.4.242-1.el7.elrepo.x86_64) 7 (Core)
1 : CentOS Linux (3.10.0-1160.88.1.el7.x86_64) 7 (Core)
2 : CentOS Linux (0-rescue-92581b1e8bfc4373a10654f47c8911f3) 7 (Core)
3 : CentOS Linux (0-rescue-ba63ad6a0c7246dd8b30c727aae0e195) 7 (Core)
# Set the default kernel (GRUB_DEFAULT=0)
[root@Kubernetes1 ~]# vi /etc/default/grub
[root@Kubernetes1 ~]# cat /etc/default/grub
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=0
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL="serial console"
GRUB_TERMINAL_OUTPUT="serial console"
GRUB_CMDLINE_LINUX="crashkernel=2G-8G:256M,8G-16G:512M,16G-:768M console=ttyS0,115200 console=tty0 panic=5 net.ifnames=0 biosdevname=0 intel_idle.max_cstate=1 intel_pstate=disable processor.max_cstate=1 amd_iommu=on iommu=pt"
GRUB_DISABLE_RECOVERY="true"
GRUB_SERIAL_COMMAND="serial --speed=115200 --unit=0 --word=8 --parity=no --stop=1"
# Regenerate the GRUB configuration again
[root@Kubernetes1 ~]# grub2-mkconfig -o /boot/grub2/grub.cfg
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-5.4.242-1.el7.elrepo.x86_64
Found initrd image: /boot/initramfs-5.4.242-1.el7.elrepo.x86_64.img
Found linux image: /boot/vmlinuz-3.10.0-1160.88.1.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-1160.88.1.el7.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-95a46ad0ee7f4772b6251edf4514b358
Found initrd image: /boot/initramfs-0-rescue-95a46ad0ee7f4772b6251edf4514b358.img
Found linux image: /boot/vmlinuz-0-rescue-ba63ad6a0c7246dd8b30c727aae0e195
Found initrd image: /boot/initramfs-0-rescue-ba63ad6a0c7246dd8b30c727aae0e195.img
done
# Reboot
[root@Kubernetes1 ~]# reboot
# Verify the kernel after the reboot
[root@Kubernetes1 ~]# uname -sr
Linux 5.4.242-1.el7.elrepo.x86_64
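As an optional alternative to editing GRUB_DEFAULT in /etc/default/grub by hand, the default entry can also be chosen non-interactively; this is a sketch that assumes GRUB_DEFAULT=saved, not the method used above:

# select the new kernel as the saved default entry (requires GRUB_DEFAULT=saved)
grub2-set-default "CentOS Linux (5.4.242-1.el7.elrepo.x86_64) 7 (Core)"
grub2-editenv list                          # confirm saved_entry
grub2-mkconfig -o /boot/grub2/grub.cfg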

1.4、Install cfssl

Only needed on the master node.

https://github.com/cloudflare/cfssl/releases

[root@Kubernetes1 ~]# wget https://github.com/cloudflare/cfssl/releases/download/v1.6.4/cfssl_1.6.4_linux_amd64
······
[root@Kubernetes1 ~]# wget https://github.com/cloudflare/cfssl/releases/download/v1.6.4/cfssljson_1.6.4_linux_amd64
······
[root@Kubernetes1 ~]# wget https://github.com/cloudflare/cfssl/releases/download/v1.6.4/cfssl-certinfo_1.6.4_linux_amd64
······
[root@Kubernetes1 ~]# ls
cfssl_1.6.4_linux_amd64  cfssl-certinfo_1.6.4_linux_amd64  cfssljson_1.6.4_linux_amd64
[root@Kubernetes1 ~]# chmod +x cfssl*
[root@Kubernetes1 ~]# mv cfssl_1.6.4_linux_amd64 cfssl
[root@Kubernetes1 ~]# mv cfssl-certinfo_1.6.4_linux_amd64 cfssl-certinfo
[root@Kubernetes1 ~]# mv cfssljson_1.6.4_linux_amd64 cfssljson
[root@Kubernetes1 ~]# ls -l
total 28572
-rwxr-xr-x 1 root root 12054528 Apr 11 03:07 cfssl
-rwxr-xr-x 1 root root  9560064 Apr 11 03:08 cfssl-certinfo
-rwxr-xr-x 1 root root  7643136 Apr 11 03:07 cfssljson
[root@Kubernetes1 ~]# mv cfssl* /usr/bin/
[root@Kubernetes1 ~]# cfssl
No command is given.
Usage:
Available commands:
	revoke
	bundle
	certinfo
	sign
	genkey
	ocspserve
	info
	crl
	serve
	gencert
	gencsr
	scan
	gencrl
	ocspdump
	ocspsign
	print-defaults
	version
	ocsprefresh
	selfsign
Top-level flags:
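A quick sanity check that all three binaries are on the PATH and runnable (sketch):

cfssl version
which cfssl cfssljson cfssl-certinfo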

1.5、System settings

#!/bin/bash
# set SELinux to permissive now and disabled after the next reboot
sudo setenforce 0
sudo sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
# disable swap
swapoff -a && sysctl -w vm.swappiness=0
sed -ri 's/.*swap.*/#&/' /etc/fstab
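A short verification that SELinux and swap are really off (sketch):

getenforce              # Permissive now; Disabled after the next reboot
swapon -s               # expect no swap entries
free -m | grep -i swap  # swap totals should read 0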

1.6、IPVS modules

cat <<EOF | sudo tee /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF
# Install IPVS-related packages
[root@Kubernetes1 ~]# yum install ipvsadm ipset sysstat conntrack libseccomp -y
······
# Write the IPVS module-load configuration
[root@Kubernetes1 ~]# cat <<EOF | sudo tee /etc/modules-load.d/ipvs.conf
> ip_vs
> ip_vs_lc
> ip_vs_wlc
> ip_vs_rr
> ip_vs_wrr
> ip_vs_lblc
> ip_vs_lblcr
> ip_vs_dh
> ip_vs_sh
> ip_vs_fo
> ip_vs_nq
> ip_vs_sed
> ip_vs_ftp
> ip_vs_sh
> nf_conntrack
> ip_tables
> ip_set
> xt_set
> ipt_set
> ipt_rpfilter
> ipt_REJECT
> ipip
> EOF
# Enable module loading at boot and load the modules now
[root@Kubernetes1 ~]# systemctl enable --now systemd-modules-load.service
Job for systemd-modules-load.service failed because the control process exited with error code. See "systemctl status systemd-modules-load.service" and "journalctl -xe" for details.
# Check whether the IPVS modules are loaded
[root@Kubernetes1 ~]# lsmod | grep -e ip_vs -e nf_conntrack
ip_vs_ftp              16384  0 
nf_nat                 45056  1 ip_vs_ftp
ip_vs_sed              16384  0 
ip_vs_nq               16384  0 
ip_vs_fo               16384  0 
ip_vs_sh               16384  0 
ip_vs_dh               16384  0 
ip_vs_lblcr            16384  0 
ip_vs_lblc             16384  0 
ip_vs_wrr              16384  0 
ip_vs_rr               16384  0 
ip_vs_wlc              16384  0 
ip_vs_lc               16384  0 
ip_vs                 155648  24 ip_vs_wlc,ip_vs_rr,ip_vs_dh,ip_vs_lblcr,ip_vs_sh,ip_vs_fo,ip_vs_nq,ip_vs_lblc,ip_vs_wrr,ip_vs_lc,ip_vs_sed,ip_vs_ftp
nf_conntrack          147456  2 nf_nat,ip_vs
nf_defrag_ipv6         24576  2 nf_conntrack,ip_vs
nf_defrag_ipv4         16384  1 nf_conntrack
libcrc32c              16384  3 nf_conntrack,nf_nat,ip_vs

1.7、docker

#!/bin/bash
# remove old docker
yum remove docker \
        docker-client \
        docker-client-latest \
        docker-common \
        docker-latest \
        docker-latest-logrotate \
        docker-logrotate \
        docker-engine

# install dependents
yum install -y yum-utils

# set yum repo
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# install docker
yum -y install docker-ce-20.10.9-3.el7 docker-ce-cli-20.10.9-3.el7 containerd.io

# start
systemctl enable docker --now

# docker config
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://12sotewv.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

sudo systemctl daemon-reload
sudo systemctl restart docker
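Before continuing, it is worth confirming that Docker picked up the systemd cgroup driver and the registry mirror from daemon.json (sketch):

docker info | grep -i "cgroup driver"        # expect: Cgroup Driver: systemd
docker info | grep -A1 -i "registry mirrors"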

1.8、cri-docker

cat > /lib/systemd/system/cri-docker.service <<EOF
[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target
Requires=cri-docker.socket

[Service]
Type=notify
ExecStart=/usr/bin/cri-dockerd --network-plugin=cni --pod-infra-container-image=imaxun/pause:3.9
ExecReload=/bin/kill -s HUP
TimeoutSec=0
RestartSec=2
Restart=always

StartLimitBurst=3

StartLimitInterval=60s

LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

TasksMax=infinity
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target
EOF


cat > /lib/systemd/system/cri-docker.socket <<EOF
[Unit]
Description=CRI Docker Socket for the API
PartOf=cri-docker.service

[Socket]
ListenStream=%t/cri-dockerd.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker

[Install]
WantedBy=sockets.target
EOF
# Start the service
systemctl daemon-reload
systemctl enable cri-docker --now
[root@Kubernetes1 ~]# wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.0/cri-dockerd-0.3.0.amd64.tgz
······
[root@Kubernetes1 ~]# tar -zxvf cri-dockerd-0.3.0.amd64.tgz
cri-dockerd/
cri-dockerd/._cri-dockerd
tar: Ignoring unknown extended header keyword `LIBARCHIVE.xattr.com.apple.quarantine'
tar: Ignoring unknown extended header keyword `LIBARCHIVE.xattr.com.apple.metadata:kMDItemWhereFroms'
cri-dockerd/cri-dockerd
[root@Kubernetes1 ~]# chmod +x cri-dockerd/cri-dockerd
[root@Kubernetes1 ~]# mv cri-dockerd/cri-dockerd /usr/bin/
[root@Kubernetes1 ~]# vi cri-docker.sh 
[root@Kubernetes1 ~]# sh cri-docker.sh 
Created symlink from /etc/systemd/system/multi-user.target.wants/cri-docker.service to /usr/lib/systemd/system/cri-docker.service.
[root@Kubernetes1 ~]# systemctl status cri-docker
● cri-docker.service - CRI Interface for Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/cri-docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2023-05-14 13:11:35 CST; 18s ago
     Docs: https://docs.mirantis.com
 Main PID: 9600 (cri-dockerd)
    Tasks: 8
   Memory: 17.4M
   CGroup: /system.slice/cri-docker.service
           └─9600 /usr/bin/cri-dockerd --network-plugin=cni --pod-infra-container-image=imaxun/pause:3.9

May 14 13:11:35 Kubernetes4 cri-dockerd[9600]: time="2023-05-14T13:11:35+08:00" level=info msg="Start docker client with request timeout 0s"
May 14 13:11:35 Kubernetes4 cri-dockerd[9600]: time="2023-05-14T13:11:35+08:00" level=info msg="Hairpin mode is set to none"
May 14 13:11:35 Kubernetes4 cri-dockerd[9600]: time="2023-05-14T13:11:35+08:00" level=info msg="Loaded network plugin cni"
May 14 13:11:35 Kubernetes4 cri-dockerd[9600]: time="2023-05-14T13:11:35+08:00" level=info msg="Docker cri networking managed by network plugin cni"
May 14 13:11:35 Kubernetes4 cri-dockerd[9600]: time="2023-05-14T13:11:35+08:00" level=info msg="Docker Info: &{ID:HBZS:RXZJ:YZ6F:ZDEX:3R3S:AYDP:BH2I:6RT2:TC2K:7LSK:KG5Q:R4EX Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Nati
May 14 13:11:35 Kubernetes4 systemd[1]: Started CRI Interface for Docker Application Container Engine.
May 14 13:11:35 Kubernetes4 cri-dockerd[9600]: time="2023-05-14T13:11:35+08:00" level=info msg="Setting cgroupDriver systemd"
May 14 13:11:35 Kubernetes4 cri-dockerd[9600]: time="2023-05-14T13:11:35+08:00" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
May 14 13:11:35 Kubernetes4 cri-dockerd[9600]: time="2023-05-14T13:11:35+08:00" level=info msg="Starting the GRPC backend for the Docker CRI interface."
May 14 13:11:35 Kubernetes4 cri-dockerd[9600]: time="2023-05-14T13:11:35+08:00" level=info msg="Start cri-dockerd grpc backend"
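A quick check that both the socket unit and the service are healthy and that the CRI endpoint exists (sketch):

systemctl is-active cri-docker.socket cri-docker.service
ls -l /run/cri-dockerd.sock     # the CRI endpoint the kubelet will use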

1.9、Certificates

Only needed on the master node.

Terminology for the Kubernetes CA configuration (ca-config.json)

  • "ca-config.json": can define multiple profiles, each with its own expiry time, usage scenario, and other parameters; a specific profile is selected later when signing certificates
    • server: server certificates
    • client: client certificates
    • peer: peer certificates (both ends authenticate each other)
  • "signing": the certificate can be used to sign other certificates
  • "server auth": a client can use this CA to verify certificates presented by servers
  • "client auth": a server can use this CA to verify certificates presented by clients

Terminology for the certificate signing request (ca-csr.json)

  • CN: Common Name
  • C: country
  • ST: state/province
  • L: city/locality
  • O: organization
  • OU: organizational unit
sudo tee /etc/kubernetes/pki/ca-config.json <<-'EOF'
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "server": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth"
        ]
      },
      "client": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "client auth"
        ]
      },
      "peer": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      },
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      },
      "etcd": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
sudo tee /etc/kubernetes/pki/ca-csr.json <<-'EOF'
{
  "CN": "Ialso",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "kubernetes",
      "OU": "kubernetes"
    }
  ],
  "ca": {
    "expiry": "87600h"
  }
}
EOF
# Create the directory that will hold the certificates
[root@Kubernetes1 ~]# mkdir -p /etc/kubernetes/pki
# Kubernetes CA configuration
[root@Kubernetes1 pki]# sudo tee /etc/kubernetes/pki/ca-config.json <<-'EOF'
> {
>   "signing": {
>     "default": {
>       "expiry": "87600h"
>     },
>     "profiles": {
>       "server": {
>         "expiry": "87600h",
>         "usages": [
>           "signing",
>           "key encipherment",
>           "server auth"
>         ]
>       },
>       "client": {
>         "expiry": "87600h",
>         "usages": [
>           "signing",
>           "key encipherment",
>           "client auth"
>         ]
>       },
>       "peer": {
>         "expiry": "87600h",
>         "usages": [
>           "signing",
>           "key encipherment",
>           "server auth",
>           "client auth"
>         ]
>       },
>       "kubernetes": {
>         "expiry": "87600h",
>         "usages": [
>           "signing",
>           "key encipherment",
>           "server auth",
>           "client auth"
>         ]
>       },
>       "etcd": {
>         "expiry": "87600h",
>         "usages": [
>           "signing",
>           "key encipherment",
>           "server auth",
>           "client auth"
>         ]
>       }
>     }
>   }
> }
> EOF
······
# Kubernetes CA certificate signing request
[root@Kubernetes1 pki]# sudo tee /etc/kubernetes/pki/ca-csr.json <<-'EOF'
> {
>   "CN": "Ialso",
>   "key": {
>     "algo": "rsa",
>     "size": 2048
>   },
>   "names": [
>     {
>       "C": "CN",
>       "ST": "Shanghai",
>       "L": "Shanghai",
>       "O": "kubernetes",
>       "OU": "kubernetes"
>     }
>   ],
>   "ca": {
>     "expiry": "87600h"
>   }
> }
> EOF
······
# Generate the Kubernetes CA certificate
[root@Kubernetes1 pki]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
2023/05/10 22:24:22 [INFO] generating a new CA key and certificate from CSR
2023/05/10 22:24:22 [INFO] generate received request
2023/05/10 22:24:22 [INFO] received CSR
2023/05/10 22:24:22 [INFO] generating key: rsa-2048
2023/05/10 22:24:22 [INFO] encoded CSR
2023/05/10 22:24:22 [INFO] signed certificate with serial number 613017233483743397580047557677682512319581508456
# Check the files: ca-key.pem is the CA private key, ca.pem the CA certificate
[root@Kubernetes1 pki]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem
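The new CA can be inspected to confirm its subject and the roughly 10-year validity; a sketch using the cfssl tooling installed earlier (openssl shows the same information):

cfssl certinfo -cert /etc/kubernetes/pki/ca.pem
openssl x509 -in /etc/kubernetes/pki/ca.pem -noout -subject -dates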

1.10、Copy files to other nodes

for i in Kubernetes2 Kubernetes3 Kubernetes4 Kubernetes5 Kubernetes6;do scp -r /root/abc.txt root@$i:/root/abc.txt;done

1.11、clash

[root@Kubernetes1 ~]# mkdir /usr/local/clash
[root@Kubernetes1 ~]# cd /usr/local/clash
[root@Kubernetes1 clash]# wget https://github.com/Dreamacro/clash/releases/download/v1.7.1/clash-linux-amd64-v1.7.1.gz
[root@Kubernetes1 clash]# gunzip clash-linux-amd64-v1.7.1.gz
[root@Kubernetes1 clash]# chmod +x clash-linux-amd64-v1.7.1
[root@Kubernetes1 clash]# ln -s /usr/local/clash/clash-linux-amd64-v1.7.1 clash
[root@Kubernetes1 clash]# wget -O /usr/local/clash/config.yaml "https://mymonocloud.com/clash/760582/cZsa28nlyvsV" --no-check-certificate
[root@Kubernetes1 clash]# wget -O Country.mmdb https://github.com/Dreamacro/maxmind-geoip/releases/latest/download/Country.mmdb
[root@Kubernetes1 clash]# ll
total 14524
lrwxrwxrwx 1 root root      41 May 16 22:40 clash -> /usr/local/clash/clash-linux-amd64-v1.7.1
-rwxr-xr-x 1 root root 8990720 May 16 22:38 clash-linux-amd64-v1.7.1
-rw-r--r-- 1 root root   44361 May 16 22:41 config.yaml
-rw-r--r-- 1 root root 5833460 May 16 22:41 Country.mmdb
[root@Kubernetes1 clash]# vim /usr/lib/systemd/system/clash.service
# /usr/lib/systemd/system/clash.service
[Unit]
Description=Clash
After=syslog.target network-online.target remote-fs.target nss-lookup.target
Wants=network-online.target
 
[Service]
Type=simple
ExecStartPre=/usr/local/clash/clash -t -f /usr/local/clash/config.yaml
ExecStart=/usr/local/clash/clash -d /usr/local/clash
ExecStop=/bin/kill -s QUIT $MAINPID
LimitNOFILE=65535
 
[Install]
WantedBy=multi-user.target
[root@Kubernetes1 clash]# systemctl enable --now clash.service 
Created symlink from /etc/systemd/system/multi-user.target.wants/clash.service to /usr/lib/systemd/system/clash.service.
[root@Kubernetes1 clash]# systemctl status clash.service 
● clash.service - Clash
   Loaded: loaded (/usr/lib/systemd/system/clash.service; enabled; vendor preset: disabled)
   Active: active (running) since Tue 2023-05-16 22:43:28 CST; 13s ago
  Process: 7268 ExecStartPre=/usr/local/clash/clash -t -f /usr/local/clash/config.yaml (code=exited, status=0/SUCCESS)
 Main PID: 7426 (clash)
    Tasks: 6
   Memory: 14.6M
   CGroup: /system.slice/clash.service
           └─7426 /usr/local/clash/clash -d /usr/local/clash

May 16 22:43:28 Kubernetes1 clash[7268]: time="2023-05-16T22:43:28+08:00" level=info msg="Start initial compatible provider Streaming"
May 16 22:43:28 Kubernetes1 clash[7268]: configuration file /usr/local/clash/config.yaml test is successful
May 16 22:43:28 Kubernetes1 systemd[1]: Started Clash.
May 16 22:43:28 Kubernetes1 clash[7426]: time="2023-05-16T22:43:28+08:00" level=info msg="Start initial compatible provider Proxy"
May 16 22:43:28 Kubernetes1 clash[7426]: time="2023-05-16T22:43:28+08:00" level=info msg="Start initial compatible provider StreamingSE"
May 16 22:43:28 Kubernetes1 clash[7426]: time="2023-05-16T22:43:28+08:00" level=info msg="Start initial compatible provider Streaming"
May 16 22:43:28 Kubernetes1 clash[7426]: time="2023-05-16T22:43:28+08:00" level=info msg="Start initial compatible provider MATCH"
May 16 22:43:28 Kubernetes1 clash[7426]: time="2023-05-16T22:43:28+08:00" level=info msg="HTTP proxy listening at: 127.0.0.1:7890"
May 16 22:43:28 Kubernetes1 clash[7426]: time="2023-05-16T22:43:28+08:00" level=info msg="RESTful API listening at: 127.0.0.1:9090"
May 16 22:43:28 Kubernetes1 clash[7426]: time="2023-05-16T22:43:28+08:00" level=info msg="SOCKS proxy listening at: 127.0.0.1:7891"
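With clash running, command-line tools on this host can use the local proxy ports shown in the log above (HTTP 7890, SOCKS 7891); a sketch for the current shell only:

export http_proxy=http://127.0.0.1:7890
export https_proxy=http://127.0.0.1:7890
curl -I https://github.com          # should now go through the proxy
unset http_proxy https_proxy        # turn the proxy off when done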

2、etcd installation

2.1、Install

# Download etcd: https://github.com/etcd-io/etcd/releases
[root@Kubernetes1 ~]# wget https://github.com/etcd-io/etcd/releases/download/v3.5.6/etcd-v3.5.6-linux-amd64.tar.gz
······
# Copy the etcd package to the other master nodes
[root@Kubernetes1 pki]# for i in Kubernetes2 Kubernetes3;do scp etcd-* root@$i:/root/;done 
etcd-v3.5.6-linux-amd64.tar.gz                                                                                                                                100%   19MB 118.2KB/s   02:41    
etcd-v3.5.6-linux-amd64.tar.gz                                                                                                                                100%   19MB 114.7KB/s   02:46
# Extract the package
[root@Kubernetes1 ~]# tar -zxvf etcd-v3.5.6-linux-amd64.tar.gz
[root@Kubernetes1 ~]# mv /root/etcd-v3.5.6-linux-amd64/etcd /usr/bin/
[root@Kubernetes1 ~]# mv /root/etcd-v3.5.6-linux-amd64/etcdctl /usr/bin/
# Verify etcdctl
[root@Kubernetes1 ~]# etcdctl
······

2.2、Certificates

etcd is an independent CA and uses its own certificates.

sudo tee /etc/kubernetes/pki/etcd/etcd-ca-csr.json <<-'EOF'
{
  "CN": "Ialso",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "etcd",
      "OU": "etcd"
    }
  ],
  "ca": {
    "expiry": "87600h"
  }
}
EOF
sudo tee /etc/kubernetes/pki/etcd/etcd-csr.json <<-'EOF'
{
  "CN": "etcd",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "hosts": [
    "10.19.0.5",
    "10.19.0.9",
    "10.19.0.11",
    "10.19.0.2",
    "10.19.0.4",
    "10.19.0.12",
    "10.19.0.7"
  ],
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "etcd",
      "OU": "etcd-colony"
    }
  ]
}
EOF
cfssl gencert \
   -ca=/etc/kubernetes/pki/etcd/ca.pem \
   -ca-key=/etc/kubernetes/pki/etcd/ca-key.pem \
   -config=/etc/kubernetes/pki/ca-config.json \
   -profile=etcd \
   etcd-csr.json | cfssljson -bare /etc/kubernetes/pki/etcd/etcd
# etcd CA certificate signing request
[root@Kubernetes1 pki]# sudo tee /etc/kubernetes/pki/etcd/etcd-ca-csr.json <<-'EOF'
> {
>   "CN": "Ialso",
>   "key": {
>     "algo": "rsa",
>     "size": 2048
>   },
>   "names": [
>     {
>       "C": "CN",
>       "ST": "Shanghai",
>       "L": "Shanghai",
>       "O": "etcd",
>       "OU": "etcd"
>     }
>   ],
>   "ca": {
>     "expiry": "87600h"
>   }
> }
> EOF
······
# Generate the etcd CA certificate
[root@Kubernetes1 etcd]# cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare ca -
2023/05/11 01:04:38 [INFO] generating a new CA key and certificate from CSR
2023/05/11 01:04:38 [INFO] generate received request
2023/05/11 01:04:38 [INFO] received CSR
2023/05/11 01:04:38 [INFO] generating key: rsa-2048
2023/05/11 01:04:38 [INFO] encoded CSR
2023/05/11 01:04:38 [INFO] signed certificate with serial number 638045234947858116581635552444821777926630480846
# Check the files: ca-key.pem is the CA private key, ca.pem the CA certificate
[root@Kubernetes1 etcd]# ls
ca.csr  ca-key.pem  ca.pem
# Create the CSR for the etcd server/peer certificate
[root@Kubernetes1 etcd]# sudo tee /etc/kubernetes/pki/etcd/etcd-ialso-csr.json <<-'EOF'
> {
>   "CN": "etcd-ialso",
>   "key": {
>     "algo": "rsa",
>     "size": 2048
>   },
>   "hosts": [
>     "10.19.0.5",
>     "10.19.0.9",
>     "10.19.0.11",
>     "10.19.0.2",
>     "10.19.0.4",
>     "10.19.0.12",
>     "10.19.0.7"
>   ],
>   "names": [
>     {
>       "C": "CN",
>       "ST": "Shanghai",
>       "L": "Shanghai",
>       "O": "etcd",
>       "OU": "etcd-colony"
>     }
>   ]
> }
> EOF
# Issue the certificate from the etcd CA
[root@Kubernetes1 etcd]# cfssl gencert \
>    -ca=/etc/kubernetes/pki/etcd/ca.pem \
>    -ca-key=/etc/kubernetes/pki/etcd/ca-key.pem \
>    -config=/etc/kubernetes/pki/ca-config.json \
>    -profile=etcd \
>    etcd-ialso-csr.json | cfssljson -bare /etc/kubernetes/pki/etcd/etcd
2023/05/11 01:27:09 [INFO] generate received request
2023/05/11 01:27:09 [INFO] received CSR
2023/05/11 01:27:09 [INFO] generating key: rsa-2048
2023/05/11 01:27:09 [INFO] encoded CSR
2023/05/11 01:27:09 [INFO] signed certificate with serial number 547412799563394483789087934200510450231255257959
# Check the files: etcd-key.pem is the private key, etcd.pem the certificate
[root@Kubernetes1 etcd]# ls
ca.csr  ca-key.pem  ca.pem  etcd-ca-csr.json  etcd.csr  etcd-ialso-csr.json  etcd-key.pem  etcd.pem
# Copy the certificates to the other nodes that will run etcd
[root@Kubernetes1 etcd]# for i in Kubernetes2 Kubernetes3;do scp -r /etc/kubernetes/pki/etcd/*.pem root@$i:/etc/kubernetes/pki/etcd/;done
······

2.3、Start

etcd configuration reference: https://doczhcn.gitbook.io/etcd/index/index-1/configuration

name: 'etcd1' # node name
data-dir: /etc/kubernetes/data/etcd/data
wal-dir: /etc/kubernetes/data/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://10.19.0.5:2380' # this node's IP, port 2380
listen-client-urls: 'https://10.19.0.5:2379,http://127.0.0.1:2379' # this node's IP, port 2379
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://10.19.0.5:2380' # this node's IP, port 2380
advertise-client-urls: 'https://10.19.0.5:2379' # this node's IP, port 2379
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'etcd1=https://10.19.0.5:2380,etcd2=https://10.19.0.9:2380,etcd3=https://10.19.0.11:2380' # names and IPs of all etcd cluster members
initial-cluster-token: 'etcd-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
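The YAML above must be saved on every etcd node at the path the systemd unit below points to, with name and all listen/advertise URLs adjusted per node; a sketch for etcd1 (directory layout follows the data-dir/wal-dir values above):

mkdir -p /etc/kubernetes/conf/etcd /etc/kubernetes/data/etcd
vim /etc/kubernetes/conf/etcd/etcd1-conf.yaml    # paste the configuration above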
cat > /usr/lib/systemd/system/etcd.service  <<EOF
[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
ExecStart=/usr/bin/etcd --config-file=/etc/kubernetes/conf/etcd/etcd1-conf.yaml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
systemctl enable --now etcd.service
[root@Kubernetes1 ~]# systemctl enable --now etcd.service
[root@Kubernetes1 etcd]# systemctl status etcd.service
● etcd.service - Etcd Service
   Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2023-05-12 00:07:20 CST; 1min 15s ago
     Docs: https://coreos.com/etcd/docs/latest
 Main PID: 14647 (etcd)
    Tasks: 8
   Memory: 38.4M
   CGroup: /system.slice/etcd.service
           └─14647 /usr/bin/etcd --config-file=/etc/kubernetes/etcd/etcd-conf.yaml

May 12 00:08:29 Kubernetes1 etcd[14647]: {"level":"warn","ts":"2023-05-12T00:08:29.792+0800","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy sta...
May 12 00:08:29 Kubernetes1 etcd[14647]: {"level":"warn","ts":"2023-05-12T00:08:29.793+0800","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy sta...
May 12 00:08:32 Kubernetes1 etcd[14647]: {"level":"warn","ts":"2023-05-12T00:08:32.808+0800","caller":"etcdserver/cluster_util.go:288","msg":"failed to reach the peer URL...
May 12 00:08:32 Kubernetes1 etcd[14647]: {"level":"warn","ts":"2023-05-12T00:08:32.808+0800","caller":"etcdserver/cluster_util.go:155","msg":"failed to get version","remo...
May 12 00:08:32 Kubernetes1 etcd[14647]: {"level":"warn","ts":"2023-05-12T00:08:32.811+0800","caller":"etcdserver/cluster_util.go:288","msg":"failed to reach the peer URL...
May 12 00:08:32 Kubernetes1 etcd[14647]: {"level":"warn","ts":"2023-05-12T00:08:32.811+0800","caller":"etcdserver/cluster_util.go:155","msg":"failed to get version","remo...
May 12 00:08:34 Kubernetes1 etcd[14647]: {"level":"warn","ts":"2023-05-12T00:08:34.793+0800","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy sta...
May 12 00:08:34 Kubernetes1 etcd[14647]: {"level":"warn","ts":"2023-05-12T00:08:34.793+0800","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy sta...
May 12 00:08:34 Kubernetes1 etcd[14647]: {"level":"warn","ts":"2023-05-12T00:08:34.793+0800","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy sta...
May 12 00:08:34 Kubernetes1 etcd[14647]: {"level":"warn","ts":"2023-05-12T00:08:34.794+0800","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy sta...
Hint: Some lines were ellipsized, use -l to show in full.

2.4、Test

# List the cluster members
[root@Kubernetes1 etcd]# etcdctl member list --write-out=table
+------------------+---------+-------+-------------------------+-------------------------+------------+
|        ID        | STATUS  | NAME  |       PEER ADDRS        |      CLIENT ADDRS       | IS LEARNER |
+------------------+---------+-------+-------------------------+-------------------------+------------+
| 452bb92b6cddd036 | started | etcd3 | https://10.19.0.11:2380 | https://10.19.0.11:2379 |      false |
| 8f871fbc55399fbc | started | etcd1 |  https://10.19.0.5:2380 |  https://10.19.0.5:2379 |      false |
| b7c44b92b36f66fd | started | etcd2 |  https://10.19.0.9:2380 |  https://10.19.0.9:2379 |      false |
+------------------+---------+-------+-------------------------+-------------------------+------------+
# Check the health of each endpoint
[root@Kubernetes1 etcd]# ETCDCTL_API=3 etcdctl --cacert=/etc/kubernetes/pki/etcd/ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem --endpoints="https://10.19.0.5:2379,https://10.19.0.9:2379,https://10.19.0.11:2379" endpoint health --write-out=table
+-------------------------+--------+-------------+-------+
|        ENDPOINT         | HEALTH |    TOOK     | ERROR |
+-------------------------+--------+-------------+-------+
| https://10.19.0.11:2379 |   true | 11.620977ms |       |
|  https://10.19.0.9:2379 |   true | 12.017392ms |       |
|  https://10.19.0.5:2379 |   true | 12.167674ms |       |
+-------------------------+--------+-------------+-------+
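The same flags can be reused to see which member is currently the leader and the size of each database (sketch):

ETCDCTL_API=3 etcdctl \
  --cacert=/etc/kubernetes/pki/etcd/ca.pem \
  --cert=/etc/kubernetes/pki/etcd/etcd.pem \
  --key=/etc/kubernetes/pki/etcd/etcd-key.pem \
  --endpoints="https://10.19.0.5:2379,https://10.19.0.9:2379,https://10.19.0.11:2379" \
  endpoint status --write-out=table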

3、Other components

3.1、Download

https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.26.md

[root@Kubernetes1 ~]# wget https://dl.k8s.io/v1.26.3/kubernetes-server-linux-amd64.tar.gz
[root@Kubernetes1 ~]# tar -zxvf kubernetes-server-linux-amd64.tar.gz
[root@Kubernetes1 bin]# chmod +x kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}
# Run on the master nodes
[root@Kubernetes1 bin]# cp kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} /usr/bin/
# Run on the worker (node) machines
[root@Kubernetes1 bin]# cp kubernetes/server/bin/kube{let,ctl,-proxy} /usr/bin/

3.2、apiserver

1、Certificate
sudo tee /etc/kubernetes/pki/apiserver/apiserver-csr.json <<-'EOF'
{
  "CN": "Ialso",
  "hosts": [
    "10.96.0.1",
    "127.0.0.1",
    "10.19.0.5",
    "10.19.0.9",
    "10.19.0.11",
    "10.19.0.2",
    "10.19.0.4",
    "10.19.0.12",
    "10.19.0.7",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "kube-apiserver",
      "OU": "kube-apiserver"
    }
  ]
}
EOF
cfssl gencert \
    -ca=/etc/kubernetes/pki/ca.pem \
    -ca-key=/etc/kubernetes/pki/ca-key.pem \
    -config=/etc/kubernetes/pki/ca-config.json \
    -profile=kubernetes \
    apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver/apiserver
[root@Kubernetes1 pki]# pwd
/etc/kubernetes/pki
[root@Kubernetes1 pki]# mkdir apiserver
[root@Kubernetes1 pki]# cd apiserver
[root@Kubernetes1 apiserver]# sudo tee /etc/kubernetes/pki/apiserver/apiserver-csr.json <<-'EOF'
> {
>     "CN": "Ialso",
>     "hosts": [
>       "10.96.0.1",
>       "127.0.0.1",
>       "10.19.0.5",
>       "10.19.0.9"
>       "10.19.0.11"
>       "10.19.0.2"
>       "10.19.0.4"
>       "10.19.0.12"
>       "10.19.0.7"
>       "kubernetes",
>       "kubernetes.default",
>       "kubernetes.default.svc",
>       "kubernetes.default.svc.cluster",
>       "kubernetes.default.svc.cluster.local"
>     ],
>     "key": {
>         "algo": "rsa",
>         "size": 2048
>     },
>     "names": [
>         {
>             "C": "CN",
>       	  "ST": "Shanghai",
>       	  "L": "Shanghai",
>             "O": "kube-apiserver",
>             "OU": "kube-apiserver"
>         }
>     ]
> }
> EOF
······
# Issue the certificate from the Kubernetes CA
[root@Kubernetes1 apiserver]# cfssl gencert \
>     -ca=/etc/kubernetes/pki/ca.pem \
>     -ca-key=/etc/kubernetes/pki/ca-key.pem \
>     -config=/etc/kubernetes/pki/ca-config.json \
>     -profile=kubernetes \
>     apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver/apiserver
2023/05/11 23:10:48 [INFO] generate received request
2023/05/11 23:10:48 [INFO] received CSR
2023/05/11 23:10:48 [INFO] generating key: rsa-2048
2023/05/11 23:10:48 [INFO] encoded CSR
2023/05/11 23:10:48 [INFO] signed certificate with serial number 280635780381735908251327798136801705964318780365
[root@Kubernetes1 apiserver]# ls
apiserver.csr  apiserver-csr.json  apiserver-key.pem  apiserver.pem
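Because the apiserver certificate must cover every address clients will use (the service VIP 10.96.0.1, the node IPs, the load balancer, and the kubernetes.* service names), it is worth confirming the SANs were embedded; a sketch with openssl:

openssl x509 -in /etc/kubernetes/pki/apiserver/apiserver.pem -noout -text \
  | grep -A1 "Subject Alternative Name"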
2、Load balancer
wget http://nginx.org/download/nginx-1.21.1.tar.gz
tar -zxvf nginx-1.21.1.tar.gz
cd nginx-1.21.1/
./configure --prefix=/usr/local/nginx --with-stream
make
make install
# Replace the configuration file
cat > /usr/local/nginx/conf/nginx.conf <<EOF
worker_processes 1;

events {
    worker_connections  1024;
}

stream {
    upstream backend {
        server 10.19.0.5:6443;
        server 10.19.0.9:6443;
        server 10.19.0.11:6443;
    }

    server {
        listen 6443;
        proxy_pass backend;
    }
}
EOF
# Once the apiserver is deployed you can try this request; it will respond that a client certificate is required
curl -k https://10.19.0.7:6443/api/v1/namespaces
[Unit]
# Service description
Description=nginx web service
Documentation=https://nginx.org/en/docs/
After=network.target
[Service]
# Runs as a forking daemon
Type=forking
# Start nginx
ExecStart=/usr/local/nginx/sbin/nginx
# Reload the nginx configuration
ExecReload=/usr/local/nginx/sbin/nginx -s reload
# Stop nginx
ExecStop=/usr/local/nginx/sbin/nginx -s stop
PrivateTmp=true
[Install]
WantedBy=default.target
[root@Kubernetes7 nginx-1.21.1]# vim /usr/lib/systemd/system/nginx.service
[root@Kubernetes7 nginx-1.21.1]# systemctl daemon-reload
[root@Kubernetes7 nginx-1.21.1]# systemctl enable --now nginx.service 
Created symlink from /etc/systemd/system/default.target.wants/nginx.service to /usr/lib/systemd/system/nginx.service.
[root@Kubernetes7 nginx-1.21.1]# systemctl status nginx.service 
● nginx.service - nginx web service
   Loaded: loaded (/usr/lib/systemd/system/nginx.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2023-05-13 21:36:30 CST; 7s ago
     Docs: https://nginx.org/en/docs/
  Process: 17955 ExecStart=/usr/local/nginx/sbin/nginx (code=exited, status=0/SUCCESS)
 Main PID: 17956 (nginx)
   CGroup: /system.slice/nginx.service
           ├─17956 nginx: master process /usr/local/nginx/sbin/nginx
           └─17957 nginx: worker process

May 13 21:36:30 Kubernetes7 systemd[1]: Starting nginx web service...
May 13 21:36:30 Kubernetes7 systemd[1]: Started nginx web service.
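To confirm the stream proxy is listening before the apiservers exist, a quick port check on Kubernetes7 (sketch):

ss -lntp | grep 6443        # expect nginx listening on 0.0.0.0:6443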

3.3、front-proxy

front-proxy is an independent CA and uses its own certificates.

1、Certificates
sudo tee /etc/kubernetes/pki/front-proxy/front-proxy-ca-csr.json <<-'EOF'
{
  "CN": "front-proxy",
  "key": {
     "algo": "rsa",
     "size": 2048
  }
}
EOF
sudo tee /etc/kubernetes/pki/front-proxy/front-proxy-client-csr.json <<-'EOF'
{
  "CN": "front-proxy-client",
  "key": {
     "algo": "rsa",
     "size": 2048
  }
}
EOF
cfssl gencert \
    -ca=/etc/kubernetes/pki/front-proxy/ca.pem \
    -ca-key=/etc/kubernetes/pki/front-proxy/ca-key.pem \
    -config=/etc/kubernetes/pki/ca-config.json \
    -profile=kubernetes \
    front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy/front-proxy-client
[root@Kubernetes1 pki]# pwd
/etc/kubernetes/pki
[root@Kubernetes1 pki]# mkdir front-proxy
[root@Kubernetes1 pki]# cd front-proxy
[root@Kubernetes1 front-proxy]# sudo tee /etc/kubernetes/pki/front-proxy/front-proxy-ca-csr.json <<-'EOF'
> {
>   "CN": "front-proxy",
>   "key": {
>      "algo": "rsa",
>      "size": 2048
>   }
> }
> EOF
······
# Generate the front-proxy CA certificate
[root@Kubernetes1 front-proxy]# cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare ca -
2023/05/11 22:53:38 [INFO] generating a new CA key and certificate from CSR
2023/05/11 22:53:38 [INFO] generate received request
2023/05/11 22:53:38 [INFO] received CSR
2023/05/11 22:53:38 [INFO] generating key: rsa-2048
2023/05/11 22:53:38 [INFO] encoded CSR
2023/05/11 22:53:38 [INFO] signed certificate with serial number 148643445013453340427358824714242815412407601930
[root@Kubernetes1 front-proxy]# ls
ca.csr  ca-key.pem  ca.pem  front-proxy-ca-csr.json
# Issue the client certificate from the front-proxy CA
[root@Kubernetes1 front-proxy]# cfssl gencert \
>     -ca=/etc/kubernetes/pki/front-proxy/ca.pem \
>     -ca-key=/etc/kubernetes/pki/front-proxy/ca-key.pem \
>     -config=/etc/kubernetes/pki/ca-config.json \
>     -profile=kubernetes \
>     front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy/front-proxy-client
2023/05/11 23:05:21 [INFO] generate received request
2023/05/11 23:05:21 [INFO] received CSR
2023/05/11 23:05:21 [INFO] generating key: rsa-2048
2023/05/11 23:05:21 [INFO] encoded CSR
2023/05/11 23:05:21 [INFO] signed certificate with serial number 147045670994628444945849373210876597044401355992
2023/05/11 23:05:21 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@Kubernetes1 front-proxy]# ls
ca.csr  ca-key.pem  ca.pem  front-proxy-ca-csr.json  front-proxy-client.csr  front-proxy-client-csr.json  front-proxy-client-key.pem  front-proxy-client.pem

3.4、controller-manager

1、Certificate
sudo tee /etc/kubernetes/pki/controller-manager/controller-manager-csr.json <<-'EOF'
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "system:kube-controller-manager",
      "OU": "Kubernetes"
    }
  ]
}
EOF
cfssl gencert \
    -ca=/etc/kubernetes/pki/ca.pem \
    -ca-key=/etc/kubernetes/pki/ca-key.pem \
    -config=/etc/kubernetes/pki/ca-config.json \
    -profile=kubernetes \
    controller-manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager/controller-manager
[root@Kubernetes1 pki]# pwd
/etc/kubernetes/pki
[root@Kubernetes1 pki]# mkdir controller-manager
[root@Kubernetes1 pki]# cd controller-manager
[root@Kubernetes1 controller-manager]# sudo tee /etc/kubernetes/pki/controller-manager/controller-manager-csr.json <<-'EOF'
> {
>   "CN": "system:kube-controller-manager",
>   "key": {
>     "algo": "rsa",
>     "size": 2048
>   },
>   "names": [
>     {
>       "C": "CN",
>       "ST": "Shanghai",
>       "L": "Shanghai",
>       "O": "system:kube-controller-manager",
>       "OU": "Kubernetes"
>     }
>   ]
> }
> EOF
······
# Issue the certificate from the Kubernetes CA
[root@Kubernetes1 controller-manager]# cfssl gencert \
>     -ca=/etc/kubernetes/pki/ca.pem \
>     -ca-key=/etc/kubernetes/pki/ca-key.pem \
>     -config=/etc/kubernetes/pki/ca-config.json \
>     -profile=kubernetes \
>     controller-manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager/controller-manager
2023/05/11 23:12:55 [INFO] generate received request
2023/05/11 23:12:55 [INFO] received CSR
2023/05/11 23:12:55 [INFO] generating key: rsa-2048
2023/05/11 23:12:55 [INFO] encoded CSR
2023/05/11 23:12:55 [INFO] signed certificate with serial number 394145345286971060997019421386376897335831738853
2023/05/11 23:12:55 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@Kubernetes1 controller-manager]# ls
controller-manager.csr  controller-manager-csr.json  controller-manager-key.pem  controller-manager.pem
2、Kubeconfig
# Set the cluster entry
kubectl config set-cluster kubernetes \
    --certificate-authority=/etc/kubernetes/pki/ca.pem \
    --embed-certs=true \
    --server=https://10.19.0.7:6443 \
    --kubeconfig=/etc/kubernetes/conf/controller-manager/controller-manager.conf
# Set the context
kubectl config set-context system:kube-controller-manager@kubernetes \
    --cluster=kubernetes \
    --user=system:kube-controller-manager \
    --kubeconfig=/etc/kubernetes/conf/controller-manager/controller-manager.conf
# Set the user credentials
kubectl config set-credentials system:kube-controller-manager \
    --client-certificate=/etc/kubernetes/pki/controller-manager/controller-manager.pem \
    --client-key=/etc/kubernetes/pki/controller-manager/controller-manager-key.pem \
    --embed-certs=true \
    --kubeconfig=/etc/kubernetes/conf/controller-manager/controller-manager.conf
# Use this context by default
kubectl config use-context system:kube-controller-manager@kubernetes \
    --kubeconfig=/etc/kubernetes/conf/controller-manager/controller-manager.conf
[root@Kubernetes1 controller-manager]# kubectl config set-cluster kubernetes \
>     --certificate-authority=/etc/kubernetes/pki/ca.pem \
>     --embed-certs=true \
>     --server=https://10.19.0.7:6443 \
>     --kubeconfig=/etc/kubernetes/pki/controller-manager/controller-manager.conf
Cluster "kubernetes" set.
[root@Kubernetes1 controller-manager]# ls
controller-manager.conf  controller-manager.csr  controller-manager-csr.json  controller-manager-key.pem  controller-manager.pem
[root@Kubernetes1 controller-manager]# kubectl config set-context system:kube-controller-manager@kubernetes \
>     --cluster=kubernetes \
>     --user=system:kube-controller-manager \
>     --kubeconfig=/etc/kubernetes/pki/controller-manager/controller-manager.conf
Context "system:kube-controller-manager@kubernetes" created.
[root@Kubernetes1 controller-manager]# kubectl config set-credentials system:kube-controller-manager \
>     --client-certificate=/etc/kubernetes/pki/controller-manager/controller-manager.pem \
>     --client-key=/etc/kubernetes/pki/controller-manager/controller-manager-key.pem \
>     --embed-certs=true \
>     --kubeconfig=/etc/kubernetes/pki/controller-manager/controller-manager.conf
User "system:kube-controller-manager" set.
[root@Kubernetes1 controller-manager]# kubectl config use-context system:kube-controller-manager@kubernetes \
>     --kubeconfig=/etc/kubernetes/pki/controller-manager/controller-manager.conf
Switched to context "system:kube-controller-manager@kubernetes".

3.5、scheduler

1、Certificate
sudo tee /etc/kubernetes/pki/scheduler/scheduler-csr.json <<-'EOF'
{
  "CN": "system:kube-scheduler",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "system:kube-scheduler",
      "OU": "Kubernetes"
    }
  ]
}
EOF
# Issue the certificate from the Kubernetes CA
cfssl gencert \
    -ca=/etc/kubernetes/pki/ca.pem \
    -ca-key=/etc/kubernetes/pki/ca-key.pem \
    -config=/etc/kubernetes/pki/ca-config.json \
    -profile=kubernetes \
    scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler/scheduler
[root@Kubernetes1 pki]# pwd
/etc/kubernetes/pki
[root@Kubernetes1 pki]# mkdir scheduler
[root@Kubernetes1 pki]# cd scheduler
[root@Kubernetes1 scheduler]# sudo tee /etc/kubernetes/pki/scheduler/scheduler-csr.json <<-'EOF'
> {
>   "CN": "system:kube-scheduler",
>   "key": {
>     "algo": "rsa",
>     "size": 2048
>   },
>   "names": [
>     {
>       "C": "CN",
>       "ST": "Shanghai",
>       "L": "Shanghai",
>       "O": "system:kube-scheduler",
>       "OU": "Kubernetes"
>     }
>   ]
> }
> EOF
······
# Issue the certificate from the Kubernetes CA
[root@Kubernetes1 scheduler]# cfssl gencert \
>     -ca=/etc/kubernetes/pki/ca.pem \
>     -ca-key=/etc/kubernetes/pki/ca-key.pem \
>     -config=/etc/kubernetes/pki/ca-config.json \
>     -profile=kubernetes \
>     scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler/scheduler
2023/05/11 23:39:55 [INFO] generate received request
2023/05/11 23:39:55 [INFO] received CSR
2023/05/11 23:39:55 [INFO] generating key: rsa-2048
2023/05/11 23:39:55 [INFO] encoded CSR
2023/05/11 23:39:55 [INFO] signed certificate with serial number 314965323806286191266675207723457512925777497135
2023/05/11 23:39:55 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@Kubernetes1 scheduler]# ls
scheduler.csr  scheduler-csr.json  scheduler-key.pem  scheduler.pem
2、Kubeconfig
kubectl config set-cluster kubernetes \
    --certificate-authority=/etc/kubernetes/pki/ca.pem \
    --embed-certs=true \
    --server=https://10.19.0.7:6443 \
    --kubeconfig=/etc/kubernetes/conf/scheduler/scheduler.conf

kubectl config set-credentials system:kube-scheduler \
    --client-certificate=/etc/kubernetes/pki/scheduler/scheduler.pem \
    --client-key=/etc/kubernetes/pki/scheduler/scheduler-key.pem \
    --embed-certs=true \
    --kubeconfig=/etc/kubernetes/conf/scheduler/scheduler.conf
    
kubectl config set-context system:kube-scheduler@kubernetes \
    --cluster=kubernetes \
    --user=system:kube-scheduler \
    --kubeconfig=/etc/kubernetes/conf/scheduler/scheduler.conf

kubectl config use-context system:kube-scheduler@kubernetes \
     --kubeconfig=/etc/kubernetes/conf/scheduler/scheduler.conf
[root@Kubernetes1 scheduler]# kubectl config set-cluster kubernetes \
>     --certificate-authority=/etc/kubernetes/pki/ca.pem \
>     --embed-certs=true \
>     --server=https://10.19.0.7:6443 \
>     --kubeconfig=/etc/kubernetes/scheduler/scheduler.conf
Cluster "kubernetes" set.
[root@Kubernetes1 scheduler]# kubectl config set-credentials system:kube-scheduler \
>     --client-certificate=/etc/kubernetes/pki/scheduler/scheduler.pem \
>     --client-key=/etc/kubernetes/pki/scheduler/scheduler-key.pem \
>     --embed-certs=true \
>     --kubeconfig=/etc/kubernetes/scheduler/scheduler.conf
User "system:kube-scheduler" set.
[root@Kubernetes1 scheduler]# kubectl config set-context system:kube-scheduler@kubernetes \
>     --cluster=kubernetes \
>     --user=system:kube-scheduler \
>     --kubeconfig=/etc/kubernetes/scheduler/scheduler.conf
Context "system:kube-scheduler@kubernetes" created.
[root@Kubernetes1 scheduler]# kubectl config use-context system:kube-scheduler@kubernetes \
>      --kubeconfig=/etc/kubernetes/scheduler/scheduler.conf
Switched to context "system:kube-scheduler@kubernetes".

3.6、admin

1、Certificate
sudo tee /etc/kubernetes/pki/admin/admin-csr.json <<-'EOF'
{
  "CN": "admin",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "system:masters",
      "OU": "Kubernetes"
    }
  ]
}
EOF
# Issue the certificate from the Kubernetes CA
cfssl gencert \
    -ca=/etc/kubernetes/pki/ca.pem \
    -ca-key=/etc/kubernetes/pki/ca-key.pem \
    -config=/etc/kubernetes/pki/ca-config.json \
    -profile=kubernetes \
    admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin/admin
[root@Kubernetes1 pki]# pwd
/etc/kubernetes/pki
[root@Kubernetes1 pki]# mkdir admin
[root@Kubernetes1 pki]# cd admin
[root@Kubernetes1 admin]# sudo tee /etc/kubernetes/pki/admin/admin-csr.json <<-'EOF'
> {
>   "CN": "admin",
>   "key": {
>     "algo": "rsa",
>     "size": 2048
>   },
>   "names": [
>     {
>       "C": "CN",
>       "ST": "Shanghai",
>       "L": "Shanghai",
>       "O": "system:masters",
>       "OU": "Kubernetes"
>     }
>   ]
> }
> EOF
······
[root@Kubernetes1 admin]# cfssl gencert \
>     -ca=/etc/kubernetes/pki/ca.pem \
>     -ca-key=/etc/kubernetes/pki/ca-key.pem \
>     -config=/etc/kubernetes/pki/ca-config.json \
>     -profile=kubernetes \
>     admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin/admin
2023/05/12 00:53:47 [INFO] generate received request
2023/05/12 00:53:47 [INFO] received CSR
2023/05/12 00:53:47 [INFO] generating key: rsa-2048
2023/05/12 00:53:47 [INFO] encoded CSR
2023/05/12 00:53:47 [INFO] signed certificate with serial number 467431895743927380971732871897145819854357096178
2023/05/12 00:53:47 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@Kubernetes1 admin]# ls
admin.csr  admin-csr.json  admin-key.pem  admin.pem
2、Kubeconfig
kubectl config set-cluster kubernetes \
    --certificate-authority=/etc/kubernetes/pki/ca.pem \
    --embed-certs=true \
    --server=https://10.19.0.7:6443 \
    --kubeconfig=/etc/kubernetes/conf/admin/admin.conf

kubectl config set-credentials kubernetes-admin \
    --client-certificate=/etc/kubernetes/pki/admin/admin.pem \
    --client-key=/etc/kubernetes/pki/admin/admin-key.pem \
    --embed-certs=true \
    --kubeconfig=/etc/kubernetes/conf/admin/admin.conf

kubectl config set-context kubernetes-admin@kubernetes \
    --cluster=kubernetes \
    --user=kubernetes-admin \
    --kubeconfig=/etc/kubernetes/conf/admin/admin.conf

kubectl config use-context kubernetes-admin@kubernetes \
    --kubeconfig=/etc/kubernetes/conf/admin/admin.conf
[root@Kubernetes1 admin]# kubectl config set-cluster kubernetes \
>     --certificate-authority=/etc/kubernetes/pki/ca.pem \
>     --embed-certs=true \
>     --server=https://10.19.0.7:6443 \
>     --kubeconfig=/etc/kubernetes/admin/admin.conf
Cluster "kubernetes" set.
[root@Kubernetes1 admin]# kubectl config set-credentials kubernetes-admin \
>     --client-certificate=/etc/kubernetes/pki/admin/admin.pem \
>     --client-key=/etc/kubernetes/pki/admin/admin-key.pem \
>     --embed-certs=true \
>     --kubeconfig=/etc/kubernetes/admin/admin.conf
User "kubernetes-admin" set.
[root@Kubernetes1 admin]# kubectl config set-context kubernetes-admin@kubernetes \
>     --cluster=kubernetes \
>     --user=kubernetes-admin \
>     --kubeconfig=/etc/kubernetes/admin/admin.conf
Context "kubernetes-admin@kubernetes" created.
[root@Kubernetes1 admin]# kubectl config use-context kubernetes-admin@kubernetes \
>     --kubeconfig=/etc/kubernetes/admin/admin.conf
Switched to context "kubernetes-admin@kubernetes".
# Copy the kubeconfig to /root/.kube/config
[root@Kubernetes1 admin]# mkdir -p /root/.kube/
[root@Kubernetes1 admin]# cp /etc/kubernetes/conf/admin/admin.conf /root/.kube/config

3.7、kubelet

1、Certificate

The apiserver issues certificates for the kubelet automatically (TLS bootstrapping).

2、Kubeconfig
kubectl config set-cluster kubernetes \
    --certificate-authority=/etc/kubernetes/pki/ca.pem \
    --embed-certs=true \
    --server=https://10.19.0.7:6443 \
    --kubeconfig=/etc/kubernetes/conf/kubelet/bootstrap-kubelet.conf

kubectl config set-credentials tls-bootstrap-token-user \
    --token=xumeng.d683399b7a553977 \
    --kubeconfig=/etc/kubernetes/conf/kubelet/bootstrap-kubelet.conf

kubectl config set-context tls-bootstrap-token-user@kubernetes \
    --cluster=kubernetes \
    --user=tls-bootstrap-token-user \
    --kubeconfig=/etc/kubernetes/conf/kubelet/bootstrap-kubelet.conf

kubectl config use-context tls-bootstrap-token-user@kubernetes \
    --kubeconfig=/etc/kubernetes/conf/kubelet/bootstrap-kubelet.conf
[root@Kubernetes1 kubelet]# kubectl config set-cluster kubernetes \
>     --certificate-authority=/etc/kubernetes/pki/ca.pem \
>     --embed-certs=true \
>     --server=https://10.19.0.7:6443 \
>     --kubeconfig=/etc/kubernetes/kubelet/bootstrap-kubelet.conf
Cluster "kubernetes" set.
[root@Kubernetes1 kubelet]# kubectl config set-credentials tls-bootstrap-token-user \
>     --token=xumeng.d683399b7a553977 \
>     --kubeconfig=/etc/kubernetes/kubelet/bootstrap-kubelet.conf
User "tls-bootstrap-token-user" set.
[root@Kubernetes1 kubelet]# kubectl config set-context tls-bootstrap-token-user@kubernetes \
>     --cluster=kubernetes \
>     --user=tls-bootstrap-token-user \
>     --kubeconfig=/etc/kubernetes/kubelet/bootstrap-kubelet.conf
Context "tls-bootstrap-token-user@kubernetes" created.
[root@Kubernetes1 kubelet]# kubectl config use-context tls-bootstrap-token-user@kubernetes \
>     --kubeconfig=/etc/kubernetes/kubelet/bootstrap-kubelet.conf
Switched to context "tls-bootstrap-token-user@kubernetes".

3.8、kube-proxy

1、Certificate
sudo tee /etc/kubernetes/pki/kube-proxy/kube-proxy-csr.json <<-'EOF'
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Shanghai",
      "L": "Shanghai",
      "O": "system:kube-proxy",
      "OU": "Kubernetes"
    }
  ]
}
EOF
# Issue the certificate from the Kubernetes CA
cfssl gencert \
    -ca=/etc/kubernetes/pki/ca.pem \
    -ca-key=/etc/kubernetes/pki/ca-key.pem \
    -config=/etc/kubernetes/pki/ca-config.json \
    -profile=kubernetes \
    kube-proxy-csr.json | cfssljson -bare /etc/kubernetes/pki/kube-proxy/kube-proxy
[root@Kubernetes1 kubernetes]# sudo tee /etc/kubernetes/pki/kube-proxy/kube-proxy-csr.json <<-'EOF'
> {
>   "CN": "system:kube-proxy",
>   "key": {
>     "algo": "rsa",
>     "size": 2048
>   },
>   "names": [
>     {
>       "C": "CN",
>       "ST": "Shanghai",
>       "L": "Shanghai",
>       "O": "system:kube-proxy",
>       "OU": "Kubernetes"
>     }
>   ]
> }
> EOF
······
[root@Kubernetes1 kube-proxy]# cfssl gencert \
>     -ca=/etc/kubernetes/pki/ca.pem \
>     -ca-key=/etc/kubernetes/pki/ca-key.pem \
>     -config=/etc/kubernetes/pki/ca-config.json \
>     -profile=kubernetes \
>     kube-proxy-csr.json | cfssljson -bare /etc/kubernetes/pki/kube-proxy/kube-proxy
2023/05/14 15:09:12 [INFO] generate received request
2023/05/14 15:09:12 [INFO] received CSR
2023/05/14 15:09:12 [INFO] generating key: rsa-2048
2023/05/14 15:09:12 [INFO] encoded CSR
2023/05/14 15:09:12 [INFO] signed certificate with serial number 85466608013210076838833194009546651541147806751
2023/05/14 15:09:12 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@Kubernetes1 kube-proxy]# ls
kube-proxy.csr  kube-proxy-csr.json  kube-proxy-key.pem  kube-proxy.pem
2、Kubeconfig
kubectl config set-cluster kubernetes \
    --certificate-authority=/etc/kubernetes/pki/ca.pem \
    --embed-certs=true \
    --server=https://10.19.0.7:6443 \
    --kubeconfig=/etc/kubernetes/conf/kube-proxy/kube-proxy.conf

kubectl config set-credentials system:kube-proxy \
    --client-certificate=/etc/kubernetes/pki/kube-proxy/kube-proxy.pem \
    --client-key=/etc/kubernetes/pki/kube-proxy/kube-proxy-key.pem \
    --embed-certs=true \
    --kubeconfig=/etc/kubernetes/conf/kube-proxy/kube-proxy.conf

kubectl config set-context default \
    --cluster=kubernetes \
    --user=system:kube-proxy \
    --kubeconfig=/etc/kubernetes/conf/kube-proxy/kube-proxy.conf

kubectl config use-context default \
    --kubeconfig=/etc/kubernetes/conf/kube-proxy/kube-proxy.conf
[root@Kubernetes1 kube-proxy]# kubectl config set-cluster kubernetes \
>     --certificate-authority=/etc/kubernetes/pki/ca.pem \
>     --embed-certs=true \
>     --server=https://10.19.0.7:6443 \
>     --kubeconfig=/etc/kubernetes/kube-proxy/kube-proxy.conf
Cluster "kubernetes" set.
[root@Kubernetes1 kube-proxy]# kubectl config set-credentials system:kube-proxy \
>     --client-certificate=/etc/kubernetes/pki/kube-proxy/kube-proxy.pem \
>     --client-key=/etc/kubernetes/pki/kube-proxy/kube-proxy-key.pem \
>     --embed-certs=true \
>     --kubeconfig=/etc/kubernetes/kube-proxy/kube-proxy.conf
User "system:kube-proxy" set.
[root@Kubernetes1 kube-proxy]# kubectl config set-context default \
>     --cluster=kubernetes \
>     --user=system:kube-proxy \
>     --kubeconfig=/etc/kubernetes/kube-proxy/kube-proxy.conf
Context "default" created.
[root@Kubernetes1 kube-proxy]# kubectl config use-context default \
>     --kubeconfig=/etc/kubernetes/kube-proxy/kube-proxy.conf
Switched to context "default".

3.9、ServiceAccount

When Kubernetes creates a ServiceAccount it issues a token (Secret) for it; the token is signed with this private key (sa.key) and verified by the API servers with the matching public key (sa.pub), so all masters must share the same key pair.

[root@Kubernetes1 pki]# pwd
/etc/kubernetes/pki
[root@Kubernetes1 pki]# mkdir service-account
[root@Kubernetes1 pki]# cd service-account
# Generate the private key
[root@Kubernetes1 service-account]# openssl genrsa -out /etc/kubernetes/pki/service-account/sa.key 2048
Generating RSA private key, 2048 bit long modulus
.......................................+++
..............................+++
e is 65537 (0x10001)
# Derive the public key
[root@Kubernetes1 service-account]# ls
sa.key
[root@Kubernetes1 service-account]# openssl rsa -in /etc/kubernetes/pki/service-account/sa.key -pubout -out /etc/kubernetes/pki/service-account/sa.pub
writing RSA key
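A small optional sanity check: the public key is only usable if it matches the private key, which can be confirmed by comparing the RSA modulus of both files.
# The two checksums must be identical
openssl rsa -in /etc/kubernetes/pki/service-account/sa.key -noout -modulus | md5sum
openssl rsa -pubin -in /etc/kubernetes/pki/service-account/sa.pub -noout -modulus | md5sum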

3.10、Copy the certificates to the other nodes

[root@Kubernetes1 ~]# for i in Kubernetes2 Kubernetes3 Kubernetes4 Kubernetes5 Kubernetes6;do scp -r /etc/kubernetes/* root@$i:/etc/kubernetes/;done
······
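A quick loop over the same hosts confirms the copy landed before the other masters try to start their components:
# Verify the PKI directory now exists on every node
for i in Kubernetes2 Kubernetes3 Kubernetes4 Kubernetes5 Kubernetes6;do ssh root@$i 'hostname; ls /etc/kubernetes/pki';done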

4、Starting the master components

4.1、apiserver

1、Startup
# Flag reference: https://kubernetes.io/zh-cn/docs/reference/command-line-tools-reference/kube-apiserver/ 
# On each master, adjust --advertise-address and --etcd-servers to match the local node and the etcd cluster
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/bin/kube-apiserver \
      --v=2  \
      --allow-privileged=true  \
      --bind-address=0.0.0.0  \
      --secure-port=6443  \
      --advertise-address=10.19.0.5 \
      --service-cluster-ip-range=10.96.0.0/16  \
      --service-node-port-range=30000-32767  \
      --etcd-servers=https://10.19.0.5:2379,https://10.19.0.9:2379,https://10.19.0.11:2379 \
      --etcd-cafile=/etc/kubernetes/pki/etcd/ca.pem  \
      --etcd-certfile=/etc/kubernetes/pki/etcd/etcd.pem  \
      --etcd-keyfile=/etc/kubernetes/pki/etcd/etcd-key.pem  \
      --client-ca-file=/etc/kubernetes/pki/ca.pem  \
      --tls-cert-file=/etc/kubernetes/pki/apiserver/apiserver.pem  \
      --tls-private-key-file=/etc/kubernetes/pki/apiserver/apiserver-key.pem  \
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver/apiserver.pem  \
      --kubelet-client-key=/etc/kubernetes/pki/apiserver/apiserver-key.pem  \
      --service-account-key-file=/etc/kubernetes/pki/service-account/sa.pub  \
      --service-account-signing-key-file=/etc/kubernetes/pki/service-account/sa.key  \
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \
      --kubelet-preferred-address-types=Hostname,InternalDNS,InternalIP,ExternalDNS,ExternalIP \
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \
      --authorization-mode=Node,RBAC  \
      --enable-bootstrap-token-auth=true  \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy/ca.pem  \
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy/front-proxy-client.pem  \
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy/front-proxy-client-key.pem  \
      --requestheader-allowed-names=aggregator,front-proxy-client  \
      --requestheader-group-headers=X-Remote-Group  \
      --requestheader-extra-headers-prefix=X-Remote-Extra-  \
      --requestheader-username-headers=X-Remote-User \
      --enable-aggregator-routing=true

Restart=on-failure
RestartSec=10s
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
[root@Kubernetes1 ~]# mkdir -p /etc/kubernetes/manifests/ /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes
[root@Kubernetes1 ~]# vi /usr/lib/systemd/system/kube-apiserver.service
[root@Kubernetes1 ~]# systemctl daemon-reload
[root@Kubernetes1 ~]# systemctl enable --now kube-apiserver.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
[root@Kubernetes1 front-proxy]# systemctl status kube-apiserver.service
● kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2023-05-13 11:42:28 CST; 24s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 30396 (kube-apiserver)
    Tasks: 8
   Memory: 220.8M
   CGroup: /system.slice/kube-apiserver.service
           └─30396 /usr/bin/kube-apiserver --v=2 --allow-privileged=true --bind-address=0.0.0.0 --secure-port=6443 --advertise-address=10.19.0.5 --service-cluster-ip-rang...

May 13 11:42:30 Kubernetes1 kube-apiserver[30396]: I0513 11:42:30.720292   30396 apf_controller.go:444] "Update CurrentCL" plName="workload-high" seatDemandHigh...stop=false
May 13 11:42:30 Kubernetes1 kube-apiserver[30396]: I0513 11:42:30.720331   30396 apf_controller.go:444] "Update CurrentCL" plName="workload-low" seatDemandHighW...stop=false
May 13 11:42:30 Kubernetes1 kube-apiserver[30396]: I0513 11:42:30.720379   30396 apf_controller.go:444] "Update CurrentCL" plName="system" seatDemandHighWaterma...stop=false
May 13 11:42:30 Kubernetes1 kube-apiserver[30396]: I0513 11:42:30.720399   30396 apf_controller.go:444] "Update CurrentCL" plName="node-high" seatDemandHighWate...stop=false
May 13 11:42:30 Kubernetes1 kube-apiserver[30396]: I0513 11:42:30.720416   30396 apf_controller.go:444] "Update CurrentCL" plName="catch-all" seatDemandHighWate...stop=false
May 13 11:42:30 Kubernetes1 kube-apiserver[30396]: I0513 11:42:30.720433   30396 apf_controller.go:444] "Update CurrentCL" plName="leader-election" seatDemandHi...stop=false
May 13 11:42:30 Kubernetes1 kube-apiserver[30396]: I0513 11:42:30.723801   30396 strategy.go:236] "Successfully created PriorityLevelConfiguration" type="sugges...kload-low"
May 13 11:42:30 Kubernetes1 kube-apiserver[30396]: I0513 11:42:30.732897   30396 apf_controller.go:854] Retaining queues for priority level "workload-low": config={"type"...
May 13 11:42:30 Kubernetes1 kube-apiserver[30396]: I0513 11:42:30.732929   30396 apf_controller.go:846] Introducing queues for priority level "global-default": config={"t...
May 13 11:42:30 Kubernetes1 kube-apiserver[30396]: I0513 11:42:30.732942   30396 apf_controller.go:854] Retaining queues for priority level "system": config={"type":"Limi...
Hint: Some lines were ellipsized, use -l to show in full.
2、Troubleshooting
# Startup log (journal)
journalctl -xeu kube-apiserver
# Grep kube-apiserver errors from syslog
cat /var/log/messages|grep kube-apiserver|grep -i error
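Besides the logs, the apiserver serves aggregated health endpoints; a minimal check (assuming kubectl on this node already points at the local apiserver, as configured earlier) is:
# Per-check readiness report served by kube-apiserver itself
kubectl get --raw='/readyz?verbose'
# Liveness only
kubectl get --raw='/livez'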

4.2、controller-manager

1、Startup
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/bin/kube-controller-manager \
      --v=2 \
      --bind-address=127.0.0.1 \
      --root-ca-file=/etc/kubernetes/pki/ca.pem \
      --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \
      --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \
      --service-account-private-key-file=/etc/kubernetes/pki/service-account/sa.key \
      --kubeconfig=/etc/kubernetes/conf/controller-manager/controller-manager.conf \
      --leader-elect=true \
      --use-service-account-credentials=true \
      --node-monitor-grace-period=40s \
      --node-monitor-period=5s \
      --pod-eviction-timeout=2m0s \
      --controllers=*,bootstrapsigner,tokencleaner \
      --allocate-node-cidrs=true \
      --cluster-cidr=192.168.0.0/16 \
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy/ca.pem \
      --node-cidr-mask-size=24
      
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
[root@Kubernetes1 ~]# vi /usr/lib/systemd/system/kube-controller-manager.service
[root@Kubernetes1 ~]# systemctl daemon-reload
[root@Kubernetes1 ~]# systemctl enable --now kube-controller-manager.service 
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
[root@Kubernetes1 ~]# systemctl status kube-controller-manager
● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2023-05-13 15:28:15 CST; 37s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 21567 (kube-controller)
    Tasks: 5
   Memory: 24.5M
   CGroup: /system.slice/kube-controller-manager.service
           └─21567 /usr/bin/kube-controller-manager --v=2 --bind-address=127.0.0.1 --root-ca-file=/etc/kubernetes/pki/ca.pem --cluster-signing-cert-file=/etc/kubernetes/p...

May 13 15:28:16 Kubernetes1 kube-controller-manager[21567]: I0513 15:28:16.492630   21567 tlsconfig.go:200] "Loaded serving cert" certName="Generated self signed cert" ce...
May 13 15:28:16 Kubernetes1 kube-controller-manager[21567]: I0513 15:28:16.492763   21567 named_certificates.go:53] "Loaded SNI cert" index=0 certName="self-signed loopba...
May 13 15:28:16 Kubernetes1 kube-controller-manager[21567]: I0513 15:28:16.492781   21567 secure_serving.go:210] Serving securely on 127.0.0.1:10257
May 13 15:28:16 Kubernetes1 kube-controller-manager[21567]: I0513 15:28:16.492971   21567 leaderelection.go:248] attempting to acquire leader lease kube-system/k...anager...
May 13 15:28:16 Kubernetes1 kube-controller-manager[21567]: I0513 15:28:16.493275   21567 dynamic_cafile_content.go:157] "Starting controller" name="request-head...y/ca.pem"
May 13 15:28:16 Kubernetes1 kube-controller-manager[21567]: I0513 15:28:16.493406   21567 tlsconfig.go:240] "Starting DynamicServingCertificateController"
May 13 15:28:21 Kubernetes1 kube-controller-manager[21567]: E0513 15:28:21.493666   21567 leaderelection.go:330] error retrieving resource lock kube-system/kube-controlle...
May 13 15:28:29 Kubernetes1 kube-controller-manager[21567]: E0513 15:28:29.682041   21567 leaderelection.go:330] error retrieving resource lock kube-system/kube-controlle...
May 13 15:28:37 Kubernetes1 kube-controller-manager[21567]: E0513 15:28:37.206671   21567 leaderelection.go:330] error retrieving resource lock kube-system/kube-controlle...
May 13 15:28:45 Kubernetes1 kube-controller-manager[21567]: E0513 15:28:45.658338   21567 leaderelection.go:330] error retrieving resource lock kube-system/kube-controlle...
Hint: Some lines were ellipsized, use -l to show in full.
2、Troubleshooting
# Startup log (journal)
journalctl -xeu kube-controller-manager
# Grep kube-controller-manager errors from syslog
cat /var/log/messages|grep kube-controller-manager|grep -i error
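The controller-manager also serves a health endpoint on its secure port (127.0.0.1:10257 with the --bind-address used above); /healthz is normally reachable without credentials under the component's default authorization settings:
# Expect "ok"
curl -sk https://127.0.0.1:10257/healthz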

4.3、scheduler

1、Startup
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/usr/bin/kube-scheduler \
      --v=2 \
      --bind-address=127.0.0.1 \
      --leader-elect=true \
      --kubeconfig=/etc/kubernetes/conf/scheduler/scheduler.conf

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
[root@Kubernetes1 ~]# vi /usr/lib/systemd/system/kube-scheduler.service
[root@Kubernetes1 ~]# systemctl daemon-reload
[root@Kubernetes1 ~]# systemctl enable --now kube-scheduler.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
[root@Kubernetes3 ~]# systemctl status kube-scheduler.service
● kube-scheduler.service - Kubernetes Scheduler
   Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2023-05-13 17:58:09 CST; 6s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 22850 (kube-scheduler)
    Tasks: 7
   Memory: 19.3M
   CGroup: /system.slice/kube-scheduler.service
           └─22850 /usr/bin/kube-scheduler --v=2 --bind-address=127.0.0.1 --leader-elect=true --kubeconfig=/etc/kubernetes/scheduler/scheduler.conf

May 13 17:58:09 Kubernetes3 kube-scheduler[22850]: I0513 17:58:09.351165   22850 flags.go:64] FLAG: --tls-private-key-file=""
May 13 17:58:09 Kubernetes3 kube-scheduler[22850]: I0513 17:58:09.351169   22850 flags.go:64] FLAG: --tls-sni-cert-key="[]"
May 13 17:58:09 Kubernetes3 kube-scheduler[22850]: I0513 17:58:09.351175   22850 flags.go:64] FLAG: --v="2"
May 13 17:58:09 Kubernetes3 kube-scheduler[22850]: I0513 17:58:09.351181   22850 flags.go:64] FLAG: --version="false"
May 13 17:58:09 Kubernetes3 kube-scheduler[22850]: I0513 17:58:09.351190   22850 flags.go:64] FLAG: --vmodule=""
May 13 17:58:09 Kubernetes3 kube-scheduler[22850]: I0513 17:58:09.351198   22850 flags.go:64] FLAG: --write-config-to=""
May 13 17:58:09 Kubernetes3 kube-scheduler[22850]: I0513 17:58:09.641245   22850 serving.go:348] Generated self-signed cert in-memory
May 13 17:58:09 Kubernetes3 kube-scheduler[22850]: W0513 17:58:09.811156   22850 authentication.go:320] No authentication-kubeconfig provided in order to lookup...on't work.
May 13 17:58:09 Kubernetes3 kube-scheduler[22850]: W0513 17:58:09.811189   22850 authentication.go:344] No authentication-kubeconfig provided in order to lookup...on't work.
May 13 17:58:09 Kubernetes3 kube-scheduler[22850]: W0513 17:58:09.811202   22850 authorization.go:194] No authorization-kubeconfig provided, so SubjectAccessRev...on't work.
Hint: Some lines were ellipsized, use -l to show in full.
2、Troubleshooting
# Startup log (journal)
journalctl -xeu kube-scheduler
# Grep kube-scheduler errors from syslog
cat /var/log/messages|grep kube-scheduler|grep -i error
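The scheduler's health endpoint works the same way; 10259 is its default secure port when none is set explicitly in the unit above:
# Expect "ok"
curl -sk https://127.0.0.1:10259/healthz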
3、Verification
[root@Kubernetes1 ~]# kubectl get nodes
No resources found
[root@Kubernetes3 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
scheduler            Healthy   ok                              
controller-manager   Healthy   ok                              
etcd-1               Healthy   {"health":"true","reason":""}   
etcd-0               Healthy   {"health":"true","reason":""}   
etcd-2               Healthy   {"health":"true","reason":""}

4.4、kubelet

1、Bootstrap token and RBAC manifest
apiVersion: v1
kind: Secret
metadata:
  # Replace with your own token ID
  name: bootstrap-token-xumeng
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  description: "The default bootstrap token generated by 'kubelet '."
  # Replace with your own token ID
  token-id: xumeng
  # Replace with your own token secret
  token-secret: d683399b7a553977
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
  auth-extra-groups:  system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubelet-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-bootstrapper
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-autoapprove-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-autoapprove-certificate-rotation
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:nodes
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kube-apiserver
[root@Kubernetes1 kubelet]# vi /etc/kubernetes/pki/kubelet/bootstrap.secret.yaml
[root@Kubernetes1 kubelet]# kubectl apply -f bootstrap.secret.yaml
secret/bootstrap-token-xumeng created
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
clusterrolebinding.rbac.authorization.k8s.io/node-autoapprove-bootstrap created
clusterrolebinding.rbac.authorization.k8s.io/node-autoapprove-certificate-rotation created
clusterrole.rbac.authorization.k8s.io/system:kube-apiserver-to-kubelet created
clusterrolebinding.rbac.authorization.k8s.io/system:kube-apiserver created
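Before starting any kubelet, the token Secret and the ClusterRoleBindings it depends on can be checked by name (names as created above):
kubectl -n kube-system get secret bootstrap-token-xumeng
kubectl get clusterrolebinding kubelet-bootstrap node-autoapprove-bootstrap node-autoapprove-certificate-rotation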
2、Startup
# Config reference: https://kubernetes.io/zh-cn/docs/reference/config-api/kubelet-config.v1beta1/
# Note: make sure the --bootstrap-kubeconfig/--kubeconfig/--config paths in the unit below match where those files are actually created on each node
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/bin/kubelet \
    --v=2 \
    --bootstrap-kubeconfig=/etc/kubernetes/conf/kubelet/bootstrap-kubelet.conf \
    --kubeconfig=/etc/kubernetes/conf/kubelet/kubelet.conf \
    --config=/etc/kubernetes/conf/kubelet/kubelet-conf.yaml \
    --container-runtime-endpoint=unix:///run/cri-dockerd.sock \
    --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9

[Install]
WantedBy=multi-user.target
[root@Kubernetes1 ~]# vi /etc/kubernetes/conf/kubelet/kubelet-conf.yaml
[root@Kubernetes1 ~]# vi /usr/lib/systemd/system/kubelet.service
[root@Kubernetes1 ~]# systemctl daemon-reload
[root@Kubernetes1 ~]# systemctl enable --now kubelet.service 
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@Kubernetes1 ~]# systemctl status kubelet.service 
3、Troubleshooting
# Startup log (journal)
journalctl -xeu kubelet
# Grep kubelet errors from syslog
cat /var/log/messages|grep kubelet|grep -i error
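The kubelet's own health endpoint matches healthzBindAddress/healthzPort in kubelet-conf.yaml above, so a local probe quickly distinguishes a dead kubelet from a mere registration problem:
# Expect "ok" (127.0.0.1:10248 per the configuration above)
curl http://127.0.0.1:10248/healthz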

4.5、kube-proxy

1、Startup
# Config reference: https://kubernetes.io/zh-cn/docs/reference/config-api/kube-proxy-config.v1alpha1/
# Note: clientConnection.kubeconfig must point at the kube-proxy.conf generated in 3.8, and --hostname-override must be set to each node's own hostname
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ''
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /etc/kubernetes/conf/kube-proxy/kube-proxy.conf
  qps: 5
clusterCIDR: 192.168.0.0/16
configSyncPeriod: 15m0s
conntrack:
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 5s
  syncPeriod: 30s
ipvs:
  minSyncPeriod: 5s
  scheduler: "rr"
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
[Unit]
Description=Kubernetes kube-proxy
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/bin/kube-proxy \
    --v=2  \
    --hostname-override=Kubernetes1 \
    --config=/etc/kubernetes/conf/kube-proxy/kube-proxy-conf.yaml

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
[root@Kubernetes1 ~]# vim /etc/kubernetes/conf/kube-proxy/kube-proxy-conf.yaml
[root@Kubernetes1 ~]# vim /usr/lib/systemd/system/kube-proxy.service
[root@Kubernetes1 ~]# systemctl daemon-reload
[root@Kubernetes1 ~]# systemctl enable --now kube-proxy.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
[root@Kubernetes1 ~]# systemctl status kube-proxy.service
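Since the configuration above selects IPVS mode, it is worth confirming the proxy actually came up in that mode rather than silently falling back to iptables; the metrics endpoint (127.0.0.1:10249 as configured) should report the active mode, and ipvsadm, if installed, lists the programmed virtual servers:
# Should print "ipvs"
curl -s 127.0.0.1:10249/proxyMode
# List the IPVS virtual servers kube-proxy has programmed (requires ipvsadm)
ipvsadm -Ln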

4.6、Verification

If a CSR was accidentally deleted along the way, removing the kubelet's generated kubeconfig and certificate files and restarting kubelet will make it submit a new CSR. The nodes stay NotReady until the network plugin from section 6 is deployed.

[root@Kubernetes1 kubelet]# kubectl get csr
NAME        AGE   SIGNERNAME                                    REQUESTOR                 REQUESTEDDURATION   CONDITION
csr-2rdjv   77s   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:xumeng   <none>              Approved,Issued
csr-bwm58   77s   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:xumeng   <none>              Approved,Issued
csr-gvfbh   77s   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:xumeng   <none>              Approved,Issued
csr-nmlmz   77s   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:xumeng   <none>              Approved,Issued
csr-vrtb6   77s   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:xumeng   <none>              Approved,Issued
csr-w2r6d   77s   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:xumeng   <none>              Approved,Issued
[root@Kubernetes1 kubelet]# kubectl get nodes
NAME          STATUS     ROLES    AGE   VERSION
kubernetes1   NotReady   <none>   9s    v1.26.3
kubernetes2   NotReady   <none>   9s    v1.26.3
kubernetes3   NotReady   <none>   9s    v1.26.3
kubernetes4   NotReady   <none>   9s    v1.26.3
kubernetes5   NotReady   <none>   9s    v1.26.3
kubernetes6   NotReady   <none>   9s    v1.26.3

5、Starting the worker components

5.1、kubelet

Same as section 4.4.

5.2、kube-proxy

Same as section 4.5.

5.3、Verification

Same as section 4.6: once the worker kubelets and kube-proxy are running, their CSRs are auto-approved and all six nodes appear in kubectl get csr / kubectl get nodes (still NotReady until the network plugin in section 6 is deployed).


6、Network plugin (Calico)

[root@Kubernetes1 kubelet]# curl https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico-etcd.yaml -o calico.yaml
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  232k  100  232k    0     0   201k      0  0:00:01  0:00:01 --:--:--  201k
# Point Calico at the etcd cluster
[root@Kubernetes1 ~]# sed -i 's#etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"#etcd_endpoints: "https://10.19.0.5:2379,https://10.19.0.9:2379,https://10.19.0.11:2379"#g' calico.yaml
# Inject the etcd certificates (base64-encoded)
[root@Kubernetes1 ~]# ETCD_CA=`cat /etc/kubernetes/pki/etcd/ca.pem | base64 -w 0 `
[root@Kubernetes1 ~]# ETCD_CERT=`cat /etc/kubernetes/pki/etcd/etcd.pem | base64 -w 0 `
[root@Kubernetes1 ~]# ETCD_KEY=`cat /etc/kubernetes/pki/etcd/etcd-key.pem | base64 -w 0 `
[root@Kubernetes1 ~]# sed -i "s@# etcd-key: null@etcd-key: ${ETCD_KEY}@g; s@# etcd-cert: null@etcd-cert: ${ETCD_CERT}@g; s@# etcd-ca: null@etcd-ca: ${ETCD_CA}@g" calico.yaml
# Enable the etcd certificate paths in the ConfigMap
[root@Kubernetes1 ~]# sed -i 's#etcd_ca: ""#etcd_ca: "/calico-secrets/etcd-ca"#g; s#etcd_cert: ""#etcd_cert: "/calico-secrets/etcd-cert"#g; s#etcd_key: "" #etcd_key: "/calico-secrets/etcd-key" #g' calico.yaml
# Pod CIDR: the default pool (192.168.0.0/16) already matches our plan, so no change is needed. The script below pre-pulls the Calico images from an Aliyun mirror and re-tags them (run it on every node); a quick check of the manifest follows the script.
#!/bin/bash
docker pull registry.cn-hangzhou.aliyuncs.com/ialso/calico-cni:v3.25.0
docker pull registry.cn-hangzhou.aliyuncs.com/ialso/calico-node:v3.25.0
docker pull registry.cn-hangzhou.aliyuncs.com/ialso/calico-kube-controllers:v3.25.0
docker tag registry.cn-hangzhou.aliyuncs.com/ialso/calico-cni:v3.25.0 calico/cni:v3.25.0
docker tag registry.cn-hangzhou.aliyuncs.com/ialso/calico-node:v3.25.0 calico/node:v3.25.0
docker tag registry.cn-hangzhou.aliyuncs.com/ialso/calico-kube-controllers:v3.25.0 calico/kube-controllers:v3.25.0
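Before applying the manifest, two greps confirm the edits above took effect; in the v3.25.0 etcd manifest CALICO_IPV4POOL_CIDR is left commented out, in which case Calico falls back to the 192.168.0.0/16 default.
# CALICO_IPV4POOL_CIDR defaults to 192.168.0.0/16 when left commented out
grep -A1 "CALICO_IPV4POOL_CIDR" calico.yaml
# Confirm the etcd endpoints and certificate paths were substituted
grep -E "etcd_endpoints|etcd_ca|etcd_cert|etcd_key" calico.yaml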
[root@Kubernetes1 ~]# kubectl apply -f calico.yaml
poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
secret/calico-etcd-secrets created
configmap/calico-config created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created
[root@Kubernetes1 ~]# kubectl get pods -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-56ff66f86d-cswt8   1/1     Running   0          31s
kube-system   calico-node-68hck                          1/1     Running   0          31s
kube-system   calico-node-hcbsh                          1/1     Running   0          30s
kube-system   calico-node-nmvxr                          1/1     Running   0          30s
kube-system   calico-node-p5xj2                          1/1     Running   0          30s
kube-system   calico-node-t4mpr                          1/1     Running   0          30s
kube-system   calico-node-v76v7                          1/1     Running   0          30s
[root@Kubernetes1 ~]# kubectl get nodes
NAME          STATUS   ROLES    AGE     VERSION
kubernetes1   Ready    <none>   3h10m   v1.26.3
kubernetes2   Ready    <none>   3h10m   v1.26.3
kubernetes3   Ready    <none>   3h10m   v1.26.3
kubernetes4   Ready    <none>   3h10m   v1.26.3
kubernetes5   Ready    <none>   3h10m   v1.26.3
kubernetes6   Ready    <none>   3h10m   v1.26.3

7、CoreDNS

[root@Kubernetes6 kubelet]# docker pull registry.aliyuncs.com/google_containers/coredns:1.9.4
1.9.4: Pulling from google_containers/coredns
c6824c7a0594: Pull complete 
8f16f0bc6a9b: Pull complete 
Digest: sha256:b82e294de6be763f73ae71266c8f5466e7e03c69f3a1de96efd570284d35bb18
Status: Downloaded newer image for registry.aliyuncs.com/google_containers/coredns:1.9.4
registry.aliyuncs.com/google_containers/coredns:1.9.4
[root@Kubernetes1 ~]# git clone https://github.com/coredns/deployment.git
Cloning into 'deployment'...
remote: Enumerating objects: 974, done.
remote: Counting objects: 100% (115/115), done.
remote: Compressing objects: 100% (66/66), done.
remote: Total 974 (delta 63), reused 92 (delta 43), pack-reused 859
Receiving objects: 100% (974/974), 268.93 KiB | 240.00 KiB/s, done.
Resolving deltas: 100% (531/531), done.
[root@Kubernetes1 ~]# cd deployment/kubernetes
[root@Kubernetes1 kubernetes]# ./deploy.sh -s -i 10.96.0.10 | kubectl apply -f -
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
[root@Kubernetes1 kubernetes]# kubectl get pods -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-56ff66f86d-cswt8   1/1     Running   0          71m
kube-system   calico-node-68hck                          1/1     Running   0          71m
kube-system   calico-node-hcbsh                          1/1     Running   0          71m
kube-system   calico-node-nmvxr                          1/1     Running   0          71m
kube-system   calico-node-p5xj2                          1/1     Running   0          71m
kube-system   calico-node-t4mpr                          1/1     Running   0          71m
kube-system   calico-node-v76v7                          1/1     Running   0          71m
kube-system   coredns-85b5646f88-p6nwl                   1/1     Running   0          6m49s
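With CoreDNS running, in-cluster name resolution can be exercised with a throwaway pod; busybox:1.28 is used here only because its nslookup output is well-behaved:
# Resolve the kubernetes Service through the cluster DNS (10.96.0.10)
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- nslookup kubernetes.default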

8、Node labels and taints

[root@kubernetes1 kubelet]# kubectl get nodes
NAME          STATUS   ROLES    AGE     VERSION
kubernetes1   Ready    <none>   3h40m   v1.26.3
kubernetes2   Ready    <none>   3h40m   v1.26.3
kubernetes3   Ready    <none>   3h40m   v1.26.3
kubernetes4   Ready    <none>   3h40m   v1.26.3
kubernetes5   Ready    <none>   3h40m   v1.26.3
kubernetes6   Ready    <none>   3h40m   v1.26.3
[root@Kubernetes1 kubelet]# kubectl label node kubernetes1 node-role.kubernetes.io/master=''
node/kubernetes1 labeled
[root@Kubernetes1 kubelet]# kubectl label node kubernetes2 node-role.kubernetes.io/master=''
node/kubernetes2 labeled
[root@Kubernetes1 kubelet]# kubectl label node kubernetes3 node-role.kubernetes.io/master=''
node/kubernetes3 labeled
[root@Kubernetes1 kubelet]# kubectl label node kubernetes2 node-role.kubernetes.io/worker=''
node/kubernetes2 labeled
[root@Kubernetes1 kubelet]# kubectl label node kubernetes3 node-role.kubernetes.io/worker=''
node/kubernetes3 labeled
[root@Kubernetes1 kubelet]# kubectl label node kubernetes4 node-role.kubernetes.io/worker=''
node/kubernetes4 labeled
[root@Kubernetes1 kubelet]# kubectl label node kubernetes5 node-role.kubernetes.io/worker=''
node/kubernetes5 labeled
[root@Kubernetes1 kubelet]# kubectl label node kubernetes6 node-role.kubernetes.io/worker=''
node/kubernetes6 labeled
[root@Kubernetes1 kubelet]# kubectl get nodes
NAME          STATUS   ROLES           AGE     VERSION
kubernetes1   Ready    master          3h44m   v1.26.3
kubernetes2   Ready    master,worker   3h44m   v1.26.3
kubernetes3   Ready    master,worker   3h44m   v1.26.3
kubernetes4   Ready    worker          3h44m   v1.26.3
kubernetes5   Ready    worker          3h44m   v1.26.3
kubernetes6   Ready    worker          3h44m   v1.26.3
[root@Kubernetes1 kubelet]# kubectl taint nodes kubernetes1 node-role.kubernetes.io/master=:NoSchedule
node/kubernetes1 tainted
[root@Kubernetes1 kubernetes]# kubectl describe node kubernetes1|grep "Taints"
Taints:             node-role.kubernetes.io/master:NoSchedule

9、metrics

The cluster resource-metrics component (metrics-server).

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.6.3/components.yaml
# If the upstream image cannot be pulled, switch the Deployment to the Aliyun mirror below
registry.cn-hangzhou.aliyuncs.com/google_containers/metrics-server:v0.6.3
[root@Kubernetes1 ~]# kubectl get pods -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-56ff66f86d-cswt8   1/1     Running   0          3h47m
kube-system   calico-node-68hck                          1/1     Running   0          3h47m
kube-system   calico-node-hcbsh                          1/1     Running   0          3h47m
kube-system   calico-node-nmvxr                          1/1     Running   0          3h47m
kube-system   calico-node-p5xj2                          1/1     Running   0          3h47m
kube-system   calico-node-t4mpr                          1/1     Running   0          3h47m
kube-system   calico-node-v76v7                          1/1     Running   0          3h47m
kube-system   coredns-85b5646f88-p6nwl                   1/1     Running   0          162m
kube-system   metrics-server-75c748dd7b-7b945            0/1     Running   0          45s
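Once the metrics-server pod turns Ready, resource metrics become available through the API. If it stays unready on a cluster like this one, where the kubelets serve self-signed certificates, a common workaround is adding --kubelet-insecure-tls to the metrics-server container args.
# Should list CPU/memory usage once metrics-server is Ready
kubectl top nodes
kubectl top pods -A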

10、ingress

10.1、Node labels

[root@Kubernetes1 ~]# kubectl label node kubernetes2 node-role=ingress
node/kubernetes2 labeled
[root@Kubernetes1 ~]# kubectl label node kubernetes3 node-role=ingress
node/kubernetes3 labeled

10.2、ingress.yaml

apiVersion: v1
kind: Namespace
metadata:
  labels:
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  name: ingress-nginx
---
apiVersion: v1
automountServiceAccountToken: true
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx
  namespace: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx-admission
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - endpoints
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resourceNames:
      - ingress-controller-leader
    resources:
      - configmaps
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - coordination.k8s.io
    resourceNames:
      - ingress-controller-leader
    resources:
      - leases
    verbs:
      - get
      - update
  - apiGroups:
      - coordination.k8s.io
    resources:
      - leases
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx-admission
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - secrets
    verbs:
      - get
      - create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
      - namespaces
    verbs:
      - list
      - watch
  - apiGroups:
      - coordination.k8s.io
    resources:
      - leases
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx-admission
rules:
  - apiGroups:
      - admissionregistration.k8s.io
    resources:
      - validatingwebhookconfigurations
    verbs:
      - get
      - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx-admission
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx-admission
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
apiVersion: v1
data:
  allow-snippet-annotations: "true"
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx-controller
  namespace: ingress-nginx
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  ports:
    - appProtocol: http
      name: http
      port: 80
      protocol: TCP
      targetPort: http
    - appProtocol: https
      name: https
      port: 443
      protocol: TCP
      targetPort: https
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx-controller-admission
  namespace: ingress-nginx
spec:
  ports:
    - appProtocol: https
      name: https-webhook
      port: 443
      targetPort: webhook
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  type: ClusterIP
---
apiVersion: apps/v1
# kind: Deployment
kind: DaemonSet
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  minReadySeconds: 0
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/component: controller
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/name: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/component: controller
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
    spec:
      containers:
        - args:
            - /nginx-ingress-controller
            - --election-id=ingress-controller-leader
            - --controller-class=k8s.io/ingress-nginx
            - --ingress-class=nginx
            - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
            - --validating-webhook=:8443
            - --validating-webhook-certificate=/usr/local/certificates/cert
            - --validating-webhook-key=/usr/local/certificates/key
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: LD_PRELOAD
              value: /usr/local/lib/libmimalloc.so
          # image: registry.k8s.io/ingress-nginx/controller:v1.3.1@sha256:54f7fe2c6c5a9db9a0ebf1131797109bb7a4d91f56b9b362bde2abd237dd1974
          image: registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:v1.3.1
          imagePullPolicy: IfNotPresent
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
          livenessProbe:
            failureThreshold: 5
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          name: controller
          ports:
            - containerPort: 80
              name: http
              protocol: TCP
            - containerPort: 443
              name: https
              protocol: TCP
            - containerPort: 8443
              name: webhook
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          resources:
            requests:
              cpu: 100m
              memory: 90Mi
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              add:
                - NET_BIND_SERVICE
              drop:
                - ALL
            runAsUser: 101
          volumeMounts:
            - mountPath: /usr/local/certificates/
              name: webhook-cert
              readOnly: true
      # dnsPolicy: ClusterFirst
      dnsPolicy: ClusterFirstWithHostNet
      # hostNetwork exposes ports 80/443 directly on the node
      hostNetwork: true
      nodeSelector:
        # schedule only onto nodes labeled node-role=ingress
        # kubernetes.io/os: linux
        node-role: ingress
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      volumes:
        - name: webhook-cert
          secret:
            secretName: ingress-nginx-admission
---
apiVersion: batch/v1
kind: Job
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx-admission-create
  namespace: ingress-nginx
spec:
  template:
    metadata:
      labels:
        app.kubernetes.io/component: admission-webhook
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
        app.kubernetes.io/version: 1.3.1
      name: ingress-nginx-admission-create
    spec:
      containers:
        - args:
            - create
            - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
            - --namespace=$(POD_NAMESPACE)
            - --secret-name=ingress-nginx-admission
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          # image: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.3.0@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47
          image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.3.0
          imagePullPolicy: IfNotPresent
          name: create
          securityContext:
            allowPrivilegeEscalation: false
      nodeSelector:
        kubernetes.io/os: linux
      restartPolicy: OnFailure
      securityContext:
        fsGroup: 2000
        runAsNonRoot: true
        runAsUser: 2000
      serviceAccountName: ingress-nginx-admission
---
apiVersion: batch/v1
kind: Job
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx-admission-patch
  namespace: ingress-nginx
spec:
  template:
    metadata:
      labels:
        app.kubernetes.io/component: admission-webhook
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
        app.kubernetes.io/version: 1.3.1
      name: ingress-nginx-admission-patch
    spec:
      containers:
        - args:
            - patch
            - --webhook-name=ingress-nginx-admission
            - --namespace=$(POD_NAMESPACE)
            - --patch-mutating=false
            - --secret-name=ingress-nginx-admission
            - --patch-failure-policy=Fail
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          image: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.3.0@sha256:549e71a6ca248c5abd51cdb73dbc3083df62cf92ed5e6147c780e30f7e007a47
          imagePullPolicy: IfNotPresent
          name: patch
          securityContext:
            allowPrivilegeEscalation: false
      nodeSelector:
        kubernetes.io/os: linux
      restartPolicy: OnFailure
      securityContext:
        fsGroup: 2000
        runAsNonRoot: true
        runAsUser: 2000
      serviceAccountName: ingress-nginx-admission
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: nginx
spec:
  controller: k8s.io/ingress-nginx
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.3.1
  name: ingress-nginx-admission
webhooks:
  - admissionReviewVersions:
      - v1
    clientConfig:
      service:
        name: ingress-nginx-controller-admission
        namespace: ingress-nginx
        path: /networking/v1/ingresses
    failurePolicy: Fail
    matchPolicy: Equivalent
    name: validate.nginx.ingress.kubernetes.io
    rules:
      - apiGroups:
          - networking.k8s.io
        apiVersions:
          - v1
        operations:
          - CREATE
          - UPDATE
        resources:
          - ingresses
    sideEffects: None
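Assuming the manifest above is saved as ingress.yaml, it can be applied and then exercised with a minimal test rule. The backend Service name (my-svc) and host (demo.example.com) below are placeholders for whatever workload is actually being exposed:
kubectl apply -f ingress.yaml
kubectl -n ingress-nginx get pods -o wide
# A minimal Ingress using the "nginx" IngressClass defined above (placeholder backend)
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo
spec:
  ingressClassName: nginx
  rules:
    - host: demo.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-svc
                port:
                  number: 80
EOF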

11、dashboard

https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.7.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.8
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: admin-user
    namespace: kubernetes-dashboard
[root@Kubernetes1 ~]# kubectl apply -f dashboard.yaml
# Newer Dashboard versions no longer create a login token automatically; generate one manually
[root@Kubernetes1 ~]# kubectl -n kubernetes-dashboard create token admin-user
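The token printed above is short-lived (by default it expires after roughly an hour). Two hedged sketches follow: reaching the Dashboard through a port-forward (assuming the Service created earlier in this manifest is named kubernetes-dashboard and listens on port 443, as in the upstream recommended.yaml), and creating a longer-lived token by binding a kubernetes.io/service-account-token Secret to admin-user (the Secret name admin-user-token is an arbitrary choice for illustration):

# Forward the Dashboard Service to this machine, then open https://<Kubernetes1-IP>:8443 and sign in with the token
[root@Kubernetes1 ~]# kubectl -n kubernetes-dashboard port-forward svc/kubernetes-dashboard 8443:443 --address 0.0.0.0
# Optional: a long-lived token for admin-user via a service-account-token Secret
[root@Kubernetes1 ~]# cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: admin-user-token   # arbitrary name, for illustration only
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/service-account.name: admin-user
type: kubernetes.io/service-account-token
EOF
# Read the token back out of the Secret
[root@Kubernetes1 ~]# kubectl -n kubernetes-dashboard get secret admin-user-token -o jsonpath='{.data.token}' | base64 -d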

12、Storage system

A 20 GiB general-purpose SSD data disk is attached to each of Kubernetes2-Kubernetes6; a quick way to confirm this on the nodes is shown below.
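Before installing the storage driver it is worth confirming, on each of Kubernetes2-Kubernetes6, that the data disk is visible and carries no filesystem; a minimal check (the device later shows up as vdb in the directpv output):

# The 20 GiB data disk should be listed with an empty FSTYPE and no mountpoint
[root@Kubernetes2 ~]# lsblk -f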

12.1、Install krew

(
  set -x; cd "$(mktemp -d)" &&
  OS="$(uname | tr '[:upper:]' '[:lower:]')" &&
  ARCH="$(uname -m | sed -e 's/x86_64/amd64/' -e 's/\(arm\)\(64\)\?.*/\1\2/' -e 's/aarch64$/arm64/')" &&
  KREW="krew-${OS}_${ARCH}" &&
  curl -fsSLO "https://github.com/kubernetes-sigs/krew/releases/latest/download/${KREW}.tar.gz" &&
  tar zxvf "${KREW}.tar.gz" &&
  ./"${KREW}" install krew
)
# If the download step inside the script fails, fetch the release tarball manually and install from it instead
[root@Kubernetes1 ~]# wget https://github.com/kubernetes-sigs/krew/releases/download/v0.4.3/krew-linux_amd64.tar.gz
······
[root@Kubernetes1 ~]# tar zxvf krew-linux_amd64.tar.gz
./LICENSE
./krew-linux_amd64
# Install krew
[root@Kubernetes1 ~]# ./krew-linux_amd64 install krew
Updated the local copy of plugin index.
Installing plugin: krew
Installed plugin: krew
\
 | Use this plugin:
 | 	kubectl krew
 | Documentation:
 | 	https://krew.sigs.k8s.io/
 | Caveats:
 | \
 |  | krew is now installed! To start using kubectl plugins, you need to add
 |  | krew's installation directory to your PATH:
 |  | 
 |  |   * macOS/Linux:
 |  |     - Add the following to your ~/.bashrc or ~/.zshrc:
 |  |         export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"
 |  |     - Restart your shell.
 |  | 
 |  |   * Windows: Add %USERPROFILE%\.krew\bin to your PATH environment variable
 |  | 
 |  | To list krew commands and to get help, run:
 |  |   $ kubectl krew
 |  | For a full list of available plugins, run:
 |  |   $ kubectl krew search
 |  | 
 |  | You can find documentation at
 |  |   https://krew.sigs.k8s.io/docs/user-guide/quickstart/.
 | /
/
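As the caveats above point out, krew installs itself under ~/.krew; a small sketch (assuming bash as the login shell) of putting its bin directory on PATH so that the kubectl krew commands below resolve:

[root@Kubernetes1 ~]# echo 'export PATH="${KREW_ROOT:-$HOME/.krew}/bin:$PATH"' >> ~/.bashrc
[root@Kubernetes1 ~]# source ~/.bashrc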
[root@Kubernetes1 ~]# kubectl krew list
PLUGIN  VERSION
krew    v0.4.3

12.2、directpv

1、Documentation

https://github.com/minio/directpv

2、Installation
# Install the DirectPV krew plugin
[root@Kubernetes1 ~]# kubectl krew install directpv
Updated the local copy of plugin index.
Installing plugin: directpv
Installed plugin: directpv
\
 | Use this plugin:
 | 	kubectl directpv
 | Documentation:
 | 	https://github.com/minio/directpv
/
WARNING: You installed plugin "directpv" from the krew-index plugin repository.
   These plugins are not audited for security by the Krew maintainers.
   Run them at your own risk.
# Install DirectPV into the Kubernetes cluster
[root@Kubernetes1 ~]# kubectl directpv install
Installing on unsupported Kubernetes v1.26

 ███████████████████████████████████████████████████████████████████████████ 100%

┌──────────────────────────────────────┬──────────────────────────┐
│ NAME                                 │ KIND                     │
├──────────────────────────────────────┼──────────────────────────┤
│ directpv                             │ Namespace                │
│ directpv-min-io                      │ ServiceAccount           │
│ directpv-min-io                      │ ClusterRole              │
│ directpv-min-io                      │ ClusterRoleBinding       │
│ directpv-min-io                      │ Role                     │
│ directpv-min-io                      │ RoleBinding              │
│ directpvdrives.directpv.min.io       │ CustomResourceDefinition │
│ directpvvolumes.directpv.min.io      │ CustomResourceDefinition │
│ directpvnodes.directpv.min.io        │ CustomResourceDefinition │
│ directpvinitrequests.directpv.min.io │ CustomResourceDefinition │
│ directpv-min-io                      │ CSIDriver                │
│ directpv-min-io                      │ StorageClass             │
│ node-server                          │ Daemonset                │
│ controller                           │ Deployment               │
└──────────────────────────────────────┴──────────────────────────┘

DirectPV installed successfully
# Show installation info
[root@Kubernetes1 ~]# kubectl directpv info
┌───────────────┬──────────┬───────────┬─────────┬────────┐
│ NODE          │ CAPACITY │ ALLOCATED │ VOLUMES │ DRIVES │
├───────────────┼──────────┼───────────┼─────────┼────────┤
│ • kubernetes2 │ -        │ -         │ -       │ -      │
│ • kubernetes3 │ -        │ -         │ -       │ -      │
│ • kubernetes4 │ -        │ -         │ -       │ -      │
│ • kubernetes5 │ -        │ -         │ -       │ -      │
│ • kubernetes6 │ -        │ -         │ -       │ -      │
└───────────────┴──────────┴───────────┴─────────┴────────┘
# Discover the drives that can be added for volume scheduling
[root@Kubernetes1 ~]# kubectl directpv discover

 Discovered node 'kubernetes2' ✔
 Discovered node 'kubernetes3' ✔
 Discovered node 'kubernetes4' ✔
 Discovered node 'kubernetes5' ✔
 Discovered node 'kubernetes6' ✔

┌─────────────────────┬─────────────┬───────┬────────┬────────────┬──────┬───────────┬─────────────┐
│ ID                  │ NODE        │ DRIVE │ SIZE   │ FILESYSTEM │ MAKE │ AVAILABLE │ DESCRIPTION │
├─────────────────────┼─────────────┼───────┼────────┼────────────┼──────┼───────────┼─────────────┤
│ 253:16$d2uMWDyPb... │ kubernetes2 │ vdb   │ 20 GiB │ -          │ -    │ YES       │ -           │
│ 253:16$ovvgHRyMY... │ kubernetes3 │ vdb   │ 20 GiB │ -          │ -    │ YES       │ -           │
│ 253:16$xDrmDCNCN... │ kubernetes4 │ vdb   │ 20 GiB │ -          │ -    │ YES       │ -           │
│ 253:16$RGTqsADLo... │ kubernetes5 │ vdb   │ 20 GiB │ -          │ -    │ YES       │ -           │
│ 253:16$t1YI26qa6... │ kubernetes6 │ vdb   │ 20 GiB │ -          │ -    │ YES       │ -           │
└─────────────────────┴─────────────┴───────┴────────┴────────────┴──────┴───────────┴─────────────┘

Generated 'drives.yaml' successfully.
[root@Kubernetes1 ~]# kubectl directpv init drives.yaml --dangerous

 ███████████████████████████████████████████████████████████████████████████ 100%

 Processed initialization request '96b43a89-b667-4035-a7a3-0238e7a76920' for node 'kubernetes2' ✔
 Processed initialization request '084444ef-02a4-405f-afb7-587bc213e559' for node 'kubernetes3' ✔
 Processed initialization request 'e9b0afa7-e5fa-4b48-8a11-afd8780fcd56' for node 'kubernetes4' ✔
 Processed initialization request '5cfb8367-6bc1-48ae-b2b1-dde36fac919a' for node 'kubernetes5' ✔
 Processed initialization request 'e28ac09c-8dd5-46e6-bf7b-5f2b81ed237a' for node 'kubernetes6' ✔

┌──────────────────────────────────────┬─────────────┬───────┬─────────┐
│ REQUEST_ID                           │ NODE        │ DRIVE │ MESSAGE │
├──────────────────────────────────────┼─────────────┼───────┼─────────┤
│ 96b43a89-b667-4035-a7a3-0238e7a76920 │ kubernetes2 │ vdb   │ Success │
│ 084444ef-02a4-405f-afb7-587bc213e559 │ kubernetes3 │ vdb   │ Success │
│ e9b0afa7-e5fa-4b48-8a11-afd8780fcd56 │ kubernetes4 │ vdb   │ Success │
│ 5cfb8367-6bc1-48ae-b2b1-dde36fac919a │ kubernetes5 │ vdb   │ Success │
│ e28ac09c-8dd5-46e6-bf7b-5f2b81ed237a │ kubernetes6 │ vdb   │ Success │
└──────────────────────────────────────┴─────────────┴───────┴─────────┘
# List the drives that have been added
[root@Kubernetes1 ~]# kubectl directpv list drives
┌─────────────┬──────┬──────┬────────┬────────┬─────────┬────────┐
│ NODE        │ NAME │ MAKE │ SIZE   │ FREE   │ VOLUMES │ STATUS │
├─────────────┼──────┼──────┼────────┼────────┼─────────┼────────┤
│ kubernetes2 │ vdb  │      │ 20 GiB │ 20 GiB │ -       │ Ready  │
│ kubernetes3 │ vdb  │      │ 20 GiB │ 20 GiB │ -       │ Ready  │
│ kubernetes4 │ vdb  │      │ 20 GiB │ 20 GiB │ -       │ Ready  │
│ kubernetes5 │ vdb  │      │ 20 GiB │ 20 GiB │ -       │ Ready  │
│ kubernetes6 │ vdb  │      │ 20 GiB │ 20 GiB │ -       │ Ready  │
└─────────────┴──────┴──────┴────────┴────────┴─────────┴────────┘
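Before running the test below, the DirectPV components themselves can be verified; the namespace and workload names come from the install output above:

# node-server is a DaemonSet (one pod per node with drives), controller is a Deployment
[root@Kubernetes1 ~]# kubectl -n directpv get pods -o wide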
3、Test

Note carefully: the default reclaim policy of the directpv-min-io StorageClass is Delete, and changing it to Retain on the StorageClass itself does not take effect (directpv does not appear to support overriding it there). Manually changing the reclaim policy on an individual PV, however, does take effect.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1
spec:
  storageClassName: directpv-min-io # StorageClass used by directpv, created automatically when directpv is installed
  accessModes:
    - ReadWriteOnce # directpv does not support multi-node write access modes
  resources:
    requests:
      storage: 1024Mi # requested PV size
[root@Kubernetes1 ~]# kubectl apply -f pv.yaml 
persistentvolumeclaim/pvc1 created
[root@Kubernetes1 ~]# kubectl get pvc
NAME   STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS      AGE
pvc1   Pending                                      directpv-min-io   9s
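The Pending status here is expected rather than an error: the directpv-min-io StorageClass, like most node-local CSI drivers, is assumed to use the WaitForFirstConsumer volume binding mode, so the PVC only binds once a Pod that consumes it is scheduled. This can be checked with:

# Prints the binding mode of the StorageClass; WaitForFirstConsumer delays binding until a Pod uses the PVC
[root@Kubernetes1 ~]# kubectl get sc directpv-min-io -o jsonpath='{.volumeBindingMode}{"\n"}'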
apiVersion: v1
kind: Pod
metadata:
  name: pod-minio
spec:
  volumes:
  - name: minio-pvc
    persistentVolumeClaim:
      claimName: pvc1 # use the PVC created above: pvc1
  containers:
  - image: busybox:1.28
    name: box
    args: [/bin/sh, -c, while true; do echo "$(date)" >> /tmp/1.log && sleep 1000; done]
    resources:
      requests:
        cpu: 100m
    volumeMounts:
    - name: minio-pvc
      mountPath: /tmp # mount the PV at /tmp inside the container
[root@Kubernetes1 ~]# kubectl apply -f pod-pv.yaml 
pod/pod-minio created
[root@Kubernetes1 ~]# kubectl get pods,pvc,pv
NAME                                READY   STATUS    RESTARTS       AGE
pod/deploy-nginx-85f8fcc944-8mctq   1/1     Running   1 (29h ago)    29h
pod/deploy-nginx-85f8fcc944-kvh8q   1/1     Running   1 (29h ago)    29h
pod/deploy-nginx-85f8fcc944-r6s4l   1/1     Running   2 (152m ago)   29h
pod/pod-minio                       1/1     Running   0              13s

NAME                         STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
persistentvolumeclaim/pvc1   Bound    pvc-b1bd3d3f-f617-4b72-a915-46ca8fb205fa   1Gi        RWO            directpv-min-io   105s

NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM          STORAGECLASS      REASON   AGE
persistentvolume/pvc-b1bd3d3f-f617-4b72-a915-46ca8fb205fa   1Gi        RWO            Delete           Bound    default/pvc1   directpv-min-io            13s
# Manually change the PV reclaim policy to Retain
[root@Kubernetes1 ~]# kubectl edit persistentvolume/pvc-b1bd3d3f-f617-4b72-a915-46ca8fb205fa 
persistentvolume/pvc-b1bd3d3f-f617-4b72-a915-46ca8fb205fa edited
[root@Kubernetes1 ~]# kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM          STORAGECLASS      REASON   AGE
pvc-b1bd3d3f-f617-4b72-a915-46ca8fb205fa   1Gi        RWO            Retain           Bound    default/pvc1   directpv-min-io            96s
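Instead of opening an interactive editor, the same change can be applied non-interactively with kubectl patch; a sketch using the PV name generated above:

[root@Kubernetes1 ~]# kubectl patch pv pvc-b1bd3d3f-f617-4b72-a915-46ca8fb205fa -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'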
# Test: after deleting the PVC, the PV still exists (now in the Released state)
[root@Kubernetes1 ~]# kubectl get pv,pvc
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM          STORAGECLASS      REASON   AGE
persistentvolume/pvc-b1bd3d3f-f617-4b72-a915-46ca8fb205fa   1Gi        RWO            Retain           Released   default/pvc1   directpv-min-io            3m33s
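Because the reclaim policy is Retain, the PV is now Released but still references the deleted claim, so a new PVC cannot bind to it as-is. A hedged sketch of making it Available again by clearing the claimRef (whether reusing the retained data is appropriate depends on the workload):

# Remove the stale claimRef so the Released PV becomes Available for a new claim
[root@Kubernetes1 ~]# kubectl patch pv pvc-b1bd3d3f-f617-4b72-a915-46ca8fb205fa --type=json -p '[{"op":"remove","path":"/spec/claimRef"}]'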