Installing Kubernetes (k8s) v1.26 from Binaries on openEuler 22.09

This document describes how to deploy a highly available Kubernetes cluster (for k8s v1.26) in binary mode on openEuler 22.09.

Note: all operations in this document are performed as root.

1 Deployment Environment

1.1 Hardware and Software Configuration

1. Host inventory

This deployment uses five Huawei Cloud ECS instances, summarized in the table below.

Hostname        IP address         Role           Software
k8s-master01    192.168.218.100    master node    kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, nfs-client, nginx
k8s-master02    192.168.218.101    master node    kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, nfs-client, nginx
k8s-master03    192.168.218.102    master node    kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, nfs-client, nginx
k8s-node01      192.168.218.103    worker node    kubelet, kube-proxy, nfs-client, nginx
k8s-node02      192.168.218.104    worker node    kubelet, kube-proxy, nfs-client, nginx

2. Main software list

The main software involved and their versions are listed below.

Software      Version
kernel        5.10.0
openEuler     22.09
etcd          v3.5.7
containerd    1.6.16
cfssl         v1.6.3
cni           v1.2.0
crictl        v1.26.0

3. Networking

k8s hosts: 192.168.218.0/24

Service network: 10.96.0.0/12

Pod network: 172.16.0.0/12

1.2 Basic System Configuration of k8s Nodes

1.2.1 IP Configuration

According to the IP addresses in the host table above, configure each k8s node's IP by modifying or adding the following options in the network interface configuration file. master01 is shown as an example; configure all other nodes the same way.

[root@k8s-master01 ~]# vi /etc/sysconfig/network-scripts/ifcfg-enp4s3 
TYPE=Ethernet
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.218.100
PREFIX=24
GATEWAY=192.168.218.1
DNS1=8.8.8.8

1.2.2 Activate the Network Connection

[root@k8s-master01 ~]# nmcli connection reload 
[root@k8s-master01 ~]# nmcli connection up enp4s3 

1.2.3 Set the Hostname

After setting the hostname, exit the current shell and open a new one for the change to take effect.

[root@k8s-master01 ~]# hostnamectl hostname k8s-master01
[root@k8s-master01 ~]# exit

1.2.4 Configure the yum Repositories

The openEuler 22.09 DVD ISO image ships with yum repositories already configured, so no changes are made here. If they are missing, the following can be used as a reference.

[root@k8s-master01 ~]# vim /etc/yum.repos.d/openEuler.repo 

[OS]
name=OS
baseurl=http://repo.openeuler.org/openEuler-22.09/OS/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://repo.openeuler.org/openEuler-22.09/OS/$basearch/RPM-GPG-KEY-openEuler

[everything]
name=everything
baseurl=http://repo.openeuler.org/openEuler-22.09/everything/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://repo.openeuler.org/openEuler-22.09/everything/$basearch/RPM-GPG-KEY-openEuler

[EPOL]
name=EPOL
baseurl=http://repo.openeuler.org/openEuler-22.09/EPOL/main/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://repo.openeuler.org/openEuler-22.09/OS/$basearch/RPM-GPG-KEY-openEuler

[debuginfo]
name=debuginfo
baseurl=http://repo.openeuler.org/openEuler-22.09/debuginfo/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://repo.openeuler.org/openEuler-22.09/debuginfo/$basearch/RPM-GPG-KEY-openEuler

[source]
name=source
baseurl=http://repo.openeuler.org/openEuler-22.09/source/
enabled=1
gpgcheck=1
gpgkey=http://repo.openeuler.org/openEuler-22.09/source/RPM-GPG-KEY-openEuler

[update]
name=update
baseurl=http://repo.openeuler.org/openEuler-22.09/update/$basearch/
enabled=1
gpgcheck=1
gpgkey=http://repo.openeuler.org/openEuler-22.09/OS/$basearch/RPM-GPG-KEY-openEuler

1.2.5 Install Required Tools

For online deployment, run the following:

[root@k8s-master01 ~]# dnf -y install  wget psmisc vim net-tools nfs-utils telnet device-mapper-persistent-data lvm2 git tar curl bash-completion

For offline deployment, download the packages on another openEuler 22.09 host with Internet access as follows, share them with the other hosts, and build a local yum repository from them. The rest of this document assumes the offline approach.

# Install the required tools
dnf -y install createrepo wget
 
# Download all packages with their full dependency trees
mkdir -p /data/openEuler2203/
cd /data/openEuler2203/
repotrack createrepo wget psmisc vim net-tools nfs-utils telnet device-mapper-persistent-data lvm2 git tar curl gcc keepalived haproxy bash-completion chrony sshpass ipvsadm ipset sysstat conntrack libseccomp bash-completion make automake autoconf libtool
 
# Remove the downloaded libseccomp packages
rm -rf libseccomp-*.rpm
 
# Download libseccomp
wget https://repo.huaweicloud.com/openeuler/openEuler-20.03-LTS-SP1/OS/x86_64/Packages/libseccomp-2.5.0-3.oe1.x86_64.rpm
 
# Generate the yum repository metadata
createrepo -u -d /data/openEuler2203/
 
# Copy the downloaded packages to the offline machines
scp -r /data/openEuler2203/ root@192.168.218.100:
scp -r /data/openEuler2203/ root@192.168.218.101:
scp -r /data/openEuler2203/ root@192.168.218.102:
scp -r /data/openEuler2203/ root@192.168.218.103:
scp -r /data/openEuler2203/ root@192.168.218.104:
 
# Create the repo configuration file on the offline machines
mkdir /etc/yum.repos.d/backup
mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/backup
cat > /etc/yum.repos.d/local.repo  << EOF 
[base]
name=base software
baseurl=file:///root/openEuler2203/
gpgcheck=0
enabled=1
EOF
# Install the downloaded packages
dnf clean all
dnf makecache
dnf -y install /root/openEuler2203/* --skip-broken
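
Before continuing, it is worth confirming that the local repository is usable (a quick check; the repo id base comes from the local.repo file above):

# The base repository should be listed with packages available
dnf repolist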

1.2.6 Selectively Download Tools

Run the following shell script to download the required tools. This only needs to be done on a host with Internet access; copy the files to the other hosts when needed.

[root@k8s-master01 ~]# vim download.sh 
#!/bin/bash
 
# Release pages for checking versions:
# 
# https://github.com/containernetworking/plugins/releases/
# https://github.com/containerd/containerd/releases/
# https://github.com/kubernetes-sigs/cri-tools/releases/
# https://github.com/Mirantis/cri-dockerd/releases/
# https://github.com/etcd-io/etcd/releases/
# https://github.com/cloudflare/cfssl/releases/
# https://github.com/kubernetes/kubernetes/tree/master/CHANGELOG
# https://download.docker.com/linux/static/stable/x86_64/
# https://github.com/opencontainers/runc/releases/
 
cni_plugins='v1.2.0'
cri_containerd_cni='1.6.16'
crictl='v1.26.0'
cri_dockerd='0.3.1'
etcd='v3.5.7'
cfssl='1.6.3'
cfssljson='1.6.3'
kubernetes_server='v1.26.1'
docker_v='20.10.23'
runc='1.1.4'
 
if [ ! -f "runc.amd64" ];then
wget https://ghproxy.com/https://github.com/opencontainers/runc/releases/download/v${runc}/runc.amd64
else
echo "文件存在"
fi
 
if [ ! -f "docker-${docker_v}.tgz" ];then
wget https://download.docker.com/linux/static/stable/x86_64/docker-${docker_v}.tgz 
else
echo "文件存在"
fi
 
if [ ! -f "cni-plugins-linux-amd64-${cni_plugins}.tgz" ];then
wget https://ghproxy.com/https://github.com/containernetworking/plugins/releases/download/${cni_plugins}/cni-plugins-linux-amd64-${cni_plugins}.tgz
else
echo "文件存在"
fi
 
if [ ! -f "cri-containerd-cni-${cri_containerd_cni}-linux-amd64.tar.gz" ];then
wget https://ghproxy.com/https://github.com/containerd/containerd/releases/download/v${cri_containerd_cni}/cri-containerd-cni-${cri_containerd_cni}-linux-amd64.tar.gz
else
echo "文件存在"
fi
 
if [ ! -f "crictl-${crictl}-linux-amd64.tar.gz" ];then
wget https://ghproxy.com/https://github.com/kubernetes-sigs/cri-tools/releases/download/${crictl}/crictl-${crictl}-linux-amd64.tar.gz
else
echo "文件存在"
fi
 
if [ ! -f "cri-dockerd-${cri_dockerd}.amd64.tgz" ];then
wget https://ghproxy.com/https://github.com/Mirantis/cri-dockerd/releases/download/v${cri_dockerd}/cri-dockerd-${cri_dockerd}.amd64.tgz
else
echo "文件存在"
fi
 
if [ ! -f "kubernetes-server-linux-amd64.tar.gz" ];then
wget https://dl.k8s.io/${kubernetes_server}/kubernetes-server-linux-amd64.tar.gz
else
echo "文件存在"
fi
 
if [ ! -f "etcd-${etcd}-linux-amd64.tar.gz" ];then
wget https://ghproxy.com/https://github.com/etcd-io/etcd/releases/download/${etcd}/etcd-${etcd}-linux-amd64.tar.gz
else
echo "文件存在"
fi
 
if [ ! -f "cfssl" ];then
wget https://ghproxy.com/https://github.com/cloudflare/cfssl/releases/download/v${cfssl}/cfssl_${cfssl}_linux_amd64 -O cfssl
else
echo "文件存在"
fi
 
if [ ! -f "cfssljson" ];then
wget https://ghproxy.com/https://github.com/cloudflare/cfssl/releases/download/v${cfssljson}/cfssljson_${cfssljson}_linux_amd64 -O cfssljson
else
echo "文件存在"
fi
 
if [ ! -f "helm-canary-linux-amd64.tar.gz" ];then
wget https://get.helm.sh/helm-canary-linux-amd64.tar.gz
else
echo "文件存在"
fi
 
if [ ! -f "nginx-1.22.1.tar.gz" ];then
wget http://nginx.org/download/nginx-1.22.1.tar.gz
else
echo "文件存在"
fi
 
if [ ! -f "calico.yaml" ];then
curl https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico.yaml -O
else
echo "文件存在"
fi
 
if [ ! -f "get_helm.sh" ];then
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
else
echo "文件存在"
fi

1.2.7 Disable firewalld and SELinux

Run on all hosts.

[root@k8s-master01 ~]# systemctl disable --now firewalld
[root@k8s-master01 ~]# setenforce 0
[root@k8s-master01 ~]# sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config

1.2.8 Disable Swap

Run on all hosts.

[root@k8s-master01 ~]# sed -ri 's/.*swap.*/#&/' /etc/fstab
[root@k8s-master01 ~]# swapoff -a && sysctl -w vm.swappiness=0
[root@k8s-master01 ~]# cat /etc/fstab
……
#/dev/mapper/openeuler-swap none                    swap    defaults        0 0

1.2.9 Network Configuration

Run on all hosts.

[root@k8s-master01 ~]# vim /etc/NetworkManager/conf.d/calico.conf 
[keyfile]
unmanaged-devices=interface-name:cali*;interface-name:tunl*

[root@k8s-master01 ~]# systemctl restart NetworkManager
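
The cali* and tunl* interfaces only appear once Calico is deployed in a later step; after that, you can confirm NetworkManager is leaving them unmanaged:

# Calico interfaces should be listed as unmanaged
[root@k8s-master01 ~]# nmcli device status | grep -E 'cali|tunl'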

1.2.10 Time Synchronization

openEuler 22.09 installs and starts the chrony time synchronization service by default, so only the time server and the clients need to be configured.

1. Configure the server; master01 acts as the time server here

[root@k8s-master01 ~]# vim /etc/chrony.conf 
pool pool.ntp.org iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
allow 192.168.218.0/24
local stratum 10
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony

[root@k8s-master01 ~]# systemctl restart chronyd ; systemctl enable chronyd

2. Configure the clients

All other nodes are clients and all must be configured.

[root@k8s-master02 ~]# vim /etc/chrony.conf 
pool 192.168.218.100 iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony

[root@k8s-master02 ~]# systemctl restart chronyd ; systemctl enable chronyd

3. Verify on a client

[root@k8s-master02 ~]# chronyc sources
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^* 192.168.218.100               3   6    17    35   +428ns[-4918ns] +/-   55ms

1.2.11 Configure ulimit

Run on all hosts.

[root@k8s-master01 ~]# ulimit -SHn 65535
# Edit the file and append the following at the end (the file itself contains help comments)
[root@k8s-master01 ~]# vim /etc/security/limits.conf
* soft nofile 655360
* hard nofile 655360
* soft nproc 655350
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited

1.2.12 Configure Passwordless SSH Login

This step is performed only on master01.

[root@k8s-master01 ~]# ssh-keygen -f /root/.ssh/id_rsa -P ''
[root@k8s-master01 ~]# export IP="192.168.218.100 192.168.218.101 192.168.218.102 192.168.218.103 192.168.218.104"

# Replace the password below with your hosts' password; all hosts must use the same password
[root@k8s-master01 ~]# export SSHPASS=mima1234
[root@k8s-master01 ~]# for HOST in $IP;do sshpass -e ssh-copy-id -o StrictHostKeyChecking=no $HOST; done
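
To verify passwordless login before moving on, a quick loop over the same host list can help (BatchMode makes ssh fail instead of prompting for a password):

# Each host should print its hostname without a password prompt
[root@k8s-master01 ~]# for HOST in $IP; do ssh -o BatchMode=yes $HOST hostname; done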

1.2.13 Install ipvsadm

IPVS (IP Virtual Server) is a load-balancing technology that runs inside LVS (Linux Virtual Server). ipvsadm is used to inspect virtual server state and to troubleshoot, for example backend assignment and connection counts.

Run on all nodes.

# Create the ipvs.conf configuration file
[root@k8s-master01 ~]# vim /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip

# Restart the service
[root@k8s-master01 ~]# systemctl restart systemd-modules-load.service

# Check the kernel modules
[root@k8s-master01 ~]# lsmod | grep -e ip_vs -e nf_conntrack
ip_vs_sh               16384  0
ip_vs_wrr              16384  0
ip_vs_rr               16384  0
ip_vs                 188416  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          180224  3 nf_nat,nft_ct,ip_vs
nf_defrag_ipv6         24576  2 nf_conntrack,ip_vs
nf_defrag_ipv4         16384  1 nf_conntrack
libcrc32c              16384  4 nf_conntrack,nf_nat,nf_tables,ip_vs

1.2.14 Tune Kernel Parameters

Run on all nodes.

[root@k8s-master01 ~]# vim /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
 
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
 
net.ipv6.conf.all.disable_ipv6 = 0
net.ipv6.conf.default.disable_ipv6 = 0
net.ipv6.conf.lo.disable_ipv6 = 0
net.ipv6.conf.all.forwarding = 1

# Apply the system configuration
sysctl --system
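
To spot-check that the settings took effect (the net.bridge.* keys only resolve once the br_netfilter module is loaded, which section 2.1.4 takes care of):

# Verify a couple of representative parameters
sysctl net.ipv4.ip_forward net.core.somaxconn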

1.2.15 Configure Local hosts Resolution

Run on all nodes.

[root@k8s-master01 ~]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.218.100 k8s-master01
192.168.218.101 k8s-master02
192.168.218.102 k8s-master03
192.168.218.103 k8s-node01
192.168.218.104 k8s-node02

2 Installing Basic k8s Components

Since v1.24.0, Kubernetes no longer ships the dockershim, so Docker is not supported as a runtime out of the box; Containerd is the recommended runtime instead.

Complete the following on all hosts.

2.1 Install Containerd

2.1.1 Download or Copy the Packages

Copy the previously downloaded cni-plugins-linux-amd64-v1.2.0.tgz, cri-containerd-cni-1.6.16-linux-amd64.tar.gz, and crictl-v1.26.0-linux-amd64.tar.gz packages to every node.

scp cni-plugins-linux-amd64-v1.2.0.tgz root@192.168.218.100:
scp cni-plugins-linux-amd64-v1.2.0.tgz root@192.168.218.101:
scp cni-plugins-linux-amd64-v1.2.0.tgz root@192.168.218.102:
scp cni-plugins-linux-amd64-v1.2.0.tgz root@192.168.218.103:
scp cni-plugins-linux-amd64-v1.2.0.tgz root@192.168.218.104:

scp cri-containerd-cni-1.6.16-linux-amd64.tar.gz root@192.168.218.100:
scp cri-containerd-cni-1.6.16-linux-amd64.tar.gz root@192.168.218.101:
scp cri-containerd-cni-1.6.16-linux-amd64.tar.gz root@192.168.218.102:
scp cri-containerd-cni-1.6.16-linux-amd64.tar.gz root@192.168.218.103:
scp cri-containerd-cni-1.6.16-linux-amd64.tar.gz root@192.168.218.104:


scp crictl-v1.26.0-linux-amd64.tar.gz root@192.168.218.100:
scp crictl-v1.26.0-linux-amd64.tar.gz root@192.168.218.101:
scp crictl-v1.26.0-linux-amd64.tar.gz root@192.168.218.102:
scp crictl-v1.26.0-linux-amd64.tar.gz root@192.168.218.103:
scp crictl-v1.26.0-linux-amd64.tar.gz root@192.168.218.104:

2.1.2 Extract the Packages

[root@k8s-master01 ~]# mkdir -p /etc/cni/net.d /opt/cni/bin

# Extract the cni-plugins binaries
[root@k8s-master01 ~]# tar xf cni-plugins-linux-amd64-v1.2.0.tgz -C /opt/cni/bin/

# Extract cri-containerd
[root@k8s-master01 ~]# tar -xzf cri-containerd-cni-*-linux-amd64.tar.gz -C /

2.1.3 Create the containerd Service

[root@k8s-master01 ~]# cat > /etc/systemd/system/containerd.service <<EOF
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target
 
[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
TasksMax=infinity
OOMScoreAdjust=-999
 
[Install]
WantedBy=multi-user.target
EOF

2.1.4 Configure Kernel Parameters Required by Containerd

[root@k8s-master01 ~]# cat <<EOF | tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables  = 1
net.ipv4.ip_forward                 = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
 

# Apply the kernel settings
[root@k8s-master01 ~]# sysctl --system
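
Note that the net.bridge.* keys above only exist once the br_netfilter kernel module is loaded, and the containerd unit created earlier only modprobes overlay. The following sketch loads both modules now and persists them across reboots (the file name containerd.conf is a convention, not a requirement):

[root@k8s-master01 ~]# cat <<EOF | tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
[root@k8s-master01 ~]# modprobe overlay
[root@k8s-master01 ~]# modprobe br_netfilter
[root@k8s-master01 ~]# sysctl --system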

2.1.5 Create the Containerd Configuration File

# Generate the default configuration file
[root@k8s-master01 ~]# mkdir -p /etc/containerd
[root@k8s-master01 ~]# containerd config default | tee /etc/containerd/config.toml
 
# Modify the Containerd configuration
[root@k8s-master01 ~]# sed -i "s#SystemdCgroup\ \=\ false#SystemdCgroup\ \=\ true#g" /etc/containerd/config.toml
[root@k8s-master01 ~]# cat /etc/containerd/config.toml | grep SystemdCgroup
 
[root@k8s-master01 ~]# sed -i "s#registry.k8s.io#registry.cn-hangzhou.aliyuncs.com/chenby#g" /etc/containerd/config.toml
[root@k8s-master01 ~]# cat /etc/containerd/config.toml | grep sandbox_image
 
[root@k8s-master01 ~]# sed -i "s#config_path\ \=\ \"\"#config_path\ \=\ \"/etc/containerd/certs.d\"#g" /etc/containerd/config.toml
[root@k8s-master01 ~]# cat /etc/containerd/config.toml | grep certs.d
 
[root@k8s-master01 ~]# mkdir /etc/containerd/certs.d/docker.io -pv
 
[root@k8s-master01 ~]# cat > /etc/containerd/certs.d/docker.io/hosts.toml << EOF
server = "https://docker.io"
[host."https://hub-mirror.c.163.com"]
  capabilities = ["pull", "resolve"]
EOF

2.1.6 Start containerd and Enable It at Boot

[root@k8s-master01 ~]# systemctl daemon-reload
[root@k8s-master01 ~]# systemctl enable --now containerd

[root@k8s-master01 ~]# systemctl restart containerd

2.1.7 Point the crictl Client at the Runtime Endpoint

# Extract
[root@k8s-master01 ~]# tar xf crictl-v1.26.0-linux-amd64.tar.gz -C /usr/bin/

# Generate the configuration file
[root@k8s-master01 ~]# cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
 
# Test
[root@k8s-master01 ~]# systemctl restart containerd
[root@k8s-master01 ~]# crictl info
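
Beyond crictl info, a simple smoke test is to pull a small image through the registry mirror configured above (this assumes the mirror is reachable; any small public image works):

[root@k8s-master01 ~]# crictl pull docker.io/library/busybox:latest
[root@k8s-master01 ~]# crictl images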

2.2 Download and Install k8s and etcd

This section is performed only on master01.

2.2.1 Copy and Extract the Packages

# Copy the previously downloaded packages to master01
scp kubernetes-server-linux-amd64.tar.gz root@192.168.218.100:
scp etcd-v3.5.7-linux-amd64.tar.gz root@192.168.218.100:

# Extract the packages on master01
[root@k8s-master01 ~]# tar -xf kubernetes-server-linux-amd64.tar.gz  --strip-components=3 -C /usr/local/bin kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}
[root@k8s-master01 ~]# tar -xf etcd*.tar.gz && mv etcd-*/etcd /usr/local/bin/ && mv etcd-*/etcdctl /usr/local/bin/

# List /usr/local/bin; there should be 17 files
[root@k8s-master01 ~]# ls /usr/local/bin/
containerd               crictl       etcdctl                  kube-proxy
containerd-shim          critest      kube-apiserver           kube-scheduler
containerd-shim-runc-v1  ctd-decoder  kube-controller-manager
containerd-shim-runc-v2  ctr          kubectl
containerd-stress        etcd         kubelet

2.2.2 Check Versions

[root@k8s-master01 ~]# kubelet --version
Kubernetes v1.26.1
[root@k8s-master01 ~]# etcdctl version
etcdctl version: 3.5.7
API version: 3.5

2.2.3 Distribute the Components to the Other k8s Nodes

[root@k8s-master01 ~]# Master='k8s-master02 k8s-master03'
[root@k8s-master01 ~]# Work='k8s-node01 k8s-node02'
 
[root@k8s-master01 ~]# for NODE in $Master; do echo $NODE; scp /usr/local/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy} $NODE:/usr/local/bin/; scp /usr/local/bin/etcd* $NODE:/usr/local/bin/; done
 
[root@k8s-master01 ~]# for NODE in $Work; do     scp /usr/local/bin/kube{let,-proxy} $NODE:/usr/local/bin/ ; done
 
[root@k8s-master01 ~]# mkdir -p /opt/cni/bin

2.3 Create Certificate Request and Manifest Files

[root@k8s-master01 ~]# mkdir pki
[root@k8s-master01 ~]# cd pki

# Create admin-csr.json
[root@k8s-master01 ~]# cat > admin-csr.json << EOF 
{
  "CN": "admin",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:masters",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF
 
# Create ca-config.json
[root@k8s-master01 ~]# cat > ca-config.json << EOF 
{
  "signing": {
    "default": {
      "expiry": "876000h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "876000h"
      }
    }
  }
}
EOF
 
# Create etcd-ca-csr.json
[root@k8s-master01 ~]# cat > etcd-ca-csr.json  << EOF 
{
  "CN": "etcd",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "etcd",
      "OU": "Etcd Security"
    }
  ],
  "ca": {
    "expiry": "876000h"
  }
}
EOF

# Create front-proxy-ca-csr.json
[root@k8s-master01 ~]# cat > front-proxy-ca-csr.json  << EOF 
{
  "CN": "kubernetes",
  "key": {
     "algo": "rsa",
     "size": 2048
  },
  "ca": {
    "expiry": "876000h"
  }
}
EOF

# Create kubelet-csr.json
[root@k8s-master01 ~]# cat > kubelet-csr.json  << EOF 
{
  "CN": "system:node:\$NODE",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "system:nodes",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF

# Create manager-csr.json
[root@k8s-master01 ~]# cat > manager-csr.json << EOF 
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-controller-manager",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF
 
# Create apiserver-csr.json
[root@k8s-master01 ~]# cat > apiserver-csr.json << EOF 
{
  "CN": "kube-apiserver",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "Kubernetes",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF
 
# Create ca-csr.json
[root@k8s-master01 ~]# cat > ca-csr.json   << EOF 
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "Kubernetes",
      "OU": "Kubernetes-manual"
    }
  ],
  "ca": {
    "expiry": "876000h"
  }
}
EOF

# Create etcd-csr.json
[root@k8s-master01 ~]# cat > etcd-csr.json << EOF 
{
  "CN": "etcd",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "etcd",
      "OU": "Etcd Security"
    }
  ]
}
EOF
 
# Create front-proxy-client-csr.json
[root@k8s-master01 ~]# cat > front-proxy-client-csr.json  << EOF 
{
  "CN": "front-proxy-client",
  "key": {
     "algo": "rsa",
     "size": 2048
  }
}
EOF
 
# Create kube-proxy-csr.json
[root@k8s-master01 ~]# cat > kube-proxy-csr.json  << EOF 
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-proxy",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF
 
# Create scheduler-csr.json
[root@k8s-master01 ~]# cat > scheduler-csr.json << EOF 
{
  "CN": "system:kube-scheduler",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-scheduler",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF

 
[root@k8s-master01 ~]# cd ..
[root@k8s-master01 ~]# mkdir bootstrap
[root@k8s-master01 ~]# cd bootstrap

# Create bootstrap.secret.yaml
[root@k8s-master01 bootstrap]# cat > bootstrap.secret.yaml << EOF 
apiVersion: v1
kind: Secret
metadata:
  name: bootstrap-token-c8ad9c
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  description: "The default bootstrap token generated by 'kubelet '."
  token-id: c8ad9c
  token-secret: 2e4d610cf3e7426e
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
  auth-extra-groups:  system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress
 
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubelet-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-bootstrapper
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-autoapprove-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-autoapprove-certificate-rotation
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:nodes
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kube-apiserver
EOF
 
 
[root@k8s-master01 ~]# cd ..
[root@k8s-master01 ~]# mkdir coredns
[root@k8s-master01 ~]# cd coredns

# Create coredns.yaml
[root@k8s-master01 coredns]# cat > coredns.yaml << EOF 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
  - apiGroups:
    - ""
    resources:
    - endpoints
    - services
    - pods
    - namespaces
    verbs:
    - list
    - watch
  - apiGroups:
    - discovery.k8s.io
    resources:
    - endpointslices
    verbs:
    - list
    - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
          lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf {
          max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. Default is 1.
  # 2. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      affinity:
         podAntiAffinity:
           preferredDuringSchedulingIgnoredDuringExecution:
           - weight: 100
             podAffinityTerm:
               labelSelector:
                 matchExpressions:
                   - key: k8s-app
                     operator: In
                     values: ["kube-dns"]
               topologyKey: kubernetes.io/hostname
      containers:
      - name: coredns
        image: registry.cn-hangzhou.aliyuncs.com/chenby/coredns:v1.10.0
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.96.0.10 
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
EOF
 
 
[root@k8s-master01 ~]# cd ..
[root@k8s-master01 ~]# mkdir metrics-server
[root@k8s-master01 ~]# cd metrics-server

# Create metrics-server.yaml
[root@k8s-master01 metrics-server]# cat > metrics-server.yaml << EOF 
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: kube-system
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      hostNetwork: true
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls
        - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem # change to front-proxy-ca.crt for kubeadm
        - --requestheader-username-headers=X-Remote-User
        - --requestheader-group-headers=X-Remote-Group
        - --requestheader-extra-headers-prefix=X-Remote-Extra-
        image: registry.cn-hangzhou.aliyuncs.com/chenby/metrics-server:v0.5.2
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
        - name: ca-ssl
          mountPath: /etc/kubernetes/pki
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
      - name: ca-ssl
        hostPath:
          path: /etc/kubernetes/pki
 
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: kube-system
  version: v1beta1
  versionPriority: 100
EOF

3 Generating Certificates

3.1 Tool Preparation

Copy the previously downloaded tools to master01.

# Copy the downloaded files to master01
scp cfssl root@192.168.218.100:
scp cfssljson root@192.168.218.100:


# On master01, copy the files into place and make them executable
[root@k8s-master01 ~]# cp cfssl /usr/local/bin/cfssl
[root@k8s-master01 ~]# cp cfssljson /usr/local/bin/cfssljson
[root@k8s-master01 ~]# chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson
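
Confirm the binaries work; cfssl should report version 1.6.3:

[root@k8s-master01 ~]# cfssl version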

3.2 Generate etcd Certificates

Unless otherwise noted, perform the following operations on all master nodes.

3.2.1 Create the Certificate Directory on All Master Nodes

mkdir -p /etc/etcd/ssl 

3.2.2 Generate the etcd Certificates on master01

[root@k8s-master01 ~]# cd pki
# Generate the etcd CA certificate and key (if you expect to scale out later, reserve a few extra IPs in the hostname list)
# If you have no IPv6, any IPv6 entries can be removed or kept
[root@k8s-master01 pki]# cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /etc/etcd/ssl/etcd-ca

[root@k8s-master01 pki]# cfssl gencert \
   -ca=/etc/etcd/ssl/etcd-ca.pem \
   -ca-key=/etc/etcd/ssl/etcd-ca-key.pem \
   -config=ca-config.json \
   -hostname=127.0.0.1,k8s-master01,k8s-master02,k8s-master03,192.168.218.100,192.168.218.101,192.168.218.102 \
   -profile=kubernetes \
   etcd-csr.json | cfssljson -bare /etc/etcd/ssl/etcd

3.2.3 Copy the Certificates to the Other Master Nodes

[root@k8s-master01 pki]# Master='k8s-master02 k8s-master03'

[root@k8s-master01 pki]# for NODE in $Master; do ssh $NODE "mkdir -p /etc/etcd/ssl"; for FILE in etcd-ca-key.pem  etcd-ca.pem  etcd-key.pem  etcd.pem; do scp /etc/etcd/ssl/${FILE} $NODE:/etc/etcd/ssl/${FILE}; done; done

3.3 Generate Kubernetes Certificates

Unless otherwise noted, perform the following operations on all master nodes; pay attention to the shell prompts.

3.3.1 Create the Certificate Directory on All Master Nodes

mkdir -p /etc/kubernetes/pki

3.3.2 Generate the Kubernetes Certificates on master01

[root@k8s-master01 pki]# cfssl gencert -initca ca-csr.json | cfssljson -bare /etc/kubernetes/pki/ca

# Generate the apiserver certificate; extra IPs are listed as reserved addresses for adding nodes later
# 10.96.0.1 is the first address of the service network

[root@k8s-master01 pki]# cfssl gencert   \
-ca=/etc/kubernetes/pki/ca.pem   \
-ca-key=/etc/kubernetes/pki/ca-key.pem   \
-config=ca-config.json   \
-hostname=10.96.0.1,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,x.oiox.cn,k.oiox.cn,l.oiox.cn,o.oiox.cn,192.168.218.100,192.168.218.101,192.168.218.102,192.168.218.103,192.168.218.104,192.168.218.105,192.168.218.106,192.168.218.107,192.168.218.108,192.168.218.109 \
-profile=kubernetes   apiserver-csr.json | cfssljson -bare /etc/kubernetes/pki/apiserver

3.3.3 Generate the apiserver Aggregation Certificate on master01

[root@k8s-master01 pki]# cfssl gencert   -initca front-proxy-ca-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-ca 
 
# The following command prints a warning, which can be ignored
[root@k8s-master01 pki]# cfssl gencert  \
-ca=/etc/kubernetes/pki/front-proxy-ca.pem   \
-ca-key=/etc/kubernetes/pki/front-proxy-ca-key.pem   \
-config=ca-config.json   \
-profile=kubernetes   front-proxy-client-csr.json | cfssljson -bare /etc/kubernetes/pki/front-proxy-client

3.3.4 Generate the controller-manager, scheduler, and admin Certificates on master01

[root@k8s-master01 pki]# cfssl gencert \
   -ca=/etc/kubernetes/pki/ca.pem \
   -ca-key=/etc/kubernetes/pki/ca-key.pem \
   -config=ca-config.json \
   -profile=kubernetes \
   manager-csr.json | cfssljson -bare /etc/kubernetes/pki/controller-manager

# Set a cluster entry; the nginx approach is used here for high availability
[root@k8s-master01 pki]# kubectl config set-cluster kubernetes \
     --certificate-authority=/etc/kubernetes/pki/ca.pem \
     --embed-certs=true \
     --server=https://127.0.0.1:8443 \
     --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
 
# Set a context entry
 
[root@k8s-master01 pki]# kubectl config set-context system:kube-controller-manager@kubernetes \
    --cluster=kubernetes \
    --user=system:kube-controller-manager \
    --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
 
# Set a user entry
 
[root@k8s-master01 pki]# kubectl config set-credentials system:kube-controller-manager \
     --client-certificate=/etc/kubernetes/pki/controller-manager.pem \
     --client-key=/etc/kubernetes/pki/controller-manager-key.pem \
     --embed-certs=true \
     --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig
 
# Set the default context
 
[root@k8s-master01 pki]# kubectl config use-context system:kube-controller-manager@kubernetes \
     --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig

 
# Generate the kube-scheduler certificate and its kubeconfig
[root@k8s-master01 pki]# cfssl gencert \
   -ca=/etc/kubernetes/pki/ca.pem \
   -ca-key=/etc/kubernetes/pki/ca-key.pem \
   -config=ca-config.json \
   -profile=kubernetes \
   scheduler-csr.json | cfssljson -bare /etc/kubernetes/pki/scheduler
 
 
[root@k8s-master01 pki]# kubectl config set-cluster kubernetes \
     --certificate-authority=/etc/kubernetes/pki/ca.pem \
     --embed-certs=true \
     --server=https://127.0.0.1:8443 \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
 
[root@k8s-master01 pki]# kubectl config set-credentials system:kube-scheduler \
     --client-certificate=/etc/kubernetes/pki/scheduler.pem \
     --client-key=/etc/kubernetes/pki/scheduler-key.pem \
     --embed-certs=true \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
 
[root@k8s-master01 pki]# kubectl config set-context system:kube-scheduler@kubernetes \
     --cluster=kubernetes \
     --user=system:kube-scheduler \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
 
[root@k8s-master01 pki]# kubectl config use-context system:kube-scheduler@kubernetes \
     --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
 
# Generate the admin certificate and its kubeconfig
[root@k8s-master01 pki]# cfssl gencert \
   -ca=/etc/kubernetes/pki/ca.pem \
   -ca-key=/etc/kubernetes/pki/ca-key.pem \
   -config=ca-config.json \
   -profile=kubernetes \
   admin-csr.json | cfssljson -bare /etc/kubernetes/pki/admin
 
 
[root@k8s-master01 pki]# kubectl config set-cluster kubernetes     \
  --certificate-authority=/etc/kubernetes/pki/ca.pem     \
  --embed-certs=true     \
  --server=https://127.0.0.1:8443     \
  --kubeconfig=/etc/kubernetes/admin.kubeconfig
 
[root@k8s-master01 pki]# kubectl config set-credentials kubernetes-admin  \
  --client-certificate=/etc/kubernetes/pki/admin.pem     \
  --client-key=/etc/kubernetes/pki/admin-key.pem     \
  --embed-certs=true     \
  --kubeconfig=/etc/kubernetes/admin.kubeconfig
 
[root@k8s-master01 pki]# kubectl config set-context kubernetes-admin@kubernetes    \
  --cluster=kubernetes     \
  --user=kubernetes-admin     \
  --kubeconfig=/etc/kubernetes/admin.kubeconfig
 
[root@k8s-master01 pki]# kubectl config use-context kubernetes-admin@kubernetes  --kubeconfig=/etc/kubernetes/admin.kubeconfig

3.3.5 Create the kube-proxy Certificate on master01

[root@k8s-master01 pki]# cfssl gencert \
   -ca=/etc/kubernetes/pki/ca.pem \
   -ca-key=/etc/kubernetes/pki/ca-key.pem \
   -config=ca-config.json \
   -profile=kubernetes \
   kube-proxy-csr.json | cfssljson -bare /etc/kubernetes/pki/kube-proxy
 
 
[root@k8s-master01 pki]# kubectl config set-cluster kubernetes     \
  --certificate-authority=/etc/kubernetes/pki/ca.pem     \
  --embed-certs=true     \
  --server=https://127.0.0.1:8443     \
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
 
[root@k8s-master01 pki]# kubectl config set-credentials kube-proxy  \
  --client-certificate=/etc/kubernetes/pki/kube-proxy.pem     \
  --client-key=/etc/kubernetes/pki/kube-proxy-key.pem     \
  --embed-certs=true     \
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
 
[root@k8s-master01 pki]# kubectl config set-context kube-proxy@kubernetes    \
  --cluster=kubernetes     \
  --user=kube-proxy     \
  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig
 
[root@k8s-master01 pki]# kubectl config use-context kube-proxy@kubernetes  --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

3.3.6 Create the ServiceAccount Key (Secret) on master01

[root@k8s-master01 pki]# openssl genrsa -out /etc/kubernetes/pki/sa.key 2048

[root@k8s-master01 pki]# openssl rsa -in /etc/kubernetes/pki/sa.key -pubout -out /etc/kubernetes/pki/sa.pub

3.3.7 Distribute the Certificates to the Other Master Nodes

# Make sure /etc/kubernetes/pki/ exists on the other master nodes
[root@k8s-master01 pki]# for NODE in k8s-master02 k8s-master03; do  for FILE in $(ls /etc/kubernetes/pki | grep -v etcd); do  scp /etc/kubernetes/pki/${FILE} $NODE:/etc/kubernetes/pki/${FILE}; done;  for FILE in admin.kubeconfig controller-manager.kubeconfig scheduler.kubeconfig; do  scp /etc/kubernetes/${FILE} $NODE:/etc/kubernetes/${FILE}; done; done

3.3.8 Check the Certificates

Make sure the following 26 certificate files are present on all master nodes.

[root@k8s-master01 pki]# ls /etc/kubernetes/pki/
admin.csr      apiserver.csr      ca.csr      controller-manager.csr      front-proxy-ca.csr      front-proxy-client.csr      kube-proxy.csr      sa.key         scheduler-key.pem
admin-key.pem  apiserver-key.pem  ca-key.pem  controller-manager-key.pem  front-proxy-ca-key.pem  front-proxy-client-key.pem  kube-proxy-key.pem  sa.pub         scheduler.pem
admin.pem      apiserver.pem      ca.pem      controller-manager.pem      front-proxy-ca.pem      front-proxy-client.pem      kube-proxy.pem      scheduler.csr


[root@k8s-master01 pki]# ls /etc/kubernetes/pki/ | wc -l
26
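
Individual certificates can be inspected with openssl if needed, for example to check the cluster CA's subject and validity window:

[root@k8s-master01 pki]# openssl x509 -in /etc/kubernetes/pki/ca.pem -noout -subject -dates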

4 Configuring k8s System Components

4.1 etcd Configuration

4.1.1 master01 Configuration

[root@k8s-master01 ~]# cat > /etc/etcd/etcd.config.yml << EOF 
name: 'k8s-master01'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.218.100:2380'
listen-client-urls: 'https://192.168.218.100:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.218.100:2380'
advertise-client-urls: 'https://192.168.218.100:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.218.100:2380,k8s-master02=https://192.168.218.101:2380,k8s-master03=https://192.168.218.102:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF

4.1.2 master02 Configuration

[root@k8s-master02 ~]# cat > /etc/etcd/etcd.config.yml << EOF 
name: 'k8s-master02'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.218.101:2380'
listen-client-urls: 'https://192.168.218.101:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.218.101:2380'
advertise-client-urls: 'https://192.168.218.101:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.218.100:2380,k8s-master02=https://192.168.218.101:2380,k8s-master03=https://192.168.218.102:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF

4.1.3 master03 Configuration

[root@k8s-master03 ~]# cat > /etc/etcd/etcd.config.yml << EOF 
name: 'k8s-master03'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.218.102:2380'
listen-client-urls: 'https://192.168.218.102:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.218.102:2380'
advertise-client-urls: 'https://192.168.218.102:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'k8s-master01=https://192.168.218.100:2380,k8s-master02=https://192.168.218.101:2380,k8s-master03=https://192.168.218.102:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/etc/kubernetes/pki/etcd/etcd.pem'
  key-file: '/etc/kubernetes/pki/etcd/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/etc/kubernetes/pki/etcd/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF

4.2 Create the Services

Complete this section on all master nodes.

4.2.1 Create etcd.service and Start It

cat > /usr/lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target
 
[Service]
Type=notify
ExecStart=/usr/local/bin/etcd --config-file=/etc/etcd/etcd.config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536
 
[Install]
WantedBy=multi-user.target
Alias=etcd3.service
 
EOF

4.2.2 Create the etcd Certificate Directory and Start etcd

mkdir /etc/kubernetes/pki/etcd
ln -s /etc/etcd/ssl/* /etc/kubernetes/pki/etcd/
systemctl daemon-reload
systemctl enable --now etcd

4.2.3 Check etcd Status

Check on every master node.

[root@k8s-master01 ~]# export ETCDCTL_API=3

[root@k8s-master01 ~]# etcdctl --endpoints="192.168.218.102:2379,192.168.218.101:2379,192.168.218.100:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem  endpoint status --write-out=table
+----------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|       ENDPOINT       |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+----------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| 192.168.218.102:2379 | d43da0d39aaa95be |   3.5.7 |   20 kB |     false |      false |         2 |          9 |                  9 |        |
| 192.168.218.101:2379 | 11f94c330b77381a |   3.5.7 |   20 kB |     false |      false |         2 |          9 |                  9 |        |
| 192.168.218.100:2379 | ee015994ddee02d0 |   3.5.7 |   20 kB |      true |      false |         2 |          9 |                  9 |        |
+----------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
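
The same TLS flags also work with endpoint health, which should report every member as healthy:

[root@k8s-master01 ~]# etcdctl --endpoints="192.168.218.102:2379,192.168.218.101:2379,192.168.218.100:2379" --cacert=/etc/kubernetes/pki/etcd/etcd-ca.pem --cert=/etc/kubernetes/pki/etcd/etcd.pem --key=/etc/kubernetes/pki/etcd/etcd-key.pem endpoint health --write-out=table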

5 Configuring High Availability

High availability can be implemented with nginx or with haproxy plus keepalived; this document uses the nginx approach.

5.1 Build and Install nginx from Source

Copy the nginx package downloaded earlier to all nodes, then extract, build, and install it.

# Copy the previously downloaded nginx package to all nodes
scp nginx-1.22.1.tar.gz root@192.168.218.100:
scp nginx-1.22.1.tar.gz root@192.168.218.101:
scp nginx-1.22.1.tar.gz root@192.168.218.102:
scp nginx-1.22.1.tar.gz root@192.168.218.103:
scp nginx-1.22.1.tar.gz root@192.168.218.104:

# Extract the package on each node
tar xvf nginx-*.tar.gz
cd nginx-1.22.1

# Build and install on each node
./configure --with-stream --without-http --without-http_uwsgi_module --without-http_scgi_module --without-http_fastcgi_module
make && make install 

5.2 Write the Startup Configuration

Run on all hosts.

# Write the nginx configuration file
cat > /usr/local/nginx/conf/kube-nginx.conf <<EOF
worker_processes 1;
events {
    worker_connections  1024;
}
stream {
    upstream backend {
        hash \$remote_addr consistent;
        server 192.168.218.100:6443        max_fails=3 fail_timeout=30s;
        server 192.168.218.101:6443        max_fails=3 fail_timeout=30s;
        server 192.168.218.102:6443        max_fails=3 fail_timeout=30s;
    }
    server {
        listen 127.0.0.1:8443;
        proxy_connect_timeout 1s;
        proxy_pass backend;
    }
}
EOF
 
# Write the systemd unit file
cat > /etc/systemd/system/kube-nginx.service <<EOF
[Unit]
Description=kube-apiserver nginx proxy
After=network.target
After=network-online.target
Wants=network-online.target
 
[Service]
Type=forking
ExecStartPre=/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/kube-nginx.conf -p /usr/local/nginx -t
ExecStart=/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/kube-nginx.conf -p /usr/local/nginx
ExecReload=/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/kube-nginx.conf -p /usr/local/nginx -s reload
PrivateTmp=true
Restart=always
RestartSec=5
StartLimitInterval=0
LimitNOFILE=65536
 
[Install]
WantedBy=multi-user.target
EOF
 
# Enable at boot, then check the status and make sure it is running
systemctl enable --now  kube-nginx 
systemctl restart kube-nginx
systemctl status kube-nginx
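
Before relying on the proxy, validate the configuration and confirm the local listener (ss is provided by the iproute package):

# Test the configuration file and check the listener on 127.0.0.1:8443
/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/kube-nginx.conf -p /usr/local/nginx -t
ss -lntp | grep 8443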

6 Configuring k8s Components

Run the following on all k8s nodes to create the required directories.

mkdir -p /etc/kubernetes/manifests/ /etc/systemd/system/kubelet.service.d /var/lib/kubelet /var/log/kubernetes

6.1 Create the apiserver Service

6.1.1 master01 Configuration

[root@k8s-master01 ~]# cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
 
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
 
[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
      --v=2  \\
      --allow-privileged=true  \\
      --bind-address=0.0.0.0  \\
      --secure-port=6443  \\
      --advertise-address=192.168.218.100 \\
      --service-cluster-ip-range=10.96.0.0/12  \\
      --service-node-port-range=30000-32767  \\
      --etcd-servers=https://192.168.218.100:2379,https://192.168.218.101:2379,https://192.168.218.102:2379 \\
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem  \\
      --etcd-certfile=/etc/etcd/ssl/etcd.pem  \\
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem  \\
      --client-ca-file=/etc/kubernetes/pki/ca.pem  \\
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem  \\
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem  \\
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem  \\
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem  \\
      --service-account-key-file=/etc/kubernetes/pki/sa.pub  \\
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key  \\
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \\
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \\
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \\
      --authorization-mode=Node,RBAC  \\
      --enable-bootstrap-token-auth=true  \\
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem  \\
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem  \\
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem  \\
      --requestheader-allowed-names=aggregator  \\
      --requestheader-group-headers=X-Remote-Group  \\
      --requestheader-extra-headers-prefix=X-Remote-Extra-  \\
      --requestheader-username-headers=X-Remote-User \\
      --enable-aggregator-routing=true
      # --feature-gates=IPv6DualStack=true
      # --token-auth-file=/etc/kubernetes/token.csv
 
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535
 
[Install]
WantedBy=multi-user.target
 
EOF

6.1.2 master02 Configuration

[root@k8s-master02 ~]#  cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
 
[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
      --v=2  \\
      --allow-privileged=true  \\
      --bind-address=0.0.0.0  \\
      --secure-port=6443  \\
      --advertise-address=192.168.218.101 \\
      --service-cluster-ip-range=10.96.0.0/12  \\
      --service-node-port-range=30000-32767  \\
      --etcd-servers=https://192.168.218.100:2379,https://192.168.218.101:2379,https://192.168.218.102:2379 \\
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem  \\
      --etcd-certfile=/etc/etcd/ssl/etcd.pem  \\
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem  \\
      --client-ca-file=/etc/kubernetes/pki/ca.pem  \\
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem  \\
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem  \\
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem  \\
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem  \\
      --service-account-key-file=/etc/kubernetes/pki/sa.pub  \\
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key  \\
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \\
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \\
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \\
      --authorization-mode=Node,RBAC  \\
      --enable-bootstrap-token-auth=true  \\
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem  \\
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem  \\
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem  \\
      --requestheader-allowed-names=aggregator  \\
      --requestheader-group-headers=X-Remote-Group  \\
      --requestheader-extra-headers-prefix=X-Remote-Extra-  \\
      --requestheader-username-headers=X-Remote-User \\
      --enable-aggregator-routing=true
      # --feature-gates=IPv6DualStack=true
      # --token-auth-file=/etc/kubernetes/token.csv
 
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535
 
[Install]
WantedBy=multi-user.target
 
EOF

6.1.3 master03 Configuration

[root@k8s-master03 ~]# cat > /usr/lib/systemd/system/kube-apiserver.service  << EOF
 
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
 
[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
      --v=2  \\
      --allow-privileged=true  \\
      --bind-address=0.0.0.0  \\
      --secure-port=6443  \\
      --advertise-address=192.168.218.102 \\
      --service-cluster-ip-range=10.96.0.0/12  \\
      --service-node-port-range=30000-32767  \\
      --etcd-servers=https://192.168.218.100:2379,https://192.168.218.101:2379,https://192.168.218.102:2379 \\
      --etcd-cafile=/etc/etcd/ssl/etcd-ca.pem  \\
      --etcd-certfile=/etc/etcd/ssl/etcd.pem  \\
      --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem  \\
      --client-ca-file=/etc/kubernetes/pki/ca.pem  \\
      --tls-cert-file=/etc/kubernetes/pki/apiserver.pem  \\
      --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem  \\
      --kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem  \\
      --kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem  \\
      --service-account-key-file=/etc/kubernetes/pki/sa.pub  \\
      --service-account-signing-key-file=/etc/kubernetes/pki/sa.key  \\
      --service-account-issuer=https://kubernetes.default.svc.cluster.local \\
      --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \\
      --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \\
      --authorization-mode=Node,RBAC  \\
      --enable-bootstrap-token-auth=true  \\
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem  \\
      --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.pem  \\
      --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client-key.pem  \\
      --requestheader-allowed-names=aggregator  \\
      --requestheader-group-headers=X-Remote-Group  \\
      --requestheader-extra-headers-prefix=X-Remote-Extra-  \\
      --requestheader-username-headers=X-Remote-User \\
      --enable-aggregator-routing=true
      # --feature-gates=IPv6DualStack=true
      # --token-auth-file=/etc/kubernetes/token.csv
 
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535
 
[Install]
WantedBy=multi-user.target
 
EOF

6.1.4 Start kube-apiserver (all master nodes)

systemctl daemon-reload && systemctl enable --now kube-apiserver
 
# Check the status; make sure it is healthy on every node
systemctl status kube-apiserver

6.2 Configure the kube-controller-manager service

6.2.1 Configure all master nodes (the configuration is identical)

# 172.16.0.0/12 is the pod CIDR; set your own network segment as needed
 
cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
 
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
 
[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
      --v=2 \\
      --bind-address=127.0.0.1 \\
      --root-ca-file=/etc/kubernetes/pki/ca.pem \\
      --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \\
      --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \\
      --service-account-private-key-file=/etc/kubernetes/pki/sa.key \\
      --kubeconfig=/etc/kubernetes/controller-manager.kubeconfig \\
      --leader-elect=true \\
      --use-service-account-credentials=true \\
      --node-monitor-grace-period=40s \\
      --node-monitor-period=5s \\
      --pod-eviction-timeout=2m0s \\
      --controllers=*,bootstrapsigner,tokencleaner \\
      --allocate-node-cidrs=true \\
      --service-cluster-ip-range=10.96.0.0/12 \\
      --cluster-cidr=172.16.0.0/12 \\
      --node-cidr-mask-size-ipv4=24 \\
      --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.pem 
 
Restart=always
RestartSec=10s
 
[Install]
WantedBy=multi-user.target
 
EOF

6.2.2 Start kube-controller-manager and check its status; make sure it is healthy on every node

systemctl daemon-reload
systemctl enable --now kube-controller-manager
systemctl status kube-controller-manager

6.3 Configure the kube-scheduler service

6.3.1 Configure all master nodes (the configuration is identical)

cat > /usr/lib/systemd/system/kube-scheduler.service << EOF
 
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
 
[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
      --v=2 \\
      --bind-address=127.0.0.1 \\
      --leader-elect=true \\
      --kubeconfig=/etc/kubernetes/scheduler.kubeconfig
 
Restart=always
RestartSec=10s
 
[Install]
WantedBy=multi-user.target
 
EOF

6.3.2 Start the service and check its status; make sure it is healthy on every node

systemctl daemon-reload
systemctl enable --now kube-scheduler
systemctl status kube-scheduler

7 Configure TLS Bootstrapping

TLS bootstrapping is a mechanism that simplifies the steps an administrator must perform to set up mutually authenticated, encrypted communication between the kubelet and the apiserver.

7.1 Configure on master01

[root@k8s-master01 ~]# cd bootstrap
 
[root@k8s-master01 ~]# kubectl config set-cluster kubernetes     \
--certificate-authority=/etc/kubernetes/pki/ca.pem     \
--embed-certs=true     --server=https://127.0.0.1:8443     \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
 
# The token is defined in bootstrap.secret.yaml; if you change it, update it in that file
[root@k8s-master01 ~]# kubectl config set-credentials tls-bootstrap-token-user     \
--token=c8ad9c.2e4d610cf3e7426e \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
 
[root@k8s-master01 ~]# kubectl config set-context tls-bootstrap-token-user@kubernetes     \
--cluster=kubernetes     \
--user=tls-bootstrap-token-user     \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
 
[root@k8s-master01 ~]# kubectl config use-context tls-bootstrap-token-user@kubernetes     \
--kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig
 
[root@k8s-master01 ~]# mkdir -p /root/.kube ; cp /etc/kubernetes/admin.kubeconfig /root/.kube/config
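
As an optional sanity check, you can dump the kubeconfig that was just generated and confirm that the cluster, user, and current context are all in place (a read-only command):

[root@k8s-master01 ~]# kubectl config view --kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig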

7.2 Check the cluster status

If this step shows no problems, you can continue with the subsequent operations.

[root@k8s-master01 bootstrap]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
controller-manager   Healthy   ok                              
scheduler            Healthy   ok                              
etcd-1               Healthy   {"health":"true","reason":""}   
etcd-0               Healthy   {"health":"true","reason":""}   
etcd-2               Healthy   {"health":"true","reason":""}  
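
Since ComponentStatus is deprecated, the readyz endpoint is the forward-compatible way to get an equivalent (and more detailed) health report:

[root@k8s-master01 bootstrap]# kubectl get --raw='/readyz?verbose'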

# Be sure to run this step; do not forget it!
[root@k8s-master01 bootstrap]# kubectl create -f bootstrap.secret.yaml
secret/bootstrap-token-c8ad9c created
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
clusterrolebinding.rbac.authorization.k8s.io/node-autoapprove-bootstrap created
clusterrolebinding.rbac.authorization.k8s.io/node-autoapprove-certificate-rotation created
clusterrole.rbac.authorization.k8s.io/system:kube-apiserver-to-kubelet created
clusterrolebinding.rbac.authorization.k8s.io/system:kube-apiserver created

8 Configure the nodes

8.1 On master01, copy the certificates to the other nodes

[root@k8s-master01 ~]# cd /etc/kubernetes/
 
[root@k8s-master01 kubernetes]# for NODE in k8s-master02 k8s-master03 k8s-node01 k8s-node02; do ssh $NODE mkdir -p /etc/kubernetes/pki; for FILE in pki/ca.pem pki/ca-key.pem pki/front-proxy-ca.pem bootstrap-kubelet.kubeconfig kube-proxy.kubeconfig; do scp /etc/kubernetes/$FILE $NODE:/etc/kubernetes/${FILE}; done; done

8.2 kubelet configuration

8.2.1 Configure the kubelet service on all nodes

mkdir -p /var/lib/kubelet /var/log/kubernetes /etc/systemd/system/kubelet.service.d /etc/kubernetes/manifests/
 

cat > /usr/lib/systemd/system/kubelet.service << EOF
 
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service
 
[Service]
ExecStart=/usr/local/bin/kubelet \\
    --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.kubeconfig  \\
    --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
    --config=/etc/kubernetes/kubelet-conf.yml \\
    --container-runtime-endpoint=unix:///run/containerd/containerd.sock  \\
    --node-labels=node.kubernetes.io/node=
    # --feature-gates=IPv6DualStack=true
    # --container-runtime=remote
    # --runtime-request-timeout=15m
    # --cgroup-driver=systemd
 
[Install]
WantedBy=multi-user.target
EOF

8.2.2 Create the kubelet configuration file on all nodes

cat > /etc/kubernetes/kubelet-conf.yml <<EOF
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /etc/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
EOF

8.2.3 Start kubelet on all nodes

systemctl daemon-reload
systemctl restart kubelet
systemctl enable --now kubelet
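
If a node fails to register, inspecting the bootstrap CSRs and the kubelet journal is a useful first step (optional; the auto-approve bindings created earlier should leave the CSRs in the Approved,Issued state):

# Run on master01
kubectl get csr

# Run on the affected node to follow the kubelet logs
journalctl -u kubelet -f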

8.2.4 Check the cluster

[root@k8s-master01 ~]# kubectl  get node
NAME           STATUS     ROLES    AGE     VERSION
k8s-master01   Ready      <none>   10s     v1.26.1
k8s-master02   Ready      <none>   8s      v1.26.1
k8s-master03   Ready      <none>   6s      v1.26.1
k8s-node01     Ready      <none>   4s      v1.26.1
k8s-node02     Ready      <none>   2s      v1.26.1
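
The ROLES column shows <none> because a binary deployment does not set the role labels that kubeadm would. If you want the familiar output, the labels can be added by hand; this is purely cosmetic (the label keys below are a common convention, not a requirement):

kubectl label node k8s-master01 k8s-master02 k8s-master03 node-role.kubernetes.io/control-plane=
kubectl label node k8s-node01 k8s-node02 node-role.kubernetes.io/worker=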

8.3 kube-proxy configuration

8.3.1 Send the kubeconfig to the other nodes

[root@k8s-master01 ~]# for NODE in k8s-master02 k8s-master03; do scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig; done
 
[root@k8s-master01 ~]# for NODE in k8s-node01 k8s-node02; do scp /etc/kubernetes/kube-proxy.kubeconfig $NODE:/etc/kubernetes/kube-proxy.kubeconfig; done

8.3.2 Add the kube-proxy service file on all nodes

cat >  /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
After=network.target
 
[Service]
ExecStart=/usr/local/bin/kube-proxy \\
  --config=/etc/kubernetes/kube-proxy.yaml \\
  --v=2
 
Restart=always
RestartSec=10s
 
[Install]
WantedBy=multi-user.target
 
EOF

8.3.3 Add the kube-proxy configuration on all nodes

cat > /etc/kubernetes/kube-proxy.yaml << EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
  qps: 5
clusterCIDR: 172.16.0.0/12
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  masqueradeAll: true
  minSyncPeriod: 5s
  scheduler: "rr"
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms
 
EOF

8.3.4 Start kube-proxy on all nodes

systemctl daemon-reload
systemctl restart kube-proxy
systemctl enable --now kube-proxy
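
To confirm that kube-proxy really came up in ipvs mode, you can query its metrics endpoint and list the virtual servers it programmed (ipvsadm was installed with the base packages):

curl 127.0.0.1:10249/proxyMode    # should print: ipvs
ipvsadm -Ln | head                # IPVS virtual servers for the service addresses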

9 Install the network plugin

Check the libseccomp version and make sure it is higher than 2.4; otherwise the network plugin cannot be installed.

The steps in this section are executed only on master01.
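
A quick way to check the installed libseccomp version:

# Should report a version higher than 2.4
[root@k8s-master01 ~]# rpm -q libseccomp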

# Copy the previously downloaded package to master01
scp runc.amd64 root@192.168.218.100:

# Upgrade runc
[root@k8s-master01 ~]# install -m 755 runc.amd64 /usr/local/sbin/runc
[root@k8s-master01 ~]# cp -p /usr/local/sbin/runc  /usr/local/bin/runc
[root@k8s-master01 ~]# cp -p /usr/local/sbin/runc  /usr/bin/runc

# Check the current version
[root@k8s-master01 ~]# runc -v
runc version 1.1.4
commit: v1.1.4-0-g5fd4c4d1
spec: 1.0.2-dev
go: go1.17.10
libseccomp: 2.5.4

9.1 Install Calico

At this point it is advisable to take a snapshot before proceeding, so that you can roll back if problems occur later.

9.1.1 Adjust the Calico network segment

First, download the calico.yaml manifest (the commands below save it into a yaml working directory).

If your pod network is 192.168.0.0/16, modify calico.yaml to suit your environment: in the ConfigMap named calico-config, set the value of etcd_endpoints to the IP addresses and ports of your etcd servers. Otherwise no modification is needed; Calico detects the CIDR from the running configuration.

The pod network in this document is 172.16.0.0/12, so calico.yaml is left unchanged.

# Download the calico.yaml file
[root@k8s-master01 ~]# mkdir yaml
[root@k8s-master01 ~]# cd yaml
[root@k8s-master01 yaml]# curl https://raw.githubusercontent.com/projectcalico/calico/v3.25.0/manifests/calico-etcd.yaml -o calico.yaml
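
Before applying the manifest, you can optionally confirm whether the pool CIDR needs editing for your pod network (a read-only check):

[root@k8s-master01 yaml]# grep -n -A1 "CALICO_IPV4POOL_CIDR" calico.yaml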

# Apply the manifest
[root@k8s-master01 yaml]# kubectl apply -f calico.yaml
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
configmap/calico-config created
...part of the output omitted...

9.1.2 Check container status

Calico initialization is fairly slow; wait patiently for roughly ten minutes, then run the following command and the containers should show a Running status.

[root@k8s-master01 ~]# kubectl get pod -A
NAMESPACE     NAME                                      READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-57b57c56f-drxz8   1/1     Running   0          15m
kube-system   calico-node-8xz7p                         1/1     Running   0          15m
kube-system   calico-node-j5h47                         1/1     Running   0          15m
kube-system   calico-node-lqtmn                         1/1     Running   0          15m
kube-system   calico-node-qhrx6                         1/1     Running   0          15m
kube-system   calico-node-vfvwt                         1/1     Running   0          15m

9.2 Install Cilium

The operations in this section are performed only on master01.

9.2.1 Install helm

# Copy the previously downloaded helm-canary-linux-amd64.tar.gz to master01
scp helm-canary-linux-amd64.tar.gz root@192.168.218.100:

 
# On master01, extract the package
[root@k8s-master01 ~]# tar xvf helm-canary-linux-amd64.tar.gz
[root@k8s-master01 ~]# cp linux-amd64/helm /usr/local/bin/

9.2.2 Install Cilium

# Add the repository
[root@k8s-master01 ~]# helm repo add cilium https://helm.cilium.io
 
# Pull and extract the Cilium chart; for an offline deployment, copy the pulled cilium-*.tgz archive to master01
[root@k8s-master01 ~]# helm pull cilium/cilium
[root@k8s-master01 ~]# tar xvf cilium-*.tgz
 
# Install with default parameters
[root@k8s-master01 ~]# helm install harbor ./cilium/ -n kube-system

# To uninstall: helm uninstall harbor -n kube-system

# Enable routing information and the monitoring plugins. Run either the default installation above or the command below (not both); this document runs the following command
[root@k8s-master01 ~]# helm install cilium ./cilium/ --namespace kube-system --set hubble.relay.enabled=true --set hubble.ui.enabled=true --set prometheus.enabled=true --set operator.prometheus.enabled=true --set hubble.enabled=true --set hubble.metrics.enabled="{dns,drop,tcp,flow,port-distribution,icmp,http}"

9.2.3 Check

After the previous step, wait about 1-2 minutes, then run the following command; the containers should show a Running status.

[root@k8s-master01 ~]# kubectl get pod -A | grep cil
cilium-monitoring   grafana-698bbd89d5-7z86j                                 1/1     Running            0                57m
cilium-monitoring   prometheus-75fd6464d8-hlz72                              1/1     Running            0                57m
kube-system         cilium-5w885                                             1/1     Running            0                3m59s
kube-system         cilium-c4skr                                             1/1     Running            0                3m59s
kube-system         cilium-dmqpb                                             1/1     Running            0                3m59s
kube-system         cilium-operator-65b585d467-cxqrv                         1/1     Running            0                3m59s
kube-system         cilium-operator-65b585d467-nrk26                         1/1     Running            0                3m59s
kube-system         cilium-ps9fl                                             1/1     Running            0                3m59s
kube-system         cilium-z42rr                                             1/1     Running            0                3m59s

9.2.4 Download the dedicated monitoring dashboards

[root@k8s-master01 ~]# cd yaml
[root@k8s-master01 yaml]# wget https://raw.githubusercontent.com/cilium/cilium/1.12.1/examples/kubernetes/addons/prometheus/monitoring-example.yaml
[root@k8s-master01 yaml]# kubectl  apply -f monitoring-example.yaml
namespace/cilium-monitoring created
serviceaccount/prometheus-k8s created
configmap/grafana-config created
...part of the output omitted...

9.2.5 Download and deploy the connectivity test

[root@k8s-master01 yaml]# wget https://raw.githubusercontent.com/cilium/cilium/master/examples/kubernetes/connectivity-check/connectivity-check.yaml

[root@k8s-master01 yaml]# sed -i "s#google.com#oiox.cn#g" connectivity-check.yaml

[root@k8s-master01 yaml]# kubectl  apply -f connectivity-check.yaml
deployment.apps/echo-a created
deployment.apps/echo-b created
deployment.apps/echo-b-host created
...part of the output omitted...

9.2.6 Check the pods

[root@k8s-master01 yaml]# kubectl  get pod -A
NAMESPACE           NAME                                                     READY   STATUS             RESTARTS         AGE
cilium-monitoring   grafana-698bbd89d5-7z86j                                 1/1     Running            0                58m
cilium-monitoring   prometheus-75fd6464d8-hlz72                              1/1     Running            0                58m
default             echo-a-568cb98744-nnw59                                  1/1     Running            0                54m
default             echo-b-64db4dfd5d-7rj86                                  1/1     Running            0                54m
default             echo-b-host-6b7bb88666-n722x                             1/1     Running            0                54m
default             host-to-b-multi-node-headless-5458c6bff-d8cph            0/1     Running            19 (43s ago)     54m
default             pod-to-a-allowed-cnp-55cb67b5c5-m9hjm                    0/1     Running            19 (54s ago)     54m
default             pod-to-a-c9b8bf6f7-2f2vv                                 0/1     Running            19 (24s ago)     54m
default             pod-to-a-denied-cnp-85fb9df657-czx7x                     1/1     Running            0                54m
default             pod-to-b-intra-node-nodeport-55784cc5c9-bdxrr            0/1     Running            19 (13s ago)     54m
default             pod-to-b-multi-node-clusterip-5c46dd6677-6zr97           0/1     Running            19 (4s ago)      54m
default             pod-to-b-multi-node-headless-748dfc6fd7-v7r2l            0/1     Running            19 (24s ago)     54m
default             pod-to-b-multi-node-nodeport-f6464499f-bj2j6             0/1     Running            19 (43s ago)     54m
default             pod-to-external-1111-96c489555-zmp42                     0/1     Running            19 (24s ago)     54m
default             pod-to-external-fqdn-allow-google-cnp-57694dc7df-7ldpf   0/1     Running            19 (14s ago)     54m
kube-system         calico-kube-controllers-57b57c56f-65nqp                  1/1     Running            0                3h14m
kube-system         calico-node-8xz7p                                        1/1     Running            0                4h28m
kube-system         calico-node-j5h47                                        1/1     Running            0                4h28m
kube-system         calico-node-lqtmn                                        1/1     Running            0                4h28m
kube-system         calico-node-q8xcg                                        1/1     Running            0                3h14m
kube-system         calico-node-vfvwt                                        1/1     Running            0                4h28m
kube-system         cilium-5w885                                             1/1     Running            0                5m20s
kube-system         cilium-c4skr                                             1/1     Running            0                5m20s
kube-system         cilium-dmqpb                                             1/1     Running            0                5m20s
kube-system         cilium-operator-65b585d467-cxqrv                         1/1     Running            0                5m20s
kube-system         cilium-operator-65b585d467-nrk26                         1/1     Running            0                5m20s
kube-system         cilium-ps9fl                                             1/1     Running            0                5m20s
kube-system         cilium-z42rr                                             1/1     Running            0                5m20s
kube-system         hubble-relay-69d66476f4-z2fmn                            1/1     Running            0                5m20s
kube-system         hubble-ui-59588bd5c7-kx4s8                               2/2     Running            0                5m20s

9.2.7 Change the service type to NodePort

[root@k8s-master01 yaml]# kubectl edit svc -n kube-system hubble-ui
# Change type: ClusterIP (third line from the bottom) to type: NodePort; the editor behaves like vim, save and exit with :wq

[root@k8s-master01 yaml]# kubectl edit svc -n cilium-monitoring grafana
# Change type: ClusterIP (third line from the bottom) to type: NodePort

[root@k8s-master01 yaml]# kubectl edit svc -n cilium-monitoring prometheus
# Change type: ClusterIP (third line from the bottom) to type: NodePort
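
If you prefer a non-interactive change, the same edits can be applied with kubectl patch (equivalent to the three edits above):

[root@k8s-master01 yaml]# kubectl -n kube-system patch svc hubble-ui -p '{"spec":{"type":"NodePort"}}'
[root@k8s-master01 yaml]# kubectl -n cilium-monitoring patch svc grafana -p '{"spec":{"type":"NodePort"}}'
[root@k8s-master01 yaml]# kubectl -n cilium-monitoring patch svc prometheus -p '{"spec":{"type":"NodePort"}}'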

9.2.8 Check the ports

Check the ports of the grafana, prometheus, and hubble-ui services; in this deployment they are 30550, 30123, and 30092 respectively.

[root@k8s-master01 yaml]# kubectl get svc -A | grep monit
cilium-monitoring   grafana                NodePort    10.103.26.114    <none>        3000:30550/TCP   43m
cilium-monitoring   prometheus             NodePort    10.98.21.159     <none>        9090:30123/TCP   43m

[root@k8s-master01 yaml]# kubectl get svc -A | grep hubble
kube-system         hubble-metrics         ClusterIP   None             <none>        9965/TCP         7m55s
kube-system         hubble-peer            ClusterIP   10.105.135.226   <none>        443/TCP          7m55s
kube-system         hubble-relay           ClusterIP   10.109.191.76    <none>        80/TCP           7m55s
kube-system         hubble-ui              NodePort    10.103.19.102    <none>        80:30092/TCP     7m55s

9.2.9 Access

Access ports 30550, 30123, and 30092 on master01 in turn. If you deployed on local VMs, the addresses are as follows:

http://<master01 IP address>:30550
http://<master01 IP address>:30123
http://<master01 IP address>:30092

This document was deployed on Huawei ECS, so master01's EIP is used for access instead.

10 Install CoreDNS

This section is performed only on the master01 node.

10.1 View or modify the file

The clusterIP can be viewed or modified in the coredns.yaml file; this document uses the default clusterIP.

[root@k8s-master01 ~]# cd coredns/
[root@k8s-master01 coredns]# cat coredns.yaml | grep clusterIP:
  clusterIP: 10.96.0.10

10.2 Install CoreDNS

[root@k8s-master01 coredns]# kubectl create -f coredns.yaml
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
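
A quick check that the DNS pods come up (this assumes the manifest uses the conventional k8s-app=kube-dns label, which the service/kube-dns output above suggests):

[root@k8s-master01 coredns]# kubectl get pod -n kube-system -l k8s-app=kube-dns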

11 Install Metrics Server

This section is performed only on the master01 node.

11.1 Install Metrics-server

In recent Kubernetes versions, system resource metrics are collected through Metrics-server, which gathers memory, disk, CPU, and network usage for nodes and Pods.

[root@k8s-master01 ~]# cd metrics-server
[root@k8s-master01 metrics-server]# kubectl  apply -f metrics-server.yaml
serviceaccount/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
service/metrics-server created
deployment.apps/metrics-server created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created

11.2 Check the status

Wait a moment, then check the status.

[root@k8s-master01 metrics-server]# kubectl top node
NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
k8s-master01   113m         0%     3245Mi          10%       
k8s-master02   129m         0%     2086Mi          6%        
k8s-master03   129m         0%     2274Mi          7%        
k8s-node01     86m          0%     1373Mi          4%        
k8s-node02     179m         1%     1739Mi          5% 
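
Once the APIService is ready, kubectl top also works at pod granularity, for example:

[root@k8s-master01 metrics-server]# kubectl top pod -n kube-system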

12 Cluster verification

This section is performed only on the master01 node.

12.1 Deploy pod resources

12.1.1 Deploy a pod resource

[root@k8s-master01 ~]# cat<<EOF | kubectl apply -f -
> apiVersion: v1
> kind: Pod
> metadata:
>   name: busybox
>   namespace: default
> spec:
>   containers:
>   - name: busybox
>     image: docker.io/library/busybox:1.28.3
>     command:
>       - sleep
>       - "3600"
>     imagePullPolicy: IfNotPresent
>   restartPolicy: Always
> EOF

12.1.2 Check the pod resource

[root@k8s-master01 ~]# kubectl  get pod
NAME                                                     READY   STATUS             RESTARTS          AGE
busybox                                                  1/1     Running            0                 20s
...other pod information omitted...

If the image pull fails, run kubectl describe pod busybox to see the reason for the failure.

12.2 Use a pod to resolve kubernetes in the default namespace

# Check the services
[root@k8s-master01 ~]# kubectl get svc
NAME                   TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
echo-a                 ClusterIP   10.104.127.34   <none>        8080/TCP         18h
echo-b                 NodePort    10.103.67.31    <none>        8080:31414/TCP   18h
echo-b-headless        ClusterIP   None            <none>        8080/TCP         18h
echo-b-host-headless   ClusterIP   None            <none>        <none>           18h
kubernetes             ClusterIP   10.96.0.1       <none>        443/TCP          27h


[root@k8s-master01 ~]# kubectl exec  busybox -n default -- nslookup kubernetes
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kubernetes
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local

If you see ** server can't find kubernetes.svc.cluster.local: NXDOMAIN, try pulling a different version of busybox; it is very likely a busybox version problem.
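
One quick way to re-test with a known-good image is a throwaway pod (the 1.28.x tag used earlier works well for DNS tests; adjust as needed):

[root@k8s-master01 ~]# kubectl run -it --rm dns-test --image=docker.io/library/busybox:1.28.3 --restart=Never -- nslookup kubernetes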

12.3 Test whether cross-namespace resolution works

[root@k8s-master01 ~]# kubectl exec  busybox -n default -- nslookup kube-dns.kube-system
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      kube-dns.kube-system
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

12.4 Test that every node can reach the kubernetes service on port 443

[root@k8s-master01 ~]# telnet 10.96.0.1 443
Trying 10.96.0.1...
Connected to 10.96.0.1.
Escape character is '^]'.

12.5 Test that every node can reach the kube-dns service on port 53

[root@k8s-master01 ~]# telnet 10.96.0.10 53
Trying 10.96.0.10...
Connected to 10.96.0.10.
Escape character is '^]'.

[root@k8s-master01 ~]# curl 10.96.0.10:53
curl: (52) Empty reply from server
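
The empty reply is expected: port 53 speaks DNS, not HTTP, so curl only proves the port is reachable. If dig is installed on the node (bind-utils package), you can issue a real query instead:

dig @10.96.0.10 kubernetes.default.svc.cluster.local +short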

12.6 Test Pod-to-Pod connectivity

[root@k8s-master01 ~]# kubectl get pod -owide
NAME                                                     READY   STATUS    RESTARTS        AGE     IP                NODE           NOMINATED NODE   READINESS GATES
busybox                                                  1/1     Running   0               8m53s   10.0.2.237        k8s-master03   <none>           <none>
echo-a-568cb98744-nnw59                                  1/1     Running   0               18h     10.0.3.127        k8s-master01   <none>           <none>
echo-b-64db4dfd5d-7rj86                                  1/1     Running   0               18h     10.0.2.175        k8s-master03   <none>           <none>
echo-b-host-6b7bb88666-n722x                             1/1     Running   0               18h     192.168.218.102   k8s-master03   <none>           <none>
host-to-b-multi-node-clusterip-6cfc94d779-96fw6          1/1     Running   18 (17h ago)    18h     192.168.218.101   k8s-master02   <none>           <none>
host-to-b-multi-node-headless-5458c6bff-d8cph            1/1     Running   26 (17h ago)    18h     192.168.218.104   k8s-node02     <none>           <none>
pod-to-a-allowed-cnp-55cb67b5c5-m9hjm                    1/1     Running   26 (17h ago)    18h     10.0.1.145        k8s-master02   <none>           <none>
pod-to-a-c9b8bf6f7-2f2vv                                 1/1     Running   26 (17h ago)    18h     10.0.0.190        k8s-node02     <none>           <none>
pod-to-a-denied-cnp-85fb9df657-czx7x                     1/1     Running   0               18h     10.0.4.175        k8s-node01     <none>           <none>
pod-to-b-intra-node-nodeport-55784cc5c9-bdxrr            1/1     Running   26 (17h ago)    18h     10.0.2.51         k8s-master03   <none>           <none>
pod-to-b-multi-node-clusterip-5c46dd6677-6zr97           1/1     Running   26 (17h ago)    18h     10.0.3.76         k8s-master01   <none>           <none>
pod-to-b-multi-node-headless-748dfc6fd7-v7r2l            1/1     Running   26 (17h ago)    18h     10.0.3.224        k8s-master01   <none>           <none>
pod-to-b-multi-node-nodeport-f6464499f-bj2j6             1/1     Running   26 (17h ago)    18h     10.0.4.66         k8s-node01     <none>           <none>
pod-to-external-1111-96c489555-zmp42                     0/1     Running   321 (55s ago)   18h     10.0.1.198        k8s-master02   <none>           <none>
pod-to-external-fqdn-allow-google-cnp-57694dc7df-7ldpf   0/1     Running   321 (5s ago)    18h     10.0.4.132        k8s-node01     <none>           <none>


[root@k8s-master01 ~]# kubectl get pod -n kube-system -owide
NAME                                      READY   STATUS    RESTARTS   AGE   IP                NODE           NOMINATED NODE   READINESS GATES
calico-kube-controllers-57b57c56f-65nqp   1/1     Running   0          21h   172.16.58.193     k8s-node02     <none>           <none>
calico-node-8xz7p                         1/1     Running   0          22h   192.168.218.101   k8s-master02   <none>           <none>
calico-node-j5h47                         1/1     Running   0          22h   192.168.218.102   k8s-master03   <none>           <none>
calico-node-lqtmn                         1/1     Running   0          22h   192.168.218.100   k8s-master01   <none>           <none>
calico-node-q8xcg                         1/1     Running   0          21h   192.168.218.104   k8s-node02     <none>           <none>
calico-node-vfvwt                         1/1     Running   0          22h   192.168.218.103   k8s-node01     <none>           <none>
cilium-5w885                              1/1     Running   0          18h   192.168.218.101   k8s-master02   <none>           <none>
cilium-c4skr                              1/1     Running   0          18h   192.168.218.103   k8s-node01     <none>           <none>
cilium-dmqpb                              1/1     Running   0          18h   192.168.218.100   k8s-master01   <none>           <none>
cilium-operator-65b585d467-cxqrv          1/1     Running   0          18h   192.168.218.102   k8s-master03   <none>           <none>
cilium-operator-65b585d467-nrk26          1/1     Running   0          18h   192.168.218.104   k8s-node02     <none>           <none>
cilium-ps9fl                              1/1     Running   0          18h   192.168.218.104   k8s-node02     <none>           <none>
cilium-z42rr                              1/1     Running   0          18h   192.168.218.102   k8s-master03   <none>           <none>
coredns-568bb5dbff-dz2vx                  1/1     Running   0          17h   10.0.4.42         k8s-node01     <none>           <none>
hubble-relay-69d66476f4-z2fmn             1/1     Running   0          18h   10.0.1.86         k8s-master02   <none>           <none>
hubble-ui-59588bd5c7-kx4s8                2/2     Running   0          18h   10.0.1.34         k8s-master02   <none>           <none>
metrics-server-679f8d6774-szn9l           1/1     Running   0          17h   10.0.0.90         k8s-node02     <none>           <none>


# Exec into busybox and ping another node
[root@k8s-master01 ~]# kubectl exec -ti busybox -- sh
/ # ping -c4 192.168.218.103
PING 192.168.218.103 (192.168.218.103): 56 data bytes
64 bytes from 192.168.218.103: seq=0 ttl=62 time=0.411 ms
64 bytes from 192.168.218.103: seq=1 ttl=62 time=0.278 ms
64 bytes from 192.168.218.103: seq=2 ttl=62 time=0.269 ms
64 bytes from 192.168.218.103: seq=3 ttl=62 time=0.337 ms

--- 192.168.218.103 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.269/0.323/0.411 ms

/ # exit
[root@k8s-master01 ~]# 

# Successful replies prove that this pod can communicate across namespaces and across hosts

12.7 Test a distributed deployment

Create three nginx replicas; you can see the 3 replicas spread across different nodes. Delete them when you are done.

[root@k8s-master01 ~]# cd yaml/
[root@k8s-master01 yaml]# cat > deployments.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:latest
        ports:
        - containerPort: 80
EOF

[root@k8s-master01 yaml]# kubectl apply -f deployments.yaml
deployment.apps/nginx-deployment created

[root@k8s-master01 yaml]# kubectl get pod | grep nginx
nginx-deployment-7d8f6659d6-bpqk2                        1/1     Running            0                87s
nginx-deployment-7d8f6659d6-j4bbc                        1/1     Running            0                2m11s
nginx-deployment-7d8f6659d6-mf2gj                        1/1     Running            0                115s

# View the details of each nginx pod; the three pods are scheduled on different nodes
[root@k8s-master01 yaml]# kubectl describe pod nginx
Name:             nginx-deployment-7d8f6659d6-bpqk2
Namespace:        default
Priority:         0
Service Account:  default
Node:             k8s-node01/192.168.218.103
Start Time:       Mon, 27 Feb 2023 15:11:52 +0800
Labels:           app=nginx
                  pod-template-hash=7d8f6659d6
Annotations:      <none>
Status:           Running
IP:               10.0.4.105
IPs:
  IP:           10.0.4.105
Controlled By:  ReplicaSet/nginx-deployment-7d8f6659d6
...other output omitted...

# Delete these pods when they are no longer needed
# Delete a single pod
[root@k8s-master01 yaml]# kubectl delete pod <pod name goes here>

# Delete all of the deployed nginx pods
[root@k8s-master01 yaml]# kubectl delete -f deployments.yaml
deployment.apps "nginx-deployment" deleted
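
Note that deleting a single pod of a Deployment only causes its ReplicaSet to recreate it. To stop the pods while keeping the Deployment object around, scaling to zero is an alternative to deleting the manifest:

[root@k8s-master01 yaml]# kubectl scale deployment nginx-deployment --replicas=0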

13 Install the dashboard

The steps in this section are executed only on master01.

13.1 Install the dashboard

1. Obtain the dashboard deployment configuration file

# Download the deployment configuration file
[root@k8s-master01 ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

# If the download fails, save the following content as kubernetes-dashboard.yaml instead
[root@k8s-master01 ~]# vim kubernetes-dashboard.yaml 

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: registry.cn-hangzhou.aliyuncs.com/kubeapps/k8s-gcr-kubernetes-dashboard-amd64:v1.8.3
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 31080
  selector:
    k8s-app: kubernetes-dashboard

2. Deploy the dashboard

[root@k8s-master01 ~]# kubectl apply -f kubernetes-dashboard.yaml 
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created

13.2 Check the dashboard pod

In the deployment configuration, the dashboard is installed into the kube-system namespace, the service type is set to NodePort, and the external port is set to 31080. Wait 1-2 minutes, then view the pod details.

# After about 1 minute, check the dashboard pod name and its status
[root@k8s-master01 ~]# kubectl get pod -n kube-system | grep dashboard
kubernetes-dashboard-549694665c-mpg52      1/1     Running   0          20m

# Using the pod name above, view the pod details; they show which node the pod is scheduled on
[root@k8s-master01 ~]# kubectl describe pod kubernetes-dashboard-549694665c-mpg52 -n kube-system

...part of the output omitted...
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  22m   default-scheduler  Successfully assigned kube-system/kubernetes-dashboard-549694665c-mpg52 to k8s-node-1
  Normal  Pulled     22m   kubelet            Container image "registry.cn-hangzhou.aliyuncs.com/kubeapps/k8s-gcr-kubernetes-dashboard-amd64:v1.8.3" already present on machine
  Normal  Created    22m   kubelet            Created container kubernetes-dashboard
  Normal  Started    22m   kubelet            Started container kubernetes-dashboard


# Check the dashboard service type and port
[root@k8s-master01 ~]# kubectl get svc -n kube-system | grep dashboard
kubernetes-dashboard   NodePort    10.108.153.227   <none>        443:31080/TCP            21m

13.3 Create an authentication token (RBAC)

1. Create the service account configuration file

[root@k8s-master01 ~]# vim createuser.yaml 

apiVersion: v1
kind: ServiceAccount
metadata:
    name: admin-user
    namespace: kube-system

2. Create the cluster role binding configuration file

[root@k8s-master01 ~]# vim createClusterRoleBinding.yaml 

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: admin-user
    namespace: kube-system

3. Create the account and the cluster role binding

[root@k8s-master01 ~]# kubectl apply -f createuser.yaml 
serviceaccount/admin-user created

[root@k8s-master01 ~]# kubectl apply -f createClusterRoleBinding.yaml 
clusterrolebinding.rbac.authorization.k8s.io/admin-user created

4. Obtain the account token

[root@k8s-master01 ~]# kubectl -n kube-system create token admin-user
eyJhbGciOiJSUzI1NiIsImtpZCI6Imp3TVliUi1MQUxFMl9TSnlUMXlTYW1HdG8zSnB6NDZLTGpwMDhsSHBMUFkifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHquc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjgwNjgxMDAyLCJpYXQiOjE2ODA2Nzc0MDIsimlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiMjdiNDcwMzAtMDk0MS00MmEwLWIzNGMtOTY1M2I1ZTEyOGZmIn19LCJuYmYiOjE2ODA2Nzc0MDIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTphZG1pbi11c2VyIn0.cbgOhkOsIMBet4HDK-ivipIEqL0EVE_m_dFxT4tomLZI_YnTUxRz7l-_QT76dVkid8mqCUQMyBURa2gxYp-BbsBnQbJ1ww6ABTgw-5ZFTyPrBDFem2LmjzktwfeMMviGzr2a_A_vEUr4agw0iA8WDXXXFTMQEDkO_hNMmH4feYITlJiqF6cwUOzIKpmIFYfX1WTjiZMDpGLtRNS1g0JBbajWyJw9_GmFPLn9ofjyTpRa4dque3w920nfalbm9MHBLUG8M7VAm_IP7P6sNn8WbzKa3ahs9eab42Y-oVfF1KgwZipZkBOZsyC41mvFeie2Jpd83qyKHv71mAvSkN8P7w
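
Tokens created this way are short-lived (about one hour by default). For a longer test session, kubectl create token accepts a --duration flag, subject to the apiserver's limits:

[root@k8s-master01 ~]# kubectl -n kube-system create token admin-user --duration=24h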

13.4 Log in to the dashboard

Copy the token generated above and open the URL: https://<IP address of the node running the dashboard pod>:31080

Paste the token; after a successful login, the dashboard UI is displayed.

14 Install ingress

14.1 Create the files

14.1.1 Create the deploy.yaml file

[root@k8s-master01 ~]# mkdir ingress
[root@k8s-master01 ~]# cd ingress/

# Create the deploy.yaml file with the following content
[root@k8s-master01 ingress]# cat deploy.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx


---
# Source: ingress-nginx/templates/controller-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
automountServiceAccountToken: true
---
# Source: ingress-nginx/templates/controller-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  allow-snippet-annotations: 'true'
---
# Source: ingress-nginx/templates/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
      - namespaces
    verbs:
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
---
# Source: ingress-nginx/templates/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
  name: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ''
    resources:
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ''
    resources:
      - configmaps
      - pods
      - secrets
      - endpoints
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - networking.k8s.io
    resources:
      - ingressclasses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ''
    resources:
      - configmaps
    resourceNames:
      - ingress-controller-leader
    verbs:
      - get
      - update
  - apiGroups:
      - ''
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ''
    resources:
      - events
    verbs:
      - create
      - patch
---
# Source: ingress-nginx/templates/controller-rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx
subjects:
  - kind: ServiceAccount
    name: ingress-nginx
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/controller-service-webhook.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller-admission
  namespace: ingress-nginx
spec:
  type: ClusterIP
  ports:
    - name: https-webhook
      port: 443
      targetPort: webhook
      appProtocol: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-service.yaml
apiVersion: v1
kind: Service
metadata:
  annotations:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  type: NodePort
  externalTrafficPolicy: Local
  ipFamilyPolicy: SingleStack
  ipFamilies:
    - IPv4
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
      appProtocol: http
    - name: https
      port: 443
      protocol: TCP
      targetPort: https
      appProtocol: https
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/component: controller
---
# Source: ingress-nginx/templates/controller-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/component: controller
  revisionHistoryLimit: 10
  minReadySeconds: 0
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/component: controller
    spec:
      dnsPolicy: ClusterFirst
      containers:
        - name: controller
          image: registry.cn-hangzhou.aliyuncs.com/chenby/controller:v1.2.0 
          imagePullPolicy: IfNotPresent
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown
          args:
            - /nginx-ingress-controller
            - --election-id=ingress-controller-leader
            - --controller-class=k8s.io/ingress-nginx
            - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
            - --validating-webhook=:8443
            - --validating-webhook-certificate=/usr/local/certificates/cert
            - --validating-webhook-key=/usr/local/certificates/key
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            runAsUser: 101
            allowPrivilegeEscalation: true
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: LD_PRELOAD
              value: /usr/local/lib/libmimalloc.so
          livenessProbe:
            failureThreshold: 5
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
            - name: webhook
              containerPort: 8443
              protocol: TCP
          volumeMounts:
            - name: webhook-cert
              mountPath: /usr/local/certificates/
              readOnly: true
          resources:
            requests:
              cpu: 100m
              memory: 90Mi
      nodeSelector:
        kubernetes.io/os: linux
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      volumes:
        - name: webhook-cert
          secret:
            secretName: ingress-nginx-admission
---
# Source: ingress-nginx/templates/controller-ingressclass.yaml
# We don't support namespaced ingressClass yet
# So a ClusterRole and a ClusterRoleBinding is required
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: controller
  name: nginx
  namespace: ingress-nginx
spec:
  controller: k8s.io/ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/validating-webhook.yaml
# before changing this value, check the required kubernetes version
# https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#prerequisites
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  name: ingress-nginx-admission
webhooks:
  - name: validate.nginx.ingress.kubernetes.io
    matchPolicy: Equivalent
    rules:
      - apiGroups:
          - networking.k8s.io
        apiVersions:
          - v1
        operations:
          - CREATE
          - UPDATE
        resources:
          - ingresses
    failurePolicy: Fail
    sideEffects: None
    admissionReviewVersions:
      - v1
    clientConfig:
      service:
        namespace: ingress-nginx
        name: ingress-nginx-controller-admission
        path: /networking/v1/ingresses
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrole.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
rules:
  - apiGroups:
      - admissionregistration.k8s.io
    resources:
      - validatingwebhookconfigurations
    verbs:
      - get
      - update
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/clusterrolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ingress-nginx-admission
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
rules:
  - apiGroups:
      - ''
    resources:
      - secrets
    verbs:
      - get
      - create
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ingress-nginx-admission
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade,post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx-admission
subjects:
  - kind: ServiceAccount
    name: ingress-nginx-admission
    namespace: ingress-nginx
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-createSecret.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-create
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
spec:
  template:
    metadata:
      name: ingress-nginx-admission-create
      labels:
        helm.sh/chart: ingress-nginx-4.0.10
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 1.1.0
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: create
          image: registry.cn-hangzhou.aliyuncs.com/chenby/kube-webhook-certgen:v1.2.0 
          imagePullPolicy: IfNotPresent
          args:
            - create
            - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
            - --namespace=$(POD_NAMESPACE)
            - --secret-name=ingress-nginx-admission
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          securityContext:
            allowPrivilegeEscalation: false
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      nodeSelector:
        kubernetes.io/os: linux
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000
---
# Source: ingress-nginx/templates/admission-webhooks/job-patch/job-patchWebhook.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: ingress-nginx-admission-patch
  namespace: ingress-nginx
  annotations:
    helm.sh/hook: post-install,post-upgrade
    helm.sh/hook-delete-policy: before-hook-creation,hook-succeeded
  labels:
    helm.sh/chart: ingress-nginx-4.0.10
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 1.1.0
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
spec:
  template:
    metadata:
      name: ingress-nginx-admission-patch
      labels:
        helm.sh/chart: ingress-nginx-4.0.10
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/version: 1.1.0
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: admission-webhook
    spec:
      containers:
        - name: patch
          image: registry.cn-hangzhou.aliyuncs.com/chenby/kube-webhook-certgen:v1.1.1 
          imagePullPolicy: IfNotPresent
          args:
            - patch
            - --webhook-name=ingress-nginx-admission
            - --namespace=$(POD_NAMESPACE)
            - --patch-mutating=false
            - --secret-name=ingress-nginx-admission
            - --patch-failure-policy=Fail
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          securityContext:
            allowPrivilegeEscalation: false
      restartPolicy: OnFailure
      serviceAccountName: ingress-nginx-admission
      nodeSelector:
        kubernetes.io/os: linux
      securityContext:
        runAsNonRoot: true
        runAsUser: 2000

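The two Jobs above are the Helm chart's admission-webhook hooks: the create Job generates a self-signed certificate and stores it in the ingress-nginx-admission Secret, and the patch Job injects the CA bundle into the chart's ValidatingWebhookConfiguration. As an optional check (not part of the original procedure), you can confirm both Jobs completed after applying deploy.yaml in section 14.2:

# Optional verification, run after section 14.2; resource names come from the manifest above
kubectl get jobs -n ingress-nginx
kubectl get secret ingress-nginx-admission -n ingress-nginx
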
14.1.2 Create the backend.yaml file

[root@k8s-master01 ingress]# cat > backend.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app.kubernetes.io/name: default-http-backend
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: default-http-backend
  template:
    metadata:
      labels:
        app.kubernetes.io/name: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        image: registry.cn-hangzhou.aliyuncs.com/chenby/defaultbackend-amd64:1.5 
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: kube-system
  labels:
    app.kubernetes.io/name: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app.kubernetes.io/name: default-http-backend
    
EOF
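
The default-http-backend above is the catch-all service: requests that match no Ingress rule land on it and receive a 404, and it exposes /healthz on port 8080, which the liveness probe uses. As an optional sanity check (not in the original steps), you can port-forward to the Deployment once it is running and query the health endpoint:

# Optional: verify the default backend answers its health check (run after 14.2)
kubectl -n kube-system port-forward deploy/default-http-backend 8080:8080 &
curl -s http://127.0.0.1:8080/healthz    # expected output: ok
kill %1                                  # stop the background port-forward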

14.1.3 Create the ingress-demo-app.yaml file

[root@k8s-master01 ingress]# cat > ingress-demo-app.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-server
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-server
  template:
    metadata:
      labels:
        app: hello-server
    spec:
      containers:
      - name: hello-server
        image: registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images/hello-server
        ports:
        - containerPort: 9000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-demo
  name: nginx-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-demo
  template:
    metadata:
      labels:
        app: nginx-demo
    spec:
      containers:
      - image: nginx
        name: nginx
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-demo
  name: nginx-demo
spec:
  selector:
    app: nginx-demo
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: hello-server
  name: hello-server
spec:
  selector:
    app: hello-server
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 9000
---
apiVersion: networking.k8s.io/v1
kind: Ingress  
metadata:
  name: ingress-host-bar
spec:
  ingressClassName: nginx
  rules:
  - host: "hello.ptuxgk.cn"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: hello-server
            port:
              number: 8000
  - host: "demo.ptuxgk.cn"
    http:
      paths:
      - pathType: Prefix
        path: "/nginx"  
        backend:
          service:
            name: nginx-demo
            port:
              number: 8000
EOF
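
The Ingress above routes purely on the Host header: hello.ptuxgk.cn goes to hello-server and demo.ptuxgk.cn/nginx goes to nginx-demo. These demo domains have no DNS records, so resolve them locally on whatever machine you test from; a minimal sketch, using k8s-master01's address from table 1 (any node IP works):

# Point the demo hostnames at a cluster node on the test machine
cat >> /etc/hosts << EOF
192.168.218.100 hello.ptuxgk.cn demo.ptuxgk.cn
EOF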

14.2 Run the deployment

[root@k8s-master01 ~]# cd ingress/
 
[root@k8s-master01 ingress]# kubectl  apply -f deploy.yaml
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
configmap/ingress-nginx-controller created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
service/ingress-nginx-controller-admission created
service/ingress-nginx-controller created
deployment.apps/ingress-nginx-controller created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
serviceaccount/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
 
[root@k8s-master01 ingress]# kubectl  apply -f backend.yaml
deployment.apps/default-http-backend created
service/default-http-backend created

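Instead of sleeping for a fixed interval, you can poll the controller's readiness. A sketch using the standard kubectl wait command, assuming the controller pods carry the app.kubernetes.io/component=controller label as in the upstream chart:

kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=180s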
 
# Once the controller is ready (roughly 2-3 minutes after the apply), run:
[root@k8s-master01 ingress]# kubectl  apply -f ingress-demo-app.yaml 
deployment.apps/hello-server created
deployment.apps/nginx-demo created
service/nginx-demo created
service/hello-server created
ingress.networking.k8s.io/ingress-host-bar created
 
 
[root@k8s-master01 ingress]# kubectl  get ingress
NAME               CLASS   HOSTS                            ADDRESS           PORTS   AGE
ingress-host-bar   nginx   hello.ptuxgk.cn,demo.ptuxgk.cn   192.168.218.100   80      2m40s

14.3 Filter for the ingress service ports

[root@k8s-master01 ingress]# kubectl  get svc -A | grep ingress
ingress-nginx       ingress-nginx-controller             NodePort    10.110.35.238    <none>        80:30370/TCP,443:31483/TCP   5m36s
ingress-nginx       ingress-nginx-controller-admission   ClusterIP   10.105.66.156    <none>        443/TCP                      5m36s
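
With the NodePorts shown above (80:30370 and 443:31483 here; NodePorts are allocated per cluster, so substitute your own values), the two Ingress rules can be exercised from any machine, either via the /etc/hosts entries from 14.1.3 or by passing the Host header explicitly:

# Test through the controller's NodePort; port numbers come from the output above
curl -H "Host: hello.ptuxgk.cn" http://192.168.218.100:30370/
curl -H "Host: demo.ptuxgk.cn"  http://192.168.218.100:30370/nginx

Note that the /nginx prefix is forwarded to nginx-demo unmodified, so the stock nginx image answers 404 for it; the request reaching nginx at all is what demonstrates the path rule.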
