Binary Deployment of a Highly Available Kubernetes v1.27.1 Cluster (detailed step-by-step guide, supports online or offline deployment)

This guide deploys each k8s component one by one, showing exactly which files and configuration every component needs. That makes the deployment easier to follow and later troubleshooting more straightforward.

Introduction: Kubernetes (k8s for short) is a container cluster management system open-sourced by Google in June 2014 and written in Go. It manages containerized applications running across multiple hosts in a cloud platform. Kubernetes aims to make deploying containerized applications simple and efficient, providing resource scheduling, deployment management, service discovery, scaling, monitoring and maintenance as a complete platform for automatically deploying, scaling and operating application containers across clusters of hosts. It supports a range of container runtimes, including Docker and containerd.

Release notes: https://kubernetes.io/zh-cn/releases/

1 Environment

1.1 Host planning

The cluster consists of 2 masters and 1 worker node.

hostname         ip             software
master1          192.168.1.21   cfssl, containerd, etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy, nginx, keepalived
master2          192.168.1.22   containerd, etcd, kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy, nginx, keepalived
node1            192.168.1.23   containerd, kubelet, kube-proxy
docker-registry  192.168.1.9    docker

Note: most of the images used during deployment come from registries hosted abroad, so an unstable network easily causes image pull failures. The docker-registry host runs a private image registry that serves the cluster; the images on it were pulled in advance according to the official YAML manifests, which allows the cluster to be deployed quickly.
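For reference, the private registry on docker-registry (192.168.1.9) can be brought up with the official registry image. This is only a sketch, assuming Docker is already installed on that host; the /opt/registry storage path is an arbitrary choice, and port 5000 matches the 192.168.1.9:5000 endpoint used later in this guide.

#Run a local registry on docker-registry, listening on port 5000
docker run -d --name registry --restart=always -p 5000:5000 -v /opt/registry:/var/lib/registry registry:2

#Verify the registry answers (an empty repository list is expected at first)
curl http://192.168.1.9:5000/v2/_catalog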

1.2 Software versions

Software                                                                       Version
kernel                                                                         v6.3.3-1.el7
CentOS                                                                         v7.9.2009
containerd                                                                     v1.7.1
runc                                                                           v1.1.4
cfssl                                                                          v1.6.3
etcd                                                                           v3.5.8
nginx                                                                          1.23.3
keepalived                                                                     default version from the yum repository
kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy   v1.27.1
docker                                                                         24.0.1
calico                                                                         v3.25
coredns                                                                        v1.9.4
dashboard                                                                      v2.5.0

1.3 Network allocation

Three network ranges are involved in the cluster installation:

Host network: the network of the servers where k8s is installed

Pod network: the k8s Pod CIDR, i.e. the addresses assigned to containers

Service network: the k8s Service CIDR; Services are used for communication with containers inside the cluster.

VIP: the high-availability address for the apiserver

Network name        CIDR / address
Node network        192.168.1.0/24
Service network     10.96.0.0/16
Pod network         10.244.0.0/16
VIP (virtual IP)    192.168.1.25

2 Base environment configuration

2.1 Configure IP addresses and hostnames

hostnamectl set-hostname master1
nmcli connection modify ens33 ipv4.method manual ipv4.addresses "192.168.1.21/24" ipv4.gateway 192.168.1.254 ipv4.dns "8.8.8.8,114.114.114.114" connection.autoconnect yes
nmcli connection up ens33

hostnamectl set-hostname master2
nmcli connection modify ens33 ipv4.method manual ipv4.addresses "192.168.1.22/24" ipv4.gateway 192.168.1.254 ipv4.dns "8.8.8.8,114.114.114.114" connection.autoconnect yes
nmcli connection up ens33


hostnamectl set-hostname node1
nmcli connection modify ens33 ipv4.method manual ipv4.addresses "192.168.1.23/24" ipv4.gateway 192.168.1.254 ipv4.dns "8.8.8.8,114.114.114.114" connection.autoconnect yes
nmcli connection up ens33

hostnamectl set-hostname docker-registry
nmcli connection modify ens33 ipv4.method manual ipv4.addresses "192.168.1.9/24" ipv4.gateway 192.168.1.254 ipv4.dns "8.8.8.8,114.114.114.114" connection.autoconnect yes
nmcli connection up ens33

Note

The gateway must match your VMware network configuration; the DNS servers are set so the hosts can reach the Internet.

ping baidu.com
PING baidu.com (39.156.66.10) 56(84) bytes of data.
64 bytes from 39.156.66.10 (39.156.66.10): icmp_seq=1 ttl=128 time=213 ms
64 bytes from 39.156.66.10 (39.156.66.10): icmp_seq=2 ttl=128 time=109 ms

2.2 Configure IP and hostname resolution

[root@master1 ~]# cat >> /etc/hosts << 'EOF'
192.168.1.21 master1
192.168.1.22 master2
192.168.1.23 node1
EOF

[root@master1 ~]# for i in {22,23}
do
scp /etc/hosts 192.168.1.$i:/etc/hosts
done

2.3 Configure passwordless SSH login

#Set up passwordless login from master1 to the other hosts. Most of the work is done on master1; once a component is configured, the files the other nodes need are copied over with scp
[root@master1 ~]# ssh-keygen -t rsa -P '' -f /root/.ssh/id_rsa
[root@master1 ~]# for i in {master1,master2,node1}
do
ssh-copy-id $i
done

2.4 Configure the yum repository

#Mount the local ISO image
cat >> /etc/fstab << 'EOF'
/media/CentOS-7-x86_64-Everything-2009.iso /mnt iso9660 defaults 0 0
EOF

#Apply the fstab entry
mount -a

#Write the yum repository configuration
cd /etc/yum.repos.d/
mkdir bak
mv * bak/

cat > local.repo << 'EOF'
[dvd]
name=dvd
baseurl=file:///mnt
enabled=1
gpgcheck=0
EOF

#Refresh the yum metadata and list the repositories
yum clean all
yum repolist

Alternatively, configure the Aliyun online mirror inside China (instructions on the official site): https://developer.aliyun.com/mirror/

wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
#Users who are not on Alibaba Cloud ECS will see a "Couldn't resolve host 'mirrors.cloud.aliyuncs.com'" message; it does not affect usage. You can also remove the related entries, e.g.:
sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo

[root@master1 ~]# for i in {22,23}
do
scp -r /etc/yum.repos.d/ 192.168.1.$i:/etc/
done

2.5 Install required tools

yum -y install wget net-tools telnet curl

2.6 Disable the firewall

[root@master1 ~]# for i in {master1,master2,node1}
do
ssh $i "systemctl disable --now firewalld"
done

2.7 Disable SELinux

[root@master1 ~]# for i in {master1,master2,node1}
do
ssh $i "setenforce 0;sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config"
done

2.8 Disable swap

[root@master1 ~]# for i in {master1,master2,node1}
do
ssh $i "sed -i 's/.*swap.*/#&/' /etc/fstab;swapoff -a ;sysctl -w vm.swappiness=0"
done
 
Note: sysctl -w vm.swappiness=0 sets the kernel's virtual memory policy. vm.swappiness controls how aggressively data is moved to the swap partition when physical memory is under pressure; the larger the value, the more willing the kernel is to swap, and vice versa. Setting it to 0 tells the kernel not to use swap when physical memory runs short, and instead to reclaim memory directly (in the worst case letting the OOM killer terminate processes).
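A quick way to confirm the change on every node (a small verification sketch run from master1):

[root@master1 ~]# for i in {master1,master2,node1}
do
ssh $i "free -h | grep -i swap; cat /proc/sys/vm/swappiness"
done
#Swap should report 0B total and swappiness should print 0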

2.9 Network configuration

#Configure the NetworkManager service so that Calico can manage its own network interfaces
[root@master1 ~]# cat > /etc/NetworkManager/conf.d/calico.conf << EOF 
[keyfile]
unmanaged-devices=interface-name:cali*;interface-name:tunl*
EOF

What this file does:
It tells NetworkManager not to manage any interface whose name starts with cali or tunl. These interfaces are created by Calico for networking and policy enforcement in the Kubernetes cluster. By default NetworkManager may interfere with them and break networking or policy enforcement; telling it to ignore these interfaces ensures Calico can operate correctly in the cluster.

[root@master1 ~]# for i in {master1,master2,node1}
do
scp /etc/NetworkManager/conf.d/calico.conf $i:/etc/NetworkManager/conf.d/calico.conf
ssh $i "systemctl restart NetworkManager"
done

2.10 Optimize SSH

#Edit the sshd configuration file
[root@master1 ~]# vim /etc/ssh/sshd_config
UseDNS no
GSSAPIAuthentication no

[root@master1 ~]# systemctl restart sshd

[root@master1 ~]# for i in {master2,node1}
do
scp /etc/ssh/sshd_config $i:/etc/ssh/sshd_config
ssh $i "systemctl restart sshd"
done

#The same changes expressed as a single loop
[root@master1 ~]# for i in {master1,master2,node1}
do
ssh $i "sed -i 's/#UseDNS\ yes/UseDNS\ no/g; s/GSSAPIAuthentication\ yes/GSSAPIAuthentication\ no/g' /etc/ssh/sshd_config"
ssh $i "systemctl restart sshd"
done

2.11 Configure time synchronization

#chrony is the recommended way to configure time sync
[root@master1 ~]# for i in {master1,master2,node1}
 do
 ssh $i "yum -y install chrony && sed -i -e '/^server.*/d' -e '/^# Please consider .*/a\server\ ntp.aliyun.com\ iburst' /etc/chrony.conf;systemctl restart chronyd"
 done
 
#Verify
[root@master1 ~]# chronyc sources -v
210 Number of sources = 1

  .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
 / .- Source state '*' = current synced, '+' = combined , '-' = not combined,
| /   '?' = unreachable, 'x' = time may be in error, '~' = time too variable.
||                                                 .- xxxx [ yyyy ] +/- zzzz
||      Reachability register (octal) -.           |  xxxx = adjusted offset,
||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
||                                \     |          |  zzzz = estimated error.
||                                 |    |           \
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^* 203.107.6.88                  2   6    17     3    -28us[ +681us] +/-   32ms

2.12 Tune resource limits

#Temporarily raise the file descriptor limit
ulimit -SHn 65535

[root@master1 ~]# cat >>/etc/security/limits.conf << 'EOF'
* soft nofile 65536
* hard nofile 131072
* soft nproc 65536
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
EOF

#Distribute the limits configuration to all nodes
[root@master1 ~]# for i in {master1,master2,node1}
do
scp /etc/security/limits.conf $i:/etc/security/limits.conf
ssh $i "ulimit -SHn 65535;sysctl --system"
done

2.13 Upgrade the kernel to 4.18 or later

#Kernel 3.10 has too many known issues and is also the minimum version supported by k8s; CentOS 7 needs a newer kernel to run the Kubernetes components reliably. Upgrade other distributions according to your own needs.
#Download the kernel packages
Download page: https://elrepo.org/linux/kernel/el7/x86_64/RPMS/	#lt = long-term support, ml = mainline (latest stable)

[root@master1 ~]# mkdir kernel
[root@master1 ~]# cd kernel/
[root@master1 kernel]# wget https://elrepo.org/linux/kernel/el7/x86_64/RPMS/kernel-ml-6.3.3-1.el7.elrepo.x86_64.rpm
[root@master1 kernel]# wget https://elrepo.org/linux/kernel/el7/x86_64/RPMS/kernel-ml-devel-6.3.3-1.el7.elrepo.x86_64.rpm

#Install the packages, set the new kernel as default, then reboot. The installation takes a while; be patient and do not interrupt it
[root@master1 kernel]# yum -y install kernel-ml-6.3.3-1.el7.elrepo.x86_64.rpm kernel-ml-devel-6.3.3-1.el7.elrepo.x86_64.rpm
[root@master1 kernel]# grubby --set-default $(ls /boot/vmlinuz-* | grep elrepo) && reboot

#Install on the other nodes
[root@master1 kernel]# for i in {master2,node1}
do
scp -r /root/kernel $i:/root/
ssh $i "cd /root/kernel;yum -y install kernel-ml-6.3.3-1.el7.elrepo.x86_64.rpm kernel-ml-devel-6.3.3-1.el7.elrepo.x86_64.rpm"
#Single quotes so that $(...) is evaluated on the remote node rather than on master1
ssh $i 'grubby --set-default $(ls /boot/vmlinuz-* | grep elrepo) && reboot'
done

#Verify
[root@master1 ~]# for i in {master1,master2,node1}
do
ssh $i "echo $i kernel version: \$(uname -r)"
done
master1 kernel version: 6.3.3-1.el7.elrepo.x86_64
master2 kernel version: 6.3.3-1.el7.elrepo.x86_64
node1 kernel version: 6.3.3-1.el7.elrepo.x86_64

2.14 Kernel parameter tuning

[root@master1 ~]# cat > /etc/sysctl.d/k8s.conf << 'EOF'
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.netfilter.nf_conntrack_max=2310720
 
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl =15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF

#Distribute to the other nodes and apply the settings
[root@master1 ~]# for i in {master1,master2,node1}
do
scp /etc/sysctl.d/k8s.conf $i:/etc/sysctl.d/k8s.conf
ssh $i "sysctl --system"
done

2.15 Install ipvs and load the kernel modules

[root@master1 ~]# yum -y install ipvsadm ipset sysstat conntrack libseccomp
[root@master1 ~]# cat >> /etc/modules-load.d/ipvs.conf <<EOF 
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF

[root@master1 ~]# systemctl restart systemd-modules-load.service

[root@master1 ~]# for i in {master2,node1}
do
scp /etc/modules-load.d/ipvs.conf $i:/etc/modules-load.d/ipvs.conf
ssh $i "yum -y install ipvsadm ipset sysstat conntrack libseccomp && systemctl restart systemd-modules-load.service"
done
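A quick check (sketch) to confirm the ipvs-related modules are actually loaded on each node:

[root@master1 ~]# for i in {master1,master2,node1}
do
ssh $i "lsmod | grep -E 'ip_vs|nf_conntrack'"
done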

3 Install Containerd as the Runtime

Containerd is the recommended runtime for Kubernetes v1.24 and later.

ctr is containerd's own CLI tool.

crictl is the CLI tool defined by the Kubernetes community specifically for CRI runtimes.

Since v1.24, Kubernetes no longer depends on Docker. Container runtimes other than Docker had long been available, so the project decided to stop maintaining the dockershim component and removed it entirely. Without dockershim, Docker cannot talk to Kubernetes over the CRI, and Kubernetes cannot drive Docker's container engine. For Kubernetes this is not a problem, because it has other runtimes to choose from; for Docker, however, Kubernetes is effectively the only option in enterprise orchestration, with no real third-party alternative. Docker therefore built its own shim on top of the CRI, cri-dockerd, which implements the CRI so that Docker can still be combined with Kubernetes.

3.1 Download and extract the installation package

[root@master1 ~]# wget https://github.com/containerd/containerd/releases/download/v1.7.1/cri-containerd-cni-1.7.1-linux-amd64.tar.gz

#Extract the cri-containerd-cni package
#First, list the directory structure inside the tarball
[root@master1 ~]# tar -tvf cri-containerd-cni-1.7.1-linux-amd64.tar.gz

#The archive contains etc, opt and usr directories; extracting it directly into / places everything in the matching system directories, which saves copying files afterwards
[root@master1 ~]# tar -xf cri-containerd-cni-1.7.1-linux-amd64.tar.gz -C /

#Inspect the containerd service unit; no changes are needed
[root@master1 ~]# grep -v '^$\|^\s*#' /etc/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target
[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=infinity
TasksMax=infinity
OOMScoreAdjust=-999
[Install]
WantedBy=multi-user.target

3.2 Configure the kernel modules required by Containerd

[root@master1 ~]# cat > /etc/modules-load.d/containerd.conf << EOF
overlay
br_netfilter
EOF

#Load the modules
[root@master1 ~]# systemctl restart systemd-modules-load.service

#Check that the containerd-related modules are loaded:
[root@master1 ~]# lsmod | egrep 'br_netfilter|overlay'

3.3 Configure the kernel parameters required by Containerd

[root@master1 ~]# cat > /etc/sysctl.d/99-kubernetes-cri.conf << 'EOF'
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

#Apply the kernel parameters
[root@master1 ~]# sysctl --system

3.4 Create and modify the Containerd configuration file

#Generate the default configuration file
[root@master1 ~]# mkdir /etc/containerd
[root@master1 ~]# containerd config default > /etc/containerd/config.toml

[root@master1 ~]# grep SystemdCgroup /etc/containerd/config.toml
            SystemdCgroup = false	#change this to true
[root@master1 ~]# sed -i s#SystemdCgroup\ =\ false#SystemdCgroup\ =\ true# /etc/containerd/config.toml

#Configure the sandbox (pause) image address
[root@master1 ~]# grep registry.k8s.io /etc/containerd/config.toml
    sandbox_image = "registry.k8s.io/pause:3.8"
[root@master1 ~]# sed -i s#registry.k8s.io#registry.aliyuncs.com/google_containers#  /etc/containerd/config.toml

#Configure the private registry. If you want to use Harbor instead, see my article on deploying an offline enterprise Harbor registry with cfssl-generated certificates
[root@master1 ~]# vim /etc/containerd/config.toml
      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
        [plugins."io.containerd.grpc.v1.cri".registry.mirrors."192.168.1.9:5000"]
          endpoint = ["http://192.168.1.9:5000"]
          
#Note: add the two lines above under [plugins."io.containerd.grpc.v1.cri".registry.mirrors]. When calico and the other components are deployed later, the official images that were already pushed to the private docker-registry are pulled from there, which makes installation much faster. crictl needs this setting to pull from the private docker-registry, otherwise it reports an error. If your network can reach the upstream registries directly, you can pull during deployment and skip this change.

3.5 Start containerd and enable it at boot

[root@master1 ~]# systemctl daemon-reload
[root@master1 ~]# systemctl enable --now containerd
[root@master1 ~]# systemctl status containerd

3.6 Common containerd commands

The runtime endpoint setting tells the crictl client where to find the container runtime. This location is normally a Unix socket, through which crictl talks to the runtime to manage containers and images. To have crictl manage containerd, set the runtime-endpoint and image-endpoint parameters in /etc/crictl.yaml to containerd's Unix socket path.

#Write the crictl configuration file
[root@master1 ~]# cat > /etc/crictl.yaml <<EOF
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF

[root@master1 ~]# systemctl restart  containerd

Common crictl commands

#Check runtime info and running containers
[root@master1 ~]# crictl info
[root@master1 ~]# crictl ps

#Common crictl commands:
#Pull an image
crictl pull <image_name>

#List images
crictl images

#List pod sandboxes
crictl pods

#List running containers
crictl ps

#List all containers (including exited ones)
crictl ps -a

#Inspect a container
crictl inspect <container_id>

#View container logs
crictl logs <container_id>

#Remove a container
crictl rm <container_id>

#Remove an image
crictl rmi <image_id>

#Run a pod sandbox from a pod config file
crictl runp <pod-config.json>

#Note: crictl has no push/tag/save/load subcommands; see the ctr examples below for image export, import and tagging
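Image export, import and tagging are handled by ctr (containerd's own CLI) against the k8s.io namespace used by the CRI plugin. A sketch; the image names are examples only:

#Export an image that crictl/kubelet can see to a tar file
ctr -n k8s.io images export pause.tar registry.aliyuncs.com/google_containers/pause:3.8

#Import a previously exported tar file
ctr -n k8s.io images import pause.tar

#Re-tag an image, e.g. for pushing to the private registry
ctr -n k8s.io images tag registry.aliyuncs.com/google_containers/pause:3.8 192.168.1.9:5000/pause:3.8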

3.7 Install runc

The runc included in the package above has too many system dependencies, so it is recommended to download and install it separately.

By default, running the bundled runc fails with: runc: symbol lookup error: runc: undefined symbol: seccomp_notify_respond

[root@master1 ~]# runc
runc: symbol lookup error: runc: undefined symbol: seccomp_notify_respond

#Download the runc binary
[root@master1 ~]# wget https://github.com/opencontainers/runc/releases/download/v1.1.4/runc.amd64


#Replace the runc shipped in the original package
[root@master1 ~]# whereis runc
[root@master1 ~]# mv -f runc.amd64 /usr/local/sbin/runc
[root@master1 ~]# chmod +x /usr/local/sbin/runc

[root@master1 ~]# runc -v
runc version 1.1.4
commit: v1.1.4-0-g5fd4c4d1
spec: 1.0.2-dev
go: go1.17.10
libseccomp: 2.5.4

#Restart containerd
[root@master1 ~]# systemctl restart containerd

3.8 Distribute all files to the other nodes and start containerd

[root@master1 ~]# for i in {master2,node1}
do
scp cri-containerd-cni-1.7.1-linux-amd64.tar.gz $i:/root/
ssh $i "tar -xf cri-containerd-cni-1.7.1-linux-amd64.tar.gz -C /"
scp /etc/modules-load.d/containerd.conf $i:/etc/modules-load.d/containerd.conf
scp /etc/sysctl.d/99-kubernetes-cri.conf $i:/etc/sysctl.d/99-kubernetes-cri.conf
scp -r /etc/containerd/ $i:/etc/
scp -p /usr/local/sbin/runc $i:/usr/local/sbin/runc
ssh $i "systemctl enable --now containerd"
done

4 Deploy the etcd cluster

Etcd is a distributed key-value store that Kubernetes uses for all of its data, so an etcd database has to be prepared first. To avoid a single point of failure, etcd should be deployed as a cluster: 3 members tolerate the failure of 1 machine, while 5 members tolerate the failure of 2.
To save machines, etcd is co-located with the k8s nodes here; it can also run on machines outside the k8s cluster, as long as the apiserver can reach it.

4.1 Prepare the cfssl certificate tooling

#cfssl is a PKI/TLS toolkit written in Go and open-sourced by CloudFlare. The main programs are:
- cfssl, the CFSSL command-line tool.
- cfssljson, which takes the JSON output of cfssl and writes the certificates, keys, CSRs and bundles to files.

[root@master1 ~]# wget "https://github.com/cloudflare/cfssl/releases/download/v1.6.3/cfssl_1.6.3_linux_amd64" -O /usr/local/bin/cfssl
[root@master1 ~]# wget "https://github.com/cloudflare/cfssl/releases/download/v1.6.3/cfssljson_1.6.3_linux_amd64" -O /usr/local/bin/cfssljson
[root@master1 ~]# chmod +x /usr/local/bin/cfssl*

[root@master1 ~]# cfssl version
Version: 1.6.3
Runtime: go1.18

4.2 Create the CA certificate

4.2.1 Write the CA certificate signing request file

CA certificate types:

https://github.com/kubernetes-incubator/apiserver-builder/blob/master/docs/concepts/auth.md

#Create the working directory for self-signed certificates
[root@master1 ~]# mkdir -p ~/pki/{etcd,k8s}
[root@master1 ~]# cd ~/pki/etcd/

# cfssl print-defaults csr > ca-csr.json generates a template to start from
#Write the CA certificate signing request
[root@master1 etcd]# cat > etcd-ca-csr.json  << EOF 
{
  "CN": "etcd",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "etcd",
      "OU": "Etcd Security"
    }
  ],
  "ca": {
    "expiry": "876000h"
  }
}
EOF
Brief notes on the fields in the CSR file:
"CN": "etcd": sets the certificate Common Name (CN) to "etcd".
"key": {"algo": "rsa", "size": 2048}: use the RSA algorithm with a 2048-bit key.
"names": [{"C": "CN", "ST": "Beijing", "L": "Beijing", "O": "etcd", "OU": "Etcd Security"}]: sets the certificate subject, with the following fields:
"C": "CN": country, CN (China).
"ST": "Beijing": state or province, Beijing.
"L": "Beijing": locality, Beijing.
"O": "etcd": organization, etcd.
"OU": "Etcd Security": organizational unit, Etcd Security.
"ca": {"expiry": "876000h"}: sets the CA certificate validity to 876000 hours, roughly 100 years.

4.2.2 Generate the CA certificate

#Create the directories for storing the etcd certificates, binaries and configuration
[root@master1 etcd]# mkdir -p /opt/etcd/{ssl,bin,cfg}

[root@master1 etcd]# cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare /opt/etcd/ssl/etcd-ca
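Optionally inspect the generated CA to confirm the subject and the 100-year validity (a quick verification sketch):

[root@master1 etcd]# openssl x509 -in /opt/etcd/ssl/etcd-ca.pem -noout -subject -dates
#or: cfssl certinfo -cert /opt/etcd/ssl/etcd-ca.pem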

4.2.3 Configure the CA signing policy

#cfssl print-defaults config > ca-config.json generates a template file that can be adjusted

[root@master1 etcd]# cat > ca-config.json << EOF 
{
  "signing": {
    "default": {
      "expiry": "876000h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "876000h"
      }
    }
  }
}
EOF
"signing":此字段包含 CA 的签名过程的配置。
"default":此字段包含签名过程的默认配置。
"expiry": "876000h":将 CA 颁发的证书的默认过期时间设置为 876000 小时,相当于 100 年。
"profiles":此字段包含不同签名配置文件的配置。
"kubernetes":这是一个名为 “kubernetes” 的自定义配置文件。
"usages": ["signing", "key encipherment", "server auth", "client auth"]:设置使用此配置文件颁发的证书的允许用途。允许的用途包括签名、密钥加密、服务器身份验证和客户端身份验证。
"expiry": "876000h":将使用此配置文件颁发的证书的过期时间设置为 876000 小时,相当于 100 年

4.2.4 Write the etcd certificate signing request file

[root@master1 etcd]# cat > etcd-csr.json << EOF 
{
  "CN": "etcd",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "etcd",
      "OU": "Etcd Security"
    }
  ]
}
EOF

4.2.5 Generate the etcd certificate

[root@master1 etcd]# cfssl gencert \
 -ca=/opt/etcd/ssl/etcd-ca.pem \
 -ca-key=/opt/etcd/ssl/etcd-ca-key.pem \
 -config=ca-config.json \
 -hostname=127.0.0.1,192.168.1.21,192.168.1.22,192.168.1.23 \
 -profile=kubernetes \
 etcd-csr.json | cfssljson -bare /opt/etcd/ssl/etcd

The IPs in the -hostname option above are the internal communication addresses of all etcd cluster members; none may be missing. To make future scale-out easier, you can list a few spare IPs in advance.
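The SAN list of the issued certificate can be checked to confirm every etcd node IP made it in (verification sketch):

[root@master1 etcd]# openssl x509 -in /opt/etcd/ssl/etcd.pem -noout -text | grep -A1 'Subject Alternative Name'
#The output should list 127.0.0.1, 192.168.1.21, 192.168.1.22 and 192.168.1.23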

4.3 Download the etcd package and deploy it

#Download the package
[root@master1 ~]# wget https://github.com/etcd-io/etcd/releases/download/v3.5.8/etcd-v3.5.8-linux-amd64.tar.gz

#Extract the etcd package
[root@master1 ~]# tar -xf etcd-v3.5.8-linux-amd64.tar.gz
[root@master1 ~]# cd etcd-v3.5.8-linux-amd64/
[root@master1 etcd-v3.5.8-linux-amd64]# cp -p etcd etcdctl /opt/etcd/bin/

#Check the version
[root@master1 ~]# /opt/etcd/bin/etcdctl version
etcdctl version: 3.5.8
API version: 3.5

4.3.1 Create the etcd configuration file

[root@master1 ~]# cat > /opt/etcd/cfg/etcd.config.yml << EOF 
name: 'master1'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.1.21:2380'
listen-client-urls: 'https://192.168.1.21:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.1.21:2380'
advertise-client-urls: 'https://192.168.1.21:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'master1=https://192.168.1.21:2380,master2=https://192.168.1.22:2380,node1=https://192.168.1.23:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/opt/etcd/ssl/etcd.pem'
  key-file: '/opt/etcd/ssl/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/opt/etcd/ssl/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/opt/etcd/ssl/etcd.pem'
  key-file: '/opt/etcd/ssl/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/opt/etcd/ssl/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF

4.3.2 Manage etcd with systemd

[root@master1 ~]# cat > /usr/lib/systemd/system/etcd.service << EOF

[Unit]
Description=Etcd Service
Documentation=https://coreos.com/etcd/docs/latest/
After=network.target

[Service]
Type=notify
ExecStart=/opt/etcd/bin/etcd --config-file=/opt/etcd/cfg/etcd.config.yml
Restart=on-failure
RestartSec=10
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Alias=etcd3.service

EOF

4.3.3 Copy the files from master1 to the other nodes

[root@master1 ~]# for i in {master2,node1}
do
scp -r /opt/etcd/ $i:/opt/
scp /usr/lib/systemd/system/etcd.service $i:/usr/lib/systemd/system/etcd.service
done

4.3.4 On master2 and node1, change the node name and local server IPs in etcd.config.yml

[root@master2 ~]# cat > /opt/etcd/cfg/etcd.config.yml << EOF 
name: 'master2'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.1.22:2380'
listen-client-urls: 'https://192.168.1.22:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.1.22:2380'
advertise-client-urls: 'https://192.168.1.22:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'master1=https://192.168.1.21:2380,master2=https://192.168.1.22:2380,node1=https://192.168.1.23:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/opt/etcd/ssl/etcd.pem'
  key-file: '/opt/etcd/ssl/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/opt/etcd/ssl/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/opt/etcd/ssl/etcd.pem'
  key-file: '/opt/etcd/ssl/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/opt/etcd/ssl/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF
[root@node1 ~]# cat > /opt/etcd/cfg/etcd.config.yml << EOF 
name: 'node1'
data-dir: /var/lib/etcd
wal-dir: /var/lib/etcd/wal
snapshot-count: 5000
heartbeat-interval: 100
election-timeout: 1000
quota-backend-bytes: 0
listen-peer-urls: 'https://192.168.1.23:2380'
listen-client-urls: 'https://192.168.1.23:2379,http://127.0.0.1:2379'
max-snapshots: 3
max-wals: 5
cors:
initial-advertise-peer-urls: 'https://192.168.1.23:2380'
advertise-client-urls: 'https://192.168.1.23:2379'
discovery:
discovery-fallback: 'proxy'
discovery-proxy:
discovery-srv:
initial-cluster: 'master1=https://192.168.1.21:2380,master2=https://192.168.1.22:2380,node1=https://192.168.1.23:2380'
initial-cluster-token: 'etcd-k8s-cluster'
initial-cluster-state: 'new'
strict-reconfig-check: false
enable-v2: true
enable-pprof: true
proxy: 'off'
proxy-failure-wait: 5000
proxy-refresh-interval: 30000
proxy-dial-timeout: 1000
proxy-write-timeout: 5000
proxy-read-timeout: 0
client-transport-security:
  cert-file: '/opt/etcd/ssl/etcd.pem'
  key-file: '/opt/etcd/ssl/etcd-key.pem'
  client-cert-auth: true
  trusted-ca-file: '/opt/etcd/ssl/etcd-ca.pem'
  auto-tls: true
peer-transport-security:
  cert-file: '/opt/etcd/ssl/etcd.pem'
  key-file: '/opt/etcd/ssl/etcd-key.pem'
  peer-client-cert-auth: true
  trusted-ca-file: '/opt/etcd/ssl/etcd-ca.pem'
  auto-tls: true
debug: false
log-package-levels:
log-outputs: [default]
force-new-cluster: false
EOF

4.3.5 Start etcd and enable it at boot

#Start etcd on all three nodes at the same time (or run the loop shown below from master1)
systemctl daemon-reload
systemctl enable --now etcd
systemctl status etcd
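Since passwordless SSH is already in place, the three members can also be started in one pass from master1 (a convenience sketch; the cluster only becomes healthy once a quorum of members is up):

[root@master1 ~]# for i in {master1,master2,node1}
do
ssh $i "systemctl daemon-reload && systemctl enable --now etcd"
done

#Confirm the service is active on every node
[root@master1 ~]# for i in {master1,master2,node1}
do
ssh $i "systemctl is-active etcd"
done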

4.3.6 Check the cluster status

# Check the etcd cluster health:
[root@master1 ~]# ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/etcd-ca.pem --cert=/opt/etcd/ssl/etcd.pem --key=/opt/etcd/ssl/etcd-key.pem --endpoints="https://192.168.1.21:2379,https://192.168.1.22:2379,https://192.168.1.23:2379" endpoint health --write-out=table
+---------------------------+--------+------------+-------+
|         ENDPOINT          | HEALTH |    TOOK    | ERROR |
+---------------------------+--------+------------+-------+
| https://192.168.1.21:2379 |   true | 7.471012ms |       |
| https://192.168.1.22:2379 |   true | 8.072207ms |       |
| https://192.168.1.23:2379 |   true | 8.248099ms |       |
+---------------------------+--------+------------+-------+

# Check etcd performance:
[root@master1 ~]# ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/etcd-ca.pem --cert=/opt/etcd/ssl/etcd.pem --key=/opt/etcd/ssl/etcd-key.pem --endpoints="https://192.168.1.21:2379,https://192.168.1.22:2379,https://192.168.1.23:2379" check perf --write-out=table
PASS: Throughput is 150 writes/s
PASS: Slowest request took 0.048788s
PASS: Stddev is 0.001209s
PASS

# List the etcd cluster members
[root@master1 ~]# ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/etcd-ca.pem --cert=/opt/etcd/ssl/etcd.pem --key=/opt/etcd/ssl/etcd-key.pem --endpoints="https://192.168.1.21:2379,https://192.168.1.22:2379,https://192.168.1.23:2379" member list --write-out=table
+------------------+---------+--------+---------------------------+---------------------------+------------+
|        ID        | STATUS  |  NAME  |        PEER ADDRS         |       CLIENT ADDRS        | IS LEARNER |
+------------------+---------+--------+---------------------------+---------------------------+------------+
| 22cb69b2fd1bb417 | started | etcd-1 | https://192.168.1.21:2380 | https://192.168.1.21:2379 |      false |
| 3c3bd4fd7d7e553e | started | etcd-2 | https://192.168.1.22:2380 | https://192.168.1.22:2379 |      false |
| 86ab7347ef3517f6 | started | etcd-3 | https://192.168.1.23:2380 | https://192.168.1.23:2379 |      false |
+------------------+---------+--------+---------------------------+---------------------------+------------+

# Check the etcd endpoint status (leader, DB size, raft index):
[root@master1 ~]# ETCDCTL_API=3 /opt/etcd/bin/etcdctl --cacert=/opt/etcd/ssl/etcd-ca.pem --cert=/opt/etcd/ssl/etcd.pem --key=/opt/etcd/ssl/etcd-key.pem --endpoints="https://192.168.1.21:2379,https://192.168.1.22:2379,https://192.168.1.23:2379" endpoint status --write-out=table
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
|         ENDPOINT          |        ID        | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://192.168.1.21:2379 | 22cb69b2fd1bb417 |   3.5.8 |   35 MB |      true |      false |         2 |      15225 |              15225 |        |
| https://192.168.1.22:2379 | 3c3bd4fd7d7e553e |   3.5.8 |   35 MB |     false |      false |         2 |      15226 |              15226 |        |
| https://192.168.1.23:2379 | 86ab7347ef3517f6 |   3.5.8 |   35 MB |     false |      false |         2 |      15227 |              15227 |        |
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+

5 Deploy Nginx + Keepalived as a high-availability load balancer

Nginx is a mainstream web server and reverse proxy; here it is used as a layer-4 (stream) load balancer in front of the apiservers.
Keepalived is a mainstream high-availability tool that provides active/standby failover via a VIP. It decides whether to fail over (move the VIP) based on the state of Nginx: if the Nginx master node goes down, the VIP is automatically bound to the Nginx backup node, so the VIP stays reachable and Nginx remains highly available.
If you are on a public cloud, keepalived is generally not supported; use the provider's load balancer product instead to balance traffic across the Master kube-apiservers.
The following is done on master1 and master2.

5.1 Install and configure nginx

5.1.1 Download the source package

[root@master1 ~]# wget http://nginx.org/download/nginx-1.23.3.tar.gz

5.1.2 Extract and compile

#Install the build dependencies
[root@master1 ~]# yum -y install gcc gcc-c++ make pcre pcre-devel zlib zlib-devel openssl openssl-devel

[root@master1 ~]# tar -xf nginx-1.23.3.tar.gz

#Compile and install
[root@master1 ~]# cd nginx-1.23.3/
[root@master1 nginx-1.23.3]# ./configure --prefix=/usr/local/nginx --with-stream --without-http --without-http_uwsgi_module --without-http_scgi_module --without-http_fastcgi_module
[root@master1 nginx-1.23.3]# make && make install

--prefix=PATH: set the installation directory.
--with-xxx: enable a build option, e.g. --with-openssl builds against the OpenSSL library.
--without-xxx: disable a module, e.g. --without-http_rewrite_module disables the HTTP rewrite module.
--enable-xxx: enable a feature, e.g. --enable-debug enables debug mode.
--disable-xxx: disable a feature, e.g. --disable-shared disables shared libraries.
Run ./configure --help for the full list of options.

5.1.3 Modify the nginx configuration file

[root@master1 nginx-1.23.3]# cd /usr/local/nginx/conf/
[root@master1 conf]# grep -Ev "^#|^$" nginx.conf
worker_processes  1;
events {
    worker_connections  1024;
}
stream {
    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log  /usr/local/nginx/logs/k8s-access.log  main;  
  
    upstream k8s-apiserver {
        server 192.168.1.21:6443  max_fails=3 fail_timeout=30s;
	    server 192.168.1.22:6443  max_fails=3 fail_timeout=30s; 
    }
    server {
        listen 16443;
        proxy_connect_timeout 1s;
        proxy_pass k8s-apiserver;    
    }
}

#Add the stream block and remove the http block (http was excluded at compile time with --without-http); nothing else needs to change

#In the Nginx configuration, the log_format directive defines the log format. main is the custom format name; $remote_addr, $upstream_addr, $time_local, $status and $upstream_bytes_sent are its variables: $remote_addr is the client IP, $upstream_addr the backend server address, $time_local the access time, $status the status code, and $upstream_bytes_sent the number of bytes sent to the upstream connection

5.1.4 Create the nginx service unit

[root@master1 conf]# cat > /usr/lib/systemd/system/nginx.service << 'EOF'
[Unit]
Description=nginx - high performance web server
After=network.target remote-fs.target nss-lookup.target

[Service]
Type=forking
ExecStart=/usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf
ExecReload=/usr/local/nginx/sbin/nginx -s reload
ExecStop=/usr/local/nginx/sbin/nginx -s stop
PrivateTmp=true

[Install]
WantedBy=multi-user.target
EOF

5.1.5 Sync the configured nginx from master1 to master2

[root@master1 ~]# scp -r /usr/local/nginx/ master2:/usr/local/
[root@master1 ~]# scp /usr/lib/systemd/system/nginx.service master2:/usr/lib/systemd/system/

5.2 Install and configure keepalived

[root@master1 ~]# yum -y install keepalived.x86_64

[root@master1 ~]# grep -Ev "^#|^$" /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script check_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5 
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33 
    mcast_src_ip 192.168.1.21
    virtual_router_id 51
    priority 100
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.25
    }
    track_script {
       check_apiserver 
	}
}

#     priority 100    			# priority; set the backup server to 90, i.e. lower than the master
#     advert_int 1    			# VRRP advertisement (heartbeat) interval, default 1 second

vrrp_script: the script that checks the nginx state (keepalived decides whether to fail over based on it)
virtual_ipaddress: the virtual IP (VIP)

5.2.1 Write the health-check script

[root@master1 ~]# cat > /etc/keepalived/check_apiserver.sh << 'EOF'
#!/bin/bash

err=0
for k in $(seq 1 3)
do
    check_code=$(pgrep nginx)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 1
        continue
    else
        err=0
        break
    fi
done

if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
EOF

#Make the script executable
[root@master1 ~]# chmod +x /etc/keepalived/check_apiserver.sh

5.2.2 Configure keepalived on master2

[root@master2 ~]# yum -y install keepalived.x86_64
[root@master2 ~]# scp master1:/etc/keepalived/{keepalived.conf,check_apiserver.sh} /etc/keepalived/

#Modify the keepalived configuration
[root@master2 ~]# grep -Ev "^#|^$" /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script check_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    interval 5 
    weight -5
    fall 2
    rise 1
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens33 
    mcast_src_ip 192.168.1.22
    virtual_router_id 51
    priority 90
    nopreempt
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.25
    }
    track_script {
       check_apiserver 
	}
}

5.3 Start the services and enable them at boot

systemctl daemon-reload
systemctl enable --now nginx keepalived
systemctl status nginx keepalived

5.4 Check the keepalived status

Check the VIP
[root@master1 ~]# ip addr | grep -w 192.168.1.25
    inet 192.168.1.25/32 scope global ens33


Nginx + keepalived failover test:
Stop Nginx on the master node and confirm the VIP moves to the backup server (the commands are sketched below).
On the Nginx master, run pkill nginx;
on the Nginx backup, run ip addr and confirm the VIP is now bound there.
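Written out as commands, the test looks like this (a sketch; ens33 is the interface configured earlier):

#On master1 (the current VIP holder): stop nginx
[root@master1 ~]# pkill nginx

#On master2: the VIP should appear within a few seconds
[root@master2 ~]# ip addr show ens33 | grep -w 192.168.1.25

#Restore master1 afterwards (the health-check script stopped keepalived, so start both services again)
[root@master1 ~]# systemctl start nginx keepalived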

6 Deploy the Kubernetes master cluster

6.1 Download the k8s binaries and set up the installation directories

#Create the k8s installation directories
[root@master1 ~]# mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs,manifests,kubeconfig}

#Download page
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.27.md

#Download the package; you can also click kubernetes-server-linux-amd64.tar.gz directly on that page
[root@master1 ~]# wget https://dl.k8s.io/v1.27.1/kubernetes-server-linux-amd64.tar.gz

#Extract the k8s server binaries
[root@master1 ~]# tar -xf kubernetes-server-linux-amd64.tar.gz  --strip-components=3 -C /opt/kubernetes/bin/ kubernetes/server/bin/kube{let,ctl,-apiserver,-controller-manager,-scheduler,-proxy}

#--strip-components=3 tells tar to drop the first 3 path components when extracting. The command extracts the following binaries from the kubernetes-server-linux-amd64.tar.gz archive:

[root@master1 ~]# ll /opt/kubernetes/bin/
total 473028
-rwxr-xr-x 1 root root 123904000 May 17 22:22 kube-apiserver
-rwxr-xr-x 1 root root 113246208 May 17 22:22 kube-controller-manager
-rwxr-xr-x 1 root root  45039616 May 17 22:22 kubectl
-rwxr-xr-x 1 root root 114263032 May 17 22:22 kubelet
-rwxr-xr-x 1 root root  41197568 May 17 22:22 kube-proxy
-rwxr-xr-x 1 root root  46727168 May 17 22:22 kube-scheduler

[root@master1 ~]# mv /opt/kubernetes/bin/kubectl /usr/local/bin
#Check the version
[root@master1 ~]# /opt/kubernetes/bin/kubelet --version
Kubernetes v1.27.1

6.2 Deploy kube-apiserver

6.2.1 Write the CA certificate signing request file

[root@master1 ~]# cd ~/pki/k8s/
[root@master1 k8s]# cat > ca-csr.json   << EOF 
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "Kubernetes",
      "OU": "Kubernetes-manual"
    }
  ],
  "ca": {
    "expiry": "876000h"
  }
}
EOF

6.2.2 Self-sign the CA certificate

#Generate the root certificate
[root@master1 k8s]# cfssl gencert -initca ca-csr.json | cfssljson -bare /opt/kubernetes/ssl/ca

6.2.3 Configure the CA signing policy

[root@master1 k8s]# cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "876000h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
            "signing",
            "key encipherment",
            "server auth",
            "client auth"
        ],
        "expiry": "876000h"
      }
    }
  }
}
EOF

6.2.4 Write the apiserver certificate signing request file

[root@master1 k8s]# cat > apiserver-csr.json << EOF
{
  "CN": "kube-apiserver",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "Kubernetes",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF

6.2.5 Generate the apiserver certificate

#Generate the apiserver certificate
[root@master1 k8s]# cfssl gencert   \
-ca=/opt/kubernetes/ssl/ca.pem   \
-ca-key=/opt/kubernetes/ssl/ca-key.pem   \
-config=ca-config.json   \
-hostname=10.96.0.1,127.0.0.1,kubernetes,kubernetes.default,kubernetes.default.svc,kubernetes.default.svc.cluster,kubernetes.default.svc.cluster.local,192.168.1.21,192.168.1.22,192.168.1.23,192.168.1.25  \
-profile=kubernetes   apiserver-csr.json | cfssljson -bare /opt/kubernetes/ssl/apiserver


#The IPs in -hostname above must include every Master/LB/VIP IP; none may be missing. To make future scale-out easier, you can list a few spare IPs in advance.
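As with the etcd certificate, the SAN list can be verified before moving on (sketch):

[root@master1 k8s]# openssl x509 -in /opt/kubernetes/ssl/apiserver.pem -noout -text | grep -A2 'Subject Alternative Name'
#The output should include 10.96.0.1, 127.0.0.1, the VIP 192.168.1.25, all node IPs and the kubernetes.default.* DNS names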

6.2.6 Generate the apiserver aggregation-layer (front-proxy) certificates

[root@master1 k8s]# cat > front-proxy-ca-csr.json  << EOF
{
  "CN": "kubernetes",
  "key": {
     "algo": "rsa",
     "size": 2048
  },
  "ca": {
    "expiry": "876000h"
  }
}
EOF
[root@master1 k8s]# cfssl gencert -initca front-proxy-ca-csr.json | cfssljson -bare /opt/kubernetes/ssl/front-proxy-ca
[root@master1 k8s]# cat > front-proxy-client-csr.json  << 'EOF'
{
  "CN": "front-proxy-client",
  "key": {
     "algo": "rsa",
     "size": 2048
  }
}
EOF
[root@master1 k8s]# cfssl gencert  \
-ca=/opt/kubernetes/ssl/front-proxy-ca.pem   \
-ca-key=/opt/kubernetes/ssl/front-proxy-ca-key.pem   \
-config=ca-config.json   \
-profile=kubernetes \
front-proxy-client-csr.json | cfssljson -bare /opt/kubernetes/ssl/front-proxy-client

6.2.7 Create the ServiceAccount key pair

[root@master1 k8s]# openssl genrsa -out /opt/kubernetes/ssl/sa.key 2048
[root@master1 k8s]# openssl rsa -in /opt/kubernetes/ssl/sa.key -pubout -out /opt/kubernetes/ssl/sa.pub

6.2.8 Create the kube-apiserver configuration file

[root@master1 k8s]# cat > /opt/kubernetes/cfg/kube-apiserver.conf << EOF
KUBE_APISERVER_OPTS="--v=2  \\
 --allow-privileged=true  \\
 --bind-address=0.0.0.0  \\
 --secure-port=6443  \\
 --advertise-address=192.168.1.21 \\
 --service-cluster-ip-range=10.96.0.0/16  \\
 --service-node-port-range=30000-32767  \\
 --etcd-servers=https://192.168.1.21:2379,https://192.168.1.22:2379,https://192.168.1.23:2379 \\
 --etcd-cafile=/opt/etcd/ssl/etcd-ca.pem  \\
 --etcd-certfile=/opt/etcd/ssl/etcd.pem  \\
 --etcd-keyfile=/opt/etcd/ssl/etcd-key.pem  \\
 --client-ca-file=/opt/kubernetes/ssl/ca.pem  \\
 --tls-cert-file=/opt/kubernetes/ssl/apiserver.pem  \\
 --tls-private-key-file=/opt/kubernetes/ssl/apiserver-key.pem  \\
 --kubelet-client-certificate=/opt/kubernetes/ssl/apiserver.pem  \\
 --kubelet-client-key=/opt/kubernetes/ssl/apiserver-key.pem  \\
 --service-account-key-file=/opt/kubernetes/ssl/sa.pub  \\
 --service-account-signing-key-file=/opt/kubernetes/ssl/sa.key  \\
 --service-account-issuer=https://kubernetes.default.svc.cluster.local \\
 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \\
 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \\
 --authorization-mode=Node,RBAC  \\
 --enable-bootstrap-token-auth=true  \\
 --requestheader-client-ca-file=/opt/kubernetes/ssl/front-proxy-ca.pem  \\
 --proxy-client-cert-file=/opt/kubernetes/ssl/front-proxy-client.pem  \\
 --proxy-client-key-file=/opt/kubernetes/ssl/front-proxy-client-key.pem  \\
 --requestheader-allowed-names=aggregator  \\
 --requestheader-group-headers=X-Remote-Group  \\
 --requestheader-extra-headers-prefix=X-Remote-Extra-  \\
 --requestheader-username-headers=X-Remote-User \\
 --enable-aggregator-routing=true"
 # --feature-gates=IPv6DualStack=true
 # --token-auth-file=/etc/kubernetes/token.csv
EOF
Parameter notes:
--v=2: set the kube-apiserver log level to 2.
--allow-privileged=true: allow privileged containers in the cluster.
--bind-address=0.0.0.0: bind kube-apiserver to 0.0.0.0, i.e. listen on all network interfaces.
--secure-port=6443: the port kube-apiserver listens on for secure connections.
--advertise-address=192.168.1.21: the IP address kube-apiserver advertises to the other components in the cluster (this node's own address).
--service-cluster-ip-range=10.96.0.0/16: the IP range for cluster-internal services.
--service-node-port-range=30000-32767: the port range for NodePort services.
--etcd-servers=https://192.168.1.21:2379...: the etcd server addresses.
--etcd-cafile=/opt/etcd/ssl/etcd-ca.pem: the etcd CA certificate.
--etcd-certfile=/opt/etcd/ssl/etcd.pem: the etcd client certificate.
--etcd-keyfile=/opt/etcd/ssl/etcd-key.pem: the etcd client key.
--client-ca-file=/opt/kubernetes/ssl/ca.pem: the client CA certificate.
--tls-cert-file=/opt/kubernetes/ssl/apiserver.pem: the TLS certificate.
--tls-private-key-file=/opt/kubernetes/ssl/apiserver-key.pem: the TLS private key.
--kubelet-client-certificate=/opt/kubernetes/ssl/apiserver.pem: the client certificate used when talking to kubelets.
--kubelet-client-key=/opt/kubernetes/ssl/apiserver-key.pem: the client key used when talking to kubelets.
--service-account-key-file=/opt/kubernetes/ssl/sa.pub: the service account public key.
--service-account-signing-key-file=/opt/kubernetes/ssl/sa.key: the service account signing key.
--service-account-issuer=https://kubernetes.default.svc.cluster.local: the service account token issuer URL.
--kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname: the preferred order of kubelet address types.
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota: enable the listed admission plugins, which run checks and mutations when resources are created or modified, for cluster security and stability.
--authorization-mode=Node,RBAC: enable the Node and RBAC authorization modes, which check whether users and components are allowed to access resources.
--enable-bootstrap-token-auth=true: enable bootstrap token authentication so new nodes can join the cluster with a bootstrap token.
--requestheader-client-ca-file=/opt/kubernetes/ssl/front-proxy-ca.pem: the CA certificate for the front proxy (aggregation layer).
--proxy-client-cert-file=/opt/kubernetes/ssl/front-proxy-client.pem: the front-proxy client certificate.
--proxy-client-key-file=/opt/kubernetes/ssl/front-proxy-client-key.pem: the front-proxy client key.
--requestheader-allowed-names=aggregator: only the client named aggregator may access kube-apiserver through the front proxy.
--requestheader-group-headers=X-Remote-Group: the HTTP header the front proxy uses to pass group information.
--requestheader-extra-headers-prefix=X-Remote-Extra-: the HTTP header prefix the front proxy uses to pass extra user information.
--requestheader-username-headers=X-Remote-User: the HTTP header the front proxy uses to pass the user name.
--enable-aggregator-routing=true: enable aggregator routing, so kube-apiserver can route requests to extension API servers.

6.2.9 Manage kube-apiserver with systemd

[root@master1 k8s]# cat > /usr/lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure
RestartSec=10s
LimitNOFILE=65535


[Install]
WantedBy=multi-user.target
EOF

6.2.10 Start kube-apiserver and enable it at boot

[root@master1 k8s]# systemctl daemon-reload
[root@master1 k8s]# systemctl enable --now kube-apiserver
[root@master1 k8s]# systemctl status kube-apiserver

6.2.11 Distribute the files to the other master nodes and adjust the configuration

#Copy the files to master2; if the cluster has more master nodes, repeat for each of them
[root@master1 k8s]# scp -r /opt/kubernetes master2:/opt/
[root@master1 k8s]# scp -p /usr/local/bin/kubectl master2:/usr/local/bin/kubectl
[root@master1 k8s]# scp /usr/lib/systemd/system/kube-apiserver.service master2:/usr/lib/systemd/system/kube-apiserver.service

##Modify the IP address in master2's kube-apiserver.conf accordingly

[root@master2 ~]# cat > /opt/kubernetes/cfg/kube-apiserver.conf << EOF
KUBE_APISERVER_OPTS="--v=2  \\
 --allow-privileged=true  \\
 --bind-address=0.0.0.0  \\
 --secure-port=6443  \\
 --advertise-address=192.168.1.22 \\
 --service-cluster-ip-range=10.96.0.0/16  \\
 --service-node-port-range=30000-32767  \\
 --etcd-servers=https://192.168.1.21:2379,https://192.168.1.22:2379,https://192.168.1.23:2379 \\
 --etcd-cafile=/opt/etcd/ssl/etcd-ca.pem  \\
 --etcd-certfile=/opt/etcd/ssl/etcd.pem  \\
 --etcd-keyfile=/opt/etcd/ssl/etcd-key.pem  \\
 --client-ca-file=/opt/kubernetes/ssl/ca.pem  \\
 --tls-cert-file=/opt/kubernetes/ssl/apiserver.pem  \\
 --tls-private-key-file=/opt/kubernetes/ssl/apiserver-key.pem  \\
 --kubelet-client-certificate=/opt/kubernetes/ssl/apiserver.pem  \\
 --kubelet-client-key=/opt/kubernetes/ssl/apiserver-key.pem  \\
 --service-account-key-file=/opt/kubernetes/ssl/sa.pub  \\
 --service-account-signing-key-file=/opt/kubernetes/ssl/sa.key  \\
 --service-account-issuer=https://kubernetes.default.svc.cluster.local \\
 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname  \\
 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota  \\
 --authorization-mode=Node,RBAC  \\
 --enable-bootstrap-token-auth=true  \\
 --requestheader-client-ca-file=/opt/kubernetes/ssl/front-proxy-ca.pem  \\
 --proxy-client-cert-file=/opt/kubernetes/ssl/front-proxy-client.pem  \\
 --proxy-client-key-file=/opt/kubernetes/ssl/front-proxy-client-key.pem  \\
 --requestheader-allowed-names=aggregator  \\
 --requestheader-group-headers=X-Remote-Group  \\
 --requestheader-extra-headers-prefix=X-Remote-Extra-  \\
 --requestheader-username-headers=X-Remote-User \\
 --enable-aggregator-routing=true"
 # --feature-gates=IPv6DualStack=true
 # --token-auth-file=/etc/kubernetes/token.csv
EOF

6.2.12 Start the service on master2

[root@master2 ~]# systemctl daemon-reload
[root@master2 ~]# systemctl enable --now kube-apiserver
[root@master2 ~]# systemctl status kube-apiserver

#Test
curl -k https://192.168.1.21:6443/healthz
ok

curl --insecure https://192.168.1.21:6443/
curl --insecure https://192.168.1.22:6443/
curl --insecure https://192.168.1.25:16443/

6.3 Deploy kube-controller-manager

6.3.1 Create the kube-controller-manager certificate signing request file

[root@master1 k8s]# cat > manager-csr.json << EOF 
{
  "CN": "system:kube-controller-manager",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-controller-manager",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF

6.3.2 Generate the kube-controller-manager certificate

[root@master1 k8s]# cfssl gencert \
   -ca=/opt/kubernetes/ssl/ca.pem \
   -ca-key=/opt/kubernetes/ssl/ca-key.pem \
   -config=ca-config.json \
   -profile=kubernetes \
   manager-csr.json | cfssljson -bare /opt/kubernetes/ssl/controller-manager

6.3.3 Create the controller-manager.kubeconfig file

#With the nginx high-availability setup above, the --server address is https://192.168.1.25:16443

#Set the cluster entry
[root@master1 k8s]# kubectl config set-cluster kubernetes \
     --certificate-authority=/opt/kubernetes/ssl/ca.pem \
     --embed-certs=true \
     --server=https://192.168.1.25:16443 \
     --kubeconfig=/opt/kubernetes/kubeconfig/controller-manager.kubeconfig   

#Set the context entry
[root@master1 k8s]# kubectl config set-context system:kube-controller-manager@kubernetes \
    --cluster=kubernetes \
    --user=system:kube-controller-manager \
    --kubeconfig=/opt/kubernetes/kubeconfig/controller-manager.kubeconfig	 

#Set the user entry
[root@master1 k8s]# kubectl config set-credentials system:kube-controller-manager \
     --client-certificate=/opt/kubernetes/ssl/controller-manager.pem \
     --client-key=/opt/kubernetes/ssl/controller-manager-key.pem \
     --embed-certs=true \
     --kubeconfig=/opt/kubernetes/kubeconfig/controller-manager.kubeconfig

#Select the context and write the kubeconfig file
[root@master1 k8s]# kubectl config use-context system:kube-controller-manager@kubernetes \
     --kubeconfig=/opt/kubernetes/kubeconfig/controller-manager.kubeconfig
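To confirm the kubeconfig was assembled correctly before distributing it (a verification sketch; the embedded certificate data is shown as DATA+OMITTED/REDACTED):

[root@master1 k8s]# kubectl config view --kubeconfig=/opt/kubernetes/kubeconfig/controller-manager.kubeconfig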

6.3.4 Create the kube-controller-manager configuration file

[root@master1 k8s]# cat > /opt/kubernetes/cfg/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS="--v=2 \\
--bind-address=0.0.0.0 \\
--root-ca-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \\
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--service-account-private-key-file=/opt/kubernetes/ssl/sa.key \\
--kubeconfig=/opt/kubernetes/kubeconfig/controller-manager.kubeconfig \\
--leader-elect=true \\
--use-service-account-credentials=true \\
--node-monitor-grace-period=40s \\
--node-monitor-period=5s \\
--controllers=*,bootstrapsigner,tokencleaner \\
--allocate-node-cidrs=true \\
--service-cluster-ip-range=10.96.0.0/16 \\
--cluster-cidr=10.244.0.0/16 \\
--node-cidr-mask-size-ipv4=24 \\
--requestheader-client-ca-file=/opt/kubernetes/ssl/front-proxy-ca.pem"
#--node-cidr-mask-size-ipv6=120       
# --feature-gates=IPv6DualStack=true
EOF

6.3.5 Create the kube-controller-manager service unit

[root@master1 k8s]# cat > /usr/lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS	
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target

EOF

6.3.6 Start the service and enable it at boot

[root@master1 k8s]# systemctl daemon-reload 
[root@master1 k8s]# systemctl enable --now kube-controller-manager
[root@master1 k8s]# systemctl status kube-controller-manager

6.3.7 Sync the files to the other master nodes

[root@master1 k8s]# scp /opt/kubernetes/cfg/kube-controller-manager.conf master2:/opt/kubernetes/cfg/

[root@master1 k8s]# scp /usr/lib/systemd/system/kube-controller-manager.service master2:/usr/lib/systemd/system/

[root@master1 k8s]# scp /opt/kubernetes/kubeconfig/controller-manager.kubeconfig master2:/opt/kubernetes/kubeconfig/

#Start the service and check its status
[root@master2 ~]# systemctl daemon-reload 
[root@master2 ~]# systemctl enable --now kube-controller-manager
[root@master2 ~]# systemctl status kube-controller-manager

6.4 Deploy kube-scheduler

6.4.1 Create the kube-scheduler certificate signing request file

[root@master1 k8s]# cat > scheduler-csr.json << EOF 
{
  "CN": "system:kube-scheduler",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-scheduler",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF

6.4.2 Generate the kube-scheduler certificate

[root@master1 k8s]# cfssl gencert \
   -ca=/opt/kubernetes/ssl/ca.pem \
   -ca-key=/opt/kubernetes/ssl/ca-key.pem \
   -config=ca-config.json \
   -profile=kubernetes \
   scheduler-csr.json | cfssljson -bare /opt/kubernetes/ssl/scheduler

6.4.3 Create the kube-scheduler kubeconfig

[root@master1 k8s]# kubectl config set-cluster kubernetes \
     --certificate-authority=/opt/kubernetes/ssl/ca.pem \
     --embed-certs=true \
     --server=https://192.168.1.25:16443 \
     --kubeconfig=/opt/kubernetes/kubeconfig/scheduler.kubeconfig
  
[root@master1 k8s]# kubectl config set-credentials system:kube-scheduler \
     --client-certificate=/opt/kubernetes/ssl/scheduler.pem \
     --client-key=/opt/kubernetes/ssl/scheduler-key.pem \
     --embed-certs=true \
     --kubeconfig=/opt/kubernetes/kubeconfig/scheduler.kubeconfig
  
[root@master1 k8s]# kubectl config set-context system:kube-scheduler@kubernetes \
     --cluster=kubernetes \
     --user=system:kube-scheduler \
     --kubeconfig=/opt/kubernetes/kubeconfig/scheduler.kubeconfig
  
[root@master1 k8s]# kubectl config use-context system:kube-scheduler@kubernetes \
     --kubeconfig=/opt/kubernetes/kubeconfig/scheduler.kubeconfig

6.4.4 Create the kube-scheduler configuration file

[root@master1 k8s]# cat > /opt/kubernetes/cfg/scheduler.conf << 'EOF'
KUBE_SCHEDULER_OPTS="--v=2 \
--bind-address=0.0.0.0 \
--leader-elect=true \
--kubeconfig=/opt/kubernetes/kubeconfig/scheduler.kubeconfig"
EOF

6.4.5 Create the service unit

[root@master1 k8s]# cat > /usr/lib/systemd/system/kube-scheduler.service << 'EOF'
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
EnvironmentFile=/opt/kubernetes/cfg/scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target
EOF

6.4.6 Start the service

[root@master1 k8s]# systemctl daemon-reload 
[root@master1 k8s]# systemctl enable --now kube-scheduler
[root@master1 k8s]# systemctl status kube-scheduler

6.4.7 Distribute the files to the other master nodes and start the service

[root@master1 k8s]# scp /opt/kubernetes/cfg/scheduler.conf master2:/opt/kubernetes/cfg/

[root@master1 k8s]# scp /usr/lib/systemd/system/kube-scheduler.service master2:/usr/lib/systemd/system/

[root@master1 k8s]# scp /opt/kubernetes/kubeconfig/scheduler.kubeconfig master2:/opt/kubernetes/kubeconfig/

#Start the service
[root@master2 ~]# systemctl daemon-reload 
[root@master2 ~]# systemctl enable --now kube-scheduler
[root@master2 ~]# systemctl status kube-scheduler

6.5 Deploy the kubectl command-line tool

6.5.1 Create the kubectl certificate signing request file

kubectl talks to the apiserver over the HTTPS secure port, and the apiserver authenticates and authorizes the certificate kubectl presents.
kubectl is the cluster management tool and needs to be granted the highest privileges, so an admin certificate with full privileges is created here.

[root@master1 k8s]# cat > admin-csr.json << EOF 
{
  "CN": "admin",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:masters",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF

6.5.2 Generate the certificate

[root@master1 k8s]# cfssl gencert \
   -ca=/opt/kubernetes/ssl/ca.pem \
   -ca-key=/opt/kubernetes/ssl/ca-key.pem \
   -config=ca-config.json \
   -profile=kubernetes \
   admin-csr.json | cfssljson -bare /opt/kubernetes/ssl/admin

6.5.3 Generate the kubectl configuration file

kube.config is kubectl's configuration file; it contains everything needed to access the apiserver: the apiserver address, the CA certificate and kubectl's own client certificate.

[root@master1 k8s]# kubectl config set-cluster kubernetes     \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem     \
  --embed-certs=true     \
  --server=https://192.168.1.25:16443     \
  --kubeconfig=/opt/kubernetes/kubeconfig/admin.kubeconfig
  
[root@master1 k8s]# kubectl config set-credentials kubernetes-admin  \
  --client-certificate=/opt/kubernetes/ssl/admin.pem     \
  --client-key=/opt/kubernetes/ssl/admin-key.pem     \
  --embed-certs=true     \
  --kubeconfig=/opt/kubernetes/kubeconfig/admin.kubeconfig
  
[root@master1 k8s]# kubectl config set-context kubernetes-admin@kubernetes    \
  --cluster=kubernetes     \
  --user=kubernetes-admin     \
  --kubeconfig=/opt/kubernetes/kubeconfig/admin.kubeconfig
  
[root@master1 k8s]# kubectl config use-context kubernetes-admin@kubernetes  --kubeconfig=/opt/kubernetes/kubeconfig/admin.kubeconfig

[root@master1 k8s]# mkdir -p /root/.kube
[root@master1 k8s]# cp /opt/kubernetes/kubeconfig/admin.kubeconfig /root/.kube/config

6.5.4 Copy to the other master nodes

[root@master1 k8s]# scp -r /root/.kube/ master2:/root/

6.5.5 Verify with kubectl

#Check the cluster info
[root@master1 k8s]# kubectl cluster-info
Kubernetes control plane is running at https://192.168.1.25:16443

#Check the component status
[root@master1 k8s]#  kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
scheduler            Healthy   ok                              
controller-manager   Healthy   ok                              
etcd-2               Healthy   {"health":"true","reason":""}   
etcd-1               Healthy   {"health":"true","reason":""}   
etcd-0               Healthy   {"health":"true","reason":""}   

#List resources in all namespaces
[root@master1 k8s]# kubectl get all --all-namespaces
NAMESPACE   NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
default     service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   54m

7 Deploy the Worker Nodes

The following steps are still performed on master1, which serves both as a Master node and as a Worker node.

7.1 Deploy kubelet

7.1.1 Create the bootstrap token file

#/dev/urandom is a device file that produces high-quality random data; tr strips unwanted characters, fold limits the line width, and head takes the first line
#token-id
[root@master1 k8s]# cat /dev/urandom |tr -dc 'a-z0-9' | fold -w 6  | head -n 1
qfxsz4

#token-secret
[root@master1 k8s]# cat /dev/urandom |tr -dc 'a-z0-9' | fold -w 16 | head -n 1
emer1gzx3shzbf2t
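The two values can also be generated into shell variables in one go, so they can be substituted consistently into the Secret and the bootstrap kubeconfig below (a small convenience sketch):

[root@master1 k8s]# TOKEN_ID=$(tr -dc 'a-z0-9' < /dev/urandom | fold -w 6 | head -n 1)
[root@master1 k8s]# TOKEN_SECRET=$(tr -dc 'a-z0-9' < /dev/urandom | fold -w 16 | head -n 1)
[root@master1 k8s]# echo "${TOKEN_ID}.${TOKEN_SECRET}"
#Prints the full bootstrap token in the form <token-id>.<token-secret>, e.g. qfxsz4.emer1gzx3shzbf2t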
#Create the cluster's bootstrap token Secret and the related RBAC role bindings that control access to cluster resources.

[root@master1 k8s]# cat > /opt/kubernetes/cfg/bootstrap.secret.yaml << EOF 
apiVersion: v1
kind: Secret
metadata:
  name: bootstrap-token-qfxsz4
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  description: "The default bootstrap token generated by 'kubelet '."
  token-id: qfxsz4
  token-secret: emer1gzx3shzbf2t
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
  auth-extra-groups:  system:bootstrappers:default-node-token,system:bootstrappers:worker,system:bootstrappers:ingress
 
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubelet-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-bootstrapper
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-autoapprove-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:bootstrappers:default-node-token
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-autoapprove-certificate-rotation
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:nodes
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kube-apiserver
EOF
#Apply the file above; do not skip this step



[root@master1 k8s]# kubectl apply -f /opt/kubernetes/cfg/bootstrap.secret.yaml 
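
Optionally confirm that the token Secret was created (its name carries the token-id chosen above):
[root@master1 k8s]# kubectl -n kube-system get secret bootstrap-token-qfxsz4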

7.1.2 Create the kubeconfig file

The kubeconfig file is used to bootstrap the kubelet when it first joins the cluster.

[root@master1 k8s]# kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true     --server=https://192.168.1.25:16443 \
--kubeconfig=/opt/kubernetes/kubeconfig/bootstrap-kubelet.kubeconfig

[root@master1 k8s]# kubectl config set-credentials tls-bootstrap-token-user \
--token=qfxsz4.emer1gzx3shzbf2t \
--kubeconfig=/opt/kubernetes/kubeconfig/bootstrap-kubelet.kubeconfig

[root@master1 k8s]# kubectl config set-context tls-bootstrap-token-user@kubernetes \
--cluster=kubernetes \
--user=tls-bootstrap-token-user \
--kubeconfig=/opt/kubernetes/kubeconfig/bootstrap-kubelet.kubeconfig

[root@master1 k8s]# kubectl config use-context tls-bootstrap-token-user@kubernetes \
--kubeconfig=/opt/kubernetes/kubeconfig/bootstrap-kubelet.kubeconfig

###Note:
#The token must match the token defined in bootstrap.secret.yaml (format: token-id.token-secret)

7.1.3 Create the kubelet configuration file

[root@master1 k8s]# cat > /opt/kubernetes/cfg/kubelet-conf.yml << EOF
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
cgroupDriver: systemd
cgroupsPerQOS: true
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
containerLogMaxFiles: 5
containerLogMaxSize: 10Mi
contentType: application/vnd.kubernetes.protobuf
cpuCFSQuota: true
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s
enableControllerAttachDetach: true
enableDebuggingHandlers: true
enforceNodeAllocatable:
- pods
eventBurst: 10
eventRecordQPS: 5
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
evictionPressureTransitionPeriod: 5m0s
failSwapOn: true
fileCheckFrequency: 20s
hairpinMode: promiscuous-bridge
healthzBindAddress: 127.0.0.1
healthzPort: 10248
httpCheckFrequency: 20s
imageGCHighThresholdPercent: 85
imageGCLowThresholdPercent: 80
imageMinimumGCAge: 2m0s
iptablesDropBit: 15
iptablesMasqueradeBit: 14
kubeAPIBurst: 10
kubeAPIQPS: 5
makeIPTablesUtilChains: true
maxOpenFiles: 1000000
maxPods: 110
nodeStatusUpdateFrequency: 10s
oomScoreAdj: -999
podPidsLimit: -1
registryBurst: 10
registryPullQPS: 5
resolvConf: /etc/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: 2m0s
serializeImagePulls: true
staticPodPath: /opt/kubernetes/manifests
streamingConnectionIdleTimeout: 4h0m0s
syncFrequency: 1m0s
volumeStatsAggPeriod: 1m0s
EOF

7.1.4 Create the kubelet systemd unit file

[root@master1 k8s]# cat > /usr/lib/systemd/system/kubelet.service << EOF

[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service

[Service]
ExecStart=/opt/kubernetes/bin/kubelet \\
    --bootstrap-kubeconfig=/opt/kubernetes/kubeconfig/bootstrap-kubelet.kubeconfig  \\
    --kubeconfig=/opt/kubernetes/kubeconfig/kubelet.kubeconfig \\
    --config=/opt/kubernetes/cfg/kubelet-conf.yml \\
    --container-runtime-endpoint=unix:///run/containerd/containerd.sock  \\
    --node-labels=node.kubernetes.io/node=
    # --feature-gates=IPv6DualStack=true
    # --container-runtime=remote
    # --runtime-request-timeout=15m
    # --cgroup-driver=systemd

[Install]
WantedBy=multi-user.target
EOF

7.1.5 Start and enable on boot

[root@master1 k8s]# systemctl daemon-reload
[root@master1 k8s]# systemctl enable --now kubelet
[root@master1 k8s]# systemctl status kubelet

7.1.6 Check the cluster

#View the cluster nodes
[root@master1 k8s]# kubectl get nodes
NAME      STATUS   ROLES    AGE   VERSION
master1   Ready    <none>   42s   v1.27.1

#Check the container runtime
[root@master1 k8s]# kubectl describe node | grep Runtime
  Container Runtime Version:  containerd://1.7.1
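
The node's client certificate request should have been auto-approved by the RBAC bindings created in 7.1.1; you can optionally confirm this (the output will differ in your environment):
[root@master1 k8s]# kubectl get csr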

7.1.7 Distribute the files to the other nodes

[root@master1 ~]# for i in {master2,node1}
do
ssh $i "mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs,manifests,kubeconfig}"
scp /opt/kubernetes/bin/kubelet $i:/opt/kubernetes/bin
scp /opt/kubernetes/kubeconfig/bootstrap-kubelet.kubeconfig $i:/opt/kubernetes/kubeconfig/
scp /opt/kubernetes/cfg/kubelet-conf.yml $i:/opt/kubernetes/cfg/
scp -r /opt/kubernetes/ssl/ca.pem $i:/opt/kubernetes/ssl/
scp /usr/lib/systemd/system/kubelet.service $i:/usr/lib/systemd/system/
done

7.1.8 Start kubelet on all nodes

[root@master1 ~]# for i in {master2,node1}
do
ssh $i "systemctl daemon-reload;systemctl enable --now kubelet"
done
#Check the nodes; all of them have joined the cluster
[root@master1 k8s]# kubectl get nodes
NAME      STATUS   ROLES    AGE     VERSION
master1   Ready    <none>   18m     v1.27.1
master2   Ready    <none>   6m32s   v1.27.1
node1     Ready    <none>   25s     v1.27.1

7.2 Deploy kube-proxy

7.2.1 Create the kube-proxy certificate signing request

[root@master1 k8s]# cat > kube-proxy-csr.json  << EOF 
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "system:kube-proxy",
      "OU": "Kubernetes-manual"
    }
  ]
}
EOF

7.2.2 Generate the certificate

[root@master1 k8s]# cfssl gencert \
   -ca=/opt/kubernetes/ssl/ca.pem \
   -ca-key=/opt/kubernetes/ssl/ca-key.pem \
   -config=ca-config.json \
   -profile=kubernetes \
   kube-proxy-csr.json | cfssljson -bare /opt/kubernetes/ssl/kube-proxy

7.2.3 Create the kube-proxy kubeconfig file

[root@master1 k8s]# kubectl config set-cluster kubernetes     \
  --certificate-authority=/opt/kubernetes/ssl/ca.pem     \
  --embed-certs=true     \
  --server=https://192.168.1.25:16443     \
  --kubeconfig=/opt/kubernetes/kubeconfig/kube-proxy.kubeconfig
  
[root@master1 k8s]# kubectl config set-credentials kube-proxy  \
  --client-certificate=/opt/kubernetes/ssl/kube-proxy.pem     \
  --client-key=/opt/kubernetes/ssl/kube-proxy-key.pem     \
  --embed-certs=true     \
  --kubeconfig=/opt/kubernetes/kubeconfig/kube-proxy.kubeconfig
  
[root@master1 k8s]# kubectl config set-context kube-proxy@kubernetes    \
  --cluster=kubernetes     \
  --user=kube-proxy     \
  --kubeconfig=/opt/kubernetes/kubeconfig/kube-proxy.kubeconfig
  
[root@master1 k8s]# kubectl config use-context kube-proxy@kubernetes  --kubeconfig=/opt/kubernetes/kubeconfig/kube-proxy.kubeconfig

7.2.4 Create the kube-proxy configuration file

[root@master1 k8s]# cat > /opt/kubernetes/cfg/kube-proxy.yaml << EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
clientConnection:
  acceptContentTypes: ""
  burst: 10
  contentType: application/vnd.kubernetes.protobuf
  kubeconfig: /opt/kubernetes/kubeconfig/kube-proxy.kubeconfig
  qps: 5
clusterCIDR: 10.244.0.0/16
configSyncPeriod: 15m0s
conntrack:
  max: null
  maxPerCore: 32768
  min: 131072
  tcpCloseWaitTimeout: 1h0m0s
  tcpEstablishedTimeout: 24h0m0s
enableProfiling: false
healthzBindAddress: 0.0.0.0:10256
hostnameOverride: ""
iptables:
  masqueradeAll: false
  masqueradeBit: 14
  minSyncPeriod: 0s
  syncPeriod: 30s
ipvs:
  masqueradeAll: true
  minSyncPeriod: 5s
  scheduler: "rr"
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs"
nodePortAddresses: null
oomScoreAdj: -999
portRange: ""
udpIdleTimeout: 250ms

EOF

7.2.5 Create the kube-proxy systemd unit file

[root@master1 k8s]# cat >  /usr/lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
ExecStart=/opt/kubernetes/bin/kube-proxy \\
  --config=/opt/kubernetes/cfg/kube-proxy.yaml \\
  --v=2

Restart=always
RestartSec=10s

[Install]
WantedBy=multi-user.target

EOF

7.2.6 Start the service

[root@master1 k8s]# systemctl daemon-reload
[root@master1 k8s]# systemctl enable --now kube-proxy
[root@master1 k8s]# systemctl status kube-proxy

7.2.7 Distribute the files to the other nodes and start the service

[root@master1 k8s]# for i in {master2,node1}
do
scp /opt/kubernetes/bin/kube-proxy $i:/opt/kubernetes/bin/
scp /opt/kubernetes/kubeconfig/kube-proxy.kubeconfig $i:/opt/kubernetes/kubeconfig/
scp /opt/kubernetes/cfg/kube-proxy.yaml $i:/opt/kubernetes/cfg/
scp /usr/lib/systemd/system/kube-proxy.service $i:/usr/lib/systemd/system/
ssh $i "systemctl daemon-reload;systemctl enable --now kube-proxy"
done
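
Optionally, since kube-proxy runs in ipvs mode, you can spot-check that virtual servers were programmed on a node. This assumes the ipvsadm tool is available (it is not installed by the steps above):
[root@master1 ~]# yum -y install ipvsadm
[root@master1 ~]# ipvsadm -Ln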

8 Deploy Calico

This article deploys Calico offline; the online approach is the same.

8.1 Set up a simple local image registry

Perform the following on 192.168.1.9, as planned.

8.1.1 Install Docker

#Download the package
[root@docker-registry ~]# wget https://download.docker.com/linux/static/stable/x86_64/docker-24.0.1.tgz

#Extract and install the binaries
[root@docker-registry ~]# tar -xf docker-24.0.1.tgz
[root@docker-registry ~]# mv docker/* /usr/bin/

8.1.2 Create the Docker systemd unit

[root@docker-registry ~]# cat > /etc/systemd/system/docker.service << EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP \$MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
EOF

8.1.3 Create the Docker daemon configuration

#Point the Docker data directory at a large disk, allow the private insecure registry (optional; adjust the directory and registry URL to your environment), and configure a domestic mirror accelerator
[root@docker-registry ~]# mkdir /etc/docker
[root@docker-registry ~]# mkdir -p /data/docker
[root@docker-registry ~]# cat > /etc/docker/daemon.json << 'EOF'
{
"insecure-registries":["192.168.1.9:5000"],
"data-root":"/data/docker",
"registry-mirrors": ["https://l32efr19.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "20m"
  }
}
EOF

#How to obtain a mirror accelerator address
Log in to Alibaba Cloud at https://www.aliyun.com/, search for Container Registry, then open Mirror Tools - Mirror Accelerator in the left sidebar; the accelerator entry looks like this:
{
  "registry-mirrors": ["https://l32efr19.mirror.aliyuncs.com"]
}

8.1.4 Start and enable on boot

[root@docker-registry ~]# systemctl daemon-reload 
[root@docker-registry ~]# systemctl enable --now docker

8.1.5 Run the Docker Registry

[root@docker-registry ~]# mkdir /data/registry
[root@docker-registry ~]# docker run -d \
-p 5000:5000 \
--restart=always \
--name=registry \
-v /data/registry:/var/lib/registry \
registry
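
Optionally, run a quick sanity check against the registry's v2 HTTP API; it should return an empty repository list such as {"repositories":[]} until images are pushed:
[root@docker-registry ~]# curl http://192.168.1.9:5000/v2/_catalog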
    
#A simple image registry is now up; if you no longer need it, just remove the container
[root@docker-registry ~]# docker rm -f registry

8.1.6 Push the upstream Calico images to the registry

The image archives can be downloaded from my CSDN resources.

[root@docker-registry ~]# cd calico3.25/
[root@docker-registry ~]# docker load -i calico_cni-v3.25.0.tar 
[root@docker-registry ~]# docker tag d70a5947d57e 192.168.1.9:5000/cni:v3.25.0
[root@docker-registry ~]# docker push 192.168.1.9:5000/cni:v3.25.0

[root@docker-registry ~]# docker load -i calico_kube-controllers-v3.25.0.tar
[root@docker-registry ~]# docker tag 5e785d005ccc 192.168.1.9:5000/kube-controllers:v3.25.0
[root@docker-registry ~]# docker push 192.168.1.9:5000/kube-controllers:v3.25.0

[root@docker-registry ~]# docker load -i calico_node-v3.25.0.tar 
[root@docker-registry ~]# docker tag 08616d26b8e7 192.168.1.9:5000/node:v3.25.0
[root@docker-registry ~]# docker push 192.168.1.9:5000/node:v3.25.0

8.2 Install the network plugin

8.2.1 Download the Calico yaml file

Check the documentation on the official site and choose a suitable Calico version.

#On CentOS 7 it is best to upgrade libseccomp first
[root@master1 ~]# yum -y install http://rpmfind.net/linux/centos/8-stream/BaseOS/x86_64/os/Packages/libseccomp-2.5.1-1.el8.x86_64.rpm
[root@master1 ~]# rpm -qa | grep libseccomp

[root@master1 ~]# mkdir calico3.25
[root@master1 ~]# cd calico3.25
[root@master1 calico3.25]# curl https://docs.projectcalico.org/archive/v3.25/manifests/calico.yaml -O

8.2.2 Edit the yaml file

#Change the default pod CIDR
Set the pod network in calico.yaml to the CIDR specified by --cluster-cidr=10.244.0.0/16.
Open the file with vim, search for 192, and find the block marked below:
# no effect. This should fall within `--cluster-cidr`.
# - name: CALICO_IPV4POOL_CIDR
#   value: "192.168.1.0/16"
# Disable file logging so `kubectl logs` works.
- name: CALICO_DISABLE_FILE_LOGGING
  value: "true"
  
Remove the two # characters (and the space after them) and change 192.168.1.0/16 to 10.244.0.0/16:
# no effect. This should fall within `--cluster-cidr`.
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"
# Disable file logging so `kubectl logs` works.
- name: CALICO_DISABLE_FILE_LOGGING
  value: "true"
#The images in the yaml are hosted abroad; pull them in advance so the installation does not fail on image pulls
[root@master1 calico3.25]# grep "image:" calico.yaml | uniq
          image: docker.io/calico/cni:v3.25.0
          image: docker.io/calico/node:v3.25.0
          image: docker.io/calico/kube-controllers:v3.25.0

8.2.3 Deploy Calico

[root@master1 calico3.25]# sed -i s#docker.io/calico#192.168.1.9:5000# calico.yaml

[root@master1 calico3.25]# kubectl apply -f calico.yaml
[root@master1 calico3.25]# kubectl get pods -n kube-system
NAME                                       READY   STATUS    RESTARTS   AGE
calico-kube-controllers-5bbfbcdcdc-zz8wt   1/1     Running   0          30s
calico-node-7qvb7                          1/1     Running   0          30s
calico-node-bwxsd                          1/1     Running   0          30s
calico-node-xzl2q                          1/1     Running   0          30s


#With the local registry the rollout completes in well under a minute; pulling the upstream images over a decent connection takes ten minutes or more.
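
Optionally wait for the calico-node DaemonSet rollout to finish before moving on:
[root@master1 calico3.25]# kubectl -n kube-system rollout status ds/calico-node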

9 Deploy CoreDNS

9.1 Introduction

CoreDNS provides in-cluster DNS resolution for Kubernetes.

In Kubernetes, CoreDNS is mainly used for service discovery, i.e. the process by which services (applications) locate one another.
A Service gives a set of pods a stable virtual address, and DNS lets clients reach those pods through the Service name (service discovery).
When a Service address is not known in advance but configuration has to refer to it, you can simply use the Service name and create a Service with that name later.

Why service discovery is needed
In a Kubernetes cluster, pods have the following characteristics:
a. Highly dynamic
Containers being rescheduled in Kubernetes causes pod IP addresses to change
b. Frequent releases
Fast version iteration means new and old pods get different IP addresses
c. Autoscaling
Scaling up and down during promotions or traffic peaks adds and removes IP addresses dynamically

Service resources solve pod discovery:
To cope with changing pod addresses, a Service resource is deployed to proxy the backend pods; by exposing the Service's fixed address (the cluster IP), the IP churn described above is hidden.

What about discovering the Service itself?
A Service provides a stable cluster IP, but:
a. IP addresses are hard to remember
b. Service resources can themselves be deleted and recreated
c. Ideally the Service name would map to the Service's cluster IP, just as a domain name maps to an IP, so that remembering the service name is enough to reach the Service automatically
In Kubernetes, CoreDNS exists to solve exactly these problems.

CoreDNS resolution flow:

When pod1 wants to reach pod2 by DNS name, it first sends the query to the nameserver configured in its /etc/resolv.conf, which is the kube-dns Service address; that Service forwards the query to one of the backend CoreDNS pods, which performs the resolution. CoreDNS answers from the cluster state it obtains through the kubernetes Service / API server (ultimately backed by etcd) and returns the resolved address, after which pod1 can connect to pod2.
CoreDNS does not store DNS records itself; it obtains them through the apiserver. Inside the cluster, pods reach the Kubernetes API server through the 10.96.0.1 address, which is the in-cluster Service address of the API server.

9.2 Test before CoreDNS is deployed

[root@docker-registry ~]# docker pull busybox
[root@docker-registry ~]# docker tag beae173ccac6 192.168.1.9:5000/busybox:latest
[root@docker-registry ~]# docker push 192.168.1.9:5000/busybox:latest


[root@master1 ~]# kubectl run -it --rm --restart=Never --image=192.168.1.9:5000/busybox:latest busybox
If you don't see a command prompt, try pressing enter.
/ # cat /etc/resolv.conf 
search default.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.96.0.10
options ndots:5
/ # nslookup kubernetes
;; connection timed out; no servers could be reached

#The output above shows that name resolution is failing

9.3 Download and install CoreDNS

Official site:
https://github.com/coredns/
On GitHub, open the coredns/deployment repository, go into the kubernetes directory, and open coredns.yaml.sed to view the template; download it and edit it.
Full addresses:
https://github.com/coredns/deployment/tree/master/kubernetes
https://github.com/coredns/deployment/blob/master/kubernetes/coredns.yaml.sed

There are three files:
coredns.yaml.sed deploy.sh rollback.sh #downloading the first two is enough

[root@master1 ~]# grep clusterDNS -A 1 /opt/kubernetes/cfg/kubelet-conf.yml 
clusterDNS:
- 10.96.0.10


cp coredns.yaml.sed coredns.yaml
cat deploy.sh
...
sed -e "s/CLUSTER_DNS_IP/$CLUSTER_DNS_IP/g" \
    -e "s/CLUSTER_DOMAIN/$CLUSTER_DOMAIN/g" \
    -e "s?REVERSE_CIDRS?$REVERSE_CIDRS?g" \
    -e "s@STUBDOMAINS@${STUBDOMAINS//$orig/$replace}@g" \
    -e "s/UPSTREAMNAMESERVER/$UPSTREAM/g" \
	"${YAML_TEMPLATE}"

The deploy.sh script shows that five placeholders in coredns.yaml must be adjusted for this cluster:

CLUSTER_DNS_IP                -> 10.96.0.10
CLUSTER_DOMAIN REVERSE_CIDRS  -> cluster.local in-addr.arpa ip6.arpa   (the template line "kubernetes CLUSTER_DOMAIN REVERSE_CIDRS {" becomes "kubernetes cluster.local in-addr.arpa ip6.arpa {")
STUBDOMAINS                   -> delete
UPSTREAMNAMESERVER            -> /etc/resolv.conf

#All of the edits combined into a single command
[root@master1 ~]# sed -i -e "s/CLUSTER_DNS_IP/10.96.0.10/g" \
-e "s/CLUSTER_DOMAIN/cluster.local/g" \
-e "s/REVERSE_CIDRS/in-addr.arpa\ ip6.arpa/g" \
-e "s/STUBDOMAINS//g" \
-e "s#UPSTREAMNAMESERVER#/etc/resolv.conf#g" coredns.yaml
#Upload the image archive and push it to the registry
[root@dockerregistry ~]# docker load -i coredns_1.9.4.tar
[root@dockerregistry ~]# docker tag a81c2ec4e946 192.168.1.9:5000/coredns:1.9.4
[root@dockerregistry ~]# docker push 192.168.1.9:5000/coredns:1.9.4


[root@master1 ~]# sed -i s#coredns/coredns#192.168.1.9:5000/coredns# coredns.yaml
#Install
[root@master1 ~]# kubectl apply -f coredns.yaml

#Check
[root@master1 ~]# kubectl get service -n kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   8s

#Check the coredns pod; the Deployment runs a single replica by default
[root@master1 ~]# kubectl get pod -A | grep core
kube-system   coredns-5dc594fd48-c2znc                   1/1     Running   0              26s
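
A single replica is a single point of failure for cluster DNS; you can scale it up if you like (a sketch, assuming the Deployment keeps its default name coredns):
[root@master1 ~]# kubectl -n kube-system scale deployment coredns --replicas=2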

9.4 Verify again

[root@master1 ~]# kubectl run -it --rm --restart=Never --image=192.168.1.9:5000/busybox:latest busybox
If you don't see a command prompt, try pressing enter.
/ # nslookup kubernetes
Server:		10.96.0.10
Address:	10.96.0.10:53

Name:	kubernetes.default.svc.cluster.local
Address: 10.96.0.1
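
Service names follow the pattern <service>.<namespace>.svc.cluster.local, so the fully qualified name resolves to the same address; a quick optional check from the same busybox pod:
/ # nslookup kubernetes.default.svc.cluster.local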


[root@master1 ~]# cat << EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - name: busybox
    image: 192.168.1.9:5000/busybox:latest
    command:
      - sleep
      - "3600"
    imagePullPolicy: IfNotPresent
  restartPolicy: Always
EOF

[root@master1 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   25h


[root@master1 ~]#  kubectl get po -n kube-system -owide
NAME                                       READY   STATUS    RESTARTS       AGE   IP              NODE      NOMINATED NODE   READINESS GATES
calico-kube-controllers-5bbfbcdcdc-zz8wt   1/1     Running   1 (3h2m ago)   23h   10.244.180.1    master2   <none>           <none>
calico-node-7qvb7                          1/1     Running   1 (3h2m ago)   23h   192.168.1.21    master1   <none>           <none>
calico-node-bwxsd                          1/1     Running   1 (3h2m ago)   23h   192.168.1.22    master2   <none>           <none>
calico-node-xzl2q                          1/1     Running   1 (3h2m ago)   23h   192.168.1.23    node1     <none>           <none>
coredns-5dc594fd48-c2znc                   1/1     Running   0              65m   10.244.137.75   master1   <none>           <none>

# Exec into busybox and ping other nodes and pods
[root@master1 ~]# kubectl exec -it busybox -- sh
/ # ping 192.168.1.23 -w 2
PING 192.168.1.23 (192.168.1.23): 56 data bytes
64 bytes from 192.168.1.23: seq=0 ttl=63 time=0.279 ms
64 bytes from 192.168.1.23: seq=1 ttl=63 time=0.156 ms

/ # ping 10.244.137.75 -w 2
PING 10.244.137.75 (10.244.137.75): 56 data bytes
64 bytes from 10.244.137.75: seq=0 ttl=62 time=0.427 ms
64 bytes from 10.244.137.75: seq=1 ttl=62 time=0.193 ms

10 Deploy the Dashboard

Official site

If you have no special requirements, you can install it directly as the official docs suggest: kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.0/aio/deploy/recommended.yaml

Here we download the yaml locally first and run the dashboard behind a NodePort Service so that it can be reached from a web browser.

10.1 Download the yaml file

[root@master1 ~]# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.0/aio/deploy/recommended.yaml

[root@master1 ~]# vim recommended.yaml
...
32 kind: Service
33 apiVersion: v1
34 metadata:
35   labels:
36     k8s-app: kubernetes-dashboard
37   name: kubernetes-dashboard
38   namespace: kubernetes-dashboard
39 spec:
40   type: NodePort			#added below line 39
41   ports:
42     - port: 443
43       targetPort: 8443
44       nodePort: 30001	#add this line to define the external access port
45   selector:
46     k8s-app: kubernetes-dashboard
...

10.2 Install

#Upload the dashboard-v2.5.0 package to docker-registry and push the images to the private registry
[root@docker-registry ~]# cd dashboard-v2.5.0/

#There are two images in total
[root@docker-registry dashboard-v2.5.0]# grep "image:" recommended.yaml 
          image: kubernetesui/dashboard:v2.5.0
          image: kubernetesui/metrics-scraper:v1.0.7

[root@docker-registry dashboard-v2.5.0]# docker load -i dashboard-v2.5.0.tar
[root@docker-registry dashboard-v2.5.0]# docker tag 57446aa2002e 192.168.1.9:5000/dashboard:v2.5.0
[root@docker-registry dashboard-v2.5.0]# docker push 192.168.1.9:5000/dashboard:v2.5.0

[root@docker-registry dashboard-v2.5.0]# docker load -i  metrics-scraper-v1.0.7.tar
[root@docker-registry dashboard-v2.5.0]# docker images | grep metrics
kubernetesui/metrics-scraper        v1.0.7    7801cfc6d5c0   2 years ago     34.4MB
[root@docker-registry dashboard-v2.5.0]# docker tag 7801cfc6d5c0 192.168.1.9:5000/metrics-scraper:v1.0.7
[root@docker-registry dashboard-v2.5.0]# docker push 192.168.1.9:5000/metrics-scraper:v1.0.7

#Change the image addresses
[root@master1 ~]# sed -i s/kubernetesui/192.168.1.9:5000/g recommended.yaml

#Deploy
[root@master1 ~]# kubectl apply -f recommended.yaml

#Check the services
[root@master1 ~]# kubectl get pods,svc -n kubernetes-dashboard
NAME                                             READY   STATUS    RESTARTS   AGE
pod/dashboard-metrics-scraper-69996c49db-dzzgp   1/1     Running   0          14s
pod/kubernetes-dashboard-6c68b8b59c-nsb8q        1/1     Running   0          14s

NAME                                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
service/dashboard-metrics-scraper   ClusterIP   10.96.9.242     <none>        8000/TCP        14s
service/kubernetes-dashboard        NodePort    10.96.175.237   <none>        443:30001/TCP   14s

10.3 Access the dashboard in a browser

The default URL is https://master_ip:30001/
https://192.168.1.21:30001/

Because the certificate is self-signed, Chrome may refuse to open the page; Firefox works in that case.

The dashboard offers two login methods: a token or a kubeconfig file.

10.4 Log in with a token

#Create a ServiceAccount named dashboard-admin in the kube-system namespace
[root@master1 ~]# kubectl create serviceaccount dashboard-admin -n kube-system
serviceaccount/dashboard-admin created

#Create a ClusterRoleBinding named dashboard-admin that grants the cluster-admin ClusterRole to the dashboard-admin ServiceAccount in kube-system
[root@master1 ~]# kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created

#Create a token
[root@master1 ~]# kubectl -n kube-system create token dashboard-admin
eyJhbGciOiJSUzI1NiIsImtpZCI6IllVZ3ZxTmh0RTNyZkZiZ0FvS3FqSk0tWmZRYTRZWGFIQW1jbzNNMnBjU1kifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjkyNzIwODA4LCJpYXQiOjE2OTI3MTcyMDgsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJkYXNoYm9hcmQtYWRtaW4iLCJ1aWQiOiJmYzdkNTFmYi0xMmE0LTRjMjItYTdiMy1jMmY5MTJiYzRlMzIifX0sIm5iZiI6MTY5MjcxNzIwOCwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.op9WREunCFPbE-G-8VK5Oz-neCQCqaWd16a3goP33wvjmhdN2C_bVKdhk9pr_aVZ8MwraIGN-6rSnmi8ySSC2pJxPjyHJJJpuRjqO0qe39YO5cXFF_JDOGt27Xoi9U43zSR5KEW4gyIyAR71L72bJos0tmYEgIlEhIoaEIcag3KaD8V9kEbkZlVTUQ2pMhXUt3CzNeOUMfEAlw7e9HPTtpw6HMkCKmo9J-NVshkaP6zzpNecj-nX9bFVkjJr-M7bD7iRNJZkQoQnF9w8er18R9hXzFrBBY4y6PXPU2mbVwDJxHIB2zdR-ft-hTu3J7dxbb661gIeE0PhNhhRwBZscw


#Save the output above, or write it to a file with kubectl -n kube-system create token dashboard-admin > dashboard_token for later logins; if you lose the token, simply run kubectl -n kube-system create token dashboard-admin again to generate a new one.
#The ServiceAccount and ClusterRoleBinding created above can also be written as a yaml file and applied with kubectl apply:

cat > dashboard-user.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kube-system
EOF

kubectl  apply -f dashboard-user.yaml
kubectl -n kube-system create token dashboard-admin
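
Tokens minted this way are short-lived by default; if you need a longer-lived one for testing, recent kubectl versions accept a duration flag (a convenience sketch, not required):
kubectl -n kube-system create token dashboard-admin --duration=24h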

Paste the token into the dashboard login page and sign in; the installation is complete.

11 Deploy Metrics Server

Official site: https://github.com/kubernetes-sigs/metrics-server

11.1 Component overview

Introduction:

Kubernetes Metrics Server is the core monitoring data aggregator of a Kubernetes cluster. It collects resource metrics from the kubelets and exposes them through the Metrics API on the Kubernetes API server, where scaling resources such as the HPA consume them. You can also use kubectl top, which is backed by the Metrics API, to inspect pod resource usage.

Metrics Server provides the following:

  • A single deployment that works on most clusters.
  • Fast autoscaling, collecting metrics every 15 seconds.
  • Resource efficiency: roughly 1 millicore of CPU and 2 MB of memory per cluster node.
  • Scalable support for clusters of up to 5,000 nodes.

To check whether metrics-server (or another provider of the metrics.k8s.io resource metrics API) is running, type the command below.

If the resource metrics API is available, the output will contain a reference to metrics.k8s.io. At this point it is not installed yet.

[root@master1 ~]# kubectl get apiservices
NAME                                   SERVICE   AVAILABLE   AGE
v1.                                    Local     True        17h
v1.admissionregistration.k8s.io        Local     True        17h
v1.apiextensions.k8s.io                Local     True        17h
v1.apps                                Local     True        17h
v1.authentication.k8s.io               Local     True        17h
v1.authorization.k8s.io                Local     True        17h
v1.autoscaling                         Local     True        17h
v1.batch                               Local     True        17h
v1.certificates.k8s.io                 Local     True        17h
v1.coordination.k8s.io                 Local     True        17h
v1.crd.projectcalico.org               Local     True        26m
v1.discovery.k8s.io                    Local     True        17h
v1.events.k8s.io                       Local     True        17h
v1.networking.k8s.io                   Local     True        17h
v1.node.k8s.io                         Local     True        17h
v1.policy                              Local     True        17h
v1.rbac.authorization.k8s.io           Local     True        17h
v1.scheduling.k8s.io                   Local     True        17h
v1.storage.k8s.io                      Local     True        17h
v1beta2.flowcontrol.apiserver.k8s.io   Local     True        17h
v1beta3.flowcontrol.apiserver.k8s.io   Local     True        17h
v2.autoscaling                         Local     True        17h

11.2 Download the yaml file and deploy

#Download the yaml file
#Single-instance version
https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.6.3/components.yaml

#High-availability version
[root@master1 ~]# wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.6.3/high-availability.yaml


#Upload the image archive; the upstream image cannot be pulled directly
[root@docker-registry ~]# docker load -i metrics-server_v0.6.3.tar
[root@docker-registry ~]# docker tag 817bbe3f2e51 192.168.1.9:5000/metrics-server:v0.6.3
[root@docker-registry ~]# docker push 192.168.1.9:5000/metrics-server:v0.6.3

#Edit the yaml file to configure the certificates
[root@master1 ~]# vim high-availability.yaml
#1
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=15s
        - --kubelet-insecure-tls
        - --requestheader-client-ca-file=/opt/kubernetes/ssl/front-proxy-ca.pem
        - --requestheader-username-headers=X-Remote-User
        - --requestheader-group-headers=X-Remote-Group
        - --requestheader-extra-headers-prefix=X-Remote-Extra-

#2
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
        - name: ca-ssl
          mountPath: /opt/kubernetes/ssl
          
#3
      volumes:
      - emptyDir: {}
        name: tmp-dir
      - name: ca-ssl
        hostPath:
          path: /opt/kubernetes/ssl

#4   Change policy/v1beta1 to policy/v1
---
apiVersion: policy/v1
kind: PodDisruptionBudget


#Update the image address in the yaml file
[root@master1 ~]# sed -i s#registry.k8s.io/metrics-server#192.168.1.9:5000#g high-availability.yaml

The manifest runs 2 replicas, which will be scheduled onto two of the three nodes at random. The ssl files have not been synced to node1 yet; copy them over, otherwise the pod scheduled there will stay in CrashLoopBackOff.
[root@master1 ~]# scp /opt/kubernetes/ssl/* node1:/opt/kubernetes/ssl/


#Run the installation
[root@master1 ~]# kubectl apply -f high-availability.yaml

11.3 Verify

[root@master1 ~]# kubectl get pod -n kube-system -owide | grep metrics-server
metrics-server-7f447d95f6-lq6cr            1/1     Running   0             5m49s   10.244.137.81    master1   <none>           <none>
metrics-server-7f447d95f6-vlt88            1/1     Running   0             2m44s   10.244.166.133   node1     <none>           <none>


[root@master1 ~]# kubectl get apiservices | grep metrics-server
v1beta1.metrics.k8s.io                 kube-system/metrics-server   True        6m7s

[root@master1 ~]# kubectl top node
NAME      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
master1   209m         10%    1303Mi          34%       
master2   216m         10%    1251Mi          33%       
node1     145m         7%     676Mi           36%       


[root@master1 ~]# kubectl top pod busybox
NAME      CPU(cores)   MEMORY(bytes)   
busybox   0m           0Mi      
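
With the Metrics API in place, the HPA mentioned in 11.1 can now function. A minimal sketch, assuming a hypothetical Deployment named nginx-deployment that declares CPU requests:
[root@master1 ~]# kubectl autoscale deployment nginx-deployment --cpu-percent=50 --min=2 --max=5
[root@master1 ~]# kubectl get hpa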

12 Install Ingress

12.1 Ingress overview

Ways to expose services outside a Kubernetes cluster:

ClusterIP

ClusterIP is a private IP inside the cluster and the default Service type in Kubernetes. Within the cluster it is very convenient: a Service can be reached through its ClusterIP or directly through its ServiceName. It cannot be reached from outside the cluster.

NodePort

A NodePort Service is the most primitive way to route external traffic to a service. As the name suggests, it opens a specific port on every node (VM), and any traffic sent to that port is forwarded to the corresponding Service.
NodePort characteristics:
each port can serve only one Service
the port range is limited to 30000-32767 (adjustable)
if no port is specified in the YAML, one is allocated automatically

LoadBalancer

A LoadBalancer Service integrates Kubernetes tightly with a cloud platform; exposing a service this way asks the underlying cloud to create a load balancer in front of it, so it can only be used on supported cloud platforms.

Ingress

An Ingress resource routes requests for different URLs to different backend Services, implementing HTTP-layer routing. Kubernetes combines an Ingress policy definition with a concrete Ingress Controller; together they form a complete Ingress load balancer.
Based on the Ingress rules, the Ingress Controller forwards client requests directly to the backend Endpoints of the Service, bypassing kube-proxy's forwarding.

Ingress consists of two parts:

  1. Ingress controller: converts newly added Ingress resources into Nginx configuration and makes it take effect
  2. Ingress resources: abstract the Nginx configuration as Ingress objects; adding a new service only requires writing a new Ingress yaml file
Before defining Ingress rules, an Ingress Controller must be deployed to provide a single entry point for all backend Services. The controller implements URL-based forwarding and can apply flexible layer-7 load-balancing policies. If a public cloud provider offers an HTTP-routing LoadBalancer of this kind, it can be used as the Ingress Controller.

In Kubernetes the Ingress Controller runs as pods, watching Ingress resources and their backend Services through the apiserver; when a Service changes, the controller automatically updates its forwarding rules.

Choose the version to install from the official compatibility table.

We use v1.8.0 here.

12.2 Download the yaml file

[root@master1 ~]# wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.0/deploy/static/provider/baremetal/deploy.yaml -O ingress-nginx-deploy.yaml

cat ingress-nginx-deploy.yaml

apiVersion: v1
kind: Namespace
metadata:
  labels:
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  name: ingress-nginx
---
apiVersion: v1
automountServiceAccountToken: true
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.8.0
  name: ingress-nginx
  namespace: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.8.0
  name: ingress-nginx-admission
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.8.0
  name: ingress-nginx
  namespace: ingress-nginx
rules:
- apiGroups:
  - ""
  resources:
  - namespaces
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - configmaps
  - pods
  - secrets
  - endpoints
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses/status
  verbs:
  - update
- apiGroups:
  - networking.k8s.io
  resources:
  - ingressclasses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - coordination.k8s.io
  resourceNames:
  - ingress-nginx-leader
  resources:
  - leases
  verbs:
  - get
  - update
- apiGroups:
  - coordination.k8s.io
  resources:
  - leases
  verbs:
  - create
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - discovery.k8s.io
  resources:
  - endpointslices
  verbs:
  - list
  - watch
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.8.0
  name: ingress-nginx-admission
  namespace: ingress-nginx
rules:
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - get
  - create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.8.0
  name: ingress-nginx
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - endpoints
  - nodes
  - pods
  - secrets
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - coordination.k8s.io
  resources:
  - leases
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses/status
  verbs:
  - update
- apiGroups:
  - networking.k8s.io
  resources:
  - ingressclasses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - discovery.k8s.io
  resources:
  - endpointslices
  verbs:
  - list
  - watch
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.8.0
  name: ingress-nginx-admission
rules:
- apiGroups:
  - admissionregistration.k8s.io
  resources:
  - validatingwebhookconfigurations
  verbs:
  - get
  - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.8.0
  name: ingress-nginx
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx
subjects:
- kind: ServiceAccount
  name: ingress-nginx
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.8.0
  name: ingress-nginx-admission
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx-admission
subjects:
- kind: ServiceAccount
  name: ingress-nginx-admission
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.8.0
  name: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx
subjects:
- kind: ServiceAccount
  name: ingress-nginx
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.8.0
  name: ingress-nginx-admission
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx-admission
subjects:
- kind: ServiceAccount
  name: ingress-nginx-admission
  namespace: ingress-nginx
---
apiVersion: v1
data:
  allow-snippet-annotations: "true"
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.8.0
  name: ingress-nginx-controller
  namespace: ingress-nginx
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.8.0
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - appProtocol: http
    name: http
    port: 80
    protocol: TCP
    targetPort: http
  - appProtocol: https
    name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.8.0
  name: ingress-nginx-controller-admission
  namespace: ingress-nginx
spec:
  ports:
  - appProtocol: https
    name: https-webhook
    port: 443
    targetPort: webhook
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.8.0
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  minReadySeconds: 0
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/component: controller
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/name: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/component: controller
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
        app.kubernetes.io/version: 1.8.0
    spec:
      containers:
      - args:
        - /nginx-ingress-controller
        - --election-id=ingress-nginx-leader
        - --controller-class=k8s.io/ingress-nginx
        - --ingress-class=nginx
        - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
        - --validating-webhook=:8443
        - --validating-webhook-certificate=/usr/local/certificates/cert
        - --validating-webhook-key=/usr/local/certificates/key
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: LD_PRELOAD
          value: /usr/local/lib/libmimalloc.so
        image: registry.k8s.io/ingress-nginx/controller:v1.8.0@sha256:744ae2afd433a395eeb13dc03d3313facba92e96ad71d9feaafc85925493fee3
        imagePullPolicy: IfNotPresent
        lifecycle:
          preStop:
            exec:
              command:
              - /wait-shutdown
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: controller
        ports:
        - containerPort: 80
          name: http
          protocol: TCP
        - containerPort: 443
          name: https
          protocol: TCP
        - containerPort: 8443
          name: webhook
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          requests:
            cpu: 100m
            memory: 90Mi
        securityContext:
          allowPrivilegeEscalation: true
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - ALL
          runAsUser: 101
        volumeMounts:
        - mountPath: /usr/local/certificates/
          name: webhook-cert
          readOnly: true
      dnsPolicy: ClusterFirst
      nodeSelector:
        kubernetes.io/os: linux
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      volumes:
      - name: webhook-cert
        secret:
          secretName: ingress-nginx-admission
---
apiVersion: batch/v1
kind: Job
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.8.0
  name: ingress-nginx-admission-create
  namespace: ingress-nginx
spec:
  template:
    metadata:
      labels:
        app.kubernetes.io/component: admission-webhook
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
        app.kubernetes.io/version: 1.8.0
      name: ingress-nginx-admission-create
    spec:
      containers:
      - args:
        - create
        - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
        - --namespace=$(POD_NAMESPACE)
        - --secret-name=ingress-nginx-admission
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        image: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407@sha256:543c40fd093964bc9ab509d3e791f9989963021f1e9e4c9c7b6700b02bfb227b
        imagePullPolicy: IfNotPresent
        name: create
        securityContext:
          allowPrivilegeEscalation: false
      nodeSelector:
        kubernetes.io/os: linux
      restartPolicy: OnFailure
      securityContext:
        fsGroup: 2000
        runAsNonRoot: true
        runAsUser: 2000
      serviceAccountName: ingress-nginx-admission
---
apiVersion: batch/v1
kind: Job
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.8.0
  name: ingress-nginx-admission-patch
  namespace: ingress-nginx
spec:
  template:
    metadata:
      labels:
        app.kubernetes.io/component: admission-webhook
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
        app.kubernetes.io/version: 1.8.0
      name: ingress-nginx-admission-patch
    spec:
      containers:
      - args:
        - patch
        - --webhook-name=ingress-nginx-admission
        - --namespace=$(POD_NAMESPACE)
        - --patch-mutating=false
        - --secret-name=ingress-nginx-admission
        - --patch-failure-policy=Fail
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        image: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407@sha256:543c40fd093964bc9ab509d3e791f9989963021f1e9e4c9c7b6700b02bfb227b
        imagePullPolicy: IfNotPresent
        name: patch
        securityContext:
          allowPrivilegeEscalation: false
      nodeSelector:
        kubernetes.io/os: linux
      restartPolicy: OnFailure
      securityContext:
        fsGroup: 2000
        runAsNonRoot: true
        runAsUser: 2000
      serviceAccountName: ingress-nginx-admission
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.8.0
  name: nginx
spec:
  controller: k8s.io/ingress-nginx
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.8.0
  name: ingress-nginx-admission
webhooks:
- admissionReviewVersions:
  - v1
  clientConfig:
    service:
      name: ingress-nginx-controller-admission
      namespace: ingress-nginx
      path: /networking/v1/ingresses
  failurePolicy: Fail
  matchPolicy: Equivalent
  name: validate.nginx.ingress.kubernetes.io
  rules:
  - apiGroups:
    - networking.k8s.io
    apiVersions:
    - v1
    operations:
    - CREATE
    - UPDATE
    resources:
    - ingresses
  sideEffects: None

12.3 Edit the yaml file

#Modify the controller Service and add the two nodePort lines
[root@master1 ~]# vim ingress-nginx-deploy.yaml

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.8.0
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - appProtocol: http
    name: http
    port: 80
    protocol: TCP
    targetPort: http
    nodePort: 30080
  - appProtocol: https
    name: https
    port: 443
    protocol: TCP
    targetPort: https
    nodePort: 30443
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  type: NodePort
---

#Check the image addresses
[root@master1 ~]# grep image: ingress-nginx-deploy.yaml 
        image: registry.k8s.io/ingress-nginx/controller:v1.8.0@sha256:744ae2afd433a395eeb13dc03d3313facba92e96ad71d9feaafc85925493fee3
        image: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407@sha256:543c40fd093964bc9ab509d3e791f9989963021f1e9e4c9c7b6700b02bfb227b
        image: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407@sha256:543c40fd093964bc9ab509d3e791f9989963021f1e9e4c9c7b6700b02bfb227b
[root@master1 ~]# 


#For online installation, change the image addresses to the following, otherwise they cannot be pulled
registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:v1.8.0
registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v20230407

#For online installation the result looks like this
[root@master1 ~]# grep image: ingress-nginx-deploy.yaml 
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:v1.8.0
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v20230407
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v20230407


#For offline installation, upload the prepared images to the local registry
[root@docker-registry ~]# docker load -i nginx-ingress-controller_v1.8.0.tar
[root@docker-registry ~]# docker tag ea299dd31352 192.168.1.9:5000/nginx-ingress-controller:v1.8.0
[root@docker-registry ~]# docker push 192.168.1.9:5000/nginx-ingress-controller:v1.8.0

[root@docker-registry ~]# docker load -i kube-webhook-certgen_v20230407.tar
[root@docker-registry ~]# docker tag 7e7451bb7042 192.168.1.9:5000/kube-webhook-certgen:v20230407
[root@docker-registry ~]# docker push 192.168.1.9:5000/kube-webhook-certgen:v20230407

#After the change for offline installation (registry prefix replaced with 192.168.1.9:5000, image digests removed)
[root@master1 ~]# grep image: ingress-nginx-deploy.yaml 
        image: 192.168.1.9:5000/nginx-ingress-controller:v1.8.0
        image: 192.168.1.9:5000/kube-webhook-certgen:v20230407
        image: 192.168.1.9:5000/kube-webhook-certgen:v20230407

12.4 Install ingress-nginx-controller

#Deploy the ingress controller
[root@master1 ~]# kubectl apply -f ingress-nginx-deploy.yaml
#Check the ingress-nginx-controller pods
[root@master1 ~]# kubectl get pod -n ingress-nginx
NAME                                       READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-vd8zr       0/1     Completed   0          24s
ingress-nginx-admission-patch-f26w8        0/1     Completed   0          24s
ingress-nginx-controller-7d6bc9fff-tcxmj   1/1     Running     0          24s


#Check the services
[root@master1 ~]# kubectl get svc -n ingress-nginx
NAME                                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                      AGE
ingress-nginx-controller             NodePort    10.96.196.189   <none>        80:30080/TCP,443:30443/TCP   82s
ingress-nginx-controller-admission   ClusterIP   10.96.255.10    <none>        443/TCP                      82s


[root@master1 ~]# kubectl get all -A | grep ingress
ingress-nginx          pod/ingress-nginx-admission-create-vd8zr         0/1     Completed   0              106s
ingress-nginx          pod/ingress-nginx-admission-patch-f26w8          0/1     Completed   0              106s
ingress-nginx          pod/ingress-nginx-controller-7d6bc9fff-tcxmj     1/1     Running     0              106s
ingress-nginx          service/ingress-nginx-controller             NodePort    10.96.196.189   <none>        80:30080/TCP,443:30443/TCP   107s
ingress-nginx          service/ingress-nginx-controller-admission   ClusterIP   10.96.255.10    <none>        443/TCP                      107s
ingress-nginx          deployment.apps/ingress-nginx-controller    1/1     1            1           107s
ingress-nginx          replicaset.apps/ingress-nginx-controller-7d6bc9fff     1         1         1       106s
ingress-nginx   job.batch/ingress-nginx-admission-create   1/1           4s         106s
ingress-nginx   job.batch/ingress-nginx-admission-patch    1/1           18s        106s

12.5 Create the Deployment, Service, and Ingress

Create a single yaml file that defines a Deployment, a Service, and an Ingress, then deploy it for verification.

cat > nginx-app.yaml << 'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: 192.168.1.9:5000/nginx:latest
        ports:
        - containerPort: 80

---

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80

---

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  ingressClassName: "nginx"
  rules:
  - host: mrloam.com
    http:
      paths:
      - pathType: Prefix
        path: /
        backend:
          service:
            name: nginx-service
            port:
              number: 80
EOF

Deploy the application

#Prepare the nginx image in the local registry first
[root@docker-registry ~]# docker pull nginx
[root@docker-registry ~]# docker tag 605c77e624dd 192.168.1.9:5000/nginx:latest
[root@docker-registry ~]# docker push 192.168.1.9:5000/nginx:latest

 
#Deploy
[root@master1 ~]# kubectl apply -f nginx-app.yaml
#Wait for the resources to be created, then check:

[root@master1 ~]# kubectl  get ingress
NAME            CLASS   HOSTS        ADDRESS        PORTS   AGE
nginx-ingress   nginx   mrloam.com   192.168.1.22   80      15m

Verify

[root@master1 ~]# cat >> /etc/hosts << EOF
192.168.1.22 mrloam.com
EOF

#Pointing the hosts entry at 192.168.1.23 mrloam.com or 192.168.1.21 mrloam.com works just as well
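
Alternatively, you can skip editing /etc/hosts entirely and pass the Host header directly to any node's NodePort (an optional check):
[root@master1 ~]# curl -H "Host: mrloam.com" http://192.168.1.22:30080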

[root@master1 ~]# curl mrloam.com:30080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

12.6 Verify access from a browser

#On the Windows client, add the following entry to the hosts file at C:\Windows\System32\drivers\etc\hosts
192.168.1.21 mrloam.com

#192.168.1.22 mrloam.com or 192.168.1.23 mrloam.com also work

Open the site in a browser.

