Installing a Kubernetes cluster on Ubuntu 22.04 (cri-docker + haproxy + keepalived)

I. References

 [kubernetes] Building a highly available k8s cluster (three masters, three workers) - CSDN blog

https://www.cnblogs.com/wangguishe/p/17823687.html#_label13

II. Preface

        The k8s clusters I had built so far were all single-master learning environments, which are not suitable for production. This article describes how to build a highly available multi-master Kubernetes cluster using haproxy and keepalived.

III. Version Plan

Role     | IP                | Software
kmaster1 | 192.168.48.210/24 | kubeadm 1.28.0; kubectl 1.28.0; kubelet 1.28.0; docker-ce 24.0.7; cri-dockerd 0.3.9; ipvsadm v1.31 (compiled with popt and IPVS v1.2.1); haproxy 2.3.6; osixia/keepalived 2.0.20
kmaster2 | 192.168.48.211/24 | kubeadm 1.28.0; kubectl 1.28.0; kubelet 1.28.0; docker-ce 24.0.7; cri-dockerd 0.3.9; ipvsadm v1.31 (compiled with popt and IPVS v1.2.1); haproxy 2.3.6; osixia/keepalived 2.0.20
kmaster3 | 192.168.48.212/24 | kubeadm 1.28.0; kubectl 1.28.0; kubelet 1.28.0; docker-ce 24.0.7; cri-dockerd 0.3.9; ipvsadm v1.31 (compiled with popt and IPVS v1.2.1); haproxy 2.3.6; osixia/keepalived 2.0.20
knode1   | 192.168.48.213/24 | kubeadm 1.28.0; kubectl 1.28.0; kubelet 1.28.0; docker-ce 24.0.7; cri-dockerd 0.3.9; ipvsadm v1.31 (compiled with popt and IPVS v1.2.1)
knode2   | 192.168.48.214/24 | kubeadm 1.28.0; kubectl 1.28.0; kubelet 1.28.0; docker-ce 24.0.7; cri-dockerd 0.3.9; ipvsadm v1.31 (compiled with popt and IPVS v1.2.1)
knode3   | 192.168.48.215/24 | kubeadm 1.28.0; kubectl 1.28.0; kubelet 1.28.0; docker-ce 24.0.7; cri-dockerd 0.3.9; ipvsadm v1.31 (compiled with popt and IPVS v1.2.1)
knode4   | 192.168.48.216/24 | kubeadm 1.28.0; kubectl 1.28.0; kubelet 1.28.0; docker-ce 24.0.7; cri-dockerd 0.3.9; ipvsadm v1.31 (compiled with popt and IPVS v1.2.1)

        Note: to avoid split-brain in the cluster, use an odd number of master nodes; see the Kubernetes documentation for details.

IV. Create the Virtual Machines

1. Run on all nodes

1) System initialization

        The seven machines here are clones of an Ubuntu 22.04 VM, so each one needs some basic initialization first. The steps below are for kmaster1; repeat them on the other hosts.

# Boot each machine in turn and set its IP address and hostname

## Enable root login

sudo su
passwd root

vim /etc/ssh/sshd_config
...
PermitRootLogin yes

sudo service ssh restart

# Set the hostname and /etc/hosts

hostnamectl set-hostname kmaster1

vim /etc/hosts
...
192.168.48.210 kmaster1
192.168.48.211 kmaster2
192.168.48.212 kmaster3
192.168.48.213 knode1
192.168.48.214 knode2
192.168.48.215 knode3
192.168.48.216 knode4

## Configure a static IP address on Ubuntu 22.04
ssh ziu@192.168.48.x
sudo su
cp /etc/netplan/00-installer-config.yaml /etc/netplan/00-installer-config.yaml.bbk
vim /etc/netplan/00-installer-config.yaml
network:
  ethernets:
    ens33:
      dhcp4: false
      addresses: [192.168.48.210/24]   ## change the IP according to the version plan table
      optional: true
      routes:
        - to: default
          via: 192.168.48.2
      nameservers: 
        addresses: [192.168.48.2]
  version: 2

netplan apply

2) Enable IPv4 forwarding and let iptables see bridged traffic

# br_netfilter is a kernel module that adds firewall capability to bridge devices, which is why it is also called a transparent or bridging firewall. It is easy to deploy, unobtrusive, and secure.
# The overlay module supports the OverlayFS filesystem. OverlayFS creates a layered view on top of existing filesystems (a union mount): multiple filesystems are merged into a single logical filesystem with layers and priorities, so files and directories from several filesystems can be combined without actually copying or moving them.
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# Set the required sysctl parameters; they persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply the sysctl parameters without rebooting
sudo sysctl --system


# Confirm that the br_netfilter and overlay modules are loaded:

lsmod | grep br_netfilter
lsmod | grep overlay

# Confirm that net.bridge.bridge-nf-call-iptables, net.bridge.bridge-nf-call-ip6tables, and net.ipv4.ip_forward are set to 1 in your sysctl configuration:

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward

3) Set up IPVS

The following explanation of why IPVS is used comes from an AI answer:

Kubernetes (k8s) multi-master clusters are usually advised to use IPVS (IP Virtual Server) as the service proxy, mainly for the following reasons:

(1) Performance:
   IPVS is built into the Linux kernel, and its hash-table-based lookups give it higher throughput and lower latency than iptables (Netfilter) NAT mode in large, high-concurrency deployments.

(2) Rich load-balancing algorithms:
   IPVS supports many algorithms, including round robin, least connections, source hashing, and weighted round robin, so the balancing strategy can be chosen to fit the workload.

(3) Session persistence:
   IPVS supports several persistence methods, such as by source IP, destination IP, or TCP/UDP port, which matters for long-lived connections and stateful applications.

(4) Better scalability and stability:
   In large clusters, the number of iptables rules grows quickly with the number of Services, which can hurt stability and performance. IPVS performs load balancing and forwarding efficiently in the kernel and avoids this problem.

(5) Finer-grained service management:
   IPVS allows more granular control over individual services, such as separate health checks and weight adjustments, which suits complex cloud-native environments.

(6) Support for in-cluster communication:
   Inside a k8s cluster, kube-proxy in IPVS mode handles pod-to-pod communication and service discovery more effectively.

So when building a multi-master Kubernetes cluster, IPVS provides a more efficient and more stable network proxy that meets the availability and performance needs of a distributed system. That said, IPVS is not mandatory: kube-proxy also supports the iptables and userspace modes, but in large production environments IPVS is usually the preferred choice.
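
Once the cluster is up (after the kubeadm steps later in this article), you can confirm that kube-proxy really runs in IPVS mode. A minimal check, assuming kube-proxy's default metrics port 10249:

# Ask kube-proxy which proxy mode it is using
curl -s http://127.0.0.1:10249/proxyMode
# expected output: ipvs

# List the IPVS virtual servers that kube-proxy has programmed
sudo ipvsadm -Ln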

Note: the reference document mentioned at the top also includes IPVS steps, with fewer modules and parameters than below; adjust to your own needs.

# Install IPVS tooling

sudo apt update
sudo apt install ipvsadm ipset sysstat conntrack libseccomp2 -y


cat > /etc/modules-load.d/ipvs.conf << EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF



sudo modprobe ip_vs
sudo modprobe ip_vs_rr
sudo modprobe ip_vs_wrr
sudo modprobe ip_vs_sh
sudo modprobe nf_conntrack
sudo modprobe ip_tables
sudo modprobe ip_set
sudo modprobe xt_set
sudo modprobe ipt_set
sudo modprobe ipt_rpfilter
sudo modprobe ipt_REJECT
sudo modprobe ipip

# Confirm the modules are loaded:

lsmod | grep ip_vs
lsmod | grep ip_vs_rr
lsmod | grep ip_vs_wrr
lsmod | grep ip_vs_sh
lsmod | grep nf_conntrack
lsmod | grep ip_tables
lsmod | grep ip_set
lsmod | grep xt_set
lsmod | grep ipt_set
lsmod | grep ipt_rpfilter
lsmod | grep ipt_REJECT
lsmod | grep ipip

4) Install docker-ce

# Steps from the official documentation
# https://docs.docker.com/engine/install/ubuntu/

for pkg in docker.io docker-doc docker-compose docker-compose-v2 podman-docker containerd runc; do sudo apt-get remove $pkg; done


# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg -y
#sudo install -m 0755 -d /etc/apt/keyrings


curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg

# Add the repository to Apt sources:
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update


sudo apt install docker-ce docker-ce-cli containerd.io 
#sudo apt install docker-buildx-plugin docker-compose-plugin  # optional; not required for the k8s cluster

# Prevent automatic upgrades when running apt update / apt upgrade
sudo apt-mark hold docker-ce docker-ce-cli containerd.io 


docker version
Client: Docker Engine - Community
 Version:           24.0.7
 API version:       1.43
 Go version:        go1.20.10
 Git commit:        afdd53b
 Built:             Thu Oct 26 09:07:41 2023
 OS/Arch:           linux/amd64
 Context:           default

Server: Docker Engine - Community
 Engine:
  Version:          24.0.7
  API version:      1.43 (minimum version 1.12)
  Go version:       go1.20.10
  Git commit:       311b9ff
  Built:            Thu Oct 26 09:07:41 2023
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.26
  GitCommit:        3dd1e886e55dd695541fdcd67420c2888645a495
 runc:
  Version:          1.1.10
  GitCommit:        v1.1.10-0-g18a0cb0
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

5) Install the container runtime shim cri-dockerd

# https://github.com/Mirantis/cri-dockerd/releases
# Run these commands as root
## If the GitHub download fails, the 0.3.8 release mirrored on Gitee can be used instead

wget https://github.com/Mirantis/cri-dockerd/releases/download/v0.3.9/cri-dockerd-0.3.9.amd64.tgz
tar -xf cri-dockerd-0.3.9.amd64.tgz
cd cri-dockerd
mkdir -p /usr/local/bin
install -o root -g root -m 0755 cri-dockerd /usr/local/bin/cri-dockerd
#install packaging/systemd/* /etc/systemd/system
#sed -i -e 's,/usr/bin/cri-dockerd,/usr/local/bin/cri-dockerd,' /etc/systemd/system/cri-docker.service

## Write the service unit -------------------------------------------------------------------------------------
vim /etc/systemd/system/cri-docker.service
[Unit]
Description=CRI Interface for Docker Application Container Engine
Documentation=https://docs.mirantis.com
After=network-online.target firewalld.service docker.service
Wants=network-online.target
Requires=cri-docker.socket

[Service]
Type=notify
ExecStart=/usr/local/bin/cri-dockerd --container-runtime-endpoint fd:// --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always

# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
# Both the old, and new location are accepted by systemd 229 and up, so using the old location
# to make them work for either version of systemd.
StartLimitBurst=3

# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
# this option work for either version of systemd.
StartLimitInterval=60s

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity
Delegate=yes
KillMode=process

[Install]
WantedBy=multi-user.target

## Write the socket unit ---------------------------------------------------------------------------------
vim /etc/systemd/system/cri-docker.socket
[Unit]
Description=CRI Docker Socket for the API
PartOf=cri-docker.service

[Socket]
ListenStream=%t/cri-dockerd.sock
SocketMode=0660
SocketUser=root
SocketGroup=docker

[Install]
WantedBy=sockets.target


## Reload systemd and enable the units at boot
systemctl daemon-reload
systemctl enable --now cri-docker.socket


# Check the cgroup configuration; the default is already systemd, so no change is needed
docker info | grep -i cgroup
 Cgroup Driver: systemd
 Cgroup Version: 2
  cgroupns
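
If docker info had reported cgroupfs instead of systemd, the driver could be switched through /etc/docker/daemon.json. A sketch, only needed when the driver is not already systemd (note that it overwrites any existing daemon.json):

cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
sudo systemctl restart docker
docker info | grep -i 'cgroup driver'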

6) Install kubeadm, kubelet, and kubectl

(1) Pre-installation checks
  • A compatible Linux host

  • At least 2 GB of RAM and 2 CPU cores per node

  • Full network connectivity between all cluster nodes

  • Check the container runtime

# Use any one available container runtime socket; here it is cri-dockerd
root@kmaster1:~/cri-dockerd# ls /var/run/cri-dockerd.sock 
/var/run/cri-dockerd.sock
  • Hostnames, MAC addresses, and product_uuid values must be unique across nodes

# Check the network interfaces and MAC addresses
ip link

# Check the product_uuid
sudo cat /sys/class/dmi/id/product_uuid

# Check the hostname
hostname
  • Required open port: 6443

# Check the port on the host; nothing is listening yet, so there is no output
nc 127.0.0.1 6443
  • Disable swap

# Comment out the swap line in /etc/fstab
vim /etc/fstab 

# Turn swap off immediately
swapoff -a

# Check that the time is consistent across nodes
date

# Disable the firewall
# Stop ufw and firewalld
systemctl stop ufw firewalld
systemctl disable ufw firewalld

# Disable SELinux (Ubuntu does not enable it by default; CentOS does, so take note)
# Ubuntu normally does not ship SELinux at all; if the selinux command and config file are missing, it is not installed and the steps below can be skipped
sed -ri 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config 
setenforce 0
(2) Install the three packages

# The installation steps used here
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -

cat >/etc/apt/sources.list.d/kubernetes.list <<EOF 
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF

mv /etc/apt/sources.list.d/docker.list /root/
apt update

sudo apt list kubeadm -a
sudo apt-get install -y kubelet=1.28.0-00 kubeadm=1.28.0-00 kubectl=1.28.0-00
sudo apt-mark hold kubelet kubeadm kubectl


#--------- Any one of the following package sources can be used ----------------------------------------
# Aliyun mirror
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -

cat >/etc/apt/sources.list.d/kubernetes.list <<EOF 
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF

# Huawei Cloud mirror
curl -fsSL  https://repo.huaweicloud.com/kubernetes/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg


echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://repo.huaweicloud.com/kubernetes/apt/ kubernetes-xenial main' | sudo tee /etc/apt/sources.list.d/kubernetes.list


# Official Kubernetes repository
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg


echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

2. Run on the three master nodes

1) Set up kubectl command completion

apt install bash-completion -y
echo "source <(kubectl completion bash)" >> ~/.bashrc

source .bashrc

2) Install haproxy (Docker-based)

# Create the haproxy configuration file

mkdir /etc/haproxy
vim /etc/haproxy/haproxy.cfg
root@kmaster1:~# grep -v '^#' /etc/haproxy/haproxy.cfg

global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events.  This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #   file. A line like the following can be added to
    #   /etc/sysconfig/syslog
    #
    #    local2.*                       /var/log/haproxy.log
    #
    log         127.0.0.1 local2

    pidfile     /var/run/haproxy.pid
    maxconn     4000
    # daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

frontend  kubernetes-apiserver
    mode tcp
    bind *:9443  ## listen on port 9443
    # bind *:443 ssl # To be completed ....

    acl url_static       path_beg       -i /static /images /javascript /stylesheets
    acl url_static       path_end       -i .jpg .gif .png .css .js

    default_backend             kubernetes-apiserver

backend kubernetes-apiserver
    mode        tcp  # TCP mode
    balance     roundrobin  # round-robin load balancing
    server kmaster1 192.168.48.210:6443 check
    server kmaster2 192.168.48.211:6443 check
    server kmaster3 192.168.48.212:6443 check



root@kmaster1:~# docker run -d \
--restart always \
--name=haproxy \
--net=host  \
-v /etc/haproxy:/usr/local/etc/haproxy:ro \
-v /var/lib/haproxy:/var/lib/haproxy \
haproxy:2.3.6

root@kmaster1:~# docker ps -a
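
Before moving on, it is worth confirming that haproxy actually came up and is listening on the frontend port 9443 configured above (a quick sanity check; since the container runs with --net=host, the port shows up on the host):

docker logs haproxy | tail -n 5
ss -tlnp | grep 9443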

3) Install keepalived (Docker-based)

# Check the interface name; here it is ens33, adjust to your own NIC name
root@kmaster1:~# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:e7:89:b3 brd ff:ff:ff:ff:ff:ff
    altname enp2s1
3: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default 
    link/ether 02:42:58:31:78:56 brd ff:ff:ff:ff:ff:ff
5: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN mode DEFAULT group default 
    link/ether 8e:f2:89:c8:e1:d5 brd ff:ff:ff:ff:ff:ff


# Create the keepalived configuration file

mkdir /etc/keepalived
vim /etc/keepalived/keepalived.conf
global_defs {
   script_user root 
   enable_script_security

}

vrrp_script chk_haproxy {
    script "/bin/bash -c 'if [[ $(netstat -nlp | grep 9443) ]]; then exit 0; else exit 1; fi'"  # haproxy health check; adjust to your own setup
    interval 2  # run the check every 2 seconds
    weight 11 # priority adjustment when the check succeeds
}

vrrp_instance VI_1 {
  interface ens33 # fill in the actual interface name, check with ip addr

  state MASTER # set to BACKUP on the backup nodes
  virtual_router_id 51 # must be identical on all nodes; they form one virtual router group
  priority 100 # initial priority

  # Note: my 3 master nodes were not in the same network segment; without this setting, split-brain with multiple MASTER nodes can occur. The values depend on the current node: list the other 2 nodes here
 # unicast_peer {
#		192.168.48.210
#		192.168.48.211
#		192.168.48.212
 # }

  virtual_ipaddress {
    192.168.48.222  # the virtual IP (VIP)
  }

  authentication {
    auth_type PASS
    auth_pass password
  }

  track_script {
      chk_haproxy
  }

  notify "/container/service/keepalived/assets/notify.sh"
}
docker run --cap-add=NET_ADMIN \
--restart always \
--name keepalived \
--cap-add=NET_BROADCAST \
--cap-add=NET_RAW \
--net=host \
--volume /etc/keepalived/keepalived.conf:/container/service/keepalived/assets/keepalived.conf \
-d osixia/keepalived:2.0.20 \
--copy-service

docker ps -a
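
Optionally, check the keepalived container logs to see which VRRP state this node entered; the exact log wording can differ between keepalived versions:

docker logs keepalived 2>&1 | grep -i state
# On the MASTER node, the VIP should be bound to ens33
ip a show ens33 | grep 192.168.48.222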

4) Verify VIP failover

Prerequisite: haproxy and keepalived are configured on all three master nodes and both containers started successfully.

# The master node currently holding the VIP

root@kmaster1:~# ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:e7:89:b3 brd ff:ff:ff:ff:ff:ff
    altname enp2s1
    inet 192.168.48.212/24 brd 192.168.48.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 192.168.48.222/32 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fee7:89b3/64 scope link 
       valid_lft forever preferred_lft forever
3: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
    inet 10.244.163.64/32 scope global tunl0
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:7a:40:ff:74 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
5: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default 
    link/ether 8e:f2:89:c8:e1:d5 brd ff:ff:ff:ff:ff:ff
    inet 10.96.0.10/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.96.0.1/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.97.78.110/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever


# A master node not holding the VIP
root@kmaster2:~# ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:0c:29:a6:69:6e brd ff:ff:ff:ff:ff:ff
    altname enp2s1
    inet 192.168.48.211/24 brd 192.168.48.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fea6:696e/64 scope link 
       valid_lft forever preferred_lft forever
3: tunl0@NONE: <NOARP,UP,LOWER_UP> mtu 1480 qdisc noqueue state UNKNOWN group default qlen 1000
    link/ipip 0.0.0.0 brd 0.0.0.0
    inet 10.244.135.2/32 scope global tunl0
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:4c:cd:64:63 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever
5: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default 
    link/ether 8e:f2:89:c8:e1:d5 brd ff:ff:ff:ff:ff:ff
    inet 10.96.0.1/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.96.0.10/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.97.78.110/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
10: calibc082d1a6a4@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1480 qdisc noqueue state UP group default qlen 1000
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link 
       valid_lft forever preferred_lft forever
11: cali41e20c84d22@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1480 qdisc noqueue state UP group default qlen 1000
    link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 1
    inet6 fe80::ecee:eeff:feee:eeee/64 scope link 
       valid_lft forever preferred_lft forever
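
To exercise the failover path without rebooting anything, you can stop haproxy on the node that currently holds the VIP; since chk_haproxy watches port 9443, keepalived should move 192.168.48.222 to another master within a few seconds. A sketch of the test:

# On the node currently holding the VIP
docker stop haproxy

# On the other masters, watch for the VIP to appear
ip a show ens33 | grep 192.168.48.222

# Restore the original state on the first node
docker start haproxy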

5) Pull the images needed by the cluster

        kmaster1 is used as the example here; run the same steps on kmaster2 and kmaster3.

# List the image versions
root@kmaster1:~# kubeadm config images list
I0117 08:26:18.644762    4685 version.go:256] remote version is much newer: v1.29.0; falling back to: stable-1.28
registry.k8s.io/kube-apiserver:v1.28.5
registry.k8s.io/kube-controller-manager:v1.28.5
registry.k8s.io/kube-scheduler:v1.28.5
registry.k8s.io/kube-proxy:v1.28.5
registry.k8s.io/pause:3.9
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/coredns/coredns:v1.10.1

# List the image versions in the Aliyun registry
root@kmaster1:~# kubeadm config images list --image-repository registry.aliyuncs.com/google_containers
I0117 08:28:40.181207    4779 version.go:256] remote version is much newer: v1.29.0; falling back to: stable-1.28
registry.aliyuncs.com/google_containers/kube-apiserver:v1.28.5
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.28.5
registry.aliyuncs.com/google_containers/kube-scheduler:v1.28.5
registry.aliyuncs.com/google_containers/kube-proxy:v1.28.5
registry.aliyuncs.com/google_containers/pause:3.9
registry.aliyuncs.com/google_containers/etcd:3.5.9-0
registry.aliyuncs.com/google_containers/coredns:v1.10.1

# Pull the images
root@kmaster1:~# kubeadm config images pull --kubernetes-version=v1.28.0 --image-repository registry.aliyuncs.com/google_containers  --cri-socket unix:///var/run/cri-dockerd.sock
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.28.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.28.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.28.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.28.0
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.9
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.9-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.10.1

3. Run on kmaster1

        Create the configuration file for cluster initialization.

# Create k8s-init-config.yaml on kmaster1

vim k8s-init-config.yaml

----------------------- full file contents --------------------------------
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: wgs001.com3yjucgqr276rf # customizable; must match the regex ([a-z0-9]{6}).([a-z0-9]{16})
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.48.210 # change to this node's IP
  bindPort: 6443
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  imagePullPolicy: IfNotPresent
  name: kmaster1 # this node's hostname
  taints: 
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
  - effect: NoSchedule
    key: node-role.kubernetes.io/control-plane
---
apiServer:
  timeoutForControlPlane: 4m0s
  certSANs: # the three master node IPs plus the VIP
  - 192.168.48.210
  - 192.168.48.211
  - 192.168.48.212
  - 192.168.48.222
apiVersion: kubeadm.k8s.io/v1beta3
controlPlaneEndpoint: "192.168.48.222:6443" # the highly available VIP address
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers # use a registry mirror in China
kind: ClusterConfiguration
kubernetesVersion: v1.28.0 # pin the version
networking:
  dnsDomain: k8s.local
  podSubnet: 10.244.0.0/16 # specify the pod network CIDR
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
# Configures the proxy mode kube-proxy uses for Services: ipvs or iptables
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
---
# Specify the cgroup driver
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: "systemd"
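
Before the real initialization, an optional dry run can catch obvious mistakes in the config file without changing the host:

kubeadm init --config k8s-init-config.yaml --dry-run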

Execution output

#kmaster1
root@kmaster1:~# kubeadm init --config k8s-init-config.yaml --upload-certs
[init] Using Kubernetes version: v1.28.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kmaster1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.k8s.local] and IPs [10.96.0.1 192.168.48.210 192.168.48.222 192.168.48.211 192.168.48.212]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kmaster1 localhost] and IPs [192.168.48.210 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kmaster1 localhost] and IPs [192.168.48.210 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 10.505761 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
b5649c4e771848c33ffeaa3e18c2dc59e94da94c0a3b98c7463421bf2f1810b5
[mark-control-plane] Marking the node kmaster1 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node kmaster1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule node-role.kubernetes.io/control-plane:NoSchedule]
[bootstrap-token] Using token: wgs001.com3yjucgqr276rf
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.48.222:6443 --token wgs001.com3yjucgqr276rf \
	--discovery-token-ca-cert-hash sha256:a64483862c6408f4cde941f375d27ea9fa9ee5012ff37be2acd8c8436c09148d \
	--control-plane --certificate-key b5649c4e771848c33ffeaa3e18c2dc59e94da94c0a3b98c7463421bf2f1810b5

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.48.222:6443 --token wgs001.com3yjucgqr276rf \
	--discovery-token-ca-cert-hash sha256:a64483862c6408f4cde941f375d27ea9fa9ee5012ff37be2acd8c8436c09148d 


root@kmaster1:~# mkdir -p $HOME/.kube
root@kmaster1:~# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
root@kmaster1:~# sudo chown $(id -u):$(id -g) $HOME/.kube/config
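
A quick check at this point should show kmaster1 as a control-plane node (it will likely report NotReady until the Calico CNI is installed in step 6):

kubectl get nodes
kubectl get pods -n kube-system -o wide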

4. Run on kmaster2 and kmaster3

kubeadm join 192.168.48.222:6443 --token wgs001.com3yjucgqr276rf \
	--discovery-token-ca-cert-hash sha256:a64483862c6408f4cde941f375d27ea9fa9ee5012ff37be2acd8c8436c09148d \
	--control-plane --certificate-key b5649c4e771848c33ffeaa3e18c2dc59e94da94c0a3b98c7463421bf2f1810b5  --cri-socket unix:///var/run/cri-dockerd.sock

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

5. Run on knode1 through knode4

kubeadm join 192.168.48.222:6443 --token wgs001.com3yjucgqr276rf \
	--discovery-token-ca-cert-hash sha256:a64483862c6408f4cde941f375d27ea9fa9ee5012ff37be2acd8c8436c09148d  --cri-socket unix:///var/run/cri-dockerd.sock
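
If the bootstrap token from the init output has expired (the config above gives it a 24h TTL), a fresh worker join command can be generated on any control-plane node; remember to append the same --cri-socket flag when running it:

kubeadm token create --print-join-command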

6. Install Calico

For the Calico installation steps, see my other article "ubuntu22.04安装k8s学习环境1主2从(cri-docker方式)" on CSDN; the calico.yaml content is too long to repeat here.

To download the Calico files from GitHub, get version 3.27 from the following address:

https://github.com/projectcalico/calico.git

Path inside the cloned repository: calico/manifests/calico.yaml

root@kmaster1:~# kubectl apply -f calico.yaml
poddisruptionbudget.policy/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
serviceaccount/calico-node created
serviceaccount/calico-cni-plugin created
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrole.rbac.authorization.k8s.io/calico-cni-plugin created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-cni-plugin created
daemonset.apps/calico-node created
deployment.apps/calico-kube-controllers created

## Check the cluster node status
root@kmaster1:~# kubectl get nodes 
NAME       STATUS   ROLES           AGE     VERSION
kmaster1   Ready    control-plane   6h12m   v1.28.0
kmaster2   Ready    control-plane   6h9m    v1.28.0
kmaster3   Ready    control-plane   6h8m    v1.28.0
knode1     Ready    <none>          6h8m    v1.28.0
knode2     Ready    <none>          6h8m    v1.28.0
knode3     Ready    <none>          6h8m    v1.28.0
knode4     Ready    <none>          6h8m    v1.28.0

7. Verify etcd high availability on the master nodes

# Check all master nodes. Usually, once kmaster1 has been updated the other nodes are already in sync, so a check is enough; if you do change the file, restart kubelet afterwards
vim /etc/kubernetes/manifests/etcd.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.48.212:2379
  creationTimestamp: null
  labels:
    component: etcd
    tier: control-plane
  name: etcd
  namespace: kube-system
spec:
  containers:
  - command:
    - etcd
    - --advertise-client-urls=https://192.168.48.212:2379
    - --cert-file=/etc/kubernetes/pki/etcd/server.crt
    - --client-cert-auth=true
    - --data-dir=/var/lib/etcd
    - --experimental-initial-corrupt-check=true
    - --experimental-watch-progress-notify-interval=5s
    - --initial-advertise-peer-urls=https://192.168.48.212:2380
    - --initial-cluster=kmaster2=https://192.168.48.211:2380,kmaster1=https://192.168.48.210:2380,kmaster3=https://192.168.48.212:2380  ## check this line: all three members should be listed
...

    
    
# Restart kubelet
systemctl restart kubelet
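
To confirm that all three etcd members are healthy, etcdctl can be run inside one of the etcd static pods; a sketch using the default kubeadm certificate paths (adjust the pod name to your node):

kubectl -n kube-system exec etcd-kmaster1 -- etcdctl \
  --endpoints=https://192.168.48.210:2379,https://192.168.48.211:2379,https://192.168.48.212:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/peer.crt \
  --key=/etc/kubernetes/pki/etcd/peer.key \
  endpoint health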

V. Cluster High Availability Test

# Create a test pod; once created you are dropped into its shell

kubectl run client --image=ikubernetes/admin-box -it --rm --restart=Never --command -n default -- /bin/bash


# ping test
ping www.baidu.com


# test name resolution with nslookup
nslookup  www.baidu.com


# Shut down the three master nodes one at a time and watch how the VIP 192.168.48.222 moves
ip a s
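
Because admin.conf points at the VIP (controlPlaneEndpoint 192.168.48.222:6443), API availability during the shutdown test can be watched from any node that has a kubeconfig; a minimal sketch:

# Poll the API server through the VIP once per second; it should keep answering
# as long as at least one haproxy/keepalived master and the etcd quorum (2 of 3) survive
while true; do
  kubectl get nodes > /dev/null 2>&1 && echo "$(date +%T) API OK" || echo "$(date +%T) API DOWN"
  sleep 1
done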

VI. Other Operations (use with caution in production)

Note: if cluster creation or joining a node goes wrong, the cluster can be reset with the following steps.

# Drain the node
kubectl drain knode3 --delete-emptydir-data --force --ignore-daemonsets 


# Reset the configuration of the current node
kubeadm reset -f --cri-socket unix:///var/run/cri-dockerd.sock


# Delete the node
kubectl delete node knode3

# Remove leftover configuration
rm -rf /etc/cni/net.d  /root/.kube
ipvsadm --clear
