Setting up a Kubernetes Cluster with containerd

Hostname   IP address         Role            OS        Hardware
master     100.100.35.66/22   control plane   CentOS 9  2 CPU / 4 GB RAM / 50 GB disk
node01     100.100.35.67/22   worker          CentOS 9  1 CPU / 2 GB RAM / 50 GB disk
node02     100.100.35.68/22   worker          CentOS 9  1 CPU / 2 GB RAM / 50 GB disk

I. Preparation

Network configuration

nmcli connection modify ens160 ipv4.method manual ipv4.addresses 100.100.35.68/22 ipv4.gateway 100.100.32.254 ipv4.dns 100.100.36.36 ipv4.dns 100.100.36.12 connection.autoconnect yes
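The nmcli line above sets node02's address (100.100.35.68); master and node01 each need their own. A small helper (a hypothetical name, not part of the original steps) that prints the per-node command so it can be reviewed before running:

```shell
# Hypothetical helper: print the nmcli command for a given node IP, reusing
# the gateway and DNS servers from the plan above. Prints only; run the
# output on the matching node.
nm_cmd() {
    echo "nmcli connection modify ens160 ipv4.method manual" \
         "ipv4.addresses $1/22 ipv4.gateway 100.100.32.254" \
         "ipv4.dns 100.100.36.36 ipv4.dns 100.100.36.12" \
         "connection.autoconnect yes"
}

nm_cmd 100.100.35.66   # master
nm_cmd 100.100.35.67   # node01
```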

1. Update packages (run on every node)

yum update -y
yum upgrade -y
#Install the basic command-line tools we will use
yum -y install net-tools lrzsz wget tree vim unzip bash-completion
echo "source /usr/share/bash-completion/bash_completion" >> /etc/profile
source /etc/profile

2. Disable the firewall, SELinux, and swap (a reboot is required after changing SELinux; run on every node)

systemctl stop firewalld
systemctl disable firewalld
setenforce 0
sed -i 's/enforcing/disabled/' /etc/selinux/config
sed -ri 's/.*swap.*/#&/' /etc/fstab
swapoff -a
df -h
# -r: enable extended regular expressions (+ ? () {} |)
# -i: edit the file in place instead of printing to stdout, i.e. the command actually changes /etc/fstab rather than just displaying the result.
# 's/.*swap.*/#&/': the sed expression, where:
# s: the substitute command.
# &: in the replacement, & stands for the matched text (the entire line containing swap).
# #&: # is the comment character, so #& prepends # to each matched line, commenting it out.
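The effect of that substitution can be checked safely before touching the real /etc/fstab by piping a fabricated sample through the same pattern (the sample lines below are made up):

```shell
# Feed a fabricated fstab fragment through the same pattern. Without -i,
# nothing on disk is modified; the swap line comes back commented out and
# the root line is untouched.
printf '%s\n' \
  'UUID=abcd-1234 / xfs defaults 0 0' \
  '/dev/mapper/cs-swap none swap defaults 0 0' \
  | sed -r 's/.*swap.*/#&/'
```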

3. Pass bridged IPv4 traffic to iptables chains (run on every node)

A bridge is a virtual network device in Linux that acts as a virtual switch: it provides network connectivity for the containers in the cluster, letting a container talk to other containers and to the outside world through the bridge.

vim modprobe.sh
#!/bin/bash

# Make sure the script runs as root
if [ "$(id -u)" -ne 0 ]; then
    echo "Please run this script as root"
    exit 1
fi

# Load the br_netfilter module
modprobe br_netfilter

# Create a modules-load config file so the module loads on boot
echo "br_netfilter" | tee /etc/modules-load.d/br_netfilter.conf

# Create the sysctl config file
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

# Apply the sysctl settings
sysctl -p /etc/sysctl.d/k8s.conf

# Restart NetworkManager to apply the changes
systemctl restart NetworkManager

# Verify the br_netfilter module is loaded
if lsmod | grep -q br_netfilter; then
    echo "br_netfilter module loaded successfully"
else
    echo "failed to load the br_netfilter module"
fi

# Verify IP forwarding is enabled
if sysctl net.ipv4.ip_forward | grep -q "net.ipv4.ip_forward = 1"; then
    echo "IP forwarding enabled"
else
    echo "IP forwarding not enabled"
fi

echo "Configuration complete"

4. Enable the IPVS modules

#!/bin/bash

# Make sure the script runs as root
if [ "$(id -u)" != "0" ]; then
    echo "This script must be run as root" >&2
    exit 1
fi

# Install ipvsadm
yum install ipvsadm -y

# Load the IPVS module
modprobe ip_vs || { echo "failed to load the ip_vs module"; exit 1; }

# Create a modules-load config file so the modules load automatically on boot.
# Note: module names belong in /etc/modules-load.d/, not /etc/sysctl.d/ --
# systemd-modules-load only reads the former. On kernels >= 4.19 (such as
# CentOS 9's) the conntrack module is nf_conntrack, not nf_conntrack_ipv4.
cat <<EOF | tee /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF

# Apply the module configuration
systemctl restart systemd-modules-load.service ; lsmod | grep ip_vs

# Restart the network service to apply the changes
systemctl restart NetworkManager || systemctl restart network

# Verify the IPVS modules are loaded
if lsmod | grep -q ip_vs; then
    echo "IPVS modules loaded successfully"
else
    echo "failed to load the IPVS modules"; exit 1;
fi

# Verify IP forwarding is enabled
if sysctl net.ipv4.ip_forward | grep -q "net.ipv4.ip_forward = 1"; then
    echo "IP forwarding enabled"
else
    echo "IP forwarding not enabled"; exit 1;
fi

echo "IPVS configuration complete"

5. Install the containerd container runtime (run on every node)

vim install_containerd.sh
yum install -y yum-utils
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install containerd.io-1.7.22-3.1.el9.x86_64 -y
mkdir -p /etc/containerd/
#Generate containerd's default config file
containerd config default | tee /etc/containerd/config.toml
#Switch to domestic (China) mirror sources
sed -i "s#registry.k8s.io/#registry.aliyuncs.com/google_containers/#" /etc/containerd/config.toml
sed -i 's#endpoint = ""#endpoint = "https://docker.rainbond.cc"#' /etc/containerd/config.toml
sed -i 's#SystemdCgroup = false#SystemdCgroup = true#' /etc/containerd/config.toml

#Configure crictl: in Kubernetes, kubelet manages containers by talking to containerd through the containerd.sock socket file, so point the tools at that socket
cat <<EOF | tee /etc/crictl.yaml
runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
timeout: 10
debug: false
EOF

systemctl daemon-reload && systemctl start containerd && systemctl enable containerd && systemctl status containerd
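Before relying on the service, it is worth confirming the three sed edits above actually landed in the config. A quick sanity check (not part of the original steps):

```shell
# Grep for each value the sed commands were supposed to set; returns
# non-zero if any of the three edits is missing from the given config.toml.
check_containerd_config() {
    grep -q 'SystemdCgroup = true' "$1" &&
    grep -q 'registry.aliyuncs.com/google_containers/' "$1" &&
    grep -q 'endpoint = "https://docker.rainbond.cc"' "$1"
}

# check_containerd_config /etc/containerd/config.toml && echo "config OK"
```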

6. Configure containerd's image registry mirror

The relevant section of /etc/containerd/config.toml should end up looking like this:

        [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
          [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
             endpoint = ["https://docker.rainbond.cc"]

II. Installing the Kubernetes components

1. First, recall the IP allocation plan

Hostname   IP address         Role            OS        Hardware
master     100.100.35.66/22   control plane   CentOS 9  2 CPU / 4 GB RAM / 50 GB disk
node01     100.100.35.67/22   worker          CentOS 9  1 CPU / 2 GB RAM / 50 GB disk
node02     100.100.35.68/22   worker          CentOS 9  1 CPU / 2 GB RAM / 50 GB disk

2. Configure passwordless SSH (optional; I set it up to make hopping between nodes easier)

#Run on the master
ssh-keygen

cp /root/.ssh/id_rsa.pub /root/.ssh/authorized_keys
scp -rp /root/.ssh/* root@100.100.35.67:/root/.ssh/
scp -rp /root/.ssh/* root@100.100.35.68:/root/.ssh/

3. Edit the hosts file so the nodes can resolve each other by name

#Run on the master
echo "100.100.35.66 k8s-master" >> /etc/hosts
echo "100.100.35.67 k8s-node1" >> /etc/hosts
echo "100.100.35.68 k8s-node2" >> /etc/hosts

#Push a copy to every node
vim copy_host.sh
for i in `seq 1 2`
do
scp /etc/hosts root@k8s-node$i:/etc/hosts
done

4. Install the Kubernetes components (run on every node)

vim kubernetes_install.sh
cat <<EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/rpm/
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/rpm/repodata/repomd.xml.key
EOF
yum install -y kubelet kubeadm kubectl
systemctl enable kubelet && systemctl start kubelet

5. Add command completion (optional, a matter of personal habit)

#Add command completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash) # enable autocompletion in the current shell; the bash-completion package must be installed first.
echo "source <(kubectl completion bash)" >> ~/.bashrc # permanently enable autocompletion in your bash shell
source ~/.bashrc

6. Time synchronization (newer OS releases ship chrony by default, so this step can usually be skipped)

#Run this script on the master (the nodes span .66 to .68, so the loop covers all three)
vim ntp.sh
for i in `seq 66 68`
do
ssh 100.100.35.$i "ntpdate cn.ntp.org.cn"
done

7. Change kubelet's default cgroup driver to systemd

  1. cgroupfs: the legacy cgroupfs driver used on older kernels; it manipulates the kernel's cgroup filesystem directly.
  2. systemd: use systemd as the cgroup driver; on newer distributions systemd has taken over cgroup management, so Kubernetes can manage cgroups through systemd.

Official description:

--cgroup-driver string                                     Driver that the kubelet uses to manipulate cgroups on the host.  Possible values: 'cgroupfs', 'systemd' (default "cgroupfs") (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/  for more information.)
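A quick way to see which cgroup hierarchy the host actually runs (CentOS 9 normally boots with the unified cgroup v2 hierarchy, where systemd is the appropriate driver):

```shell
# cgroup2fs => unified cgroup v2 hierarchy; tmpfs => legacy cgroup v1.
fstype=$(stat -fc %T /sys/fs/cgroup)
if [ "$fstype" = "cgroup2fs" ]; then
    echo "cgroup v2 (use the systemd driver)"
else
    echo "cgroup v1 ($fstype)"
fi
```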
[root@k8s-master ~]# rpm -ql kubelet
/etc/kubernetes
/etc/kubernetes/manifests
/etc/sysconfig/kubelet
/usr/bin/kubelet
/usr/lib/systemd/system/kubelet.service
/usr/share/doc/packages/kubelet
/usr/share/doc/packages/kubelet/README.md
/usr/share/licenses/kubelet
/usr/share/licenses/kubelet/LICENSE
/var/lib/kubelet

cat > /etc/sysconfig/kubelet <<EOF
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
EOF

8. Initialize the cluster with kubeadm (master)

#Print the default kubeadm init configuration
kubeadm config print init-defaults > kubeadm_config.yaml

#Change the image pull repository to a domestic mirror
sed -i 's#imageRepository: registry.k8s.io#imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers#' kubeadm_config.yaml
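The generated defaults also carry placeholder values: `kubeadm config print init-defaults` emits `advertiseAddress: 1.2.3.4` and `name: node`, which should be pointed at the real master before running init. A sketch of the extra edits (the helper name is mine; the values match the plan above):

```shell
# Replace the placeholder API-server address and node name in a generated
# kubeadm config: $1 = file, $2 = master IP, $3 = master hostname.
fix_init_defaults() {
    sed -i -e "s/advertiseAddress: 1.2.3.4/advertiseAddress: $2/" \
           -e "s/name: node$/name: $3/" "$1"
}

# fix_init_defaults kubeadm_config.yaml 100.100.35.66 k8s-master
```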
kubeadm init --config kubeadm_config.yaml  --upload-certs
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 100.100.35.66:6443 --token abcdef.0123456789abcdef \
        --discovery-token-ca-cert-hash sha256:4b70dbd1a18e6edce70ceb8dd8e21aac0ec2ba19eef3a09bb90be626b0ebc9e3
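If the join command above is lost, it can be regenerated on the master with `kubeadm token create --print-join-command`; the `--discovery-token-ca-cert-hash` value can also be recomputed by hand from the cluster CA. A sketch (the function name is mine; the path is kubeadm's default location):

```shell
# Compute the sha256 hash of the CA certificate's public key, the same
# value kubeadm prints as --discovery-token-ca-cert-hash.
ca_cert_hash() {
    openssl x509 -pubkey -in "$1" \
      | openssl pkey -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex \
      | awk '{print "sha256:" $NF}'
}

# ca_cert_hash /etc/kubernetes/pki/ca.crt
```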

9. Command completion, import the calico images, and install the cluster network plugin

echo "source /usr/share/bash-completion/bash_completion" >> /etc/profile
source /etc/profile
echo "source <(kubectl completion bash)" >> ~/.bashrc
source ~/.bashrc
#Import the calico images (exported ahead of time) into containerd's k8s.io namespace
ctr -n k8s.io images import calico-cni.tar
ctr -n k8s.io images import calico-node.tar
ctr -n k8s.io images import calico-controllers.tar
wget https://raw.githubusercontent.com/projectcalico/calico/v3.24.1/manifests/calico.yaml
kubectl apply -f calico.yaml


10. Switch kube-proxy to IPVS mode

Find the mode field in the config section and set it to "ipvs":

kubectl edit configmaps kube-proxy -n kube-system

Restart kube-proxy:

kubectl  delete pods -n kube-system -l k8s-app=kube-proxy
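`kubectl edit` is interactive; for a scripted change, the same edit can be sketched with sed on a dump of the ConfigMap (assuming the default layout, where mode is the empty string):

```shell
# Flip the kube-proxy mode field from the default "" to "ipvs" in a dumped
# ConfigMap; the patched YAML goes to stdout, the original file is untouched.
set_ipvs_mode() {
    sed 's/^\( *mode:\) ""/\1 "ipvs"/' "$1"
}

# kubectl get configmap kube-proxy -n kube-system -o yaml > kube-proxy.yaml
# set_ipvs_mode kube-proxy.yaml > patched.yaml && kubectl apply -f patched.yaml
```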

Putting it all together: an initialization script

#!/bin/bash

function sys_update {
echo -e "\033[34m>>>>>>>>>>>>>>>>>>>>>>> Installing base packages <<<<<<<<<<<<<<<<<<<<<<<<\033[0m"
yum -y install net-tools lrzsz wget tree vim unzip bash-completion
}

# Define function stop_firewalld
function stop_firewalld {
echo -e "\033[34m>>>>>>>>>>>>>>>>>>>>>>> Disabling firewall, SELinux, and swap <<<<<<<<<<<<<<<<<<<<<<<<\033[0m"
systemctl stop firewalld && systemctl disable firewalld && setenforce 0 && \
sed -i 's/enforcing/disabled/' /etc/selinux/config && sed -ri 's/.*swap.*/#&/' /etc/fstab && \
swapoff -a && df -h && echo -e "\033[32mFirewall disabled\033[0m" || echo -e "\033[31mFailed to disable firewall\033[0m"
}

# Define function modprobe_up
function modprobe_up {
echo -e "\033[34m>>>>>>>>>>>>>>>>>>>>>>> Loading kernel modules <<<<<<<<<<<<<<<<<<<<<<<<\033[0m"
if [ "$(id -u)" -ne 0 ]; then
echo "Please run this script as root"
exit 1
fi
modprobe br_netfilter && echo "br_netfilter" | tee /etc/modules-load.d/br_netfilter.conf && \
cat << EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf && systemctl restart NetworkManager && \
if lsmod | grep -q br_netfilter; then echo "br_netfilter module loaded successfully"; else echo "failed to load the br_netfilter module"; fi && \
if sysctl net.ipv4.ip_forward | grep -q "net.ipv4.ip_forward = 1"; then echo "IP forwarding enabled"; else echo "IP forwarding not enabled"; fi && \
echo -e "\033[32mKernel modules loaded\033[0m" || echo -e "\033[31mFailed to load kernel modules\033[0m"
}

# Define function ipvs
function ipvs {
echo -e "\033[34m>>>>>>>>>>>>>>>>>>>>>>> Loading IPVS modules <<<<<<<<<<<<<<<<<<<<<<<<\033[0m"
yum install ipvsadm -y && \
modprobe ip_vs || { echo "failed to load the ip_vs module"; exit 1; } && \
cat <<EOF | tee /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
systemctl restart systemd-modules-load.service && lsmod | grep ip_vs && \
systemctl restart NetworkManager || systemctl restart network && \
if lsmod | grep -q ip_vs; then echo "IPVS modules loaded successfully"; else echo "failed to load the IPVS modules"; exit 1; fi && \
if sysctl net.ipv4.ip_forward | grep -q "net.ipv4.ip_forward = 1"; then echo "IP forwarding enabled"; else echo "IP forwarding not enabled"; exit 1; fi && \
echo -e "\033[32mIPVS configuration complete\033[0m" || echo -e "\033[31mIPVS configuration failed\033[0m"
}

# Define function install_containerd
function install_containerd {
echo -e "\033[34m>>>>>>>>>>>>>>>>>>>>>>> Installing containerd <<<<<<<<<<<<<<<<<<<<<<<<\033[0m"
yum install -y yum-utils && yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo && \
yum install containerd.io-1.7.22-3.1.el9.x86_64 -y && mkdir -p /etc/containerd/ && \
containerd config default | tee /etc/containerd/config.toml && \
sed -i "s#registry.k8s.io/#registry.aliyuncs.com/google_containers/#" /etc/containerd/config.toml && \
sed -i 's#endpoint = ""#endpoint = "https://docker.rainbond.cc"#' /etc/containerd/config.toml && \
sed -i 's#SystemdCgroup = false#SystemdCgroup = true#' /etc/containerd/config.toml && \
cat << EOF | tee /etc/crictl.yaml
runtime-endpoint: unix:///var/run/containerd/containerd.sock
image-endpoint: unix:///var/run/containerd/containerd.sock
timeout: 10
debug: false
EOF
systemctl daemon-reload && systemctl start containerd && systemctl enable containerd && \
systemctl status containerd > /dev/null 2>&1
    if [ $? -eq 0 ]; then
        echo "containerd service started successfully"
    else
        echo "failed to start the containerd service" >&2
        exit 1
    fi
echo -e "\033[32mcontainerd installed\033[0m"
}

# Define function kubernetes_install
function kubernetes_install {
echo -e "\033[34m>>>>>>>>>>>>>>>>>>>>>>> Installing the Kubernetes components <<<<<<<<<<<<<<<<<<<<<<<<\033[0m"
cat << EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/rpm/
enabled=1
gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes-new/core/stable/v1.30/rpm/repodata/repomd.xml.key
EOF
yum install -y kubelet kubeadm kubectl && systemctl enable kubelet && systemctl start kubelet && \
cat <<EOF | tee /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
EOF
echo -e "\033[32mKubernetes components installed\033[0m" || echo -e "\033[31mKubernetes component installation failed\033[0m"
}

# Run the functions
sys_update
stop_firewalld
modprobe_up
ipvs
install_containerd
kubernetes_install