Preface
This cluster runs Kubernetes v1.24, a release with a major change: dockershim was removed, so Docker is no longer supported as a runtime. Following the official documentation, containerd is used as the software responsible for running containers.
Note: see the official announcement.
Preparing the runtime environment
You can refer to https://blog.csdn.net/yy8623977/article/details/124707433. If Docker was installed, containerd is already present, since containerd is the container runtime underneath Docker (Docker itself is only the client). Installing containerd on its own should also work; I have not tried that, so check the documentation if needed.
Next, configure containerd and bring up Kubernetes.
1、Use containerd as the container runtime
1) Modify the containerd configuration file
Export the default configuration (config.toml does not exist by default):
containerd config default > /etc/containerd/config.toml
Edit the configuration file; the changed parts of config.toml are:
[plugins]
[plugins."io.containerd.grpc.v1.cri"]
## Replace the sandbox_image value with the following, or point it at a local image if one exists
sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
# Set the following line to true (the default is false)
SystemdCgroup = true
...... (omitted) ......
[plugins."io.containerd.grpc.v1.cri".registry.mirrors]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
#endpoint = ["https://registry-1.docker.io"]
# Comment out the line above and add the three lines below
endpoint = ["https://docker.mirrors.ustc.edu.cn"]
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."k8s.gcr.io"]
# This endpoint can also point at a local registry containing the base images the runtime needs
endpoint = ["https://registry.cn-hangzhou.aliyuncs.com/google_containers"]
[plugins."io.containerd.grpc.v1.cri".image_decryption]
key_model = ""
Restart containerd:
systemctl daemon-reload && systemctl restart containerd
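If you prefer not to edit the file by hand, the two key changes above can be scripted. A minimal sketch, operating on a throwaway copy so it is safe to try anywhere (on a real node, point `cfg` at /etc/containerd/config.toml):

```shell
# Work on a throwaway copy; fall back to a stand-in snippet if containerd is absent.
cfg=/tmp/config.toml
containerd config default > "$cfg" 2>/dev/null || cat > "$cfg" <<'EOF'
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = false
    sandbox_image = "k8s.gcr.io/pause:3.6"
EOF
# Switch runc to the systemd cgroup driver
sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' "$cfg"
# Swap the pause image for the Aliyun mirror
sed -i 's#sandbox_image = ".*"#sandbox_image = "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6"#' "$cfg"
grep -E 'SystemdCgroup|sandbox_image' "$cfg"
```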
2) Install the crictl command-line tool
crictl is the command-line tool Kubernetes uses to manage images and containers on containerd:
tar -zxvf crictl-v1.24.0-linux-amd64.tar.gz -C /usr/local/bin
Note: crictl talks to the CRI runtime. Listing images with crictl images may fail with "unix /var/run/dockershim.sock: connect: no such file or directory".
Cause: this cluster uses containerd, not Docker, as the container runtime. So run
cat <<EOF | sudo tee /etc/crictl.yaml
runtime-endpoint: unix:///run/containerd/containerd.sock
EOF
to point crictl at the containerd runtime.
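A slightly fuller crictl.yaml also sets the image endpoint, so that image commands such as crictl pull go through containerd too; the timeout value is an illustrative choice, and the file is written to /tmp here (use /etc/crictl.yaml on a real node):

```shell
# Stand-in path; on a real node write to /etc/crictl.yaml
cat > /tmp/crictl.yaml <<'EOF'
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: false
EOF
cat /tmp/crictl.yaml
```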
2、Install the Kubernetes cluster
For preparing the runtime environment, see https://blog.csdn.net/yy8623977/article/details/124707433
1) Initialize the control plane (run on one master node only, even with multiple masters)
kubeadm init \
--apiserver-advertise-address=192.168.95.205 \
--control-plane-endpoint=cluster-endpoint \
--image-repository registry.cn-hangzhou.aliyuncs.com/google_containers \
--kubernetes-version v1.24.0 \
--service-cidr=10.96.0.0/16 \
--pod-network-cidr=10.244.0.0/16
Notes:
--pod-network-cidr: the IP address range for the pod network
--apiserver-advertise-address: the IP address the API server listens on
--service-cidr: the IP address range for service VIPs; the default is 10.96.0.0/12
--control-plane-endpoint: cluster-endpoint is a custom DNS name mapped to that IP; for now it is resolved locally via /etc/hosts, and later it is tied to the keepalived VIP
Set up .kube/config (master only):
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
Note: while keepalived is not yet configured, kubectl get node may fail with "no route to host".
2) Install the Calico network plugin
See the official Calico documentation.
For choosing a Calico version, see
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.24.md
1. Download calico.yaml
curl https://docs.projectcalico.org/archive/v3.19/manifests/calico.yaml -O
2. Edit the configuration
Change the value under - name: CALICO_IPV4POOL_CIDR to the pod-network-cidr passed to kubeadm init
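This edit can also be scripted. A sketch against a stand-in snippet (in the real manifest the two lines may be commented out and must be uncommented first; run the sed against the downloaded calico.yaml):

```shell
# Stand-in for the relevant lines of calico.yaml
cat > /tmp/calico-cidr.yaml <<'EOF'
            - name: CALICO_IPV4POOL_CIDR
              value: "192.168.0.0/16"
EOF
# Match the CIDR to the --pod-network-cidr used at kubeadm init
sed -i 's#value: "192.168.0.0/16"#value: "10.244.0.0/16"#' /tmp/calico-cidr.yaml
cat /tmp/calico-cidr.yaml
```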
3. Deploy Calico
kubectl apply -f calico.yaml
Note: this can fail with "no route to host"; since keepalived is not installed yet, first add a local hosts entry: 192.168.95.205 cluster-endpoint
3) Join worker nodes (run on the node only)
kubeadm token create --print-join-command ## prints the join command
kubeadm join cluster-endpoint:6443 --token xxx --discovery-token-ca-cert-hash xxx
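If the hash has been lost, the --discovery-token-ca-cert-hash value can be recomputed from the cluster CA (this pipeline is the one documented for kubeadm). The sketch below generates a throwaway self-signed CA so it runs anywhere; on a real master the input would be /etc/kubernetes/pki/ca.crt:

```shell
# Throwaway CA cert just for demonstration; use /etc/kubernetes/pki/ca.crt on a master
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo-ca" \
  -keyout /tmp/ca.key -out /tmp/ca.crt 2>/dev/null
# sha256 of the DER-encoded public key, in the form kubeadm prints
openssl x509 -pubkey -in /tmp/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'
```

The printed 64-character hex string is used as sha256:&lt;hash&gt; in the join command.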
4) Join additional master nodes
First run the following script on an existing master to copy the certificate files to the node being joined:
#!/bin/sh
USER=root # customizable
CONTROL_PLANE_IPS="192.168.95.204" ## change to the IP of the node being joined
for host in ${CONTROL_PLANE_IPS}; do
scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:/etc/kubernetes/pki
scp /etc/kubernetes/pki/ca.key "${USER}"@$host:/etc/kubernetes/pki
scp /etc/kubernetes/pki/sa.key "${USER}"@$host:/etc/kubernetes/pki
scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:/etc/kubernetes/pki
scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:/etc/kubernetes/pki
scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:/etc/kubernetes/pki
scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:/etc/kubernetes/pki/etcd
# Skip the next line if you are using external etcd
scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:/etc/kubernetes/pki/etcd
done
kubeadm token create --print-join-command ## prints the join command
kubeadm join cluster-endpoint:6443 --token xxx --discovery-token-ca-cert-hash xxx --control-plane (run on the node being joined)
# The --control-plane flag tells kubeadm join to create a new control plane; it is required when joining a master
(Add 192.168.95.205 cluster-endpoint to the local hosts file first; adjust it back once keepalived is deployed.)
Note: use at least three machines that meet kubeadm's minimum requirements as control-plane nodes. An odd number of control-plane nodes allows a new leader to be elected after a machine or network-partition failure, so run at least three masters.
3、Install and deploy keepalived + nginx
Run on all three master nodes:
yum install keepalived nginx -y
1) nginx configuration
/etc/nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
include /usr/share/nginx/modules/*.conf;
events {
worker_connections 1024;
}
# Layer-4 load balancing across the three masters' apiservers
stream {
log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
access_log /var/log/nginx/k8s-access.log main;
upstream k8s-apiserver {
# Master APISERVER IP:PORT
server 192.168.95.205:6443;
# Master2 APISERVER IP:PORT
server 192.168.95.206:6443;
# Master3 APISERVER IP:PORT
server 192.168.95.207:6443;
}
server {
listen 16443;
proxy_pass k8s-apiserver;
}
}
http {
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
include /etc/nginx/mime.types;
default_type application/octet-stream;
server {
listen 80 default_server;
server_name _;
location / {
}
}
}
2) keepalived configuration
Keepalived configuration (master1): /etc/keepalived/keepalived.conf
global_defs {
notification_email { # email notification, unused here
acassen@firewall.loc
failover@firewall.loc
sysadmin@firewall.loc
}
notification_email_from fage@qq.com
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id NGINX_MASTER
}
vrrp_script check_nginx {
script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
state MASTER
interface ens3
virtual_router_id 51 # VRRP router ID; unique per VRRP instance, identical across its members
priority 100 # priority; set to 90 on the backup
advert_int 1 # VRRP advertisement interval, default 1 second
authentication {
auth_type PASS
auth_pass 1111
}
# Virtual IP
virtual_ipaddress {
192.168.95.132/24
}
track_script {
check_nginx
}
}
Keepalived configuration (master2)
global_defs {
notification_email {
acassen@firewall.loc
failover@firewall.loc
sysadmin@firewall.loc
}
notification_email_from fage@qq.com
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id NGINX_BACKUP
}
vrrp_script check_nginx {
script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
state BACKUP
interface ens3
virtual_router_id 51 # VRRP router ID; unique per VRRP instance, identical across its members
priority 90
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.95.132/24
}
track_script {
check_nginx
}
}
Keepalived configuration (master3)
global_defs {
notification_email {
acassen@firewall.loc
failover@firewall.loc
sysadmin@firewall.loc
}
notification_email_from fage@qq.com
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id NGINX_MASTER
}
vrrp_script check_nginx {
script "/etc/keepalived/check_nginx.sh"
}
vrrp_instance VI_1 {
state MASTER
interface ens3
virtual_router_id 51 # VRRP router ID; unique per VRRP instance, identical across its members
priority 80 # priority; lower than master1 (100) and master2 (90)
advert_int 1 # VRRP advertisement interval, default 1 second
authentication {
auth_type PASS
auth_pass 1111
}
# Virtual IP
virtual_ipaddress {
192.168.95.132/24
}
track_script {
check_nginx
}
}
/etc/keepalived/check_nginx.sh checks nginx's status; its purpose is to stop the VIP from forwarding requests to a node whose nginx has died.
cat > /etc/keepalived/check_nginx.sh << "EOF"
#!/bin/bash
count=$(netstat -lntup | grep 16443 | egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then
exit 1
else
exit 0
fi
EOF
chmod +x /etc/keepalived/check_nginx.sh
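On systems without net-tools (no netstat), the same check can be written with ss. A variant sketch, written to /tmp here so it is safe to try (use /etc/keepalived/check_nginx.sh on a real node):

```shell
# Stand-in path; on a real node write to /etc/keepalived/check_nginx.sh
cat > /tmp/check_nginx.sh <<'EOF'
#!/bin/bash
# Exit non-zero (keepalived then drops the VIP) when nothing listens on 16443
ss -lnt | grep -q ':16443 ' && exit 0 || exit 1
EOF
chmod +x /tmp/check_nginx.sh
```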
Start the services and enable them at boot:
systemctl daemon-reload
systemctl restart nginx && systemctl enable nginx && systemctl status nginx
systemctl restart keepalived && systemctl enable keepalived && systemctl status keepalived
Finally, check with ip a that the VIP has appeared on master1.
4、Adjust the Kubernetes configuration so high availability takes effect
1. Edit the hosts file on all three masters:
192.168.95.204 multi-master1
192.168.95.205 multi-master2
192.168.95.206 multi-master3
192.168.95.209 multi-node
192.168.95.132 cluster-endpoint #vip
2. On all three masters, edit /etc/kubernetes/admin.conf:
change server: https://cluster-endpoint:6443
to server: https://cluster-endpoint:16443
i.e. point it at the nginx port so that nginx load-balances the API server.
Finally, restart the kubelet service.
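The port change can be applied with sed; demonstrated on a stand-in file here (run the same sed against /etc/kubernetes/admin.conf on each master):

```shell
# Stand-in for /etc/kubernetes/admin.conf
cat > /tmp/admin.conf <<'EOF'
    server: https://cluster-endpoint:6443
EOF
# Point clients at the nginx load-balancer port instead of the apiserver directly
sed -i 's#https://cluster-endpoint:6443#https://cluster-endpoint:16443#' /tmp/admin.conf
cat /tmp/admin.conf
```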
Notes on switching the apiserver address in a local environment
1) Join a master node (to be added)
2) Configure the apiserver address as a DNS name
Using a DNS name for the apiserver lets the masters be load-balanced by nginx through the VIP.
1. Add local hosts entries on the master nodes
Using the local test environment as an example:
192.168.95.204 multi-master1
192.168.95.205 multi-master2
192.168.95.206 multi-master3
192.168.95.209 multi-node
192.168.95.132 cluster-endpoint2 # VIP address
2. Regenerate the apiserver certificate (run on one master only)
cd /etc/kubernetes/pki
rm apiserver.crt apiserver.key
# generate a new apiserver certificate
kubeadm init phase certs apiserver --control-plane-endpoint=cluster-endpoint2
# --control-plane-endpoint specifies a stable IP address or DNS name for the control plane
# copy the new certificate to the other master nodes (USER and host as in the script above)
scp /etc/kubernetes/pki/apiserver.key "${USER}"@$host:/etc/kubernetes/pki
scp /etc/kubernetes/pki/apiserver.crt "${USER}"@$host:/etc/kubernetes/pki
3. Generate new kubeconfig files (all masters)
cd /etc/kubernetes
rm -f admin.conf kubelet.conf controller-manager.conf scheduler.conf
# generate the kubeconfig files
kubeadm init phase kubeconfig all
# change clusters.cluster.server in admin.conf and kubelet.conf:
server: https://192.168.95.204:6443 --> server: https://cluster-endpoint2:16443
Note: 16443 is the nginx layer-4 forwarding port
# overwrite the default kubeconfig file
cp /etc/kubernetes/admin.conf $HOME/.kube/config
4. Restart kubelet
systemctl restart kubelet
If this helped, give it a like!