Building a highly available multi-master Kubernetes 1.24 cluster on AlmaLinux with kubeadm, cri-o, and Calico

Preface: this is the third article in my Kubernetes setup series; the other three are:
1. CentOS 7.9 + haproxy + keepalived + docker + flannel + kubeadm + 1.23.6
https://blog.csdn.net/lic95/article/details/124903648?spm=1001.2014.3001.5501

2. AlmaLinux + haproxy + keepalived + containerd + Calico + kubeadm + 1.24
https://blog.csdn.net/lic95/article/details/125018220?spm=1001.2014.3001.5501

3. AlmaLinux + kube-vip + cri-o + Calico + kubeadm + 1.24
https://blog.csdn.net/lic95/article/details/125036070

I. Node layout

OS                            Hostname    IP address
Virtual load balancer (VIP)   master      192.168.3.30
AlmaLinux release 8.6         master01    192.168.3.31
AlmaLinux release 8.6         master02    192.168.3.32
AlmaLinux release 8.6         master03    192.168.3.33
AlmaLinux release 8.6         node01      192.168.3.41
AlmaLinux release 8.6         node02      192.168.3.42
AlmaLinux release 8.6         node03      192.168.3.43
AlmaLinux release 8.6         node04      192.168.3.44
AlmaLinux release 8.6         node05      192.168.3.45

II. Initialize all nodes as template machines
1. Base template configuration: utilities, host resolution, time synchronization, ipvsadm, kernel parameters

# Install utility packages
yum install vim net-tools wget lsof ipset telnet iproute-tc -y

# Disable firewalld and SELinux
systemctl stop firewalld && systemctl disable firewalld
sed -i '/^SELINUX=/c SELINUX=disabled' /etc/selinux/config
setenforce 0

# Disable swap
swapoff -a
sed -i 's/^.*almalinux-swap/#&/g' /etc/fstab

# Configure host name resolution (append the entries only if this host's IP is not already present)
if [ -z "`cat /etc/hosts | grep \`ip route ls | grep 192.168.3.0/24 | awk '{print $9}'\` | awk '{print $2}'`" ]; then
cat << EOF >> /etc/hosts
192.168.3.30 master
192.168.3.31 master01
192.168.3.32 master02
192.168.3.33 master03
192.168.3.41 node01
192.168.3.42 node02
192.168.3.43 node03
192.168.3.44 node04
192.168.3.45 node05
EOF
fi

# Derive this host's name from its IP address and write it to /etc/hostname
echo `cat /etc/hosts | grep \`ip route ls | grep 192.168.3.0/24 | awk '{print $9}'\` | awk '{print $2}'` >/etc/hostname

# Apply the hostname immediately (visible on the next terminal login)
hostnamectl set-hostname `cat /etc/hosts | grep \`ip route ls | grep 192.168.3.0/24 | awk '{print $9}'\` | awk '{print $2}'`
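
The nested backticks above are hard to read; here is an equivalent sketch using $(...) that spells out the two steps (find this host's source IP on the 192.168.3.0/24 route, then look that IP up in /etc/hosts):

# Equivalent, more readable form of the hostname lookup
MYIP=$(ip route ls | grep 192.168.3.0/24 | awk '{print $9}')   # src IP on this subnet
MYNAME=$(grep "$MYIP" /etc/hosts | awk '{print $2}')           # matching /etc/hosts entry
echo "$MYNAME" > /etc/hostname
hostnamectl set-hostname "$MYNAME"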

# Configure cluster time synchronization
yum install -y chrony

# On the master nodes:
if [ ! -z "`cat /etc/hosts | grep \`ip route ls | grep 192.168.3.0/24 | awk '{print $9}'\` | grep master `" ]; then
cat > /etc/chrony.conf << EOF
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
logdir /var/log/chrony
server ntp1.aliyun.com iburst
local stratum 10
allow 192.168.3.0/24
EOF
systemctl restart chronyd
systemctl enable chronyd
fi

# On the worker nodes:
if [ ! -z "`cat /etc/hosts | grep \`ip route ls | grep 192.168.3.0/24 | awk '{print $9}'\` | grep node `" ]; then
cat > /etc/chrony.conf << EOF
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
logdir /var/log/chrony
server 192.168.3.31 iburst
server 192.168.3.32 iburst
server 192.168.3.33 iburst
EOF
systemctl restart chronyd
systemctl enable chronyd
fi
# Check synchronization status:
chronyc sources -v

# Install ipvsadm and enable IPVS; without these modules kube-proxy falls back to iptables mode, which is less efficient, so enabling the IPVS kernel modules is recommended
yum install ipvsadm ipset sysstat conntrack libseccomp -y

cat > /etc/modules-load.d/ipvs.conf <<EOF 
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
EOF
systemctl restart systemd-modules-load.service

# Load the overlay and br_netfilter modules
cat << EOF > /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
modprobe overlay
modprobe br_netfilter
systemctl restart systemd-modules-load.service


# Kernel parameters: enable IP forwarding and let iptables process bridged traffic
cat << EOF > /etc/sysctl.d/k8s.conf 
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

# Apply immediately
sysctl --system
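
A quick optional sanity check that the modules and sysctls actually took effect:

# Confirm the IPVS and bridge modules are loaded and the sysctls are set
lsmod | grep -e ip_vs -e nf_conntrack
lsmod | grep -e overlay -e br_netfilter
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables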

2. Install and configure cri-o on all nodes

VERSION=1.24
curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable.repo \
    https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable/CentOS_8/devel:kubic:libcontainers:stable.repo
curl -L -o /etc/yum.repos.d/devel:kubic:libcontainers:stable:cri-o:${VERSION}.repo \
    https://download.opensuse.org/repositories/devel:kubic:libcontainers:stable:cri-o:${VERSION}/CentOS_8/devel:kubic:libcontainers:stable:cri-o:${VERSION}.repo

yum install cri-o podman podman-docker -y
systemctl daemon-reload
systemctl start crio
systemctl enable crio
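
Verify that cri-o is up before moving on. If crictl is available (it ships in the separate cri-tools package), it can also query the runtime directly; the socket path below is cri-o's default:

# cri-o service and runtime check
systemctl is-active crio
crio version
crictl --runtime-endpoint unix:///var/run/crio/crio.sock info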

3. Install kubeadm and related tools on all nodes

# The official repo is hosted overseas; configure the Aliyun mirror of the Kubernetes el7 repo instead (its packages install fine on EL8)
cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Install kubeadm, kubelet, and kubectl
yum install -y kubelet-1.24.1 kubeadm-1.24.1 kubectl-1.24.1

# Point kubelet at the cri-o runtime and the systemd cgroup driver
cat <<EOF >/etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--container-runtime=remote --cgroup-driver=systemd --container-runtime-endpoint='unix:///var/run/crio/crio.sock' --runtime-request-timeout=5m"
EOF

systemctl daemon-reload 
systemctl enable kubelet
systemctl start kubelet
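
At this point kubelet will restart in a loop, which is expected: it has no configuration until kubeadm init (or kubeadm join) generates one. You can confirm it is merely waiting for config:

# Expected to show repeated restarts until kubeadm init/join runs
systemctl status kubelet --no-pager
journalctl -u kubelet --no-pager | tail -n 20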

III. High availability: deploy haproxy + keepalived on master01, master02, and master03

1. Deploy haproxy with podman
https://github.com/haproxy/haproxy

# Create the config directory on master01, master02, and master03
mkdir -p /etc/haproxy

# Create /etc/haproxy/haproxy.cfg on master01, master02, and master03; the key settings are the frontend bind port and the backend server list:
tee /etc/haproxy/haproxy.cfg << 'EOF'
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events.  This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #   file. A line like the following can be added to
    #   /etc/sysconfig/syslog
    #
    #    local2.*                       /var/log/haproxy.log
    #
    log         127.0.0.1 local2

    #chroot      /var/lib/haproxy
    #pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          5m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

#---------------------------------------------------------------------
# kubernetes apiserver frontend which proxies to the backends
#---------------------------------------------------------------------
frontend kubernetes-apiserver
    mode                 tcp
    bind                 *:16443
    option               tcplog
    default_backend      kubernetes-apiserver

#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend kubernetes-apiserver
    mode        tcp
    balance     roundrobin
    server  master01 192.168.3.31:6443 check
    server  master02 192.168.3.32:6443 check
    server  master03 192.168.3.33:6443 check
EOF
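
Optionally, validate the file before starting the container. haproxy's -c flag performs a syntax check only, and the command below reuses the same image and bind mount as the run command in the next step:

# Syntax-check /etc/haproxy/haproxy.cfg inside the haproxy image
podman run --rm -v /etc/haproxy:/usr/local/etc/haproxy:ro haproxy \
    haproxy -c -f /usr/local/etc/haproxy/haproxy.cfg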

2. Start haproxy on each of the three masters

podman run -d --name=diamond-haproxy --net=host  -v /etc/haproxy:/usr/local/etc/haproxy:ro haproxy

# Generate a systemd unit so haproxy starts on boot
podman generate systemd --restart-policy always -t 5 -n --new -f diamond-haproxy
mv container-diamond-haproxy.service /etc/systemd/system/
systemctl daemon-reload
restorecon -RvF /etc/systemd/system/container-diamond-haproxy.service
systemctl enable container-diamond-haproxy.service --now
systemctl status container-diamond-haproxy.service
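
Confirm haproxy is listening on the frontend port on each master:

# 16443 is the frontend bind port from haproxy.cfg
ss -nltp | grep 16443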

3. Deploy keepalived with podman
https://github.com/osixia/docker-keepalived
keepalived is built on VRRP (Virtual Router Redundancy Protocol): one master and several backups. The master holds the VIP, serves traffic, and sends VRRP advertisements; when the backups stop receiving them, they assume the master is down and elect the remaining node with the highest priority as the new master, which takes over the VIP. keepalived is the component that provides high availability here.

# Create the config directory on master01, master02, and master03
mkdir -p /etc/keepalived/

tee /etc/keepalived/keepalived.conf << 'EOF'
global_defs {
   script_user root 
   enable_script_security

}

vrrp_script chk_haproxy {
    # vrrp_script checks whether the local haproxy is healthy. If haproxy is down,
    # holding the VIP is useless because traffic cannot be forwarded to the apiservers.
    script "/bin/bash -c 'if [[ $(netstat -nlp | grep 16443) ]]; then exit 0; else exit 1; fi'"  # haproxy health check
    interval 2  # run the check every 2 seconds
    weight 11   # added to the priority while the check passes
}

vrrp_instance VI_1 {
  interface ens160  # change to match your NIC name

  state MASTER # set to BACKUP on the backup nodes
  virtual_router_id 51 # must be identical on all members of the same virtual router group
  priority 100 # base priority
  nopreempt # non-preemptive: a recovered node does not take the VIP back (keepalived expects state BACKUP on all nodes when nopreempt is used)

  unicast_peer {

  }

  virtual_ipaddress {
    192.168.3.30  # vip
  }

  authentication {
    auth_type PASS
    auth_pass password
  }

  track_script {
      chk_haproxy
  }

  notify "/container/service/keepalived/assets/notify.sh"
}
EOF

4. Start keepalived on each of the three masters

podman run -d --name keepalived-k8s \
    --cap-add=NET_ADMIN --cap-add=NET_BROADCAST --cap-add=NET_RAW --net=host \
    --volume /etc/keepalived/keepalived.conf:/container/service/keepalived/assets/keepalived.conf:ro \
    osixia/keepalived --copy-service

# Generate a systemd unit so keepalived starts on boot
podman generate systemd --restart-policy always -t 5 -n --new -f keepalived-k8s
mv container-keepalived-k8s.service /etc/systemd/system/
systemctl daemon-reload
restorecon -RvF /etc/systemd/system/container-keepalived-k8s.service
systemctl enable container-keepalived-k8s.service --now
systemctl status container-keepalived-k8s.service
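
To see the VRRP election at work: while the chk_haproxy check passes, a node runs at priority 100 + 11 = 111; a node whose haproxy is down stays at 100 and loses the VIP to a healthy peer. Two quick checks (tcpdump only if it is installed):

# Which master currently holds the VIP?
ip -4 addr show | grep 192.168.3.30
# Watch VRRP advertisements on the wire (IP protocol 112); adjust the NIC name
tcpdump -i ens160 -c 5 ip proto 112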

5. Check the haproxy and keepalived containers and their logs to troubleshoot

podman ps -a | grep keepalived
podman ps -a | grep haproxy
podman logs 7e5484cb6a75

6. Test haproxy + keepalived

[root@master01 ~]# docker ps
CONTAINER ID   IMAGE               COMMAND                  CREATED         STATUS              PORTS     NAMES
27f5a67a7f51   osixia/keepalived   "/container/tool/run…"   3 minutes ago   Up About a minute             boring_hodgkin
cb1157ff6bcb   haproxy             "docker-entrypoint.s…"   7 minutes ago   Up About a minute             diamond-haproxy

[root@master03 ~]# ip a | grep 192.168.3
    inet 192.168.3.33/24 brd 192.168.3.255 scope global noprefixroute ens160
    inet 192.168.3.30/32 scope global ens160

IV. Set up the master01 node
1. Work around a cri-o image issue: manually pull the pause 3.6 image from the Aliyun mirror and retag it, since registry.k8s.io is unreachable without a proxy. Skip this step if you have unrestricted network access.

podman pull registry.aliyuncs.com/google_containers/pause:3.6
podman tag registry.aliyuncs.com/google_containers/pause:3.6 registry.k8s.io/pause:3.6

2. Initialize the Kubernetes master01 node

kubeadm init \
    --image-repository=registry.aliyuncs.com/google_containers  \
    --kubernetes-version v1.24.1 \
    --service-cidr=172.18.0.0/16      \
    --pod-network-cidr=10.244.0.0/16 \
    --control-plane-endpoint=192.168.3.30:6443 \
    --cri-socket=/var/run/crio/crio.sock \
    --upload-certs \
    --v=5

Option notes:
  --image-repository: registry to pull control-plane images from (default "k8s.gcr.io")
  --kubernetes-version: pin a specific Kubernetes version (default "stable-1")
  --service-cidr: IP range for service virtual IPs (default "10.96.0.0/12")
  --pod-network-cidr: IP range for the pod network; when set, each node is automatically assigned a pod CIDR
  --cri-socket: select cri-o as the container runtime

Note: haproxy as configured above listens on port 16443, so with --control-plane-endpoint=192.168.3.30:6443 API traffic goes straight to the apiserver on whichever node currently holds the VIP. That still fails over, but it bypasses haproxy's load balancing; to route through haproxy, use 192.168.3.30:16443 here and in every join command.

3. The output shows the success message and some follow-up hints

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.3.30:6443 --token ke6jyo.7twag17c2kf9688x \
        --discovery-token-ca-cert-hash sha256:677ecf2a0bd6617daec5f292a962ce6de99275e29373b1cc158cef00329ec57d \
        --control-plane --certificate-key 72b3c0796cdf595bce9f060edfd3742830d3062f35c9f41166d91698bc29b260

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.3.30:6443 --token ke6jyo.7twag17c2kf9688x \
        --discovery-token-ca-cert-hash sha256:677ecf2a0bd6617daec5f292a962ce6de99275e29373b1cc158cef00329ec57d 

4. Run the following, as the hints above suggest

# To start using the cluster, run the following as a regular user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Or, if you are the root user, run
export KUBECONFIG=/etc/kubernetes/admin.conf

# Add it to .bashrc so it is set automatically on future logins
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >>/root/.bashrc

5. Join master02 and master03 as control-plane nodes

# The same cri-o pause-image workaround as on master01; skip if you have unrestricted network access
podman pull registry.aliyuncs.com/google_containers/pause:3.6
podman tag registry.aliyuncs.com/google_containers/pause:3.6 registry.k8s.io/pause:3.6

# [root@master02 ~]# 
kubeadm join 192.168.3.30:6443 --token ke6jyo.7twag17c2kf9688x \
        --discovery-token-ca-cert-hash sha256:677ecf2a0bd6617daec5f292a962ce6de99275e29373b1cc158cef00329ec57d \
        --control-plane --certificate-key 72b3c0796cdf595bce9f060edfd3742830d3062f35c9f41166d91698bc29b260

# [root@master03 ~]# 
kubeadm join 192.168.3.30:6443 --token ke6jyo.7twag17c2kf9688x \
        --discovery-token-ca-cert-hash sha256:677ecf2a0bd6617daec5f292a962ce6de99275e29373b1cc158cef00329ec57d \
        --control-plane --certificate-key 72b3c0796cdf595bce9f060edfd3742830d3062f35c9f41166d91698bc29b260

# On master02 and master03, follow the same hints:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >>/root/.bashrc

6. Test haproxy failover across master01, master02, and master03

# Stop haproxy on master01, verify kubectl still works through the VIP from another master,
# then start it again. Note the systemd unit generated earlier restarts the container
# automatically, so use systemctl stop container-diamond-haproxy.service for a longer outage.
[root@master01 ~]# podman stop $(podman ps -aq --filter name=haproxy)
[root@master01 ~]# podman start $(podman ps -aq --filter name=haproxy)

V. Install the Calico network plugin

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
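
The images take a minute or two to pull. One way to watch the rollout, using the DaemonSet and Deployment names this manifest creates (visible in the pod list below):

# Wait for the Calico components to finish rolling out
kubectl -n kube-system rollout status daemonset/calico-node
kubectl -n kube-system rollout status deployment/calico-kube-controllers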

# Pod status after installation
[root@master01 ~]# kubectl get pod -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-56cdb7c587-r792d   0/1     Pending   0          112s
kube-system   calico-node-dv5hn                          1/1     Running   0          113s
kube-system   calico-node-j2kcb                          1/1     Running   0          113s
kube-system   calico-node-jhsl5                          1/1     Running   0          113s
kube-system   coredns-74586cf9b6-92x27                   1/1     Running   0          24m
kube-system   coredns-74586cf9b6-fhrkw                   1/1     Running   0          24m
kube-system   etcd-master01                              1/1     Running   0          24m
kube-system   etcd-master02                              1/1     Running   0          21m
kube-system   etcd-master03                              1/1     Running   0          2m25s
kube-system   kube-apiserver-master01                    1/1     Running   0          24m
kube-system   kube-apiserver-master02                    1/1     Running   0          21m
kube-system   kube-apiserver-master03                    1/1     Running   11         2m22s
kube-system   kube-controller-manager-master01           1/1     Running   0          24m
kube-system   kube-controller-manager-master02           1/1     Running   0          20m
kube-system   kube-controller-manager-master03           1/1     Running   2          81s
kube-system   kube-proxy-4lrmz                           1/1     Running   0          21m
kube-system   kube-proxy-55smk                           1/1     Running   0          2m35s
kube-system   kube-proxy-qt79l                           1/1     Running   0          24m
kube-system   kube-scheduler-master01                    1/1     Running   0          24m
kube-system   kube-scheduler-master02                    1/1     Running   0          21m
kube-system   kube-scheduler-master03                    1/1     Running   2          2m29s

VI. Add the 5 worker nodes to the cluster

# On master01, print the join command
[root@master01 ~]# kubeadm token create --print-join-command
kubeadm join 192.168.3.30:6443 --token ke6jyo.7twag17c2kf9688x \
        --discovery-token-ca-cert-hash sha256:677ecf2a0bd6617daec5f292a962ce6de99275e29373b1cc158cef00329ec57d 
[root@master01 ~]# 

# Run the join command on node01 through node05 (see the sketch below for the pause-image fix each node needs first)
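
The workers run cri-o too, so each node needs the same pause-image workaround before joining. A per-node sketch, substituting the token and hash printed by your own kubeadm token create --print-join-command:

# On each of node01..node05
podman pull registry.aliyuncs.com/google_containers/pause:3.6
podman tag registry.aliyuncs.com/google_containers/pause:3.6 registry.k8s.io/pause:3.6
kubeadm join 192.168.3.30:6443 --token ke6jyo.7twag17c2kf9688x \
        --discovery-token-ca-cert-hash sha256:677ecf2a0bd6617daec5f292a962ce6de99275e29373b1cc158cef00329ec57d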

# Verify from any master
kubectl get nodes
kubectl get nodes -o wide
kubectl get pods --all-namespaces

# Wait until all nodes become Ready
[root@master01 ~]# kubectl get nodes
NAME       STATUS   ROLES           AGE   VERSION
master01   Ready    control-plane   20m   v1.24.1
master02   Ready    control-plane   17m   v1.24.1
master03   Ready    control-plane   17m   v1.24.1
node01     Ready    <none>          71s   v1.24.1
node02     Ready    <none>          67s   v1.24.1
node03     Ready    <none>          64s   v1.24.1
node04     Ready    <none>          62s   v1.24.1
node05     Ready    <none>          59s   v1.24.1
[root@master01 ~]# 

VII. Deploy the dashboard
  Dashboard is a web-based Kubernetes user interface. You can use it to deploy containerized applications to the cluster, troubleshoot them, and manage the cluster itself along with its resources. It gives you an overview of the applications running in the cluster and lets you create or modify Kubernetes resources (Deployments, Jobs, DaemonSets, and so on); for example, you can scale a Deployment, perform a rolling update, restart a Pod, or deploy a new application with a wizard.
   Install the dashboard (https://github.com/kubernetes/dashboard):

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.5.1/aio/deploy/recommended.yaml

# Wait patiently until the pods reach Running
kubectl get pods -n kubernetes-dashboard


# Change the service type to expose a node port
[root@master01 ~]# kubectl edit svc -n kubernetes-dashboard kubernetes-dashboard
Change type: ClusterIP to type: NodePort and save.
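
Equivalently, and friendlier to scripting than an interactive edit, you can patch the service type directly (same effect as the edit above):

kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard \
    -p '{"spec": {"type": "NodePort"}}'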


# Find the exposed node port
[root@master01 ~]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   172.18.175.186   <none>        8000/TCP        82s
kubernetes-dashboard        NodePort    172.18.218.143   <none>        443:30681/TCP   82s
[root@master01 ~]# 

Open it in a browser:
https://192.168.3.30:30681/#/login
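
The login page asks for a bearer token, which nothing above has created yet. A minimal sketch following the sample-user recipe from the dashboard repository; the admin-user name and the cluster-admin binding are choices, not requirements, and kubectl create token is available because this cluster runs 1.24:

# Create a service account, grant it cluster-admin, and issue a login token
kubectl -n kubernetes-dashboard create serviceaccount admin-user
kubectl create clusterrolebinding admin-user \
    --clusterrole=cluster-admin \
    --serviceaccount=kubernetes-dashboard:admin-user
kubectl -n kubernetes-dashboard create token admin-user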

VIII. The cluster is now fully configured

Deploy an nginx service to verify the cluster
[root@master01 ~]# kubectl create deployment nginx --image=nginx
deployment.apps/nginx created
[root@master01 ~]#

[root@master01 ~]# kubectl expose deployment nginx --port=80 --type=NodePort
service/nginx exposed
[root@master01 ~]# 

[root@master01 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   172.18.0.1      <none>        443/TCP        24m
nginx        NodePort    172.18.76.241   <none>        80:30111/TCP   23s
[root@master01 ~]# 

[root@master01 ~]#  curl http://192.168.3.30:30111
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
......
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@master01 ~]# 
