Building a Highly Available k8s Cluster (Ubuntu + Docker + kubeadm)

Overview

As containerized projects multiply, deployment, environment separation, upgrades, rollbacks, and disaster recovery all become challenges, and Kubernetes (k8s), developed at Google, addresses exactly these problems. The highly available cluster in this article uses a two-master, two-worker topology, with the Master nodes on an internal network and the worker Nodes on public cloud servers. Because the nodes do not share a subnet, setup gets extremely fiddly, so this article takes what the author found to be the cleanest solution: build a VPN (using the Pritunl tool) and put all nodes on one virtual subnet (other approaches exist, but after many attempts the VPN route proved the most convenient). The rest of the article walks through the k8s setup in detail, as concrete steps plus scripts, to help you avoid the common pitfalls. Unless a step says otherwise, run it on all nodes.

Environment

Server specs:

  • 8 cores / 16 GB RAM (Master nodes)

  • 2 cores / 4 GB RAM (worker Nodes)

Software:

  • Ubuntu 22.04

  • Kubernetes 1.23.6

  • Docker CE 20.10.21

Installation and Deployment Steps

Disable the firewall

ufw disable

Disable the swap partition

#Temporarily disable (until reboot)
swapoff -a

#Permanently disable by commenting out the swap entry in /etc/fstab (takes effect after reboot)
sed -i 's#/swap.img#\#/swap.img#g' /etc/fstab
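
A quick way to confirm swap is really off, both now and after a reboot:

# The Swap line should report 0B total and 0B used
free -h | grep -i swap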

Enable iptables bridging and IPVS

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

modprobe overlay
modprobe br_netfilter

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sudo sysctl --system

#Apply IP forwarding immediately (the sysctl.d file above makes it persistent)
sudo sysctl -w net.ipv4.ip_forward=1

# Install the IPVS userspace tools
sudo apt-get install -y ipset ipvsadm

modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack # newer kernels use nf_conntrack (older kernels: nf_conntrack_ipv4)
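
Before moving on, it is worth verifying that the modules actually loaded and the sysctls applied:

# Each command should print at least one matching line
lsmod | grep br_netfilter
lsmod | grep -e '^ip_vs' -e nf_conntrack
sysctl net.ipv4.ip_forward   # expect: net.ipv4.ip_forward = 1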

Set Docker's cgroup driver

# This fully replaces daemon.json
# If you'd rather keep your existing config, just add "exec-opts": ["native.cgroupdriver=systemd"] to it
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m"
  },
  "storage-driver": "overlay2"
}
EOF

# Restart Docker
sudo systemctl enable docker
sudo systemctl daemon-reload
sudo systemctl restart docker
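
A quick sanity check that Docker picked up the new driver (kubeadm 1.23 expects systemd here so that Docker and the kubelet agree):

# Should print: Cgroup Driver: systemd
docker info 2>/dev/null | grep -i 'cgroup driver'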

Tune journald logging

These settings prevent the servers' journal logs from growing without bound.

cat <<EOF | sudo tee /etc/systemd/journald.conf
[Journal]
Storage=persistent

# Compress archived logs
Compress=yes

SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000

# Cap total disk usage at 10G
SystemMaxUse=10G

# Cap individual log files at 200M
SystemMaxFileSize=200M

# Keep logs for 2 weeks
MaxRetentionSec=2week

# Do not forward to syslog
ForwardToSyslog=no
EOF

# Restart journald
systemctl restart systemd-journald
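
To confirm the caps are honored, journald can report its current disk usage:

# Total journal size; should stay at or below the 10G SystemMaxUse cap
journalctl --disk-usage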

Add the Kubernetes apt repository

curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -

echo "deb https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list

Install prerequisite packages

#See the kubeadm docs: https://kubernetes.io/zh/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
sudo apt-get update
sudo apt-get upgrade -y
sudo apt-get install -y apt-transport-https ca-certificates curl

Install kubelet, kubeadm, and kubectl

# Install kubeadm, kubectl and kubelet at the pinned version
sudo apt install -y kubeadm=1.23.6-00
sudo apt install -y kubectl=1.23.6-00
sudo apt install -y kubelet=1.23.6-00

# Pin the versions so they are not upgraded accidentally
sudo apt-mark hold kubelet kubeadm kubectl

# Enable kubelet at boot
systemctl enable kubelet
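
A quick check that the pinned 1.23.6 versions landed:

kubeadm version -o short          # expect v1.23.6
kubectl version --client --short  # expect v1.23.6
kubelet --version                 # expect Kubernetes v1.23.6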

Set k8s to IPVS mode

cat <<EOF | sudo tee /etc/default/kubelet
KUBE_PROXY_MODE="ipvs"
EOF

Pre-pull the k8s images

# Step 1: the images required by kubeadm v1.23.6
images=(
    kube-apiserver:v1.23.6
    kube-controller-manager:v1.23.6
    kube-scheduler:v1.23.6
    kube-proxy:v1.23.6
    pause:3.6
    etcd:3.5.1-0
    coredns:v1.8.6
)

# Step 2: pull each image from the Aliyun mirror, retag it to k8s.gcr.io, then drop the mirror tag
for imageName in ${images[@]} ; do
        docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/${imageName}
        if [ $(echo $imageName | awk -F ":" '{print $1}') != "coredns" ]
        then
          docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/${imageName} k8s.gcr.io/${imageName}
        else
          # upstream coredns lives under the k8s.gcr.io/coredns/ sub-path
          docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/${imageName} k8s.gcr.io/coredns/${imageName}
        fi
        docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/${imageName}
done
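
When the loop finishes, all seven images should be present under the k8s.gcr.io prefix:

# Expect kube-apiserver, kube-controller-manager, kube-scheduler,
# kube-proxy, pause, etcd and coredns in the list
docker images | grep k8s.gcr.io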

Pull the network plugin image

# Pull the flannel image now; it is needed later when the CNI plugin is deployed
docker pull quay.io/coreos/flannel:v0.13.1-rc1

Configure /etc/hosts

Configure the hosts file on every node.

# Edit the hosts file
vim /etc/hosts

# Entries to add (replace with your own Master and Node IPs)
192.168.239.4 k8s-master01
192.168.239.5 k8s-master02
192.168.239.3 k8s-node-01
192.168.239.2 k8s-node-02
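
A quick loop to confirm every node resolves and is reachable over the VPN subnet (hostnames as defined above):

for h in k8s-master01 k8s-master02 k8s-node-01 k8s-node-02; do
    ping -c 1 -W 2 $h > /dev/null && echo "$h ok" || echo "$h unreachable"
done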

Set the kubelet node IP

This step is required if you are on a VPN virtual subnet, or want the kubelet to advertise a specific NIC's IP.

# Check kubelet status; the output shows the systemd drop-in file edited next
systemctl status kubelet

Open the drop-in file shown in the status output, typically /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.

# In 10-kubeadm.conf, append --node-ip=<this host's IP> to the ExecStart line (the k8s network components will communicate over this IP)
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS --node-ip=192.168.239.4

Reboot the host

reboot
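
If you would rather not reboot, reloading systemd and restarting the kubelet should apply the --node-ip change as well:

sudo systemctl daemon-reload
sudo systemctl restart kubelet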

Installing the HA Components

Run this section only on the two Master nodes.

Nginx load balancing

#Install Nginx
apt install nginx -y

cd /etc/nginx

#Edit the nginx config
vim nginx.conf

# Add inside the http { } block
log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';                     

# Add outside the http { } block, at the top level of nginx.conf
stream {
    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';

    access_log  /var/log/nginx/k8s-access.log  main;

    upstream k8s-apiserver {
       server 192.168.239.4:6443;   # Master1 APISERVER IP:PORT - replace with your own master IPs
       server 192.168.239.5:6443;   # Master2 APISERVER IP:PORT
    }
    
    server {
       listen 16443;  # nginx shares this host with the apiserver, so the listen port must not be 6443 or they will clash
       proxy_pass k8s-apiserver;
    }
}

#Validate the config
nginx -t

#Restart Nginx
systemctl restart nginx

#Remove the default site to clear the reported error
cd sites-enabled
rm -rf default

#Restart Nginx again
systemctl restart nginx

#Check that Nginx is running
ps -ef | grep nginx
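
Once the cluster has been initialized (later section), the load balancer can be smoke-tested from either master; the apiserver's /version endpoint answers without authentication (-k because of the self-signed certificate):

# Expect a JSON reply containing "gitVersion": "v1.23.6"
curl -k https://127.0.0.1:16443/version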

Keepalived (health checking and failover)

apt install -y keepalived 

# Write the config file
vim /etc/keepalived/keepalived.conf

Fields that must be adjusted:

  • state: MASTER on the primary node, BACKUP on the standby

  • interface: your active NIC (check with ifconfig)

  • mcast_src_ip: this host's internal IP

  • virtual_ipaddress: the virtual IP; must be identical on both nodes

# Config contents (adjust interface, mcast_src_ip and virtual_ipaddress as noted above)

! Configuration File for keepalived
global_defs {
    ## String identifying this node, usually the hostname
    router_id k8s-master01
    script_user root
    enable_script_security
}
## Health-check script
## keepalived runs the script periodically and adjusts the vrrp_instance priority from
## the result: exit code 0 with weight > 0 raises the priority; a non-zero exit with
## weight < 0 lowers it; otherwise the configured priority value is kept.
vrrp_script chk_apiserver {
    script "/etc/keepalived/check_apiserver.sh"
    # Check every 2 seconds
    interval 2
    # On a failed check, lower the priority by 5
    weight -5
    fall 3
    rise 2
}
## Virtual router definition; VI_1 is an arbitrary identifier
vrrp_instance VI_1 {
    ## MASTER on the primary node, BACKUP on the standby
    state MASTER
    ## NIC to bind the virtual IP to; use the interface carrying the host IP
    interface ens33
    # This host's IP address
    mcast_src_ip 192.168.239.4 # your own internal IP
    # Virtual router id; must match on all nodes
    virtual_router_id 100
    ## Node priority, 0-254; MASTER must be higher than BACKUP
    priority 100
    ## nopreempt keeps a recovered node from immediately grabbing the VIP back
    nopreempt
    ## Advertisement interval; must be identical on all nodes (default 1s)
    advert_int 2
    ## Authentication; must be identical on all nodes
    authentication {
        auth_type PASS
        auth_pass K8SHA_KA_AUTH
    }
    ## Virtual IP pool; must be identical on all nodes
    virtual_ipaddress {
        ## One or more virtual IPs
        192.168.100.190
    }
    track_script {
       chk_apiserver
    }
}

Write the health-check script

# Health-check script
vim /etc/keepalived/check_apiserver.sh

# Contents:
#!/bin/bash
 
err=0
for k in $(seq 1 5)
do
    check_code=$(pgrep kube-apiserver)
    if [[ $check_code == "" ]]; then
        err=$(expr $err + 1)
        sleep 5
        continue
    else
        err=0
        break
    fi
done
 
if [[ $err != "0" ]]; then
    echo "systemctl stop keepalived"
    /usr/bin/systemctl stop keepalived
    exit 1
else
    exit 0
fi
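
Because the config above sets enable_script_security with script_user root, the script must be executable or keepalived will refuse to run it:

chmod +x /etc/keepalived/check_apiserver.sh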

Enable services at boot

# Start now and enable at boot

systemctl daemon-reload
systemctl start nginx
systemctl start keepalived
systemctl enable nginx
systemctl enable keepalived
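
With both services up, the virtual IP should be attached to the MASTER node's NIC (interface name per the keepalived config above):

# On the MASTER node: expect 192.168.100.190 listed as an additional address
ip addr show ens33 | grep 192.168.100.190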

Cluster initialization

Run kubeadm init on the first (primary) Master node only; the second master joins afterwards with the control-plane join command printed by init, shown below.
# control-plane-endpoint must be the Keepalived virtual IP plus the nginx listen port (16443)
# apiserver-advertise-address is this host's VPN virtual IP

# Make sure IPVS mode is set before any node joins

kubeadm init \
  --apiserver-advertise-address=192.168.239.4 \
  --image-repository registry.aliyuncs.com/google_containers \
  --control-plane-endpoint=192.168.100.190:16443 \
  --kubernetes-version v1.23.6 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16 \
  --upload-certs

# Restart kubelet
systemctl daemon-reload
systemctl restart kubelet
systemctl status kubelet

# Set up kubectl access (all Master nodes)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
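
kubeadm init prints two join commands at the end of its output; run them verbatim on the other nodes. The lines below only sketch their shape, and the <token>, <hash> and <key> values are placeholders that must come from your own init output:

# On the second master (note --control-plane and --certificate-key):
# (in this VPN setup you may also need --apiserver-advertise-address=<that master's VPN IP>)
kubeadm join 192.168.100.190:16443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> \
    --control-plane --certificate-key <key>

# On each worker node:
kubeadm join 192.168.100.190:16443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>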

At this point the build is nearly complete, but the cluster still lacks a CNI network plugin.

Deploy the CNI network plugin

Flannel is used as the CNI plugin here. Make sure every node already has the flannel image from the earlier pull step before applying the manifest, otherwise nodes tend to stay NotReady.

Prepare the official kube-flannel.yml manifest

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.13.1-rc1
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.13.1-rc1
        command:
        - /opt/bin/flanneld
        args:
        - --public-ip=$(PUBLIC_IP)
        - --iface=tun0
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: PUBLIC_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg

Adjust the manifest for VPN

When running over a VPN, flannel must be pinned to the virtual NIC and told which public IP to advertise; the manifest above already carries these changes. The fragment concerned:
# With a VPN NIC, add the iface flag and the PUBLIC_IP environment variable
containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.13.1-rc1
        command:
        - /opt/bin/flanneld
        args:
        - --public-ip=$(PUBLIC_IP)   # add this line (fixed syntax)
        - --iface=tun0               # add this line; set to your VPN NIC name
        - --ip-masq
        - --kube-subnet-mgr

# Add the matching environment variable
env:
        - name: PUBLIC_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP

Apply the flannel manifest

kubectl apply -f kube-flannel.yml
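
If the image was pre-pulled everywhere, the flannel DaemonSet pods come up quickly and the nodes flip to Ready:

# One kube-flannel-ds pod per node, all Running
kubectl get pods -n kube-system -l app=flannel -o wide

# Every node should report Ready once the CNI is up
kubectl get nodes -o wide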

Enable IPVS mode

# Check that the ipvs kernel modules are loaded
lsmod|grep ip_vs

# Edit the kube-proxy ConfigMap and set mode: "ipvs"
kubectl edit cm kube-proxy -n kube-system

# Delete the existing kube-proxy pods so the DaemonSet recreates them with the new mode
kubectl delete pod -l k8s-app=kube-proxy -n kube-system

# Check that IPVS rules now exist
ipvsadm -Ln
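
If the switch worked, ipvsadm lists one virtual server per Service; at minimum the kubernetes Service VIP (10.96.0.1:443, given the --service-cidr used above) should appear with the apiserver endpoints behind it, roughly like this:

# Sample shape of the expected output (your addresses will differ):
# TCP  10.96.0.1:443 rr
#   -> 192.168.239.4:6443    Masq  1  0  0
#   -> 192.168.239.5:6443    Masq  1  0  0
ipvsadm -Ln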

Reset the k8s environment

This resets the cluster state but does not uninstall the k8s packages (script form):
#!/bin/bash
kubeadm reset

iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

systemctl stop kubelet

systemctl stop docker

rm -rf /var/lib/cni/*

rm -rf /var/lib/kubelet/*

rm -rf /etc/cni/*

ifconfig cni0 down

ifconfig flannel.1 down

ifconfig docker0 down

ip link delete cni0

ip link delete flannel.1

systemctl start docker

rm -rf $HOME/.kube

Completely uninstall k8s

Run with caution: use this when the environment is broken beyond repair and you want to start completely fresh (script form):
#!/bin/bash
echo "----------------------------------- Resetting kubeadm -----------------------------------"
kubeadm reset -f
echo "----------------------------------- Uninstalling packages -----------------------------------"
sudo apt-get purge -y --auto-remove kubernetes-cni

sudo apt-get purge -y --auto-remove kubeadm

sudo apt-get purge -y --auto-remove kubectl

sudo apt-get purge -y --auto-remove kubelet

echo "----------------------------------- Removing leftover files -----------------------------------"
modprobe -r ipip

rm -rf ~/.kube/

rm -rf /etc/kubernetes/

rm -rf /etc/systemd/system/kubelet.service.d

rm -rf /etc/systemd/system/kubelet.service

rm -rf /usr/bin/kube*

rm -rf /etc/cni

rm -rf /opt/cni

rm -rf /var/lib/etcd

rm -rf /var/etcd

apt-get clean

apt remove -f kube*
echo "----------------------------------- Checking for leftover packages -----------------------------------"
dpkg -l | grep kube
echo "----------------------------------- If anything remains, purge it with: sudo apt-get purge --auto-remove -----------------------------------"