k8s 1.23.0 High-Availability Installation

Preface

A typical installation is a single-master, multi-node cluster, but once that master fails, usability and efficiency take a serious hit. If the failure is unrecoverable, the only option is to restore an etcd backup into a new cluster. To avoid all of this, we use keepalived + haproxy (or keepalived + nginx) to give the cluster high availability and load balancing.

准备工作

192.168.100.110   VIP
192.168.100.111   master
192.168.100.112   master2
192.168.100.113   master3
192.168.100.114   node1
192.168.100.115   node2
192.168.100.116   node3

These are the virtual machines and the virtual IP we will use. Next, append them to the /etc/hosts file:

cat >> /etc/hosts << EOF
192.168.100.111   master
192.168.100.112   master2
192.168.100.113   master3
192.168.100.114   node1
192.168.100.115   node2
192.168.100.116   node3
EOF

Then disable the firewall, SELinux, and swap (run on all nodes):

systemctl stop firewalld && systemctl disable firewalld
sed -i 's/enforcing/disabled/' /etc/selinux/config && setenforce 0
swapoff -a && sed -ri 's/.*swap.*/#&/' /etc/fstab
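Optionally verify all three changes took effect:

systemctl is-active firewalld    # expect "inactive"
getenforce                       # expect "Permissive" ("Disabled" after a reboot)
free -h | grep -i swap           # expect all zeros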

Install and configure IPVS

yum -y install ipvsadm ipset

Create a script that loads the IPVS kernel modules:

cat > /etc/sysconfig/modules/ipvs.modules << EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

## Run the script and verify the modules are loaded:
chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
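One caveat: on kernels 4.19 and newer, nf_conntrack_ipv4 no longer exists because it was merged into nf_conntrack; if the modprobe above fails, load that module instead (and use it in the script):

# Only needed on kernel >= 4.19, where nf_conntrack_ipv4 became nf_conntrack
modprobe -- nf_conntrack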

Install Docker (on all nodes)

Install the required packages

yum install -y yum-utils device-mapper-persistent-data lvm2

Add the Docker repository

yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

Install Docker

yum install docker-ce -y && systemctl enable docker.service

Set Docker's cgroup driver to match kubelet's (systemd). You can also configure your own image registry here via the "insecure-registries" parameter (recommended when deploying k8s offline):

cat > /etc/docker/daemon.json <<EOF
{
    "registry-mirrors": ["http://xsgbmvdm.mirror.aliyuncs.com"],    
    "log-driver":"json-file",
    "log-opts": {"max-size":"50m", "max-file":"3"},
    "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
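After (re)starting Docker, it is worth confirming the cgroup driver really switched to systemd:

systemctl restart docker
docker info | grep -i "cgroup driver"
# Expect: Cgroup Driver: systemd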

You can change Docker's data directory in docker.service (adjust to your setup; mine lives under /data, which is my data disk):

vim /usr/lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd --graph=/data/docker
# Reload the unit file and restart Docker for the change to take effect
systemctl daemon-reload
systemctl restart docker
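Note that --graph is deprecated on newer Docker releases; there the supported way is the data-root key in daemon.json. A sketch (this overwrites the file written above, so keep the other keys):

# data-root replaces the deprecated --graph flag; if you use this, drop
# --graph from docker.service again (daemon-reload afterwards)
cat > /etc/docker/daemon.json << EOF
{
    "registry-mirrors": ["http://xsgbmvdm.mirror.aliyuncs.com"],
    "log-driver": "json-file",
    "log-opts": {"max-size": "50m", "max-file": "3"},
    "exec-opts": ["native.cgroupdriver=systemd"],
    "data-root": "/data/docker"
}
EOF
systemctl restart docker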

Synchronize time (run on all nodes)

yum install ntpdate -y && ntpdate time.windows.com
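A one-shot ntpdate drifts over time; optionally, a cron entry keeps the clocks in sync (same time server as above):

# Re-sync the clock every hour
echo "0 * * * * /usr/sbin/ntpdate time.windows.com" >> /var/spool/cron/root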

Configure kernel parameters so that bridged IPv4 traffic is passed to iptables chains:

cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

# Load br_netfilter (the bridge-nf parameters require it) and apply the settings
modprobe br_netfilter
sysctl --system

Load balancer configuration

Install HAProxy and Keepalived on all master nodes:

yum -y install haproxy keepalived

Create the HAProxy configuration file on all master nodes:

cat > /etc/haproxy/haproxy.cfg << EOF
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon
    stats socket /var/lib/haproxy/stats

defaults
    mode                    tcp
    log                     global
    option                  tcplog
    option                  dontlognull
    option                  redispatch
    retries                 3
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout check           10s
    maxconn                 3000

frontend  k8s_https *:8443
    mode      tcp
    maxconn      2000
    default_backend     https_sri
    
backend https_sri
    balance      roundrobin
    server master1-api 192.168.100.111:6443  check inter 10000 fall 2 rise 2 weight 1
    server master2-api 192.168.100.112:6443  check inter 10000 fall 2 rise 2 weight 1
    server master3-api 192.168.100.113:6443  check inter 10000 fall 2 rise 2 weight 1
EOF
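Before starting anything, the file can be validated with HAProxy's built-in check mode:

haproxy -c -f /etc/haproxy/haproxy.cfg
# Expect: Configuration file is valid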

Create the Keepalived configuration file on master:

cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
   router_id LVS_DEVEL
}

vrrp_script check_haproxy {
    script "/etc/keepalived/check_haproxy.sh"
    interval 3
}

vrrp_instance VI_1 {
    state MASTER
    interface ens192
    virtual_router_id 80
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 111111
    }
    virtual_ipaddress {
        192.168.100.110/24
    }
    track_script {
        check_haproxy
    }
}
EOF

Create the Keepalived configuration file on master2:

cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
   router_id LVS_DEVEL
}

vrrp_script check_haproxy {
    script "/etc/keepalived/check_haproxy.sh"
    interval 3
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens192
    virtual_router_id 80
    priority 50
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 111111
    }
    virtual_ipaddress {
        192.168.100.110/24
    }
    track_script {
        check_haproxy
    }
}
EOF

Create the Keepalived configuration file on master3:

cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
   router_id LVS_DEVEL
}

vrrp_script check_haproxy {
    script "/etc/keepalived/check_haproxy.sh"
    interval 3
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens192
    virtual_router_id 80
    priority 30
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 111111
    }
    virtual_ipaddress {
        192.168.100.110/24
    }
    track_script {
        check_haproxy
    }
}
EOF

Create the HAProxy check script on all master nodes. It restarts HAProxy if the process has died and, failing that, stops Keepalived so the VIP fails over to another master:

cat > /etc/keepalived/check_haproxy.sh << 'EOF'
#!/bin/bash
# If HAProxy has died, try to restart it; if it still is not running,
# stop Keepalived so the VIP moves to another master.
if [ $(ps -C haproxy --no-header | wc -l) -eq 0 ]; then
        systemctl start haproxy
        sleep 3
        if [ $(ps -C haproxy --no-header | wc -l) -eq 0 ]; then
                systemctl stop keepalived
        fi
fi
EOF

Make the script executable

chmod +x /etc/keepalived/check_haproxy.sh

Start HAProxy and Keepalived on all master nodes and enable them at boot:

systemctl start haproxy keepalived
systemctl enable haproxy keepalived
systemctl status haproxy keepalived

Check Keepalived's working state on master.
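A quick way to verify (ens192 and the VIP are the values from the configs above):

# The VIP should be bound to the MASTER node's interface
ip addr show ens192 | grep 192.168.100.110
# Rough failover test: stop Keepalived here and the VIP should appear on master2;
# after restarting, master (priority 100) preempts the VIP back
systemctl stop keepalived
systemctl start keepalived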
Add the Aliyun Kubernetes YUM repository

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install kubelet, kubeadm, and kubectl on all nodes, and enable kubelet at boot

yum -y install kubelet-1.23.0 kubeadm-1.23.0 kubectl-1.23.0 && systemctl enable kubelet && systemctl start kubelet

The images are hosted on Google's registry (k8s.gcr.io), which is unreachable from inside China, so they have to be pulled manually from Aliyun or another registry. First list the required images:

kubeadm config images list --kubernetes-version 1.23.0

Pull the images on all master nodes:

kubeadm config images list --kubernetes-version 1.23.0 | sed -e 's/^/docker pull /g' -e 's#k8s.gcr.io#registry.aliyuncs.com/google_containers#g' | sh -x

Re-tag the images back to their k8s.gcr.io names; once done, transfer them to the other nodes (see the sketch below):

docker images | grep registry.aliyuncs.com/google_containers | awk '{print "docker tag ",$1":"$2,$1":"$2}' | sed -e 's#registry.aliyuncs.com/google_containers#k8s.gcr.io#2' | sh -x
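The transfer step itself is not shown above; a minimal sketch, assuming the masters can SSH to the workers, is to pipe docker save into docker load:

# Ship every re-tagged k8s.gcr.io image to the worker nodes (adjust the host list)
for node in node1 node2 node3; do
  docker save $(docker images --format '{{.Repository}}:{{.Tag}}' | grep '^k8s.gcr.io') \
    | ssh $node docker load
done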

Note: if a pull fails, you can pull the image by hand. The calico image is only used later, when the network add-on is installed; it is not needed before initialization.

Initialize the highly available cluster

Set up passwordless SSH on master:

ssh-keygen
for host in master master2 master3; do ssh-copy-id -i ~/.ssh/id_rsa.pub $host; done

Create the cluster configuration file on master1:

cat > /etc/kubernetes/kubeadm-config.yaml << EOF
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.23.0
controlPlaneEndpoint: "192.168.100.110:8443"
apiServer:
  certSANs:
  - 192.168.100.111
  - 192.168.100.112
  - 192.168.100.113
  - 192.168.100.110
networking:
  podSubnet: 10.244.0.0/16
EOF

Initialize the highly available cluster on master:

kubeadm init --config /etc/kubernetes/kubeadm-config.yaml

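After a successful init, set up kubectl on master itself, as kubeadm's own output instructs (the loop below only copies the kubeconfig to master2 and master3); as root you could instead just export KUBECONFIG=/etc/kubernetes/admin.conf:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config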
Copy the certificates from master to the other masters:

for node in master2 master3; do
  ssh $node "mkdir -p /etc/kubernetes/pki/etcd; mkdir -p ~/.kube/"
  scp /etc/kubernetes/pki/ca.crt $node:/etc/kubernetes/pki/ca.crt
  scp /etc/kubernetes/pki/ca.key $node:/etc/kubernetes/pki/ca.key
  scp /etc/kubernetes/pki/sa.key $node:/etc/kubernetes/pki/sa.key
  scp /etc/kubernetes/pki/sa.pub $node:/etc/kubernetes/pki/sa.pub
  scp /etc/kubernetes/pki/front-proxy-ca.crt $node:/etc/kubernetes/pki/front-proxy-ca.crt
  scp /etc/kubernetes/pki/front-proxy-ca.key $node:/etc/kubernetes/pki/front-proxy-ca.key
  scp /etc/kubernetes/pki/etcd/ca.crt $node:/etc/kubernetes/pki/etcd/ca.crt
  scp /etc/kubernetes/pki/etcd/ca.key $node:/etc/kubernetes/pki/etcd/ca.key
  scp /etc/kubernetes/admin.conf $node:/etc/kubernetes/admin.conf
  scp /etc/kubernetes/admin.conf $node:~/.kube/config
done

Join the remaining masters to the cluster:

kubeadm join 192.168.100.110:8443 --token knrben.goiux95j2p04ea0c --discovery-token-ca-cert-hash sha256:3bbb6c58222c96f9bf4c2db0269ff4057e72c98faa65e75a17dc79c5cbe6508c  --control-plane
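The token and hash must come from your own kubeadm init output. The worker nodes node1 to node3 join with the same command minus --control-plane; a minimal sketch:

# On node1/node2/node3 (substitute your own token and hash):
kubeadm join 192.168.100.110:8443 --token knrben.goiux95j2p04ea0c \
    --discovery-token-ca-cert-hash sha256:3bbb6c58222c96f9bf4c2db0269ff4057e72c98faa65e75a17dc79c5cbe6508c

# If the token has expired, print a fresh join command on master:
kubeadm token create --print-join-command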

Install the network add-on

wget https://docs.projectcalico.org/manifests/calico.yaml
# Add the NIC information to calico.yaml:
# Cluster type to identify the deployment type
  - name: CLUSTER_TYPE
    value: "k8s,bgp"
# Add the two lines below
  - name: IP_AUTODETECTION_METHOD
    value: "interface=ens192"
    # ens192 is the local NIC name
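One more alignment worth making (this assumes the CALICO_IPV4POOL_CIDR lines found in current calico.yaml manifests): Calico's default pool is 192.168.0.0/16, but kubeadm-config.yaml above set podSubnet to 10.244.0.0/16, so uncomment the variable and make it match:

# Make Calico's pool match the cluster podSubnet (patterns may vary by manifest version)
sed -i -e 's|# - name: CALICO_IPV4POOL_CIDR|- name: CALICO_IPV4POOL_CIDR|' \
       -e 's|#   value: "192.168.0.0/16"|  value: "10.244.0.0/16"|' calico.yaml
grep -A1 CALICO_IPV4POOL_CIDR calico.yaml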
## Apply the manifest to create the network
kubectl apply -f calico.yaml

The cluster is now set up; kubectl commands can be run on any master.
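As a final check, all nodes should report Ready once the Calico pods settle:

kubectl get nodes -o wide
kubectl get pods -n kube-system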
For more detail, see my personal blog: http://119.91.216.222/
