k8s HA Cluster Setup

Official documentation:

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/
Based on kubeadm + HAProxy + keepalived.
A single master failure does not affect normal operation.
With two of the three masters down, etcd loses quorum: workloads that do not depend on the control plane keep running until something else goes wrong, but all kubectl commands stop working.
etcd is stacked on the control-plane nodes rather than run as a separate external cluster.

Host list

192.168.6.183 VIP
192.168.6.184 master1
192.168.6.185 master2
192.168.6.186 master3
192.168.6.187 node01

HAProxy Installation and Configuration

master1, master2, and master3 all use the same configuration.

yum -y install haproxy
mv /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak
cat << EOF > /etc/haproxy/haproxy.cfg
global
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

defaults
    mode                    tcp
    log                     global
    retries                 3
    timeout connect         10s
    timeout client          1m
    timeout server          1m

frontend kube-apiserver
    bind *:4443 # frontend port
    mode tcp
    default_backend master

backend master # backend servers and ports; round-robin load balancing
    balance roundrobin
    server master-1  192.168.6.184:6443 check maxconn 2000
    server master-2  192.168.6.185:6443 check maxconn 2000
    server master-3  192.168.6.186:6443 check maxconn 2000
EOF
systemctl start haproxy
systemctl enable haproxy
systemctl status haproxy
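As a quick sanity check (a sketch; assumes you run it on a master node and that bash's /dev/tcp feature is available), verify the frontend port accepts TCP connections:

```shell
# Open and immediately close a TCP connection to the HAProxy
# frontend port; succeeds only if something is listening on 4443.
if bash -c 'exec 3<>/dev/tcp/127.0.0.1/4443; exec 3>&-'; then
    echo "haproxy frontend reachable"
fi
```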

Keepalived Installation and Configuration

master
yum install -y keepalived
mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf-back
cat << EOF > /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id LVS_1
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 4444
    }
    virtual_ipaddress {
        192.168.6.183/24
    }
}
EOF
systemctl start keepalived
systemctl enable keepalived
systemctl status keepalived
backup

On the backup nodes, change the following:

router_id (unique per node)
interface (the physical NIC the VIP binds to; adjust to your environment)
virtual_ipaddress (the VIP address and prefix length)
state (change to BACKUP on backup nodes)
priority (any value lower than 100)

For additional backup nodes, only router_id needs to change.

yum install -y keepalived
mv /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf-back
cat << EOF > /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   router_id LVS_2
}

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 4444
    }
    virtual_ipaddress {
        192.168.6.183/24
    }
}
EOF
systemctl start keepalived
systemctl enable keepalived
systemctl status keepalived
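To confirm the failover wiring (a sketch; eth0 and the VIP below match the config above, adjust both to your environment), check which node currently holds the VIP:

```shell
# The VIP should be present on exactly one node at a time:
# the current keepalived MASTER.
if ip -4 addr show eth0 | grep -q '192.168.6.183'; then
    echo "this node holds the VIP"
else
    echo "VIP is on another node"
fi
```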

Initialize the Cluster

Required on all nodes.

# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld

# Disable SELinux
setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

# Disable swap
swapoff -a
yes | cp /etc/fstab /etc/fstab_bak
cat /etc/fstab_bak |grep -v swap > /etc/fstab
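An alternative sketch of the same step: comment the swap entries out instead of deleting them, so they can be restored later.

```shell
# Comment out every fstab line that mounts swap; sed keeps a
# backup in /etc/fstab.bak, and re-running the command will not
# stack extra '#' characters.
sed -i.bak '/\sswap\s/ s/^#*/#/' /etc/fstab
```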

# Disable unneeded services
systemctl stop postfix
systemctl disable postfix

# Kernel parameters for Kubernetes networking (written to /etc/sysctl.d/k8s.conf)
modprobe br_netfilter
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
EOF
sysctl -p /etc/sysctl.d/k8s.conf
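To verify the settings took effect, the live kernel state can be read back directly (a sketch; the bridge entry exists only once the br_netfilter module is loaded):

```shell
# Each value should print 1 after sysctl -p has run.
cat /proc/sys/net/ipv4/ip_forward
cat /proc/sys/net/bridge/bridge-nf-call-iptables 2>/dev/null \
    || echo "br_netfilter not loaded yet"
```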

# Enable the IPVS kernel modules
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

# Set up the Docker yum repository
yum install -y yum-utils \
device-mapper-persistent-data \
lvm2
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# Install Docker (started below, after daemon.json is in place)
yum install -y docker-ce-18.09.7 docker-ce-cli-18.09.7 containerd.io

# Install nfs-utils
# nfs-utils must be installed before NFS network storage can be mounted
# Add IPVS userspace support (ipset, ipvsadm)
yum install -y nfs-utils ipset ipvsadm

# Configure the Kubernetes yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Install kubelet, kubeadm, and kubectl
yum install -y kubelet-1.15.5 kubeadm-1.15.5 kubectl-1.15.5

# Set the Docker cgroup driver to systemd
mkdir -p /etc/docker/
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["http://hub-mirror.c.163.com"]
}
EOF
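A malformed daemon.json keeps Docker from starting at all, so it is worth sanity-checking before the restart (a sketch; assumes python3 is installed, but any JSON validator works):

```shell
# Exits non-zero and prints the parse error if the JSON is invalid.
if python3 -m json.tool /etc/docker/daemon.json > /dev/null; then
    echo "daemon.json: valid JSON"
else
    echo "daemon.json: INVALID - fix it before restarting docker"
fi
```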

# Restart docker and enable kubelet
systemctl daemon-reload
systemctl restart docker
systemctl enable docker
systemctl enable kubelet

Initialize the Master

# Generate the default configuration file
kubeadm config print init-defaults > kubeadm.conf
# The configuration file after editing
cat kubeadm.conf

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.6.184
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.15.5
controlPlaneEndpoint: 192.168.6.183:4443
networking:
  dnsDomain: cluster.local
  podSubnet: 10.44.0.0/16
  serviceSubnet: 10.22.0.0/16
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"

kubeadm init --config kubeadm.conf

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Join Additional Masters

On the already-initialized master, upload the certificates that new masters need in order to join:

kubeadm init phase upload-certs --upload-certs

[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
60846420b1e0ecb0d5a5662e5089ae5e110daae5d4bddd4eb05348d4d37c9081

Generate a new join token:

kubeadm token create --print-join-command

kubeadm join 192.168.6.183:4443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:6543fb89055da2ed51ec460bdb3712df326ffe969855aced2e712494dd2811a9 
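The sha256 value above is a hash of the cluster CA's public key. If the printed command is lost, the hash can be recomputed from the CA certificate on any master (this is the standard openssl pipeline from the kubeadm documentation):

```shell
# Prints the discovery-token-ca-cert-hash (64 hex characters):
# the SHA-256 digest of the CA public key in DER form.
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
```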
On master2 and master3, run the join command with the --control-plane and --certificate-key flags added, so they join as control-plane nodes:

kubeadm join 192.168.6.183:4443 \
	--token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash \
    sha256:6543fb89055da2ed51ec460bdb3712df326ffe969855aced2e712494dd2811a9 \
    --control-plane \
    --certificate-key \
    60846420b1e0ecb0d5a5662e5089ae5e110daae5d4bddd4eb05348d4d37c9081

Join Worker Nodes

Worker nodes join directly:

kubeadm join 192.168.6.183:4443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:6543fb89055da2ed51ec460bdb3712df326ffe969855aced2e712494dd2811a9 

Install the Calico Network Plugin

Reference:
https://blog.csdn.net/lswzw/article/details/103044179

Install ingress-nginx

Reference:
https://blog.csdn.net/lswzw/article/details/103044078

Install the Dashboard

Reference:
https://blog.csdn.net/lswzw/article/details/90077928

Install Monitoring

Reference:
https://blog.csdn.net/lswzw/article/details/102727847
