Initializing a Highly Available Cluster


Reference articles

Creating a cluster with kubeadm

Cluster Networking

Network Plugins

Creating Highly Available Clusters with kubeadm

Options for Highly Available Topology

[Options for software load balancing](https://github.com/kubernetes/kubeadm/blob/main/docs/ha-considerations.md#options-for-software-load-balancing)

According to the official Kubernetes document Creating Highly Available Clusters with kubeadm, there are two ways to build a highly available cluster:

  1. External etcd

    This approach requires you to install a separate etcd cluster, which adds five machines compared with a single-master setup (three control-plane nodes plus three nodes dedicated to the etcd cluster).

  2. Stacked etcd

    This approach requires three control-plane nodes (three is the minimum, no fewer). Each node runs its own etcd member, and that member only talks to the kube-apiserver on the same node (the etcd members still replicate with each other, which is why at least three nodes are needed for redundancy). Compared with the first option this is simpler to set up and easier to manage.

The first option consumes more machines for little extra benefit, so it is not tested here. Two implementations of the high-availability setup are covered below: one that runs the load balancer as host services and one that runs it as static pods.

The stacked approach

This approach needs an additional proxy layer. The official guide uses keepalived plus haproxy, so this article follows the official documentation; some basic knowledge of keepalived and haproxy is useful.

Start by getting familiar with each service's configuration file template.

keepalived configuration file
! /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script check_apiserver {
  script "/etc/keepalived/check_apiserver.sh"
  interval 3
  weight -2
  fall 10
  rise 2
}

vrrp_instance VI_1 {
    state ${STATE}
    interface ${INTERFACE}
    virtual_router_id ${ROUTER_ID}
    priority ${PRIORITY}
    authentication {
        auth_type PASS
        auth_pass ${AUTH_PASS}
    }
    virtual_ipaddress {
        ${APISERVER_VIP}
    }
    track_script {
        check_apiserver
    }
}
  • ${STATE}: set to MASTER on one node and BACKUP on the others; the node configured as MASTER becomes the primary at startup.
  • ${INTERFACE}: the name of the network interface, found with ip a; the interface holding the node's LAN IP is the one to use, e.g. ens192.
  • ${ROUTER_ID}: an identifier for this keepalived cluster. It must be identical on every node of the cluster and unique on the network segment (not used by any other keepalived cluster); keepalived uses it to decide whether a node belongs to the cluster.
  • ${PRIORITY}: the node's priority inside the cluster. The node with the highest value wins the election and becomes the primary; values usually stay within 1-100. (If a node other than the intended one gets the highest value, the ${STATE} MASTER setting is effectively overridden, so give the node with state MASTER the highest priority.)
  • ${AUTH_PASS}: the VRRP authentication password; it must be the same on every node.
  • ${APISERVER_VIP}: the virtual IP address. Pick an address on the subnet that is not in use. Once the service starts, this address is brought up on the primary node; when the primary goes down, a new primary is elected and takes over the address (a quick check for the interface name and VIP is sketched after this list).
  • script: the health-check script to execute.
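
A quick way to find ${INTERFACE} and to confirm that the planned ${APISERVER_VIP} is actually free (a sketch; substitute your own values):

# list interfaces with their IPv4 addresses; the one carrying the node's LAN IP is ${INTERFACE} (e.g. ens192)
ip -4 addr show
# the VIP must not be answering yet; if ping gets replies, pick a different address
ping -c 2 -W 1 ${APISERVER_VIP}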

Contents of the /etc/keepalived/check_apiserver.sh script

cat >/etc/keepalived/check_apiserver.sh<<EOF
#!/bin/sh

errorExit() {
    echo "*** \$*" 1>&2   # escaped so that $* stays literal in the generated script
    exit 1
}

curl --silent --max-time 2 --insecure https://localhost:${APISERVER_DEST_PORT}/ -o /dev/null || errorExit "Error GET https://localhost:${APISERVER_DEST_PORT}/"
if ip addr | grep -q ${APISERVER_VIP}; then
    curl --silent --max-time 2 --insecure https://${APISERVER_VIP}:${APISERVER_DEST_PORT}/ -o /dev/null || errorExit "Error GET https://${APISERVER_VIP}:${APISERVER_DEST_PORT}/"
fi
EOF
  • ${APISERVER_VIP}: the keepalived virtual IP
  • ${APISERVER_DEST_PORT}: the port used to reach the API server (the haproxy frontend port)
haproxy configuration template
# /etc/haproxy/haproxy.cfg
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    log /dev/log local0
    log /dev/log local1 notice
    daemon

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 1
    timeout http-request    10s
    timeout queue           20s
    timeout connect         5s
    timeout client          20s
    timeout server          20s
    timeout http-keep-alive 10s
    timeout check           10s

#---------------------------------------------------------------------
# apiserver frontend which proxies to the control plane nodes
#---------------------------------------------------------------------
frontend apiserver
    bind *:${APISERVER_DEST_PORT}
    mode tcp
    option tcplog
    default_backend apiserver

#---------------------------------------------------------------------
# round robin balancing for apiserver
#---------------------------------------------------------------------
backend apiserver
    option httpchk GET /healthz
    http-check expect status 200
    mode tcp
    option ssl-hello-chk
    balance     roundrobin
        server ${HOST1_ID} ${HOST1_ADDRESS}:${APISERVER_SRC_PORT} check
        # [...]
  • ${APISERVER_DEST_PORT}: the port through which clients (and Kubernetes itself) reach the API server, i.e. the port haproxy listens on.
  • ${APISERVER_SRC_PORT}: the port the API server instances themselves listen on.
  • ${HOST1_ID}: a symbolic name for the backend (control-plane) host.
  • ${HOST1_ADDRESS}: the IP address or resolvable DNS name of that backend host.
  • # […]: if there are more control-plane nodes, copy the server line once per node and adjust it. Below, all three backends are listed, so haproxy round-robins incoming requests across the api-servers of the three control-plane nodes. Alternatively, keep a single line and set ${HOST1_ADDRESS} to 127.0.0.1, so each haproxy forwards everything it receives to its local api-server (see the sketch below).
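
If you prefer the local-only variant described in the last item, the backend section keeps a single server line pointing at the node's own API server. A minimal sketch of just that backend block, using the same placeholders as above:

backend apiserver
    option httpchk GET /healthz
    http-check expect status 200
    mode tcp
    option ssl-hello-chk
    balance     roundrobin
        server ${HOST1_ID} 127.0.0.1:${APISERVER_SRC_PORT} check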

Once each service's configuration is understood, the next step is to install the services.

Option 1: host-based installation

# install on every control-plane node
yum install keepalived haproxy -y

Then declare the variables (adjust the values before running; note that some values must differ per node):

STATE='MASTER' # the keepalived primary; set to 'BACKUP' on the other nodes
INTERFACE='ens192'
ROUTER_ID='10'
PRIORITY='100'
AUTH_PASS='sinobase@123'
APISERVER_VIP='20.88.9.30'
APISERVER_DEST_PORT='8443'
APISERVER_SRC_PORT='6443'
HOST1_ID='master1'
HOST1_ADDRESS='20.88.9.31'
HOST2_ID='master2'
HOST2_ADDRESS='20.88.9.32'
HOST3_ID='master3'
HOST3_ADDRESS='20.88.9.33'
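
The heredocs below use an unquoted EOF, so every ${...} placeholder is expanded by the current shell at the moment the file is written; it is worth confirming that all variables are set first (a sketch using bash indirect expansion):

for v in STATE INTERFACE ROUTER_ID PRIORITY AUTH_PASS APISERVER_VIP \
         APISERVER_DEST_PORT APISERVER_SRC_PORT \
         HOST1_ID HOST1_ADDRESS HOST2_ID HOST2_ADDRESS HOST3_ID HOST3_ADDRESS; do
    # prints "NAME=" with an empty value if the variable was never set
    echo "${v}=${!v}"
done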

Next, create the health-check script:

cat >/etc/keepalived/check_apiserver.sh<<EOF
#!/bin/sh

errorExit() {
    echo "*** \$*" 1>&2   # escaped so that $* stays literal in the generated script
    exit 1
}

curl --silent --max-time 2 --insecure https://localhost:${APISERVER_DEST_PORT}/ -o /dev/null || errorExit "Error GET https://localhost:${APISERVER_DEST_PORT}/"
if ip addr | grep -q ${APISERVER_VIP}; then
    curl --silent --max-time 2 --insecure https://${APISERVER_VIP}:${APISERVER_DEST_PORT}/ -o /dev/null || errorExit "Error GET https://${APISERVER_VIP}:${APISERVER_DEST_PORT}/"
fi
EOF
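
keepalived executes the check script directly, so it should be executable. Running it once by hand is also a useful smoke test; before kubeadm init there is no API server yet, so an "Error GET ..." failure is expected:

chmod +x /etc/keepalived/check_apiserver.sh
# expected to fail with "Error GET https://localhost:8443/" until the cluster exists
/etc/keepalived/check_apiserver.sh; echo "exit code: $?"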

Update the keepalived configuration

# back up the existing configuration
cp /etc/keepalived/keepalived.conf{,-bak}
cat >/etc/keepalived/keepalived.conf<<EOF
! /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script check_apiserver {
  script "/etc/keepalived/check_apiserver.sh"
  interval 3
  weight -2
  fall 10
  rise 2
}

vrrp_instance VI_1 {
    state ${STATE}
    interface ${INTERFACE}
    virtual_router_id ${ROUTER_ID}
    priority ${PRIORITY}
    authentication {
        auth_type PASS
        auth_pass ${AUTH_PASS}
    }
    virtual_ipaddress {
        ${APISERVER_VIP}
    }
    track_script {
        check_apiserver
    }
}
EOF
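
Because the heredoc above is unquoted, state, interface, priority and so on should now contain the concrete values rather than ${...} placeholders; a quick check (a sketch):

grep -E 'state|interface|virtual_router_id|priority|auth_pass' /etc/keepalived/keepalived.conf
# every matched line should show a concrete value, e.g. "state MASTER", not "${STATE}"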

Update the haproxy configuration

cp /etc/haproxy/haproxy.cfg{,-bak}
cat >/etc/haproxy/haproxy.cfg<<EOF
# /etc/haproxy/haproxy.cfg
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    log /dev/log local0
    log /dev/log local1 notice
    daemon

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 1
    timeout http-request    10s
    timeout queue           20s
    timeout connect         5s
    timeout client          20s
    timeout server          20s
    timeout http-keep-alive 10s
    timeout check           10s

#---------------------------------------------------------------------
# apiserver frontend which proxies to the control plane nodes
#---------------------------------------------------------------------
frontend apiserver
    bind *:${APISERVER_DEST_PORT}
    mode tcp
    option tcplog
    default_backend apiserver

#---------------------------------------------------------------------
# round robin balancing for apiserver
#---------------------------------------------------------------------
backend apiserver
    option httpchk GET /healthz
    http-check expect status 200
    mode tcp
    option ssl-hello-chk
    balance     roundrobin
        server ${HOST1_ID} ${HOST1_ADDRESS}:${APISERVER_SRC_PORT} check
        server ${HOST2_ID} ${HOST2_ADDRESS}:${APISERVER_SRC_PORT} check
        server ${HOST3_ID} ${HOST3_ADDRESS}:${APISERVER_SRC_PORT} check
EOF
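
Before starting anything, it is worth letting haproxy validate the generated file; -c performs a configuration check only (a sketch):

haproxy -c -f /etc/haproxy/haproxy.cfg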

Start the services

systemctl enable haproxy --now
systemctl enable keepalived --now

Starting haproxy may produce the following message, which can be ignored (no API server is running yet):

Message from syslogd@localhost at Mar  7 20:55:39 ...
 haproxy[27917]:backend apiserver has no server available!

Run hostname -I on every node to list its IP addresses; exactly one node (the one with the highest ${PRIORITY}) will show one extra address, the virtual IP.
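
Rather than eyeballing the hostname -I output, you can grep for the VIP directly; run this on every control-plane node and exactly one of them should print a match (a sketch using this article's example address):

ip -4 addr show | grep '20.88.9.30'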

Initialize Kubernetes

Run the following on any one control-plane node:

# command template
kubeadm init --control-plane-endpoint "LOAD_BALANCER_DNS:LOAD_BALANCER_PORT" --upload-certs
# example invocation; --upload-certs shares the certificates automatically (strictly speaking it uploads the control-plane certificates to the kubeadm-certs Secret)
# --control-plane-endpoint points at the virtual IP and the haproxy port
kubeadm init --control-plane-endpoint "20.88.9.30:8443" --upload-certs  --pod-network-cidr="10.244.0.0/16" --image-repository=registry.aliyuncs.com/google_containers
# or, using the variables declared earlier
kubeadm init --control-plane-endpoint "${APISERVER_VIP}:${APISERVER_DEST_PORT}" --upload-certs --pod-network-cidr="10.244.0.0/16" --image-repository=registry.aliyuncs.com/google_containers
  • --control-plane-endpoint: the virtual IP address and the port haproxy listens on. If a node fails, keepalived automatically brings the virtual IP up on another healthy node.
  • --upload-certs: lets the other control-plane nodes fetch the shared certificates automatically when they join.
  • --pod-network-cidr: the pod network CIDR. This matters because flannel will be used for the pod network later.
  • --image-repository: switch the image source to the Aliyun mirror, since the default Kubernetes registry is not reachable from inside China because of the national firewall.

When the command finishes, the console prints output like:

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 20.88.9.30:8443 --token 4uxoal.oxubig3vq0z7sx4p \
	--discovery-token-ca-cert-hash sha256:9d0b5d3a12ffe5d29af2b6881b0fb427387bdf308718327f3eb95039a3b117f9 \
	--control-plane --certificate-key c6a0aedd0e2aade5fb19fc02565aebf78170c14af7cd56172b96e1ce181bed7c

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 20.88.9.30:8443 --token 4uxoal.oxubig3vq0z7sx4p \
	--discovery-token-ca-cert-hash sha256:9d0b5d3a12ffe5d29af2b6881b0fb427387bdf308718327f3eb95039a3b117f9
To add more control-plane nodes

Copy the first join command from the output:

  kubeadm join 20.88.9.30:8443 --token 4uxoal.oxubig3vq0z7sx4p \
	--discovery-token-ca-cert-hash sha256:9d0b5d3a12ffe5d29af2b6881b0fb427387bdf308718327f3eb95039a3b117f9 \
	--control-plane --certificate-key c6a0aedd0e2aade5fb19fc02565aebf78170c14af7cd56172b96e1ce181bed7c

and run it on the other control-plane nodes.

Scrolling up in the output there is also a prompt with the commands to copy the kubeconfig; or simply run the following (on every control-plane node):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config
To add worker nodes

Then add the worker nodes: copy the last block printed by kubeadm init and run it on each worker node:

kubeadm join 20.88.9.30:8443 --token 4uxoal.oxubig3vq0z7sx4p \
	--discovery-token-ca-cert-hash sha256:9d0b5d3a12ffe5d29af2b6881b0fb427387bdf308718327f3eb95039a3b117f9

Every control-plane node can now run commands such as kubectl get nodes to list all nodes in the cluster (the nodes all show NotReady at this point, which is fine).
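
The NotReady state is simply because no CNI plugin has been installed yet. Since --pod-network-cidr=10.244.0.0/16 was chosen with flannel in mind, the step that eventually fixes it looks roughly like this (a sketch, assuming the upstream flannel manifest URL is reachable from your network):

kubectl apply -f https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
# once the flannel pods are Running, the nodes switch to Ready
kubectl get nodes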

The cluster is not fully finished yet, but you can already start testing its high availability.

Run hostname -I on each node to see its addresses, find the node that currently holds the virtual IP, and then simply run shutdown on it.

Wait a moment for that node to power off, then run kubectl get nodes on the other nodes; if the node list still comes back, the cluster is highly available. You can keep stopping machines to test further, but how many can be stopped depends on the number of control-plane nodes: with n control-plane nodes at most (n-1)/2 may be down, i.e. more than half must stay up. With three control-plane nodes, for example, only one may be down at a time.
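
One way to run the failover test (a sketch; 20.88.9.30 is this article's example VIP):

# on the node that currently holds the VIP:
ip -4 addr show | grep -q '20.88.9.30' && shutdown -h now

# meanwhile, on any surviving control-plane node, keep querying the cluster through the VIP;
# after keepalived elects a new primary the queries keep succeeding
while true; do kubectl get nodes; sleep 5; done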
