Building a Highly Available Kubernetes Cluster

High availability mainly concerns the master nodes. Worker nodes usually exist in multiples, so the cluster keeps running even if one of them fails, and we can add or remove worker nodes as needed. If the master fails, however, the entire cluster becomes unavailable, which is why the master must be made highly available.

Test environment

Node         IP address
master-01    192.168.1.100
master-02    192.168.1.200
node         192.168.1.250

The deployment procedure is as follows.

Environment preparation

Set the hostnames

Run the corresponding command on each host:
[root@localhost ~]# hostnamectl set-hostname master-01
[root@localhost ~]# hostnamectl set-hostname master-02
[root@localhost ~]# hostnamectl set-hostname node 

Disable the firewall and SELinux

The steps are identical on all three nodes (only master-01 is shown):
[root@master-01 ~]# systemctl stop firewalld
[root@master-01 ~]# systemctl disable firewalld
[root@master-01 ~]# vi /etc/selinux/config 
SELINUX=disabled
[root@master-01 ~]# setenforce 0
[root@master-01 ~]# iptables -F
[root@master-01 ~]# iptables -X
[root@master-01 ~]# iptables -Z
[root@master-01 ~]# /usr/sbin/iptables-save 
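As a quick optional sanity check (not part of the original steps), confirm that the firewall is stopped and SELinux is in permissive mode (it will show Disabled only after a reboot):
[root@master-01 ~]# systemctl is-active firewalld
inactive
[root@master-01 ~]# getenforce
Permissive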

Configure passwordless SSH login and edit the hosts file

[root@master-01 ~]# vi /etc/hosts 
192.168.1.100   master-01
192.168.1.200   master-02
192.168.1.250   node
[root@master-01 ~]# ssh-keygen 
[root@master-01 ~]# ssh-copy-id master-02 
[root@master-01 ~]# ssh-copy-id node
[root@master-01 ~]# scp /etc/hosts root@master-02:/etc/hosts 
[root@master-01 ~]# scp /etc/hosts root@node:/etc/hosts 
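To verify that name resolution and passwordless login work, you can optionally run a remote command from master-01; it should return the remote hostname without prompting for a password:
[root@master-01 ~]# ssh master-02 hostname
master-02
[root@master-01 ~]# ssh node hostname
node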

Disable the swap partition and enable kernel forwarding

Run this step on all three nodes (the steps are identical; master-01 is shown):
[root@master-01 ~]# swapoff -a 
[root@master-01 ~]# vi /etc/fstab 
#/dev/mapper/centos-swap swap                    swap    defaults        0 0
[root@master-01 ~]# mount -a 
[root@master-01 ~]# modprobe br_netfilter 
[root@master-01 ~]# vi /etc/sysctl.conf 
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
[root@master-01 ~]# sysctl -p 
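Optionally, verify that swap is off, the br_netfilter module is loaded, and the sysctl values took effect:
[root@master-01 ~]# free -m | grep -i swap
[root@master-01 ~]# lsmod | grep br_netfilter
[root@master-01 ~]# sysctl net.bridge.bridge-nf-call-iptables
net.bridge.bridge-nf-call-iptables = 1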

Configure the Kubernetes yum repository (Aliyun mirror)

All three nodes need this step:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
[root@master-01 ~]# yum clean all && yum makecache 
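To confirm that the repository is usable and that the 1.18.2 packages are available, you can optionally run:
[root@master-01 ~]# yum repolist enabled | grep -i kubernetes
[root@master-01 ~]# yum list kubeadm --showduplicates | grep 1.18.2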

Time synchronization (master-01 acts as the NTP server)

[root@master-01 ~]# yum install -y ntp ntpdate 
[root@master-01 ~]# ntpdate ntp1.aliyun.com 
 8 Jan 10:13:53 ntpdate[14294]: adjust time server 120.25.115.20 offset -0.008179 sec
[root@master-01 ~]# vi /etc/ntp.conf 
17 restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap    //uncomment this line
 18 server 127.127.1.0   //add this line (use the local clock as the time source)
 21 #server 0.centos.pool.ntp.org iburst  (comment out the lines starting with server)
 22 #server 1.centos.pool.ntp.org iburst
 23 #server 2.centos.pool.ntp.org iburst
 24 #server 3.centos.pool.ntp.org iburst
[root@master-01 ~]# systemctl start ntpd && systemctl enable ntpd 

Operations on master-02 and node (identical on both):
[root@master-02 ~]# yum install -y ntpdate 
[root@master-02 ~]# ntpdate master-01 
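Optionally (not part of the original procedure), master-02 and node can re-sync periodically via cron, and the peer status can be checked on master-01:
[root@master-02 ~]# crontab -e    # add the line below
*/30 * * * * /usr/sbin/ntpdate master-01
[root@master-01 ~]# ntpq -p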

Deploy HAProxy + Keepalived

Install HAProxy

Install on both master-01 and master-02.

[root@master-01 ~]# yum install -y haproxy 
[root@master-01 ~]# vi /etc/haproxy/haproxy.cfg 
Add the following below the defaults section (around line 42 of the file):
listen k8s_haproxy
    bind *:6666
    mode tcp
    server master-01 192.168.1.100:6443 check inter 2000 rise 2 fall 5
    server master-02 192.168.1.200:6443 check inter 2000 rise 2 fall 5
Then copy this configuration file to master-02:
[root@master-01 ~]# scp /etc/haproxy/haproxy.cfg root@master-02:/etc/haproxy/haproxy.cfg 
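The listen block above makes HAProxy accept TCP connections on port 6666 and forward them to the kube-apiserver (port 6443) of whichever master passes the health check. Before starting, you can optionally verify the configuration syntax on both masters; haproxy should report that the file is valid:
[root@master-01 ~]# haproxy -c -f /etc/haproxy/haproxy.cfg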

Then start the service (run on both nodes):
[root@master-01 ~]# systemctl start haproxy && systemctl enable haproxy 
[root@master-01 ~]# ss -tan |grep 6666
LISTEN     0      128          *:6666                     *:*                

Install Keepalived

Install on both master-01 and master-02.

[root@master-01 ~]# yum install -y keepalived 
[root@master-01 ~]# vi /etc/keepalived/keepalived.conf
Edit the file so that it contains only the following:
! Configuration File for keepalived

global_defs {
   router_id LVS_K8S
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.150
    }
}
[root@master-01 ~]# scp /etc/keepalived/keepalived.conf root@master-02:/etc/keepalived/keepalived.conf
The copied file needs to be modified on master-02:
 state BACKUP    (set the state to BACKUP on both nodes)
 priority ?      (the priority must not be the same on the two nodes)
After the changes are done, start the keepalived service:
[root@master-01 ~]# systemctl start keepalived 
[root@master-01 ~]# systemctl enable  keepalived 
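To confirm that keepalived is working, check the interface (ens33 in this environment) on the node with the higher priority; the virtual IP 192.168.1.150 should appear there, and it should move to the other master if that node goes down:
[root@master-01 ~]# ip addr show ens33 | grep 192.168.1.150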

Install Docker

Install it on all three nodes.

[root@master-01 ~]# yum install -y docker 
[root@master-01 ~]# vi /etc/docker/daemon.json 
{
"registry-mirrors": ["https://ocfyrwaf.mirror.aliyuncs.com"]
}
[root@master-01 ~]# systemctl start docker && systemctl enable docker 
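Optionally, check that the registry mirror was picked up; depending on the Docker version packaged by the distribution, docker info lists the configured mirrors:
[root@master-01 ~]# docker info | grep -iA1 "registry mirrors"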

Install the Kubernetes packages

Install on all three nodes:
[root@master-01 ~]#  yum install -y kubeadm-1.18.2-0 kubectl-1.18.2-0 kubelet-1.18.2-0
[root@master-01 ~]# systemctl start kubelet && systemctl enable  kubelet 
(kubelet will keep restarting until kubeadm init or kubeadm join has been run on the node; this is expected.)

Create the YAML file needed for cluster initialization

This step is required: it provides the cluster information (in particular the control-plane endpoint pointing at the VIP), without which high availability cannot be achieved.

[root@master-01 ~]# vi kubeadm-config.yaml 
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.18.2
controlPlaneEndpoint: 192.168.1.150:6666
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
networking: 
  podSubnet: 10.244.0.0/16 
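Optionally, the control-plane images can be pulled in advance using the same configuration file; this makes the actual init faster and surfaces image-registry problems early:
[root@master-01 ~]# kubeadm config images pull --config kubeadm-config.yaml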

Initialize the cluster

[root@master-01 ~]# kubeadm init --config=kubeadm-config.yaml 
Output like the following indicates success:
......
To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of the control-plane node running the following command on each as root:

  kubeadm join 192.168.1.150:6666 --token pnrb3w.1bh133p5guitwx3n \
    --discovery-token-ca-cert-hash sha256:864dc739ca82c91e319446b46cb38888f7e6866bac1eb25e18d23aa47fd53ff2 \
    --control-plane --certificate-key fd53740a2accc69c9c39947df96a5248b0fc82282a20347812b57eb0bccd4191

Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
"kubeadm init phase upload-certs --upload-certs" to reload certs afterward.

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.150:6666 --token pnrb3w.1bh133p5guitwx3n \
    --discovery-token-ca-cert-hash sha256:864dc739ca82c91e319446b46cb38888f7e6866bac1eb25e18d23aa47fd53ff2 

Follow the prompt and run the following commands:

[root@master-01 ~]# mkdir -p $HOME/.kube
[root@master-01 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master-01 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check the status:
[root@master-01 ~]# kubectl get nodes 
NAME        STATUS     ROLES    AGE     VERSION
master-01   NotReady   master   5m47s   v1.18.2
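master-01 shows NotReady because no network (CNI) add-on has been deployed yet; that is expected at this stage. You can also check the control-plane pods (coredns stays Pending until the network add-on is installed):
[root@master-01 ~]# kubectl get pods -n kube-system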

Join master-02 to the cluster

[root@master-02 ~]# kubeadm join 192.168.1.150:6666 --token pnrb3w.1bh133p5guitwx3n \
>     --discovery-token-ca-cert-hash sha256:864dc739ca82c91e319446b46cb38888f7e6866bac1eb25e18d23aa47fd53ff2 \
>     --control-plane --certificate-key fd53740a2accc69c9c39947df96a5248b0fc82282a20347812b57eb0bccd4191

Output like the following indicates success:
To start administering your cluster from this node, you need to run the following as a regular user:

        mkdir -p $HOME/.kube
        sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
        sudo chown $(id -u):$(id -g) $HOME/.kube/config


Then run the following commands:
[root@master-02 ~]# mkdir -p $HOME/.kube
[root@master-02 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master-02 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check the status:

[root@master-02 ~]# kubectl get nodes 
NAME        STATUS     ROLES    AGE     VERSION
master-01   NotReady   master   3m25s   v1.18.2
master-02   NotReady   master   115s    v1.18.2

Install the network add-on

Run the following command on one of the masters (applying the manifest once is enough for the whole cluster):

[root@master-01 ~]#  kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Check the running status:
[root@master-01 ~]# kubectl get nodes 
NAME        STATUS   ROLES    AGE     VERSION
master-01   Ready    master   8m9s    v1.18.2
master-02   Ready    master   6m39s   v1.18.2

All nodes are now Ready.
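If any node stays NotReady, an optional check is to make sure the flannel pods are running on every node:
[root@master-01 ~]# kubectl get pods -n kube-system -o wide | grep flannel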

Join the worker node

Run the following command on node:

[root@node ~]# kubeadm join 192.168.1.150:6666 --token pnrb3w.1bh133p5guitwx3n \
>     --discovery-token-ca-cert-hash sha256:864dc739ca82c91e319446b46cb38888f7e6866bac1eb25e18d23aa47fd53ff2 

Check on master-01 that the join succeeded:
[root@master-01 ~]# kubectl get nodes 
NAME        STATUS   ROLES    AGE     VERSION
master-01   Ready    master   11m     v1.18.2
master-02   Ready    master   9m52s   v1.18.2
node        Ready    <none>   107s    v1.18.2

At this point, the Kubernetes high-availability cluster is complete.
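As a final, optional test of the high-availability setup, you can simulate a failure on master-01 and confirm that the VIP fails over to master-02 while the API stays reachable (a rough sketch; the interface name and IPs match this article's environment):
[root@master-01 ~]# systemctl stop keepalived haproxy
[root@master-02 ~]# ip addr show ens33 | grep 192.168.1.150
[root@master-02 ~]# kubectl get nodes
After the check, restart the services on master-01:
[root@master-01 ~]# systemctl start keepalived haproxy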
