Kubernetes 1.16.4 High-Availability Deployment (Multiple Master Nodes with keepalived)

1. System Environment

This guide builds a keepalived-based multi-master Kubernetes cluster on CentOS 7.5 64-bit, giving the control plane high availability, and also deploys a web-based Dashboard.
Images, packages, and configuration files required for the deployment:
Link: https://pan.baidu.com/s/1YOZ-MgVnraYmN1IJc_rkRA
Extraction code: 6fpe

Node role                 Hostname        Node IP
Master, etcd, registry    master1         192.168.1.11/24
Master, etcd, registry    master2         192.168.1.12/24
Master, etcd, registry    master3         192.168.1.13/24
Node1                     node1           192.168.1.14/24
Node2                     node2           192.168.1.15/24
Node3                     node3           192.168.1.16/24
Dashboard                 k8s-dashboard   192.168.1.17/24

2. Basic Environment Configuration

2.1 Host Configuration

Set the hostname on each machine:

hostnamectl --static set-hostname  master1
hostnamectl --static set-hostname  master2
hostnamectl --static set-hostname  master3
hostnamectl --static set-hostname  node1
hostnamectl --static set-hostname  node2
hostnamectl --static set-hostname  node3
hostnamectl --static set-hostname  dashboard

Edit the hosts file and add the following entries on every machine:

192.168.1.11  master1
192.168.1.12  master2
192.168.1.13  master3
192.168.1.14  node1
192.168.1.15  node2
192.168.1.16  node3
192.168.1.17  dashboard

Configure passwordless SSH login on master1:

ssh-keygen	# press Enter at every prompt
for i in {11..17}; do ssh-copy-id root@192.168.1.$i; done
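With the keys in place, master1's hosts file can be pushed to every machine in one loop (a minimal sketch; it assumes the entries above have already been added to /etc/hosts on master1 and will overwrite each host's existing file):
for i in {11..17}; do scp /etc/hosts root@192.168.1.$i:/etc/hosts; done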

Perform the following operations on all hosts.
Disable the firewall and adjust SELinux:

for i in {11..17}; do ssh  root@192.168.1.$i "systemctl disable firewalld.service;systemctl stop firewalld.service;systemctl mask firewalld.service"; done

Edit the /etc/selinux/config file and change SELINUX=enforcing to SELINUX=disabled:

sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config	# this change takes effect only after a reboot
setenforce 0 

Disable the system swap space:

swapoff -a
Edit /etc/fstab and comment out the swap line:
vim /etc/fstab
......
#/dev/mapper/centos-swap   swap     swap    defaults      0 0

Because traffic that bypasses iptables can cause routing problems, make sure net.bridge.bridge-nf-call-iptables is set to 1.

vim /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward=1
modprobe br_netfilter # load br_netfilter first so that the bridge sysctls exist
sysctl --system # apply the configuration
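To avoid repeating this on every machine, the sysctl file and module load can be pushed out from master1 (a minimal sketch that relies on the SSH keys configured earlier):
for i in {11..17}; do
    scp /etc/sysctl.d/k8s.conf root@192.168.1.$i:/etc/sysctl.d/k8s.conf
    ssh root@192.168.1.$i "modprobe br_netfilter; sysctl --system"
done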

2.2 Install Packages

Configure the Aliyun yum repositories.
The yum repo files for CentOS, docker-ce, and Kubernetes can be found on the Aliyun mirror site: https://developer.aliyun.com/mirror/

yum install -y yum-utils device-mapper-persistent-data lvm2 createrepo docker-ce-18.06.3 docker-ce-cli-18.06.3 containerd.io kubelet-1.16.4 kubeadm-1.16.4 kubectl-1.16.4
Start docker and kubelet and enable them at boot:
systemctl enable docker;systemctl start docker
systemctl enable kubelet;systemctl start kubelet

Alternatively, download the required packages on a host with internet access, upload them to the hosts being installed, configure them as a local yum repository, and install from there.

yum install --downloadonly --downloaddir=/k8s yum-utils device-mapper-persistent-data lvm2 createrepo docker-ce-18.06.3 docker-ce-cli-18.06.3 containerd.io kubelet-1.16.4 kubeadm-1.16.4 kubectl-1.16.4 keepalived
--downloadonly  download only the packages and their dependencies, without installing
--downloaddir   directory to download into
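A minimal sketch of turning the downloaded packages into a local repository on an offline host (the repo id k8s-local is an arbitrary choice; /k8s is the download directory used above):
createrepo /k8s        # generate repodata for the downloaded RPMs
cat > /etc/yum.repos.d/k8s-local.repo <<EOF
[k8s-local]
name=k8s-local
baseurl=file:///k8s
enabled=1
gpgcheck=0
EOF
yum clean all
yum install -y docker-ce-18.06.3 docker-ce-cli-18.06.3 containerd.io kubelet-1.16.4 kubeadm-1.16.4 kubectl-1.16.4 keepalived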

2.3 Configure Docker

Change docker's cgroup driver to systemd and add the Aliyun registry mirror:

vim /etc/docker/daemon.json
{
        "registry-mirrors": ["https://lt2ws3tf.mirror.aliyuncs.com"],
        "exec-opts": ["native.cgroupdriver=systemd"]
}
Reload systemd and restart docker:
systemctl daemon-reload;systemctl restart docker
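To confirm the new settings took effect (expected values are shown in the comments):
docker info | grep -i "cgroup driver"         # should report: Cgroup Driver: systemd
docker info | grep -i -A1 "registry mirrors"  # should list the Aliyun mirror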

2.4 Download the Images

Because of network restrictions, the required images can be downloaded in advance and uploaded to all hosts.

docker pull kubernetesui/metrics-scraper:v1.0.3          
docker pull k8s.gcr.io/kube-apiserver:v1.16.4         
docker pull k8s.gcr.io/kube-controller-manager:v1.16.4         
docker pull k8s.gcr.io/kube-proxy:v1.16.4         
docker pull k8s.gcr.io/kube-scheduler:v1.16.4         
docker pull k8s.gcr.io/etcd:3.3.15-0        
docker pull k8s.gcr.io/coredns:1.6.2           
docker pull k8s.gcr.io/pause:3.1  
docker pull kubernetesui/dashboard:v2.0.0-rc6      
docker pull quay.io/coreos/flannel:v0.12.0-amd64 

Export the images in one batch:
docker images | awk '{print $1}' > images.txt  # get the image list
sed -i '1d' images.txt                         # drop the header line
docker save -o k8s.tar $(cat images.txt)       # save all images to a local archive
Then upload the archive k8s.tar to every host and import it:
docker load -i k8s.tar
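An alternative sketch that keeps explicit repository:tag pairs instead of relying on the awk/sed cleanup:
docker images --format '{{.Repository}}:{{.Tag}}' > images.txt  # one repo:tag per line, no header
docker save -o k8s.tar $(cat images.txt)                        # save all listed images into one archive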

Images that must be present on the node (worker) machines:

k8s.gcr.io/kube-proxy:v1.16.4
k8s.gcr.io/pause:3.1
kubernetesui/dashboard:v2.0.0-rc6
kubernetesui/metrics-scraper:v1.0.3
quay.io/coreos/flannel:v0.12.0-amd64
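A quick way to confirm these images are present on a node (a minimal sketch; adjust the list if your versions differ):
for img in k8s.gcr.io/kube-proxy:v1.16.4 k8s.gcr.io/pause:3.1 kubernetesui/dashboard:v2.0.0-rc6 kubernetesui/metrics-scraper:v1.0.3 quay.io/coreos/flannel:v0.12.0-amd64; do
    docker image inspect "$img" > /dev/null 2>&1 && echo "OK  $img" || echo "MISSING  $img"
done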

3. Master Node Configuration

Perform the following operations on the master nodes.

3.1 Install keepalived

yum -y install keepalived
Configure keepalived.
Configuration on master1:

vim /etc/keepalived/keepalived.conf 
! Configuration File for keepalived
global_defs {
   router_id master1
}
vrrp_instance VI_1 {
    state MASTER	# the primary node is MASTER
    interface ens32	# the NIC each node actually communicates on
    virtual_router_id 100	# the VRID must be identical on master and backups
    priority 250	# server priority; the highest priority claims the VIP
    advert_int 1
    authentication {
        auth_type PASS  
        auth_pass 1111  # the password must be identical on master and backups
    }
    virtual_ipaddress {
        192.168.1.10 # VIP
    }
}

Configuration on master2:

vim /etc/keepalived/keepalived.conf 
! Configuration File for keepalived
global_defs {
   router_id master2
}
vrrp_instance VI_1 {
    state BACKUP	# keepalived backup nodes are BACKUP
    interface ens32	# the NIC each node actually communicates on
    virtual_router_id 100	# the VRID must be identical on master and backups
    priority 100	# server priority; the highest priority claims the VIP
    advert_int 1
    authentication {
        auth_type PASS  
        auth_pass 1111  # the password must be identical on master and backups
    }
    virtual_ipaddress {
        192.168.1.10 # VIP
    }
}

Configuration on master3:

vim /etc/keepalived/keepalived.conf 
! Configuration File for keepalived
global_defs {
   router_id master3
}
vrrp_instance VI_1 {
    state BACKUP	# keepalived backup nodes are BACKUP
    interface ens32	# the NIC each node actually communicates on
    virtual_router_id 100	# the VRID must be identical on master and backups
    priority 50	# server priority; the highest priority claims the VIP
    advert_int 1
    authentication {
        auth_type PASS  
        auth_pass 1111  # the password must be identical on master and backups
    }
    virtual_ipaddress {
        192.168.1.10 # VIP
    }
}

Start keepalived and enable it at boot:
systemctl enable keepalived; systemctl start keepalived
On the node acting as the keepalived MASTER, check the interface addresses; the VIP should appear on ens32:

]# ip addr
ens32: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    inet 192.168.1.11/24 brd 192.168.1.255 scope global ens32
       valid_lft forever preferred_lft forever
    inet 192.168.1.10/32 scope global ens32
...............

Make sure every node can ping the VIP.
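A quick check from master1 (a minimal sketch that relies on the SSH keys configured earlier):
for i in {11..17}; do
    ssh root@192.168.1.$i "ping -c 2 -W 1 192.168.1.10 > /dev/null && echo 192.168.1.$i OK || echo 192.168.1.$i FAILED"
done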

3.2 Configure Kubernetes

Initialize the first master node. Create the kubeadm configuration:

vim kubeadm-config.yaml 
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.16.4
apiServer:
  certSANs:    # list the hostnames, IPs, and the VIP of all kube-apiserver nodes
  - master1
  - master2
  - master3
  - node1
  - node2
  - node3
  - 192.168.1.11
  - 192.168.1.12
  - 192.168.1.13
  - 192.168.1.14
  - 192.168.1.15
  - 192.168.1.16
  - 192.168.1.10
controlPlaneEndpoint: "192.168.1.10:6443"
networking:
  podSubnet: "10.244.0.0/16"
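
Before running the init, kubeadm can list (and, if the host has network access, pre-pull) the control-plane images it expects, which is a convenient way to confirm the tags match the images loaded earlier:
kubeadm config images list --config=kubeadm-config.yaml
kubeadm config images pull --config=kubeadm-config.yaml    # optional; only needed if the images were not loaded manually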

Run the initialization:
kubeadm init --config=kubeadm-config.yaml
After initialization succeeds you will see output like the following; copy and save the kubeadm join commands:

You can now join any number of control-plane nodes by copying certificate authorities 
and service account keys on each node and then running the following as root:

  kubeadm join 192.168.1.10:6443 --token qbwt6v.rr4hsh73gv8vrcij \
    --discovery-token-ca-cert-hash sha256:e306ffc7a126eb1f2c0cab297bbbed04f5bb464a04c05f1b0171192acbbae966 \
    --control-plane       

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.1.10:6443 --token qbwt6v.rr4hsh73gv8vrcij \
    --discovery-token-ca-cert-hash sha256:e306ffc7a126eb1f2c0cab297bbbed04f5bb464a04c05f1b0171192acbbae966 

After the installation succeeds, some important information is displayed. First, run the following commands as prompted:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> /etc/profile
source /etc/profile
echo $KUBECONFIG    # should print /etc/kubernetes/admin.conf

If initialization fails, run kubeadm reset and then initialize again:
kubeadm reset
rm -rf $HOME/.kube/config

3.3 Install the Network Plugin

Install a Pod network add-on for the cluster. Here I choose flannel; download the manifest from the internet:
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml # deploy flannel
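Once the manifest is applied, the flannel DaemonSet should start one pod on each node; a quick check:
kubectl -n kube-system get pods -o wide | grep flannel   # one kube-flannel-ds pod per node, STATUS Running
kubectl get nodes                                        # master1 should move from NotReady to Ready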

3.4 Join the Other Master Nodes

On master1, run the script cert-main-master.sh to distribute the certificates to master2 and master3.

cat cert-main-master.sh 

USER=root   # customizable
CONTROL_PLANE_IPS="192.168.1.12 192.168.1.13"
for host in ${CONTROL_PLANE_IPS}; do
    scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:
    scp /etc/kubernetes/pki/ca.key "${USER}"@$host:
    scp /etc/kubernetes/pki/sa.key "${USER}"@$host:
    scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:
    scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:
    scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:
    scp /etc/kubernetes/pki/etcd/ca.crt "${USER}"@$host:etcd-ca.crt
    # Quote this line if you are using external etcd
    scp /etc/kubernetes/pki/etcd/ca.key "${USER}"@$host:etcd-ca.key
done
chmod +x cert-main-master.sh

./cert-main-master.sh

On master2, run the script cert-other-master.sh to move the certificates into place.

cat cert-other-master.sh 

USER=root # customizable
mkdir -p /etc/kubernetes/pki/etcd
mv /${USER}/ca.crt /etc/kubernetes/pki/
mv /${USER}/ca.key /etc/kubernetes/pki/
mv /${USER}/sa.pub /etc/kubernetes/pki/
mv /${USER}/sa.key /etc/kubernetes/pki/
mv /${USER}/front-proxy-ca.crt /etc/kubernetes/pki/
mv /${USER}/front-proxy-ca.key /etc/kubernetes/pki/
mv /${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
mv /${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key

./cert-other-master.sh

On master3, run the script cert-other-master.sh to move the certificates into place.

cat cert-other-master.sh 

USER=root # customizable
mkdir -p /etc/kubernetes/pki/etcd
mv /${USER}/ca.crt /etc/kubernetes/pki/
mv /${USER}/ca.key /etc/kubernetes/pki/
mv /${USER}/sa.pub /etc/kubernetes/pki/
mv /${USER}/sa.key /etc/kubernetes/pki/
mv /${USER}/front-proxy-ca.crt /etc/kubernetes/pki/
mv /${USER}/front-proxy-ca.key /etc/kubernetes/pki/
mv /${USER}/etcd-ca.crt /etc/kubernetes/pki/etcd/ca.crt
mv /${USER}/etcd-ca.key /etc/kubernetes/pki/etcd/ca.key

./cert-other-master.sh

Run the kubeadm join command on master2 to add it to the cluster as a control-plane node:

kubeadm join 192.168.1.10:6443 --token qbwt6v.rr4hsh73gv8vrcij \
    --discovery-token-ca-cert-hash sha256:e306ffc7a126eb1f2c0cab297bbbed04f5bb464a04c05f1b0171192acbbae966 \
    --control-plane 
Run the kubeadm join command on master3 to add it to the cluster as a control-plane node:
 kubeadm join 192.168.1.10:6443 --token qbwt6v.rr4hsh73gv8vrcij \
    --discovery-token-ca-cert-hash sha256:e306ffc7a126eb1f2c0cab297bbbed04f5bb464a04c05f1b0171192acbbae966 \
    --control-plane 
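The control-plane join output also reminds you to set up kubectl on the new masters; the same commands used on master1 apply (run on master2 and master3):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config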

Afterwards, check the node status on master1:
kubectl get nodes

3.5 Join the Worker Nodes

Run the kubeadm join command on node1 through node3 to add the worker nodes to the cluster:

kubeadm join 192.168.1.10:6443 --token qbwt6v.rr4hsh73gv8vrcij \
    --discovery-token-ca-cert-hash sha256:e306ffc7a126eb1f2c0cab297bbbed04f5bb464a04c05f1b0171192acbbae966

Verify on master1:
kubectl get nodes
If joining a node fails, you can delete the existing token and create a new one:

kubeadm token list  # list existing tokens
kubeadm token delete qbwt6v.rr4hsh73gv8vrcij  # delete the old token
kubeadm token create		# generate a new token
kubeadm token list
kubeadm join 192.168.1.10:6443 --token io12sv.tpf5yoari39rephk     --discovery-token-ca-cert-hash sha256:f61c07a8656d1cdb46a2faa400e0dc613fc128f563d39017b0a4880dc848722d

Re-run the join operation with the new token.
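kubeadm can also regenerate a token and print the complete worker join command in one step:
kubeadm token create --print-join-command   # prints a ready-to-use "kubeadm join ..." line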

4. Configure the Kubernetes Dashboard

Make sure the following images are present on the node machines:
kubernetesui/dashboard:v2.0.0-rc6
kubernetesui/metrics-scraper:v1.0.3
Fetch the Dashboard manifest:
wget https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended.yaml
Edit the kubernetes-dashboard Service in recommended.yaml and change it to the NodePort type, as in the sketch below.
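A minimal sketch of the modified Service block (the nodePort value 32500 is inferred from the URL used below; the rest mirrors the Service already defined in recommended.yaml):
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort          # expose the Dashboard on every node
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 32500     # must match the port used in the browser
  selector:
    k8s-app: kubernetes-dashboard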
kubectl apply -f recommended.yaml # create the Dashboard resources

Access the Dashboard page in a browser:
Open https://192.168.1.10:32500/, click "Advanced", and proceed to the site.


Obtain a login token.
Create dashboard-adminuser.yaml:

vim dashboard-adminuser.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

kubectl apply -f dashboard-adminuser.yaml		# create the user
Check the token of the admin-user account:
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')

Copy the token into the Token field on the login page.

5. Kubernetes High-Availability Test

If master1 is rebooted or shut down, or its keepalived or Kubernetes services are stopped, the VIP fails over to another master node.

Copy the /etc/kubernetes/admin.conf file from master1 to all nodes:

mkdir -p /etc/kubernetes
for i in {12..17}; do scp /etc/kubernetes/admin.conf 192.168.1.$i:/etc/kubernetes/; done
echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile
source ~/.bash_profile
Load the environment variables and enable kubectl bash completion:
echo "source <(kubectl completion bash)" >> ~/.bash_profile
source ~/.bash_profile
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)

After completing the steps above, Kubernetes cluster operations can be run from any node.
