Reference: https://blog.csdn.net/chenleiking/article/details/84841394
This article uses six virtual machines to build a Kubernetes cluster with three masters and high availability among them. Throughout, "Master node" means a node with the master role in the k8s cluster, and "Node" means a worker-role node. All nodes are operated as root from start to finish.
I. Versions, environment preparation, node plan, and images
Version information:
OS: CentOS Linux release 7.3.1611
Linux Kernel: 3.10.0-514.el7.x86_64
Docker: 18.06-ce
k8s:
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.0", GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"clean", BuildDate:"2018-12-03T21:04:45Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.0", GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"clean", BuildDate:"2018-12-03T20:56:12Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
1. Environment preparation:
Master nodes (VMs or servers) need at least 2 CPUs. Note that the command below counts physical CPU sockets, not cores:
cat /proc/cpuinfo | grep "physical id" | sort | uniq | wc -l
To see the number of logical CPUs instead, run nproc.
(1) Set up passwordless SSH login between all k8s nodes
All nodes are operated as root; configure passwordless login between every node, with no further distinctions.
(2) Time synchronization
yum install -y ntpdate
ntpdate -u ntp.api.bz
(3) Disable the firewall, SELinux, and swap on all nodes.
systemctl disable firewalld.service
systemctl stop firewalld.service
systemctl status firewalld.service
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
setenforce 0
sed -i 's/\(.*swap.*\)/# \1/g' /etc/fstab
swapoff -a
swapoff -a disables swap immediately; the /etc/fstab edit only takes effect on the next boot, and together they keep swap off permanently without requiring a reboot now.
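The preparation steps above can be sanity-checked with a short script. This is a hedged sketch (the 2-CPU requirement and the expectation that swap is off come from the steps above; nothing here modifies the system):

```shell
#!/bin/sh
# Preflight sketch: report CPU count, swap usage, and SELinux mode.

cpus=$(nproc)                                   # logical CPUs visible to the OS
echo "logical CPUs: $cpus (masters need >= 2)"

swap_kb=$(awk '/SwapTotal/ {print $2}' /proc/meminfo)
echo "swap total: ${swap_kb} kB (should be 0 after swapoff -a)"

if command -v getenforce >/dev/null 2>&1; then
  echo "SELinux: $(getenforce) (should be Permissive or Disabled)"
else
  echo "SELinux tools not installed"
fi
```

Run it on every node before moving on to the deployment steps.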
2. Node plan:
IP          Hostname          Role
10.10.1.200 master1           etcd, Master, Node, keepalived
10.10.1.199 master2           etcd, Master, Node, keepalived
10.10.1.198 master3           etcd, Master, Node, keepalived
10.10.1.201 node1             Node
10.10.1.202 node2             Node
10.10.1.203 node3             Node
10.10.1.210 cluster.kube.com  virtual IP (VIP)
Add every hostname/IP pair to /etc/hosts on all nodes for name resolution:
cat /etc/hosts
10.10.1.200 master1
10.10.1.201 node1
10.10.1.202 node2
10.10.1.203 node3
10.10.1.198 master3
10.10.1.199 master2
10.10.1.210 cluster.kube.com
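Rather than editing /etc/hosts by hand on six machines, the mapping above can be written once and appended everywhere. A sketch (the file path /tmp/k8s-hosts is arbitrary; the ssh loop assumes the passwordless login from step (1) is already working):

```shell
# Write the cluster host entries once.
cat > /tmp/k8s-hosts <<'EOF'
10.10.1.200 master1
10.10.1.199 master2
10.10.1.198 master3
10.10.1.201 node1
10.10.1.202 node2
10.10.1.203 node3
10.10.1.210 cluster.kube.com
EOF

# Then append on each node (as root):
#   cat /tmp/k8s-hosts >> /etc/hosts
# Or push from one node to the rest over SSH:
#   for h in master2 master3 node1 node2 node3; do
#     ssh root@$h "cat >> /etc/hosts" < /tmp/k8s-hosts
#   done
```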
3. Image list:
k8s.gcr.io/kube-proxy v1.13.0 8fa56d18961f 9 days ago 80.2MB
k8s.gcr.io/kube-scheduler v1.13.0 9508b7d8008d 9 days ago 79.6MB
k8s.gcr.io/kube-controller-manager v1.13.0 d82530ead066 9 days ago 146MB
k8s.gcr.io/kube-apiserver v1.13.0 f1ff9b7e3d6e 9 days ago 181MB
quay.io/calico/node v3.3.2 4e9be81e3a59 9 days ago 75.3MB
quay.io/calico/cni v3.3.2 490d921fa49c 9 days ago 75.4MB
k8s.gcr.io/coredns 1.2.6 f59dcacceff4 5 weeks ago 40MB
k8s.gcr.io/etcd 3.2.24 3cab8e1b9802 2 months ago 220MB
quay.io/coreos/flannel v0.10.0-s390x 463654e4ed2d 10 months ago 47MB
quay.io/coreos/flannel v0.10.0-ppc64l e2f67d69dd84 10 months ago 53.5MB
quay.io/coreos/flannel v0.10.0-arm c663d02f7966 10 months ago 39.9MB
quay.io/coreos/flannel v0.10.0-amd64 f0fad859c909 10 months ago 44.6MB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 11 months ago 742kB
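k8s.gcr.io and quay.io may be unreachable from some networks, so the images above are usually pre-pulled. The sketch below only generates a pull script from the list (only the amd64 flannel image is included, since the nodes here are x86_64); if you cannot reach these registries directly, substitute a mirror you trust:

```shell
# Generate docker pull commands for the image list above.
IMAGES="
k8s.gcr.io/kube-proxy:v1.13.0
k8s.gcr.io/kube-scheduler:v1.13.0
k8s.gcr.io/kube-controller-manager:v1.13.0
k8s.gcr.io/kube-apiserver:v1.13.0
quay.io/calico/node:v3.3.2
quay.io/calico/cni:v3.3.2
k8s.gcr.io/coredns:1.2.6
k8s.gcr.io/etcd:3.2.24
quay.io/coreos/flannel:v0.10.0-amd64
k8s.gcr.io/pause:3.1
"

: > /tmp/pull-images.sh                 # truncate/create the script
for img in $IMAGES; do
  echo "docker pull $img" >> /tmp/pull-images.sh
done
# Review it, then run on every node: sh /tmp/pull-images.sh
```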
===============================================
II. Deployment steps
Note:
The load balancer and etcd clusters exist solely to serve this Kubernetes cluster; no standalone etcd cluster is set up, and neither is exposed externally.
2.1 Deploy keepalived [all masters]
Here keepalived's job is to provide the VIP (10.10.1.210) for haproxy and to arbitrate master/backup among the three haproxy instances, limiting the impact on the service when one haproxy instance fails.
(1) System configuration
cat >> /etc/sysctl.conf << EOF
net.ipv4.ip_forward = 1
EOF
sysctl -p
(2) Install keepalived
yum install -y keepalived
(3) Configure keepalived:
[Note: check that the VIP address is correct and that each node has a distinct priority; master1 is MASTER and the other nodes are BACKUP. "killall -0" checks by process name whether the process is alive, without sending it any signal.]
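Signal 0 is a pure existence check: kill or killall with -0 exits 0 if a matching process exists and non-zero otherwise, without delivering any signal, which is exactly what the vrrp_script below relies on. A quick demonstration on a throwaway process, using kill -0 (same semantics as killall -0, but by PID rather than by name):

```shell
sleep 30 &                 # start a throwaway background process
pid=$!

kill -0 "$pid" && echo "process $pid is alive"    # exit status 0 while it runs

kill "$pid"                # terminate it
wait "$pid" 2>/dev/null || true

kill -0 "$pid" 2>/dev/null || echo "process $pid is gone"   # now non-zero
```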
--------------master1:
cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 3
    weight -2
    fall 10
    rise 2
}
vrrp_instance VI_1 {
    state MASTER
    interface ens32
    virtual_router_id 51
    priority 250
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 35f18af7190d51c9f7f78f37300a0cbd
    }
    virtual_ipaddress {
        10.10.1.210
    }
    track_script {
        check_haproxy
    }
}
EOF
--------------master2:
cat > /etc/keepalived/keepalived.conf << EOF
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 3
    weight -2
    fall 10
    rise 2
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens32
    virtual_router_id 51
    priority 249
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 35f18af7190d51c9f7f78f37300a0cbd
    }
    virtual_ipaddress {
        10.10.1.210
    }
    track_script {
        check_haproxy
    }
}
EOF
--------------master3: