Preface:
To deploy a multi-master cluster you first need a working single-master cluster, i.e. the earlier posts in this K8S deployment series, parts (一)(二)(三)(四).
Why a multi-master cluster:
A single master node is a single point of failure: once it fails, the whole cluster stops working properly. Running multiple masters makes the cluster highly available and removes that single point of failure.
Role assignment
Hands-on on master02:
On master01
Copy directories to master02
Copy the kubernetes directory to master02
[root@master k8s]# scp -r /opt/kubernetes/ root@192.168.142.120:/opt
Copy the etcd directory to master02
master02 must have the etcd certificates, otherwise its apiserver service will not start.
[root@master k8s]# scp -r /opt/etcd/ root@192.168.142.120:/opt
Copy the service unit files (kube-apiserver.service, kube-controller-manager.service, kube-scheduler.service)
[root@master k8s]# scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@192.168.142.120:/usr/lib/systemd/system/
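The three copy steps above can also be wrapped in one helper run on master01. A minimal sketch; the function name is mine, and 192.168.142.120 is master02's address from this walkthrough, so adjust for your hosts:

```shell
# Run on master01: push configs, certs, and unit files to master02.
MASTER02=192.168.142.120   # master02's address in this walkthrough

sync_to_master02() {
  # /opt/kubernetes (binaries + cfg) and /opt/etcd (certs the apiserver needs)
  scp -r /opt/kubernetes /opt/etcd "root@${MASTER02}:/opt" &&
  # systemd unit files for the three control-plane services
  scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service \
    "root@${MASTER02}:/usr/lib/systemd/system/"
}
# sync_to_master02
```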
On master02
Modify the service configuration
In fact only the kube-apiserver config needs changes; kube-controller-manager and kube-scheduler connect to the apiserver on the local address, so they can be started as copied.
[root@master02 k8s]# cd /opt/kubernetes/cfg/
[root@master02 cfg]# vim kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.142.220:2379,https://192.168.142.136:2379,https://192.168.142.132:2379 \
# note: change this address to master02's IP
--bind-address=192.168.142.120 \
--secure-port=6443 \
# note: change this address to master02's IP
--advertise-address=192.168.142.120 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
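The two address edits above (`--bind-address` and `--advertise-address`) can be scripted instead of done by hand in vim. A minimal sed sketch; the function name is mine, while the file path and IP come from this walkthrough:

```shell
# patch_apiserver_ip FILE NEW_IP: point bind/advertise addresses at this host.
patch_apiserver_ip() {
  file=$1 new_ip=$2
  sed -i \
    -e "s|--bind-address=[0-9.]*|--bind-address=${new_ip}|" \
    -e "s|--advertise-address=[0-9.]*|--advertise-address=${new_ip}|" \
    "$file"
}
# On master02:
# patch_apiserver_ip /opt/kubernetes/cfg/kube-apiserver 192.168.142.120
```

The `[0-9.]*` pattern replaces only the IP itself, so the trailing line-continuation backslashes in the config are left untouched.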
Start the services
[root@master02 cfg]# systemctl start kube-apiserver.service
[root@master02 cfg]# systemctl start kube-controller-manager.service
[root@master02 cfg]# systemctl start kube-scheduler.service
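The three starts can be looped; a small sketch (the `enable` call is my addition, not part of the steps above, so the units also come back after a reboot):

```shell
# Start and enable the three control-plane services on master02.
start_control_plane() {
  for svc in kube-apiserver kube-controller-manager kube-scheduler; do
    systemctl start "${svc}.service" || return 1
    systemctl enable "${svc}.service"   # my addition: persist across reboots
  done
}
# start_control_plane
```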
Add the environment variable and check that the node entries show up
// add the environment variable
[root@master02 cfg]# vim /etc/profile
// append at the end of the file
export PATH=$PATH:/opt/kubernetes/bin/
[root@master02 cfg]# source /etc/profile
// verify
[root@master02 cfg]# kubectl get node
NAME STATUS ROLES AGE VERSION
192.168.142.132 Ready <none> 2d12h v1.12.3
192.168.142.136 Ready <none> 38h v1.12.3
master02 returns the same node list as master01, which confirms its apiserver is reading the shared etcd cluster and the second master is working.