Preface: this deployment adds a second master on top of the single-node setup, along with load balancing and hot-standby failover for high availability. For the underlying details, see the three previous single-node posts: Single-Node Deployment 1 - Etcd, Single-Node Deployment 2 - Flannel, and Single-Node Deployment 3 - Master and Node.
Table of Contents
I. Environment Deployment
1. Node Allocation
Server | IP address | Roles |
---|---|---|
master01 | 192.168.170.128/24 | etcd, apiserver, controller-manager, scheduler |
master02 | 192.168.170.129/24 | etcd certificates, apiserver, controller-manager, scheduler |
node1 | 192.168.170.145/24 | etcd + flannel |
node2 | 192.168.170.136/24 | etcd + flannel |
lb1 | 192.168.170.134/24 | nginx + keepalived |
lb2 | 192.168.170.131/24 | nginx + keepalived |
2. Topology
- Multi-node topology diagram
- In production, a multi-master cluster is the norm, so a VIP (virtual IP) is required. A newly added node only needs to reach the VIP; the request is forwarded to one of the masters, whose apiserver issues a certificate to the new node so it can join the cluster
- All node traffic points at the VIP; the load balancer behind the VIP dispatches each request to one of the masters, which then manages and schedules the nodes
II. Deployment
1. Deploying master02
- With the single-node deployment already in place, adding another master server is straightforward
1) Environment preparation
- Disable the firewall and SELinux, and flush the iptables rules
[root@promote ~]# hostnamectl set-hostname master02
[root@promote ~]# su
[root@master02 ~]# systemctl stop NetworkManager
[root@master02 ~]# systemctl disable NetworkManager
[root@master02 ~]# iptables -F
[root@master02 ~]# systemctl stop firewalld.service
[root@master02 ~]# systemctl disable firewalld.service
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@master02 ~]# setenforce 0
[root@master02 ~]# vim /etc/selinux/config
SELINUX=disabled
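The vim edit above can also be done non-interactively with sed, which is handy when scripting the preparation of several hosts. A minimal sketch, demonstrated against a scratch copy so it is safe to run anywhere; on master02 the real target is /etc/selinux/config:

```shell
# Flip SELINUX=enforcing to disabled without opening an editor.
# Shown on a temp copy; substitute /etc/selinux/config on the real host.
cfg=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$cfg"
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' "$cfg"
grep '^SELINUX=' "$cfg"
rm -f "$cfg"
```

As with the manual edit, the change to /etc/selinux/config only takes effect after a reboot; `setenforce 0` covers the current session.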
2) Deployment
- Copy the k8s working directory and the startup scripts straight over from master01
[root@master01 ~]# scp -r /opt/kubernetes/ root@192.168.170.129:/opt
root@192.168.170.129's password:
token.csv 100% 84 16.6KB/s 00:00
kube-apiserver 100% 939 703.7KB/s 00:00
kube-scheduler 100% 94 106.8KB/s 00:00
kube-controller-manager 100% 483 579.3KB/s 00:00
kubectl 100% 55MB 94.6MB/s 00:00
kube-controller-manager 100% 155MB 88.3MB/s 00:01
kube-scheduler 100% 55MB 89.5MB/s 00:00
kube-apiserver 100% 184MB 94.8MB/s 00:01
ca-key.pem 100% 1675 1.7MB/s 00:00
ca.pem 100% 1359 2.0MB/s 00:00
server-key.pem 100% 1679 1.4MB/s 00:00
server.pem 100% 1643 1.2MB/s 00:00
[root@master02 ~]# cd /opt/kubernetes/
[root@master02 kubernetes]# ls
bin cfg ssl
[root@master02 kubernetes]# ls bin/
kube-apiserver kube-controller-manager kubectl kube-scheduler
[root@master02 kubernetes]# ls cfg/
kube-apiserver kube-controller-manager kube-scheduler token.csv
[root@master02 kubernetes]# ls ssl/
ca-key.pem ca.pem server-key.pem server.pem
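One caveat with the copied files: cfg/kube-apiserver was written for master01, so any address flags in it (such as --bind-address and --advertise-address) still point at 192.168.170.128 and must be changed to master02's 192.168.170.129 before the services are started. A hedged sketch using sed, demonstrated on a stand-in file; on master02 the real target is /opt/kubernetes/cfg/kube-apiserver:

```shell
# Rewrite master01's IP to master02's in the copied apiserver config.
# Demonstrated on a stand-in file; substitute /opt/kubernetes/cfg/kube-apiserver.
old_ip=192.168.170.128
new_ip=192.168.170.129
cfg=$(mktemp)
echo "--bind-address=${old_ip} --advertise-address=${old_ip}" > "$cfg"
sed -i "s/${old_ip}/${new_ip}/g" "$cfg"
cat "$cfg"
rm -f "$cfg"
```

After editing the real file, `grep 129 /opt/kubernetes/cfg/kube-apiserver` is a quick way to confirm the substitution before starting the service.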
// Copy the startup scripts for the three master components
[root@master01 ~]# scp -r /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service r