K8s Multi-Node Deployment ----> Load Balancing with Nginx ----> UI Dashboard Display
Special note: before starting this lab you must first have a single-master k8s cluster deployed.
See my previous post: https://blog.csdn.net/JarryZho/article/details/104193913
Environment:
Related packages and documents:
Link: https://pan.baidu.com/s/1l4vVCkZ03la-VpIFXSz1dA
Extraction code: rg99
Nginx load balancers:
lb1: 192.168.18.147/24 mini-2
lb2: 192.168.18.133/24 mini-3
Master nodes:
master1: 192.168.18.128/24 CentOS 7-3
master2: 192.168.18.132/24 mini-1
Node nodes:
node1: 192.168.18.148/24 CentOS 7-4
node2: 192.168.18.145/24 CentOS 7-5
VRRP floating IP (VIP): 192.168.18.100
Multi-master cluster architecture diagram:
------ master2 Deployment ------
Step 1: first stop the firewall service on master2
[root@master2 ~]# systemctl stop firewalld.service
[root@master2 ~]# setenforce 0
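These two commands only last until the next reboot. To persist them you can disable firewalld and set SELINUX to disabled in /etc/selinux/config; a minimal sketch, demonstrated on a scratch copy of the config for safety:

```shell
# On a real node you would run: systemctl disable firewalld.service
# and then edit /etc/selinux/config. Demonstrated here on a scratch copy.
SEL=$(mktemp)
echo 'SELINUX=enforcing' > "$SEL"
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' "$SEL"
cat "$SEL"   # → SELINUX=disabled
```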
Step 2: on master1, copy the kubernetes directory to master2
[root@master1 k8s]# scp -r /opt/kubernetes/ root@192.168.18.132:/opt
The authenticity of host '192.168.18.132 (192.168.18.132)' can't be established.
ECDSA key fingerprint is SHA256:mTT+FEtzAu4X3D5srZlz93S3gye8MzbqVZFDzfJd4Gk.
ECDSA key fingerprint is MD5:fa:5a:88:23:49:60:9b:b8:7e:4b:14:4b:3f:cd:96:a0.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.18.132' (ECDSA) to the list of known hosts.
root@192.168.18.132's password:
token.csv 100% 84 90.2KB/s 00:00
kube-apiserver 100% 934 960.7KB/s 00:00
kube-scheduler 100% 94 109.4KB/s 00:00
kube-controller-manager 100% 483 648.6KB/s 00:00
kube-apiserver 100% 184MB 82.9MB/s 00:02
kubectl 100% 55MB 81.5MB/s 00:00
kube-controller-manager 100% 155MB 70.6MB/s 00:02
kube-scheduler 100% 55MB 77.4MB/s 00:00
ca-key.pem 100% 1675 1.2MB/s 00:00
ca.pem 100% 1359 1.5MB/s 00:00
server-key.pem 100% 1675 1.2MB/s 00:00
server.pem 100% 1643 1.7MB/s 00:00
Step 3: copy the three component unit files (kube-apiserver.service, kube-controller-manager.service, kube-scheduler.service) from master1 to master2
[root@master1 k8s]# scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@192.168.18.132:/usr/lib/systemd/system/
root@192.168.18.132's password:
kube-apiserver.service 100% 282 286.6KB/s 00:00
kube-controller-manager.service 100% 317 223.9KB/s 00:00
kube-scheduler.service 100% 281 362.4KB/s 00:00
Step 4: on master2, modify the IP addresses in the kube-apiserver config file
[root@master2 ~]# cd /opt/kubernetes/cfg/
[root@master2 cfg]# ls
kube-apiserver kube-controller-manager kube-scheduler token.csv
[root@master2 cfg]# vim kube-apiserver
5 --bind-address=192.168.18.132 \
7 --advertise-address=192.168.18.132 \
# The IP addresses on lines 5 and 7 must be changed to master2's address
# After editing, press Esc to leave insert mode, then type :wq to save and quit
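The vim edit above can also be done non-interactively with sed. The sketch below works on a scratch file for safety; on master2 you would point CFG at /opt/kubernetes/cfg/kube-apiserver instead (the flag names match the two lines shown above):

```shell
# Demo on a scratch file; on master2 set CFG=/opt/kubernetes/cfg/kube-apiserver.
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
--bind-address=192.168.18.128 \
--advertise-address=192.168.18.128 \
EOF
# Rewrite both flags to carry master2's address.
sed -i -e 's#--bind-address=[0-9.]*#--bind-address=192.168.18.132#' \
       -e 's#--advertise-address=[0-9.]*#--advertise-address=192.168.18.132#' "$CFG"
grep -c '192.168.18.132' "$CFG"   # → 2
```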
Step 5: copy master1's existing etcd certificates to master2
Special note: master2 must have the etcd certificates, otherwise the apiserver service will not start
[root@master1 k8s]# scp -r /opt/etcd/ root@192.168.18.132:/opt/
root@192.168.18.132's password:
etcd 100% 516 535.5KB/s 00:00
etcd 100% 18MB 90.6MB/s 00:00
etcdctl 100% 15MB 80.5MB/s 00:00
ca-key.pem 100% 1675 1.4MB/s 00:00
ca.pem 100% 1265 411.6KB/s 00:00
server-key.pem 100% 1679 2.0MB/s 00:00
server.pem 100% 1338 429.6KB/s 00:00
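Before starting the services in the next step, it is worth confirming that the four certificate files kube-apiserver needs in order to reach etcd actually arrived; a small check, assuming the same /opt/etcd/ssl layout as on master1:

```shell
# Report any of the four etcd certificate files that are missing.
ETCD_SSL=/opt/etcd/ssl
missing=0
for f in ca.pem ca-key.pem server.pem server-key.pem; do
  [ -f "$ETCD_SSL/$f" ] || { echo "MISSING: $f"; missing=$((missing+1)); }
done
[ "$missing" -eq 0 ] && echo "all etcd certs present" || echo "$missing cert(s) missing"
```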
Step 6: start the three component services on master2
[root@master2 cfg]# systemctl start kube-apiserver.service
[root@master2 cfg]# systemctl enable kube-apiserver.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
[root@master2 cfg]# systemctl status kube-apiserver.service
● kube-apiserver.service - Kubernetes API Server
Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
Active: active (running) since 五 2020-02-07 09:16:57 CST; 56min ago
[root@master2 cfg]# systemctl start kube-controller-manager.service
[root@master2 cfg]# systemctl enable kube-controller-manager.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
[root@master2 cfg]# systemctl status kube-controller-manager.service
● kube-controller-manager.service - Kubernetes Controller Manager
Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
Active: active (running) since 五 2020-02-07 09:17:02 CST; 57min ago
[root@master2 cfg]# systemctl start kube-scheduler.service
[root@master2 cfg]# systemctl enable kube-scheduler.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
[root@master2 cfg]# systemctl status kube-scheduler.service
● kube-scheduler.service - Kubernetes Scheduler
Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
Active: active (running) since 五 2020-02-07 09:17:07 CST; 58min ago
Step 7: add the environment variable and make it take effect
[root@master2 cfg]# vim /etc/profile
# Append at the end of the file
export PATH=$PATH:/opt/kubernetes/bin/
[root@master2 cfg]# source /etc/profile
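A quick one-liner to confirm the new PATH entry took effect (the path matches the /opt/kubernetes/bin directory copied over in step 2):

```shell
# Append the k8s bin directory and verify it is now on PATH.
export PATH=$PATH:/opt/kubernetes/bin/
echo "$PATH" | grep -q '/opt/kubernetes/bin' && echo "PATH updated"   # → PATH updated
```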
[root@master2 cfg]# kubectl get node
NAME STATUS ROLES AGE VERSION
192.168.18.145 Ready <none> 21h v1.12.3
192.168.18.148 Ready <none> 22h v1.12.3
# Here you can see that node1 and node2 have joined the cluster
master2 is now fully deployed
------ Nginx Load Balancer Deployment ------
Note: here Nginx is used to implement load balancing. Since version 1.9, Nginx has supported Layer-4 forwarding (load balancing) through the added stream module.
Multi-node principle:
Unlike a single-node setup, the key point of a multi-node cluster is that everything must point at one core address. When we built the single-node cluster we already wrote that VIP (192.168.18.100) into the k8s-cert.sh script. The VIP exposes the apiserver port, and both masters listen for apiserver requests from the node nodes. When a new node joins, it does not contact a master directly; it sends its apiserver request to the VIP, which schedules the request and dispatches it to one of the masters. That master then handles the request and issues a certificate to the node.
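The scheduling described above amounts to a Layer-4 proxy in front of both apiservers. A sketch of the nginx stream block involved, using this lab's two master IPs (the real config is generated by the nginx.sh script from the package above; the filename here is illustrative):

```shell
# Write an illustrative stream config that balances both masters' apiserver port.
cat > k8s-apiserver.conf <<'EOF'
stream {
    upstream k8s-apiserver {
        server 192.168.18.128:6443;   # master1
        server 192.168.18.132:6443;   # master2
    }
    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}
EOF
grep -c ':6443;' k8s-apiserver.conf   # → 2
```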
Step 1: upload the keepalived.conf and nginx.sh files to the root directory of both lb1 and lb2
`lb1`
[root@lb1 ~]# ls
anaconda-ks.cfg keepalived.conf 公共 视频 文档 音乐
initial-setup-ks.cfg nginx.sh 模板 图片 下载 桌面
`lb2`
[root@lb2 ~]# ls
anaconda-ks.cfg keepalived.conf 公共 视频 文档 音乐
initial-setup-ks.cfg nginx.sh 模板 图片 下载 桌面
Step 2: operations on lb1 (192.168.18.147)