K8s Enterprise Multi-Node Deployment


Experiment steps:
Single-node K8s deployment
master2 node deployment
Load-balancer deployment (dual-machine hot standby)
K8s web UI page
Experiment environment
Nginx is used for load balancing:

lb01:192.168.217.136/24 CentOS 7-5

lb02:192.168.217.139/24 CentOS 7-6

Master nodes:

master1:192.168.217.130/24 CentOS 7-1

master2:192.168.217.131/24 CentOS 7-2

Node nodes:

node1:192.168.217.132/24 CentOS 7-3

node2:192.168.217.133/24 CentOS 7-4

VRRP floating (VIP) address: 192.168.217.100

Multi-master cluster architecture diagram:


How the multi-node setup works:
Unlike the single-node setup, the core of a multi-node cluster is that everything points at one central address. Back in the single-node deployment we already defined the VIP in the k8s-cert.sh script (192.168.217.100). The VIP fronts the apiserver, and each master opens its apiserver port to accept requests from the nodes. When a new node joins, it does not contact a master directly; it sends its apiserver request to the VIP, which schedules the request and dispatches it to one of the masters for execution. That master then issues a certificate to the node.
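Concretely, pointing a node at the VIP just means the `server:` field in its kubeconfig files references the VIP instead of a single master. A minimal sketch of that rewrite (a sample file stands in for the node's real /opt/kubernetes/cfg/bootstrap.kubeconfig, and the sample content is illustrative):

```shell
# Sketch: rewrite the apiserver address in a node's bootstrap.kubeconfig
# so requests go to the VIP instead of a single master.
# A sample file stands in for /opt/kubernetes/cfg/bootstrap.kubeconfig.
cat > bootstrap.kubeconfig.sample <<'EOF'
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority: /opt/kubernetes/ssl/ca.pem
    server: https://192.168.217.130:6443
  name: kubernetes
EOF

# Point the node at the VIP (192.168.217.100) instead of master1 (.130).
sed -i 's#server: https://192.168.217.130:6443#server: https://192.168.217.100:6443#' bootstrap.kubeconfig.sample

grep 'server:' bootstrap.kubeconfig.sample
```

On a real node the same edit would be applied to both bootstrap.kubeconfig and kubelet.kubeconfig, followed by a kubelet restart.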

Single-node deployment

See my previous blog post:
https://blog.csdn.net/Parhoia/article/details/104234305

Deploying the master2 node

1. Disable the firewall and SELinux
[root@master2 ~]# systemctl stop firewalld.service
[root@master2 ~]# setenforce 0
Operations on master1
//Copy the kubernetes directory to master2
[root@localhost kubeconfig]# cd /root/k8s/
[root@localhost k8s]# scp -r /opt/kubernetes/ root@192.168.217.131:/opt
The authenticity of host '192.168.217.131 (192.168.217.131)' can't be established.
ECDSA key fingerprint is SHA256:xU5rjBWsWKwR14QoOSF7Z/OcyD2tya4VLvEXTA8FAMM.
ECDSA key fingerprint is MD5:e5:b4:44:cb:7a:04:f5:1d:e4:50:2d:03:6b:6a:89:6b.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.217.131' (ECDSA) to the list of known hosts.
root@192.168.217.131's password: 
token.csv                                          100%   84    23.1KB/s   00:00    
kube-apiserver                                     100%  939   437.7KB/s   00:00    
kube-scheduler                                     100%   94    63.6KB/s   00:00    
kube-controller-manager                            100%  483   337.3KB/s   00:00    
kube-apiserver                                     100%  184MB  23.0MB/s   00:08    
kubectl                                            100%   55MB  27.3MB/s   00:02    
kube-controller-manager                            100%  155MB  22.2MB/s   00:07    
kube-scheduler                                     100%   55MB  27.3MB/s   00:02    
ca-key.pem                                         100% 1675   443.0KB/s   00:00    
ca.pem                                             100% 1359   897.8KB/s   00:00    
server-key.pem                                     100% 1679   720.3KB/s   00:00    
server.pem                                         100% 1643   836.5KB/s   00:00   

//Copy the three component unit files from master1: kube-apiserver.service, kube-controller-manager.service, kube-scheduler.service
[root@localhost k8s]# scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@192.168.217.131:/usr/lib/systemd/system/
root@192.168.217.131's password: 
kube-apiserver.service                             100%  282   152.7KB/s   00:00    
kube-controller-manager.service                    100%  317   153.0KB/s   00:00    
kube-scheduler.service                             100%  281   236.8KB/s   00:00  

  
//Important: master02 must have the etcd certificates, otherwise the apiserver service will not start
//Copy the existing etcd certificates from master01 for master02 to use
[root@localhost k8s]# scp -r /opt/etcd/ root@192.168.217.131:/opt/
root@192.168.217.131's password: 
etcd                                               100%  523   317.1KB/s   00:00    
etcd                                               100%   18MB   9.2MB/s   00:02    
etcdctl                                            100%   15MB  18.5MB/s   00:00    
ca-key.pem                                         100% 1675   126.1KB/s   00:00    
ca.pem                                             100% 1265    28.6KB/s   00:00    
server-key.pem                                     100% 1679   237.0KB/s   00:00    
server.pem                                         100% 1338    83.7KB/s   00:00    
[root@localhost k8s]# 
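Since a missing etcd certificate is the stated reason kube-apiserver fails to start on master02, a quick pre-flight check can save debugging time. A minimal sketch (a sample directory stands in for the real /opt/etcd/ssl on master02):

```shell
# Sketch: verify that the etcd certificate files referenced by the
# apiserver flags actually exist before starting the service.
# A sample directory stands in for /opt/etcd/ssl on master02.
ETCD_SSL=./etcd-ssl-sample
mkdir -p "$ETCD_SSL"
touch "$ETCD_SSL/ca.pem" "$ETCD_SSL/server.pem" "$ETCD_SSL/server-key.pem"

missing=0
for f in ca.pem server.pem server-key.pem; do
    if [ ! -f "$ETCD_SSL/$f" ]; then
        echo "MISSING: $ETCD_SSL/$f"
        missing=1
    fi
done
# Record the result; a real script might exit nonzero instead.
[ "$missing" -eq 0 ] && echo "etcd certs OK" > certcheck.out
```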
Operations on master2
//Modify the IPs in the kube-apiserver config file
[root@localhost ~]# cd /opt/kubernetes/cfg/
[root@localhost cfg]# vim kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.217.130:2379,https://192.168.217.132:2379,https://192.168.217.133:2379 \
--bind-address=192.168.217.131 \
--secure-port=6443 \
--advertise-address=192.168.217.131 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
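Since the only differences from master01's file are the two address lines, the edit can also be done non-interactively with sed instead of vim. A sketch (a shortened sample file stands in for the real /opt/kubernetes/cfg/kube-apiserver copied from master01):

```shell
# Sketch: rewrite --bind-address and --advertise-address in the config
# copied from master01 so they carry master02's own IP.
# A shortened sample stands in for /opt/kubernetes/cfg/kube-apiserver.
cat > kube-apiserver.sample <<'EOF'
KUBE_APISERVER_OPTS="--logtostderr=true \
--bind-address=192.168.217.130 \
--secure-port=6443 \
--advertise-address=192.168.217.130 \
--allow-privileged=true"
EOF

OLD_IP=192.168.217.130   # master01
NEW_IP=192.168.217.131   # master02
sed -i "s/--bind-address=${OLD_IP}/--bind-address=${NEW_IP}/; s/--advertise-address=${OLD_IP}/--advertise-address=${NEW_IP}/" kube-apiserver.sample

grep -E 'bind-address|advertise-address' kube-apiserver.sample
```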

//Start the three component services on master02
[root@localhost cfg]# systemctl start kube-apiserver.service
[root@localhost cfg]# systemctl start kube-controller-manager.service
[root@localhost cfg]# systemctl start kube-scheduler.service
[root@localhost cfg]# vim /etc/profile
#Append at the end of the file
export PATH=$PATH:/opt/kubernetes/bin/
[root@localhost cfg]# source /etc/profile
[root@localhost cfg]# kubectl get node
NAME              STATUS   ROLES    AGE   VERSION
192.168.217.132   Ready    <none>   38m   v1.12.3
192.168.217.133   Ready    <none>   28m   v1.12.3
[root@localhost cfg]# 
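The same readiness check can be scripted when you want master02 to verify the cluster automatically. A sketch that counts Ready nodes; captured sample output stands in for a live `kubectl get node` call, since the command needs a running cluster:

```shell
# Sketch: count Ready nodes from `kubectl get node` output.
# Captured sample output stands in for a live `kubectl get node` call.
nodes_output='NAME              STATUS   ROLES    AGE   VERSION
192.168.217.132   Ready    <none>   38m   v1.12.3
192.168.217.133   Ready    <none>   28m   v1.12.3'

# Skip the header row and count rows whose STATUS column is Ready.
ready=$(printf '%s\n' "$nodes_output" | awk 'NR > 1 && $2 == "Ready"' | wc -l)
echo "$ready" > ready.count
echo "Ready nodes: $ready"
```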

Nginx load-balancer deployment

Note: nginx is used here to implement the load balancing. Since version 1.9, nginx has supported layer-4 (TCP) forwarding via the added stream module.
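What that stream-based forwarding looks like in practice: a minimal `stream` block that proxies TCP 6443 to both apiservers. This is a generic sketch written to a sample file, not the exact config this tutorial's nginx.sh installs; on the LBs it would live in /etc/nginx/nginx.conf alongside the http block:

```shell
# Sketch: a minimal nginx `stream` block that load-balances TCP 6443
# across the two masters. Written to a sample file for illustration.
cat > nginx-stream.sample.conf <<'EOF'
stream {
    log_format main '$remote_addr $upstream_addr - [$time_local] $status';
    access_log /var/log/nginx/k8s-access.log main;

    upstream k8s-apiserver {
        server 192.168.217.130:6443;   # master1
        server 192.168.217.131:6443;   # master2
    }
    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}
EOF

# Both master backends should appear in the upstream block.
grep -c 'server 192.168.217.13[01]:6443' nginx-stream.sample.conf
```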
Operations on lb01 and lb02

The steps below must be performed on both lb01 and lb02.

//Upload the keepalived.conf and nginx.sh files to the root directory on lb1 and lb2
[root@location ~]# ls
anaconda-ks.cfg       keepalived.conf  公共  视频  文档  音乐
initial-setup-ks.cfg  nginx.sh         模板  图片  下载  桌面
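The uploaded keepalived.conf is what pairs with nginx to provide the dual-machine hot standby: lb01 holds the VRRP VIP 192.168.217.100 and lb02 takes over if it fails. A generic minimal sketch of what such a config looks like, not the exact file from this tutorial (written to a sample file; on the LB it would be /etc/keepalived/keepalived.conf):

```shell
# Sketch: a minimal keepalived config for the MASTER side (lb01).
# lb02 would differ only in `state BACKUP` and a lower priority (e.g. 90).
cat > keepalived.sample.conf <<'EOF'
vrrp_instance VI_1 {
    state MASTER
    interface ens33          # adjust to the LB's NIC name
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.217.100/24   # the VRRP floating address
    }
}
EOF

grep -A1 'virtual_ipaddress' keepalived.sample.conf
```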

[root@location ~]# systemctl stop firewalld.service
[root@location ~]# setenforce 0

[root@location ~]# vim /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
#After editing, press Esc to leave insert mode, then type :wq to save and quit
Reload the yum repository
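As a non-interactive alternative to vim, the repo file can also be written with a heredoc and the cache rebuilt afterwards. A sketch (a sample path stands in for /etc/yum.repos.d/nginx.repo, and the yum commands are left commented since they need network access):

```shell
# Sketch: write the nginx repo file without an editor, then reload the
# yum cache. A sample path stands in for /etc/yum.repos.d/nginx.repo.
# The quoted 'EOF' keeps $basearch literal for yum to expand.
cat > nginx.repo.sample <<'EOF'
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
EOF

# On the LBs (needs network access):
# yum clean all && yum makecache
# yum install -y nginx

grep 'baseurl' nginx.repo.sample
```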