Kubernetes: Binary Cluster (Multi-Master Node Deployment)

This article describes how to build a multi-master Kubernetes cluster for high availability, covering steps such as stopping the firewall, copying the components, editing the configuration, and setting up a VIP with load balancing. With two master nodes and nginx load balancing, the cluster keeps serving when a master goes down, and the resource pressure on each master is reduced.


Building on the single-master deployment in the previous article, we now add a second master, master02.

Advantages of multiple master nodes:

Compared with a single-master binary cluster, a dual-master cluster is highly available: when one master goes down, requests arriving at the VIP (virtual IP) are simply forwarded to the other master, so the control plane stays reachable.

The key to the dual-master setup is a single shared address. Both masters listen for apiserver requests from the nodes, but a newly joining node does not contact a master directly; it sends its apiserver request to the VIP, and the load balancer behind the VIP dispatches the request to one of the masters, which then issues the new node its certificate.

The dual-master cluster also adds nginx load balancing, which spreads the node-to-master request load and reduces the resource usage on each master.

1: Lab environment

Master nodes:

master01: 192.168.48.152

master02: 192.168.48.153

Node (worker) nodes:

node01:192.168.48.148

node02:192.168.48.138

Load balancers:

nginx01: 192.168.48.139 (master)

nginx02: 192.168.48.137 (backup)

VIP address:

192.168.48.100

2: Lab procedure

Configure master02

1. Stop the firewall and NetworkManager
[root@localhost ~]# systemctl stop firewalld.service 
[root@localhost ~]# setenforce 0
[root@localhost ~]# systemctl stop NetworkManager
[root@localhost ~]# systemctl disable NetworkManager
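These commands only take effect until the next reboot. If the settings should survive a restart, a small sketch (standard firewalld/SELinux steps, not part of the original article) looks like this:

[root@localhost ~]# systemctl disable firewalld.service     'keep the firewall off after reboot'
[root@localhost ~]# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config     'keep SELinux off after reboot'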
2. On master01, copy the kubernetes directory and the service unit files to master02
[root@localhost kubeconfig]# scp -r /opt/kubernetes/ root@192.168.48.153:/opt
[root@localhost kubeconfig]# scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@192.168.48.153:/usr/lib/systemd/system
3. On master02, change the IP addresses in the kube-apiserver configuration file
[root@localhost cfg]# cd /opt/kubernetes/cfg
[root@localhost cfg]# vim kube-apiserver

--etcd-servers=https://192.168.48.152:2379,https://192.168.48.148:2379,https://192.168.48.138:2379 \
--bind-address=192.168.48.153 \        'change to the local IP address'
--secure-port=6443 \
--advertise-address=192.168.48.153 \   'change to the local IP address'
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
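If you prefer not to edit the file by hand, a sed sketch like the following (assuming the flag layout shown above) changes only the two address flags and leaves --etcd-servers, which still points at master01's etcd member, untouched:

[root@localhost cfg]# sed -ri 's#(--bind-address=)[0-9.]+#\1192.168.48.153#; s#(--advertise-address=)[0-9.]+#\1192.168.48.153#' /opt/kubernetes/cfg/kube-apiserver
[root@localhost cfg]# grep -E 'bind-address|advertise-address' /opt/kubernetes/cfg/kube-apiserver     'confirm both flags now use 192.168.48.153'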
4. On master01, copy the etcd certificates to master02
[root@localhost ~]# scp -r /opt/etcd/ root@192.168.48.153:/opt/
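Before starting the services, it may be worth a quick check on master02 that everything arrived (a sketch, using the paths from this article):

[root@localhost ~]# ls /opt/kubernetes/     'copied kubernetes directory (bin, cfg, certificates)'
[root@localhost ~]# ls /opt/etcd/           'copied etcd directory with its certificates'
[root@localhost ~]# ls /usr/lib/systemd/system/kube-*.service     'the three unit files'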
5. Start the kube-apiserver, kube-controller-manager, and kube-scheduler services
[root@localhost cfg]# systemctl start kube-apiserver.service
[root@localhost cfg]# systemctl enable kube-apiserver.service 

[root@localhost cfg]# systemctl start kube-controller-manager.service
[root@localhost cfg]# systemctl enable kube-controller-manager.service

[root@localhost cfg]# systemctl start kube-scheduler.service
[root@localhost cfg]# systemctl enable kube-scheduler.service
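A quick way to confirm all three control-plane services are up (a sketch using standard systemctl queries):

[root@localhost cfg]# for svc in kube-apiserver kube-controller-manager kube-scheduler; do echo "$svc: $(systemctl is-active $svc)"; done     'all three should print active'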
6. Add the environment variable
[root@localhost cfg]# vim /etc/profile
'append at the end of the file'
export PATH=$PATH:/opt/kubernetes/bin/

[root@localhost cfg]# source /etc/profile    'reload so the variable takes effect'
7. Check the nodes from master02
[root@localhost cfg]# kubectl get node
NAME              STATUS   ROLES    AGE   VERSION
192.168.48.148   Ready    <none>   43h   v1.12.3
192.168.48.138   Ready    <none>   41h   v1.12.3
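Besides the nodes, the health of the control-plane components themselves can also be checked from master02; on this Kubernetes version (v1.12) the componentstatuses API is still available:

[root@localhost cfg]# kubectl get cs     'scheduler, controller-manager and etcd members should all report Healthy'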
Set up nginx load balancing
1. Stop the firewall
[root@localhost ~]# systemctl stop firewalld.service 
[root@localhost ~]# setenforce 0
2. Configure the official nginx yum repository
[root@localhost ~]# vim /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0

[root@localhost ~]# yum list    'refresh the yum repository metadata'
[root@localhost ~]# yum install nginx -y   'install nginx'
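The next step adds a stream {} block, which must sit at the top level of nginx.conf (next to the existing http {} block, not inside it) and requires nginx to be built with the stream module. The package from nginx.org normally includes it, but it is easy to confirm:

[root@localhost ~]# nginx -V 2>&1 | grep -o with-stream     'should print with-stream at least once'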
3. Add layer-4 (TCP) forwarding to the nginx configuration
[root@localhost ~]# vim /etc/nginx/nginx.conf 

stream {
    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log  /var/log/nginx/k8s-access.log  main;

    upstream k8s-apiserver {
        server 192.168.48.152:6443;
        server 192.168.48.153:6443;
    }

    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}


4. Check the configuration for syntax errors
[root@localhost ~]# nginx -t   
[root@localhost ~]# systemctl start nginx    'start the service'
[root@localhost ~]# netstat -natp | grep nginx
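With nginx listening on 6443, a quick end-to-end sketch from the LB node itself; depending on the apiserver's anonymous-auth setting this returns either the version JSON or a 401/403, and either response shows the TCP path through nginx to an apiserver is working:

[root@localhost ~]# curl -k https://127.0.0.1:6443/version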


5. Install keepalived on both nginx nodes
[root@localhost ~]# yum install keepalived -y
6. Edit the keepalived configuration on nginx01
[root@localhost ~]# vim /etc/keepalived/keepalived.conf

'delete the existing contents and add the following'
! Configuration File for keepalived
global_defs {
   # notification recipient addresses
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   # notification sender address
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}
vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    # VRRP virtual router ID; identifies this VRRP instance
    virtual_router_id 51
    # priority; set to 90 on the backup server
    priority 100
    # VRRP advertisement (heartbeat) interval, 1 second by default
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.48.100/24
    }
    # health-check script to track
    track_script {
        check_nginx
    }
}
7. Edit the keepalived configuration on nginx02
[root@localhost ~]# cp keepalived.conf /etc/keepalived/keepalived.conf 
cp: overwrite ‘/etc/keepalived/keepalived.conf’? yes
[root@localhost ~]# vim /etc/keepalived/keepalived.conf 

! Configuration File for keepalived

global_defs {
   # notification recipient addresses
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   # notification sender address
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}

vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51
    # priority must be lower than on the MASTER
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.48.100/24
    }
    track_script {
        check_nginx
    }
}
8. Create the check_nginx.sh health-check script on both nginx nodes
[root@localhost ~]# vim /etc/nginx/check_nginx.sh

#!/bin/bash
# Count running nginx processes (excluding the grep itself and this script's own shell)
count=$(ps -ef | grep nginx | egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi

[root@localhost ~]# chmod +x /etc/nginx/check_nginx.sh
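As a hedged alternative sketch (not part of the original setup), the same effect can be had by asking systemd directly, which avoids counting unrelated processes whose command line happens to contain nginx; even after a pkill, as in step 10 below, the unit should no longer report active:

#!/bin/bash
# Alternative sketch: stop keepalived as soon as systemd no longer reports nginx as active
if ! systemctl is-active --quiet nginx; then
    systemctl stop keepalived
fi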
9. Start the keepalived service
[root@localhost ~]# systemctl start keepalived.service
[root@localhost ~]# ip a	    'check the IP addresses on the nginx servers'
The VIP should be visible on nginx01.
10. Verify that the VIP fails over
[root@nginx01 ~]# pkill nginx	 'stop nginx'
[root@nginx01 ~]# systemctl status keepalived	'keepalived has been stopped by the check script'
[root@nginx02 ~]# ip a  	'the VIP has moved to nginx02'
11. Move the VIP back
[root@nginx01 ~]# systemctl start nginx        'start nginx'
[root@nginx01 ~]# systemctl start keepalived	'start keepalived'
[root@nginx01 ~]# ip a	'the VIP is back on nginx01'
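At this point a node can already reach the apiserver through the VIP; a quick sketch (as with the earlier test on the LB node, a 401/403 response is also fine, since it proves the path works):

[root@node01 ~]# curl -k https://192.168.48.100:6443/version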
Point the nodes at the HA load-balancer cluster
1. On both nodes, change the server address in the kubeconfig files to the shared VIP address
[root@node01 ~]# vim /opt/kubernetes/cfg/bootstrap.kubeconfig 
server: https://192.168.48.100:6443

[root@node01 ~]# vim /opt/kubernetes/cfg/kubelet.kubeconfig 
server: https://192.168.48.100:6443

[root@node01 ~]# vim /opt/kubernetes/cfg/kube-proxy.kubeconfig 
server: https://192.168.48.100:6443
'make the same changes on node02'
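Instead of editing the three files by hand, a sed sketch like this works on each node (it assumes the files currently point at master01's address, 192.168.48.152, as they did after the single-master deployment):

[root@node01 ~]# sed -i 's#https://192.168.48.152:6443#https://192.168.48.100:6443#' /opt/kubernetes/cfg/{bootstrap,kubelet,kube-proxy}.kubeconfig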

[root@node01 ~]# systemctl restart kubelet
[root@node01 ~]# systemctl restart kube-proxy    'restart the node services'
[root@node01 ~]# cd /opt/kubernetes/cfg/
[root@node01 cfg]# grep 100 *    'self-check: all three files now point at the VIP'
bootstrap.kubeconfig:    server: https://192.168.48.100:6443
kubelet.kubeconfig:    server: https://192.168.48.100:6443
kube-proxy.kubeconfig:    server: https://192.168.48.100:6443
2. Check the k8s access log on nginx01
[root@localhost ~]# tail /var/log/nginx/k8s-access.log 	'these entries were produced when the node services were restarted'
3. Create a test pod from a master node
[root@master ~]# kubectl run nginx --image=nginx	'create an nginx test pod'

[root@master ~]# kubectl get pods	'check the status; the pod is still being created'

[root@master ~]# kubectl get pods	'check again after a moment; the pod is now running'
NAME                    READY   STATUS    RESTARTS   AGE
nginx-dbddb74b8-ns6m7   1/1     Running   0          20s
4. View the pod logs
[root@master ~]# kubectl logs nginx-dbddb74b8-ns6m7 	'viewing the pod logs fails at first; it is a permissions problem'
[root@master ~]# kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous	  'grant cluster-admin privileges to the anonymous user'
[root@master ~]# kubectl logs nginx-dbddb74b8-ns6m7	 'the command now succeeds; there are no entries yet because the pod has not been accessed'
5. Access the pod from its node to generate log entries, then view them from either master
[root@master ~]# kubectl get pods -o wide	'check the pod network information'
NAME                    READY   STATUS    RESTARTS   AGE     IP           NODE              NOMINATED NODE
nginx-dbddb74b8-ns6m7   1/1     Running   0          4m12s   172.17.6.2   192.168.48.148   <none>
[root@node01 ~]# curl 172.17.6.2	 'access the pod from the node it runs on'
[root@master ~]# kubectl logs nginx-dbddb74b8-ns6m7	  'view the access logs from the master node; master02 can view them too'
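On this Kubernetes version, kubectl run created a Deployment (hence the replica-set style pod name above), so the test workload can be cleaned up afterwards with something like:

[root@master ~]# kubectl delete deployment nginx     'remove the test deployment and its pod'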

The multi-master deployment is now complete. During configuration, take care not to get the IP addresses in the configuration files wrong.
