k8s Multi-Node Deployment

1. Lab Topology

  • Host allocation

    Hostname    IP address        Resources      Services deployed
    nginx01     192.168.10.90     2 GB, 4 CPUs   nginx, keepalived
    nginx02     192.168.10.100    2 GB, 4 CPUs   nginx, keepalived
    VIP         192.168.10.200    -              -
    master01    192.168.10.60     1 GB, 2 CPUs   apiserver, scheduler, controller-manager, etcd
    master02    192.168.10.50     1 GB, 2 CPUs   apiserver, scheduler, controller-manager
    node01      192.168.10.70     2 GB, 4 CPUs   kubelet, kube-proxy, docker, flannel, etcd
    node02      192.168.10.80     2 GB, 4 CPUs   kubelet, kube-proxy, docker, flannel, etcd

2. Operations on the master02 node

  • Prerequisite: the binary single-master cluster (master01, node01, node02) has already been deployed

  • Initial setup

    Disable the firewall, disable SELinux (core protection), and disable NetworkManager (be sure to disable NetworkManager in production environments).

[root@localhost ~]# hostnamectl set-hostname master02
[root@localhost ~]# su
[root@master02 ~]# systemctl stop firewalld && systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@master02 ~]# setenforce 0 && sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
[root@master02 ~]# systemctl stop NetworkManager && systemctl disable NetworkManager
Removed symlink /etc/systemd/system/multi-user.target.wants/NetworkManager.service.
Removed symlink /etc/systemd/system/dbus-org.freedesktop.NetworkManager.service.
Removed symlink /etc/systemd/system/dbus-org.freedesktop.nm-dispatcher.service.
Removed symlink /etc/systemd/system/network-online.target.wants/NetworkManager-wait-online.service.
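
A quick optional check that SELinux is no longer enforcing after the change above:
[root@master02 ~]# getenforce
Permissive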

  • On the master node, copy master01's kubernetes directory (configuration files, binaries, certificates) and the service unit files to master02
[root@master bin]# scp -r /opt/kubernetes/ root@192.168.10.50:/opt/
The authenticity of host '192.168.10.50 (192.168.10.50)' can't be established.
ECDSA key fingerprint is SHA256:TMzdtoj+IhgDyqNAKSTa1eGs7zd4wkaVTMgMzz3nFk4.
ECDSA key fingerprint is MD5:ba:57:09:36:e9:07:fa:ee:5f:81:72:59:b2:c9:39:3e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.10.50' (ECDSA) to the list of known hosts.
root@192.168.10.50's password: 
kube-apiserver                               100%  939   510.7KB/s   00:00    
token.csv                                    100%   84    99.9KB/s   00:00    
kube-scheduler                               100%   94   150.9KB/s   00:00    
kube-controller-manager                      100%  483   598.3KB/s   00:00    
kube-apiserver                               100%  184MB 120.8MB/s   00:01    
kubectl                                      100%   55MB 117.4MB/s   00:00    
kube-controller-manager                      100%  155MB 117.9MB/s   00:01    
kube-scheduler                               100%   55MB 117.6MB/s   00:00    
ca-key.pem                                   100% 1679   913.8KB/s   00:00    
ca.pem                                       100% 1359     1.3MB/s   00:00    
server-key.pem                               100% 1675     2.1MB/s   00:00    
server.pem                                   100% 1643     1.8MB/s   00:00    
[root@master bin]# scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@192.168.10.50:/usr/lib/systemd/system/
root@192.168.10.50's password: 
kube-apiserver.service                       100%  282   145.8KB/s   00:00    
kube-controller-manager.service              100%  317   316.7KB/s   00:00    
kube-scheduler.service                       100%  281    45.9KB/s   00:00    
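
Before editing anything, a quick optional sanity check on master02 that the binaries, configuration files, certificates, and unit files all arrived:
[root@master02 ~]# ls /opt/kubernetes/bin /opt/kubernetes/cfg /opt/kubernetes/ssl
[root@master02 ~]# ls /usr/lib/systemd/system/ | grep kube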

  • On master02, modify the IP addresses in the apiserver configuration file
[root@master02 ~]# cd /opt/kubernetes/cfg/
[root@master02 cfg]# ls
kube-apiserver  kube-controller-manager  kube-scheduler  token.csv
[root@master02 cfg]# vim kube-apiserver

KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.10.60:2379,https://192.168.10.70:2379,https://192.168.10.80:2379 \
--bind-address=192.168.10.50 \
--secure-port=6443 \
--advertise-address=192.168.10.50 \
--allow-privileged=true \
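Only the --bind-address and --advertise-address values need to point at master02; the --etcd-servers list stays unchanged. A one-line sketch of the edit, assuming the copied file still carries master01's address:
[root@master02 cfg]# sed -i 's/--bind-address=192.168.10.60/--bind-address=192.168.10.50/; s/--advertise-address=192.168.10.60/--advertise-address=192.168.10.50/' kube-apiserver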
  • Copy the etcd certificates from the master node to master02 (master02 must have the etcd certificates in order to communicate with etcd)
[root@master bin]# scp -r /opt/etcd/ root@192.168.10.50:/opt
root@192.168.10.50's password: 
etcd                                         100%  523   177.8KB/s   00:00    
etcd                                         100%   18MB 124.7MB/s   00:00    
etcdctl                                      100%   15MB 114.5MB/s   00:00    
ca-key.pem                                   100% 1675     1.0MB/s   00:00    
ca.pem                                       100% 1265   448.5KB/s   00:00    
server-key.pem                               100% 1675     1.6MB/s   00:00    
server.pem                                   100% 1338   615.8KB/s   00:00  
  • On master02, check the etcd certificates, then start the three services
[root@master02 cfg]# tree /opt/etcd
/opt/etcd
├── bin
│   ├── etcd
│   └── etcdctl
├── cfg
│   └── etcd
└── ssl
    ├── ca-key.pem
    ├── ca.pem
    ├── server-key.pem
    └── server.pem

3 directories, 7 files
[root@master02 ~]# systemctl start kube-apiserver.service
[root@master02 ~]# systemctl status kube-apiserver.service
[root@master02 ~]# systemctl enable kube-apiserver.service
[root@master02 ~]# systemctl start kube-controller-manager.service
[root@master02 ~]# systemctl status kube-controller-manager.service
[root@master02 ~]# systemctl enable kube-controller-manager.service
[root@master02 ~]# systemctl enable kube-scheduler.service
[root@master02 ~]# systemctl start kube-scheduler.service
[root@master02 ~]# systemctl status kube-scheduler.service
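Optionally, the copied certificates can be exercised directly to confirm that master02 can reach the etcd cluster. A sketch using the etcd v2 etcdctl syntax; if your etcdctl defaults to the v3 API, set ETCDCTL_API=2 or use the v3 "endpoint health" equivalent:
[root@master02 ~]# /opt/etcd/bin/etcdctl \
--ca-file=/opt/etcd/ssl/ca.pem \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--endpoints="https://192.168.10.60:2379,https://192.168.10.70:2379,https://192.168.10.80:2379" \
cluster-health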
  • Add the environment variable and check the node status
[root@master02 ~]# echo 'export PATH=$PATH:/opt/kubernetes/bin' >> /etc/profile
[root@master02 ~]# source /etc/profile
[root@master02 ~]# kubectl get node
NAME              STATUS   ROLES    AGE   VERSION
192.168.10.70   Ready    <none>   10h   v1.12.3
192.168.10.80   Ready    <none>   9h    v1.12.3
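
As a further optional check, the control-plane component health can be queried from master02 as well:
[root@master02 ~]# kubectl get cs	'//controller-manager, scheduler and the etcd members should all report Healthy'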

3. Nginx load-balancing cluster deployment

  • Initial setup on both nginx hosts (only nginx01's operations are shown): disable the firewall and SELinux, and configure the nginx yum repository
[root@localhost ~]# hostnamectl set-hostname nginx01	'//change the hostname'
[root@localhost ~]# su
[root@nginx01 ~]#  
[root@nginx01 ~]# systemctl stop firewalld && systemctl disable firewalld	'//disable the firewall and SELinux'
[root@nginx01 ~]# setenforce 0 && sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config	
[root@nginx01 ~]# vi /etc/yum.repos.d/nginx.repo 	'//configure the nginx yum repository'
[nginx]
name=nginx.repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
enabled=1
gpgcheck=0
[root@nginx01 ~]# yum clean all
[root@nginx01 ~]# yum makecache

  • Install nginx on both nginx hosts and enable layer-4 (stream) forwarding (only nginx01's operations are shown); the stream {} block sits at the top level of nginx.conf, alongside the http {} block
[root@nginx01 ~]# yum -y install nginx	'//install nginx'
[root@nginx01 ~]# vi /etc/nginx/nginx.conf 
... content omitted
stream {

    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log  /var/log/nginx/k8s-access.log  main;        ## access log for the apiserver proxy

    upstream k8s-apiserver {
        # master01's IP address and port; 6443 is the apiserver port
        server 192.168.10.60:6443;
        # master02's IP address and port
        server 192.168.10.50:6443;
    }
    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}
... content omitted
[root@nginx01 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

  • Start the nginx service
[root@nginx01 ~]# systemctl start nginx
[root@nginx01 ~]# systemctl enable nginx
Created symlink from /etc/systemd/system/multi-user.target.wants/nginx.service to /usr/lib/systemd/system/nginx.service.
[root@nginx01 ~]# netstat -ntap | grep nginx
tcp        0      0 0.0.0.0:6443            0.0.0.0:*               LISTEN      66053/nginx: master 
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      66053/nginx: master 
  • Deploy the keepalived service on both nginx hosts (only nginx01's operations are shown)

    yum install -y keepalived

    Modify the configuration file

[root@nginx01 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}

vrrp_script check_nginx {
    script "/usr/local/nginx/sbin/check_nginx.sh"
}
vrrp_instance VI_1 {
    state MASTER		"change to BACKUP on nginx02"
    interface ens33
    virtual_router_id 51  "use 51 on both master and backup; the VRRP router ID must be the same within one instance and unique per instance"
    priority 100			"change the priority to 90 on nginx02"
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.10.200/24
    }
    track_script {
        check_nginx
    }
}

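On nginx02 the file is nearly identical; per the inline notes above, only a few lines in the vrrp_instance block differ. A sketch of the BACKUP side:
vrrp_instance VI_1 {
    state BACKUP              # MASTER on nginx01, BACKUP on nginx02
    interface ens33
    virtual_router_id 51      # must be the same on both nodes
    priority 90               # lower than the MASTER's priority of 100
    ...                       # the remaining settings match nginx01
}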
  • Create the monitoring script, start the keepalived service, and check the VIP address
[root@nginx01 ~]# mkdir -p /usr/local/nginx/sbin/	'//create the directory for the monitoring script'
[root@nginx01 ~]# vim /usr/local/nginx/sbin/check_nginx.sh	'//write the monitoring script'
#!/bin/bash
# Count running nginx processes, excluding the grep itself and this script's own shell
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

# If nginx is gone, stop keepalived so the VIP fails over to the backup node
if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi
[root@nginx01 ~]# chmod +x /usr/local/nginx/sbin/check_nginx.sh	'//make it executable'
[root@nginx01 ~]# systemctl start keepalived	'//start the service'
[root@nginx01 ~]# systemctl status keepalived
[root@nginx01 ~]# ip a	'//check the IP addresses on both nginx servers'
    The VIP is on nginx01
[root@nginx02 ~]# ip a

  • Verify that the VIP fails over
[root@nginx01 ~]# pkill nginx	'//stop nginx'
[root@nginx01 ~]# systemctl status keepalived	'//keepalived has been stopped as well by the monitoring script'
[root@nginx02 ~]# ip a	'//the VIP has now moved to nginx02'

  • Move the VIP back
[root@nginx01 ~]# systemctl start nginx
[root@nginx01 ~]# systemctl start keepalived	'//start nginx first, then start keepalived'
[root@nginx01 ~]# ip a	'//check again: the VIP is back on the nginx01 node'


4. Point the nodes at the Nginx high-availability cluster

1. Modify the configuration files on both node nodes so that the server address points to the shared VIP (three files per node). If the VIP is not used here, the nodes all talk directly to master01's apiserver: master02 sits idle, and when master01 goes down master02 cannot serve the nodes, so there is no real high availability between the two masters. The load balancer is therefore essential.

//Change: server: https://192.168.10.200:6443 (point all of them at the VIP)

[root@node1 cfg]# vim /opt/kubernetes/cfg/bootstrap.kubeconfig

[root@node1 cfg]# vim /opt/kubernetes/cfg/kubelet.kubeconfig

[root@node1 cfg]# vim /opt/kubernetes/cfg/kube-proxy.kubeconfig

//Restart the services
[root@node1 cfg]# systemctl restart kubelet.service 

[root@node1 cfg]# systemctl restart kube-proxy.service

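Rather than editing each file by hand, the three kubeconfig files can also be rewritten in one pass (a sketch, assuming they currently point at master01's address 192.168.10.60):
[root@node1 cfg]# cd /opt/kubernetes/cfg/
[root@node1 cfg]# sed -i 's#server: https://192.168.10.60:6443#server: https://192.168.10.200:6443#' bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig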

2. Verify the changes

//Be sure to run the grep check from this directory
[root@localhost ~]# cd /opt/kubernetes/cfg/
[root@localhost cfg]#  grep 200 *
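If the edits took effect, every match should reference the VIP; the output should look roughly like this (illustrative, not captured verbatim):
bootstrap.kubeconfig:    server: https://192.168.10.200:6443
kubelet.kubeconfig:    server: https://192.168.10.200:6443
kube-proxy.kubeconfig:    server: https://192.168.10.200:6443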

3. Next, check nginx's k8s access log on nginx01 to see whether the nodes are reaching the VIP:

[root@nginx01 ~]# tail /var/log/nginx/k8s-access.log
192.168.100.190 192.168.100.170:6443 - [29/Sep/2020:23:12:33 +0800] 200 1122
192.168.100.190 192.168.100.160:6443 - [29/Sep/2020:23:12:33 +0800] 200 1121
'//these entries were generated when the node services were restarted'

With load balancing in place, node traffic goes through the load balancer instead of hitting master01 directly, which greatly reduces the pressure on the masters.

5. Testing the k8s multi-node cluster

  • On master01, create a Pod for testing
[root@master bin]# ls
kube-apiserver  kube-controller-manager  kubectl  kube-scheduler
[root@master bin]# ./kubectl run nginx --image=nginx
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
deployment.apps/nginx created
[root@master bin]# pwd
/opt/kubernetes/bin
  • Check the pod status
[root@master bin]# ./kubectl get pods
NAME                     READY   STATUS             RESTARTS   AGE
nginx2-cc5f746cb-fjgrk   1/1     Running            0          2m46s
  • Bind the cluster's anonymous user to the cluster-admin role (fixes the problem of logs not being viewable)
[root@master bin]# ./kubectl logs nginx2-cc5f746cb-fjgrk
Error from server (Forbidden): Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy) ( pods/log nginx2-cc5f746cb-fjgrk)
# The error is caused by insufficient permissions; grant them as follows.
Fix (grant the anonymous user admin permissions):
[root@master bin]# ./kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous
clusterrolebinding.rbac.authorization.k8s.io/cluster-system-anonymous created
[root@master bin]# ./kubectl logs nginx2-cc5f746cb-fjgrk "no output yet, but no longer an error"
  • Check the Pod network
[root@master bin]# ./kubectl get pods -o wide
NAME                     READY   STATUS             RESTARTS   AGE   IP            NODE              NOMINATED NODE
nginx2-cc5f746cb-fjgrk   1/1     Running            0          53s   172.17.71.3   192.168.100.190   <none>
# -o wide shows which node the Pod was scheduled to; here it was created on the node at .190
# The pod created on master01 was scheduled onto node01
# It can be accessed directly from a node on the same overlay network.
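For example, from a node on the same flannel network the pod can be reached directly (a sketch, using the pod IP reported above):
[root@node01 ~]# curl -I 172.17.71.3	'//should return the response headers of the default nginx welcome page'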