Kubernetes Binary Deployment (Multi-Node)

Lab Environment

Role assignment:

Hostname               IP address   Installed packages
master01               14.0.0.50    kube-apiserver, kube-controller-manager, kube-scheduler, etcd
master02               14.0.0.80    kube-apiserver, kube-controller-manager, kube-scheduler
node01                 14.0.0.60    kubelet, kube-proxy, docker, flannel, etcd
node02                 14.0.0.70    kubelet, kube-proxy, docker, flannel, etcd
nginx01 + keepalived   14.0.0.90    nginx, keepalived
nginx02 + keepalived   14.0.0.100   nginx, keepalived

Lab Procedure

With the single-node cluster already deployed, we now extend it to a multi-node setup. The earlier steps are covered in the previous post:
https://blog.csdn.net/chengu04/article/details/108899870

Deploying the master02 node


1. Disable the firewall, SELinux, and NetworkManager (NetworkManager must be disabled in a production environment)
[root@localhost ~]# hostnamectl set-hostname master02	# set the hostname
[root@localhost ~]# su
[root@master02 ~]# systemctl stop firewalld	# stop the firewall
[root@master02 ~]# setenforce 0 && sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config	# disable SELinux
[root@master02 ~]# systemctl stop NetworkManager && systemctl disable NetworkManager	# disable NetworkManager

2. Copy the kubernetes configuration files and systemd unit files from master01 to master02
[root@master01 ~]# scp -r /opt/kubernetes/ root@14.0.0.80:/opt/
[root@master01 ~]# scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@14.0.0.80:/usr/lib/systemd/system/

3. Copy the etcd certificates from master01 to master02 (master02 must have the etcd certificates in order to talk to the etcd cluster)
[root@master01 ~]# scp -r /opt/etcd/ root@14.0.0.80:/opt
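A quick sanity check after copying (a small addition, assuming the certificates live under /opt/etcd/ssl as in the single-node deployment):
[root@master02 ~]# ls /opt/etcd/ssl	# expect the CA and server certificate/key pairs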

4. On master02, update the IP addresses in the apiserver configuration file
[root@master02 ~]# cd /opt/kubernetes/cfg/
[root@master02 cfg]# ls
kube-apiserver  kube-controller-manager  kube-scheduler  token.csv
[root@master02 cfg]# vim kube-apiserver

KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://14.0.0.50:2379,https://14.0.0.60:2379,https://14.0.0.70:2379 \
--bind-address=14.0.0.80 \	# change the bind address to master02's IP
--secure-port=6443 \
--advertise-address=14.0.0.80 \	# change the advertise address as well
...omitted
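Rather than editing by hand, the same change can be made with sed (a minimal sketch, assuming the copied file still carries master01's address 14.0.0.50 in exactly these two flags; --etcd-servers must stay untouched):
[root@master02 cfg]# sed -i 's/--bind-address=14.0.0.50/--bind-address=14.0.0.80/' kube-apiserver
[root@master02 cfg]# sed -i 's/--advertise-address=14.0.0.50/--advertise-address=14.0.0.80/' kube-apiserver
[root@master02 cfg]# grep 14.0.0.80 kube-apiserver	# verify both flags now point at master02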

5. Start the three control-plane services on master02
[root@master02 cfg]# systemctl start kube-apiserver.service
[root@master02 cfg]# systemctl start kube-controller-manager.service
[root@master02 cfg]# systemctl start kube-scheduler.service
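Optionally (an addition beyond the original steps), enable the services so they come back after a reboot:
[root@master02 cfg]# systemctl enable kube-apiserver.service kube-controller-manager.service kube-scheduler.service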

6. Add the kubernetes binaries to PATH and check node status
[root@master02 ~]# echo 'export PATH=$PATH:/opt/kubernetes/bin' >> /etc/profile	# quoted so $PATH expands at login time rather than being baked in now
[root@master02 ~]# source /etc/profile
[root@master02 ~]# kubectl get node
NAME        STATUS   ROLES    AGE   VERSION
14.0.0.60   Ready    <none>   23h   v1.12.3
14.0.0.70   Ready    <none>   23h   v1.12.3
# Both worker nodes showing Ready means master02 was deployed successfully
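As an extra sanity check (an addition beyond the original transcript), you can also confirm the control-plane components themselves are healthy from master02:
[root@master02 ~]# kubectl get cs	# scheduler, controller-manager and the etcd members should all report Healthy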

Deploying the nginx load-balancer cluster

1. On both nginx hosts, disable the firewall and SELinux, then add an nginx yum repository
[root@localhost ~]# hostnamectl set-hostname nginx01	# set the hostname
[root@localhost ~]# su
[root@nginx01 ~]# systemctl stop firewalld && systemctl disable firewalld	# disable the firewall
[root@nginx01 ~]# setenforce 0 && sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config	# disable SELinux
[root@nginx01 ~]# vi /etc/yum.repos.d/nginx.repo 	# add the nginx yum repository
[nginx]
name=nginx.repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
enabled=1
gpgcheck=0
[root@nginx01 ~]# yum list	# refresh the package list

2. Install nginx on both hosts and enable Layer 4 forwarding (only nginx01 is shown). A stream block is used because apiserver traffic on 6443 is TLS, so it must be forwarded at L4 (TCP) rather than proxied at L7.
[root@nginx01 ~]# yum -y install nginx	# install nginx
[root@nginx01 ~]# vi /etc/nginx/nginx.conf 
...content omitted
events {
    worker_connections  1024;
}

stream {
    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';	# define the log format
    access_log  /var/log/nginx/access.log  main;
    upstream k8s-apiserver {	# the proxied apiserver addresses and ports
        server 14.0.0.50:6443;
        server 14.0.0.80:6443;
    }
    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}
http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;
...content omitted
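By default the stream upstream balances round-robin between the two masters, which the access log in step 10 will confirm. Optionally (an addition, not in the original config), passive health checking can be tuned with the standard server parameters:

    upstream k8s-apiserver {
        server 14.0.0.50:6443 max_fails=3 fail_timeout=30s;	# take a master out of rotation for 30s after 3 consecutive failures
        server 14.0.0.80:6443 max_fails=3 fail_timeout=30s;
    }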

3. Start the nginx service
[root@nginx01 ~]# nginx -t	# check the nginx syntax
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@nginx01 ~]# systemctl start nginx	# start the service
[root@nginx01 ~]# systemctl status nginx
[root@nginx01 ~]# netstat -ntap |grep nginx	# port 6443 should now be listening
tcp        0      0 0.0.0.0:6443            0.0.0.0:*               LISTEN      1849/nginx: master  
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      1849/nginx: master 
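To confirm the Layer 4 path end to end (a sketch, not in the original transcript; even a 401/403 response proves traffic is reaching an apiserver through nginx):
[root@node01 ~]# curl -k https://14.0.0.90:6443/version	# -k skips certificate verification, since the apiserver cert does not list the nginx host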

4. Deploy keepalived on both nginx hosts (only nginx01's configuration is shown)
[root@nginx01 ~]# yum -y install keepalived 
[root@nginx01 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

vrrp_script check_nginx {	# define a health-check block named check_nginx
    script "/usr/local/nginx/sbin/check_nginx.sh"	# the script tests whether the nginx process is still alive
}

global_defs {
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL01	# this host's id in the cluster; nginx02 must use a different one
}

vrrp_instance VI_1 {
    state MASTER	# set to BACKUP on nginx02
    interface ens33	# the NIC name: ens33 from CentOS 7 on, eth0 on CentOS 6
    virtual_router_id 51
    priority 100	# set to 90 on nginx02
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {	# the VIP
        14.0.0.88
    }
    track_script {	# have instance VI_1 run the check_nginx script defined above
        check_nginx
    }
}
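For reference, nginx02's configuration differs only in the fields called out in the comments above (a sketch assembled from them; the router_id value LVS_DEVEL02 is an assumed name):

vrrp_instance VI_1 {
    state BACKUP	# backup role
    interface ens33
    virtual_router_id 51	# must match the MASTER
    priority 90	# lower than the MASTER's 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        14.0.0.88
    }
    track_script {
        check_nginx
    }
}

with router_id LVS_DEVEL02 in global_defs.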

5. Create the nginx monitoring script, start keepalived, and check the VIP
[root@nginx01 ~]# mkdir -p /usr/local/nginx/sbin/	# create the script directory
[root@nginx01 ~]# vim /usr/local/nginx/sbin/check_nginx.sh	# write the monitoring script
#!/bin/bash
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")	# count the running nginx processes
if [ "$count" -eq 0 ];then	# if nginx has died, stop keepalived so the VIP fails over
    systemctl stop keepalived
fi
[root@nginx01 ~]# chmod +x /usr/local/nginx/sbin/check_nginx.sh
[root@nginx01 ~]# systemctl start keepalived	# start the service
[root@nginx01 ~]# systemctl status keepalived
[root@nginx01 ~]# ip a	# check the addresses on both nginx servers
The VIP is currently on nginx01
[root@nginx02 ~]# ip a
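A quicker way to spot the VIP (a small addition, assuming keepalived bound it to ens33 as configured):
[root@nginx01 ~]# ip addr show ens33 | grep 14.0.0.88	# prints the VIP line only on the host that currently holds it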

6. Verify VIP failover
[root@nginx01 ~]# pkill nginx	# kill nginx on nginx01
[root@nginx01 ~]# systemctl status keepalived	# the check script has stopped keepalived
[root@nginx02 ~]# ip a	# the VIP has now floated to nginx02

7. Fail the VIP back
[root@nginx01 ~]# systemctl start nginx
[root@nginx01 ~]# systemctl start keepalived	# start nginx first, then keepalived
[root@nginx01 ~]# ip a	# the VIP is back on nginx01

8. Point the kubeconfig files on both nodes (bootstrap.kubeconfig, kubelet.kubeconfig, kube-proxy.kubeconfig) at the VIP; only node01 is shown
[root@node01 ~]# vi /opt/kubernetes/cfg/bootstrap.kubeconfig 
    server: https://14.0.0.88:6443	# change this to the VIP
[root@node01 ~]# vi /opt/kubernetes/cfg/kubelet.kubeconfig 
    server: https://14.0.0.88:6443	# change this to the VIP
[root@node01 ~]# vi /opt/kubernetes/cfg/kube-proxy.kubeconfig 
    server: https://14.0.0.88:6443	# change this to the VIP
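Equivalently, one sed invocation covers all three files on each node (a sketch, assuming they still point at master01's address 14.0.0.50):
[root@node01 ~]# cd /opt/kubernetes/cfg
[root@node01 cfg]# sed -i 's#server: https://14.0.0.50:6443#server: https://14.0.0.88:6443#' bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig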
    
9. Restart the services on both nodes
[root@node01 ~]# systemctl restart kubelet
[root@node01 ~]# systemctl restart kube-proxy
[root@node01 ~]# cd /opt/kubernetes/cfg/
[root@node01 cfg]# grep 88 *	# grep all files here for the VIP; output like the following confirms the change
bootstrap.kubeconfig:    server: https://14.0.0.88:6443
kubelet.kubeconfig:    server: https://14.0.0.88:6443
kube-proxy.kubeconfig:    server: https://14.0.0.88:6443

10. Check the nginx access log on nginx01 to confirm load balancing is working
[root@nginx01 ~]# vim /var/log/nginx/access.log	# the entries below were produced when the node services restarted
14.0.0.60 14.0.0.50:6443 - [30/Sep/2020:11:01:22 +0800] 200 15319
14.0.0.60 14.0.0.50:6443 - [30/Sep/2020:11:01:23 +0800] 200 1115
14.0.0.60 14.0.0.80:6443 - [30/Sep/2020:11:01:23 +0800] 200 1115
14.0.0.60 14.0.0.80:6443 - [30/Sep/2020:11:01:31 +0800] 200 3010
14.0.0.70 14.0.0.50:6443 - [30/Sep/2020:11:01:38 +0800] 200 1115
14.0.0.70 14.0.0.50:6443 - [30/Sep/2020:11:01:38 +0800] 200 1114
# Each line shows the connecting node ($remote_addr) and the master it was proxied to ($upstream_addr); requests are spread across 14.0.0.50 and 14.0.0.80, so load balancing is in effect

11. Create a test pod from a master node
[root@master01 ~]# kubectl run nginx --image=nginx	# create a pod running nginx
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
deployment.apps/nginx created
[root@master01 ~]# kubectl get pods	# the pod is still being created
NAME                    READY   STATUS              RESTARTS   AGE
nginx-dbddb74b8-5s6h7   0/1     ContainerCreating   0          13s
[root@master01 ~]# kubectl get pods	# a moment later it is Running; it is also visible from master02
NAME                    READY   STATUS    RESTARTS   AGE
nginx-dbddb74b8-5s6h7   1/1     Running   0          23s

12. View the logs of the nginx pod just created
[root@master01 ~]# kubectl logs nginx-dbddb74b8-5s6h7	# view the pod's logs
Error from server (Forbidden): Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy) ( pods/log nginx-dbddb74b8-5s6h7)
# The request is made as the anonymous user system:anonymous, which has no permissions

[root@master01 ~]# kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous	# bind the anonymous user to the cluster-admin role so it has the required permissions
[root@master01 ~]# kubectl logs nginx-dbddb74b8-5s6h7	# the command now succeeds; there are simply no log entries yet
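Binding system:anonymous to cluster-admin is fine for a lab but far too broad for production. A narrower sketch that grants only what the error message asked for, verb get on the nodes/proxy subresource (the role and binding names here are made up for illustration):
[root@master01 ~]# kubectl create clusterrole node-proxy-reader --verb=get --resource=nodes/proxy
[root@master01 ~]# kubectl create clusterrolebinding anonymous-node-proxy --clusterrole=node-proxy-reader --user=system:anonymous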

13. Access the pod's web service from its node to generate a log entry, then view it from either master
[root@master01 ~]# kubectl get pods -o wide	# show full pod details, including its IP
NAME                    READY   STATUS    RESTARTS   AGE     IP            NODE        NOMINATED NODE
nginx-dbddb74b8-5s6h7   1/1     Running   0          6m29s   172.17.26.2   14.0.0.60   <none>
[root@node01 ~]# curl 172.17.26.2	# access the pod from its node
[root@master01 ~]# kubectl logs nginx-dbddb74b8-5s6h7	# view the logs again from a master; master02 sees the same output
172.17.26.1 - - [30/Apr/2020:17:38:48 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "-"

Troubleshooting

After building the cluster, viewing the logs of the nginx pod created on master01 produced the following error:

[root@master01 ~]# kubectl logs nginx-dbddb74b8-5s6h7	# view the pod's logs
Error from server (Forbidden): Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy) ( pods/log nginx-dbddb74b8-5s6h7)

Cause:
By default the request is made as the anonymous user system:anonymous, which has no permissions.

Fix:

[root@master01 ~]# kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous	# bind the anonymous user to the cluster-admin role so it has the required permissions