Kubernetes: Multi-Master Binary Cluster Deployment and Common kubectl Commands

This document describes how to extend an existing single-node binary K8s cluster into a highly available multi-Master setup: add a master2 node by copying the component binaries and service unit files, configure a VIP, load-balance the apiservers with Nginx plus Keepalived, and synchronize certificates and configuration between nodes so that the VIP fails over automatically and the cluster keeps serving when a Master goes down.

Multi-Master cluster architecture diagram

Environment

Node       IP
master1    20.0.0.11
master2    20.0.0.14
node01     20.0.0.12
node02     20.0.0.13
nginx01    20.0.0.15
nginx02    20.0.0.16
VIP        20.0.0.100

Prerequisites:
This deployment continues directly from the earlier single-node binary deployment.

To avoid losing the cluster if master1 goes down, add master2 to form a multi-Master K8s cluster.

Note: stop the firewall and disable SELinux on every node first.
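
A minimal sketch of that step, assuming CentOS 7 with firewalld (run on every node):

[root@localhost ~]# systemctl stop firewalld && systemctl disable firewalld
[root@localhost ~]# setenforce 0
[root@localhost ~]# sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config    # keep SELinux off across reboots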

Copy the required files from master1 to master2

Copy the /opt/kubernetes directory to master02

[root@master1 k8s]# scp -r /opt/kubernetes/ root@20.0.0.14:/opt
....
Are you sure you want to continue connecting (yes/no)? yes
root@20.0.0.14's password: 
token.csv                                                         100%   84    86.1KB/s   00:00    
kube-apiserver                                                    100%  939     1.2MB/s   00:00    
kube-scheduler                                                    100%   94    52.0KB/s   00:00    
kube-controller-manager                                           100%  483   446.5KB/s   00:00    
kube-apiserver                                                    100%  184MB  30.6MB/s   00:06    
kubectl                                                           100%   55MB  32.1MB/s   00:01    
kube-controller-manager                                           100%  155MB  31.1MB/s   00:05    
kube-scheduler                                                    100%   55MB  30.7MB/s   00:01    
ca-key.pem                                                        100% 1679   741.3KB/s   00:00    
ca.pem                                                            100% 1359     1.5MB/s   00:00    
server-key.pem                                                    100% 1675     1.3MB/s   00:00    
server.pem                                                        100% 1643     1.6MB/s   00:00   

Copy the three master component unit files: kube-apiserver.service, kube-controller-manager.service, kube-scheduler.service

[root@localhost k8s]# scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@20.0.0.14:/usr/lib/systemd/system/
root@20.0.0.14's password: 
kube-apiserver.service                                            100%  282   268.1KB/s   00:00    
kube-controller-manager.service                                   100%  317   294.2KB/s   00:00    
kube-scheduler.service                                            100%  281   257.5KB/s   00:00  

Modify the files on master02

Change the IP addresses in the kube-apiserver config file

[root@localhost ~]# cd /opt/kubernetes/cfg/
[root@localhost cfg]# vim kube-apiserver
.....
--bind-address=20.0.0.14 \
--secure-port=6443 \
--advertise-address=20.0.0.14 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
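
After saving, a quick grep can confirm that the two address flags now point at master02 (a sketch; only these two lines should carry the new IP):

[root@localhost cfg]# grep -E 'bind-address|advertise-address' kube-apiserver
--bind-address=20.0.0.14 \
--advertise-address=20.0.0.14 \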

The master also needs the etcd certificates

Copy the existing etcd certificates from master01 to master02

[root@mastar1 k8s]# scp -r /opt/etcd/ root@20.0.0.14:/opt/
root@20.0.0.14's password: 
etcd                                                              100%  523   415.0KB/s   00:00    
etcd                                                              100%   18MB  42.7MB/s   00:00    
etcdctl                                                           100%   15MB  35.2MB/s   00:00    
ca-key.pem                                                        100% 1675   612.1KB/s   00:00    
ca.pem                                                            100% 1265     1.0MB/s   00:00    
server-key.pem                                                    100% 1679     1.7MB/s   00:00    
server.pem                                                        100% 1338     1.7MB/s   00:00    

Start the three component services on master02

[root@localhost cfg]# systemctl start kube-apiserver.service 
[root@localhost cfg]# systemctl start kube-controller-manager.service 
[root@localhost cfg]# systemctl start kube-scheduler.service 
'Add an environment variable'
[root@localhost cfg]# vim /etc/profile
'Append at the end of the file'
export PATH=$PATH:/opt/kubernetes/bin/
[root@localhost cfg]# source /etc/profile
[root@localhost cfg]# kubectl get node
NAME              STATUS   ROLES    AGE     VERSION
20.0.0.12   Ready    <none>   2d12h   v1.12.3
20.0.0.14   Ready    <none>   38h     v1.12.3
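
It may also help to enable the three services at boot and confirm the control-plane components report healthy (a sketch):

[root@localhost cfg]# systemctl enable kube-apiserver kube-controller-manager kube-scheduler
[root@localhost cfg]# kubectl get cs    # all components should report Healthy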

Next, the load-balancer deployment

First set up the two Nginx load balancers

The following steps are performed on both nginx01 and nginx02
Note: stop the firewall and disable SELinux first

[root@localhost ~]# vim /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0

Install Nginx and add layer-4 (stream) forwarding

[root@localhost ~]# yum install nginx -y
//add a stream block for layer-4 forwarding to the apiservers
[root@localhost ~]# vim /etc/nginx/nginx.conf 

events {
    worker_connections  1024;
}
stream {

    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log  /var/log/nginx/k8s-access.log  main;

    upstream k8s-apiserver {
        server 20.0.0.11:6443;    'apiserver on master1'
        server 20.0.0.14:6443;    'apiserver on master2'
    }
    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}
http {
......    'the existing http block that follows stays unchanged'
[root@localhost ~]# systemctl start nginx
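
After starting Nginx it may be worth validating the configuration and confirming the stream proxy is listening on 6443 (a sketch):

[root@localhost ~]# nginx -t
[root@localhost ~]# ss -lntp | grep 6443    # nginx should be listening here for apiserver traffic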

Set up the Keepalived high-availability service

[root@localhost ~]# yum install keepalived -y
//edit the configuration file
[root@localhost ~]# vim /etc/keepalived/keepalived.conf
//Note: nginx01 is the MASTER; configure it as follows:

! Configuration File for keepalived 
 
global_defs { 
   'notification recipient addresses'
   notification_email { 
     acassen@firewall.loc 
     failover@firewall.loc 
     sysadmin@firewall.loc 
   } 
  'notification sender address'
   notification_email_from Alexandre.Cassen@firewall.loc  
   smtp_server 127.0.0.1 
   smtp_connect_timeout 30 
   router_id NGINX_MASTER 
} 

vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"
}

vrrp_instance VI_1 { 
    state MASTER 
    interface eth0    'check with ip a whether your interface is eth0 or ens33'
    virtual_router_id 51    'VRRP router ID; unique per instance'
    priority 100    'priority; set 90 on the backup server'
    advert_int 1    'VRRP advertisement (heartbeat) interval, default 1 second'
    authentication { 
        auth_type PASS      
        auth_pass 1111 
    }  
    virtual_ipaddress { 
        20.0.0.100/24    'floating VIP address'
    } 
    track_script {
        check_nginx
    } 
}
//Note: nginx02 is the BACKUP; configure it as follows:
! Configuration File for keepalived 
 
global_defs { 
   'notification recipient addresses'
   notification_email { 
     acassen@firewall.loc 
     failover@firewall.loc 
     sysadmin@firewall.loc 
   } 
  'notification sender address'
   notification_email_from Alexandre.Cassen@firewall.loc  
   smtp_server 127.0.0.1 
   smtp_connect_timeout 30 
   router_id NGINX_MASTER 
} 

vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"
}

vrrp_instance VI_1 { 
    state BACKUP 
    interface eth0    'check with ip a whether your interface is eth0 or ens33'
    virtual_router_id 51    'VRRP router ID; unique per instance'
    priority 90    'priority; lower than the 100 set on the MASTER'
    advert_int 1    'VRRP advertisement (heartbeat) interval, default 1 second'
    authentication { 
        auth_type PASS      
        auth_pass 1111 
    }  
    virtual_ipaddress { 
        20.0.0.100/24 
    } 
    track_script {
        check_nginx
    } 
}

When Nginx on nginx01 goes down, the check script automatically stops Keepalived so the VIP drifts over to nginx02

[root@localhost ~]# vim /etc/nginx/check_nginx.sh
#!/bin/bash
# count nginx processes, excluding the grep itself and this script's own PID
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

# if no nginx process is running, stop keepalived so the VIP fails over
if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi
[root@localhost ~]# chmod +x /etc/nginx/check_nginx.sh
[root@localhost ~]# systemctl start keepalived

Check the address information on the Nginx servers

On nginx01:

[root@localhost ~]# ip a
....
    ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:eb:11:2a brd ff:ff:ff:ff:ff:ff
    inet 20.0.0.14/24 brd 20.0.0.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 20.0.0.100/24 scope global secondary ens33    'the floating VIP currently sits on nginx01'
       valid_lft forever preferred_lft forever
    inet6 fe80::53ba:daab:3e22:e711/64 scope link 
       valid_lft forever preferred_lft forever

If nginx01 goes down, the floating address moves to nginx02; you can try it for yourself.
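
A simple failover test might look like this (a sketch; afterwards restart the services on nginx01 to let the VIP move back):

[root@nginx01 ~]# systemctl stop nginx                # check_nginx.sh then stops keepalived on nginx01
[root@nginx02 ~]# ip a | grep 20.0.0.100              # the VIP should now show up on nginx02
[root@nginx01 ~]# systemctl start nginx keepalived    # recover; the higher-priority MASTER reclaims the VIP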

Now point the node kubeconfig files at the unified VIP (bootstrap.kubeconfig, kubelet.kubeconfig, kube-proxy.kubeconfig)

[root@localhost cfg]# vim /opt/kubernetes/cfg/bootstrap.kubeconfig
[root@localhost cfg]# vim /opt/kubernetes/cfg/kubelet.kubeconfig
[root@localhost cfg]# vim /opt/kubernetes/cfg/kube-proxy.kubeconfig
//change the server entry in all of them to the VIP
server: https://20.0.0.100:6443
[root@localhost cfg]# systemctl restart kubelet.service 
[root@localhost cfg]# systemctl restart kube-proxy.service 
//once replaced, verify in place
[root@localhost cfg]# grep 100 *
bootstrap.kubeconfig:    server: https://20.0.0.100:6443
kubelet.kubeconfig:    server: https://20.0.0.100:6443
kube-proxy.kubeconfig:    server: https://20.0.0.100:6443
//on nginx01, check the Nginx k8s access log
[root@nginx01 ~]# tail /var/log/nginx/k8s-access.log 
20.0.0.12 20.0.0.11:6443 - [23/Mar/2021:16:02:19 +0800] 200 1115
20.0.0.12 20.0.0.14:6443 - [23/Mar/2021:16:02:19 +0800] 200 1116
20.0.0.13 20.0.0.14:6443 - [23/Mar/2021:16:07:42 +0800] 200 1114
20.0.0.13 20.0.0.11:6443 - [23/Mar/2021:16:07:42 +0800] 200 1115
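
To double-check that traffic through the VIP actually reaches an apiserver, a direct request can be made from any node (a sketch; -k skips TLS verification, and even an HTTP 401/403 reply still proves the VIP path works):

[root@node01 ~]# curl -k https://20.0.0.100:6443/version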

Test: create a pod

Create an nginx pod

[root@master ~]# kubectl run nginx --image=nginx
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
deployment.apps/nginx created

Check the pod
Note: ContainerCreating simply means wait a moment

[root@master ~]# kubectl get pods
NAME                      READY   STATUS              RESTARTS   AGE
nginx-dbddb74b8-kzm6m     0/1     ContainerCreating   0          16s
nginx1-84ccd956fb-qgfh2   1/1     Running             0          14h

Check again

[root@master ~]# kubectl get pods
NAME                      READY   STATUS    RESTARTS   AGE
nginx-dbddb74b8-kzm6m     1/1     Running   0          20s
nginx1-84ccd956fb-qgfh2   1/1     Running   0          14h
[root@localhost ~]# kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous
clusterrolebinding.rbac.authorization.k8s.io/cluster-system-anonymous created
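
This binding grants the anonymous user cluster-admin; in this kind of binary deployment it is commonly needed so that kubectl logs can reach the kubelet without being rejected (a sketch, using the pod name from the output above):

[root@master ~]# kubectl logs nginx-dbddb74b8-kzm6m    # without the binding this may fail with a 'forbidden: User "system:anonymous"' error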

Check the pod network details

[root@master ~]# kubectl get pods -o wide
NAME                      READY   STATUS    RESTARTS   AGE   IP            NODE        NOMINATED NODE
nginx-dbddb74b8-kzm6m     1/1     Running   0          27s   172.17.55.3   20.0.0.12   <none>
nginx1-84ccd956fb-qgfh2   1/1     Running   0          14h   172.17.74.3   20.0.0.13   <none>

Describe the pod in detail

[root@master ~]# kubectl describe pod nginx1-84ccd956fb-qgfh2
Name:               nginx1-84ccd956fb-qgfh2
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               20.0.0.13/20.0.0.13
Start Time:         Tue, 23 Mar 2021 18:23:36 +0800
Labels:             pod-template-hash=84ccd956fb
                    run=nginx1
Annotations:        <none>
Status:             Running
IP:                 172.17.74.3
Controlled By:      ReplicaSet/nginx1-84ccd956fb
Containers:
  nginx1:
    Container ID:   docker://370ebdca7d9a6bfb614c0f59c8d1235dddabc6f5824ce6dd8c80f54d54e574b5
    Image:          nginx
    Image ID:       docker-pullable://nginx@sha256:10b8cc432d56da8b61b070f4c7d2543a9ed17c2b23010b43af434fd40e2ca4aa
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Tue, 23 Mar 2021 18:24:07 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-qjdz7 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-qjdz7:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-qjdz7
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age   From                Message
  ----    ------     ----  ----                -------
  Normal  Scheduled  14h   default-scheduler   Successfully assigned default/nginx1-84ccd956fb-qgfh2 to 20.0.0.13
  Normal  Pulling    14h   kubelet, 20.0.0.13  pulling image "nginx"
  Normal  Pulled     14h   kubelet, 20.0.0.13  Successfully pulled image "nginx"
  Normal  Created    14h   kubelet, 20.0.0.13  Created container
  Normal  Started    14h   kubelet, 20.0.0.13  Started container

The pod can be accessed directly from a node on the matching overlay network segment

[root@node2 ~]# curl 172.17.74.3  
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

kubectl command management

kubectl help information: kubectl --help

Common commands

create: create a resource (from a file or from stdin)
expose: expose a resource to the outside as a Service on a given port; see the example after this list
run: run a specified image
set: set specific fields on an object (for example, the image version)
explain: show documentation for a resource type
get: display resource information
edit: edit a resource in place
delete: delete resources
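
For example, a typical create/expose pair might look like this (a sketch; the names and ports are illustrative):

[root@master ~]# kubectl run nginx-web --image=nginx --port=80 --replicas=2
[root@master ~]# kubectl expose deployment nginx-web --port=80 --target-port=80 --type=NodePort
[root@master ~]# kubectl get svc nginx-web    # note the NodePort allocated from the 30000-50000 range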

kubectl run command

kubectl run NAME --image=image [--env="key=value"] [--port=port] [--replicas=replicas] [--dry-run=bool] [--overrides=inline-json] [--command] [COMMAND] [args...] [options]

NAME: resource name
--image=image: image to run
[--env="key=value"]: set environment variables in the Pod
[--port=port]: port the container exposes
[--replicas=replicas]: number of replicas
[--dry-run=bool]: only print the object that would be created, without creating it
[--overrides=inline-json]: inline JSON to override fields of the generated object
[--command] [COMMAND] [args...] [options]: command and arguments to run, plus other options

A simple test
Note: create an nginx deployment that exposes port 80 and runs 3 replicas

[root@master ~]# kubectl run nginx-deployment --image=nginx --port=80 --replicas=3
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
deployment.apps/nginx-deployment created
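
To confirm the three replicas came up (a sketch):

[root@master ~]# kubectl get pods -o wide | grep nginx-deployment    # expect three nginx-deployment-* pods spread across the nodes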

kubectl delete command

[root@master ~]# kubectl delete deploy/nginx
deployment.extensions "nginx" deleted
[root@master ~]# kubectl get pod
NAME                      READY   STATUS    RESTARTS   AGE
nginx1-84ccd956fb-qgfh2   1/1     Running   0          14h
[root@master ~]# 

Delete nginx-deployment

[root@master ~]# kubectl delete deploy/nginx-deployment
deployment.extensions "nginx-deployment" deleted

#check the pod resources again
[root@master ~]# kubectl get pods
No resources found.