Kubernetes High-Availability Setup

Cluster Information

Unless otherwise noted, the steps below must be run on all nodes.

Node Plan

Hostname         Node IP           Role
-                192.168.31.237    virtual IP (VIP)
k8s-master-01    192.168.31.241    master
k8s-master-02    192.168.31.242    master
k8s-master-03    192.168.31.243    master
k8s-slave-01     192.168.31.238    slave
k8s-slave-02     192.168.31.239    slave
k8s-slave-03     192.168.31.240    slave
Pre-installation Preparation

keepalived provides a virtual IP (VIP) for high availability.
haproxy is added as a reverse proxy in front of the kube-apiservers, so every request arriving at haproxy is forwarded round-robin to the backend master nodes. With keepalived alone, all traffic would still hit the single master holding the VIP while the cluster is healthy; adding haproxy lets all masters share the load.

For details see: https://www.cnblogs.com/ssgeek/p/11942062.html

Configure hosts Resolution

Nodes to run on: all nodes (k8s-init, k8s-masters, k8s-slaves).

  • Set the hostname
# On each node, set the hostname according to the node plan
hostnamectl set-hostname k8s-master-01 && bash
hostnamectl set-hostname k8s-master-02 && bash
hostnamectl set-hostname k8s-master-03 && bash

hostnamectl set-hostname k8s-slave-01 && bash
hostnamectl set-hostname k8s-slave-02 && bash
hostnamectl set-hostname k8s-slave-03 && bash
  • Add hosts entries
$ cat >>/etc/hosts<<EOF
192.168.31.237 k8s-vip
192.168.31.241 k8s-master-01
192.168.31.242 k8s-master-02
192.168.31.243 k8s-master-03
192.168.31.238 k8s-slave-01
192.168.31.239 k8s-slave-02
192.168.31.240 k8s-slave-03
EOF
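
A quick, hedged check that the new entries resolve on every node (getent reads the local /etc/hosts):

# Each hostname from the node plan should print its IP from /etc/hosts
for h in k8s-vip k8s-master-01 k8s-master-02 k8s-master-03 k8s-slave-01 k8s-slave-02 k8s-slave-03; do
  getent hosts ${h} || echo "MISSING: ${h}"
done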
Adjust System Settings

Nodes to run on: all master and slave nodes (k8s-masters, k8s-slaves).

Open the required ports in the security group.
If there are no security-group restrictions between the nodes (machines on the internal network can reach each other freely), this step can be skipped. Otherwise make sure at least the following ports are reachable:
k8s-init node: TCP 7443, 60080, 60081; k8s-master nodes: TCP 6443, 2379, 2380 plus all UDP ports; k8s-slave nodes: all UDP ports.

  • Configure iptables

    $ iptables -P FORWARD ACCEPT
    
  • Disable swap

    $ swapoff -a
    
    # Prevent the swap partition from being mounted automatically at boot
    sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
    
  • Disable SELinux and the firewall

     $ sed -ri 's#(SELINUX=).*#\1disabled#' /etc/selinux/config
     $ setenforce 0
     $ systemctl disable firewalld && systemctl stop firewalld
     
     # sed -r enables extended regular expressions
    
  • Tune kernel parameters

    $ cat <<EOF > /etc/sysctl.d/k8s.conf
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    net.ipv4.ip_forward=1
    EOF
    
    $ modprobe br_netfilter
    $ sysctl -p /etc/sysctl.d/k8s.conf
    
  • Load the IPVS kernel modules

    $ cat > /etc/sysconfig/modules/ipvs.modules <<EOF
    #!/bin/bash
    modprobe -- ip_vs
    modprobe -- ip_vs_rr
    modprobe -- ip_vs_wrr
    modprobe -- ip_vs_sh
    modprobe -- nf_conntrack_ipv4
    EOF
    
    $ chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
    
    
    # lsmod lists the kernel modules that are currently loaded
    
  • Configure the Aliyun yum repositories

    $ curl -o /etc/yum.repos.d/Centos-7.repo http://mirrors.aliyun.com/repo/Centos-7.repo
    $ curl -o /etc/yum.repos.d/docker-ce.repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
    $ cat <<EOF > /etc/yum.repos.d/kubernetes.repo
    [kubernetes]
    name=Kubernetes
    baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=0
    repo_gpgcheck=0
    gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
            http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF
    
    $ yum clean all && yum makecache
    
  • Synchronize the system time

    yum install -y ntpdate
    ln -sf /usr/share/zoneinfo/Asia/Shanghai  /etc/localtime
    ntpdate -u ntp.aliyun.com && date
    
  • Make sure the MAC address and product_uuid are unique on every node

    • Use ip link or ifconfig -a to get the MAC addresses of the network interfaces
    • Use sudo cat /sys/class/dmi/id/product_uuid to check the product_uuid
  • Set up passwordless SSH from the master-init node to the other nodes (a loop version is sketched below)

    ssh-keygen -t rsa  # -t selects the key algorithm
    ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.31.242
    ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.31.243
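    
    The same key can be pushed to every node in one loop (a hedged sketch; the IP list comes from the node plan above and assumes root password login is still enabled):
    
    # Distribute the init master's public key to all remaining nodes
    for ip in 192.168.31.242 192.168.31.243 192.168.31.238 192.168.31.239 192.168.31.240; do
      ssh-copy-id -i ~/.ssh/id_rsa.pub root@${ip}
    done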
    
Install keepalived

On the master nodes

# Install
yum install -y keepalived

# Back up before editing
 cp /etc/keepalived/keepalived.conf{,.bak}
 
 # Official site
 http://www.keepalived.org

Annotated configuration example:

# Global settings
global_defs {
   router_id Keepalived-master # identifier for this machine; it does not need to match across nodes
}

# Health-check script definition
vrrp_script check_haproxy {
    script "killall -0 haproxy"  # health-check script or command; killall -0 sends no signal but still performs the error checking, so it is commonly used to test whether a process exists (exit 0 if it does, non-zero if it does not)
    interval 3  # how often the script/command runs, in seconds
    weight -2  # priority adjustment driven by the check result: 10 means priority +10, -10 means priority -10
    fall 10  # number of consecutive failures before the check is considered failed
    rise 2  # number of consecutive successes before the check is considered healthy again
    # other common options: user (user/group to run the script as), init_fail (treat the initial state as failed), timeout (script timeout)
}

# VRRP instance; the vrrpd child process is what implements the VRRP protocol
vrrp_instance VI_1 {
    state MASTER   # initial master/backup state
    interface ens192  # interface to bind to
    virtual_router_id 51 # virtual router ID; must be identical within one group and unique per group
    priority 250  # priority
    advert_int 1  # advertisement interval, 1 second by default
    authentication {              # authentication settings
        auth_type PASS        # authentication type: PASS or AH
        auth_pass 12345678  # authentication password
    }
    virtual_ipaddress {
        192.168.31.237  # the virtual IP; may also be written as 192.168.31.237/24 dev ens192 label ens192:2
    }
    track_script {            # reference the health-check script defined above
        check_haproxy
    }

}


Parameter notes:

state:               Initial state of the instance. It only sets the starting state of this server; the actual master is still decided by election on priority. Even if a node is configured as MASTER, a node with a higher priority will advertise that priority and take over as master.
interface:           Network interface the instance is bound to; the virtual IP is added on top of an existing interface.
dont_track_primary:  Ignore VRRP errors on the interface.
track_interface:     Extra interfaces to monitor; if any of them fails, the instance enters the FAULT state. For example, when the node also acts as a load balancer, both the internal and external interfaces must be healthy, so monitor both.
mcast_src_ip:        Source IP used for the multicast packets, i.e. the address VRRP advertisements are sent from (comparable to a heartbeat port). Choose a stable interface; if unset, the IP of the interface given by `interface` is used.
garp_master_delay:   Delay before sending gratuitous ARP requests after switching to the MASTER state.
virtual_router_id:   The VRID. Nodes with the same VRID form one group, and the VRID determines the multicast MAC address.
priority 100:        Priority of this node; the node with the highest priority becomes master.
advert_int:          Advertisement interval, 1 second by default.
virtual_ipaddress:   The VIPs. They are added when the state becomes MASTER and removed when it becomes BACKUP; this follows the elected state (driven by priority), not the configured `state` value. Multiple addresses may be listed.
virtual_routes:      Same idea as virtual_ipaddress, but routes are added and removed instead of addresses.
lvs_sync_daemon_interface: Interface the LVS syncd binds to.
authentication:      Authentication settings.
auth_type:           Authentication type, PASS or AH.
auth_pass:           Authentication password.
nopreempt:           Disable preemption; only valid on a node whose state is BACKUP, and that node's priority must be higher than the others.
preempt_delay:       Preemption delay.
debug:               Debug level.
notify_master:       Same meaning as in a sync group, but can be set per instance, e.g. notify the web admin for the http instance and the DBA for the mysql instance.

VRRP: Virtual Router Redundancy Protocol
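
A quick way to see what the killall -0 probe reports (a hedged check; run it once haproxy has been installed as described further below):

# Exit code 0 means a haproxy process exists; non-zero means it does not
killall -0 haproxy; echo $?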


For example, a script-based check /etc/keepalived/haproxy_pid.sh:
#!/bin/sh
errorExit() {
  echo "*** $*" 1>&2
  exit 1
}
curl --silent --max-time 2 --insecure https://localhost:6443/ -o /dev/null || errorExit "Error Get https://localhost:6443/"
if ip addr | grep -q 192.168.31.237; then
   curl --silent --max-time 2 --insecure https://192.168.31.237:6443/ -o /dev/null || errorExit "Error Get https://192.168.31.237:6443/"
fi
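
If you prefer this script over the killall -0 command, a hedged sketch of how it could be wired in (it assumes the script is saved as /etc/keepalived/haproxy_pid.sh):

chmod +x /etc/keepalived/haproxy_pid.sh
# then point the vrrp_script block at it instead of killall:
#     script "/etc/keepalived/haproxy_pid.sh"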


Adapted from:
https://www.cnblogs.com/cangyuefeng/p/11531983.html
https://www.jianshu.com/p/8e077225e4f7

Configuration for k8s-master-01:

cat > /etc/keepalived/keepalived.conf <<EOF 
! Configuration File for keepalived

global_defs {
   router_id Keepalived-master
}

vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 3
    weight -2
    fall 10
    rise 2
}

vrrp_instance VI_1 {
    state MASTER
    interface ens192
    virtual_router_id 51
    priority 250
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 12345678
    }
    virtual_ipaddress {
        192.168.31.237
    }
    track_script {
        check_haproxy
    }

}
EOF

Configuration for k8s-master-02:

cat > /etc/keepalived/keepalived.conf <<EOF 
! Configuration File for keepalived

global_defs {
   router_id Keepalived-back01
}

vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 3
    weight -2
    fall 10
    rise 2
}

vrrp_instance VI_1 {
    state BACKUP 
    interface ens192
    virtual_router_id 51
    priority 200
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 12345678
    }
    virtual_ipaddress {
        192.168.31.237
    }
    track_script {
        check_haproxy
    }

}
EOF

Configuration for k8s-master-03:

cat > /etc/keepalived/keepalived.conf <<EOF 
! Configuration File for keepalived

global_defs {
   router_id Keepalived-back02
}

vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 3
    weight -2
    fall 10
    rise 2
}

vrrp_instance VI_1 {
    state BACKUP 
    interface ens192
    virtual_router_id 51
    priority 150
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 12345678
    }
    virtual_ipaddress {
        192.168.31.237
    }
    track_script {
        check_haproxy
    }

}
EOF

Start and Verify

systemctl enable keepalived
systemctl start keepalived
systemctl status keepalived

$ ip addr |grep "inet 192"
    inet 192.168.31.241/24 brd 192.168.31.255 scope global ens192
    inet 192.168.31.237/32 scope global ens192

# To test: stop keepalived on master-01 and watch the VIP fail over to another master,
# then stop it on master-02 as well; bring them back up in reverse order and check whether the VIP moves back.

 systemctl stop keepalived
 systemctl restart keepalived
 ip addr |grep "inet 192"

Install haproxy

Run on the master nodes

# Install
yum install -y haproxy

# Configuration
The configuration is identical on all three master nodes. It declares the three master apiservers as backends and binds haproxy to port 16443, so port 16443 is the entry point of the cluster.

Back up before editing:
cp /etc/haproxy/haproxy.cfg{,.bak}
cat > /etc/haproxy/haproxy.cfg << EOF
#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    # 1) configure syslog to accept network log events.  This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #   file. A line like the following can be added to
    #   /etc/sysconfig/syslog
    #
    #    local2.*                       /var/log/haproxy.log
    #
    log         127.0.0.1 local2
    
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon 
       
    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats
#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------  
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000
#---------------------------------------------------------------------
# kubernetes apiserver frontend which proxys to the backends
#--------------------------------------------------------------------- 
frontend kubernetes-apiserver
    mode                 tcp
    bind                 *:16443
    option               tcplog
    default_backend      kubernetes-apiserver    
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend kubernetes-apiserver
    mode        tcp
    balance     roundrobin
    server      k8s-master-01   192.168.31.241:6443 check
    server      k8s-master-02   192.168.31.242:6443 check
    server      k8s-master-03   192.168.31.243:6443 check
#---------------------------------------------------------------------
# collection haproxy statistics message
#---------------------------------------------------------------------
listen stats
    bind                 *:1080
    stats auth           admin:awesomePassword
    stats refresh        5s
    stats realm          HAProxy\ Statistics
    stats uri            /admin?stats
EOF
Detailed haproxy configuration references:
http://www.ttlsa.com/linux/haproxy-study-tutorial/
https://blog.csdn.net/tantexian/article/details/50056199
# The parts that matter for the k8s cluster are the kubernetes-apiserver frontend and backend.

Start and check

systemctl enable haproxy
systemctl start haproxy
systemctl status haproxy
netstat -lntup|grep haproxy
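
A hedged smoke test once haproxy is running (ports and credentials come from the configuration above; the 16443 frontend only answers usefully after the apiservers exist):

# Stats page: expect HTTP 200
curl -s -u admin:awesomePassword -o /dev/null -w '%{http_code}\n' 'http://127.0.0.1:1080/admin?stats'
# After kubeadm init, the cluster entry point on the VIP should respond:
curl -k https://192.168.31.237:16443/version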
Install docker
# Docker CE mirror; see the Aliyun mirror page
https://developer.aliyun.com/mirror/docker-ce?spm=a2c6h.13651102.0.0.53322f70ZDikrA

# List all available versions
yum list docker-ce --showduplicates | sort -r

# Install (a specific version can be pinned instead)
yum install docker-ce -y

## Configure a docker registry mirror
mkdir -p /etc/docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "registry-mirrors": [
    "https://8xpk5wnt.mirror.aliyuncs.com"
  ],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

For other options see the official documentation:
https://kubernetes.io/docs/setup/production-environment/container-runtimes/#docker

The same file annotated (the comments are explanatory only; JSON does not allow comments, so write the un-annotated version above to disk):
{
  "registry-mirrors": [
    "https://8xpk5wnt.mirror.aliyuncs.com"        # registry mirror (pull accelerator)
  ],
  "exec-opts": ["native.cgroupdriver=systemd"],   # exec options: use the systemd cgroup driver
  "log-driver": "json-file",                      # logging driver
  "log-opts": {                                   # logging options
    "max-size": "100m"                            # maximum size of each log file
  },
  "storage-driver": "overlay2"                    # storage driver
}

Note: for systems running Linux kernel 4.0 or later, or RHEL/CentOS with kernel 3.10.0-514 or later, overlay2 is the preferred storage driver.


## Start docker
$ systemctl enable docker && systemctl start docker

# Check
docker -v
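
A hedged check that the cgroup driver and storage driver from daemon.json are actually in effect:

# Expect "Cgroup Driver: systemd" and "Storage Driver: overlay2"
docker info 2>/dev/null | grep -Ei 'cgroup driver|storage driver'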
Install kubeadm, kubelet and kubectl
# Install
yum list kubelet --showduplicates | sort -r
yum install -y kubelet-1.19.8 kubeadm-1.19.8 kubectl-1.19.8 --disableexcludes=kubernetes

# Enable kubelet at boot
systemctl enable kubelet

# Check the version
kubelet --version
Shell Completion

On the master nodes

 yum install bash-completion -y
 source /usr/share/bash-completion/bash_completion
 source <(kubectl completion bash)
 echo "source <(kubectl completion bash)" >> ~/.bashrc
Install the Master Nodes

Run on the master node that currently holds the VIP, here k8s-master-01.

# Create the kubeadm configuration file.
# For reference, first dump the defaults:
 kubeadm config print init-defaults > kubeadm-conf1.yaml

Reference: https://pkg.go.dev/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2

vim kubeadm-conf.yaml
apiServer:
  certSANs:
    - k8s-master-01
    - k8s-master-02
    - k8s-master-03
    - k8s-vip
    - 192.168.31.237
    - 192.168.31.241
    - 192.168.31.242
    - 192.168.31.243
    - 127.0.0.1
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2  # v1beta2 is the config API served by kubeadm 1.19 (see the reference above)
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "192.168.31.237:16443"
controllerManager: {}
dns: 
  type: CoreDNS
etcd:
  local:    
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers   # switch to the Aliyun image registry
kind: ClusterConfiguration
kubernetesVersion: v1.19.0
networking: 
  dnsDomain: cluster.local  
  podSubnet: 10.244.0.0/16  # Pod CIDR; the flannel add-on expects this subnet
  serviceSubnet: 10.96.0.0/12
scheduler: {}

First do a dry run to check that the file is valid:

kubeadm init --config ~/kubeadm-conf.yaml --dry-run

Pre-pull the images:

$ kubeadm config images list --config kubeadm-conf.yaml
$ kubeadm config images pull --config kubeadm-conf.yaml

Run the initialization:

kubeadm init --config ~/kubeadm-conf.yaml
echo $?

On success the output looks like the following.
The first join command adds additional master (control-plane) nodes; the second adds worker nodes.

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 192.168.31.237:16443 --token p4s73k.bf3v0448ktuuf9p4 \
    --discovery-token-ca-cert-hash sha256:d580a66d443a484fc87a32fc7ec8711c4c44767c3a211a838458145c14eac512 \
    --control-plane 

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.31.237:16443 --token p4s73k.bf3v0448ktuuf9p4 \
    --discovery-token-ca-cert-hash sha256:d580a66d443a484fc87a32fc7ec8711c4c44767c3a211a838458145c14eac512 

Copy the certificates to the other master nodes

# Copy the certificate-related files
ssh root@192.168.31.242  mkdir -p /etc/kubernetes/pki/etcd
scp /etc/kubernetes/admin.conf root@192.168.31.242:/etc/kubernetes
scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@192.168.31.242:/etc/kubernetes/pki
scp /etc/kubernetes/pki/etcd/ca.* root@192.168.31.242:/etc/kubernetes/pki/etcd

ssh root@192.168.31.243  mkdir -p /etc/kubernetes/pki/etcd
scp /etc/kubernetes/admin.conf root@192.168.31.243:/etc/kubernetes
scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@192.168.31.243:/etc/kubernetes/pki
scp /etc/kubernetes/pki/etcd/ca.* root@192.168.31.243:/etc/kubernetes/pki/etcd


Run the commands from the output above.

On all masters (master nodes other than the init master run this after joining the cluster):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Join as a master:

kubeadm join 192.168.31.237:16443 --token p4s73k.bf3v0448ktuuf9p4 \
    --discovery-token-ca-cert-hash sha256:d580a66d443a484fc87a32fc7ec8711c4c44767c3a211a838458145c14eac512 \
    --control-plane 

Join as a worker:

kubeadm join 192.168.31.237:16443 --token p4s73k.bf3v0448ktuuf9p4 \
    --discovery-token-ca-cert-hash sha256:d580a66d443a484fc87a32fc7ec8711c4c44767c3a211a838458145c14eac512 
Install the Cluster Network

Run this on the init master node only (it does not need to be run on every master).

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

$ vi kube-flannel.yml
...      
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=ens192  # if the machine has more than one NIC, name the internal one here; if unset, the first NIC is used
        resources:
          requests:
            cpu: "100m"
...

# Pull the flannel image first (make sure the tag matches the image referenced in your kube-flannel.yml); this can be slow from inside China, so retry if it fails
$ docker pull quay.io/coreos/flannel:v0.14.0-amd64

# Apply the flannel manifests
$ kubectl apply -f kube-flannel.yml

# Check
[root@k8s-master-01 ~]#  kubectl get pods -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-6d56c8448f-2sp5j                1/1     Running   0          33m
coredns-6d56c8448f-4pvjv                1/1     Running   0          33m
etcd-k8s-master-01                      1/1     Running   11         19h
etcd-k8s-master-02                      1/1     Running   8          163m
etcd-k8s-master-03                      1/1     Running   15         157m
kube-apiserver-k8s-master-01            1/1     Running   10         19h
kube-apiserver-k8s-master-02            1/1     Running   10         163m
kube-apiserver-k8s-master-03            1/1     Running   22         157m
kube-controller-manager-k8s-master-01   1/1     Running   9          19h
kube-controller-manager-k8s-master-02   1/1     Running   8          163m
kube-controller-manager-k8s-master-03   1/1     Running   9          157m
kube-flannel-ds-595gl                   1/1     Running   0          32s
kube-flannel-ds-brzmf                   1/1     Running   0          32s
kube-flannel-ds-ckkb7                   1/1     Running   0          32s
kube-flannel-ds-grfbc                   1/1     Running   0          32s
kube-flannel-ds-mjztr                   1/1     Running   0          32s
kube-flannel-ds-rkddx                   1/1     Running   0          32s
kube-proxy-26kj2                        1/1     Running   5          19h
kube-proxy-2hbnf                        1/1     Running   5          18h
kube-proxy-6gw8j                        1/1     Running   5          18h
kube-proxy-j8tq7                        1/1     Running   8          163m
kube-proxy-lxq9x                        1/1     Running   10         19h
kube-proxy-zqtdb                        1/1     Running   6          158m
kube-scheduler-k8s-master-01            1/1     Running   9          19h
kube-scheduler-k8s-master-02            1/1     Running   8          163m
kube-scheduler-k8s-master-03            1/1     Running   7          157m


# Be patient, this can take a little while

Check the status

[root@k8s-master-01 ~]#  kubectl get no
NAME            STATUS   ROLES    AGE   VERSION
k8s-master-01   Ready    master   17h   v1.19.8
k8s-master-02   Ready    master   58m   v1.19.8
k8s-master-03   Ready    master   52m   v1.19.8
k8s-slave-01    Ready    <none>   17h   v1.19.8
k8s-slave-02    Ready    <none>   17h   v1.19.8
k8s-slave-03    Ready    <none>   17h   v1.19.8

[root@k8s-master-01 ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                                                                                       ERROR
controller-manager   Unhealthy   Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused   
scheduler            Unhealthy   Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused   
etcd-0               Healthy     {"health":"true"}  

Check the cluster and pod status from any one of the master nodes:
[root@k8s-master-03 ~]#  kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY   STATUS    RESTARTS   AGE
kube-system   coredns-6d56c8448f-98759                1/1     Running   5          17h
kube-system   coredns-6d56c8448f-tqwws                1/1     Running   5          17h
kube-system   etcd-k8s-master-01                      1/1     Running   8          17h
kube-system   etcd-k8s-master-02                      1/1     Running   3          60m
kube-system   etcd-k8s-master-03                      1/1     Running   3          55m
kube-system   kube-apiserver-k8s-master-01            1/1     Running   6          17h
kube-system   kube-apiserver-k8s-master-02            1/1     Running   5          60m
kube-system   kube-apiserver-k8s-master-03            1/1     Running   3          55m
kube-system   kube-controller-manager-k8s-master-01   1/1     Running   6          17h
kube-system   kube-controller-manager-k8s-master-02   1/1     Running   3          60m
kube-system   kube-controller-manager-k8s-master-03   1/1     Running   4          55m
kube-system   kube-flannel-ds-bx729                   1/1     Running   2          55m
kube-system   kube-flannel-ds-jwtcj                   1/1     Running   5          16h
kube-system   kube-flannel-ds-kwp42                   1/1     Running   3          16h
kube-system   kube-flannel-ds-lf4zn                   1/1     Running   6          16h
kube-system   kube-flannel-ds-pxpr4                   1/1     Running   6          16h
kube-system   kube-flannel-ds-w5wrc                   1/1     Running   2          60m
kube-system   kube-proxy-26kj2                        1/1     Running   2          17h
kube-system   kube-proxy-2hbnf                        1/1     Running   2          17h
kube-system   kube-proxy-6gw8j                        1/1     Running   2          17h
kube-system   kube-proxy-j8tq7                        1/1     Running   3          60m
kube-system   kube-proxy-lxq9x                        1/1     Running   7          17h
kube-system   kube-proxy-zqtdb                        1/1     Running   3          55m
kube-system   kube-scheduler-k8s-master-01            1/1     Running   6          17h
kube-system   kube-scheduler-k8s-master-02            1/1     Running   3          60m
kube-system   kube-scheduler-k8s-master-03            1/1     Running   3          55m

Verify the high availability of the cluster
# Shut down master-01 and check the cluster state
[root@k8s-master-02 ~]# kubectl get node
NAME            STATUS     ROLES    AGE    VERSION
k8s-master-01   NotReady   master   19h    v1.19.8
k8s-master-02   Ready      master   172m   v1.19.8
k8s-master-03   Ready      master   166m   v1.19.8
k8s-slave-01    Ready      <none>   19h    v1.19.8
k8s-slave-02    Ready      <none>   19h    v1.19.8
k8s-slave-03    Ready      <none>   19h    v1.19.8
[root@k8s-master-02 ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY   STATUS    RESTARTS   AGE
kube-system   coredns-6d56c8448f-2sp5j                1/1     Running   1          41m
kube-system   coredns-6d56c8448f-4pvjv                1/1     Running   1          41m
kube-system   etcd-k8s-master-01                      1/1     Running   12         19h
kube-system   etcd-k8s-master-02                      1/1     Running   9          172m
kube-system   etcd-k8s-master-03                      1/1     Running   16         166m
kube-system   kube-apiserver-k8s-master-01            1/1     Running   11         19h
kube-system   kube-apiserver-k8s-master-02            1/1     Running   11         172m
kube-system   kube-apiserver-k8s-master-03            1/1     Running   23         166m
kube-system   kube-controller-manager-k8s-master-01   1/1     Running   10         19h
kube-system   kube-controller-manager-k8s-master-02   1/1     Running   9          172m
kube-system   kube-controller-manager-k8s-master-03   1/1     Running   10         166m
kube-system   kube-flannel-ds-595gl                   1/1     Running   2          9m6s
kube-system   kube-flannel-ds-brzmf                   1/1     Running   1          9m6s
kube-system   kube-flannel-ds-ckkb7                   1/1     Running   1          9m6s
kube-system   kube-flannel-ds-grfbc                   1/1     Running   1          9m6s
kube-system   kube-flannel-ds-mjztr                   1/1     Running   2          9m6s
kube-system   kube-flannel-ds-rkddx                   1/1     Running   1          9m6s
kube-system   kube-proxy-26kj2                        1/1     Running   6          19h
kube-system   kube-proxy-2hbnf                        1/1     Running   6          19h
kube-system   kube-proxy-6gw8j                        1/1     Running   6          19h
kube-system   kube-proxy-j8tq7                        1/1     Running   9          172m
kube-system   kube-proxy-lxq9x                        1/1     Running   11         19h
kube-system   kube-proxy-zqtdb                        1/1     Running   7          166m
kube-system   kube-scheduler-k8s-master-01            1/1     Running   10         19h
kube-system   kube-scheduler-k8s-master-02            1/1     Running   9          172m
kube-system   kube-scheduler-k8s-master-03            1/1     Running   8          166m


# Being new to k8s, I had a question here: in this 3-master/3-slave HA cluster, at least two masters must be up for the API to be reachable. How is that number calculated with more masters? Shutting down any two masters always broke kubectl (it could not read node information); I even added a k8s-master-04, which did not help, and at first I assumed master-03 was simply broken.

[root@k8s-master-01 ~]# kubectl get node
NAME            STATUS     ROLES    AGE     VERSION
k8s-master-01   Ready      master   24h     v1.19.8
k8s-master-02   NotReady   master   7h57m   v1.19.8
k8s-master-03   Ready      master   159m    v1.19.8
k8s-master-04   NotReady      master   17m     v1.19.8
k8s-slave-01    NotReady   <none>   24h     v1.19.8
k8s-slave-02    NotReady   <none>   24h     v1.19.8
k8s-slave-03    NotReady   <none>   24h     v1.19.8
[root@k8s-master-01 ~]# kubectl get node
Error from server: etcdserver: request timed out
[root@k8s-master-01 ~]# kubectl get node
Error from server: etcdserver: request timed out
[root@k8s-master-01 ~]# kubectl get node
Error from server: etcdserver: request timed out
[root@k8s-master-01 ~]# kubectl get node
Error from server (InternalError): an error on the server ("") has prevented the request from succeeding

# Observed: with three masters at least two must be up; with four, at least three.

The explanation is the Raft election used by etcd: a quorum of floor(total members / 2) + 1 must remain available.
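
A hedged way to check how many etcd members are currently healthy (etcdctl is copied out of the etcd container later in this guide; the certificate paths are the standard kubeadm ones also used below):

ETCDCTL_API=3 etcdctl \
  --endpoints=https://192.168.31.241:2379,https://192.168.31.242:2379,https://192.168.31.243:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt \
  --key=/etc/kubernetes/pki/etcd/healthcheck-client.key \
  endpoint health
# With 3 members the quorum is 3/2+1 = 2, so losing any two masters stops etcd, and the apiservers with it.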
Scale the Cluster Down

On a master node:

kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
kubectl delete node <node name>

On the node being removed:
kubeadm reset
Scale the Cluster Up
By default the token used to join the cluster expires after 24 hours.
To generate a new token:

# List the existing tokens
$ kubeadm token list
# Generate a new token
$ kubeadm token create
Besides the token, the join command needs the sha256 hash of the CA certificate, computed as follows:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
Assemble the join command from the token and the sha256 value above, or simply use kubeadm token create --print-join-command (see below).
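
For reference, a hedged one-liner that prints a ready-to-use join command:

# Prints: kubeadm join 192.168.31.237:16443 --token ... --discovery-token-ca-cert-hash sha256:...
kubeadm token create --print-join-command
# For an additional control-plane node, append --control-plane and copy the certificates as shown earlier.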

Substituted in, the command looks like:
kubeadm join 192.168.31.237:16443 --token 8dcwwq.ztsqz12gnua2mbul \
    --discovery-token-ca-cert-hash sha256:d580a66d443a484fc87a32fc7ec8711c4c44767c3a211a838458145c14eac512 \
    --control-plane 

Walkthrough: scaling the cluster down and back up
# Remove k8s-master-03
kubectl drain k8s-master-03 --delete-local-data --force --ignore-daemonsets
kubectl delete node k8s-master-03

# Then run the cleanup commands below on the removed node

# Regenerate a token on master-01
[root@k8s-master-01 ~]# kubeadm token create
W0721 13:57:58.851096   19810 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
1aeae5.9kpyzyp08rligx9m

[root@k8s-master-01 ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
d580a66d443a484fc87a32fc7ec8711c4c44767c3a211a838458145c14eac512

# Copy the certificates over again
ssh root@192.168.31.243  mkdir -p /etc/kubernetes/pki/etcd
scp /etc/kubernetes/admin.conf root@192.168.31.243:/etc/kubernetes
scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@192.168.31.243:/etc/kubernetes/pki
scp /etc/kubernetes/pki/etcd/ca.* root@192.168.31.243:/etc/kubernetes/pki/etcd


# Re-run the join command on the master node being re-added
kubeadm join 192.168.31.237:16443 --token 1aeae5.9kpyzyp08rligx9m \
    --discovery-token-ca-cert-hash sha256:d580a66d443a484fc87a32fc7ec8711c4c44767c3a211a838458145c14eac512 \
    --control-plane 
    
    
The following error appears:
...
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
error execution phase check-etcd: etcd cluster is not healthy: failed to dial endpoint https://192.168.31.243:2379 with maintenance client: context deadline exceeded
To see the stack trace of this error execute with --v=5 or higher



# Check the node status; the node has not joined
[root@k8s-master-01 ~]# kubectl get node
NAME            STATUS   ROLES    AGE     VERSION
k8s-master-01   Ready    master   21h     v1.19.8
k8s-master-02   Ready    master   4h50m   v1.19.8
k8s-slave-01    Ready    <none>   21h     v1.19.8
k8s-slave-02    Ready    <none>   21h     v1.19.8
k8s-slave-03    Ready    <none>   21h     v1.19.8

# Inspect the kubeadm cluster status
[root@k8s-master-01 ~]# kubectl describe configmaps kubeadm-config -n kube-system 
...
apiEndpoints:
  k8s-master-01:
    advertiseAddress: 192.168.31.241
    bindPort: 6443
  k8s-master-02:
    advertiseAddress: 192.168.31.242
    bindPort: 6443
  k8s-master-03:
    advertiseAddress: 192.168.31.243
    bindPort: 6443
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterStatus
...
# The removed node's apiEndpoint is still listed here, and its member entry also remains in etcd

# Clean it up inside etcd
[root@k8s-master-01 ~]# docker ps | grep etcd
bd0f0803660f   0369cf4303ff  
...

[root@k8s-master-01 ~]# docker cp bd0f0803660f:/usr/local/bin/etcdctl /usr/bin
[root@k8s-master-01 ~]# which etcdctl
/usr/bin/etcdctl
[root@k8s-master-01 ~]# export ETCDCTL_API=3

$ alias etcdctl='etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key'

[root@k8s-master-01 ~]#  etcdctl member list
2d9b7012db6a7425, started, k8s-master-01, https://192.168.31.241:2380, https://192.168.31.241:2379, false
4a4b286b28b131d7, started, k8s-master-02, https://192.168.31.242:2380, https://192.168.31.242:2379, false
692230c05174381a, started, k8s-master-03, https://192.168.31.243:2380, https://192.168.31.243:2379, false

## Remove the etcd cluster member for k8s-master-03
[root@k8s-master-01 ~]# etcdctl member remove 692230c05174381a
Member 692230c05174381a removed from cluster dfd6802d2f971374

[root@k8s-master-01 ~]#  etcdctl member list
2d9b7012db6a7425, started, k8s-master-01, https://192.168.31.241:2380, https://192.168.31.241:2379, false
4a4b286b28b131d7, started, k8s-master-02, https://192.168.31.242:2380, https://192.168.31.242:2379, false



# If the join then fails like this:
[root@k8s-master-03 ~]# kubeadm join 192.168.31.237:16443 --token 1aeae5.9kpyzyp08rligx9m \
>     --discovery-token-ca-cert-hash sha256:d580a66d443a484fc87a32fc7ec8711c4c44767c3a211a838458145c14eac512 \
>     --control-plane 
[preflight] Running pre-flight checks
	[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 20.10.7. Latest validated version: 19.03
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR DirAvailable--etc-kubernetes-manifests]: /etc/kubernetes/manifests is not empty
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

# then remove the leftover manifests directory
# (/etc/kubernetes/manifests is not empty)
rm -rf /etc/kubernetes/manifests


# Join again; on success the output is:
This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

	mkdir -p $HOME/.kube
	sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

[root@k8s-master-01 ~]# kubectl get node
NAME            STATUS   ROLES    AGE     VERSION
k8s-master-01   Ready    master   21h     v1.19.8
k8s-master-02   Ready    master   5h18m   v1.19.8
k8s-master-03   Ready    master   36s     v1.19.8
k8s-slave-01    Ready    <none>   21h     v1.19.8
k8s-slave-02    Ready    <none>   21h     v1.19.8
k8s-slave-03    Ready    <none>   21h     v1.19.8

Make the Master Nodes Schedulable

After a default deployment, the master nodes do not schedule workload pods. To let the masters take part in pod scheduling, remove the NoSchedule taint:

$ kubectl taint node k8s-master-01 node-role.kubernetes.io/master:NoSchedule-
$ kubectl taint node k8s-master-02 node-role.kubernetes.io/master:NoSchedule-
$ kubectl taint node k8s-master-03 node-role.kubernetes.io/master:NoSchedule-
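
A hedged way to confirm the taint is gone, and how it could be put back later:

# Expect "Taints: <none>" on each master
kubectl describe node k8s-master-01 | grep -i taints
# To restore the default behaviour:
# kubectl taint node k8s-master-01 node-role.kubernetes.io/master=:NoSchedule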
Clean Up the Environment

If you hit other problems during installation, the cluster can be reset with the commands below:

# Run on every cluster node
kubeadm reset
ifconfig cni0 down && ip link delete cni0
ifconfig flannel.1 down && ip link delete flannel.1
rm -rf /run/flannel/subnet.env
rm -rf /var/lib/cni/
mv /etc/kubernetes/ /tmp
mv /var/lib/etcd /tmp
mv ~/.kube /tmp
iptables -F
iptables -t nat -F
ipvsadm -C
ip link del kube-ipvs0
ip link del dummy0
Install the Dashboard

master-01

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml

$ vi recommended.yaml
# Change the Service to type NodePort, around line 45 of the file
......
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort  # add type: NodePort to expose the service on a node port
......


$ kubectl apply -f recommended.yaml 

[root@k8s-master-01 ~]# kubectl -n kubernetes-dashboard get svc
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.108.1.64      <none>        8000/TCP        20s
kubernetes-dashboard        NodePort    10.104.215.193   <none>        443:31110/TCP   20s


Open https://192.168.31.241:31110 in a browser, where 192.168.31.241 is an externally reachable IP of a master node.

Create a ServiceAccount for access:
$ vi dashboard-admin.conf
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: admin
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: admin
  namespace: kubernetes-dashboard

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: kubernetes-dashboard


[root@k8s-master-01 ~]# kubectl apply -f dashboard-admin.conf
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
clusterrolebinding.rbac.authorization.k8s.io/admin created

[root@k8s-master-01 ~]# kubectl -n kubernetes-dashboard get secret |grep admin-token
admin-token-4dprw                  kubernetes.io/service-account-token   3      21s

# kubectl -n kubernetes-dashboard get secret admin-token-4dprw  -o jsonpath={.data.token}|base64 -d

[root@k8s-master-01 ~]#  kubectl -n kubernetes-dashboard get secret admin-token-4dprw  -o jsonpath={.data.token}|base64 -d
eyJhbGciOiJSUzI1NiIsImtpZCI6IlI5dXlNMU9LaUU2QUJEZHU1S2w5S2tuUmNKRlFOSi1LakwzYVgyeEZ5b1kifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi10b2tlbi00ZHBydyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJhZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImRmYmM4MjA3LTdkNzEtNGViMy1iZDg5LTZmZjJjZTM4MzIyZCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbiJ9.fjxdwqhRoE7Jk_zAlHKf05aggnjqkaSwKhldO9soCNaHBrWtURzkiAYONue8OCXsQQ8qeKPzlXr-BCIBYqDdFizVOe7G2ntBVc7yaruJJh0GKXZOULrKSaQj0-_CjDc4SCmPqKB8zxcSIK1W-EuRf_OfwEENbbb7Tl_DAIp9Y3AkWNDK3p_7E5fcTWX_A-yPdUvwrMS7ps5nKUUCi5I6wJusiY2reJlFVcAv0aRR4mF3CGKE9FqddYMZwKbqgNxHaX-AYrUBonQnhiTX3wXCGAxxHPg9bZEeujFnk7osODwapU_Uah_5aIaeOuHeyjjFd4o7xWGgfGtiYfERIqCpIw
Accessing Kubernetes Services: Ingress

Kubernetes Services, whether ClusterIP or NodePort, only provide layer-4 load balancing. To get layer-7 load balancing for services inside the cluster you need an Ingress. There are many ingress controller implementations, such as nginx, contour, haproxy, traefik and Istio; a feature comparison and selection guide is available at https://www.kubernetes.org.cn

ingress-nginx is a layer-7 load balancer that centrally manages external requests to Services in the cluster. It has two main parts:

  • ingress-nginx-controller: watches the Ingress rules written by users (Ingress YAML manifests), dynamically rewrites the nginx configuration and reloads it so the changes take effect (this is automated, implemented with Lua);

  • the Ingress resource object: an abstraction of the nginx configuration as a Kubernetes object, for example:

    apiVersion: networking.k8s.io/v1beta1
    kind: Ingress
    metadata:
      name: simple-example
    spec:
      rules:
      - host: foo.bar.com
        http:
          paths:
          - path: /
            backend:
              serviceName: service1
              servicePort: 8080
    

How It Works

  • The ingress controller talks to the Kubernetes API and dynamically watches for changes to the Ingress rules in the cluster.
  • It then reads the rules (each rule simply maps a host name to a Service) and, following these user-defined rules, generates a fragment of nginx configuration.
  • The configuration is written into /etc/nginx/nginx.conf inside the nginx-ingress-controller pod, which runs an nginx instance.
  • Finally nginx is reloaded so the configuration takes effect; this is how per-domain rules and dynamic updates are achieved (a usage sketch follows this list).
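
As a sketch of how such an object is used (hedged: it assumes a Service named service1 on port 8080 already exists and that the controller installed below is running):

# Save the example Ingress above as simple-example.yaml, then:
kubectl apply -f simple-example.yaml
kubectl get ingress simple-example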

Installation

Official documentation: https://github.com/kubernetes/ingress-nginx/blob/master/docs/deploy/index.md

$ wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/mandatory.yaml

# Edit as follows:
[root@k8s-master-01 ~]# vim mandatory.yaml
[root@k8s-master-01 ~]# grep -n5 nodeSelector mandatory.yaml
212-    spec:
213-      # wait up to five minutes for the drain of connections
214-      hostNetwork: true # added: run the controller in host network mode
215-      terminationGracePeriodSeconds: 300
216-      serviceAccountName: nginx-ingress-serviceaccount
217:      nodeSelector:
218-        ingress: "true"  # changed: selects which nodes the ingress controller is deployed on
219-      containers:
220-        - name: nginx-ingress-controller
221-          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0
222-          args:


# Label the node(s) that should run the ingress controller
$ kubectl label node k8s-master-01 ingress=true
$  kubectl create -f mandatory.yaml

nginx-ingress-controller-66bff489bb-thgjt   1/1     Running   0          3m26s
[root@k8s-master-01 ~]# kubectl  get ns
NAME                   STATUS   AGE
default                Active   2d22h
ingress-nginx          Active   3m35s
kube-node-lease        Active   2d22h
kube-public            Active   2d22h
kube-system            Active   2d22h
kubernetes-dashboard   Active   2d
[root@k8s-master-01 ~]# kubectl -n ingress-nginx get pod
NAME                                        READY   STATUS    RESTARTS   AGE
nginx-ingress-controller-66bff489bb-thgjt   1/1     Running   0          4m3s
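
With the controller running with hostNetwork on k8s-master-01, a hedged smoke test of the simple-example Ingress defined earlier (foo.bar.com and service1 are the example values):

# Send a request to the node running the controller, using the Host header from the Ingress rule
curl -H 'Host: foo.bar.com' http://192.168.31.241/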
