Kubernetes HA Cluster Deployment Tutorial

How Kubernetes high availability works:

Kubernetes high availability mainly means high availability of the control plane. In short, there are multiple sets of master components and etcd members; kube-apiserver is exposed to the worker nodes through a load balancer, and the workers reach the masters via that load balancer.

There are two ways to build HA:

Method 1: stacked control plane nodes. The etcd members run on the master nodes themselves, co-located ("stacked") with the other control plane components.

Method 2: an external etcd cluster. etcd runs on its own dedicated nodes, deployed separately from the masters.

Both approaches provide control plane redundancy and make the cluster highly available. They differ as follows:

  1. Stacked etcd: needs fewer machines, and is simple to deploy, manage, and scale horizontally. The risk is higher, though: if one host goes down, a master and an etcd member are lost together, so cluster redundancy takes a bigger hit.
  2. External etcd: needs more machines (following etcd's odd-member rule, this topology requires at least 6 hosts for the control plane alone) and is more complex to deploy, since the etcd cluster and the master cluster must be managed separately. In return, the control plane and etcd are decoupled: the cluster risk is lower, robustness is higher, and losing a single master or etcd host barely affects the cluster.

Environment planning (this deployment stacks etcd on the master nodes)

Host configuration used here: CPU: 4 cores; memory: 32 GB; Linux kernel: 5.4.140-1.el7.elrepo; Docker version: docker-ce-18.06.3.ce-3.el7

Hostname   Role           IP address                         Installed components
node-01    k8s-master1    192.168.1.11                       kubelet kube-apiserver kube-controller-manager kube-scheduler etcd docker
node-02    k8s-master2    192.168.1.12                       kubelet kube-apiserver kube-controller-manager kube-scheduler etcd docker
node-03    k8s-master3    192.168.1.13                       kubelet kube-apiserver kube-controller-manager kube-scheduler etcd docker
node-04    k8s-node1      192.168.1.14                       kubelet kube-proxy docker
node-05    k8s-node2      192.168.1.15                       kubelet kube-proxy docker
node-06    k8s-node3      192.168.1.16                       kubelet kube-proxy docker
node-07    LB (primary)   192.168.1.17 (VIP: 192.168.1.10)   haproxy keepalived
node-08    LB (backup)    192.168.1.18 (VIP: 192.168.1.10)   haproxy keepalived

Preparation

1. Upgrade the kernel (all hosts)
# Check the current kernel
[root@node-01 ~]#  uname -r 
3.10.0-1127.el7.x86_64
# Install the elrepo repository
[root@node-01 ~]#  yum install -y https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm
[root@node-01 ~]# yum clean all && yum makecache
# List the available kernel versions
[root@node-01 ~]#  yum list available --disablerepo=* --enablerepo=elrepo-kernel
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * elrepo-kernel: mirrors.neusoft.edu.cn
Available Packages
kernel-lt.x86_64                                                      5.4.140-1.el7.elrepo                                     elrepo-kernel
kernel-lt-devel.x86_64                                                5.4.140-1.el7.elrepo                                     elrepo-kernel
kernel-lt-doc.noarch                                                  5.4.140-1.el7.elrepo                                     elrepo-kernel
kernel-lt-headers.x86_64                                              5.4.140-1.el7.elrepo                                     elrepo-kernel
kernel-lt-tools.x86_64                                                5.4.140-1.el7.elrepo                                     elrepo-kernel
kernel-lt-tools-libs.x86_64                                           5.4.140-1.el7.elrepo                                     elrepo-kernel
kernel-lt-tools-libs-devel.x86_64                                     5.4.140-1.el7.elrepo                                     elrepo-kernel
kernel-ml.x86_64                                                      5.13.10-1.el7.elrepo                                     elrepo-kernel
kernel-ml-devel.x86_64                                                5.13.10-1.el7.elrepo                                     elrepo-kernel
kernel-ml-doc.noarch                                                  5.13.10-1.el7.elrepo                                     elrepo-kernel
kernel-ml-headers.x86_64                                              5.13.10-1.el7.elrepo                                     elrepo-kernel
kernel-ml-tools.x86_64                                                5.13.10-1.el7.elrepo                                     elrepo-kernel
kernel-ml-tools-libs.x86_64                                           5.13.10-1.el7.elrepo                                     elrepo-kernel
kernel-ml-tools-libs-devel.x86_64                                     5.13.10-1.el7.elrepo                                     elrepo-kernel
perf.x86_64                                                           5.13.10-1.el7.elrepo                                     elrepo-kernel
python-perf.x86_64                                                    5.13.10-1.el7.elrepo
[root@node-01 ~]#  yum install -y kernel-lt-5.4.140-1.el7.elrepo --enablerepo=elrepo-kernel 
# List the kernel menu entries known to GRUB
[root@node-01 ~]#   cat /boot/grub2/grub.cfg |grep CentOS
menuentry 'CentOS Linux (5.4.140-1.el7.elrepo.x86_64) 7 (Core)' --class centos --class gnu-linux --class gnu --class os --unrestricted $menuentry_id_option 'gnulinux-5.4.140-1.el7.elrepo.x86_64-advanced-3f303684-1cc1-4740-8282-31234550639f' {
menuentry 'CentOS Linux (3.10.0-1127.el7.x86_64) 7 (Core)' --class centos --class gnu-linux --class gnu --class os --unrestricted $menuentry_id_option 'gnulinux-3.10.0-1127.el7.x86_64-advanced-3f303684-1cc1-4740-8282-31234550639f' {
menuentry 'CentOS Linux (0-rescue-5ac013fff1d445d488fffabc1942b358) 7 (Core)' --class centos --class gnu-linux --class gnu --class os --unrestricted $menuentry_id_option 'gnulinux-0-rescue-5ac013fff1d445d488fffabc1942b358-advanced-3f303684-1cc1-4740-8282-31234550639f' {

#  Set the new kernel as the default boot entry
[root@node-01 ~]#  grub2-set-default "CentOS Linux (5.4.140-1.el7.elrepo.x86_64) 7 (Core)" 
#  Verify the setting took effect
[root@node-01 ~]#  grub2-editenv list 
saved_entry=CentOS Linux (5.4.140-1.el7.elrepo.x86_64) 7 (Core)
[root@node-01 ~]#  reboot
[root@node-01 ~]#  uname -r
5.4.140-1.el7.elrepo.x86_64

#  Remove the old kernel (this step is optional)
[root@node-01 ~]#  rpm -qa |grep kernel
[root@node-01 ~]#  yum remove kernel-3.10.0-1127.el7.x86_64
2. Set the hostname (all hosts)
[root@node-01 ~]# hostnamectl set-hostname node-01
......
3. Configure /etc/hosts (all hosts)
[root@node-01 ~]#   cat  /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.1.11  node-01
192.168.1.12  node-02
192.168.1.13  node-03
192.168.1.14  node-04
192.168.1.15  node-05
192.168.1.16  node-06
192.168.1.17  node-07
192.168.1.18  node-08
4. Set up mutual SSH trust among the masters and passwordless login from the masters to the worker nodes (run on the three master nodes only)
[root@node-01 ~]#   ssh-keygen -t rsa
[root@node-01 ~]#   ssh-copy-id root@node-xx
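Rather than running ssh-copy-id once per host, a loop can push the key to every machine in the plan in one pass. A minimal sketch, assuming all eight hosts share a root password and that sshpass is available (it is not in the step-5 package list; without it, drop sshpass and enter each password interactively). ROOT_PW is a placeholder:

[root@node-01 ~]#   ROOT_PW='your-root-password'   # placeholder; replace with the real root password
[root@node-01 ~]#   for host in node-0{1..8}; do sshpass -p "$ROOT_PW" ssh-copy-id -o StrictHostKeyChecking=no root@${host}; done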
5. Install dependency packages (all nodes)
[root@node-01 ~]#  yum install -y chrony conntrack ipvsadm ipset jq htop net-tools nc iptables \
   curl sysstat libseccomp wget socat git yum-utils device-mapper-persistent-data lvm2
6. Disable the firewall (all nodes)
[root@node-01 ~]#  systemctl stop firewalld && systemctl disable firewalld
7. Disable SELinux (all nodes)
[root@node-01 ~]#  setenforce 0
[root@node-01 ~]#  sed -i 's/^SELINUX=.*/SELINUX=disabled/'  /etc/selinux/config
8. Disable the swap partition (all nodes)
[root@node-01 ~]#  swapoff -a
[root@node-01 ~]#  sed -i '/ swap/ s/^\(.*\)$/#\1/g' /etc/fstab
9. Tune kernel parameters (all nodes)
[root@node-01 ~]#  cat > kubernetes.conf <<EOF
   net.bridge.bridge-nf-call-iptables=1
   net.bridge.bridge-nf-call-ip6tables=1
   # enable IP forwarding (required for ipvs); sysctl does not allow trailing comments on value lines
   net.ipv4.ip_forward=1
   # net.ipv4.tcp_tw_recycle was removed in kernel 4.12, so it is omitted on the 5.4 kernel installed above
   net.ipv4.neigh.default.gc_thresh1=1024
   net.ipv4.neigh.default.gc_thresh2=2048
   net.ipv4.neigh.default.gc_thresh3=4096
   vm.swappiness=0
   vm.overcommit_memory=1
   vm.panic_on_oom=0
   fs.inotify.max_user_instances=8192
   fs.inotify.max_user_watches=1048576
   fs.file-max=52706963
   fs.nr_open=52706963
   net.ipv6.conf.all.disable_ipv6=1
   net.netfilter.nf_conntrack_max=2310720
EOF
[root@node-01 ~]#  cp kubernetes.conf /etc/sysctl.d/kubernetes.conf
# the net.bridge.* keys only exist once br_netfilter is loaded
[root@node-01 ~]#  modprobe br_netfilter
[root@node-01 ~]#  sysctl -p /etc/sysctl.d/kubernetes.conf
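The same parameters must be applied on every host. A small sketch that pushes the file from node-01, assuming the passwordless login from step 4 covers all eight hosts:

[root@node-01 ~]#  for host in node-0{2..8}; do scp /etc/sysctl.d/kubernetes.conf root@${host}:/etc/sysctl.d/; ssh root@${host} "modprobe br_netfilter && sysctl -p /etc/sysctl.d/kubernetes.conf"; done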
10. Enable IPVS support (all nodes)
[root@node-01 ~]#  cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack   # nf_conntrack_ipv4 was merged into nf_conntrack in kernel 4.19; use nf_conntrack on the 5.4 kernel
EOF
[root@node-01 ~]#  chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs
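The /etc/sysconfig/modules/ path is a SysV-era convention; on a systemd host the same persistence can be achieved with modules-load.d. A sketch equivalent to the script above:

[root@node-01 ~]#  cat > /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack
EOF
[root@node-01 ~]#  systemctl restart systemd-modules-load && lsmod | grep ip_vs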
11. Set the timezone and synchronize system time (all nodes)

See my earlier post: Linux cluster time synchronization.
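That post is not reproduced here. As a minimal sketch using chrony (installed in step 5), point every node at one time source and enable the service. The NTP server and the Asia/Shanghai timezone are assumptions; substitute your own:

[root@node-01 ~]#  timedatectl set-timezone Asia/Shanghai
[root@node-01 ~]#  cat > /etc/chrony.conf <<EOF
server ntp.aliyun.com iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
EOF
[root@node-01 ~]#  systemctl enable chronyd && systemctl restart chronyd
[root@node-01 ~]#  chronyc sources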

12. Install haproxy and keepalived on the two LB nodes
[root@node-07 ~]#  yum -y install haproxy keepalived   # required on both LB nodes

Edit the keepalived configuration on the LB primary node

[root@node-07 ~]#   cat /etc/keepalived/keepalived.conf 
! Configuration File for keepalived

global_defs {
   router_id LVS_DEVEL

# add the following two lines
   script_user root
   enable_script_security
}

vrrp_script check_haproxy {
    script "/etc/keepalived/check_haproxy.sh"         # path of the health-check script
    interval 3
    weight -2 
    fall 10
    rise 2
}

vrrp_instance VI_1 {
    state MASTER            # MASTER marks this node as primary
    interface eth0         # local NIC name
    virtual_router_id 51
    priority 100             # priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.10      # virtual IP
    }
    track_script {
        check_haproxy       # the vrrp_script defined above
    }
}

Edit the keepalived configuration on the LB backup node

[root@node-08 ~]#   cat /etc/keepalived/keepalived.conf 
! Configuration File for keepalived

global_defs {
   router_id LVS_DEVEL

# add the following two lines
   script_user root
   enable_script_security
}

vrrp_script check_haproxy {
    script "/etc/keepalived/check_haproxy.sh"         # path of the health-check script
    interval 3
    weight -2 
    fall 10
    rise 2
}

vrrp_instance VI_1 {
    state BACKUP            # BACKUP marks this node as standby
    interface eth0         # local NIC name
    virtual_router_id 51
    priority 90             # priority 90, lower than the primary
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.10      # virtual IP
    }
    track_script {
        check_haproxy       # the vrrp_script defined above
    }
}

Edit the haproxy configuration; it is identical on both LB nodes

[root@node-07 ~]#   cat /etc/haproxy/haproxy.cfg   # required on both LB nodes
#---------------------------------------------------------------------
# Example configuration for a possible web application.  See the
# full configuration options online.
#
#   http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events.  This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #   file. A line like the following can be added to
    #   /etc/sysconfig/syslog
    #
    #    local2.*                       /var/log/haproxy.log
    #
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     4000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

#---------------------------------------------------------------------
# main frontend which proxies to the backends
#---------------------------------------------------------------------
frontend  kubernetes-apiserver
    mode                        tcp
    bind                        *:6443
    option                      tcplog
    default_backend             kubernetes-apiserver

#---------------------------------------------------------------------
# haproxy statistics page
#---------------------------------------------------------------------
listen stats
    bind            *:1080
    stats auth      admin:awesomePassword
    stats refresh   5s
    stats realm     HAProxy\ Statistics
    stats uri       /admin?stats
#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend kubernetes-apiserver
    mode        tcp
    balance     roundrobin
    server  node-01 192.168.1.11:6443 check
    server  node-02 192.168.1.12:6443 check
    server  node-03 192.168.1.13:6443 check

Write the health-check script on both LB nodes

[root@node-07 ~]#  cat /etc/keepalived/check_haproxy.sh   # required on both LB nodes
#!/bin/sh
# If haproxy has died, try to restart it; if the restart fails,
# kill keepalived so the VIP fails over to the other LB node.
A=$(ps -C haproxy --no-header | wc -l)
if [ "$A" -eq 0 ]; then
  systemctl start haproxy
  sleep 2
  if [ "$(ps -C haproxy --no-header | wc -l)" -eq 0 ]; then
    killall -9 keepalived
    echo "haproxy is down on $(hostname)" | mail -s "haproxy down" root   # adjust the mail recipient as needed
    sleep 3600
  fi
fi
[root@node-07 ~]#   chmod +x /etc/keepalived/check_haproxy.sh

Start keepalived and haproxy (on both LB nodes)

[root@node-07 ~]#  systemctl start keepalived && systemctl enable keepalived
[root@node-07 ~]#  systemctl start haproxy && systemctl enable haproxy
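A quick sanity check: the VIP should be bound on the primary node, and haproxy should be listening on ports 6443 and 1080 (the stats credentials are the ones set in haproxy.cfg above):

[root@node-07 ~]#  ip addr show eth0 | grep 192.168.1.10
[root@node-07 ~]#  ss -lntp | grep -E ':6443|:1080'
# the stats page is served at http://192.168.1.17:1080/admin?stats (admin / awesomePassword)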

13. Install the specified Docker version (all nodes)

See my earlier post: Installing Docker on CentOS 7.
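That post is not reproduced here. A minimal sketch that installs the docker-ce-18.06.3.ce-3.el7 version from the planning table, assuming the Aliyun docker-ce mirror (any docker-ce yum repository works):

[root@node-01 ~]#  yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@node-01 ~]#  yum install -y docker-ce-18.06.3.ce-3.el7
[root@node-01 ~]#  systemctl enable docker && systemctl start docker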

14. Configure Docker (all nodes)

Change Docker's default cgroup driver from cgroupfs to systemd, to stay consistent with kubeadm

[root@node-01 ~]#  vim /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
[root@node-01 ~]#  systemctl daemon-reload && systemctl restart docker
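Verify the driver change took effect:

[root@node-01 ~]#  docker info | grep -i 'cgroup driver'
Cgroup Driver: systemd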
15. Install Kubernetes

Configure the Kubernetes yum repository (all nodes)

[root@node-01 ~]#   cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
[root@node-01 ~]#  yum clean all && yum makecache
[root@node-01 ~]#  yum install -y  kubelet-1.18.8-0 kubeadm-1.18.8-0 kubectl-1.18.8-0
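Also enable kubelet so it starts on boot. It will crash-loop until kubeadm init or join writes its configuration; that is expected at this stage:

[root@node-01 ~]#  systemctl enable kubelet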

Generate the cluster initialization file (run on node-01)

[root@node-01 ~]#   kubeadm config print init-defaults > kubeadm-config.yaml

Edit the cluster initialization configuration file

[root@node-01 ~]#   vim kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.11   # this node's IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: node-01           # this node's hostname
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "192.168.1.10:6443"   # VIP and haproxy port
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers       # switch the image repository to the Aliyun mirror to suit your environment
kind: ClusterConfiguration
kubernetesVersion: v1.18.8                 # Kubernetes version
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"   # pod network CIDR (must match the flannel manifest deployed below)
  serviceSubnet: 10.96.0.0/12  # service network CIDR
scheduler: {}

---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs

Pre-pull the required images. On node-01, the init configuration file can be used:

[root@node-01 ~]#  kubeadm config images pull --config kubeadm-config.yaml

node-02 and node-03 have no init configuration file; pull the images there as follows:

[root@node-02 ~]#  kubeadm config images list --kubernetes-version=v1.18.8
[root@node-02 ~]#  for  image in kube-proxy:v1.18.8 kube-apiserver:v1.18.8 kube-controller-manager:v1.18.8 kube-scheduler:v1.18.8  pause:3.2  coredns:1.6.7  etcd:3.4.3-0 ;do docker pull registry.aliyuncs.com/google_containers/$image ;done

Initialize the cluster on node-01

[root@node-01 ~]#  kubeadm init --config kubeadm-config.yaml  --upload-certs
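On success, kubeadm init prints the join commands for the other masters and for the workers (save them), along with the usual steps to set up kubectl access; run those on node-01:

[root@node-01 ~]#  mkdir -p $HOME/.kube
[root@node-01 ~]#  cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@node-01 ~]#  chown $(id -u):$(id -g) $HOME/.kube/config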

Enable kubectl command auto-completion

[root@node-01 ~]#   yum install -y bash-completion
[root@node-01 ~]#   source /usr/share/bash-completion/bash_completion
[root@node-01 ~]#   source <(kubectl completion bash)
[root@node-01 ~]#   echo "source <(kubectl completion bash)" >> ~/.bashrc
[root@node-01 ~]#   exit  # log out and back in for the change to take effect

Using the join commands printed by kubeadm init, add the remaining two master nodes and the three worker nodes. The masters need the extra --control-plane and --certificate-key flags (the key was uploaded by --upload-certs); the workers use the plain join command:

[root@node-02 ~]#  kubeadm join 192.168.1.10:6443 --token .... --discovery-token-ca-cert-hash .... --control-plane --certificate-key ....
[root@node-04 ~]# kubeadm join 192.168.1.10:6443 --token .... --discovery-token-ca-cert-hash ....
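The bootstrap token expires after 24 hours and the uploaded certificate key after 2 hours; if either has lapsed before a node joins, regenerate them on node-01:

[root@node-01 ~]#  kubeadm token create --print-join-command
[root@node-01 ~]#  kubeadm init phase upload-certs --upload-certs   # prints a fresh certificate key for master joins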
16. Deploy the CNI

Check node status

[root@node-01 ~]#  kubectl get nodes
NAME      STATUS     ROLES    AGE     VERSION
node-01   NotReady   master   5m22s   v1.18.8
node-02   NotReady   master   4m20s   v1.18.8
node-03   NotReady   master   3m39s   v1.18.8
node-04   NotReady   <none>   2m57s   v1.18.8
node-05   NotReady   <none>   2m24s   v1.18.8
node-06   NotReady   <none>   2m3s    v1.18.8

Deploy the flannel network plugin

[root@node-01 ~]#  kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Check node status again

[root@node-01 ~]#  kubectl get nodes
NAME      STATUS   ROLES    AGE   VERSION
node-01   Ready    master   18m   v1.18.8
node-02   Ready    master   17m   v1.18.8
node-03   Ready    master   16m   v1.18.8
node-04   Ready    <none>   16m   v1.18.8
node-05   Ready    <none>   15m   v1.18.8
node-06   Ready    <none>   15m   v1.18.8

Check the pods

[root@node-01 ~]#  kubectl get pods -n kube-system -o wide
NAME                              READY   STATUS    RESTARTS   AGE   IP             NODE      NOMINATED NODE   READINESS GATES
coredns-7ff77c879f-fv89l          1/1     Running   0          36m   10.244.0.2     node-01   <none>           <none>
coredns-7ff77c879f-tj9cm          1/1     Running   0          36m   10.244.3.2     node-04   <none>           <none>
etcd-node-01                      1/1     Running   0          36m   192.168.1.11   node-01   <none>           <none>
etcd-node-02                      1/1     Running   0          34m   192.168.1.12   node-02   <none>           <none>
etcd-node-03                      1/1     Running   0          33m   192.168.1.13   node-03   <none>           <none>
kube-apiserver-node-01            1/1     Running   0          36m   192.168.1.11   node-01   <none>           <none>
kube-apiserver-node-02            1/1     Running   0          34m   192.168.1.12   node-02   <none>           <none>
kube-apiserver-node-03            1/1     Running   0          33m   192.168.1.13   node-03   <none>           <none>
kube-controller-manager-node-01   1/1     Running   0          36m   192.168.1.11   node-01   <none>           <none>
kube-controller-manager-node-02   1/1     Running   0          34m   192.168.1.12   node-02   <none>           <none>
kube-controller-manager-node-03   1/1     Running   0          34m   192.168.1.13   node-03   <none>           <none>
kube-flannel-ds-2qrqg             1/1     Running   0          20m   192.168.1.13   node-03   <none>           <none>
kube-flannel-ds-jcn4x             1/1     Running   0          20m   192.168.1.16   node-06   <none>           <none>
kube-flannel-ds-lkhlx             1/1     Running   0          20m   192.168.1.14   node-04   <none>           <none>
kube-flannel-ds-mfttn             1/1     Running   0          20m   192.168.1.15   node-05   <none>           <none>
kube-flannel-ds-nv9qc             1/1     Running   0          20m   192.168.1.12   node-02   <none>           <none>
kube-flannel-ds-tkbxh             1/1     Running   0          20m   192.168.1.11   node-01   <none>           <none>
kube-proxy-6zxxj                  1/1     Running   1          33m   192.168.1.16   node-06   <none>           <none>
kube-proxy-fdbm5                  1/1     Running   0          36m   192.168.1.11   node-01   <none>           <none>
kube-proxy-m2vcn                  1/1     Running   1          35m   192.168.1.13   node-03   <none>           <none>
kube-proxy-mppwh                  1/1     Running   1          33m   192.168.1.15   node-05   <none>           <none>
kube-proxy-qzrrv                  1/1     Running   1          34m   192.168.1.14   node-04   <none>           <none>
kube-proxy-r4v6j                  1/1     Running   1          35m   192.168.1.12   node-02   <none>           <none>
kube-scheduler-node-01            1/1     Running   0          36m   192.168.1.11   node-01   <none>           <none>
kube-scheduler-node-02            1/1     Running   0          34m   192.168.1.12   node-02   <none>           <none>
kube-scheduler-node-03            1/1     Running   0          34m   192.168.1.13   node-03   <none>           <none>
17. Deploy the dashboard

Download the officially recommended manifest; this guide installs version 2.2.0

 [root@node-01 ~]#  wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml 

Edit the manifest to change the Service type to NodePort and pin the node port to 30080

[root@node-01 ~]#  cat recommended.yaml 
.......
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort          # change the Service type
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30080     # add the node port
  selector:
    k8s-app: kubernetes-dashboard
 ......   
[root@node-01 ~]#  kubectl apply -f recommended.yaml
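Confirm the Service came up as a NodePort mapping 443:30080/TCP:

[root@node-01 ~]#  kubectl get svc -n kubernetes-dashboard kubernetes-dashboard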

Create an admin-user ServiceAccount bound to cluster-admin

[root@node-01 ~]#  cat dashboard-adminuser.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

[root@node-01 ~]#  kubectl apply -f dashboard-adminuser.yaml

Get the login token

[root@node-01 ~]#  kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')

Log in at https://<NodeIP>:30080 (any node's IP works).
