
Deploying a Kubernetes 1.18.6 Cluster with kubeadm

I. Environment Overview

Hostname     IP Address       Role        OS
k8s-master   192.168.203.212  k8s-master  CentOS 7.6
k8s-node-1   192.168.203.213  k8s-node    CentOS 7.6
k8s-node-2   192.168.203.214  k8s-node    CentOS 7.6
  • Note: the official requirements are at least 2 CPUs and 2 GB of RAM per machine, and the MAC address and product_uuid must be unique on every node (check with the commands below)
ip link
cat /sys/class/dmi/id/product_uuid

II. Environment Configuration

  • Run the following commands on all three hosts.

1. Configure the Aliyun yum repository (optional)

curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
rm -rf /var/cache/yum && yum makecache && yum -y update && yum -y autoremove
# Note: skip the update step if your network connection is poor

2. Install dependency packages

yum install -y epel-release conntrack ipvsadm ipset jq sysstat curl iptables libseccomp

3. Disable the firewall

systemctl stop firewalld && systemctl disable firewalld
iptables -F && iptables -X && iptables -F -t nat && iptables -X -t nat && iptables -P FORWARD ACCEPT

4. Disable SELinux

setenforce 0
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config

5. Disable the swap partition

swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
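To double-check that swap is fully off, the swap line in free should read all zeros and /proc/swaps should list no devices:
free -h | grep -i swap
cat /proc/swaps   # only the header line should remain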

6. Load kernel modules

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
modprobe -- br_netfilter
EOF

chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules
  • ip_vs: LVS layer-4 load balancing
  • ip_vs_rr: round-robin scheduling
  • ip_vs_wrr: weighted round-robin scheduling
  • ip_vs_sh: source-hashing scheduling
  • nf_conntrack_ipv4: connection-tracking module
  • br_netfilter: lets iptables filter and port-forward packets traversing the bridge
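A quick way to confirm the modules are loaded after running the script:
lsmod | grep -e ip_vs -e nf_conntrack_ipv4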

7. Set kernel parameters

cat << EOF | tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF

sysctl -p /etc/sysctl.d/k8s.conf
  • overcommit_memory is the kernel's memory-allocation policy; it takes one of three values: 0, 1, or 2
    • overcommit_memory=0: the kernel checks whether enough free memory is available; if so the allocation succeeds, otherwise it fails and an error is returned to the process.
    • overcommit_memory=1: the kernel allows allocating all physical memory regardless of the current memory state.
    • overcommit_memory=2: the kernel allows allocations exceeding the sum of physical memory and swap space.
  • net.bridge.bridge-nf-call-iptables: pass bridged traffic to iptables for filtering
  • net.ipv4.tcp_tw_recycle: TIME_WAIT socket recycling (disabled here)
  • vm.swappiness: disable swapping
  • vm.panic_on_oom: whether the system panics on OOM (out of memory)
  • fs.inotify.max_user_watches: maximum number of inotify watches per user
  • fs.file-max: maximum number of open file handles system-wide
  • fs.nr_open: maximum number of open files for a single process
  • net.ipv6.conf.all.disable_ipv6: disable IPv6
  • net.netfilter.nf_conntrack_max: maximum number of tracked connections
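To verify the new values are active, query a few of them directly:
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward net.netfilter.nf_conntrack_max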

8. Install Docker

1. Remove old versions first

# Run on all nodes

[root@k8s ~]# yum remove docker \
           docker-client \
           docker-client-latest \
           docker-common \
           docker-latest \
           docker-latest-logrotate \
           docker-logrotate \
           docker-selinux \
           docker-engine-selinux \
           docker-engine
2. Install dependencies
[root@k8s ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
3. Configure the install source (Aliyun)
[root@k8s ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
4. Enable the edge/test repositories (optional)
yum-config-manager --enable docker-ce-edge
yum-config-manager --enable docker-ce-test
5. Install
[root@k8s ~]# yum makecache fast
yum list docker-ce --showduplicates | sort -r
[root@k8s ~]# yum -y install docker-ce
6. Start Docker
[root@k8s ~]# systemctl start docker
7. Enable start on boot (Docker must be enabled at boot)
[root@k8s ~]# systemctl enable docker
  • Configuring an Aliyun registry mirror for Docker is recommended.

  • After installation, adjust the service unit as below; otherwise Docker sets the default policy of the iptables FORWARD chain to DROP.

  • kubeadm also recommends systemd as the cgroup driver, so daemon.json must be modified as well.

[root@k8s ~]# sed -i "13i ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT" /usr/lib/systemd/system/docker.service

[root@k8s ~]# tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://bk6kzfqm.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

[root@k8s ~]# systemctl daemon-reload
[root@k8s ~]# systemctl restart docker
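As a quick sanity check (output format may vary slightly across Docker versions), confirm the cgroup driver and the FORWARD chain policy:
[root@k8s ~]# docker info --format '{{.CgroupDriver}}'   # expect: systemd
[root@k8s ~]# iptables -nL FORWARD | head -n 1           # expect: Chain FORWARD (policy ACCEPT)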

9. Install kubeadm and kubelet

1. Configure the install source
[root@k8s ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# Rebuild the yum cache; answer y to accept the repository keys
yum makecache fast
2. Install
[root@k8s ~]# yum install -y kubelet kubeadm kubectl
[root@k8s ~]# systemctl enable --now kubelet
3. Configure command completion
# Install the bash-completion package
[root@k8s ~]# yum install bash-completion -y
# Set up kubectl and kubeadm completion; takes effect at the next login
[root@k8s ~]# kubectl completion bash > /etc/bash_completion.d/kubectl
[root@k8s ~]# kubeadm completion bash > /etc/bash_completion.d/kubeadm
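  • The install command above pulls the latest packages from the repo. To pin exactly the v1.18.6 used in this guide (the release suffix below is an assumption; confirm it with the list command first), something like the following should work:
[root@k8s ~]# yum list kubelet --showduplicates | grep 1.18.6
[root@k8s ~]# yum install -y kubelet-1.18.6-0 kubeadm-1.18.6-0 kubectl-1.18.6-0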

10. Pull the required images

  • Because of network restrictions in mainland China, the Kubernetes images must be pulled from mirror sites or from copies pushed to Docker Hub by other users.
[root@k8s ~]# kubeadm config images list --kubernetes-version v1.18.6
W0803 07:13:18.584538   11055 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
k8s.gcr.io/kube-apiserver:v1.18.6
k8s.gcr.io/kube-controller-manager:v1.18.6
k8s.gcr.io/kube-scheduler:v1.18.6
k8s.gcr.io/kube-proxy:v1.18.6
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.7
1. Pull the images
  • The Aliyun mirror has not yet synced v1.18.6 (its latest synced release is v1.18.3), so the kube-* images are pulled from Docker Hub instead.
[root@k8s ~]# vim get-k8s-images.sh
#!/bin/bash
# Script For Quick Pull K8S Docker Images

KUBE_VERSION=v1.18.6
PAUSE_VERSION=3.2
CORE_DNS_VERSION=1.6.7
ETCD_VERSION=3.4.3-0

# pull kubernetes images from hub.docker.com
docker pull kubeimage/kube-proxy-amd64:$KUBE_VERSION
docker pull kubeimage/kube-controller-manager-amd64:$KUBE_VERSION
docker pull kubeimage/kube-apiserver-amd64:$KUBE_VERSION
docker pull kubeimage/kube-scheduler-amd64:$KUBE_VERSION
# pull aliyuncs mirror docker images
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:$PAUSE_VERSION
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$CORE_DNS_VERSION
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:$ETCD_VERSION

# retag to k8s.gcr.io prefix
docker tag kubeimage/kube-proxy-amd64:$KUBE_VERSION  k8s.gcr.io/kube-proxy:$KUBE_VERSION
docker tag kubeimage/kube-controller-manager-amd64:$KUBE_VERSION k8s.gcr.io/kube-controller-manager:$KUBE_VERSION
docker tag kubeimage/kube-apiserver-amd64:$KUBE_VERSION k8s.gcr.io/kube-apiserver:$KUBE_VERSION
docker tag kubeimage/kube-scheduler-amd64:$KUBE_VERSION k8s.gcr.io/kube-scheduler:$KUBE_VERSION
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:$PAUSE_VERSION k8s.gcr.io/pause:$PAUSE_VERSION
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$CORE_DNS_VERSION k8s.gcr.io/coredns:$CORE_DNS_VERSION
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:$ETCD_VERSION k8s.gcr.io/etcd:$ETCD_VERSION

# remove the original tags; the underlying images are not deleted
docker rmi kubeimage/kube-proxy-amd64:$KUBE_VERSION
docker rmi kubeimage/kube-controller-manager-amd64:$KUBE_VERSION
docker rmi kubeimage/kube-apiserver-amd64:$KUBE_VERSION
docker rmi kubeimage/kube-scheduler-amd64:$KUBE_VERSION
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/pause:$PAUSE_VERSION
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$CORE_DNS_VERSION
docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:$ETCD_VERSION

If the network is poor, download the images from Baidu Cloud instead: https://pan.baidu.com/s/1ii0HpU1jM3TZvY1sHDz_qg

Password: y3zk

[root@k8s ~]# docker image load -i k8s-1.18.6-images.tar
2. Export the images from the master
[root@k8s-master ~]# docker save $(docker images | grep -v REPOSITORY | awk 'BEGIN{OFS=":";ORS=" "}{print $1,$2}') -o k8s-1.18.6-images.tar
3. Import the images on the node machines
[root@k8s-node1 ~]# docker image load -i k8s-1.18.6-images.tar
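Whichever route the images took, verify that all seven images from the kubeadm list above are present before initializing:
[root@k8s ~]# docker images | grep k8s.gcr.io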

III. Initializing the Cluster

  • Unless stated otherwise, run the following commands on k8s-master.

1. Initialize the cluster with kubeadm init (change the apiserver address to this machine's IP)

[root@k8s-master ~]# kubeadm init  --kubernetes-version=v1.18.6 --apiserver-advertise-address=192.168.203.212 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.1.0.0/16
  • --kubernetes-version=v1.18.6: deploy exactly this version, using the images downloaded above

  • --pod-network-cidr=10.244.0.0/16: the Pod network segment; flannel requires 10.244.0.0/16, and this range becomes the Pod IP range

  • --service-cidr=10.1.0.0/16: the Service (cluster virtual IP) network segment

  • On success, kubeadm prints a join command similar to the one below; do not run it yet, just record it.

[root@k8s-master ~]# kubeadm join 192.168.203.212:6443 --token u7x1ds.5tiiipijzgoyhfim     --discovery-token-ca-cert-hash sha256:b2b18c68862df62971aaf94652acb447c437003d30f34a7e84f870ce17a1a3d4

2. Configure kubectl for the users that need it

# Load the admin credentials into this shell's environment
export KUBECONFIG=/etc/kubernetes/admin.conf

# Or let kubectl load the credentials from $HOME/.kube/config automatically on every start (standard kubectl behavior)
[root@k8s-master ~]#  mkdir -p $HOME/.kube
[root@k8s-master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
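At this point kubectl should reach the new control plane; the master will report NotReady until a Pod network is installed in the next step:
[root@k8s-master ~]# kubectl cluster-info
[root@k8s-master ~]# kubectl get nodes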

3. Cluster network configuration (pick one option)

1. Install the flannel network
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Note: match this to your cluster's init settings and verify the image can actually be pulled
2. Install the Pod network (using the Qiniu mirror)
curl -o kube-flannel.yml https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
sed -i "s/quay.io\/coreos\/flannel/quay-mirror.qiniu.com\/coreos\/flannel/g" kube-flannel.yml
kubectl apply -f kube-flannel.yml
rm -f kube-flannel.yml
  • Use the command below to make sure all Pods reach the Running state; this can take quite a while.
kubectl get pod --all-namespaces -o wide
3. Install the calico network (recommended)
[root@k8s-master ~]# wget https://docs.projectcalico.org/v3.15/manifests/calico.yaml
# Note: adjust this to match the CIDR used at cluster initialization
[root@k8s-master ~]# vim calico.yaml
- name: CALICO_IPV4POOL_CIDR
  value: "10.224.0.0/16"
## Uncomment and change this value; it should match the --pod-network-cidr passed to kubeadm init.
[root@k8s-master ~]# kubectl apply -f calico.yaml

4. Add Node machines to the Kubernetes cluster

  • On k8s-node-1 and k8s-node-2, run the join command that kubeadm init printed earlier on k8s-master
[root@k8s-node1 ~]# kubeadm join 192.168.203.212:6443 --token u7x1ds.5tiiipijzgoyhfim     --discovery-token-ca-cert-hash sha256:b2b18c68862df62971aaf94652acb447c437003d30f34a7e84f870ce17a1a3d4
  • Note: if you did not record the cluster join command, regenerate it with
[root@k8s-master ~]# kubeadm token create --print-join-command --ttl=0

  • Check the node status in the cluster; it may take a while before every node is Ready
[root@k8s-master ~]# kubectl get nodes

5. Enable IPVS mode for kube-proxy

[root@k8s-master ~]# kubectl get configmap kube-proxy -n kube-system -o yaml > kube-proxy-configmap.yaml
[root@k8s-master ~]# sed -i 's/mode: ""/mode: "ipvs"/' kube-proxy-configmap.yaml
[root@k8s-master ~]# kubectl apply -f kube-proxy-configmap.yaml
[root@k8s-master ~]# rm -f kube-proxy-configmap.yaml  # optional cleanup
[root@k8s-master ~]# kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}' 
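Once the kube-proxy Pods have been recreated, you can confirm IPVS mode is active; the exact log wording differs between versions, but it should mention the ipvs proxier, and ipvsadm should now list virtual servers:
[root@k8s-master ~]# kubectl logs -n kube-system $(kubectl get pod -n kube-system -l k8s-app=kube-proxy -o name | head -n 1) | grep -i ipvs
[root@k8s-master ~]# ipvsadm -Ln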

IV. Deploying kubernetes-dashboard

1. Deploy the Dashboard on the master

[root@k8s-master ~]# kubectl get pods -A  -o wide
NAMESPACE              NAME                                          READY   STATUS    RESTARTS   AGE     IP                NODE         NOMINATED NODE   READINESS GATES
kube-system            calico-kube-controllers-578894d4cd-sjfw7      1/1     Running   1          124m    10.224.36.66      k8s-node1    <none>           <none>
kube-system            calico-node-mmcnj                             1/1     Running   1          124m    192.168.203.212   k8s-master   <none>           <none>
kube-system            calico-node-v5wzw                             1/1     Running   1          124m    192.168.203.213   k8s-node1    <none>           <none>
kube-system            coredns-66bff467f8-gdlfd                      1/1     Running   2          3h23m   10.224.235.197    k8s-master   <none>           <none>
kube-system            coredns-66bff467f8-ptjwb                      1/1     Running   2          3h23m   10.224.235.198    k8s-master   <none>           <none>
kube-system            etcd-k8s-master                               1/1     Running   2          3h23m   192.168.203.212   k8s-master   <none>           <none>
kube-system            kube-apiserver-k8s-master                     1/1     Running   2          3h23m   192.168.203.212   k8s-master   <none>           <none>
kube-system            kube-controller-manager-k8s-master            1/1     Running   4          3h23m   192.168.203.212   k8s-master   <none>           <none>
kube-system            kube-proxy-6tn68                              1/1     Running   2          3h23m   192.168.203.212   k8s-master   <none>           <none>
kube-system            kube-proxy-xzl2j                              1/1     Running   1          156m    192.168.203.213   k8s-node1    <none>           <none>
kube-system            kube-scheduler-k8s-master                     1/1     Running   5          3h23m   192.168.203.212   k8s-master   <none>           <none>
kubernetes-dashboard   kubernetes-dashboard-84b6b4578b-rp6kx         1/1     Running   1          103m    10.224.235.196    k8s-master   <none>           <none>
kubernetes-dashboard   kubernetes-metrics-scraper-86f6785867-pfmvx   1/1     Running   1          103m    10.224.235.195    k8s-master   <none>           <none>

2. Download and modify the Dashboard install manifest (on the master)

wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta5/aio/deploy/recommended.yaml   ## usually fails to download; create the file manually as shown below
[root@k8s-master ~]# cat > recommended.yaml <<-'EOF'
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard

---

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""

---

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque

---

kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard

---

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]

---

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]

---

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.0.0-beta1
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: kubernetes-metrics-scraper

---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-metrics-scraper
  name: kubernetes-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: kubernetes-metrics-scraper
    spec:
      containers:
        - name: kubernetes-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.0
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
EOF
  • Modify the contents of recommended.yaml:
[root@k8s-master ~]# vim recommended.yaml
---
# add a NodePort for direct access
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort # added
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30008 # added
  selector:
    k8s-app: kubernetes-dashboard

---
# Many browsers cannot use the auto-generated certificate, so we create our own below; comment out the kubernetes-dashboard-certs Secret declaration
#apiVersion: v1
#kind: Secret
#metadata:
#  labels:
#    k8s-app: kubernetes-dashboard
#  name: kubernetes-dashboard-certs
#  namespace: kubernetes-dashboard
#type: Opaque

---

3. Create the certificate

[root@k8s-master ~]# mkdir dashboard-certs
[root@k8s-master ~]# cd dashboard-certs/
# Create the namespace
[root@k8s-master dashboard-certs]# kubectl create namespace kubernetes-dashboard
# Create the private key
[root@k8s-master dashboard-certs]# openssl genrsa -out dashboard.key 2048
# Create the certificate signing request
[root@k8s-master dashboard-certs]# openssl req -new -out dashboard.csr -key dashboard.key -subj '/CN=dashboard-cert'
# Self-sign the certificate (-days belongs on this command, not on the CSR, otherwise the cert defaults to 30 days)
[root@k8s-master dashboard-certs]# openssl x509 -req -days 36000 -in dashboard.csr -signkey dashboard.key -out dashboard.crt
# Create the kubernetes-dashboard-certs Secret
[root@k8s-master dashboard-certs]# kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt -n kubernetes-dashboard
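Optionally confirm the Secret was created before moving on:
[root@k8s-master dashboard-certs]# kubectl get secret kubernetes-dashboard-certs -n kubernetes-dashboard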

4. Create the Dashboard admin user

1. Create the account
[root@k8s-master dashboard-certs]# vim dashboard-admin.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: dashboard-admin
  namespace: kubernetes-dashboard
# After saving and exiting, run
[root@k8s-master dashboard-certs]# kubectl create -f dashboard-admin.yaml
2. Grant the user permissions
[root@k8s-master dashboard-certs]# vim dashboard-admin-bind-cluster-role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: dashboard-admin-bind-cluster-role
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: dashboard-admin
  namespace: kubernetes-dashboard
# After saving and exiting, run
[root@k8s-master dashboard-certs]# kubectl create -f dashboard-admin-bind-cluster-role.yaml

5. Install the Dashboard

# Install
[root@k8s-master ~]# kubectl apply -f  ~/recommended.yaml

# Check the result
[root@k8s-master ~]# kubectl get pods -A  -o wide

[root@k8s-master ~]# kubectl get service -n kubernetes-dashboard  -o wide
NAME                        TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE   SELECTOR
dashboard-metrics-scraper   ClusterIP   10.1.186.219   <none>        8000/TCP        19m   k8s-app=dashboard-metrics-scraper
kubernetes-dashboard        NodePort    10.1.60.1      <none>        443:30008/TCP   19m   k8s-app=kubernetes-dashboard

6. View and copy the user token

[root@k8s-master ~]# kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin | awk '{print $1}')
Name:         dashboard-admin-token-9mqvw
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 762b0839-9ba3-4442-b123-e2c2b37a1088

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6Ik9VVVQ1YkdpeDA1N1U0OUc4X0RZM2ppUndsNUdUNTRuOU1jZ0RuSUcxd00ifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tOW1xdnciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNzYyYjA4MzktOWJhMy00NDQyLWIxMjMtZTJjMmIzN2ExMDg4Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.W5DLW4dYX5u33Lg97BYb33eIWL5gTFT5xyZ5uqcPun4ChpMY7lrGA2GuxPhdfWbju7DaFMr7eacgnWoAOQzr_rgrCWCWnT7xmEWpvChbi7VVpyEGrVVqxXVRYIWMrpP5s-TYD8doEjeoxrFDwo4CWX7zv834vhkjnharY5ZBZYEKAw06Eg7d-HFsq8ZTAkeg8wXtuRd_OHvPddAuxmZCnf3Y3yLh6Ak7n3OkWKBupY7pRVUnzDBT2Nk7vv0YrAFm6f6x2Wg-WeE7Wbgwt7cOBMo2fJixfdmo0GDdwv0stCk4kuz-8wXpGtR2nGzEWX7AY5snT9AEabYvrOLIYRy7sQ
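If you only need the token string itself, the same secret can be read with a jsonpath query instead of describe:
[root@k8s-master ~]# kubectl -n kubernetes-dashboard get secret $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin | awk '{print $1}') -o jsonpath='{.data.token}' | base64 -d; echo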

7. Access the Dashboard

  • Visit https://192.168.203.212:30008, choose Token login, and paste the token copied above.

(Screenshots: the Token login page and the Dashboard after a successful login.)

8. Install the metrics-server add-on

1. Brief introduction
  • heapster has been replaced by metrics-server. Kubernetes autoscaling needs a component that collects metrics (CPU, memory, ...) and compares them against the configured autoscaling thresholds to adjust the Pod count automatically. Early Kubernetes versions used heapster for this; since the 1.13 release heapster has been dropped and metrics-server is the official recommendation.
2. Download the yaml files
  • https://github.com/kubernetes-incubator/metrics-server
[root@k8s-master ~]# git clone https://github.com/kubernetes-incubator/metrics-server.git

Because of network problems, install metrics-server manually from a pre-downloaded package.
Package link: https://pan.baidu.com/s/1Dixo-Np-TyrqnSaof7Iapw
Password: h03g

[root@k8s-master ~]# unzip  metrics-server-master
[root@k8s-master ~]# cd metrics-server-master/deploy/1.8+/
[root@k8s-master 1.8+]# ll
total 28
-rw-r--r-- 1 root root 384 Apr 28 09:46 aggregated-metrics-reader.yaml
-rw-r--r-- 1 root root 308 Apr 28 09:46 auth-delegator.yaml
-rw-r--r-- 1 root root 329 Apr 28 09:46 auth-reader.yaml
-rw-r--r-- 1 root root 298 Apr 28 09:46 metrics-apiservice.yaml
-rw-r--r-- 1 root root 815 Apr 28 09:46 metrics-server-deployment.yaml
-rw-r--r-- 1 root root 291 Apr 28 09:46 metrics-server-service.yaml
-rw-r--r-- 1 root root 502 Apr 28 09:46 resource-reader.yaml
3. Modify the install manifest
[root@k8s-master 1.8+]# vim metrics-server-deployment.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: metrics-server
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
  labels:
    k8s-app: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  template:
    metadata:
      name: metrics-server
      labels:
        k8s-app: metrics-server
    spec:
      serviceAccountName: metrics-server
      volumes:
      # mount in tmp so we can safely use from-scratch images and/or read-only containers
      - name: tmp-dir
        emptyDir: {}
      containers:
      - name: metrics-server
        image: mirrorgooglecontainers/metrics-server-amd64:v0.3.6  # changed: use a pullable mirror image
        args:        # added the following arguments
          - --cert-dir=/tmp
          - --secure-port=4443
          - --kubelet-insecure-tls
          - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        ports:
        - name: main-port
          containerPort: 4443
          protocol: TCP
        securityContext:
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        imagePullPolicy: Always
        volumeMounts:
        - name: tmp-dir
          mountPath: /tmp
4. Run the install and check the result
# Install
[root@k8s-master 1.8+]# kubectl create -f .

# Check the result after 1-2 minutes
[root@k8s-master 1.8+]# kubectl top nodes
NAME         CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
k8s-master   256m         12%    2002Mi          52%       
k8s-node1    103m         5%     1334Mi          34%       
k8s-node2    144m         7%     1321Mi          34%  
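kubectl top works per Pod as well once metrics-server is serving data, for example:
[root@k8s-master 1.8+]# kubectl top pods -n kube-system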

Back in the Dashboard UI, the CPU and memory usage charts are now populated.

Source code

GitHub repository: https://github.com/sunweisheng/Kubernetes

9. Export the login credentials

[root@k8s-master01 dashboard]#  kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin | awk '{print $1}')
Name:         admin-user-token-cj5l4
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: d4a13fad-f427-435b-86a7-6dfc534e926d

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1350 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6Il9wNERQb2tOU2pMRkdoTXlDSDRIOVh5R3pLdnA2ektIMHhXQVBucEdldFUifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJhZG1pbi11c2VyLXRva2VuLWNqNWw0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFkbWluLXVzZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJkNGExM2ZhZC1mNDI3LTQzNWItODZhNy02ZGZjNTM0ZTkyNmQiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWRtaW4tdXNlciJ9.T65yeuBa2ExprRigERC-hPG-WSdaW7B-04O5qRcXn7SLKpK_4tMM8rlraClGmc-ppSDIi35ZjK0SVb8YGDeUnt2psJlRLYVEPsJXHwYiNUfrigVs67Uo3aMGhSdjPEaqdZxsnRrReSW_rfX8odjXF0-wGKx7uA8GelUJuRNIZ0eBSu_iGJchpZxU_K3AdU_dmcyHidKzDxbPLVgAb8m7wE9wcelWVK9g6UOeg71bO0gJtlXrjWrBMfBjvnC4oLDBYs9ze96KmeOLwjWTOlwXaYg4nIuVRL13BaqmBJB9lcRa3jrCDsRT0oBZrBymvqxbCCN2VVjDmz-kZXh7BcWVLg
[root@k8s-master01 dashboard]# vim /root/.kube/config    # add the token field
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01EZ3dNekE0TVRZeU1Wb1hEVE13TURnd01UQTRNVFl5TVZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBT0pzCk5PRHJHNEJEdDBwb2VybDluUGFQK01IdGxXTFI5QzdjV0ZOTTdQV1Ftc1RlOU9jcGpFdXNPRFBxWUdoazRHQWUKVkZ3SVluQ1V3bXA2VzNRTnUwSE1FeS8rRWlCc1R1dmxja1FqUklDWDhXYkNJZ092Ti90emtZeWdMWDBaZE81Vgo5dEg0cTcyOFhIbmoxMWUxQXpBOGJwT0s4d0dTUE9CMG4rZjkxSVkrVjdBanlQWlNtUFphR1dOWmV3cFVid0dBCnJreTNwYmkvMitqd01sQkVOY3p3UEl6Z3kwTW5CMitFRXhHZ296QVZZMTRtdXZjRUNNQjVBNW5rTzJJRmQ5S3kKeHFORjUzK0NtM2wyQ2ZWT0NVYXNTSHhlVTlLYStwWFhOeDZONE9pN0xSZ0pRSVB3QnpVM3NMOUZWUXp5WDdpKwptWlVKZzcvcVZETWkzd3VCZTEwQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFOZVFyZVhzbHJwN1R5TmdaaWFYVk05c0JmV2gKWEJFd0VjYi94WnAxaWpGaTJ4NFgyLzgrSUtzZitwRU10MjQrNTZ0bEtvbmphTEVJUENUUnVvbmlmTHVwTHpXRApyMGM2U2dGUWdNaFdvdUtBcWtHNjQvTFkvMFlBMUQ2VVJuaVYxNTVYS281TWwxZ0llSnpXOUE4SFdBbjhkUGJ2CkpNVWFSaVZmNC95WXIzc3lrZEpiTU5yYzNWZUxtdGpFYWJLOEtJUVU4d1NTdVVXdjA0cUQzMC9tbjB6OUZwR08KL05iZDM5MExqaFpRUHNUcURGdVpWVjlZNnVXTGF5N0Y3UWh6eDZaTDlaTjFFbUN0VDIvMldmM0RIV3I1dzJFWApZdDRZbFozd2dqOXBRd0J0VVUrT0hxOHFIaXczMDVrL2haQmpmNEM4WVgzK3NsMWhoeFBXTDFlSktoZz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://192.168.203.212:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM4akNDQWRxZ0F3SUJBZ0lJUVQwMFh3ODZLVll3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TURBNE1ETXdPREUyTWpGYUZ3MHlNVEE0TURNd09ERTJNalphTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQW5xaGZiQXhGL1ByMjVPaloKTjZaaVR0U1dIR3g4OS91RFJ4VzJSTUJkWXZSOVZoWTYxZGo0Mkx0TjYxUDRuSytSeEtjTkJGYlA5SFUzNzlRWAp1WG9uUHVWckU0QVJFZUh0RG5FUDRzNzdsU0RXRVFYU2dWL09ndkZpSVcwTmM2YSt6cm54bHlNZzhMeFdkUmo5CjFiSkF5bTlzQzRRaERpOUdMTk96Qk9HM0xaeHh6THBJK1VzSDQxMzNlcDcvZ21hVUpWL05GQy91c0Y0R2hJaTcKS1Mvb0k4WEdJL0JqRGt0d2xnSXNFY291aWJDc1BCdXVIVEF1MThYclA0TjdORzU1RWNZY1k2NnNJd3l3UVBLYwpiTWxoSVR0dUdpSVFHbWpCaFRYUEFDajZ6ZlZ5MjN2bUhROEJiQUNVQjRKYm1sRnViVHFwY3pFU29JMU4wYTBFCmRESlEvUUlEQVFBQm95Y3dKVEFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFKTktKWmdZTWpPbGZ6T0ZacElCc2tKL3JsNlFXOUkvcURkRQo3TmN0dGtJNkxHUitndjBtc0VGUUtteDA1dTJYLzZEbXpmZGhwK1poUzUrY0hxME9rSEpPRUowT1c0VlpNY0FECmw4b0cwemNwcm5RRXRuTUJkRVU0dy9hLzlySHlNTENEbVpvK0FYdkoyTTdVMlhVQ0VsVXRvM0pkMjZxbVp1d3IKWWNJL0kwU240b1NhNlhLYVBISkdpTUxoVnZpYVFVMVNhT2JtTlpxSHcxRFBudDQ2NUpIZ2RUQTJhVVIwVjYwRgpiY1VhNUk0ck0veVY1UHV6QjJxaEVCM0htdnNsQmlsUk12SmZIelRVRkM2OXF6MElZUXI5ZGFLcnJVRzRKRit4CmxPOG00WEVjaVVsQmRveFRDUDRjZmpzOHNVWnRrSGR1RWJDNTNCM3AzQWJXNUo3dEN0cz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBbnFoZmJBeEYvUHIyNU9qWk42WmlUdFNXSEd4ODkvdURSeFcyUk1CZFl2UjlWaFk2CjFkajQyTHRONjFQNG5LK1J4S2NOQkZiUDlIVTM3OVFYdVhvblB1VnJFNEFSRWVIdERuRVA0czc3bFNEV0VRWFMKZ1YvT2d2RmlJVzBOYzZhK3pybnhseU1nOEx4V2RSajkxYkpBeW05c0M0UWhEaTlHTE5PekJPRzNMWnh4ekxwSQorVXNINDEzM2VwNy9nbWFVSlYvTkZDL3VzRjRHaElpN0tTL29JOFhHSS9CakRrdHdsZ0lzRWNvdWliQ3NQQnV1CkhUQXUxOFhyUDRON05HNTVFY1ljWTY2c0l3eXdRUEtjYk1saElUdHVHaUlRR21qQmhUWFBBQ2o2emZWeTIzdm0KSFE4QmJBQ1VCNEpibWxGdWJUcXBjekVTb0kxTjBhMEVkREpRL1FJREFRQUJBb0lCQUFSazExZXZzMVNCUXNzdQpJNjNsM3IwZUtCWWJidzZUR1p5alhrdmpJL0wwb0cvODU1NDZoeEhCaGpQcFBHNWljbUFHM0ZadGJRN3hIQjU1Ck9qcjV4aEo0MmhGTkw2dldIUEdVY2dNdkJrcW9BU1d4aXBYb3FGaDZCT1MyRjNSSGZ1dE12UU1aaHZVRDBrVWwKN3dtM0NSSlNLYVRjQU9wYXBzL2hBUWsya3hNaFFkeXFWSmdyWnlLUGxEWlhyZlpvRmtYSUV4czdzcTRUTjdhcAp6c1pYMU0xMXF0d2Y0R1djL05QUE8yVFZtR3ZBR2tZN1BTVVVWRnVvSzVxaVJyS0RYd040UVhKZmo4YVJBSnNrCnhSRGk3T1U1cTNGNllKdW1XcmFXUm16bWJkc1RBNDF3MUpEY00vZW1uUmE5UEJDTS9DQUlDNWh1aEJkcXl6UVcKajNHcDFVVUNnWUVBMDB0SmY3S0tZYU0wTjR2ZjZLNGNTK1c4bUJ4aTErVFRlZWhHVXpQSmFHMDVSZ3cxWE5ZNQpoVWQ2bmxnQml6ZW9UMllpT1p6eUN5SzZFTklxUVdjSkVGSnhPN0ZFTzJEVXYxZmJ1MEhpdW45TU9lbGlsTm1RCjBFY0xERFdaNzIzYmJQR3N6bENqb0p5dldUNHA5U00xUC9sTUpHTDRRQmVIUVdkRFlmaXMvYXNDZ1lFQXdEb0wKTWlVSzhDb1BIN2JvTzFrbU9EQVN5bThuZ3lhc0FkWk1FNWRnc2dhWTNnOHA5R0RIUGtWYVk5TlFralI2NDZFbQppWGpYWlV2QU54OWQxV2dkWnFkMDZEbUpsejhUaHRLY0k2WXRXSnpHNlVZU1E0azBzekRFN0dBV3hCTmorL25WCmxJY09sL3JSbm5UMEpNU0pyZlQzTFIrNHJaY2svY055VGZiQnMvY0NnWUJsSDBJRXlHanNBNVNwQk50Ylc0Q1YKWGxUZEk3QzJqSUFkZHVtNVJpNmRPTERSY21SVGt2OGlaeXdxL2dsM0hHTUQ1T2g2VkQrT3pzYm41LzFySWFtMwowd3o0T1lWak9adDRHODlBbG02eFBOMDVWaFhsRVI3Nlh4WE5lUlc3L2dLbTZCOEErcHprOERnSGFQWGhxVUVCCjVnam4zU25jV3FaVHlkejQxVy9OVXdLQmdRQzZFVFUzSTY0VHpOSjQ0MmFsMCtSdjdQQ3piS2ExaDZCbDR0WWUKL0orSGsyVXpSVUJhSWJlYTZpelZoZjF1bUVmL3dNUjV1elBjQkZnWncwM2p2WFVBSWNYQzU3YnNaUXowcXphaQpOeitiajUzbXZZSCtSM2h3bnh0dXBwQkMyWFlsdUs1cHA5V1RmU0Njbmg2WTNIbGNua3NJTGJWb3FtNFBDTG5ECkI4QjEvd0tCZ0d1QU5Jd280R3N4eXJtMnd1a29SR1B0dHFmbndzWENseGhJWjhZSDk2SUlsczJRN0FPenpHTW4Kd2JjUnExazFEVUVSd0g5QkZDMEpQMzhuZy9ocHgway9KZkVOQmUxRzNaNHVQU0J0Wi9laDdqcUUzNG1RU3NQdwoyT3ZPVm5Feml6NHJ6aHlYaDhyRDB6V1JtcFJ3cFZWWEdVR2NONitMdlowSG9UVGZMbkswCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
    token: eyJhbGciOiJSUzI1NiIsImtpZCI6ImJYcE0tZlpBYy1MalNZaHJEa0VXaHp1Mi1VMHUwQVBoLTZMeUk3TDgzTVkifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tOXdmd2siLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZGUwNDJkMDktNmIwOS00Yjc5LWE1MWItNmIxOGJhNWE3MDE4Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmRhc2hib2FyZC1hZG1pbiJ9.EzMMBVu-5tfrVy4OkaIvehrvNRW34UulYBQnLHoYAMQwRtkx390Ehty7KSRxwiBWazH2ggGOHuydYpDlhfrR4Dw6-4qwh0C8Rz5JturKTbZNPe3N4HMlXFJHnDu54Fhx1Vxs7wEFXVupIGIYDvRbLPxo8A4MBA0xWpO8PRElfTgPpY_x5jDwh0XS1kwsdSYBXqLjRIR9hVQ2qI6oJ599kIvRFC6_BCSEje_tlW0eaDcs5KzF6FuHQ767bACJ2iMWCEbRUbunCH5PmJv0swjZ8Yvh7z4HiMpnf-kCZ7ZluaOqOzLeYRyeQVYQ20xfGpPUUjIPD1Y4sNFmfjWY2NHltw
    ## mind the format and indentation of the token field

[root@k8s-master01 dashboard]# cp /root/.kube/config /root/k8s-dashboard.kubeconfig
[root@k8s-master01 dashboard]# sz k8s-dashboard.kubeconfig 

10. Log in with the kubeconfig file

On the login page, choose the Kubeconfig option, select the exported k8s-dashboard.kubeconfig file, and sign in.
