Kubernetes (k8s) Environment Setup and Troubleshooting
Preface
For a cluster deployment I needed to build a Kubernetes (k8s) environment to run application services. This post records the setup process, for reference.
Main reference used during the setup:
https://blog.csdn.net/qq_34288630/article/details/118905853
Environment
VM software: VMware® 15.5.2 build-15785246 (for reference only; any version will do)
Linux: CentOS 7
Docker: 1.13.1 (Docker already existed in my VM, so I did not reinstall it)
1. Clone the Virtual Machines
To avoid disturbing the original VM, I cloned it.
Right-click the VM -> Manage -> Clone, click Next until "Create a full clone", then set the clone's name and storage path.
The cluster uses three VMs:
192.168.154.159 k8s-master
192.168.154.160 k8s-node1
192.168.154.161 k8s-node2
Set the hostname on each VM by logging in and running, for example:
hostnamectl set-hostname master
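A minimal sketch, using the hostnames chosen above (run the matching command on each VM); optionally append the name-to-IP mappings to /etc/hosts on all three machines so the hostnames resolve:
hostnamectl set-hostname k8s-master   # on 192.168.154.159
hostnamectl set-hostname k8s-node1    # on 192.168.154.160
hostnamectl set-hostname k8s-node2    # on 192.168.154.161
# Optional: make the names resolvable on every machine
cat <<EOF >> /etc/hosts
192.168.154.159 k8s-master
192.168.154.160 k8s-node1
192.168.154.161 k8s-node2
EOF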
2. Installation Steps Overview
Install docker-ce (all machines)
Set up the k8s prerequisites (all machines)
Install the k8s v1.16.0 master node
Install the k8s v1.16.0 worker nodes
Install flannel (master)
Once the VMs are ready,
make sure the master and the nodes can ping each other on these IPs; a quick check follows.
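For example, from the master (and repeat in the reverse direction from each node):
ping -c 3 192.168.154.160
ping -c 3 192.168.154.161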
Install Docker (on all machines)
Every machine that will run k8s needs Docker. Commands:
2.1 Install the tools Docker depends on
yum install -y yum-utils device-mapper-persistent-data lvm2
2.2 Configure the Aliyun Docker repository
yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
2.3 Install this specific docker-ce version
yum install -y docker-ce-18.09.9-3.el7
2.4 Start Docker
systemctl enable docker && systemctl start docker
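An optional sanity check: confirm the daemon is running and note the cgroup driver (kubelet's cgroup driver has to match it):
docker version
docker info | grep -i 'cgroup driver'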
Set up the k8s prerequisites (run on all machines)
My k8s machines have 2 CPUs and 4 GB of RAM (the minimum is 2 CPUs and 2 GB); just set this in the VM configuration. Then run the following preparation steps on every machine that will run k8s.
2.5 Disable the firewall
systemctl disable firewalld
systemctl stop firewalld
2.6 Disable SELinux
Disable SELinux temporarily:
setenforce 0
Disable it permanently by editing the SELinux config files:
sed -i 's/SELINUX=permissive/SELINUX=disabled/' /etc/sysconfig/selinux
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
2.7 Disable swap
swapoff -a
To disable it permanently, comment out the swap line in /etc/fstab:
sed -i 's/.*swap.*/#&/' /etc/fstab
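An optional check that swap is really off:
free -m          # the Swap line should show 0 total
cat /proc/swaps  # should list no swap devices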
2.8 Adjust kernel parameters
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
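If sysctl --system reports that the net.bridge.* keys do not exist, the br_netfilter kernel module is not loaded yet; load it and re-apply:
modprobe br_netfilter
sysctl --system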
3. Install the k8s Master Node
Before this step you must have finished the Docker installation and the prerequisite setup above. Then:
Install kubeadm, kubelet and kubectl.
The official k8s yum repo is hosted by Google and unreachable from mainland China, so we use the Aliyun mirror.
3.1 Configure the Aliyun k8s repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
3.2 Install kubeadm, kubectl and kubelet
yum install -y kubectl-1.16.0-0 kubeadm-1.16.0-0 kubelet-1.16.0-0
3.3 Start the kubelet service
systemctl enable kubelet && systemctl start kubelet
Initialize k8s. The command below pulls the Docker images k8s needs; since the Google registry is unreachable, it uses the Aliyun mirror (registry.aliyuncs.com/google_containers).
Another very important point: --apiserver-advertise-address must be an IP over which the master and the nodes can reach each other. Here it is the master node's IP (don't forget to change it to your own).
The command will pause for about 2 minutes at [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'; please be patient.
If the image pulls get stuck, you can download the image archives elsewhere and load them into Docker with docker load -i ***.tar.
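Alternatively, a sketch of pre-pulling the images before init so the init itself does not stall (same mirror and version as below):
kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.16.0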
kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.16.0 --apiserver-advertise-address 192.168.154.159 --pod-network-cidr=10.244.0.0/16 --token-ttl 0
192.168.154.159 is the address of the k8s-master VM (the master address).
10.244.0.0/16 is the pod network CIDR used for internal pod traffic; leave it as-is for now, the network plugin will allocate addresses from it later.
3.4 When the installation finishes, k8s prompts you to run the following commands; do so:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Run these on the master node as-is; no further changes are needed.
Save the node join command: when kubeadm init succeeds it prints the command that nodes use to join the cluster. You will run it on the nodes shortly, so keep it.
The command looks like this; copy it and run it later on each node to join it to the cluster:
kubeadm join 192.168.154.159:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:c2d6067d5c3b12118275958dee222226d09a89fc5fb559687dc989d2508d5a50
If you lose this command, you can regenerate it with:
kubeadm token create --print-join-command
That completes the master installation. Run kubectl get nodes to check; the master will show NotReady for now, which is fine at this stage.
4. Install the k8s Worker Nodes
Before this step you must have finished the Docker installation and the prerequisite setup above. Then:
Install kubeadm and kubelet.
4.1 Configure the Aliyun k8s repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
4.2 Install kubeadm and kubelet
yum install -y kubeadm-1.16.0-0 kubelet-1.16.0-0
4.3 Start the kubelet service
systemctl enable kubelet && systemctl start kubelet
Join the cluster. The join command differs for every cluster; if you no longer have it, log into the master and run kubeadm token create --print-join-command to get it, then execute it on the node:
kubeadm join 192.168.154.159:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:3707dca3ca933e7a59b9f54cff852025ab5a7d087f3c5454799d86bcb156c03a
After joining, run kubectl get nodes on the master to see the joined nodes.
At this point the nodes will still be in NotReady state.
5. Install flannel (all three machines)
With the steps above done the cluster is assembled, but the nodes are still NotReady: the master needs flanneld installed.
Download the flannel yml file:
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Apply it:
kubectl apply -f kube-flannel.yml
Wait for k8s to bring the flannel pods up; you can watch the progress with:
kubectl get pods -n kube-system
Once flannel shows Running,
check the cluster state:
sudo kubectl get nodes
At this point the master should be Ready, but the worker nodes are not up yet and still show NotReady.
Next, fix the flannel configuration on the worker VMs.
If nodes are still NotReady, check whether your kube-flannel.yml is correct; you can pull a fresh copy from the official repo,
or edit the file directly: vi kube-flannel.yml
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups: ['extensions']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['psp.flannel.unprivileged']
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - arm64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-arm64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-arm64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - arm
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-arm
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-arm
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-ppc64le
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - ppc64le
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-ppc64le
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-ppc64le
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-s390x
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: beta.kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: beta.kubernetes.io/arch
                    operator: In
                    values:
                      - s390x
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-s390x
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-s390x
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run/flannel
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
Paste the content above into kube-flannel.yml and reinstall flannel.
The reinstall procedure:
# Step 1: on the master node, delete flannel
kubectl delete -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
# Step 2: on each node, clean up the files left behind by the flannel network
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/
rm -rf /etc/cni/net.d/*
Note: after the cleanup above, restart kubelet:
systemctl daemon-reload
systemctl restart kubelet
If it still does not work,
check with ifconfig whether a cni interface exists, or inspect the pod logs.
If you see
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
and there is no cni interface, remove the CNI setting from kubelet's flags.
Fix: vi /var/lib/kubelet/kubeadm-flags.env
delete the --network-plugin=cni flag from it,
then restart kubelet.
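A sketch of the same edit plus the restart in one go:
sed -i 's/ *--network-plugin=cni//' /var/lib/kubelet/kubeadm-flags.env
systemctl daemon-reload && systemctl restart kubelet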
On each worker node machine,
copy the master's CNI configuration over (192.168.154.159 is the master node's IP):
scp -r 192.168.154.159:/etc/cni /etc/cni
then restart kubelet:
systemctl restart kubelet
Back on the master,
check the cluster state again:
sudo kubectl get nodes
All nodes should now be Ready; the k8s cluster setup is complete!
6. Install the Dashboard Add-on
Kubernetes Dashboard is a web UI for managing a k8s cluster; the code is hosted on GitHub at https://github.com/kubernetes/dashboard
Install it from the official manifest:
wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
By default no external port is exposed, so after downloading you need to edit the manifest yourself.
To make testing easier, we change the Service to NodePort type: in the Service section at the bottom of the YAML, add type: NodePort (a sketch follows).
The nodePort value is the externally accessible port:
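A sketch of the modified Service section (the rest of kubernetes-dashboard.yaml stays unchanged; 32443 is simply a port picked from the NodePort range 30000-32767):
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 32443
  selector:
    k8s-app: kubernetes-dashboard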
Since I deployed it on a worker node, I used that node's IP: https://192.168.154.161:32443/#/cluster?namespace=default
In fact, any cluster node's IP plus port 32443 will open the dashboard page.
If you hit:
Error from server (AlreadyExists): error when creating "kubernetes-dashboard.yaml": services "kubernetes-dashboard" already exists
fix it as follows. Run
kubectl get pod --all-namespaces
and you may indeed find an abnormal pod stuck in ImagePullBackOff: a dashboard deployment already exists. Delete it with
kubectl delete -f kubernetes-dashboard.yaml
and then simply deploy the dashboard again.
Now check the dashboard's external port:
kubectl get svc kubernetes-dashboard -n kube-system
The dashboard ships with a self-signed https certificate that browsers do not trust, so you have to click through the browser warning.
If the dashboard then shows "forbidden" errors, that is because the logged-in user has no access to the default namespace.
Authentication
The dashboard login supports Kubeconfig and token authentication; Kubeconfig itself relies on a token field, so generating a token is an indispensable step either way.
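A common shortcut (a sketch; the account name dashboard-admin is my own choice, not from the original post) is to create a service account bound to the cluster-admin role and log in with its token:
kubectl create serviceaccount dashboard-admin -n kube-system
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin
# find the generated secret, then print its token
kubectl get secret -n kube-system | grep dashboard-admin-token
kubectl describe secret <secret-name-from-above> -n kube-system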
Create a directory for the self-signed certificate:
$ mkdir kubernetes-dashboard-key && cd kubernetes-dashboard-key
Generate the key for the certificate request:
$ openssl genrsa -out dashboard.key 2048
# the /CN= value should be the IP used to reach the dashboard (here 192.168.154.160)
$ openssl req -new -out dashboard.csr -key dashboard.key -subj '/CN=192.168.154.160'
# generate the self-signed certificate
$ openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt
Delete the existing certificate secret:
$ kubectl delete secret kubernetes-dashboard-certs -n kube-system
# create a secret from the new certificate
$ kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt -n kube-system
# find the running pod
$ kubectl get pod -n kube-system
# delete the pod so that k8s pulls up a new one, which amounts to a restart
kubectl delete pod kubernetes-dashboard-7d6c598b5f-fvcg8 -n kube-system
Find the token secret:
kubectl get secret -n kube-system|grep kubernetes-dashboard-token
Print the token:
kubectl describe secret kubernetes-dashboard-token-v96jt -n kube-system
That is the whole k8s setup process. Questions and corrections are welcome in the comments, thank you!