Installing and Using the Dashboard

Contents

1. Download the dashboard YAML file

2. Replace the images in the dashboard YAML file

3. Modify the dashboard YAML file content

4. Create the dashboard


Normally, all operations in Kubernetes are carried out with the command-line tool kubectl.

To provide a richer user experience, the project also offers a web-based UI, Dashboard.

Dashboard is the official front-end component for Kubernetes and makes day-to-day operations easier.

Check the current Kubernetes cluster version:
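For example, the version can be read with standard kubectl commands (the VERSION column of get nodes shows each node's kubelet version):
root@k8s-deploy:~# kubectl version --short
root@k8s-deploy:~# kubectl get nodes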

1. Download the dashboard YAML file

1) Download from the official GitHub project

Each dashboard release supports a different range of Kubernetes versions, so choose the release that matches your cluster. For example, my cluster runs v1.24, and dashboard v2.6.1 supports 1.24.

2) Download the YAML file; the release page provides the link

https://raw.githubusercontent.com/kubernetes/dashboard/v2.6.1/aio/deploy/recommended.yaml
Create a directory to hold the dashboard files
root@k8s-deploy:~/yaml/20220724# mkdir dashboard-v2.6.1
root@k8s-deploy:~/yaml/20220724# cd dashboard-v2.6.1

Download the YAML file
root@k8s-deploy:~/yaml/20220724/dashboard-v2.6.1# wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.6.1/aio/deploy/recommended.yaml

Rename the YAML file
root@k8s-deploy:~/yaml/20220724/dashboard-v2.6.1# mv recommended.yaml dashboard-v2.6.1.yaml

2. Replace the images in the dashboard YAML file

1) Two images in the YAML file are pulled from the public internet; here I mirror both to a local Harbor registry.

2) Pull the images

Pull the two images referenced in the YAML file
root@k8s-deploy:~/yaml/20220724/dashboard-v2.6.1# docker pull kubernetesui/dashboard:v2.6.1
root@k8s-deploy:~/yaml/20220724/dashboard-v2.6.1# docker pull kubernetesui/metrics-scraper:v1.0.8

Re-tag them for the local Harbor registry
root@k8s-deploy:~/yaml/20220724/dashboard-v2.6.1# docker tag kubernetesui/dashboard:v2.6.1 harbor.magedu.net/baseimages/dashboard:v2.6.1
root@k8s-deploy:~/yaml/20220724/dashboard-v2.6.1# docker tag kubernetesui/metrics-scraper:v1.0.8 harbor.magedu.net/baseimages/metrics-scraper:v1.0.8


Push them to the Harbor server
root@k8s-deploy:~/yaml/20220724/dashboard-v2.6.1# docker push harbor.magedu.net/baseimages/dashboard:v2.6.1
root@k8s-deploy:~/yaml/20220724/dashboard-v2.6.1# docker push harbor.magedu.net/baseimages/metrics-scraper:v1.0.8

3) Edit the YAML file and replace the two images with the Harbor copies

root@k8s-deploy:~/yaml/20220724/dashboard-v2.6.1# vim dashboard-v2.6.1.yaml
      containers:
        - name: kubernetes-dashboard
          image: harbor.magedu.net/baseimages/dashboard:v2.6.1
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP

      containers:
        - name: dashboard-metrics-scraper
          image: harbor.magedu.net/baseimages/metrics-scraper:v1.0.8
          ports:
            - containerPort: 8000
              protocol: TCP
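If you would rather not edit by hand, the same swap can be scripted; a minimal sketch with sed, using the image names above (adjust the Harbor project path to your own registry):
root@k8s-deploy:~/yaml/20220724/dashboard-v2.6.1# sed -i \
  -e 's#kubernetesui/dashboard:v2.6.1#harbor.magedu.net/baseimages/dashboard:v2.6.1#' \
  -e 's#kubernetesui/metrics-scraper:v1.0.8#harbor.magedu.net/baseimages/metrics-scraper:v1.0.8#' \
  dashboard-v2.6.1.yaml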

3. Modify the dashboard YAML file content

By default the dashboard is reachable only from inside the cluster, so a port must be exposed for external clients to access it.

Add two lines to the Service resource.

The Service port is 443, which forwards requests on to the pod's port 8443; manually add the NodePort type and a nodePort of 30000.

The 30000 port above must fall within the NodePort range defined in the deployment's hosts inventory file (i.e. the API server's --service-node-port-range).
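A sketch of the Service section after the edit (field layout follows the upstream recommended.yaml; the two added lines are marked):

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort                # added: expose the Service outside the cluster
  ports:
    - port: 443                 # Service port
      targetPort: 8443          # forwarded to this port on the pod
      nodePort: 30000           # added: must be inside the cluster's NodePort range
  selector:
    k8s-app: kubernetes-dashboard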

4. Create the dashboard

1) Create the resources

root@k8s-deploy:~/yaml/20220724/dashboard-v2.6.1# kubectl apply -f dashboard-v2.6.1.yaml
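To confirm the rollout, the usual checks show the pods and the NodePort mapping:
root@k8s-deploy:~/yaml/20220724/dashboard-v2.6.1# kubectl get pods -n kubernetes-dashboard
root@k8s-deploy:~/yaml/20220724/dashboard-v2.6.1# kubectl get svc -n kubernetes-dashboard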

2) Visit port 30000 on any node; the URL must use the https protocol, because the dashboard serves its own certificate.
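A quick reachability check from the shell (here <node-ip> is a placeholder for any node's address; -k tells curl to accept the self-signed certificate):
root@k8s-deploy:~/yaml/20220724/dashboard-v2.6.1# curl -k https://<node-ip>:30000/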

The login page prompts for a token.

The dashboard is only a front end; it does not create any user for logging in, so the user has to be created manually.

3) Create a user for access

Create the resource file
root@k8s-deploy:~/yaml/20220724/dashboard-v2.6.1# vim admin-user.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user          # the user is created in the namespace below
  namespace: kubernetes-dashboard

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding        # binds a role to the user
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io        # API group of the role
  kind: ClusterRole        # grant admin-user the ClusterRole cluster-admin (cluster administrator)
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user        # this user is bound to the ClusterRole above, so it has cluster-admin rights
  namespace: kubernetes-dashboard

Apply the resources
root@k8s-deploy:~/yaml/20220724/dashboard-v2.6.1# kubectl apply -f admin-user.yaml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created

Create the Secret file (since Kubernetes 1.24 a ServiceAccount no longer gets a token Secret automatically, so it is created explicitly here)
root@k8s-deploy:~/yaml/20220724/dashboard-v2.6.1# vim admin-secret.yaml
apiVersion: v1
kind: Secret                             # the kind is Secret
type: kubernetes.io/service-account-token
metadata:
  name: dashboard-admin-user             # name of the Secret
  namespace: kubernetes-dashboard        # created in this namespace
  annotations:                           # annotations
    kubernetes.io/service-account.name: "admin-user"
    # the annotation says which ServiceAccount this Secret is bound to: admin-user.
    # In other words, a Secret named dashboard-admin-user is created for admin-user,
    # and it contains the token needed to log in.

Apply the resource
root@k8s-deploy:~/yaml/20220724/dashboard-v2.6.1# kubectl apply -f admin-secret.yaml
secret/dashboard-admin-user created

4) Retrieve the token

root@k8s-deploy:~/yaml/20220724/dashboard-v2.6.1# kubectl get secrets -A | grep admin
kubernetes-dashboard   dashboard-admin-user              kubernetes.io/service-account-token   3      17s


root@k8s-deploy:~/yaml/20220724/dashboard-v2.6.1# kubectl describe secrets dashboard-admin-user -n kubernetes-dashboard
Name:         dashboard-admin-user
Namespace:    kubernetes-dashboard
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 52419154-cc5f-4276-9b1a-2c76eb08f2a9

Type:  kubernetes.io/service-account-token

Data
====
namespace:  20 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6ImIxX29tV0s1MEZyT216ZDhiN0lGNGx3VUQxQ1ltQ3ZaWTZmRm1zQkJMZHMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdXNlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJhZG1pbi11c2VyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNTI0MTkxNTQtY2M1Zi00Mjc2LTliMWEtMmM3NmViMDhmMmE5Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmFkbWluLXVzZXIifQ.DX-sID2NzIjIzCMgMb9ugcC-7icaSiLOGZyWp1-PiY_W4oKaphuHhSspEH1M98agOrP-NIrzuKko_GCqxyEVmrJ6Mws7YAGE80_RQpxAyERMA4M0qQv8JPw8U3IJMmvw34xnyJLYBiSaLv4RVIG2IqPu635mEIZNmkZ7r5Cs0DhOxCxnK086QNj1zMqdu7p-NEmYGZedS1TAw7rVW8gBZbgvzViO8jMAZWYf2arN77RbNOPbLTyzCWKc8qwL2fcOpkwSiGCxKzpFV4cnwb4n8RCgtxgi5B3q5OwOyQC_SfCFOzr_RHqq65voZ0SS2buMk9SwC-q_k-dMlxe1dqYhjw
ca.crt:     1302 bytes

Take the token out and you can log in to the page.
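To grab just the token value instead of copying it out of the describe output, either of these works (the second mints a fresh short-lived token and is available from kubectl 1.24 on):
root@k8s-deploy:~/yaml/20220724/dashboard-v2.6.1# kubectl -n kubernetes-dashboard get secret dashboard-admin-user -o jsonpath='{.data.token}' | base64 -d
root@k8s-deploy:~/yaml/20220724/dashboard-v2.6.1# kubectl -n kubernetes-dashboard create token admin-user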

A note on an error I ran into:

After logging in, the page showed no resources at all.

Checking the pod status and logs turned up errors. In the scraper log below, the repeated failure to get nodes.metrics.k8s.io means the Metrics API (metrics-server) is not available in the cluster; the TLS handshake errors in the dashboard log further down are just clients rejecting the dashboard's self-signed certificate.

root@k8s-deploy:~/yaml/20220724/dashboard-v2.6.1# kubectl logs -f dashboard-metrics-scraper-67969bbbb6-hprg5 -n kubernetes-dashboard
W1121 06:50:20.910976       1 client_config.go:617] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
{"level":"info","msg":"Kubernetes host: https://10.100.0.1:443","time":"2022-11-21T06:50:20Z"}
{"level":"info","msg":"Namespace(s): []","time":"2022-11-21T06:50:20Z"}
{"level":"error","msg":"Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)","time":"2022-11-21T06:57:20Z"}
10.200.137.64 - - [21/Nov/2022:06:57:21 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/v2.6.1"
172.31.7.112 - - [21/Nov/2022:06:57:29 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.24"
172.31.7.112 - - [21/Nov/2022:06:57:39 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.24"
172.31.7.112 - - [21/Nov/2022:06:57:49 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.24"
10.200.137.64 - - [21/Nov/2022:06:57:51 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/v2.6.1"
172.31.7.112 - - [21/Nov/2022:06:57:59 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.24"
172.31.7.112 - - [21/Nov/2022:06:58:09 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.24"
172.31.7.112 - - [21/Nov/2022:06:58:19 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.24"
{"level":"error","msg":"Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)","time":"2022-11-21T06:58:20Z"}
10.200.137.64 - - [21/Nov/2022:06:58:21 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/v2.6.1"
172.31.7.112 - - [21/Nov/2022:06:58:29 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.24"
172.31.7.112 - - [21/Nov/2022:06:58:39 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.24"
172.31.7.112 - - [21/Nov/2022:06:58:49 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.24"
10.200.137.64 - - [21/Nov/2022:06:58:51 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/v2.6.1"
172.31.7.112 - - [21/Nov/2022:06:58:59 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.24"
172.31.7.112 - - [21/Nov/2022:06:59:09 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.24"
172.31.7.112 - - [21/Nov/2022:06:59:19 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.24"

root@k8s-deploy:~/yaml/20220724/dashboard-v2.6.1# kubectl logs -f kubernetes-dashboard-557cd5b7d6-qcmc9 -n kubernetes-dashboard
2022/11/21 06:50:20 Starting overwatch
2022/11/21 06:50:20 Using namespace: kubernetes-dashboard
2022/11/21 06:50:20 Using in-cluster config to connect to apiserver
2022/11/21 06:50:20 Using secret token for csrf signing
2022/11/21 06:50:20 Initializing csrf token from kubernetes-dashboard-csrf secret
2022/11/21 06:50:20 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
2022/11/21 06:50:20 Successful initial request to the apiserver, version: v1.24.3
2022/11/21 06:50:20 Generating JWE encryption key
2022/11/21 06:50:20 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2022/11/21 06:50:20 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2022/11/21 06:50:21 Initializing JWE encryption key from synchronized object
2022/11/21 06:50:21 Creating in-cluster Sidecar client
2022/11/21 06:50:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2022/11/21 06:50:21 Auto-generating certificates
2022/11/21 06:50:21 Successfully created certificates
2022/11/21 06:50:21 Serving securely on HTTPS port: 8443
2022/11/21 06:50:51 Successful request to sidecar
2022/11/21 06:54:40 http: TLS handshake error from 10.200.166.128:30606: remote error: tls: bad certificate
2022/11/21 06:54:40 http: TLS handshake error from 10.200.166.128:28961: remote error: tls: bad certificate
2022/11/21 06:54:40 http: TLS handshake error from 10.200.166.128:9274: remote error: tls: bad certificate
2022/11/21 06:54:40 http: TLS handshake error from 10.200.166.128:29868: remote error: tls: bad certificate
