k8s — Deploying the Dashboard Web UI

1. Download the recommended.yaml configuration file

1) Install a dashboard version that matches your installed Kubernetes version.

My Kubernetes installation is v1.24.2, which pairs with dashboard v2.6.1.

2) Download URL: https://raw.githubusercontent.com/kubernetes/dashboard/v2.6.1/aio/deploy/recommended.yaml

3) You can fetch it directly with wget, or download it manually and upload it to the server.

2. Modify the configuration file

1) Change the namespace to kube-system (optional — just remember which namespace you chose; note that the transcripts later in this post use the default kubernetes-dashboard namespace):

[root@kub-k8s-master ~]# sed -i '/namespace/ s/kubernetes-dashboard/kube-system/g' recommended.yaml

2) Change the Service type to NodePort

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort                # change the type to NodePort (around line 40 of the file)
  ports:
    - port: 443
      nodePort: 31260           # add this nodePort line
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
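If you prefer not to edit the file by hand, the two changes above can be scripted. The helper below is a hypothetical sketch, assuming the stock v2.6.1 recommended.yaml (where only the dashboard Service uses targetPort 8443) and the fixed port 31260:

```shell
# Hypothetical helper: inject "nodePort: 31260" and "type: NodePort" into the
# dashboard Service. It keys on "targetPort: 8443", which only the dashboard
# Service uses in the stock manifest. YAML key order inside a mapping is free,
# so "type:" may legally appear after "ports:".
patch_nodeport() {
  awk '
    { print }
    /targetPort: 8443/ {
      print "      nodePort: 31260"   # extra field in the same ports entry
      print "  type: NodePort"        # spec-level sibling of ports/selector
    }
  '
}
# usage: patch_nodeport < recommended.yaml > recommended-nodeport.yaml
```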

3. Pull the images

1) Check which images need to be pulled

[root@k8s-master ~]# cat recommended.yaml |grep image
          image: kubernetesui/dashboard:v2.6.1
          imagePullPolicy: Always           # change to IfNotPresent (note the casing) if you pre-pull, so the image is not pulled again
          image: kubernetesui/metrics-scraper:v1.0.8
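The valid pull policies are Always, IfNotPresent, and Never (capital I — `ifNotPresent` would be rejected). Switching the policy can also be scripted; a minimal sketch, assuming the stock manifest:

```shell
# Hypothetical helper: flip every pull policy in the manifest so a pre-pulled
# local image is reused instead of being downloaded again on each container start.
set_pull_policy() {
  sed 's/imagePullPolicy: Always/imagePullPolicy: IfNotPresent/' "$1"
}
# usage: set_pull_policy recommended.yaml > recommended-local.yaml
```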

2) Pulling the images on each node in advance saves deployment time.
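Pre-pulling can be sketched as below. Using crictl is an assumption here (Kubernetes 1.24 removed dockershim, so nodes typically run containerd); on Docker-based nodes substitute `docker pull`:

```shell
# List the images referenced by the manifest, one per line, deduplicated.
list_images() {
  grep -E '^[[:space:]]*image:' "$1" | awk '{print $2}' | sort -u
}
# usage, run on every node (crictl assumed; use "docker pull" where applicable):
#   for img in $(list_images recommended.yaml); do crictl pull "$img"; done
```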

4. Apply the configuration file to create the dashboard

1) Apply recommended.yaml

[root@k8s-master ~]# kubectl apply -f recommended.yaml 
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

2) Check the pod status to confirm the dashboard deployed successfully

[root@k8s-master ~]# kubectl get pod -n kubernetes-dashboard
NAME                                         READY   STATUS    RESTARTS   AGE
dashboard-metrics-scraper-7c5f6dcf49-gsvxr   1/1     Running   0          33s
kubernetes-dashboard-78fdc69869-hqzzk        1/1     Running   0          33s

5. Access the dashboard

1) Check the Service; the NodePort to access is 31260

[root@k8s-master ~]# kubectl get service -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.107.136.107   <none>        8000/TCP        61m
kubernetes-dashboard        NodePort    10.102.22.176    <none>        443:31260/TCP   61m

2) In a browser, open https://<node-IP>:<NodePort> (any node's IP works for a NodePort Service; I used the master: https://192.168.22.139:31260). The dashboard serves a self-signed certificate, so accept the browser warning, then choose the token login option.

6. Write the dashboard-adminuser.yaml file and apply it

[root@k8s-master ~]# vim dashboard-adminuser.yaml

[root@k8s-master ~]# cat dashboard-adminuser.yaml 
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system

[root@k8s-master ~]# kubectl apply -f dashboard-adminuser.yaml 
Warning: resource serviceaccounts/admin-user is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
serviceaccount/admin-user configured
Warning: resource clusterrolebindings/admin-user is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
clusterrolebinding.rbac.authorization.k8s.io/admin-user configured

# These resources already existed from an earlier run, hence "configured" instead of "created".

7. Get a token and log in

[root@k8s-master ~]# kubectl create token admin-user  --namespace kube-system
eyJhbGciOiJSUzI1NiIsImtpZCI6IldYTHM4VXdhclB0MWozZFNyS2xMc2VWVjZGNXdtVXNjNTUtLU1hZVlXSncifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzIyNzU3MjU0LCJpYXQiOjE3MjI3NTM2NTQsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiMzQzM2VhZGYtNDkzMi00ZjE3LTk3ZmMtZDdkYWIyZThiZTVmIn19LCJuYmYiOjE3MjI3NTM2NTQsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTphZG1pbi11c2VyIn0.nRpshRe0t06fW1QCMtvIvRMhCOe9UTWW-GW7rFNgxCq1G0G15VAk7KRn5dha0CY1x1IUoZJwfay7HQwyP3W4LCUyjcJTwjUQnl7tgeHA4I_NgF_Qw21WFT77tJGCAa6Z6rY9IfrfdkVuA_Rbci0hdzKpstzZSb_mV6mTxkU9UrMzu0krGKTkewdCvq9cO9H9tIkljFzEJ7a-y4BONneHUn9qY5eEEcvFSfM7eT1dPP43oUrTLdEAgx5NZaafHT88J-bhyFqQSvJfO4vktjCGqyw5k87xxtbKnh45QrVO_j6cvhJHIq_MO8gWNQE0z-URfi2BFZC-C8lz3Akx3szW5A
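Tokens issued by `kubectl create token` are short-lived JWTs (roughly an hour by default). If login suddenly starts failing, the token has likely expired; the JWT payload can be decoded to check the `exp` claim. A sketch, assuming a bash-like shell with base64 available:

```shell
# Decode the middle (payload) segment of a JWT: it is base64url-encoded JSON.
token_payload() {
  p=$(printf '%s' "$1" | cut -d. -f2 | tr '_-' '/+')   # base64url -> base64
  while [ $(( ${#p} % 4 )) -ne 0 ]; do p="$p="; done   # restore '=' padding
  printf '%s' "$p" | base64 -d
}
# usage: token_payload "$(kubectl create token admin-user --namespace kube-system)"
```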
