Table of Contents
1. Plan the cluster's network topology, configure the firewall and bastion host, and install and deploy Ansible on the bastion host.
2. Use kubeadm to deploy a single-master cluster of 1 master and 2 nodes. Use a Deployment to run nginx pods in the cluster as the web application.
3. Install metrics-server and use HPA to autoscale pods: scale horizontally when CPU usage reaches 50%, running a minimum of 20 and a maximum of 40 pods.
4. Deploy an NFS server for cluster-wide data consistency; use PV and PVC to mount NFS into the nginx containers so every container serves identical web content.
5. Build a CI/CD environment: install GitLab, Jenkins, and Harbor to implement pipelines for code release, image building, data backup, and more.
6. Monitor the k8s cluster with Prometheus, and configure Prometheus as a data source in Grafana to visualize the metrics and get a clearer view of the web cluster's performance.
7. Deploy an Ingress to load-balance the web service, and use probes (liveness/readiness/startup) with httpGet and exec handlers to check whether pods can serve traffic, improving pod reliability.
8. Install the k8s Dashboard to get a full view of cluster resources.
Project Name
A monitorable, high-availability web cluster based on k8s + docker + Prometheus
Project Environment
CentOS 7.9, Kubernetes v1.20.6, Docker 23.0.3, Ansible 2.9.27, Prometheus 2.42, Grafana 9.1.2, etc.
Project Description
Simulates a production environment: a Docker cluster managed by k8s serves the web tier, Ansible provides automated operations, NFS keeps the data consistent across nodes, Prometheus provides monitoring with Grafana visualizing the metrics, and Jenkins powers the CI/CD environment, together forming a high-concurrency, high-availability web cluster.
Project Steps
1. Plan the cluster's network topology, configure the firewall and bastion host, and install and deploy Ansible on the bastion host.
1.1. Plan the cluster's network topology
1.2. Preliminary preparation
Disable the firewall and SELinux on all servers:
# disable the firewall
systemctl disable firewalld
# disable SELinux
sed -i '/^SELINUX=/ s/enforcing/disabled/' /etc/selinux/config
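systemctl disable and the config edit only take effect at the next boot; to apply both changes immediately in the current session as well:
# stop the firewall now
systemctl stop firewalld
# switch SELinux to permissive mode right away
setenforce 0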
1.3. Configure the firewall
Enable routing on the firewall server and add SNAT/DNAT rules.
# script to configure the SNAT rule, letting the intranet reach the internet
#!/bin/bash
# flush the firewall rules in the filter and nat tables
iptables -F
iptables -t nat -F
# SNAT rule
iptables -t nat -A POSTROUTING -s 192.168.81.0/24 -o ens33 -j SNAT --to-source 192.168.43.128
# enable IP forwarding
echo 1 > /proc/sys/net/ipv4/ip_forward
# On top of SNAT, edit the script to add a DNAT rule so the web service can be reached from outside
#!/bin/bash
# flush the firewall rules in the filter and nat tables
iptables -F
iptables -t nat -F
# SNAT rule
iptables -t nat -A POSTROUTING -s 192.168.81.0/24 -o ens33 -j SNAT --to-source 192.168.43.128
# enable IP forwarding
echo 1 > /proc/sys/net/ipv4/ip_forward
# DNAT rule for the web service; --to-destination needs a concrete backend IP
# (a CIDR such as 192.168.81.0/24 is not accepted) -- the load balancer lb1 is assumed here:
iptables -t nat -A PREROUTING -i ens33 -d 192.168.43.128 -p tcp --dport 80 -j DNAT --to-destination 192.168.81.54
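To confirm the rules landed as expected:
# list the nat table with packet counters and rule numbers
iptables -t nat -L -n -v --line-numbers
# confirm IP forwarding is on (should print 1)
cat /proc/sys/net/ipv4/ip_forward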
1.4. Install and deploy Ansible on the bastion host
#1. Generate an SSH key pair on the Ansible host and push it to the managed hosts for passwordless login
[root@localhost ~]# ssh-keygen
#2. Establish the passwordless channels
[root@localhost ~]# ssh-copy-id -i id_rsa.pub root@192.168.81.54
[root@localhost .ssh]# ssh-copy-id -i id_rsa.pub root@192.168.81.98
[root@localhost .ssh]# ssh-copy-id -i id_rsa.pub root@192.168.81.99
[root@localhost .ssh]# ssh-copy-id -i id_rsa.pub root@192.168.81.97
#3. Verify passwordless login works (log in remotely)
[root@ansible ~]# ssh 'root@192.168.81.54'
Last login: Fri Apr 7 10:28:46 2023
[root@lb1 ~]# exit
logout
Connection to 192.168.81.54 closed.
[root@ansible ~]# ssh 'root@192.168.81.99'
Last login: Fri Apr 7 10:28:57 2023
[root@lb2 ~]# exit
logout
Connection to 192.168.81.99 closed.
[root@ansible ~]# ssh 'root@192.168.81.98'
Last login: Fri Apr 7 10:29:22 2023
[root@web2 ~]# exit
logout
Connection to 192.168.81.98 closed.
[root@ansible ~]# ssh 'root@192.168.81.97'
Last login: Fri Apr 7 10:29:53 2023 from 192.168.81.97
#4. Install Ansible
[root@ansible ~]# yum install epel-release -y
[root@ansible ~]# yum install ansible -y
#5. Add the managed hosts to the inventory
[root@ansible ~]# cd /etc/ansible
[root@ansible ansible]# ls
ansible.cfg hosts roles
[root@ansible ansible]# cat hosts
[webservers]
192.168.81.54
192.168.81.99
192.168.81.98
192.168.81.97
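A quick ad-hoc test confirms Ansible can reach every host in the inventory:
[root@ansible ansible]# ansible webservers -m ping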
2. Use kubeadm to deploy a single-master cluster of 1 master and 2 nodes. Use a Deployment to run nginx pods in the cluster as the web application.
2.1. Build the k8s cluster
The detailed k8s cluster setup was covered in an earlier article and is not repeated here:
K8S安装部署的详细步骤与注意事项!_本地安装k8s_钰儿yu1228的博客-CSDN博客
2.2. Start nginx in the cluster
Write a YAML file (nginx-3.yaml below) that uses a Deployment controller to start an nginx pod with 3 replicas, then create a Service, nginx-service-3, to publish the nginx pods (the pods themselves are started by the Deployment controller):
[root@k8smaster service]# cat nginx-3.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-3
  labels:
    app: nginx-3
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-3
  template:
    metadata:
      labels:
        app: nginx-3
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
          name: http-web-svc
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-3
spec:
  type: NodePort
  selector:
    app: nginx-3
  ports:
  - name: name-of-service-port
    protocol: TCP
    port: 80
    targetPort: http-web-svc
    nodePort: 30008
Verification:
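The screenshot is omitted here; a minimal command-line check instead (the node IP is an assumption based on the inventory above):
[root@k8smaster service]# kubectl apply -f nginx-3.yaml
[root@k8smaster service]# kubectl get pod -l app=nginx-3
[root@k8smaster service]# curl 192.168.81.97:30008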
3. Install metrics-server and use HPA to autoscale pods: scale horizontally when CPU usage reaches 50%, running a minimum of 20 and a maximum of 40 pods.
3.1. Install metrics-server and run php-apache
[root@k8smaster hpa]# wget https://k8s.io/examples/application/php-apache.yaml
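metrics-server itself can be installed from the upstream manifest; a sketch assuming internet access from the master (offline clusters preload the image instead), with the commonly needed --kubelet-insecure-tls flag added for self-signed kubelet certificates:
[root@k8smaster hpa]# wget https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
# edit components.yaml: append --kubelet-insecure-tls to the metrics-server container args
[root@k8smaster hpa]# kubectl apply -f components.yaml
# verify that metrics are flowing
[root@k8smaster hpa]# kubectl top node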
3.2. Upload the hpa.example.tar image to both node machines
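After copying the archive, each node imports it into its local image store; a sketch (the archive name is as given above; the tag is assumed to match k8s.gcr.io/hpa-example):
[root@k8snode-1 ~]# docker load -i hpa.example.tar
[root@k8snode-2 ~]# docker load -i hpa.example.tar
[root@k8snode-1 ~]# docker images | grep hpa-example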
3.3. Modify php-apache.yaml
Add the imagePullPolicy parameter so the hpa image is not pulled again:
    spec:
      containers:
      - name: php-apache
        image: k8s.gcr.io/hpa-example
        imagePullPolicy: IfNotPresent   # added: skip pulling when the image already exists locally
3.4. Start the pod
[root@k8smaster hpa]# kubectl apply -f php-apache.yaml
[root@k8smaster hpa]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-559d658b74-2247s 1/1 Running 0 86m 10.244.62.166 k8snode-1 <none> <none>
nginx-deployment-559d658b74-cv6zr 1/1 Running 0 106m 10.244.62.165 k8snode-1 <none> <none>
nginx-deployment-559d658b74-sr2jv 1/1 Running 0 107m 10.244.163.156 k8snode-2 <none> <none>
php-apache-567d9f79d-jkqwb 1/1 Running 0 5s 10.244.163.160 k8snode-2 <none> <none>
3.5. Add horizontal autoscaling: create the HPA
kubectl autoscale deployment php-apache --cpu-percent=50 --min=20 --max=40
[root@k8smaster hpa]# kubectl get hpa
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
php-apache Deployment/php-apache 0%/10% 1 10 1 16s
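The same HPA can also be declared as a manifest instead of the imperative command; a minimal equivalent sketch matching the 50%/20/40 targets above:
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 20
  maxReplicas: 40
  targetCPUUtilizationPercentage: 50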
3.6. Verify the effect
Generate load:
A container in a client pod runs an infinite loop, sending queries to the php-apache service:
kubectl run -i --tty load-generator --rm --image=busybox:1.28 --restart=Never -- /bin/sh -c "while sleep 0.01; do wget -q -O- http://php-apache; done"
For easier observation, the HPA's maximum was set to 10 pods for this demonstration (which is why the kubectl get hpa output above shows 0%/10%, min 1, max 10):
The effect after stopping the load with Ctrl+C (screenshot omitted):
Watch the changes live:
kubectl get hpa php-apache --watch
4. Deploy an NFS server for cluster-wide data consistency; use PV and PVC to mount NFS into the nginx containers so every container serves identical web content.
4.1. Set up the NFS server
Set up the NFS server, and install the NFS utilities on every node in the k8s cluster, since creating volumes on the node machines requires NFS support:
[root@k8smaster ~]# yum install nfs-utils -y
Configure the shared directory:
[root@nfs ~]# cat /etc/exports
/web 192.168.81.0/24(ro,all_squash,sync)
[root@nfs /]# cd web
[root@nfs web]# ls
index.html sanchuang
[root@nfs web]# cat index.html
Welcome!!!
Good luck to you!!
Have fun!
[root@nfs web]# exportfs -rv
exporting 192.168.81.0/24:/web
Test that the k8s cluster can mount the NFS share:
[root@k8smaster ~]# mkdir /pv_pvc
[root@k8smaster ~]# mount 192.168.81.201:/web /pv_pvc
[root@k8smaster ~]# cd /pv_pvc/
[root@k8smaster pv_pvc]# ls
index.html sanchuang
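After the test, release the temporary mount again so it does not shadow later PV mounts (a small housekeeping step, assuming the test mount above):
[root@k8smaster pv_pvc]# cd / && umount /pv_pvc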
4.2. Create a PV backed by the NFS share (run on the master)
[root@k8smaster pv]# pwd
/root/pv
[root@k8smaster pv]# cat nfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: sc-nginx-pv
  labels:
    type: sc-nginx-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  nfs:
    path: "/web"
    server: 192.168.81.201
    readOnly: false
[root@k8smaster pv]# kubectl apply -f nfs-pv.yaml
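A quick check that the PV registered (its STATUS should read Available until a claim binds it):
[root@k8smaster pv]# kubectl get pv sc-nginx-pv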
4.3. Create a PVC that uses the PV
[root@k8smaster pv]# cat nfs-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: sc-nginx-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs   # use the nfs-class PV
[root@k8smaster pv]# kubectl apply -f nfs-pvc.yaml
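Confirm the claim bound to the PV (STATUS should read Bound, VOLUME should read sc-nginx-pv):
[root@k8smaster pv]# kubectl get pvc sc-nginx-pvc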
4.4. Point the pod at the PVC
[root@k8smaster pv]# cat nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
      - name: sc-pv-storage-nfs
        persistentVolumeClaim:
          claimName: sc-nginx-pvc
      containers:
      - name: sc-pv-container-nfs
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
          name: "http-server"
        volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: sc-pv-storage-nfs
[root@k8smaster pv]# kubectl apply -f nginx-deployment.yaml
Verification:
[root@k8smaster pv]# curl 10.244.62.184
Welcome!!!
Good luck to you!!
Have fun!
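Since every replica mounts the same NFS export, an edit on the NFS server shows up in all pods at once; a quick sketch (the second pod IP is a placeholder for any other replica's IP from kubectl get pod -o wide):
[root@nfs web]# echo 'updated content' >> /web/index.html
[root@k8smaster pv]# curl 10.244.62.184
[root@k8smaster pv]# curl <another-pod-ip>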
5. Build a CI/CD environment: install GitLab, Jenkins, and Harbor to implement pipelines for code release, image building, data backup, and more.
5.1. Set up the private Harbor registry
5.1.1. Environment preparation:
1. Install docker and docker compose in advance
2. Download the Harbor release
3. Upload the archive to the Linux server
[root@docker Dockerfile]# mkdir harbor
[root@docker Dockerfile]# cd harbor/
[root@docker harbor]# ls
harbor-offline-installer-v2.1.0.tgz
# unpack the archive
[root@docker harbor]# tar xf harbor-offline-installer-v2.1.0.tgz
[root@docker harbor]# cp harbor.yml.tmpl harbor.yml
5.1.2. Harbor installation
Edit the configuration file:
[root@docker harbor]# vim harbor.yml
hostname: 192.168.81.140   # changed
# http related config
http:
  # port for http, default is 80. If https enabled, this port will redirect to https port
  port: 8090   # changed
# if there is no certificate, the https-related settings can be commented out in this file
Upload the docker-compose binary and make it executable:
[root@docker harbor]# chmod +x docker-compose
# placing docker-compose in any directory on PATH is sufficient
#[root@docker harbor]# cp docker-compose /usr/bin/
Run the installer:
[root@docker harbor]# ./install.sh
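Once the installer finishes, all Harbor containers should be up; a quick check from the harbor directory:
[root@docker harbor]# docker-compose ps
# every harbor container should report Up (healthy)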
Access:
Open the site from a Windows machine and configure Harbor.
Default login credentials:
admin Harbor12345
Create a project named sanchuang in Harbor
Create your own username and password
5.1.3. Pushing and pulling images
Use this registry from another Docker host:
[root@docker ~]# cat /etc/docker/daemon.json
{
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "insecure-registries" : ["192.168.81.140:8090"]
}
# restart docker
[root@docker ~]# systemctl daemon-reload
[root@docker ~]# systemctl restart docker
Retag the image:
[root@docker ~]# docker tag nginx:latest 192.168.81.140:8090/sanchuang/nginx:latest
Log in to the private registry:
[root@docker harbor]# docker login 192.168.81.140:8090
Username: lay
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
Push the image to the private registry:
[root@docker harbor]# docker push 192.168.81.140:8090/sanchuang/nginx:latest
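From any other host whose daemon.json trusts this registry, the image can now be pulled back, closing the loop:
[root@docker ~]# docker pull 192.168.81.140:8090/sanchuang/nginx:latest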
5.2. Set up GitLab
Follow the official walkthrough: https://gitlab.cn/install/
Configure the domain name as prompted:
[root@gitlab ~]# vim /etc/gitlab/gitlab.rb
# start JiHu GitLab
[root@gitlab ~]# sudo gitlab-ctl reconfigure
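The key setting edited in gitlab.rb is external_url; a sketch (the hostname is a hypothetical placeholder, replace it with this host's address):
# /etc/gitlab/gitlab.rb
external_url 'http://gitlab.example.com'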
Fetch the initial root password:
[root@gitlab ~]# cat /etc/gitlab/initial_root_password
# WARNING: This value is valid only in the following conditions
# 1. If provided manually (either via `GITLAB_ROOT_PASSWORD` environment variable or via `gitlab_rails['initial_root_password']` setting in `gitlab.rb`, it was provided before database was seeded for the first time (usually, the first reconfigure run).
# 2. Password hasn't been changed manually, either via UI or via command line.
#
# If the password shown here doesn't work, you must reset the admin password following https://docs.gitlab.com/ee/security/reset_user_password.html#reset-your-root-password.
Password: 87A2+QNqMiSSneececlY37DP0YRPAN31dcFpfAHD3Lk=
Access the page:
5.3. Install Jenkins
The Jenkins setup was covered step by step in an earlier article and is not repeated here:
在docker和k8s中安装Jenkins,详细教程!!!_钰儿yu1228的博客-CSDN博客
6. Monitor the k8s cluster with Prometheus, and configure Prometheus as a data source in Grafana to visualize the metrics and get a clearer view of the web cluster's performance.
6.1. Deploy Prometheus with a Docker container
使用Docker容器部署Prometheus_docker部署prometheus_钰儿yu1228的博客-CSDN博客
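For reference, a minimal scrape-config sketch; it assumes node_exporter has been started on the two k8s nodes at the default port 9100 (the node IPs, taken from the inventory above, are an assumption):
# prometheus.yml (fragment)
scrape_configs:
  - job_name: 'k8s-nodes'
    static_configs:
      - targets: ['192.168.81.97:9100', '192.168.81.98:9100']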
6.2. Configure Grafana
# start the grafana service
[root@localhost prometheus]# service grafana-server start
# enable grafana at boot
[root@localhost prometheus]# systemctl enable grafana-server
Configure the Prometheus data source in Grafana and import the relevant dashboard templates for visualization (screenshot omitted).
7. Deploy an Ingress to load-balance the web service, and use probes (liveness/readiness/startup) with httpGet and exec handlers to check whether pods can serve traffic, improving pod reliability
7.1. Define the probes
[root@k8smaster probe]# wget https://k8s.io/examples/pods/probe/exec-liveness.yaml --no-check-certificate
[root@k8smaster probe]# cat exec-liveness.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: liveness
  name: liveness-exec
spec:
  containers:
  - name: liveness
    image: busybox   # changed
    args:
    - /bin/sh
    - -c
    - touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600
    livenessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 5
[root@k8smaster probe]# kubectl apply -f exec-liveness.yaml
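This covers the exec handler; for the httpGet handler named in the heading, a sketch of liveness and readiness probes that could be added to the nginx container spec (path and port chosen to match the nginx pods, not taken from the original):
livenessProbe:
  httpGet:
    path: /
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 5
readinessProbe:
  httpGet:
    path: /
    port: 80
  initialDelaySeconds: 3
  periodSeconds: 3
To watch the exec probe fail and the container restart:
[root@k8smaster probe]# kubectl get pod liveness-exec -w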
7.2. Deploy the Ingress for load balancing
7.2.1. Install and deploy the ingress controller
[root@k8smaster ~]# cd ingress/
[root@k8smaster ingress]# ls
[root@k8smaster ingress]# wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.1/deploy/static/provider/cloud/deploy.yaml
Try to pull the required images (the pull fails here, so offline archives are used below):
[root@k8smaster ingress]# docker pull registry.k8s.io/ingress-nginx/controller:v1.8.1@sha256:e5c4824e7375fcf2a393e1c03c293b69759af37a9ca6abdb91b13d78a93da8bd
Error response from daemon: Get https://us-west2-docker.pkg.dev/v2/k8s-artifacts-prod/images/ingress-nginx/controller/manifests/sha256:e5c4824e7375fcf2a393e1c03c293b69759af37a9ca6abdb91b13d78a93da8bd: dial tcp 108.177.125.82:443: connect: connection refused
Copy the image archives to all node servers with scp:
[root@k8smaster ingress]# scp ingress-nginx-controllerv1.1.0.tar.gz k8snode-1:/root
ingress-nginx-controllerv1.1.0.tar.gz 100% 276MB 45.9MB/s 00:06
[root@k8smaster ingress]# scp ingress-nginx-controllerv1.1.0.tar.gz k8snode-2:/root
[root@k8smaster ingress]# scp kube-webhook-certgen-v1.1.0.tar.gz k8snode-2:/root
kube-webhook-certgen-v1.1.0.tar.gz 100% 47MB 48.8MB/s 00:00
[root@k8smaster ingress]# scp kube-webhook-certgen-v1.1.0.tar.gz k8snode-1:/root
Load the images on every node (both archives on each node):
[root@k8snode-1 ~]# docker load -i ingress-nginx-controllerv1.1.0.tar.gz
[root@k8snode-2 ~]# docker load -i kube-webhook-certgen-v1.1.0.tar.gz
Apply the YAML file to create the ingress controller (the deploy.yaml downloaded above, saved locally as ingress-deploy.yaml):
kubectl apply -f ingress-deploy.yaml
7.2.2. Check that the pods started
List the namespaces:
[root@k8smaster ingress]# kubectl get ns
NAME STATUS AGE
default Active 99d
ingress-nginx Active 107s
kube-node-lease Active 99d
kube-public Active 99d
kube-system Active 99d
Check that the pods are running:
[root@k8smaster ingress]# kubectl get pod -n ingress-nginx
NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create-c6h2b 0/1 Completed 0 2m17s
ingress-nginx-admission-patch-8m4bp 0/1 Completed 1 2m17s
ingress-nginx-controller-6c8ffbbfcf-7qksk 1/1 Running 0 2m17s
ingress-nginx-controller-6c8ffbbfcf-swfjc 1/1 Running 0 2m17s
7.3. Create the Ingress resource
7.3.1. Start the Service to expose nginx
[root@k8smaster ingress]# kubectl apply -f nginx-3.yaml
deployment.apps/nginx-deployment-3 created
service/nginx-service-3 configured
[root@k8smaster ingress]# cat nginx-3.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-3
  labels:
    app: nginx-3
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-3
  template:
    metadata:
      labels:
        app: nginx-3
    spec:
      containers:
      - name: nginx-deployment-3
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
          name: http-web-svc
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service-3
spec:
  type: NodePort
  selector:
    app: nginx-3
  ports:
  - name: name-of-service-port
    protocol: TCP
    port: 80
    targetPort: 80
Check that the service is published by accessing its exposed IP:
[root@k8smaster service]# kubectl describe svc nginx-service-3
Name: nginx-service-3
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=nginx-3
Type: NodePort
IP Families: <none>
IP: 10.104.34.53
IPs: 10.104.34.53
Port: name-of-service-port 80/TCP
TargetPort: http-web-svc/TCP
NodePort: name-of-service-port 30080/TCP
Endpoints: 10.244.163.136:80,10.244.163.137:80,10.244.62.147:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
[root@k8smaster service]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 100d
mysql-service NodePort 10.107.198.6 <none> 3306:30006/TCP 85d
nginx ClusterIP None <none> 80/TCP 85d
nginx-service ClusterIP 10.98.148.225 <none> 80/TCP 86d
nginx-service-2 NodePort 10.100.239.164 <none> 80:30007/TCP 86d
nginx-service-3 NodePort 10.104.34.53 <none> 80:30080/TCP 85d
php-apache ClusterIP 10.103.201.130 <none> 80/TCP 96d
[root@k8smaster service]# curl 10.104.34.53
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
7.3.2. Write the YAML file and start the Ingress
[root@k8smaster ingress]# cat sc-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sc-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx-example
  rules:
  - host: www.li.com
    http:
      paths:
      - path: /    # path-based load balancing
        pathType: Prefix
        backend:
          service:
            name: nginx-service-3   # the Service created above
            port:
              number: 80   # the Service port
  - host: www.wei.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-service-4   # assumes a second Service created the same way
            port:
              number: 80
7.3.3. Check that creation succeeded
[root@k8smaster ingress]# kubectl apply -f sc-ingress.yaml
ingress.networking.k8s.io/sc-ingress created
[root@k8smaster ingress]# kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
sc-ingress nginx-example www.li.com,www.wei.com 192.168.81.97,192.168.81.98 80 10s
7.3.4. Verify
Get the NodePort that the ingress controller's Service exposes on the hosts; accessing a host IP on that port verifies that the ingress controller load-balances:
[root@k8smaster ingress]# kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller NodePort 10.108.167.155 <none> 80:30065/TCP,443:31369/TCP 24h
ingress-nginx-controller-admission ClusterIP 10.97.20.100 <none> 443/TCP 24h
Access by domain name from another host or a Windows machine:
vim /etc/hosts
# add these entries
192.168.81.97 www.li.com
192.168.81.98 www.wei.com
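With the hosts entries in place, test by domain name; the NodePort 30065 comes from the kubectl get svc output above:
curl http://www.li.com:30065
curl http://www.wei.com:30065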
8. Install the k8s Dashboard to get a full view of cluster resources
8.1. Preparation
Download the dashboard manifest and edit it:
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
# change only the Service section, leave everything else as-is
vim recommended.yaml
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort   # set the type
  ports:
  - port: 443
    targetPort: 8443
    nodePort: 30088   # set the host port
  selector:
    k8s-app: kubernetes-dashboard
8.2. Start the dashboard
8.2.1. Apply the manifest
[root@k8smaster dashboard]# kubectl apply -f recommended.yaml
8.2.2. Check whether the dashboard started
Check that the pods are running:
[root@k8smaster dashboard]# kubectl get pod --all-namespaces|grep dashboard
kubernetes-dashboard dashboard-metrics-scraper-66dd8bdd86-xdp79 1/1 Running 0 4m13s
kubernetes-dashboard kubernetes-dashboard-785c75749d-d9ktr 1/1 Running 0 4m13s
Check that the service is up:
[root@k8smaster dashboard]# kubectl get svc --all-namespaces|grep dashboard
kubernetes-dashboard dashboard-metrics-scraper ClusterIP 10.100.150.154 <none> 8000/TCP 3m21s
kubernetes-dashboard kubernetes-dashboard NodePort 10.100.11.255 <none> 443:30088/TCP 3m21s
8.2.3. Access via browser (use HTTPS)
Get the token for the login page:
# get the name of the dashboard secret
[root@k8smaster dashboard]# kubectl get secret -n kubernetes-dashboard|grep dashboard-token
kubernetes-dashboard-token-7kk6x kubernetes.io/service-account-token 3 8m34s
# extract the token from the secret
[root@k8smaster dashboard]# kubectl describe secret kubernetes-dashboard-token-7kk6x -n kubernetes-dashboard
Name: kubernetes-dashboard-token-7kk6x
Namespace: kubernetes-dashboard
Labels: <none>
Annotations: kubernetes.io/service-account.name: kubernetes-dashboard
kubernetes.io/service-account.uid: 9d927444-793e-499d-bbfa-180a448d537a
Type: kubernetes.io/service-account-token
Data
====
ca.crt: 1066 bytes
namespace: 20 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IlhzUERpWFY2RmJNV0liTXhfRUFOQzFKLVRyVkh6UV9XRFc4S01EdUItS2MifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC10b2tlbi03a2s2eCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjlkOTI3NDQ0LTc5M2UtNDk5ZC1iYmZhLTE4MGE0NDhkNTM3YSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDprdWJlcm5ldGVzLWRhc2hib2FyZCJ9.Tnv4HyLP9tfwQqG_2G6lqIC8Lx4i1irO_i9OLckP_m7wrnDgZsqt-RMQv9ryflYULpSKh1Yvz0poLQZjRoTQulA8hW0L5XD8FETF9oaUMupdgqIyj40NX2-R1UpgJZB5bSxUVifmiDtTYuiw0uYhOG1AOu6TCdap8qhOb_3_NACDifsSaOeAcW2DFvDT9iEG4ycXng6itN4v3Uz4rAF3LYk8F1cliY3rakha_Pl9rqhNZvvTPNMYruuYWeK15Ypl_Kxk1UkJbM4mluaC4XsCDaCihFgkGGY3jXudqljzxNCJiTT8O1LgQVSNkBbw6cqoI4oqcZpHj30Q4ZUyACwivw
After logging in, the dashboard cannot access any resource objects yet; it needs RBAC authorization:
[root@k8smaster dashboard]# kubectl create clusterrolebinding serviceaccount-cluster-admin --clusterrole=cluster-admin --user=system:serviceaccount:kubernetes-dashboard:kubernetes-dashboard
clusterrolebinding.rbac.authorization.k8s.io/serviceaccount-cluster-admin created
Verification:
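A minimal check in place of the omitted screenshot: confirm the binding exists, then open the dashboard over HTTPS on the NodePort and sign in with the token (the node IP is an assumption):
[root@k8smaster dashboard]# kubectl get clusterrolebinding serviceaccount-cluster-admin
# then browse to https://192.168.81.97:30088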
Project Takeaways
Completing this project deepened my familiarity with containerization, cluster management, and monitoring, and gave me a better understanding of the k8s architecture and its components. Working through the problems along the way further improved my technical and troubleshooting skills.