K8s Learning, Part 5: Deploying the RuoYi Frontend/Backend Separated Edition

Table of Contents

Installing Redis and MySQL

Installing MySQL (mysql chart)

Building the Backend Image

Building the Frontend Image

Setting Up a Private Image Registry

Deploying the Backend (ruoyi-admin)

Deploying the Backend Application

Deploying the Frontend (ruoyi-ui)

Pod Startup Order

Ingress

Path Types

Hostname Matching

Dashboard

Installing the Dashboard


RuoYi source code download:

Runtime environment:

  • JDK >= 1.8
  • MySQL >= 5.7
  • Maven >= 3.0
  • Node >= 12
  • Redis >= 3

Deployment steps:

  • Deploy Redis
  • Deploy MySQL
  • Build the backend image
  • Build the frontend image
  • Set up a private image registry
  • Deploy the backend
  • Deploy the frontend

Source structure:

Installing Redis and MySQL

Installing Redis
RuoYi uses Redis only as a cache, so a single-node (standalone) installation is enough, and the data does not need to be persisted.
Redis chart

# Path to the cluster config file
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
# Add the Bitnami chart repository
helm repo add bitnami https://charts.bitnami.com/bitnami
# Install Redis
# (A comment cannot follow a line-continuation backslash, so the flag notes go here:
#  master.persistence.enabled=false - do not write data to a persistent volume
#  master.persistence.medium=Memory - keep the data in memory
#  master.persistence.sizeLimit=1Gi - cap memory use at 1 GiB)
helm install redis \
             --set architecture=standalone \
             --set-string auth.password=123456 \
             --set master.persistence.enabled=false \
             --set master.persistence.medium=Memory \
             --set master.persistence.sizeLimit=1Gi \
             bitnami/redis \
             --kubeconfig=/etc/rancher/k3s/k3s.yaml

Copy out the notes the Redis chart prints:

NAME: redis
LAST DEPLOYED: Mon Oct 31 14:57:52 2022
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
CHART NAME: redis
CHART VERSION: 17.3.7
APP VERSION: 7.0.5

** Please be patient while the chart is being deployed **

Redis® can be accessed via port 6379 on the following DNS name from within your cluster:

    redis-master.default.svc.cluster.local



To get your password run:

    export REDIS_PASSWORD=$(kubectl get secret --namespace default redis -o jsonpath="{.data.redis-password}" | base64 -d)

To connect to your Redis® server:

1. Run a Redis® pod that you can use as a client:

   kubectl run --namespace default redis-client --restart='Never'  --env REDIS_PASSWORD=$REDIS_PASSWORD  --image docker.io/bitnami/redis:7.0.5-debian-11-r7 --command -- sleep infinity

   Use the following command to attach to the pod:

   kubectl exec --tty -i redis-client \
   --namespace default -- bash

2. Connect using the Redis® CLI:
   REDISCLI_AUTH="$REDIS_PASSWORD" redis-cli -h redis-master

To connect to your database from outside the cluster execute the following commands:

    kubectl port-forward --namespace default svc/redis-master 6379:6379 &
    REDISCLI_AUTH="$REDIS_PASSWORD" redis-cli -h 127.0.0.1 -p 6379
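The password lookup in the notes works because Kubernetes Secrets store their values base64-encoded; the `base64 -d` at the end simply decodes the stored value. The decoding step on its own, as a standalone illustration that needs no cluster:

```shell
# A Secret value is just base64 text; kubectl prints it encoded,
# and base64 -d recovers the original string.
encoded=$(printf '123456' | base64)
echo "$encoded"                      # MTIzNDU2
printf '%s' "$encoded" | base64 -d   # 123456
```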

Installing MySQL (mysql chart)

  • Create a database named ry-vue
  • Import the initialization data

Upload the RuoYi project's SQL files to the /home/app/sql directory beforehand.

Generate a ConfigMap from the SQL files:

kubectl create configmap ruoyi-init-sql --from-file=/home/app/sql

Now install MySQL. Create ruoyi-mysql.yaml:

auth:
  rootPassword: "123456"
  # Automatically create our database
  database: ry-vue

# Cluster (replication) mode
architecture: replication

# Database initialization scripts; this is the ConfigMap created above
initdbScriptsConfigMap: ruoyi-init-sql

primary:
  persistence:
    size: 2Gi
    enabled: true

secondary:
  # Number of secondary (replica) nodes
  replicaCount: 2
  persistence:
    size: 2Gi
    enabled: true

Then install the chart:

helm install db -f ruoyi-mysql.yaml \
                bitnami/mysql \
                --kubeconfig=/etc/rancher/k3s/k3s.yaml

Copy out the service addresses from the chart notes:

Services:

  echo Primary: db-mysql-primary.default.svc.cluster.local:3306
  echo Secondary: db-mysql-secondary.default.svc.cluster.local:3306

Port forward to test from outside the cluster:

kubectl port-forward svc/redis-master --address=192.168.56.109 6379:6379
kubectl port-forward svc/db-mysql-primary --address=192.168.56.109 3306:3306

Update the Redis and MySQL settings in the RuoYi project, start it, and watch for errors. If there are none, move on to building the images. On Windows, Docker needs to be installed on the host beforehand.

Building the Backend Image

Create a file named Dockerfile in the project root. The file is called exactly Dockerfile: no extension, capital D.

# Build stage
FROM maven AS build
WORKDIR /build/app
# Mounting the local Maven repository into the container avoids
# re-downloading the dependency jars on every build
#VOLUME ~/.m2 /root/.m2
COPY . .
RUN mvn clean package

# Package stage
FROM openjdk:8u342-jre
WORKDIR /app/ruoyi
COPY --from=build /build/app/ruoyi-admin/target/ruoyi-admin.jar .
EXPOSE 8080
ENTRYPOINT ["java","-jar","ruoyi-admin.jar"]

Build the image:

# Build the backend image
docker build -t ruoyi-admin:v3.8 .
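Because the build stage runs COPY . ., the entire project directory becomes the Docker build context. A .dockerignore file (a suggestion, not part of the original walkthrough; the entries below are assumptions about what a typical checkout contains) keeps local build output and VCS history out of the context and speeds up builds:

```
# .dockerignore (suggested)
.git
**/target
ruoyi-ui/node_modules
ruoyi-ui/dist
```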

Building the Frontend Image

Notes:

1. Building the frontend requires Node.js; compiling inside a container is strongly recommended.

D:\gitproject\RuoYi-Vue\ruoyi-ui>docker run --name=node -it --rm -v D:\gitproject\RuoYi-Vue\ruoyi-ui:/app/ruoyi-ui node:14-alpine sh
/ # cd /app/ruoyi-ui

/app/ruoyi-ui # npm install --registry=https://registry.npmmirror.com

/app/ruoyi-ui # npm run build:prod

/app/ruoyi-ui # ls   # a dist directory means the build succeeded
README.md          bin                dist               package-lock.json  public             vue.config.js
babel.config.js    build              node_modules       package.json       src

2. If you are not familiar with frontend tooling, building with Node directly on your own machine is very likely to fail.

3. Node 14 is recommended; the newest Node versions throw errors with this project.

4. Build for the production environment (build:prod); do not build the pre-release profile, or the deployed app will error at runtime.

Create a Dockerfile in the ruoyi-ui directory:

FROM node:14-alpine AS build
WORKDIR /build/ruoyi-ui
COPY . .
# Install the dependencies and build for production
RUN npm install --registry=https://registry.npmmirror.com && npm run build:prod

FROM nginx:1.22
WORKDIR /app/ruoyi-ui
COPY --from=build /build/ruoyi-ui/dist .
EXPOSE 80

cd into the ruoyi-ui root directory and run:

docker build -t ruoyi-ui:v3.8 .    

When both the frontend and backend builds succeed, run docker images to check that the images exist.

Setting Up a Private Image Registry

Company projects usually may not be pushed to the public internet, so we need a private image registry.
To set one up, we can use registry or Harbor.

docker run -d -p 5000:5000 --restart always --name registry registry:2

Push the images to the private registry:

# Push the backend image
# Re-tag the image
docker tag ruoyi-admin:v3.8 172.29.192.1:5000/ruoyi-admin:v3.8
# Push it to the private registry
docker push 172.29.192.1:5000/ruoyi-admin:v3.8

# Push the frontend image
# Re-tag the image
docker tag ruoyi-ui:v3.8 172.29.192.1:5000/ruoyi-ui:v3.8
# Push it to the private registry
docker push 172.29.192.1:5000/ruoyi-ui:v3.8

docker push/pull use the HTTPS protocol by default, but the registry we set up only speaks HTTP, so pushing fails with a protocol error.

To fix this, edit /etc/docker/daemon.json and add the configuration below (172.19.240.1 is the IP of my WSL instance on Windows):

"insecure-registries": ["172.19.240.1:5000"]  
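For reference, a complete /etc/docker/daemon.json containing only this setting looks like the following; if your daemon.json already has other keys, add the entry alongside them rather than replacing the file:

```json
{
  "insecure-registries": ["172.19.240.1:5000"]
}
```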

At this point I hit the following error. It was caused by the v2ray proxy software running in the background; closing it allowed the push to proceed:

received unexpected HTTP status: 500 writing response to 172.19.240.1:5000: writing HTTP PATCH request: write tcp 127.0.0.1:9780->127.0.0.1:10809: wsasend: An existing connection was forcibly closed by the remote host.

After restarting Docker (required for the daemon.json change to take effect), pushing again succeeds.

Inside the Kubernetes cluster, pulling the image with crictl pull 172.29.192.1:5000/ruoyi-admin:v3.8 fails with the same error.

We need to change containerd's configuration, and K3s provides a simple way to do it.

On every machine, edit /etc/rancher/k3s/registries.yaml:

mirrors:
  docker.io:
    endpoint:
      - "https://fsp2sfpr.mirror.aliyuncs.com/"
  # Add the entry below
  172.29.192.1:5000:
    endpoint:
      # Use the HTTP protocol
      - "http://172.29.192.1:5000"

Then restart every node:

# Restart the master components (on the master machine)
systemctl restart k3s

# Restart the node components (on the two worker machines)
systemctl restart k3s-agent

Check the generated containerd configuration:

cat /var/lib/rancher/k3s/agent/etc/containerd/config.toml


Once this is configured, the image pulls succeed.

Deploying the Backend (ruoyi-admin)

The Redis and MySQL DNS names we copied out earlier:

#Redis can be accessed via port 6379 on the following DNS name from within your cluster:
redis-master.default.svc.cluster.local

#MySQL DNS NAME
Primary: 
	db-mysql-primary.default.svc.cluster.local:3306
Secondary: 
	db-mysql-secondary.default.svc.cluster.local:3306

Generate a ConfigMap from the configuration file.

Create application-k8s.yaml: copy the application config over from the project source and change the MySQL and Redis hosts to the DNS names above (redis-master, db-mysql-primary, ...).

# Data source configuration
spring:
  # Redis configuration
  redis:
    # Host
    host: redis-master
    # Port, 6379 by default
    port: 6379
    # Database index
    database: 0
    # Password
    password: 123456
    # Connection timeout
    timeout: 10s
    lettuce:
      pool:
        # Minimum idle connections in the pool
        min-idle: 0
        # Maximum idle connections in the pool
        max-idle: 8
        # Maximum active connections in the pool
        max-active: 8
        # Maximum time to block waiting for a connection (negative means no limit)
        max-wait: -1ms
  datasource:
    type: com.alibaba.druid.pool.DruidDataSource
    driverClassName: com.mysql.cj.jdbc.Driver
    druid:
      # Primary data source
      master:
        url: jdbc:mysql://db-mysql-primary:3306/ry-vue?useUnicode=true&characterEncoding=utf8&zeroDateTimeBehavior=convertToNull&useSSL=true&serverTimezone=GMT%2B8
        username: root
        password: 123456
      # Replica data source
      slave:
        # Replica data source switch, off by default
        enabled: true
        url: jdbc:mysql://db-mysql-secondary:3306/ry-vue?useUnicode=true&characterEncoding=utf8&zeroDateTimeBehavior=convertToNull&useSSL=true&serverTimezone=GMT%2B8
        username: root
        password: 123456
      # Initial number of connections
      initialSize: 5
      # Minimum connection pool size
      minIdle: 10
      # Maximum connection pool size
      maxActive: 20
      # Maximum wait time when acquiring a connection
      maxWait: 60000
      # Interval between checks for idle connections to close, in milliseconds
      timeBetweenEvictionRunsMillis: 60000
      # Minimum time a connection may live in the pool, in milliseconds
      minEvictableIdleTimeMillis: 300000
      # Maximum time a connection may live in the pool, in milliseconds
      maxEvictableIdleTimeMillis: 900000
      # Query used to validate connections
      validationQuery: SELECT 1 FROM DUAL
      testWhileIdle: true
      testOnBorrow: false
      testOnReturn: false
      webStatFilter:
        enabled: true
      statViewServlet:
        enabled: true
        # Allowlist; leave empty to allow all access
        allow:
        url-pattern: /druid/*
        # Console username and password
        login-username: ruoyi
        login-password: 123456
      filter:
        stat:
          enabled: true
          # Slow SQL logging
          log-slow-sql: true
          slow-sql-millis: 1000
          merge-sql: true
        wall:
          config:
            multi-statement-allow: true

Create the ConfigMap:

kubectl create configmap ruoyi-admin-config --from-file=/home/app/application-k8s.yaml

kubectl describe configmap/ruoyi-admin-config

Deploying the Backend Application

Deployment configuration template / Service configuration template

Spring Boot's highest-priority location for configuration files is the config subdirectory under the application's working directory. The working directory set when the image was built is /app/ruoyi, so we can mount the configuration file from the ConfigMap into /app/ruoyi/config inside the container.

svc-ruoyi-admin.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ruoyi-admin
  labels:
    app: ruoyi-admin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ruoyi-admin
  template:
    metadata:
      labels:
        app: ruoyi-admin
    spec:
      containers:
        - name: ruoyi-admin
          image: 172.29.192.1:5000/ruoyi-admin:v3.8
          ports:
            - containerPort: 8080
          volumeMounts:
            # On startup, Spring Boot looks for config files in the config directory next to the jar
            # The jar lives in the WORKDIR defined in the Dockerfile, i.e. /app/ruoyi
            - mountPath: /app/ruoyi/config
              name: config
          # Use application-k8s.yaml as the configuration file
          # The resulting startup command: java -jar ruoyi-admin.jar --spring.profiles.active=k8s
          args: ["--spring.profiles.active=k8s"]
      volumes:
        - name: config
          configMap:
            name: ruoyi-admin-config
---
apiVersion: v1
kind: Service
metadata:
  name: ruoyi-admin
spec:
  type: ClusterIP
  selector:
    app: ruoyi-admin
  ports:
    - port: 8080
      targetPort: 8080
Apply the manifest:

kubectl apply -f svc-ruoyi-admin.yaml

Check the Service:

Test it: curl 10.43.61.103:8080 (use the ClusterIP shown for your own Service).

Deploying the Frontend (ruoyi-ui)

The nginx configuration file:

server {
    listen       80;
    server_name  localhost;
    charset utf-8;

    location / {
        # The WORKDIR directory from the Dockerfile
        root   /app/ruoyi-ui;
        try_files $uri $uri/ /index.html;
        index  index.html index.htm;
    }

    location /prod-api/ {
        proxy_set_header Host $http_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header REMOTE-HOST $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # DNS name of the backend Service; the trailing slash makes nginx strip the matched /prod-api/ prefix before proxying
        proxy_pass http://ruoyi-admin:8080/;
    }

    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   html;
    }
}

Create the ConfigMap:

kubectl create configmap ruoyi-ui-config --from-file=/home/app/conf/nginx.conf 
kubectl describe configmap/ruoyi-ui-config

The Kubernetes manifest, svc-ruoyi-ui.yaml (the registry address 10.150.36.72:5000 below should be your own private registry's address):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ruoyi-ui
  labels:
    app: ruoyi-ui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ruoyi-ui
  template:
    metadata:
      labels:
        app: ruoyi-ui
    spec:
      containers:
        - name: ruoyi-ui
          image: 10.150.36.72:5000/ruoyi-ui:v3.8
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: /etc/nginx/conf.d
              name: config
      volumes:
        - name: config
          configMap:
            name: ruoyi-ui-config
            items:
              - key: nginx.conf
                path: default.conf
---
apiVersion: v1
kind: Service
metadata:
  name: ruoyi-ui
spec:
  type: NodePort
  selector:
    app: ruoyi-ui
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080

Open http://192.168.56.109:30080/ in a browser (replace the address with your own master node's IP).

Pod Startup Order

After the application is deployed, if ruoyi-admin starts before MySQL or Redis when the services restart, it errors out and fails to start.

Init containers and startup order
We can use init containers to control the startup order:
  • Init containers in a Pod start before the application containers.
  • Application containers do not start until every init container has finished.
  • Multiple init containers run in sequence; each must complete before the next one starts.

Frontend dependency
The frontend ruoyi-ui needs to wait until the backend ruoyi-admin is ready before starting.

Init container example

Modify svc-ruoyi-ui.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ruoyi-ui
  labels:
    app: ruoyi-ui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ruoyi-ui
  template:
    metadata:
      labels:
        app: ruoyi-ui
    spec:
      initContainers:
        - name: wait-for-ruoyi-admin
          image: nginx:1.22
          command:
            - sh
            - -c
            - |
              until curl -m 3 ruoyi-admin:8080 
              do
                echo waiting for ruoyi-admin;
                sleep 5;
              done
      containers:
        - name: ruoyi-ui
          image: 10.150.36.72:5000/ruoyi-ui:v3.8
          ports:
            - containerPort: 80
          volumeMounts:
            - mountPath: /etc/nginx/conf.d
              name: config
      volumes:
        - name: config
          configMap:
            name: ruoyi-ui-config
            items:
              - key: nginx.conf
                path: default.conf
---
apiVersion: v1
kind: Service
metadata:
  name: ruoyi-ui
spec:
  type: NodePort
  selector:
    app: ruoyi-ui
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080

Waiting with an until loop does work, but it loops forever. A better approach is to set a maximum number of retries: once the limit is exceeded, the init container exits with a failure status and the Pod's startup is aborted.
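Applying that idea, the wait-for-ruoyi-admin init container above can be given a bounded loop, in the same style as the backend's init containers in the next section (a sketch only; the limit of 10 attempts is an arbitrary choice, and $$ is how a literal $ is escaped inside a Kubernetes manifest):

```yaml
      initContainers:
        - name: wait-for-ruoyi-admin
          image: nginx:1.22
          command:
            - sh
            - -c
            - |
              maxTries=10
              while [ "$$maxTries" -gt 0 ] && ! curl -m 3 ruoyi-admin:8080
              do
                echo 'Waiting for ruoyi-admin to be available'
                sleep 5
                maxTries=$$((maxTries-1))
              done
              if [ "$$maxTries" -le 0 ]; then
                echo >&2 'error: unable to contact ruoyi-admin after 10 tries'
                exit 1
              fi
```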

Backend dependencies

The backend application ruoyi-admin should start only after MySQL and Redis are confirmed ready.

Database readiness check example

Modify svc-ruoyi-admin.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ruoyi-admin
  labels:
    app: ruoyi-admin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ruoyi-admin
  template:
    metadata:
      labels:
        app: ruoyi-admin
    spec:
      initContainers:
        - name: wait-for-mysql
          image: bitnami/mysql:8.0.31-debian-11-r0
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: "123456"
          command:
            - sh
            - -c
            - |
              set -e
              maxTries=10
              while [ "$$maxTries" -gt 0 ] \
                    && ! mysqladmin ping --connect-timeout=3 -s \
                                    -hdb-mysql-primary -uroot -p$$MYSQL_ROOT_PASSWORD
              do 
                  echo 'Waiting for MySQL to be available'
                  sleep 5
                  let maxTries--
              done
              if [ "$$maxTries" -le 0 ]; then
                  echo >&2 'error: unable to contact MySQL after 10 tries'
                  exit 1
              fi
        - name: wait-for-redis
          image: bitnami/redis:7.0.5-debian-11-r7
          env:
            - name: REDIS_PASSWORD
              value: "123456"
          command:
            - sh
            - -c
            - |
              set -e
              maxTries=10
              while [ "$$maxTries" -gt 0 ] \
                    && ! timeout 3 redis-cli -h redis-master -a $$REDIS_PASSWORD ping
              do 
                  echo 'Waiting for Redis to be available'
                  sleep 5
                  let maxTries--
              done
              if [ "$$maxTries" -le 0 ]; then
                  echo >&2 'error: unable to contact Redis after 10 tries'
                  exit 1
              fi
      containers:
        - name: ruoyi-admin
          image: 10.150.36.72:5000/ruoyi-admin:v3.8
          ports:
            - containerPort: 8080
          volumeMounts:
            # On startup, Spring Boot looks for config files in the config directory next to the jar
            # The jar lives in the WORKDIR defined in the Dockerfile, i.e. /app/ruoyi
            - mountPath: /app/ruoyi/config
              name: config
          # Use application-k8s.yaml as the configuration file
          # The resulting startup command: java -jar ruoyi-admin.jar --spring.profiles.active=k8s
          args: ["--spring.profiles.active=k8s"]
      volumes:
        - name: config
          configMap:
            name: ruoyi-admin-config
---
apiVersion: v1
kind: Service
metadata:
  name: ruoyi-admin
spec:
  type: ClusterIP
  selector:
    app: ruoyi-admin
  ports:
    - port: 8080
      targetPort: 8080

Ingress

If an application is published as a NodePort Service, it can be reached through a port on any host in the cluster.
When the cluster runs in a public or private cloud, reaching it from the internet requires a public IP or a domain name. Public IPs are a relatively scarce resource, so not every host can have one, and as more services are exposed, the growing set of ports becomes hard to manage.
In this situation we can use Ingress.
Ingress provides:
  • URL routing rules
  • Load balancing, traffic splitting, and rate limiting
  • HTTPS configuration
  • Name-based virtual hosting
Creating an Ingress resource requires an Ingress controller, such as ingress-nginx, to be deployed first.
Different controllers differ in usage and configuration.
K3s ships with a Traefik-based Ingress controller, so we can create Ingress resources directly without installing a controller.


Note: Ingress can only expose HTTP and HTTPS services to the internet.
To expose other kinds of services, use a NodePort or LoadBalancer Service.


Creating an Ingress
Ingress configuration example

ruoyi-ingress.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ruoyi-ingress
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: ruoyi-ui
                port:
                  number: 80

Note: the path here must be consistent with the location in the nginx.conf used by ruoyi-ui, otherwise requests will fail.

kubectl get ingress
kubectl describe ingress

All services are then accessed through port 80 of the public IP or domain name.

Path Types

Every path in an Ingress must declare a path type (Path Type). Three path types are currently supported:

Exact: matches the URL path exactly. Case-sensitive.

Prefix: matches on a URL path prefix. Case-sensitive, and the comparison is performed element by element along the path.

(For example, /foo/bar matches /foo/bar/baz but not /foo/barbaz.)

ImplementationSpecific: for this path type, matching depends on the logic defined by the IngressClass.
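The element-by-element rule can be illustrated with a few lines of shell (an approximation for intuition only, not the controller's actual code): appending a trailing slash to both the rule and the request before comparing prefixes is exactly what makes /foo/bar match /foo/bar/baz but not /foo/barbaz.

```shell
# Approximate Ingress Prefix matching on whole path elements:
# the request (with "/" appended) must start with the rule plus "/".
prefix_match() {
  rule=$1; request=$2
  case "$request/" in
    "$rule/"*) return 0 ;;   # all rule elements match in order
    *)         return 1 ;;
  esac
}

prefix_match /foo/bar /foo/bar/baz && echo "matches"    # prints "matches"
prefix_match /foo/bar /foo/barbaz  || echo "no match"   # prints "no match"
```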

Hostname Matching

Hostname matching example

Modify ruoyi-ingress:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ruoyi-ingress
spec:
  rules:
    # Similar to an nginx virtual host configuration
    - host: "front.ruoyi.com"
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: ruoyi-ui
                port:
                  number: 80
    - host: "backend.ruoyi.com"
      http:
        paths:
          - pathType: Prefix
            path: "/"
            backend:
              service:
                name: ruoyi-admin
                port:
                  number: 8080

Add two records to your machine's hosts file (192.168.xxxx is the master node's IP):
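For example, assuming the master node IP used earlier in this walkthrough (192.168.56.109; substitute your own), the entries would be:

```
# /etc/hosts on Linux/macOS, C:\Windows\System32\drivers\etc\hosts on Windows
192.168.56.109 front.ruoyi.com
192.168.56.109 backend.ruoyi.com
```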

Open http://front.ruoyi.com and http://backend.ruoyi.com to reach the frontend and the backend respectively.


Dashboard

There are generally four ways to manage cluster resources in Kubernetes: the command line, YAML, the API, and a graphical interface. Dashboard is the official K8s graphical tool; it is simple and convenient to use and can monitor nodes, Pods, and more.

Installing the Dashboard

Reference blog

The Dashboard is set up from YAML manifests and images. First find the release on GitHub whose compatibility with your cluster's K8s version is marked with a check, then download the corresponding YAML file. Official GitHub address: Releases · kubernetes/dashboard · GitHub

recommended.yaml

# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
 
apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard
 
---
 
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
 
---
 
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
 
---
 
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kubernetes-dashboard
type: Opaque
 
---
 
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""
 
---
 
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque
 
---
 
kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard
 
---
 
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
    # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
    # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]
 
---
 
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]
 
---
 
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
 
---
 
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
 
---
 
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.7.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
              # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
 
---
 
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper
 
---
 
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.8
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
          - mountPath: /tmp
            name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}

Then run the following commands:

#1. Deploy the Dashboard UI
[root@k8s-master01 ~]# kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

#2. Set the access port: find type: ClusterIP and change it to type: NodePort
[root@k8s-master01 ~]# kubectl edit svc kubernetes-dashboard -n kubernetes-dashboard

#3. Check the port
[root@k8s-master01 ~]# kubectl get svc -A |grep kubernetes-dashboard
kubernetes-dashboard   dashboard-metrics-scraper   ClusterIP   10.111.244.84   <none>        8000/TCP                 3m52s
kubernetes-dashboard   kubernetes-dashboard        NodePort    10.106.76.164   <none>        443:32749/TCP            3m53s

#4. Open https://<any cluster node IP>:<port> to reach the login page
https://192.168.78.133:32749

On first access, the login screen appears and asks for a token, so we need to configure an access account.

Official guide for creating a sample user: dashboard/creating-sample-user.md at master · kubernetes/dashboard · GitHub

Create the access account: prepare a YAML file dashuser.yaml with the content below, then run kubectl apply -f dashuser.yaml.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
 

Run:

# Get an access token
[root@k8s-master01 ~]# kubectl -n kubernetes-dashboard create token admin-user
eyJhbGciOiJSUzI1NiIsImtpZCI6IkMwZmZUeVU5VE5CeVR0VUgxQlF0RmktNG1PU1pCcmlkNjdGb3dCOV90dEEifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjY2NDI3ODc4LCJpYXQiOjE2NjY0MjQyNzgsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiZmM1MmYyOWUtMzgyMS00YjQxLWEyNDMtNTE5MzZmYWQzNTYzIn19LCJuYmYiOjE2NjY0MjQyNzgsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbi11c2VyIn0.t7MWL1qKpxFwujJtZEOFRlQshp-XVvD9dJsu41_v97PCw5AaH3pHSP-fqdnsqobQ__HlxLjECcGSHhnDtyC8Z1uVX74iWOBU_qVDwKN0hezcmlSyB9SglMYDJ0_UokDMiOY7KdfpwnX_SoOYQrjKyCjXBMI9iSFWK6sIT6CQYpntd57wDDG6jPOHI2VsMjAMYdmzC7qhxGXfaMlXkERvti3gkuzAELQOVBtQJszoyXTykrd4eQAD5720ERQ-ky0gof2lDexkmjffB_9Ksa7Ubuq7i5sMzrHVql9bhUBK1Hjwlmo6hZUn4ldySoJrPnZ3yS5J8WPc1NF9e8GDhaYYYg
 
# Now copy the token and paste it into the Enter token field on the login screen to sign in.

The Dashboard is now up and running.
