k8s Setup and Project Deployment Walkthrough (for Beginners)

Overview

This article walks through setting up a single-node k8s master and deploying a Java project with a separate frontend and backend. I had little prior experience with k8s, so some steps may be missing or redundant and my understanding may contain mistakes; please bear with any errors you find.

1 Environment Preparation

  1. One CentOS 7 server or virtual machine with at least 2 CPU cores and 2 GB of RAM

2 Setting Up the k8s Environment

Reference
Installing and deploying k8s


2.1 Change the Hostname

Changing the hostname mainly makes node names easier to tell apart. For a single-node k8s setup it is probably not required; it is done here just for readability.

Add a new line mapping the host IP to the name: 192.168.30.101 master

[root@localhost /]# vi etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.30.101 master

Reboot the virtual machine for the change to take effect.
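The /etc/hosts entry only maps the name to the IP; for the shell prompt to actually show master as in the later transcripts, the hostname itself also has to be set. The original steps do not show this, so the exact command is an assumption; a minimal sketch:

hostnamectl set-hostname master   # assumed; sets the hostname to match the hosts entry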


2.2 Disable firewalld and SELinux

[root@master /]# service firewalld stop
Redirecting to /bin/systemctl stop firewalld.service
[root@master /]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@master /]# getenforce
Enforcing
[root@master /]# setenforce 0
[root@master /]# getenforce
Permissive
[root@master /]# 
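Note that setenforce 0 only lasts until the next reboot. To keep SELinux permissive permanently, the config file can also be updated; a sketch:

sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config   # persists across reboots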

2.3 Install Docker

Install the yum utilities:

yum install -y yum-utils

Download the docker-ce.repo file (add the Docker CE repository):

yum-config-manager \
    --add-repo \
    https://download.docker.com/linux/centos/docker-ce.repo
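The repo file only registers the Docker repository; the docker-ce packages still have to be installed (package names as in Docker's CentOS instructions):

yum install -y docker-ce docker-ce-cli containerd.io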

Start the docker service and enable it at boot:

systemctl start docker
systemctl enable docker

2.4 Disable Swap

swapoff -a

Swap also needs to be disabled permanently, otherwise k8s will misbehave after the next reboot.

Open /etc/fstab and comment out the swap line:

vi /etc/fstab
#
# /etc/fstab
# Created by anaconda on Wed Jan 31 17:19:57 2024
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
UUID=cbedf5c5-f708-41c7-a5be-f7f170ee47cc /                       xfs     defaults        0 0
UUID=70b9e7e6-9676-487e-ae98-d70dda203d2e /boot                   xfs     defaults        0 0
#UUID=5cfcd09d-6b6f-43dc-9a2e-474192c831c6 swap                    swap    defaults        0 0
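If you prefer not to edit the file by hand, the swap line can also be commented out with a one-liner (a sketch; note it comments every line containing "swap"):

sed -ri 's/.*swap.*/#&/' /etc/fstab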

2.5 Adjust Kernel Parameters

Run:

cat <<EOF >>  /etc/sysctl.conf 
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
EOF
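The two bridge-nf-call parameters depend on the br_netfilter kernel module. If sysctl -p below complains that those keys do not exist, load the module first and persist it across reboots; a sketch:

modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf   # load automatically on boot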

Run sysctl -p to apply the parameters to the kernel:

[root@master /]# sysctl -p
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
vm.swappiness = 0
[root@master /]# 

2.6 Install kubeadm, kubectl and kubelet

Add the Kubernetes yum repository by running:

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install kubeadm, kubelet and kubectl, pinning the version, because from 1.24 onward the default container runtime is no longer Docker:

yum install -y kubelet-1.23.6 kubeadm-1.23.6 kubectl-1.23.6

Enable kubelet at boot; kubelet is the k8s agent on every node and must always be running:

systemctl enable  kubelet
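Optionally confirm that the pinned 1.23.6 versions were installed:

kubeadm version
kubectl version --client
kubelet --version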

2.7 Deploy the Kubernetes Master

Pull the coredns:1.8.4 image in advance; it will be needed later.

CoreDNS is a DNS server, typically used to provide service discovery in containerized environments, especially those managed by Kubernetes.

docker pull  coredns/coredns:1.8.4

Retag the image:

docker tag coredns/coredns:1.8.4 registry.aliyuncs.com/google_containers/coredns:v1.8.4

Run the initialization on the master server.

Note: change 'apiserver-advertise-address' to your own IP.

kubeadm init \
--apiserver-advertise-address=192.168.30.101 \
--image-repository registry.aliyuncs.com/google_containers \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.244.0.0/16

The following failure may occur:

[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.

Cause

By default the kubelet expects the systemd cgroup driver, while Docker installed as above uses cgroupfs:

[root@master /]# docker info |grep Cgroup
 Cgroup Driver: cgroupfs
 Cgroup Version: 1

Fix

Change Docker's cgroup driver.

Edit (or create) the file:

vi /etc/docker/daemon.json

Add the following content:

{
  "exec-opts": ["native.cgroupdriver=systemd"]
}

Restart docker:

systemctl restart docker

Reset kubeadm:

kubeadm reset

Re-run the initialization:

kubeadm init \
--apiserver-advertise-address=192.168.30.101 \
--image-repository registry.aliyuncs.com/google_containers \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.244.0.0/16

After initialization completes, create the kubeconfig directory and file on the master:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
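At this point the control-plane pods should be coming up; a quick optional check (kube-system is the namespace where kubeadm places them):

kubectl get pods -n kube-system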

Check the node status:

[root@master /]# kubectl get nodes
NAME     STATUS     ROLES                  AGE   VERSION
master   NotReady   control-plane,master   19s   v1.23.6
[root@master /]# 

The master node is now deployed. It shows NotReady because no network plugin has been installed yet; the next step fixes that.


2.8 Install the Flannel Network Plugin

Flannel is a k8s network plugin; its job is to enable communication between pods.

The kube-flannel.yml file needs to be created by hand, with the following content:

---
kind: Namespace
apiVersion: v1
metadata:
  name: kube-flannel
  labels:
    pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-flannel
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
       #image: flannelcni/flannel-cni-plugin:v1.1.0 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
       #image: flannelcni/flannel:v0.19.1 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.19.1
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
       #image: flannelcni/flannel:v0.19.1 for ppc64le and mips64le (dockerhub limitations may apply)
        image: docker.io/rancher/mirrored-flannelcni-flannel:v0.19.1
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: EVENT_QUEUE_DEPTH
          value: "5000"
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate

Deploy flannel:

[root@master flannel]# kubectl apply -f kube-flannel.yml
namespace/kube-flannel created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
[root@master flannel]# 
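Before re-checking the node, you can optionally confirm that the flannel DaemonSet pod itself is running:

kubectl get pods -n kube-flannel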

Check the node again; its status has changed to Ready:

[root@master /]# kubectl get nodes
NAME     STATUS   ROLES                  AGE     VERSION
master   Ready    control-plane,master   6m54s   v1.23.6
[root@master /]# 

The single-node k8s master environment is now complete.


3 Deploy nginx

Reference
Deploying a simple nginx service with k8s

3.1 Preparation

Create the namespace test:

[root@master /]# kubectl create namespace test
namespace/test created
[root@master /]# kubectl get namespace
NAME              STATUS   AGE
default           Active   11m
kube-flannel      Active   5m45s
kube-node-lease   Active   11m
kube-public       Active   11m
kube-system       Active   11m
test              Active   27s
[root@master /]# 

Create a directory tree to hold the YAML files, which keeps things tidy:

[root@master /]# cd data/
[root@master data]# ls
flannel
[root@master data]# mkdir test
[root@master data]# cd test
[root@master test]# mkdir nginx
[root@master test]# mkdir java
[root@master test]# ls
java  nginx
[root@master test]# cd nginx/
[root@master nginx]# 

Create the nginx Deployment manifest, nginx-deployment.yaml:

apiVersion: apps/v1
kind: Deployment 
metadata:
  labels:
    app: nginx
  name: nginx-deployment
  namespace: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.14.0
        ports:
        - containerPort: 80
        name: nginx
        volumeMounts:
        - name: conf
          mountPath: /etc/nginx/nginx.conf  # path mounted inside the container
        - name: log
          mountPath: /var/log/nginx
        - name: html
          mountPath: /etc/nginx/html
      tolerations:
      - key: "key"
        operator: "Equal"
        value: "nginx"
        effect: "NoSchedule"
      volumes:
      - name: conf
        hostPath:
          path: /data/test/nginx/conf/nginx.conf  # file on the host to mount
      - name: log
        hostPath:
          path: /data/test/nginx/logs
          type: Directory
      - name: html
        hostPath:
          path: /data/test/nginx/html
          type: Directory

Create the host directories referenced above:

[root@master nginx]# mkdir conf
[root@master nginx]# mkdir logs
[root@master nginx]# mkdir html
[root@master nginx]# ls
conf  html  logs  nginx-deployment.yaml
[root@master nginx]# 

3.2 Create the nginx Deployment

kubectl create -f nginx-deployment.yaml
[root@master nginx]# kubectl get pod -n test
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-74bc7cb47f-tkf2q   0/1     Pending   0          16s

The pod did not start successfully; check the details:

kubectl describe pod nginx-deployment-74bc7cb47f-tkf2q -n test

Look at the Events section of the output:

Events:
  Type     Reason            Age   From               Message
  ----     ------            ----  ----               -------
  Warning  FailedScheduling  51s   default-scheduler  0/1 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate.

A quick search shows that clusters initialized with kubeadm do not schedule pods onto the master node for safety reasons, so it carries no workloads. Allowing the master node to run pods solves the problem:

kubectl taint nodes --all node-role.kubernetes.io/master-
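To confirm the taint was removed, the node can be inspected; a quick check:

kubectl describe node master | grep -i taint   # should show Taints: <none>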

Check the pod status again:

[root@master nginx]# kubectl get pod -n test
NAME                                READY   STATUS              RESTARTS      AGE
nginx-deployment-74bc7cb47f-tkf2q   0/1     RunContainerError   4 (17s ago)   12m

Still not working.

Inspecting the error the same way (kubectl describe) shows that the nginx.conf file to be mounted does not exist on the host.

Fix:

Either comment out the conf-mount lines in the YAML, redeploy, and copy nginx.conf out of the running pod, or use any stock default nginx.conf and place it in the corresponding host directory. You can copy the following content directly:

user  nginx;
worker_processes  auto;

error_log  /var/log/nginx/error.log notice;
pid        /var/run/nginx.pid;


events {
    worker_connections  1024;
}


http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    include /etc/nginx/conf.d/*.conf;
}

Save it as nginx.conf under the conf directory:

[root@master conf]# vi nginx.conf
[root@master conf]# ls
nginx.conf
[root@master conf]# 

Delete the old Deployment and redeploy it:

[root@master nginx]# kubectl delete -f nginx-deployment.yaml
deployment.apps "nginx-deployment" deleted
[root@master nginx]# kubectl create -f nginx-deployment.yaml
deployment.apps/nginx-deployment created
[root@master nginx]# 

It starts successfully:

[root@master nginx]# kubectl get pod -n test
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-74bc7cb47f-gcbm7   1/1     Running   0          10s
[root@master nginx]# 

But an nginx Service is still needed before it can be accessed from outside the cluster.


3.3 Create the nginx Service

Create the file nginx-service.yaml:

apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: nginx-deployment
  namespace: test  # namespace
spec:
  ports:
  - port: 9000
    name: nginx-service80
    protocol: TCP
    targetPort: 80  # container (target) port
    nodePort: 31000 # externally accessible port
  selector:
    app: nginx
  type: NodePort

Create the Service:

kubectl create -f nginx-service.yaml
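Optionally verify the Service and its NodePort were created:

kubectl get svc -n test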

Visit 192.168.30.101:31000; if the nginx welcome page appears, the deployment succeeded.


4 Deploy the Backend Service

Reference
k8s learning, part 5: deploying the RuoYi frontend/backend separated edition

4.1 Preparation

This walkthrough deploys the ruoyi-vue-plus project, an open-source frontend/backend separated project that is ready to deploy out of the box. If needed, pull the code from the link below and configure your own database and Redis environments.

ruoyi-vue-plus
Project repository

4.2 Build the Backend Image

After the project is configured, package it (see the official documentation for the exact steps). A ready-made Dockerfile is provided under ruoyi-admin.


Upload the jar and the Dockerfile to the virtual machine.

Build the image:

docker build -t ruoyi-admin .
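A quick check that the image exists locally before tagging and pushing it (ruoyi-admin:latest is the tag used in the next steps):

docker images | grep ruoyi-admin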



4.3 Set Up a Local Private Registry

First pull the registry image:

docker pull registry

Add the private registry address to the daemon.json file. Note that JSON does not allow comments, and entries must be separated by commas:

vi /etc/docker/daemon.json
{
  "insecure-registries": ["192.168.30.101:5000"],
  "registry-mirrors": ["https://6ijb8ubo.mirror.aliyuncs.com"]
}


Restart docker:

systemctl restart docker.service

Run the registry container:

docker run -itd -v /data/test/registry:/var/lib/registry -p 5000:5000 --restart=always --name registry registry:latest 



4.4 Push the Backend Image

Tag the image for the private registry:

docker tag ruoyi-admin:latest 192.168.30.101:5000/ruoyi-admin:v1


Push it to the private registry:

docker push 192.168.30.101:5000/ruoyi-admin:v1

List all repositories in the private registry:

[root@master java]# curl http://192.168.30.101:5000/v2/_catalog
{"repositories":["ruoyi-admin"]}

4.5 Deploy the Backend

Create the file svc-ruoyi-admin.yaml (it contains both the Deployment and the Service):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ruoyi-admin
  namespace: test
  labels:
    app: ruoyi-admin
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ruoyi-admin
  template:
    metadata:
      labels:
        app: ruoyi-admin
    spec:
      containers:
        - name: ruoyi-admin
          image: 192.168.30.101:5000/ruoyi-admin:v1  # image pull address (private registry)
          ports:
            - containerPort: 8080

---
apiVersion: v1
kind: Service
metadata:
  name: ruoyi-admin
  namespace: test
spec:
  #type: ClusterIP   # ClusterIP: only reachable via the Service IP inside the cluster
  type: NodePort     # NodePort: reachable from outside via a node port
  selector:
    app: ruoyi-admin
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 8080
      nodePort: 32000

Apply it:

kubectl apply -f svc-ruoyi-admin.yaml

It comes up successfully:

[root@master java]# kubectl get pod -n test
NAME                                READY   STATUS    RESTARTS      AGE
nginx-deployment-74bc7cb47f-gcbm7   1/1     Running   1 (30m ago)   87m
ruoyi-admin-fb6bd6c69-nwqcb         1/1     Running   0             11s

Test the backend:

curl 192.168.30.101:32000



5 Deploy the Frontend

In the ruoyi-ui directory, run npm run build:prod.

The build output is generated under ruoyi-ui/dist. Upload the files inside dist (not the dist directory itself) to /data/test/nginx/html on the deployment server.

Modify the nginx configuration file:

worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       mime.types;
    default_type  application/octet-stream;
    sendfile        on;
    keepalive_timeout  65;
    # limit request body size
    client_max_body_size 100m;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    upstream server {
        ip_hash;
        server 192.168.30.101:32000;
    }

    upstream monitor-admin {
        server 127.0.0.1:9090;
    }

    upstream xxljob-admin {
        server 127.0.0.1:9100;
    }

    server {
        listen       80;
        server_name  localhost;

        # https configuration reference - start
        #listen       443 ssl;

        # Place certificates directly under /docker/nginx/cert/ and only change the certificate file names; the path must not be changed
        #ssl on;
        #ssl_certificate      /etc/nginx/cert/xxx.local.crt; # /etc/nginx/cert/ is the docker-mapped path and must not be changed
        #ssl_certificate_key  /etc/nginx/cert/xxx.local.key; # /etc/nginx/cert/ is the docker-mapped path and must not be changed
        #ssl_session_timeout 5m;
        #ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE:ECDH:AES:HIGH:!NULL:!aNULL:!MD5:!ADH:!RC4;
        #ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        #ssl_prefer_server_ciphers on;
        # https configuration reference - end

        # Demo environment: block all request methods except GET and POST
        # if ($request_method !~* GET|POST) {
        #     rewrite  ^/(.*)$  /403;
        # }

        # location = /403 {
        #     default_type application/json;
        #     return 200 '{"msg":"Demo mode, operation not allowed","code":500}';
        # }

        # Block external access to internal actuator paths
        location ~ ^(/[^/]*)?/actuator(/.*)?$ {
            return 403;
        }

        location / {
            root   /etc/nginx/html;
            try_files $uri $uri/ /index.html;
            index  index.html index.htm;
        }

        location /prod-api/ {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header REMOTE-HOST $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://192.168.30.101:32000/;
        }

        # Under https, all embedded http requests are blocked, which breaks this feature
        # Option 1: configure the admin service with https as well
        # Option 2: configure the menu item as an external link opening a standalone page over http
        location /admin/ {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header REMOTE-HOST $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://monitor-admin/admin/;
        }

        # Under https, all embedded http requests are blocked, which breaks this feature
        # Option 1: configure the xxljob service with https as well
        # Option 2: configure the menu item as an external link opening a standalone page over http
        location /xxl-job-admin/ {
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header REMOTE-HOST $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass http://xxljob-admin/xxl-job-admin/;
        }

        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }
    }
}

Redeploy nginx, then visit 192.168.30.101:31000.
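"Redeploy" here means repeating the delete/create cycle from section 3.2 so the pod picks up the updated nginx.conf; a sketch:

kubectl delete -f nginx-deployment.yaml
kubectl create -f nginx-deployment.yaml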


Done!

6 Common Commands

Create a namespace

kubectl create namespace <name>

Delete a namespace

kubectl delete namespace <name>

Create a pod

kubectl create -f xxx.yaml

List pods in a namespace

kubectl get pod -n <namespace>

Delete a deployment and its pods

kubectl delete deploy <deployment-name> -n <namespace>, or kubectl delete -f xxx.yaml

Show pod details

kubectl describe pod <pod-name> -n <namespace>

Tail pod logs

kubectl logs -f --tail 1000 <pod-name> -n <namespace>

Show service IP and port information in a namespace

kubectl get svc -n <namespace>