K8s in Practice: Deploying a Java Application, MySQL, and a Nacos Cluster

1. Environment Isolation

1.1 Use namespaces for environment isolation. For example, create a development namespace as follows:

kubectl create namespace zo-dev          # create a namespace named zo-dev

kubectl delete namespace zo-dev          # delete the namespace named zo-dev; deleting a namespace also deletes all resources inside it

kubectl api-resources --namespaced=true  # list the resource types that are namespaced

kubectl get namespace      # list namespaces

kubectl describe namespace zo-dev  # show details of the zo-dev namespace

kubectl delete ns zo-dev --force --grace-period=0      # force-delete a namespace

1.2 A namespace can also be created from a YAML file:

apiVersion: v1
kind: Namespace
metadata:
  name: zo-dev
  labels:
    name: zo-dev

Then run: kubectl apply -f xx.yaml

1.3 Cross-namespace application communication

      Namespaces provide isolation while still allowing selective connectivity. For example, team A's applications can run in one namespace and team B's in another, and the two can still talk to each other, because by default a Service is reachable from other namespaces through its cluster DNS name.
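A minimal sketch of such a call (the namespaces, Deployment name, and path are hypothetical, and the client image is assumed to have curl):

kubectl exec -n team-b deploy/client-app -- \
  curl -s http://zo-java.team-a.svc.cluster.local:10000/actuator/health
# <service>.<namespace>.svc.cluster.local is the default in-cluster DNS name of a Service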

1.4 Resource limits within a namespace

     The total number of Pods and the total CPU, memory, and storage in a namespace can be capped with a ResourceQuota; a sketch is given after the reference below.

     Reference: k8s实践(5) k8s的命名空间Namespace (hguisu, CSDN blog)
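A minimal ResourceQuota sketch (the name and the quota values are assumptions to be adjusted to your environment), applied with kubectl apply -f:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: zo-dev-quota        # hypothetical name
  namespace: zo-dev
spec:
  hard:
    pods: "50"              # maximum number of Pods in the namespace
    requests.cpu: "10"      # total CPU requests
    requests.memory: 20Gi   # total memory requests
    limits.cpu: "20"
    limits.memory: 40Gi
    requests.storage: 100Gi # total storage requested by PVCs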

1.5 Different people can be given different accounts so that each can only operate on the Pods in their own namespace; a minimal RBAC sketch follows the references. See:

        k8s dashboard 配置指导 (51CTO博客) — namespace-based permission delegation via the dashboard

     关于K8s集群环境工作组隔离配置多集群切换的一些笔记 (山河已无恙, InfoQ写作社区)
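A minimal RBAC sketch (the role name and user name are hypothetical) that lets an account operate only on Pods in the zo-dev namespace:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: zo-dev-pod-operator   # hypothetical name
  namespace: zo-dev
rules:
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list", "watch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: zo-dev-pod-operator-binding
  namespace: zo-dev
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: zo-dev-pod-operator
subjects:
- kind: User
  name: dev-user-a            # hypothetical account created for a team member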

1.6 Understanding Kubernetes Service ports

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: zo-web
  name: zo-web
  namespace: zo-dev
spec:
  type: NodePort # exposes the Service on a port of every node, in addition to its cluster-internal ClusterIP
  ports:
    - name: http
      protocol: TCP
      targetPort: 80 # port the container listens on (as defined in the Pod controller)
      port: 9999 # port the Service is reachable on inside the cluster
      nodePort: 3232 # port opened on every node (note: the default NodePort range is 30000-32767; ports outside it require changing the apiserver's --service-node-port-range)
  selector:
    app: zo-web
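With the definition above, the same Pods can be reached over two paths (a sketch; <node-ip> is a placeholder). Traffic entering the nodePort is forwarded to the Service port and then to the targetPort of one of the Pods selected by app: zo-web.

curl http://zo-web.zo-dev.svc.cluster.local:9999/   # inside the cluster, via the Service name and "port"
curl http://<node-ip>:3232/                         # from outside, via any node IP and "nodePort"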

2. Basic Commands

2.1 List the Pods in a namespace

kubectl get pods -n zo-dev

2.2 View a container's logs

kubectl logs XXXX -n zo-dev    # replace XXXX with the Pod name shown by the command above

2.3 List the ingress-related API resource types

kubectl api-resources | grep ingress
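To list the Ingress objects themselves rather than the resource types:

kubectl get ingress -A    # Ingresses in all namespaces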

2.4 Delete a namespace

 kubectl delete namespace ingress-nginx

2.5 Delete the IngressClass named nginx

kubectl delete ingressclass nginx

2.6 List the Pods in all namespaces

kubectl get pod -A

3. Deploying Applications

The following shows a Java application Deployment and the Service that exposes it.

3.1 Deploying the Java application and exposing it as a Service

Create 2 replicas of the Java application, mount its log directory onto the host, and expose it as a cluster-internal Service (reachable only from inside the cluster; expose it externally through ingress-nginx, as sketched after the YAML below, or install an nginx inside the cluster and point it directly at the Service's cluster IP).

kind: Deployment
apiVersion: apps/v1
metadata:
  name: zo-java
  namespace: zo-dev
  labels:
    k8s-app: zo-java
spec:
  replicas: 2
  selector:
    matchLabels:
      k8s-app: zo-java
  template:
    metadata:
      name: zo-java
      creationTimestamp: null
      labels:
        k8s-app: zo-java
    spec:
      containers:
        - name: zo-java
          image: registry.cn-hangzhou.aliyuncs.com/zo-base/zo-java:1.0.0
          command:
            - java
            - -Djava.security.egd=file:/dev/./urandom
            - -Dspring.profiles.active=offline
            - -jar
            - zo-java-template.jar
          ports:
            - name: http
              containerPort: 10000
              protocol: TCP
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: Always
          volumeMounts:
            - mountPath: /logs/zo-template-log
              name: logs
          securityContext:
            privileged: false
      volumes:
        - name: logs
          hostPath:
            path: /root/logs/zo-java
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      securityContext: {}
      imagePullSecrets:
        - name: zo-docker
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    k8s-app: zo-java
  name: zo-java
  namespace: zo-dev
spec:
  type: ClusterIP # reachable only from inside the cluster; an nginx deployed in K8s can point directly at this cluster IP
  ports:
    - name: http
      protocol: TCP
      port: 10000
      targetPort: 10000
  selector:
    k8s-app: zo-java
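A hedged sketch of an Ingress that publishes the zo-java Service through ingress-nginx (the host name is a placeholder, and it assumes the IngressClass named nginx installed in section 5):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: zo-java
  namespace: zo-dev
spec:
  ingressClassName: nginx
  rules:
    - host: zo-java.example.com   # placeholder host name
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: zo-java
                port:
                  number: 10000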

3.2 Deploying nginx-web-ui and exposing it as a Service

Deploy an nginx (nginxWebUI), mount its data directory onto the host, pin it to a specific node, and expose it with a NodePort Service. In production it is recommended to keep the nginx configuration on shared storage (e.g. a cloud disk) and run several nginx-web-ui replicas that share the same configuration.

kind: Deployment
apiVersion: apps/v1
metadata:
  name: nginx-web-ui
  namespace: zo-dev
  labels:
    k8s-app: nginx-web-ui
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: nginx-web-ui
  template:
    metadata:
      name: nginx-web-ui
      creationTimestamp: null
      labels:
        k8s-app: nginx-web-ui
    spec:
      nodeName: k8snode142 # pins the single Pod to this node; remove this line to run N Pods, in which case connect to one nginx-web-ui instance and sync the configuration from that machine
      containers:
        - name: nginx-web-ui
          image: cym1102/nginxwebui:latest
          command:
            - java
            - -Dfile.encoding=UTF-8
            - -jar
            - /home/nginxWebUI.jar
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: Always
          volumeMounts:
            - mountPath: /home/nginxWebUI
              name: data
          securityContext:
            privileged: false
      volumes:
        - name: data
          hostPath:
            path: /root/nginxWebUI
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      securityContext: {}
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    k8s-app: nginx-web-ui
  name: nginx-web-ui
  namespace: zo-dev
spec:
  type: NodePort # reachable directly via <node IP>:<nodePort>
  ports:
    - name: http
      protocol: TCP
      port: 80 # port the Service exposes inside the cluster
      targetPort: 80 # container port in the Pod
      nodePort: 80 # port opened on the node (80/443/8848/9848/17979 are outside the default 30000-32767 NodePort range and require adjusting the apiserver's --service-node-port-range)
    - name: https
      protocol: TCP
      port: 443
      targetPort: 443
      nodePort: 443
    - name: web-ui-dashboard
      protocol: TCP
      port: 8080
      targetPort: 8080
      nodePort: 17979
    - name: nacos-dashboard
      protocol: TCP
      port: 8848
      targetPort: 8848
      nodePort: 8848
    - name: nacos-grpc
      protocol: TCP
      port: 9848
      targetPort: 9848
      nodePort: 9848
  selector:
    k8s-app: nginx-web-ui
status:
  loadBalancer: {}

4. Installing Helm

4.1 Download the binary and place it under /usr/local/helm

https://pan.baidu.com/s/1GXgKLAmxIhFztBLcsvpxpA?pwd=cjcj

Then extract it with tar, or download and install it directly with the following commands:

yum install -y wget
 
mkdir -p /usr/local/helm
 
cd /usr/local/helm
 
wget https://get.helm.sh/helm-v3.10.0-linux-amd64.tar.gz
 
tar zxvf helm-v3.10.0-linux-amd64.tar.gz
 
mv -f linux-amd64/helm /usr/bin

4.2 Deploy ingress-nginx with Helm

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
--namespace zo-dev \
--set controller.publishService.enabled=true
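If the installation succeeded, the release shows up as deployed:

helm list -n zo-dev    # the ingress-nginx release should appear with STATUS deployed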

5. Installing ingress-nginx (from a manifest)

Reference: 《做一个不背锅运维:一篇搞定K8s Ingress》 (Zhihu)

Apply the manifest below:

kubectl apply -f ingress-nginx-controller.yaml

Then check whether everything is running; it takes a while before all Pods report as ready:

kubectl get pod -n zo-dev | grep ingress-nginx

You can also list the Services:

kubectl get svc -n zo-dev | grep ingress-nginx

To remove an installed ingress-nginx:

# run the following directly (this is the recommended way)
kubectl delete -f ingress-nginx-controller.yaml
# and, to reinstall, re-apply the manifest
kubectl apply -f ingress-nginx-controller.yaml

# or delete the namespace, which also removes the ingress-nginx inside it, but this does not clean up cluster-scoped resources completely
kubectl delete namespace ingress-nginx # not recommended, and must not be used when other applications are already installed in the same namespace

# or delete the resources one by one
kubectl delete -n zo-dev deployment ingress-nginx-controller
kubectl delete -n zo-dev job ingress-nginx-admission-create
kubectl delete -n zo-dev job ingress-nginx-admission-patch

kubectl delete ingressclass nginx   # delete the IngressClass named nginx

The manifest is as follows:

apiVersion: v1
kind: Namespace
metadata:
  labels:
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  name: zo-dev
---  
apiVersion: v1
automountServiceAccountToken: true
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.6.4
  name: ingress-nginx
  namespace: zo-dev
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.6.4
  name: ingress-nginx-admission
  namespace: zo-dev
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.6.4
  name: ingress-nginx
  namespace: zo-dev
rules:
- apiGroups:
  - ""
  resources:
  - namespaces
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - configmaps
  - pods
  - secrets
  - endpoints
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses/status
  verbs:
  - update
- apiGroups:
  - networking.k8s.io
  resources:
  - ingressclasses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - coordination.k8s.io
  resourceNames:
  - ingress-nginx-leader
  resources:
  - leases
  verbs:
  - get
  - update
- apiGroups:
  - coordination.k8s.io
  resources:
  - leases
  verbs:
  - create
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - discovery.k8s.io
  resources:
  - endpointslices
  verbs:
  - list
  - watch
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.6.4
  name: ingress-nginx-admission
  namespace: zo-dev
rules:
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - get
  - create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.6.4
  name: ingress-nginx
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - endpoints
  - nodes
  - pods
  - secrets
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - coordination.k8s.io
  resources:
  - leases
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses/status
  verbs:
  - update
- apiGroups:
  - networking.k8s.io
  resources:
  - ingressclasses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - discovery.k8s.io
  resources:
  - endpointslices
  verbs:
  - list
  - watch
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.6.4
  name: ingress-nginx-admission
rules:
- apiGroups:
  - admissionregistration.k8s.io
  resources:
  - validatingwebhookconfigurations
  verbs:
  - get
  - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.6.4
  name: ingress-nginx
  namespace: zo-dev
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx
subjects:
- kind: ServiceAccount
  name: ingress-nginx
  namespace: zo-dev
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.6.4
  name: ingress-nginx-admission
  namespace: zo-dev
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx-admission
subjects:
- kind: ServiceAccount
  name: ingress-nginx-admission
  namespace: zo-dev
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.6.4
  name: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx
subjects:
- kind: ServiceAccount
  name: ingress-nginx
  namespace: zo-dev
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.6.4
  name: ingress-nginx-admission
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx-admission
subjects:
- kind: ServiceAccount
  name: ingress-nginx-admission
  namespace: zo-dev
---
apiVersion: v1
data:
  allow-snippet-annotations: "true"
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.6.4
  name: ingress-nginx-controller
  namespace: zo-dev
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.6.4
  name: ingress-nginx-controller
  namespace: zo-dev
spec:
  externalTrafficPolicy: Local
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - appProtocol: http
    name: http
    port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 80 # note: 80 and 443 are outside the default NodePort range (30000-32767) and require adjusting the apiserver's --service-node-port-range
  - appProtocol: https
    name: https
    port: 443
    protocol: TCP
    targetPort: 443
    nodePort: 443
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.6.4
  name: ingress-nginx-controller-admission
  namespace: zo-dev
spec:
  ports:
  - appProtocol: https
    name: https-webhook
    port: 443
    targetPort: webhook
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.6.4
  name: ingress-nginx-controller
  namespace: zo-dev
spec:
  minReadySeconds: 0
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/component: controller
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/name: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/component: controller
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
    spec:
      containers:
      - args:
        - /nginx-ingress-controller
        - --publish-service=$(POD_NAMESPACE)/ingress-nginx-controller
        - --election-id=ingress-nginx-leader
        - --controller-class=k8s.io/ingress-nginx
        - --ingress-class=nginx
        - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
        - --validating-webhook=:8443
        - --validating-webhook-certificate=/usr/local/certificates/cert
        - --validating-webhook-key=/usr/local/certificates/key
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: LD_PRELOAD
          value: /usr/local/lib/libmimalloc.so
        image: dyrnq/ingress-nginx-controller:v1.6.4
        imagePullPolicy: IfNotPresent
        lifecycle:
          preStop:
            exec:
              command:
              - /wait-shutdown
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: controller
        ports:
        - containerPort: 80
          name: http
          protocol: TCP
        - containerPort: 443
          name: https
          protocol: TCP
        - containerPort: 8443
          name: webhook
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          requests:
            cpu: 100m
            memory: 90Mi
        securityContext:
          allowPrivilegeEscalation: true
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - ALL
          runAsUser: 101
        volumeMounts:
        - mountPath: /usr/local/certificates/
          name: webhook-cert
          readOnly: true
      dnsPolicy: ClusterFirst
      nodeSelector:
        kubernetes.io/os: linux
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      volumes:
      - name: webhook-cert
        secret:
          secretName: ingress-nginx-admission
---
apiVersion: batch/v1
kind: Job
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.6.4
  name: ingress-nginx-admission-create
  namespace: zo-dev
spec:
  template:
    metadata:
      labels:
        app.kubernetes.io/component: admission-webhook
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
        app.kubernetes.io/version: 1.6.4
      name: ingress-nginx-admission-create
    spec:
      containers:
      - args:
        - create
        - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
        - --namespace=$(POD_NAMESPACE)
        - --secret-name=ingress-nginx-admission
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        image: dyrnq/kube-webhook-certgen:v20220916-gd32f8c343
        imagePullPolicy: IfNotPresent
        name: create
        securityContext:
          allowPrivilegeEscalation: false
      nodeSelector:
        kubernetes.io/os: linux
      restartPolicy: OnFailure
      securityContext:
        fsGroup: 2000
        runAsNonRoot: true
        runAsUser: 2000
      serviceAccountName: ingress-nginx-admission
---
apiVersion: batch/v1
kind: Job
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.6.4
  name: ingress-nginx-admission-patch
  namespace: zo-dev
spec:
  template:
    metadata:
      labels:
        app.kubernetes.io/component: admission-webhook
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
        app.kubernetes.io/version: 1.6.4
      name: ingress-nginx-admission-patch
    spec:
      containers:
      - args:
        - patch
        - --webhook-name=ingress-nginx-admission
        - --namespace=$(POD_NAMESPACE)
        - --patch-mutating=false
        - --secret-name=ingress-nginx-admission
        - --patch-failure-policy=Fail
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        image: dyrnq/kube-webhook-certgen:v20220916-gd32f8c343
        imagePullPolicy: IfNotPresent
        name: patch
        securityContext:
          allowPrivilegeEscalation: false
      nodeSelector:
        kubernetes.io/os: linux
      restartPolicy: OnFailure
      securityContext:
        fsGroup: 2000
        runAsNonRoot: true
        runAsUser: 2000
      serviceAccountName: ingress-nginx-admission
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.6.4
  name: nginx
spec:
  controller: k8s.io/ingress-nginx
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.6.4
  name: ingress-nginx-admission
webhooks:
- admissionReviewVersions:
  - v1
  clientConfig:
    service:
      name: ingress-nginx-controller-admission
      namespace: zo-dev
      path: /networking/v1/ingresses
  failurePolicy: Fail
  matchPolicy: Equivalent
  name: validate.nginx.ingress.kubernetes.io
  rules:
  - apiGroups:
    - networking.k8s.io
    apiVersions:
    - v1
    operations:
    - CREATE
    - UPDATE
    resources:
    - ingresses
  sideEffects: None

8. Installing MySQL on K8s

8.1 Install MySQL 8.0 on a designated node in the zo-dev namespace, with its directories mounted on the host

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysql-dev
  namespace: zo-dev
spec:
  selector:
    matchLabels:
      app: mysql-dev
  template:
    metadata:
      labels:
        app: mysql-dev
    spec:
      nodeName: k8snode160 # pins the single Pod to this node; remove this line to run N Pods
      containers:
        - name: mysql
          image: mysql:8.0
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: "123456"
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: mysql-data
              mountPath: /var/lib/mysql
            - name: conf
              mountPath: /etc/mysql/conf.d
            - name: logs
              mountPath: /logs
      volumes:
        - name: mysql-data
          hostPath:
            path: /root/mysql-dev/data
        - name: conf
          hostPath:
            path: /root/mysql-dev/conf
        - name: logs
          hostPath:
            path: /root/mysql-dev/logs
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: mysql-dev
  name: mysql-dev
  namespace: zo-dev
spec:
  type: NodePort # reachable directly via <node IP>:<nodePort>
  ports:
    - name: db
      protocol: TCP
      port: 3306 # port the Service exposes inside the cluster
      targetPort: 3306 # container port in the Pod
      nodePort: 3306 # port opened on the node (outside the default 30000-32767 NodePort range, so --service-node-port-range must allow it); to run several MySQL instances in the cluster, only this port needs to change
  selector:
    app: mysql-dev
status:
  loadBalancer: {}
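A quick way to verify the database is reachable (a sketch; run it from a machine with the mysql client installed, and replace <node-ip> with the IP of node k8snode160):

mysql -h <node-ip> -P 3306 -uroot -p123456 -e "SELECT VERSION();"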

Even with the data mounted on the host, the host itself can still fail, so the MySQL data on the host should additionally be backed up to a remote NFS server.

8.2 Put the MySQL data on an NFS server so the database is not lost if the host goes down.

There are two approaches: the first is to install the database as above and then back up / sync the host directory to the NFS server;

the second is to configure an NFS-backed PV (persistent volume) directly in the deployment YAML.

The following deploys a single-instance stateful MySQL with its data on the NFS server.

As long as the StatefulSet keeps the same name, the application can be deleted and redeployed and will re-attach to the original data.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql-dev-test
  namespace: zo-dev
spec:
  serviceName: mysql-dev-test
  replicas: 1
  selector:
    matchLabels:
      app: mysql-dev-test
  template:
    metadata:
      labels:
        app: mysql-dev-test
    spec:
      nodeName: k8snode161 # pins the single Pod to this node; remove this line to run N Pods
      containers:
        - name: mysql
          image: mysql:8.0
          env:
            - name: MYSQL_ROOT_PASSWORD
              value: "123456"
          ports:
            - containerPort: 3306
          volumeMounts:
            - name: mysql-data-test
              mountPath: /var/lib/mysql
  volumeClaimTemplates:
  - metadata:
      name: mysql-data-test
      annotations:
        volume.beta.kubernetes.io/storage-class: "dev-nfs-storage"  # back the data volume with NFS via this StorageClass; a 10Gi PVC is created automatically and bound to a dynamically provisioned PV. For building a StorageClass that provisions PVs automatically, see section 4.2 of https://blog.csdn.net/tzszhzx/article/details/130241866
    spec:
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 10Gi
---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: mysql-dev-test
  name: mysql-dev-test
  namespace: zo-dev
spec:
  type: NodePort # reachable directly via <node IP>:<nodePort>
  ports:
    - name: db
      protocol: TCP
      port: 3306 # port the Service exposes inside the cluster
      targetPort: 3306 # container port in the Pod
      nodePort: 3307 # port opened on the node (outside the default 30000-32767 NodePort range, so --service-node-port-range must allow it); to run several MySQL instances in the cluster, only this port needs to change
  selector:
    app: mysql-dev-test
status:
  loadBalancer: {}

9. Deploying a Nacos Cluster on K8s

---
apiVersion: v1
kind: Service
metadata:
  name: nacos-dev
  namespace: zo-dev
  labels:
    app: nacos-dev
spec:
  type: ClusterIP
  clusterIP: None
  ports:
    - port: 8848
      name: server
      targetPort: 8848
    - port: 9848
      name: client-rpc
      targetPort: 9848
    - port: 9849
      name: raft-rpc
      targetPort: 9849
    ## election port kept for compatibility with Nacos 1.4.x
    - port: 7848
      name: old-raft-rpc
      targetPort: 7848
  selector:
    app: nacos
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nacos-cm
  namespace: zo-dev
data:
  mysql.db.host: "mysql-dev.zo-dev" # the MySQL Service address inside K8s (service.namespace); a cluster IP also works
  mysql.db.name: "nacos-dev"
  mysql.port: "3306"
  mysql.user: "root"
  mysql.password: "123456"
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: nacos
  namespace: zo-dev
spec:
  serviceName: nacos-dev
  replicas: 3
  template:
    metadata:
      labels:
        app: nacos
      annotations:
        pod.alpha.kubernetes.io/initialized: "true"
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                      - nacos
              topologyKey: "kubernetes.io/hostname"
      containers:
        - name: k8snacos
          imagePullPolicy: Always
          image: nacos/nacos-server:latest
          resources:
            requests:
              memory: "2Gi"
              cpu: "500m"
          ports:
            - containerPort: 8848
              name: client
            - containerPort: 9848
              name: client-rpc
            - containerPort: 9849
              name: raft-rpc
            - containerPort: 7848
              name: old-raft-rpc
          env:
            - name: NACOS_REPLICAS
              value: "3"
            - name: SPRING_DATASOURCE_PLATFORM
              value: "mysql"
            - name: MYSQL_SERVICE_HOST
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.db.host
            - name: MYSQL_SERVICE_DB_NAME
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.db.name
            - name: MYSQL_SERVICE_PORT
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.port
            - name: MYSQL_SERVICE_USER
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.user
            - name: MYSQL_SERVICE_PASSWORD
              valueFrom:
                configMapKeyRef:
                  name: nacos-cm
                  key: mysql.password
            - name: NACOS_SERVER_PORT
              value: "8848"
            - name: NACOS_APPLICATION_PORT
              value: "8848"
            - name: PREFER_HOST_MODE
              value: "hostname"
            - name: NACOS_SERVERS
              value: "nacos-0.nacos-dev.zo-dev.svc.cluster.local:8848 nacos-1.nacos-dev.zo-dev.svc.cluster.local:8848 nacos-2.nacos-dev.zo-dev.svc.cluster.local:8848" # 这里安装实际的命名空间修改,如zo-test
  selector:
    matchLabels:
      app: nacos
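After applying the manifest, check that all three members come up (forming the cluster can take a few minutes):

kubectl get pods -n zo-dev -l app=nacos -o wide
kubectl logs nacos-0 -n zo-dev | tail -n 50   # the log should show the other members joining the cluster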

After the Nacos cluster is up, expose ports 8848 (HTTP) and 9848 (gRPC) externally through nginx.

8848 is the HTTP port and 9848 is the client-to-server gRPC port; both must be exposed, otherwise the Spring client cannot connect.
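A hedged sketch of the matching Spring Boot client configuration (the host name is a placeholder for whatever address nginx exposes; Nacos 2.x clients derive the gRPC port as the configured port + 1000, which is why 9848 must also be reachable):

spring:
  cloud:
    nacos:
      discovery:
        server-addr: nacos.example.com:8848   # placeholder address published by nginx
      config:
        server-addr: nacos.example.com:8848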

