Simulating a production Tomcat release on Kubernetes

Simulate a production-style Tomcat 8 release on Kubernetes: the application code is baked into the image, the configuration file is mapped in via a ConfigMap, and the site directory is mapped to persistent storage.

1. Set up the Kubernetes environment (v1.13.3)

[root@master ~]# docker info

Containers: 35

 Running: 11

 Paused: 0

 Stopped: 24

Images: 8

Server Version: 18.06.1-ce

[root@master ~]# kubectl get node

NAME     STATUS   ROLES    AGE   VERSION

master   Ready    master   15d   v1.13.3

node1    Ready    <none>   15d   v1.13.3

node2    Ready    <none>   15d   v1.13.3

[root@master ~]# kubectl get cs

NAME                 STATUS    MESSAGE              ERROR

controller-manager   Healthy   ok                   

scheduler            Healthy   ok                   

etcd-0               Healthy   {"health": "true"}   

2. Set up a private image registry on the host (docker-distribution, a plain Docker registry rather than a full Harbor install)

[root@master ~]# yum -y install docker-distribution.x86_64

[root@master ~]# systemctl enable docker-distribution.service

[root@master ~]# systemctl restart docker-distribution.service

[root@master ~]# vim /etc/docker/daemon.json  #so that the Docker client machine can talk to the private registry without HTTPS certificate verification

{

"insecure-registries":["192.168.43.166:5000"]     #docker-harbor机器的ip地址和端口:5000

}

[root@master ~]# systemctl restart docker
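The same insecure-registries entry presumably also needs to be present on node1 and node2, since the Docker daemons on those nodes pull images from 192.168.43.166:5000 when pods are scheduled later. A minimal sketch of the node-side configuration (not captured from this run):

[root@node1 ~]# vim /etc/docker/daemon.json

{

"insecure-registries":["192.168.43.166:5000"]

}

[root@node1 ~]# systemctl restart docker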

[root@master ~]# docker pull centos

[root@master ~]# docker images |grep centos

centos                                                                        latest              9f38484d220f        4 months ago        202MB

[root@master ~]# docker tag centos:latest 192.168.43.166:5000/centos:v1

[root@master ~]# docker images |grep centos

192.168.43.166:5000/centos                                                    v1                  9f38484d220f        4 months ago        202MB

centos                                                                        latest              9f38484d220f        4 months ago        202MB

[root@master ~]# docker push 192.168.43.166:5000/centos:v1

[root@master ~]# curl http://192.168.43.166:5000/v2/_catalog

{"repositories":["centos"]}

[root@master ~]# curl http://192.168.43.166:5000/v2/centos/tags/list

{"name":"centos","tags":["v1"]}

3. Write the Tomcat Dockerfile and build the custom Tomcat image:

[root@master ~]# mkdir /tomcat

[root@master ~]# cd /tomcat/

[root@master tomcat]# rz

Upload the Tomcat and JDK packages.

[root@master tomcat]# ls

apache-tomcat-8.0.32.tar.gz  jdk-8u65-linux-x64.gz

[root@master tomcat]# echo 111 > index.html

[root@master tomcat]# vim Dockerfile

#Dockerfile for building the new image

#Base image: centos

FROM centos

#Maintainer information

MAINTAINER shi 1441107787@qq.com

#RUN executes a command inside the image during the build

RUN useradd -s /sbin/nologin -M tomcat

#Install vim

RUN yum -y install vim

#Set the WORKDIR, i.e. the default directory when entering the container

ENV MYPATH /usr/local

WORKDIR $MYPATH

#ADD copies files into the image. The files must sit in the same directory as the Dockerfile; .tar.gz archives are unpacked automatically. The two lines below place them under /usr/local/ in the image.

ADD jdk-8u65-linux-x64.gz /usr/local/

ADD apache-tomcat-8.0.32.tar.gz /usr/local/

#Simulate deploying the application code

COPY index.html /usr/local/apache-tomcat-8.0.32/webapps/ROOT/

#Configure the environment variables

ENV JAVA_HOME /usr/local/jdk1.8.0_65

ENV PATH=$JAVA_HOME/bin:$PATH

ENV CLASSPATH $JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

ENV CATALINA_HOME /usr/local/apache-tomcat-8.0.32

ENV CATALINA_BASE /usr/local/apache-tomcat-8.0.32

ENV PATH $PATH:$JAVA_HOME/bin:$CATALINA_HOME/lib:$CATALINA_HOME/bin

#Expose port 8080

EXPOSE 8080

#The command below starts Tomcat. Several startup forms work: a startup.sh-based command needs a trailing tail so the container keeps running and the startup log stays visible, while catalina.sh run stays in the foreground and shows the log by itself.

CMD ["/usr/local/apache-tomcat-8.0.32/bin/catalina.sh","run"]

[root@master tomcat]# ls

apache-tomcat-8.0.32.tar.gz  Dockerfile  index.html  jdk-8u65-linux-x64.gz

[root@master tomcat]# docker build -t mycangku/mytomcat8:v1 .

[root@master tomcat]# docker images |grep tomcat

mycangku/mytomcat8                                                            v1                  92d4659a72d4        21 seconds ago      740MB

Test the custom-built Tomcat image:

[root@master tomcat]# docker run -d -p 8080:8080 --name tomcat1 mycangku/mytomcat8:v1

125f51f8c44f12d18fbdd31a30963d21a7bc82bbe3e2fbbca2d25aad067e8dfa

[root@master tomcat]# docker ps |grep tomcat

125f51f8c44f        mycangku/mytomcat8:v1                                           "/usr/local/apache-t…"   16 seconds ago      Up 14 seconds       0.0.0.0:8080->8080/tcp   tomcat1

[root@master tomcat]# curl 192.168.43.166:8080

111

[root@master tomcat]# docker rm -f tomcat1    #test finished; remove the test container. The custom Tomcat image works as expected.

[root@master tomcat]# docker images |grep tomcat

mycangku/mytomcat8                                                            v1                  92d4659a72d4        3 minutes ago       740MB

[root@master tomcat]# docker tag mycangku/mytomcat8:v1 192.168.43.166:5000/mytomcat8:v1

[root@master tomcat]# docker images |grep tomcat

192.168.43.166:5000/mytomcat8                                                 v1                  92d4659a72d4        4 minutes ago       740MB

mycangku/mytomcat8                                                            v1                  92d4659a72d4        4 minutes ago       740MB

[root@master tomcat]# docker push 192.168.43.166:5000/mytomcat8:v1  #push the custom Tomcat image to the private registry

[root@master tomcat]# curl http://192.168.43.166:5000/v2/_catalog

{"repositories":["centos","mytomcat8"]}

[root@master tomcat]# curl http://192.168.43.166:5000/v2/mytomcat8/tags/list

{"name":"mytomcat8","tags":["v1"]}

4. Run the custom Tomcat image on Kubernetes:

1) Test run with 3 Tomcat replicas: just the plain application, with no PV/PVC persistent storage and no ConfigMap-mounted configuration file.

[root@master tomcat]# cat tomcat.yaml

apiVersion: apps/v1

kind: Deployment

metadata:

  name: tomcat-deployment

  labels:

    app: tomcat

spec:

  replicas: 3

  selector:

    matchLabels:

      app: tomcat

  template:

    metadata:

      labels:

        app: tomcat

    spec:

      containers:

      - name: tomcat

        image: 192.168.43.166:5000/mytomcat8:v1

        imagePullPolicy: IfNotPresent

        ports:

        - containerPort: 8080

---

kind: Service

apiVersion: v1

metadata:

  name: my-service

spec:

  type: NodePort

  selector:

    app: tomcat

  ports:

  - protocol: TCP

    port: 8080

    targetPort: 8080

    nodePort: 32222

[root@master tomcat]# kubectl apply -f tomcat.yaml

[root@master tomcat]# kubectl get pod -o wide

NAME                                 READY   STATUS    RESTARTS   AGE     IP            NODE    NOMINATED NODE   READINESS GATES

tomcat-deployment-5cfd4b6d4d-7pcj5   1/1     Running   0          5m42s   10.244.1.14   node1   <none>           <none>

tomcat-deployment-5cfd4b6d4d-rc8ft   1/1     Running   0          5m42s   10.244.2.11   node2   <none>           <none>

tomcat-deployment-5cfd4b6d4d-sbjlh   1/1     Running   0          5m42s   10.244.2.10   node2   <none>           <none>

[root@master tomcat]# kubectl get deployment

NAME                READY   UP-TO-DATE   AVAILABLE   AGE

tomcat-deployment   3/3     3            3           5m53s

[root@master tomcat]# kubectl get service

NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE

kubernetes   ClusterIP   10.1.0.1       <none>        443/TCP          15d

my-service   NodePort    10.1.185.102   <none>        8080:32222/TCP   6m8s

[root@master tomcat]# curl node1:32222

111

[root@master tomcat]# curl node2:32222

111
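As an optional sanity check (a hedged extra step, output not captured here), the endpoints behind the Service can be listed; the three pod IPs shown above should appear there on port 8080:

[root@master tomcat]# kubectl get endpoints my-service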

[root@master tomcat]# kubectl delete -f tomcat.yaml    #tear down the test run of the custom Tomcat image and clean up the environment

[root@master tomcat]# cd

2) Production-style run: Tomcat replicas with PV/PVC persistent storage and a ConfigMap-mounted configuration file, so the configuration is decoupled from the image (the manifest below runs 2 replicas).

Master: 192.168.43.166   node1:192.168.43.167  node2: 192.168.43.168

Set up the NFS service on the master:

[root@master ~]# yum -y install nfs-utils

Note: nfs-utils must also be installed on both nodes, not to run an NFS server there, but to provide the client tools so the nodes can mount the NFS export.

[root@master ~]# mkdir /opt/pv

[root@master ~]# tail -1 /etc/exports

/opt/pv *(no_root_squash,rw)

[root@master ~]# exportfs -rv

[root@master ~]# systemctl restart nfs
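Before wiring the export into a PV, a quick hedged check from one of the nodes (not captured from this run) confirms the export is visible:

[root@node1 ~]# showmount -e 192.168.43.166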

Create the PV and PVC:

[root@master ~]# vim pv-nfs.yaml

apiVersion: v1

kind: PersistentVolume

metadata:

  name: httpd-pv

  labels:

    name: httpd-pv

spec:

  accessModes:

  - ReadWriteOnce   

  capacity:

    storage: 5Gi

  nfs:

    path: /opt/pv

    server: 192.168.43.166

#kind: PersistentVolume: this object is the PV itself

#name: httpd-pv : the name of the PV

#accessModes: ReadWriteOnce allows read/write by a single node; ReadOnlyMany allows read-only access by many nodes; ReadWriteMany allows read/write by many nodes

#capacity: the storage size

#nfs: the backing storage for this PV is NFS; other storage types can be used, each with its own syntax

[root@master ~]# vim pvc.yaml

apiVersion: v1

kind: PersistentVolumeClaim

metadata:

  name: httpd-pvc

  labels:

    name: httpd-pvc

spec:

  selector:

    matchLabels:

      name: httpd-pv

  volumeName: httpd-pv

  accessModes:

  - ReadWriteOnce

  resources:

    requests:

      storage: 5Gi

#name: httpd-pv (under selector.matchLabels): selects the PV to bind to by its label

##volumeName: httpd-pv : binds explicitly to the PV with that name

##accessModes: must match the access modes of the PV

[root@master ~]# kubectl apply -f pv-nfs.yaml

[root@master ~]# kubectl apply -f pvc.yaml

[root@master ~]# kubectl get pv

NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS        CLAIM                                      STORAGECLASS   REASON   AGE

httpd-pv   5Gi        RWO            Retain           Bound         default/httpd-pvc                                                  39s

[root@master ~]# kubectl get pvc

NAME        STATUS   VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS   AGE

httpd-pvc   Bound    httpd-pv   5Gi        RWO                           48s

Create a ConfigMap from a custom Tomcat configuration file. The file starts out identical to the stock configuration; keeping it in a ConfigMap simply makes later edits convenient.

[root@master ~]# vim server.xml   #copied out unchanged from the original Tomcat configuration of the test container, so it can be edited conveniently later

<?xml version='1.0' encoding='utf-8'?>

<!--

  Licensed to the Apache Software Foundation (ASF) under one or more

  contributor license agreements.  See the NOTICE file distributed with

  this work for additional information regarding copyright ownership.

  The ASF licenses this file to You under the Apache License, Version 2.0

  (the "License"); you may not use this file except in compliance with

  the License.  You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software

  distributed under the License is distributed on an "AS IS" BASIS,

  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

  See the License for the specific language governing permissions and

  limitations under the License.

-->

<!-- Note:  A "Server" is not itself a "Container", so you may not

     define subcomponents such as "Valves" at this level.

     Documentation at /docs/config/server.html

 -->

<Server port="8005" shutdown="SHUTDOWN">

  <Listener className="org.apache.catalina.startup.VersionLoggerListener" />

  <!-- Security listener. Documentation at /docs/config/listeners.html

  <Listener className="org.apache.catalina.security.SecurityListener" />

  -->

  <!--APR library loader. Documentation at /docs/apr.html -->

  <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />

  <!-- Prevent memory leaks due to use of particular java/javax APIs-->

  <Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener" />

  <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />

  <Listener className="org.apache.catalina.core.ThreadLocalLeakPreventionListener" />

  <!-- Global JNDI resources

       Documentation at /docs/jndi-resources-howto.html

  -->

  <GlobalNamingResources>

    <!-- Editable user database that can also be used by

         UserDatabaseRealm to authenticate users

    -->

    <Resource name="UserDatabase" auth="Container"

              type="org.apache.catalina.UserDatabase"

              description="User database that can be updated and saved"

              factory="org.apache.catalina.users.MemoryUserDatabaseFactory"

              pathname="conf/tomcat-users.xml" />

  </GlobalNamingResources>

  <!-- A "Service" is a collection of one or more "Connectors" that share

       a single "Container" Note:  A "Service" is not itself a "Container",

       so you may not define subcomponents such as "Valves" at this level.

       Documentation at /docs/config/service.html

   -->

  <Service name="Catalina">

    <!--The connectors can use a shared executor, you can define one or more named thread pools-->

    <!--

    <Executor name="tomcatThreadPool" namePrefix="catalina-exec-"

        maxThreads="150" minSpareThreads="4"/>

    -->

    <!-- A "Connector" represents an endpoint by which requests are received

         and responses are returned. Documentation at :

         Java HTTP Connector: /docs/config/http.html (blocking & non-blocking)

         Java AJP  Connector: /docs/config/ajp.html

         APR (HTTP/AJP) Connector: /docs/apr.html

         Define a non-SSL/TLS HTTP/1.1 Connector on port 8080

    -->

    <Connector port="8080" protocol="HTTP/1.1"

               connectionTimeout="20000"

               redirectPort="8443" />

    <!-- A "Connector" using the shared thread pool-->

    <!--

    <Connector executor="tomcatThreadPool"

               port="8080" protocol="HTTP/1.1"

               connectionTimeout="20000"

               redirectPort="8443" />

    -->

    <!-- Define a SSL/TLS HTTP/1.1 Connector on port 8443

         This connector uses the NIO implementation that requires the JSSE

         style configuration. When using the APR/native implementation, the

         OpenSSL style configuration is required as described in the APR/native

         documentation -->

    <!--

    <Connector port="8443" protocol="org.apache.coyote.http11.Http11NioProtocol"

               maxThreads="150" SSLEnabled="true" scheme="https" secure="true"

               clientAuth="false" sslProtocol="TLS" />

    -->

    <!-- Define an AJP 1.3 Connector on port 8009 -->

    <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />

    <!-- An Engine represents the entry point (within Catalina) that processes

         every request.  The Engine implementation for Tomcat stand alone

         analyzes the HTTP headers included with the request, and passes them

         on to the appropriate Host (virtual host).

         Documentation at /docs/config/engine.html -->

    <!-- You should set jvmRoute to support load-balancing via AJP ie :

    <Engine name="Catalina" defaultHost="localhost" jvmRoute="jvm1">

    -->

    <Engine name="Catalina" defaultHost="localhost">

      <!--For clustering, please take a look at documentation at:

          /docs/cluster-howto.html  (simple how to)

          /docs/config/cluster.html (reference documentation) -->

      <!--

      <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"/>

      -->

      <!-- Use the LockOutRealm to prevent attempts to guess user passwords

           via a brute-force attack -->

      <Realm className="org.apache.catalina.realm.LockOutRealm">

        <!-- This Realm uses the UserDatabase configured in the global JNDI

             resources under the key "UserDatabase".  Any edits

             that are performed against this UserDatabase are immediately

             available for use by the Realm.  -->

        <Realm className="org.apache.catalina.realm.UserDatabaseRealm"

               resourceName="UserDatabase"/>

      </Realm>

      <Host name="localhost"  appBase="webapps"

            unpackWARs="true" autoDeploy="true">

        <!-- SingleSignOn valve, share authentication between web applications

             Documentation at: /docs/config/valve.html -->

        <!--

        <Valve className="org.apache.catalina.authenticator.SingleSignOn" />

        -->

        <!-- Access log processes all example.

             Documentation at: /docs/config/valve.html

             Note: The pattern used is equivalent to using pattern="common" -->

        <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"

               prefix="localhost_access_log" suffix=".txt"

               pattern="%h %l %u %t &quot;%r&quot; %s %b" />

      </Host>

    </Engine>

  </Service>

</Server>

Create the ConfigMap from the custom configuration file, for the application pods to mount:

[root@master ~]# kubectl create configmap tomcat-configmap --from-file=/root/server.xml

[root@master ~]# kubectl get configmap

NAME               DATA   AGE

tomcat-configmap   1      8s

[root@master ~]# kubectl describe configmap tomcat-configmap  #view the ConfigMap contents; they match the configuration file above

.......
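The full data can also be dumped as YAML if needed (a hedged alternative, output omitted):

[root@master ~]# kubectl get configmap tomcat-configmap -o yaml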

Create the Deployment-managed application pods, mounting both the PVC-backed site directory and the ConfigMap configuration file (a double mount):

[root@master ~]# cd /tomcat/

[root@master tomcat]# vim tomcat-deployment.yaml

apiVersion: extensions/v1beta1

kind: Deployment

metadata:

  name: www.example

  labels:

    app: tomcat-httpd

spec:

  replicas: 2

  selector:

    matchLabels:

      app: tomcat-application

      tier: frontend

  template:

    metadata:

      name: tomcat-application

      labels:

        app: tomcat-application

        tier: frontend

    spec:

      restartPolicy: Always

      containers:

      - name: tomcat-application

        image: 192.168.43.166:5000/mytomcat8:v1

        imagePullPolicy: IfNotPresent

        env:

        - name: USER

          value: root

        livenessProbe:

          exec:

            command:

            - cat

            - /usr/local/apache-tomcat-8.0.32/webapps/ROOT/index.html

          initialDelaySeconds: 40

          periodSeconds: 5

          failureThreshold: 3

          timeoutSeconds: 10

        readinessProbe:

          exec:

            command:

            - cat

            - /usr/local/apache-tomcat-8.0.32/webapps/ROOT/index.html

          initialDelaySeconds: 40

          periodSeconds: 5

          failureThreshold: 3

          timeoutSeconds: 10

        volumeMounts:

        - name: httpconfigmap-volume

          mountPath: /usr/local/apache-tomcat-8.0.32/conf/server.xml  #the original configuration file content is completely unchanged

          subPath: path/to/server.xml

        - mountPath: /tmp/html/   #mount a PV-backed directory; later the site directory in the ConfigMap is switched to this path to demonstrate the decoupled configuration

          name: web-directory8

      volumes:

      - name: httpconfigmap-volume

        configMap:

          name: tomcat-configmap

          defaultMode: 0777

          items:

          - key: server.xml

            path: path/to/server.xml

      - persistentVolumeClaim:

          claimName: httpd-pvc

        name: web-directory8

---

kind: Service

apiVersion: v1

metadata:

  name: my-service

spec:

  type: NodePort

  selector:

    app: tomcat-application

    tier: frontend

  ports:

  - protocol: TCP

    port: 8080

    targetPort: 8080

    nodePort: 32222

#Mounting the Tomcat configuration file from a ConfigMap; for reference see the CSDN post "k8s容器挂载配置文件" by weixin_34102807

[root@master tomcat]# kubectl apply -f tomcat-deployment.yaml

[root@master tomcat]# kubectl get pod -o wide

NAME                           READY   STATUS    RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES

www.example-7549fd47f4-6ww7r   1/1     Running   0          87s   10.244.1.26   node1   <none>           <none>

www.example-7549fd47f4-q2h4s   1/1     Running   0          87s   10.244.2.27   node2   <none>           <none>

[root@master tomcat]# kubectl get deployment

NAME          READY   UP-TO-DATE   AVAILABLE   AGE

www.example   2/2     2            2           90s

[root@master tomcat]# kubectl get service

NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE

kubernetes   ClusterIP   10.1.0.1     <none>        443/TCP          15d

my-service   NodePort    10.1.52.45   <none>        8080:32222/TCP   93s

At this point the two Tomcat services still serve the site directory from the original configuration, i.e. appBase "webapps", /usr/local/apache-tomcat.../webapps/ROOT/index.html, with content 111:

[root@master tomcat]# curl node1:32222

111

[root@master tomcat]# curl node2:32222

111

Check the mapping between the PV's physical storage on the host and the mount inside the pods:

[root@master tomcat]# mkdir /opt/pv/ROOT

[root@master tomcat]# echo 222 > /opt/pv/ROOT/index.html

[root@master tomcat]# cat /opt/pv/ROOT/index.html

222

[root@master tomcat]# kubectl exec -it www.example-7549fd47f4-6ww7r ls /tmp/html

ROOT

[root@master tomcat]# kubectl exec -it www.example-7549fd47f4-6ww7r cat /tmp/html/ROOT/index.html

222

[root@master tomcat]# kubectl exec -it www.example-7549fd47f4-q2h4s ls /tmp/html

ROOT

[root@master tomcat]# kubectl exec -it www.example-7549fd47f4-q2h4s cat /tmp/html/ROOT/index.html

222

By editing the ConfigMap contents, the configuration of the running pods can be changed in a decoupled way; change the site directory to /tmp/html:

[root@master tomcat]# kubectl get configmap

NAME               DATA   AGE

tomcat-configmap   1      87m

[root@master tomcat]# kubectl edit configmap tomcat-configmap      #edit the ConfigMap contents in place

xxxxxx

  <Host name="localhost"  appBase="/tmp/html/"    #change the site directory

:wq

Waiting for the change to propagate can take a very long time, and here waiting alone is not enough: the pods have to be recreated before the configuration inside them is updated. Alternatively, run kubectl delete pod with the two pod names; the replacement pods that the Deployment creates apply the new configuration immediately.
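For example, using the pod names from this run (a hedged sketch, output not shown):

[root@master tomcat]# kubectl delete pod www.example-7549fd47f4-6ww7r www.example-7549fd47f4-q2h4s

[root@master tomcat]# kubectl get pod -o wide     #wait until the replacement pods are Running before re-testing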

[root@master tomcat]# curl node1:32222

222     #now correct (after the pods were recreated)

[root@master tomcat]# curl node2:32222

222     #now correct (after the pods were recreated)

5. Deploy the Ingress-nginx-controller: the ingress load-balancer pods. Note: two replicas, intended to be pinned to node1 and node2 (the node pinning is not configured here yet).

1) Deploy the ingress controller pods:

[root@master tomcat]# mkdir /ingress

[root@master tomcat]# cd /ingress/

[root@master ingress]# rz

Upload the ingress image archive and the YAML manifest.

[root@master ingress]# ls

mandatory.yaml  nginx-ingress-controller.tar

[root@master ingress]# docker load < nginx-ingress-controller.tar   #load the image locally so that startup is faster

[root@master ingress]# docker images |grep nginx-ingress

quay.io/kubernetes-ingress-controller/nginx-ingress-controller                0.24.1              98675eb54d0e        3 months ago        631MB

[root@master ingress]# docker tag quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.24.1 192.168.43.166:5000/nginx-ingress-controller:0.24.1

[root@master ingress]# docker images |grep nginx-ingress

192.168.43.166:5000/nginx-ingress-controller                                  0.24.1              98675eb54d0e        3 months ago        631MB

quay.io/kubernetes-ingress-controller/nginx-ingress-controller                0.24.1              98675eb54d0e        3 months ago        631MB

[root@master ingress]# docker push 192.168.43.166:5000/nginx-ingress-controller:0.24.1

[root@master ingress]# curl http://192.168.43.166:5000/v2/_catalog

{"repositories":["centos","mytomcat8","nginx-ingress-controller"]}

[root@master ingress]# curl http://192.168.43.166:5000/v2/nginx-ingress-controller/tags/list

{"name":"nginx-ingress-controller","tags":["0.24.1"]}

[root@master ingress]# vim mandatory.yaml

apiVersion: v1

kind: Namespace

metadata:

  name: ingress-nginx

  labels:

    app.kubernetes.io/name: ingress-nginx

    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap

apiVersion: v1

metadata:

  name: nginx-configuration

  namespace: ingress-nginx

  labels:

    app.kubernetes.io/name: ingress-nginx

    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap

apiVersion: v1

metadata:

  name: tcp-services

  namespace: ingress-nginx

  labels:

    app.kubernetes.io/name: ingress-nginx

    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap

apiVersion: v1

metadata:

  name: udp-services

  namespace: ingress-nginx

  labels:

    app.kubernetes.io/name: ingress-nginx

    app.kubernetes.io/part-of: ingress-nginx

---

apiVersion: v1

kind: ServiceAccount

metadata:

  name: nginx-ingress-serviceaccount

  namespace: ingress-nginx

  labels:

    app.kubernetes.io/name: ingress-nginx

    app.kubernetes.io/part-of: ingress-nginx

---

apiVersion: rbac.authorization.k8s.io/v1beta1

kind: ClusterRole

metadata:

  name: nginx-ingress-clusterrole

  labels:

    app.kubernetes.io/name: ingress-nginx

    app.kubernetes.io/part-of: ingress-nginx

rules:

  - apiGroups:

      - ""

    resources:

      - configmaps

      - endpoints

      - nodes

      - pods

      - secrets

    verbs:

      - list

      - watch

  - apiGroups:

      - ""

    resources:

      - nodes

    verbs:

      - get

  - apiGroups:

      - ""

    resources:

      - services

    verbs:

      - get

      - list

      - watch

  - apiGroups:

      - "extensions"

    resources:

      - ingresses

    verbs:

      - get

      - list

      - watch

  - apiGroups:

      - ""

    resources:

      - events

    verbs:

      - create

      - patch

  - apiGroups:

      - "extensions"

    resources:

      - ingresses/status

    verbs:

      - update

---

apiVersion: rbac.authorization.k8s.io/v1beta1

kind: Role

metadata:

  name: nginx-ingress-role

  namespace: ingress-nginx

  labels:

    app.kubernetes.io/name: ingress-nginx

    app.kubernetes.io/part-of: ingress-nginx

rules:

  - apiGroups:

      - ""

    resources:

      - configmaps

      - pods

      - secrets

      - namespaces

    verbs:

      - get

  - apiGroups:

      - ""

    resources:

      - configmaps

    resourceNames:

      # Defaults to "<election-id>-<ingress-class>"

      # Here: "<ingress-controller-leader>-<nginx>"

      # This has to be adapted if you change either parameter

      # when launching the nginx-ingress-controller.

      - "ingress-controller-leader-nginx"

    verbs:

      - get

      - update

  - apiGroups:

      - ""

    resources:

      - configmaps

    verbs:

      - create

  - apiGroups:

      - ""

    resources:

      - endpoints

    verbs:

      - get

---

apiVersion: rbac.authorization.k8s.io/v1beta1

kind: RoleBinding

metadata:

  name: nginx-ingress-role-nisa-binding

  namespace: ingress-nginx

  labels:

    app.kubernetes.io/name: ingress-nginx

    app.kubernetes.io/part-of: ingress-nginx

roleRef:

  apiGroup: rbac.authorization.k8s.io

  kind: Role

  name: nginx-ingress-role

subjects:

  - kind: ServiceAccount

    name: nginx-ingress-serviceaccount

    namespace: ingress-nginx

---

apiVersion: rbac.authorization.k8s.io/v1beta1

kind: ClusterRoleBinding

metadata:

  name: nginx-ingress-clusterrole-nisa-binding

  labels:

    app.kubernetes.io/name: ingress-nginx

    app.kubernetes.io/part-of: ingress-nginx

roleRef:

  apiGroup: rbac.authorization.k8s.io

  kind: ClusterRole

  name: nginx-ingress-clusterrole

subjects:

  - kind: ServiceAccount

    name: nginx-ingress-serviceaccount

    namespace: ingress-nginx

---

apiVersion: apps/v1

kind: Deployment

metadata:

  name: nginx-ingress-controller

  namespace: ingress-nginx

  labels:

    app.kubernetes.io/name: ingress-nginx

    app.kubernetes.io/part-of: ingress-nginx

spec:

  replicas: 2

  selector:

    matchLabels:

      app.kubernetes.io/name: ingress-nginx

      app.kubernetes.io/part-of: ingress-nginx

  template:

    metadata:

      labels:

        app.kubernetes.io/name: ingress-nginx

        app.kubernetes.io/part-of: ingress-nginx

      annotations:

        prometheus.io/port: "10254"

        prometheus.io/scrape: "true"

    spec:

      #nodeSelector:  #added: pin the ingress-nginx-controller load balancer to fixed nodes

        #kubernetes.io/hostname: node1  #added; the label value comes from: kubectl get nodes -o wide --show-labels

        #kubernetes.io/hostname: node2  #added; note that nodeSelector cannot repeat the same key, so to target both nodes use a shared custom label or node affinity instead

      hostNetwork: true        #added: use the host network, so the pod shares its node's network namespace

      serviceAccountName: nginx-ingress-serviceaccount

      containers:

        - name: nginx-ingress-controller

          image: 192.168.43.166:5000/nginx-ingress-controller:0.24.1

          imagePullPolicy: IfNotPresent    #added: avoids always pulling from the upstream registry; the image can be pre-loaded locally first

          args:

            - /nginx-ingress-controller

            - --configmap=$(POD_NAMESPACE)/nginx-configuration

            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services

            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services

            - --publish-service=$(POD_NAMESPACE)/ingress-nginx

            - --annotations-prefix=nginx.ingress.kubernetes.io

          securityContext:

            allowPrivilegeEscalation: true

            capabilities:

              drop:

                - ALL

              add:

                - NET_BIND_SERVICE

            # www-data -> 33

            runAsUser: 33

          env:

            - name: POD_NAME

              valueFrom:

                fieldRef:

                  fieldPath: metadata.name

            - name: POD_NAMESPACE

              valueFrom:

                fieldRef:

                  fieldPath: metadata.namespace

          ports:

            - name: http

              containerPort: 80  #container port of the ingress-nginx controller

              hostPort: 80     #added: host port mapped to the container port; make sure nothing else on the nodes uses port 80

            #- name: https

              #containerPort: 443

          livenessProbe:

            failureThreshold: 3

            httpGet:

              path: /healthz

              port: 10254

              scheme: HTTP

            initialDelaySeconds: 10

            periodSeconds: 10

            successThreshold: 1

            timeoutSeconds: 10

          readinessProbe:

            failureThreshold: 3

            httpGet:

              path: /healthz

              port: 10254

              scheme: HTTP

            periodSeconds: 10

            successThreshold: 1

            timeoutSeconds: 10

---

[root@master ingress]# kubectl apply -f mandatory.yaml

[root@master ingress]# kubectl get ns     #list the namespaces (kubectl get namespace also works)

NAME            STATUS   AGE

default         Active   16d

ingress-nginx   Active   24s

kube-public     Active   16d

kube-system     Active   16d

[root@master ingress]# kubectl get serviceaccount -n ingress-nginx  #list the service accounts

NAME                           SECRETS   AGE

default                        1         54s

nginx-ingress-serviceaccount   1         54s

[root@master ingress]# kubectl get configmap -n ingress-nginx  #list the ConfigMaps

NAME                  DATA   AGE

nginx-configuration   0      63s

tcp-services          0      63s

udp-services          0      63s

[root@master ingress]# kubectl get clusterrole -n ingress-nginx |grep ingress   #check the ClusterRole

nginx-ingress-clusterrole                                              2m7s

[root@master ingress]# kubectl get role -n ingress-nginx |grep ingress   #check the Role

nginx-ingress-role   2m41s

[root@master ingress]# kubectl get deployment -n ingress-nginx  #check the Deployment

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE

nginx-ingress-controller   2/2     2            2           4m49s

[root@master ingress]# kubectl get rs -n ingress-nginx    #check the ReplicaSet

NAME                                  DESIRED   CURRENT   READY   AGE

nginx-ingress-controller-6bc4fc4d7d   2         2         2       5m14s

[root@master ingress]# kubectl get pod -n ingress-nginx -o wide

NAME                                        READY   STATUS    RESTARTS   AGE     IP               NODE    NOMINATED NODE   READINESS GATES

nginx-ingress-controller-6bc4fc4d7d-65ksq   1/1     Running   0          5m53s   192.168.43.168   node2   <none>           <none>

nginx-ingress-controller-6bc4fc4d7d-hxwq5   1/1     Running   0          5m53s   192.168.43.167   node1   <none>           <none>
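Because the controller pods use hostNetwork and hostPort 80, port 80 should now be bound directly on node1 and node2. A hedged verification from one of the nodes (not captured from this run):

[root@node1 ~]# ss -lntp | grep ':80 '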

2) Create the Ingress; after configuring the front end and back end, the rules are injected into the nginx configuration of the pods created by the Ingress-nginx-controller:

Ingress: ties the front end to the back end, i.e. which front-end domain name maps to which back-end Service.

[root@master ingress]# vim ingress.yaml

apiVersion: extensions/v1beta1

kind: Ingress

metadata:

  name: ingress-nginx

spec:

  rules:

  - host: www.example.com   #front-end domain name; it must resolve to the IPs of the ingress nodes

    http:

      paths:

      - backend:

          serviceName: my-service  #back-end Service, which in turn selects the application pods

          servicePort: 8080             #back-end Service port

[root@master ingress]# kubectl apply -f ingress.yaml

[root@master ingress]# kubectl get ingress

NAME            HOSTS             ADDRESS   PORTS   AGE

ingress-nginx   www.example.com             80      7s
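To confirm that the rule was injected, the generated nginx configuration inside one of the controller pods can be searched for the host name (a hedged example using a pod name from this run; the exact output will vary):

[root@master ingress]# kubectl exec -n ingress-nginx nginx-ingress-controller-6bc4fc4d7d-hxwq5 -- grep -n 'www.example.com' /etc/nginx/nginx.conf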

[root@master ingress]# cat /etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4

::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

192.168.43.166 master

192.168.43.167 node1

192.168.43.168 node2

192.168.43.167 www.example.com

192.168.43.168 www.example.com

[root@master ingress]# curl www.example.com

222

[root@master ingress]# curl www.example.com

222

[root@master ingress]# curl www.example.com

222

From an external Windows machine, access http://www.example.com in a browser.

This requires adding hosts entries on the Windows machine:

192.168.43.167 www.example.com

192.168.43.168 www.example.com
