Kubernetes microservices: MetalLB with ingress-nginx for layer-7 load balancing

Contents

1 Setting up MetalLB
1.1 What MetalLB does and how it works
1.2 MetalLB features
1.3 Deploying MetalLB
1.3.1 Create a Deployment and a Service
1.3.2 Download the MetalLB manifest
1.3.3 Pull the images with docker
1.3.4 Push the images to the private registry
1.3.5 Replace the upstream registry addresses with the private registry
1.3.6 Apply the manifest to deploy the service
1.3.7 Configure the MetalLB address pool
2 Ingress-nginx: principles and deployment
2.1 ingress-nginx features
2.2 How Ingress-Nginx works
2.3 How MetalLB and Ingress-Nginx work together
2.4 Deploying Ingress
2.4.1 Download the ingress-nginx YAML manifest
2.4.2 Pull the images and push them to the private registry
2.4.3 Change the image pull addresses in the manifest
2.4.4 Install Ingress-nginx
2.5 Testing Ingress-nginx
2.5.1 Check the deployment and change the service type
2.5.2 Create the Ingress resource
2.5.3 Apply the Ingress resource
2.5.4 Verify that ingress-nginx works
2.5.5 Clean up
3 Advanced ingress-nginx usage
3.1 Path-based routing to microservices
3.1.1 Create two nginx versions, v1 and v2
3.1.2 Expose the ports and set the service type
3.1.3 Modify the default index page inside each pod
3.1.4 Verify the services work
3.1.5 Create the Ingress resource
3.1.6 Explanation of the path-based Ingress manifest
3.1.7 Apply the Ingress manifest and test
3.2 Host-based routing to microservices
3.2.1 Create the Ingress resource
3.2.2 Apply and verify access
3.2.3 Set up TLS encryption
3.2.4 Set up basic auth
3.2.5 Rewrite and redirect with Ingress


1 Setting up MetalLB

1.1 What MetalLB does and how it works

  1. Providing external IP addresses:

    • MetalLB's main job is to give Services in a Kubernetes cluster externally reachable IP addresses. Where no cloud-provider load balancer is available, MetalLB fills that role.
    • MetalLB supports two address-announcement modes: layer-2 mode and BGP (Border Gateway Protocol) mode. A BGP-mode configuration sketch follows this list; the hands-on part of this article uses layer-2 mode.
      • Layer-2 mode: the service IP is announced on the local network via ARP (Address Resolution Protocol), directing traffic to the node that currently owns that IP.
      • BGP mode: MetalLB speaks BGP with the routers in the network, advertises the service IPs, and lets external traffic be routed into the cluster.
  2. Load-balancing traffic:

    • Once traffic reaches the node that owns the service IP, it is distributed across the backend Pods according to the configured policy.
    • In layer-2 mode, for example, the owning node receives all the traffic and the service proxy then spreads it across the Pods (round robin, random, and so on).
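For reference, a BGP-mode setup replaces the L2Advertisement used later in this article with a BGPPeer and a BGPAdvertisement. A minimal sketch, assuming a router at 192.168.239.1 and example AS numbers (placeholders, not from this article's environment):

# Hypothetical BGP-mode configuration; the peer address and AS numbers are placeholders.
cat <<'EOF' | kubectl apply -f -
apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: example-router
  namespace: metallb-system
spec:
  myASN: 64500                 # AS number MetalLB announces from (example value)
  peerASN: 64501               # AS number of the upstream router (example value)
  peerAddress: 192.168.239.1   # upstream router address (example value)
---
apiVersion: metallb.io/v1beta1
kind: BGPAdvertisement
metadata:
  name: example-bgp-adv
  namespace: metallb-system
spec:
  ipAddressPools:
  - first-pool                 # the IPAddressPool created in section 1.3.7
EOF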

MetalLB installation docs: https://metallb.universe.tf/installation/

1.2 MetalLB features

Assigns VIPs to LoadBalancer Services.

The LoadBalancer Service type

LoadBalancer is similar to NodePort: both expose a port to the outside world. The difference is that LoadBalancer adds a load-balancing device outside the cluster, which must be provided by the environment; requests sent to that device are load balanced and then forwarded into the cluster.
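For illustration, a minimal LoadBalancer Service looks like the sketch below (section 1.3.1 generates an equivalent one with kubectl expose). The names and labels are placeholders; without a load-balancer implementation such as MetalLB, its EXTERNAL-IP stays <pending>:

# Minimal sketch of a LoadBalancer Service; --dry-run=client only prints it without creating anything.
cat <<'EOF' | kubectl apply --dry-run=client -o yaml -f -
apiVersion: v1
kind: Service
metadata:
  name: demo-lb              # hypothetical name
spec:
  type: LoadBalancer         # asks the environment for an external VIP / load balancer
  selector:
    app: demo                # hypothetical pod label
  ports:
  - port: 80                 # port exposed on the external IP
    targetPort: 80           # container port
EOF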

1.3 Deploying MetalLB

1.3.1 Create a Deployment and a Service

[root@k8s-master metalb]# kubectl get service
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   3d14h

[root@k8s-master metalb]# kubectl create deployment dep \
--image nginx:latest \
--dry-run=client \
--port 80 --replicas 3 -o yaml > dep.yml

# The edited file:
[root@k8s-master metalb]# cat dep.yml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: dep
  name: dep
spec:
  replicas: 3
  selector:
    matchLabels:
      app: dep
  template:
    metadata:
      labels:
        app: dep
    spec:
      containers:
      - image: nginx:latest
        name: nginx
        ports:
        - containerPort: 80

[root@k8s-master metalb]# kubectl apply -f dep.yml 

[root@k8s-master metalb]# kubectl get pods 
NAME                   READY   STATUS    RESTARTS   AGE
dep-79fcdcdfc7-27qzq   1/1     Running   0          63s
dep-79fcdcdfc7-sjjzz   1/1     Running   0          63s
dep-79fcdcdfc7-x7rdz   1/1     Running   0          63s

# No Service has been created yet
[root@k8s-master metalb]# kubectl get service
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)        AGE
kubernetes   ClusterIP      10.96.0.1       <none>            443/TCP        3d15h

# Create the Service
[root@k8s-master metalb]# kubectl expose deployment dep \
--name=svc-nginx \
--type=LoadBalancer \
--port=80 --target-port=80 \
--dry-run=client -o yaml >> dep.yml 

# After editing
[root@k8s-master metalb]# cat dep.yml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: dep
  name: dep
spec:
  replicas: 3
  selector:
    matchLabels:
      app: dep
  template:
    metadata:
      labels:
        app: dep
    spec:
      containers:
      - image: nginx:latest
        name: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: dep
  name: svc-nginx
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: dep
  type: LoadBalancer


[root@k8s-master metalb]# kubectl apply -f dep.yml 

# No external IP is assigned because this is a bare-metal cluster; a plugin such as MetalLB is needed
[root@k8s-master metalb]# kubectl get service
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)        AGE
kubernetes   ClusterIP      10.96.0.1       <none>            443/TCP        3d15h
svc-nginx    LoadBalancer   10.106.13.221   <pending>         80/TCP         69m

1.3.2 Download the MetalLB manifest

[root@k8s-master metalb]# wget https://raw.githubusercontent.com/metallb/metallb/v0.14.8/config/manifests/metallb-native.yaml

# The manifest references these two images (line numbers within metallb-native.yaml):
1698         image: quay.io/metallb/controller:v0.14.8
1795         image: quay.io/metallb/speaker:v0.14.8

1.3.3 Pull the images with docker

# Pull the images that will be pushed to the private registry
[root@harbor harbor]# docker pull quay.io/metallb/controller:v0.14.8
[root@harbor harbor]# docker pull quay.io/metallb/speaker:v0.14.8

1.3.4 Push the images to the private registry


[root@harbor ~]# docker login reg.shuyan.com
Username: admin
Password: 
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store


[root@harbor harbor]# docker tag quay.io/metallb/controller:v0.14.8 reg.shuyan.com/metallb/controller:v0.14.8
[root@harbor harbor]# docker push reg.shuyan.com/metallb/controller:v0.14.8

[root@harbor ~]# docker tag quay.io/metallb/speaker:v0.14.8 reg.shuyan.com/metallb/speaker:v0.14.8
[root@harbor ~]# docker push reg.shuyan.com/metallb/speaker:v0.14.8 

1.3.5 Replace the upstream registry addresses with the private registry

[root@k8s-master metalb]# ls 
metallb-native.yaml

[root@k8s-master metalb]# sed -i 's/quay.io\/metallb\/controller:v0.14.8/reg.shuyan.com\/metallb\/controller:v0.14.8/g' metallb-native.yaml
[root@k8s-master metalb]# sed -i 's/quay.io\/metallb\/speaker:v0.14.8/reg.shuyan.com\/metallb\/speaker:v0.14.8/g' metallb-native.yaml

1.3.6 Apply the manifest to deploy the service

[root@k8s-master metalb]# kubectl apply -f metallb-native.yaml 
namespace/metallb-system created
customresourcedefinition.apiextensions.k8s.io/bfdprofiles.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bgpadvertisements.metallb.io created
customresourcedefinition.apiextensions.k8s.io/bgppeers.metallb.io created
customresourcedefinition.apiextensions.k8s.io/communities.metallb.io created
customresourcedefinition.apiextensions.k8s.io/ipaddresspools.metallb.io created
customresourcedefinition.apiextensions.k8s.io/l2advertisements.metallb.io created
customresourcedefinition.apiextensions.k8s.io/servicel2statuses.metallb.io created
serviceaccount/controller created
serviceaccount/speaker created
role.rbac.authorization.k8s.io/controller created
role.rbac.authorization.k8s.io/pod-lister created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/controller created
rolebinding.rbac.authorization.k8s.io/pod-lister created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
configmap/metallb-excludel2 created
secret/metallb-webhook-cert created
service/metallb-webhook-service created
deployment.apps/controller created
daemonset.apps/speaker created
validatingwebhookconfiguration.admissionregistration.k8s.io/metallb-webhook-configuration created

# Check that the namespace was created
[root@k8s-master metalb]# kubectl get namespaces 
NAME              STATUS   AGE
default           Active   3d14h
dev               Active   45h
kube-flannel      Active   3d14h
kube-node-lease   Active   3d14h
kube-public       Active   3d14h
kube-system       Active   3d14h
metallb-system    Active   14s

# Check that the images were pulled correctly
[root@k8s-master metalb]# kubectl -n metallb-system get pods 
NAME                          READY   STATUS    RESTARTS   AGE
controller-65957f77c8-mt8w8   1/1     Running   0          52s
speaker-f5znb                 1/1     Running   0          52s
speaker-slsf7                 1/1     Running   0          52s
speaker-wj79v                 1/1     Running   0          52s

1.3.7 Configure the MetalLB address pool

MetalLB configuration docs: https://metallb.universe.tf/configuration/

Copy the example configuration from the docs above and adapt it:

[root@k8s-master metalb]# vim configmap.yml 
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: first-pool
  namespace: metallb-system    # must match the namespace created by the manifest above
spec:
  addresses:
  - 192.168.239.240-192.168.239.250   # the pool must contain unused addresses on the local subnet

---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example
  namespace: metallb-system    # must match the namespace created by the manifest above
spec:
  ipAddressPools:
  - first-pool

Apply the address-pool manifest and test access

[root@k8s-master metalb]# kubectl apply -f configmap.yml 
ipaddresspool.metallb.io/first-pool created
l2advertisement.metallb.io/example created
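Optionally, confirm that both objects landed in the right namespace (a quick check; the exact output columns depend on the MetalLB version):

# List the address pool and the L2 advertisement that were just created
kubectl -n metallb-system get ipaddresspools.metallb.io,l2advertisements.metallb.io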


[root@k8s-master metalb]# kubectl get service
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)        AGE
kubernetes   ClusterIP      10.96.0.1       <none>            443/TCP        3d15h
svc-nginx    LoadBalancer   10.106.13.221   192.168.239.240   80:30668/TCP   12s


[root@k8s-master metalb]# curl 192.168.239.240
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

2 Ingress-nginx: principles and deployment

Ingress-nginx official docs: https://kubernetes.github.io/ingress-nginx/deploy/

2.1 ingress-nginx features

  • A global load-balancing service that proxies traffic to different backend Services; it works at layer 7.

  • Ingress consists of two parts: the Ingress controller and the Ingress resource (the rules).

  • The Ingress Controller provides the actual proxying according to the Ingress objects you define.

  • Common reverse-proxy projects such as Nginx, HAProxy, Envoy and Traefik all maintain dedicated Ingress Controllers for Kubernetes.

2.2 How Ingress-Nginx works

Defining routing rules:

  • Ingress-Nginx is a Kubernetes Ingress controller: it routes external HTTP(S) traffic to services inside the cluster according to the rules defined in Ingress resources.
  • An Ingress resource can define multiple rules; each rule can specify a host name (such as example.com) and one or more paths (such as /path1 and /path2) and map those paths to backend services.

Reverse proxying and load balancing:

  • When an external request reaches the Ingress-Nginx controller, it acts as a reverse proxy and forwards the request to the appropriate backend service, as determined by the defined rules.
  • Ingress-Nginx also load balances, spreading traffic across multiple backend Pods; it supports several algorithms such as round robin and least connections.
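As a sketch of the rule model described above (hypothetical host and backend Service names; the manifests actually used in this article appear in sections 2.5.2, 3.1 and 3.2):

# Hypothetical Ingress: example.com/path1 and example.com/path2 go to two different backend Services.
# --dry-run=client only validates and prints the object; drop it to actually create the Ingress.
cat <<'EOF' | kubectl apply --dry-run=client -o yaml -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-rules                 # hypothetical name
spec:
  ingressClassName: nginx          # handled by the ingress-nginx controller
  rules:
  - host: example.com
    http:
      paths:
      - path: /path1
        pathType: Prefix
        backend:
          service:
            name: service-a        # hypothetical backend Service
            port:
              number: 80
      - path: /path2
        pathType: Prefix
        backend:
          service:
            name: service-b        # hypothetical backend Service
            port:
              number: 80
EOF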

2.3 How MetalLB and Ingress-Nginx work together

Deploy MetalLB:

  • Deploy MetalLB in the cluster and configure the pool of IP addresses it may hand out. These addresses are used to expose services inside the cluster.

Deploy Ingress-Nginx:

  • Deploy the Ingress-Nginx controller. This usually creates one or more Services that expose the controller itself; they can be of type NodePort or LoadBalancer.
  • Since a bare-metal environment has no built-in LoadBalancer implementation, MetalLB stands in for it and exposes the Ingress-Nginx controller to the external network.

Configure Ingress resources:

  • Create Ingress resources to define the rules for HTTP(S) traffic. These rules tell Ingress-Nginx how to handle incoming requests.
  • The Ingress resources reference the Ingress-Nginx controller deployed above (via its ingress class).

How an Ingress connects to backend Services:

1. Change the service type

ingress-nginx creates its own Service, named ingress-nginx-controller; change its type to LoadBalancer (a non-interactive way to do this is sketched after this list).

2. Create the Ingress resource

The Ingress resource must name the backend Service explicitly, otherwise traffic cannot be forwarded correctly.
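For step 1, a non-interactive alternative to kubectl edit (which is what section 2.5.1 below uses) is a one-line patch; a minimal sketch:

# Switch the ingress-nginx controller Service to type LoadBalancer without opening an editor
kubectl -n ingress-nginx patch service ingress-nginx-controller \
  -p '{"spec": {"type": "LoadBalancer"}}'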

2.4 Deploying Ingress

2.4.1 Download the ingress-nginx YAML manifest

[root@k8s-master metalb]# mkdir ingress

[root@k8s-master metalb]# cd ingress/

[root@k8s-master ingress]# wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.11.2/deploy/static/provider/aws/deploy.yaml

2.4.2 Pull the images and push them to the private registry

[root@k8s-master ingress]# vim deploy.yaml 

# Image references inside deploy.yaml (line numbers shown):
451         image: registry.k8s.io/ingress-nginx/controller:v1.11.2
552         image: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3

[root@harbor ~]# docker pull registry.k8s.io/ingress-nginx/controller:v1.11.2

[root@harbor ~]# docker pull registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3


[root@harbor ~]# docker tag registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3 reg.shuyan.com/ingress-nginx/kube-webhook-certgen:v1.4.3

[root@harbor ~]# docker push reg.shuyan.com/ingress-nginx/kube-webhook-certgen:v1.4.3

[root@harbor harbor]# docker tag registry.k8s.io/ingress-nginx/controller:v1.11.2 reg.shuyan.com/ingress-nginx/controller:v1.11.2
[root@harbor harbor]# docker push reg.shuyan.com/ingress-nginx/controller:v1.11.2

2.4.3 Change the image pull addresses in the manifest

[root@k8s-master ingress]# ls 
deploy.yaml
[root@k8s-master ingress]# sed -i 's/registry.k8s.io\/ingress-nginx\/controller:v1.11.2/reg.shuyan.com\/ingress-nginx\/controller:v1.11.2/g' deploy.yaml
[root@k8s-master ingress]# sed -i 's/registry.k8s.io\/ingress-nginx\/kube-webhook-certgen:v1.4.3/reg.shuyan.com\/ingress-nginx\/kube-webhook-certgen:v1.4.3/g' deploy.yaml

2.4.4 Install Ingress-nginx

[root@k8s-master ingress]# kubectl apply -f deploy.yaml 
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
serviceaccount/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
configmap/ingress-nginx-controller created
service/ingress-nginx-controller created
service/ingress-nginx-controller-admission created
deployment.apps/ingress-nginx-controller created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
ingressclass.networking.k8s.io/nginx created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created

[root@k8s-master ingress]# kubectl get namespaces 
NAME              STATUS   AGE
default           Active   3d15h
dev               Active   46h
ingress-nginx     Active   37m
kube-flannel      Active   3d15h
kube-node-lease   Active   3d15h
kube-public       Active   3d15h
kube-system       Active   3d15h
metallb-system    Active   62m

2.5 Testing Ingress-nginx

2.5.1 Check the deployment and change the service type

[root@k8s-master ingress]# kubectl -n ingress-nginx get pods
NAME                                        READY   STATUS      RESTARTS   AGE
ingress-nginx-admission-create-dtnhp        0/1     Completed   0          40m
ingress-nginx-admission-patch-l9dp4         0/1     Completed   0          40m
ingress-nginx-controller-7d4db76476-hb9th   1/1     Running     0          40m

# Change the controller Service type to LoadBalancer
[root@k8s-master ~]# kubectl -n ingress-nginx edit svc ingress-nginx-controller
49   type: LoadBalancer

# Check that an external IP was assigned
[root@k8s-master ingress]# kubectl -n ingress-nginx get svc
NAME                                 TYPE           CLUSTER-IP       EXTERNAL-IP       PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.104.94.174    192.168.239.241   80:30654/TCP,443:32569/TCP   40m
ingress-nginx-controller-admission   ClusterIP      10.104.152.104   <none>            443/TCP                      40m

2.5.2 Create the Ingress resource

[root@k8s-master ingress]# kubectl create ingress webcluster \
--rule '/=svc-nginx:80' \
--class nginx \
--dry-run=client -o yaml > ingress.yml

# The edited file:
[root@k8s-master ingress]# cat ingress.yml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webcluster
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - backend:
          service:
            name: svc-nginx
            port:
              number: 80
        path: /            # requests for the site root are routed to the service named svc-nginx
        pathType: Prefix
        # Valid pathType values: Exact (exact match),
        # Prefix (prefix match),
        # ImplementationSpecific (left to the ingress class; ingress-nginx
        # uses it for regex paths, as in section 3.2.5)

2.5.3 Apply the Ingress resource

[root@k8s-master ingress]# kubectl apply -f ingress.yml 


# At this point svc-nginx no longer needs to be a LoadBalancer; it can be switched to ClusterIP
# and still load balance across its backend pods.
# ingress-nginx keeps the MetalLB-assigned address for itself and forwards incoming traffic to the
# backend service, much like an nginx reverse proxy: traffic first reaches the ingress-nginx
# controller and is then passed to the selected service.
# The backend services no longer talk to the outside world directly, so they do not need their own
# externally reachable LoadBalancer IPs.
# ingress-nginx manages all the services in one place and can apply complex matching rules.


# Change the service svc-nginx to type ClusterIP; it still load balances across the backend pods
[root@k8s-master metalb]# kubectl edit service svc-nginx 
     33   type: ClusterIP

# Verify the change
[root@k8s-master metalb]# kubectl get service svc-nginx 
NAME        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
svc-nginx   ClusterIP   10.106.13.221   <none>        80/TCP    6h50m

2.5.4 Verify that ingress-nginx works

[root@k8s-master metalb]# kubectl get pods -o wide 
NAME                   READY   STATUS    RESTARTS   AGE    IP            NODE        NOMINATED NODE   READINESS GATES
dep-79fcdcdfc7-27qzq   1/1     Running   0          7h2m   10.244.2.51   k8s-node2   <none>           <none>
dep-79fcdcdfc7-sjjzz   1/1     Running   0          7h2m   10.244.1.32   k8s-node1   <none>           <none>
dep-79fcdcdfc7-x7rdz   1/1     Running   0          7h2m   10.244.2.52   k8s-node2   <none>           <none>

[root@k8s-master metalb]# kubectl exec -it pods/dep-79fcdcdfc7-27qzq -- bash

root@dep-79fcdcdfc7-27qzq:/# echo this is `hostname -I` > /usr/share/nginx/html/index.html

[root@k8s-master metalb]# kubectl exec -it pods/dep-79fcdcdfc7-sjjzz -- bash
root@dep-79fcdcdfc7-sjjzz:/# echo this is `hostname -I` > /usr/share/nginx/html/index.html

[root@k8s-master metalb]# kubectl exec -it pods/dep-79fcdcdfc7-x7rdz -- bash
root@dep-79fcdcdfc7-x7rdz:/# echo this is `hostname -I` > /usr/share/nginx/html/index.html 

[root@k8s-master metalb]# kubectl get service svc-nginx 
NAME        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
svc-nginx   ClusterIP   10.106.13.221   <none>        80/TCP    7h4m
[root@k8s-master metalb]# curl 10.106.13.221
this is 10.244.2.51
[root@k8s-master metalb]# curl 10.106.13.221
this is 10.244.1.32
[root@k8s-master metalb]# curl 10.106.13.221
this is 10.244.2.52

[root@k8s-master metalb]# kubectl -n ingress-nginx get service ingress-nginx-controller 
NAME                       TYPE           CLUSTER-IP      EXTERNAL-IP       PORT(S)                      AGE
ingress-nginx-controller   LoadBalancer   10.104.94.174   192.168.239.241   80:30654/TCP,443:32569/TCP   6h53m

[root@k8s-master metalb]# curl 192.168.239.241
this is 10.244.2.51

[root@k8s-master metalb]# curl 192.168.239.241
this is 10.244.2.52

[root@k8s-master metalb]# curl 192.168.239.241
this is 10.244.1.32

2.5.5 Clean up

[root@k8s-master metalb]# cd ingress/
[root@k8s-master ingress]# ls 
deploy.yaml  ingress.yml

[root@k8s-master ingress]# cat ingress.yml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webcluster
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - backend:
          service:
            name: svc-nginx
            port:
              number: 80
        path: /
        pathType: Prefix

[root@k8s-master ingress]# kubectl delete -f ingress.yml 


[root@k8s-master ingress]# cd ..

[root@k8s-master metalb]# ls 
configmap.yml  dep.yml  ingress  metallb-native.yaml

[root@k8s-master metalb]# kubectl get deployments.apps dep 
NAME   READY   UP-TO-DATE   AVAILABLE   AGE
dep    3/3     3            3           7h19m

[root@k8s-master metalb]# kubectl delete -f dep.yml 
deployment.apps "dep" deleted
service "svc-nginx" deleted


[root@k8s-master metalb]# kubectl get deployments.apps 
No resources found in default namespace.

[root@k8s-master metalb]# kubectl get service
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   3d22h

3 Advanced ingress-nginx usage

3.1 Path-based routing to microservices

3.1.1 Create two nginx versions, v1 and v2

# Create the v1 nginx Deployment
[root@k8s-master ingress]# kubectl create deployment nginx-v1 \
--image nginx:latest \
--dry-run=client \
--port 80 \
--replicas 1  \
-o yaml > nginx-v1.yml

[root@k8s-master ingress]# cat nginx-v1.yml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-v1    # this label must match the Service selector, otherwise the Service cannot find the pods
  name: nginx-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-v1
  template:
    metadata:
      labels:
        app: nginx-v1
    spec:
      containers:
      - image: nginx:latest
        name: nginx-v1
        ports:
        - containerPort: 80

# Create the v2 nginx Deployment
[root@k8s-master ingress]# kubectl create deployment nginx-v2 \
--image nginx:latest \
--dry-run=client \
--port 80 \
--replicas 1  \
-o yaml > nginx-v2.yml

[root@k8s-master ingress]# cat nginx-v2.yml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-v2
  name: nginx-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-v2
  template:
    metadata:
      labels:
        app: nginx-v2
    spec:
      containers:
      - image: nginx:latest
        name: nginx-v2
        ports:
        - containerPort: 80


# Apply both manifests
[root@k8s-master ingress]# kubectl apply -f nginx-v1.yml 
deployment.apps/nginx-v1 created

[root@k8s-master ingress]# kubectl apply -f nginx-v2.yml 
deployment.apps/nginx-v2 created

# Check that the Deployments are running
[root@k8s-master ingress]# kubectl get deployments.apps 
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
nginx-v1   1/1     1            1           12s
nginx-v2   1/1     1            1           6s

3.1.2 Expose the ports and set the service type

Generate the Service manifests and append them to the Deployment manifests.

# Generate the Service definition and append it to the Deployment manifest
[root@k8s-master ingress]# kubectl expose deployment nginx-v1 \
--name=svc-nginx-v1 \
--port 80 --target-port 80 \
--dry-run=client \
--type=ClusterIP -o yaml >> nginx-v1.yml 

[root@k8s-master ingress]# kubectl expose deployment nginx-v2 \
--name=svc-nginx-v2 --port 80 --target-port 80 \
--dry-run=client \
--type=ClusterIP -o yaml >> nginx-v2.yml 

[root@k8s-master ingress]# cat nginx-v1.yml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-v1
  name: nginx-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-v1
  template:
    metadata:
      labels:
        app: nginx-v1
    spec:
      containers:
      - image: nginx:latest
        name: nginx-v1
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-v1
  name: svc-nginx-v1
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx-v1
  type: ClusterIP



[root@k8s-master ingress]# cat nginx-v2.yml 
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx-v2
  name: nginx-v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-v2
  template:
    metadata:
      labels:
        app: nginx-v2
    spec:
      containers:
      - image: nginx:latest
        name: nginx-v2
        ports:
        - containerPort: 80
---        
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-v2
  name: svc-nginx-v2
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx-v2
  type: ClusterIP


# Re-apply to update the configuration

[root@k8s-master ingress]# kubectl apply -f nginx-v1.yml 

[root@k8s-master ingress]# kubectl apply -f nginx-v2.yml 

# Both Services were created
[root@k8s-master ingress]# kubectl get service
NAME           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
kubernetes     ClusterIP   10.96.0.1        <none>        443/TCP   3d22h
svc-nginx-v1   ClusterIP   10.107.76.175    <none>        80/TCP    15s
svc-nginx-v2   ClusterIP   10.100.188.171   <none>        80/TCP    9s

3.1.3 Modify the default index page inside each pod

[root@k8s-master ingress]# kubectl get pods 
NAME                       READY   STATUS    RESTARTS   AGE
nginx-v1-dbd4bc45b-49hhw   1/1     Running   0          5m35s
nginx-v2-bd85b8bc4-nqpv2   1/1     Running   0          5m29s

[root@k8s-master ingress]# kubectl exec -it pods/nginx-v1-dbd4bc45b-49hhw -- bash

root@nginx-v1-dbd4bc45b-49hhw:/# echo this is nginx-v1 `hostname -I` > /usr/share/nginx/html/index.html 

[root@k8s-master ingress]# kubectl exec -it pods/nginx-v2-bd85b8bc4-nqpv2 -- bash

root@nginx-v2-bd85b8bc4-nqpv2:/# echo this is nginx-v2 `hostname -I` > /usr/share/nginx/html/index.html 

3.1.4 Verify the services work

[root@k8s-master ingress]# kubectl get service
NAME           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
kubernetes     ClusterIP   10.96.0.1        <none>        443/TCP   3d22h
svc-nginx-v1   ClusterIP   10.107.76.175    <none>        80/TCP    15s
svc-nginx-v2   ClusterIP   10.100.188.171   <none>        80/TCP    9s

[root@k8s-master ingress]# curl 10.107.76.175
this is nginx-v1 10.244.2.54

[root@k8s-master ingress]# curl 10.100.188.171
this is nginx-v2 10.244.1.35

Now create the layer-7 load balancer: route to a microservice based on the request path.

3.1.5 Create the Ingress resource

[root@k8s-master ingress]# kubectl create ingress webcluster \
--class nginx \
--rule "/v1=svc-nginx-v1:80" \
--rule "/v2=svc-nginx-v2:80" \
--dry-run=client -o yaml > ingress-route.yml 

3.1.6 Explanation of the path-based Ingress manifest

[root@k8s-master ingress]# cat ingress-route.yml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webcluster
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /   
    # With path-based routing, the path forwarded to the backend would be /v1 or /v2
    # (for example 192.168.239.241/v1), but that directory does not exist under the
    # backend nginx document root, so the request would return 404.
    # The rewrite-target annotation above fixes this:
    # a request for 192.168.239.241/v1/index.html is rewritten by the Nginx Ingress
    # controller to 192.168.239.241/index.html before it is forwarded to the backend.
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - backend:
          service:
            name: svc-nginx-v1
            port:
              number: 80
        path: /v1
        pathType: Prefix
      - backend:
          service:
            name: svc-nginx-v2
            port:
              number: 80
        path: /v2
        pathType: Prefix

# Valid pathType values: Exact (exact match),
# Prefix (prefix match),
# ImplementationSpecific (left to the ingress class; used for regex paths in 3.2.5)



In this example, any request matching /v1 or /v2 is rewritten to the target path / and then
forwarded to the backend service svc-nginx-v1 or svc-nginx-v2 respectively.

3.1.7 Apply the Ingress manifest and test

# Apply the Ingress resource
[root@k8s-master ingress]# kubectl apply -f ingress-route.yml 
ingress.networking.k8s.io/webcluster created


# Check that the ingress-nginx controller Service is healthy
[root@k8s-master ingress]# kubectl -n ingress-nginx get service
NAME                                 TYPE           CLUSTER-IP       EXTERNAL-IP       PORT(S)                      AGE
ingress-nginx-controller             LoadBalancer   10.104.94.174    192.168.239.241   80:30654/TCP,443:32569/TCP   7h30m
ingress-nginx-controller-admission   ClusterIP      10.104.152.104   <none>            443/TCP                      7h30m

# Check the IP assigned to the Ingress
[root@k8s-master ingress]# kubectl get ingress
NAME         CLASS   HOSTS   ADDRESS           PORTS   AGE
webcluster   nginx   *       192.168.239.241   80      56s

# Test that each version is reachable by path
[root@k8s-master ingress]# curl 192.168.239.241/v1
this is nginx-v1 10.244.2.54

[root@k8s-master ingress]# curl 192.168.239.241/v2
this is nginx-v2 10.244.1.35

3.2 Host-based routing to microservices

This builds on section 3.1.

3.2.1 Create the Ingress resource

# Delete the previous Ingress
[root@k8s-master ingress]# kubectl delete -f ingress-route.yml


# When creating the Ingress resource, the class must be nginx, because the class name was fixed when ingress-nginx was deployed

[root@k8s-master ingress]# kubectl get ingressclasses
NAME    CONTROLLER             PARAMETERS   AGE
nginx   k8s.io/ingress-nginx   <none>       35h

# deploy.yaml is the ingress-nginx deployment manifest
[root@k8s-master ingress]# grep -A 9 Ingress  deploy.yaml 
kind: IngressClass
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.11.2
  name: nginx

# Create the Ingress resource
[root@k8s-master ingress]# kubectl create ingress dum --class nginx \
--rule "nginxv1.shuyan.com/=svc-nginx-v1:80" \
--rule "nginxv2.shuyan.com/=svc-nginx-v2:80" \
--dry-run=client -o yaml > nginx-dum.yml

# The generated file differs slightly from what is needed; the edited YAML is below
[root@k8s-master ingress]# cat nginx-dum.yml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dum
spec:
  ingressClassName: nginx
  rules:
  - host: nginxv1.shuyan.com
    http:
      paths:
      - backend:
          service:
            name: svc-nginx-v1
            port:
              number: 80
        path: /
        pathType: Prefix
  - host: nginxv2.shuyan.com
    http:
      paths:
      - backend:
          service:
            name: svc-nginx-v2
            port:
              number: 80
        path: /
        pathType: Prefix

3.2.2 Apply and verify access

[root@k8s-master ingress]# kubectl apply -f nginx-dum.yml 

# Check that it was created correctly
[root@k8s-master ingress]# kubectl describe ingress dum 
Name:             dum
Labels:           <none>
Namespace:        default
Address:          192.168.239.241    # an address is assigned, so it worked
Ingress Class:    nginx
Default backend:  <default>
Rules:
  Host                Path  Backends
  ----                ----  --------
  nginxv1.shuyan.com  # the host rules are present as well
                      /   svc-nginx-v1:80 (10.244.2.54:80)
  nginxv2.shuyan.com  
                      /   svc-nginx-v2:80 (10.244.1.35:80)
Annotations:          <none>
Events:
  Type    Reason  Age                From                      Message
  ----    ------  ----               ----                      -------
  Normal  Sync    20m (x2 over 21m)  nginx-ingress-controller  Scheduled for sync


# Set up name resolution on the client
[root@harbor ~]# vim /etc/hosts
192.168.239.241 nginxv1.shuyan.com nginxv2.shuyan.com

# Test
[root@harbor ~]# curl nginxv1.shuyan.com
this is nginx-v1 10.244.2.54

[root@harbor ~]# curl nginxv2.shuyan.com
this is nginx-v2 10.244.1.35

3.2.3 Set up TLS encryption

Create a Secret of type tls

# Delete the previous Ingress resource

[root@k8s-master ingress]# kubectl delete -f nginx-dum.yml 

# The Secret is built from a certificate, so generate one first
[root@k8s-master tls]# yum install openssl

[root@k8s-master tls]# openssl req -newkey rsa:2048 \
-nodes -keyout tls.key \
-x509 -days 365 \
-subj "/CN=nginx-svc/O=nginx-svc" \
-out tls.crt

Generating a 2048 bit RSA private key
.......+++
...............................................................................................................+++
writing new private key to 'tls.key'
-----

# Create a tls-type Secret named web-tls-secret, pointing at the private key and the certificate

[root@k8s-master tls]# kubectl create secret tls web-tls-secret \
--key /root/tls/tls.key \
--cert /root/tls/tls.crt 


# Check that the Secret was created correctly

[root@k8s-master tls]# kubectl get secrets 
NAME             TYPE                DATA   AGE
web-tls-secret   kubernetes.io/tls   2      34m

[root@k8s-master tls]# kubectl describe secrets 
Name:         web-tls-secret
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  kubernetes.io/tls

Data
====
tls.crt:  1147 bytes
tls.key:  1708 bytes

Create the Ingress resource and reference the Secret in its manifest so the certificate is picked up when the Ingress is applied.

# Create the resource
[root@k8s-master tls]# kubectl create ingress tls  \
--class nginx \
--rule "nginxv1.shuyan.com/=svc-nginx-v1:80" \
--rule "nginxv2.shuyan.com/=svc-nginx-v2:80"   \
--dry-run=client -o yaml  >  tls.yml 


[root@k8s-master tls]# cat tls.yml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tls
spec:
# The tls: block below references the Secret created above
  tls:
  - hosts:
    - nginxv1.shuyan.com
    - nginxv2.shuyan.com
    secretName: web-tls-secret
    
  ingressClassName: nginx
  rules:
  - host: nginxv1.shuyan.com
    http:
      paths:
      - backend:
          service:
            name: svc-nginx-v1
            port:
              number: 80
        path: /
        pathType: Prefix
  - host: nginxv2.shuyan.com
    http:
      paths:
      - backend:
          service:
            name: svc-nginx-v2
            port:
              number: 80
        path: /
        pathType: Prefix
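The transcript above does not show this manifest being applied. Assuming it is applied as below, HTTPS access can then be tested from the client configured in 3.2.2; the certificate is self-signed, so curl needs -k to skip verification:

# Apply the TLS-enabled Ingress and test it (hostnames resolve via the /etc/hosts entries from 3.2.2)
kubectl apply -f tls.yml
curl -k https://nginxv1.shuyan.com    # should return the nginx-v1 page
curl -k https://nginxv2.shuyan.com    # should return the nginx-v2 page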

3.2.4 Set up basic auth

Create the htpasswd file

[root@k8s-master auth]# yum install httpd-tools -y

[root@k8s-master auth]# htpasswd -bcm auth shuyan 123456

[root@k8s-master auth]# ls 
auth 

[root@k8s-master auth]# cat auth 
shuyan:$apr1$Cqhl913B$Pexoaitb4OnILCdEZm/Kv0

Create a Secret from the file, using the generic type

[root@k8s-master auth]# kubectl create secret generic auth-web \
--from-file /root/auth/auth

[root@k8s-master auth]# kubectl describe secrets auth-web 
Name:         auth-web
Namespace:    default
Labels:       <none>
Annotations:  <none>

Type:  Opaque

Data
====
auth:  45 bytes

Create the Ingress resource

[root@k8s-master auth]# kubectl create ingress auth  \
> --class nginx \
> --rule "nginxv1.shuyan.com/=svc-nginx-v1:80" \
> --rule "nginxv2.shuyan.com/=svc-nginx-v2:80"   \
> --dry-run=client -o yaml > auth.yml


# The edited Ingress manifest
[root@k8s-master auth]# cat auth.yml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
# Add the following three annotations
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic       # authentication type
    nginx.ingress.kubernetes.io/auth-secret: auth-web  # name of the Secret holding the credentials
    nginx.ingress.kubernetes.io/auth-realm: "Please input username and password"
  name: auth
spec:
  ingressClassName: nginx
  rules:
  - host: nginxv1.shuyan.com
    http:
      paths:
      - backend:
          service:
            name: svc-nginx-v1
            port:
              number: 80
        path: /
        pathType: Prefix
  - host: nginxv2.shuyan.com
    http:
      paths:
      - backend:
          service:
            name: svc-nginx-v2
            port:
              number: 80
        path: /
        pathType: Prefix


[root@k8s-master auth]# kubectl apply -f auth.yml 

[root@k8s-master auth]# kubectl get ingress
NAME   CLASS   HOSTS                                   ADDRESS           PORTS   AGE
auth   nginx   nginxv1.shuyan.com,nginxv2.shuyan.com   192.168.239.241   80      38s

Test from the client (-k skips certificate verification, since the certificate from 3.2.3 is self-signed)

[root@harbor ~]# curl -k https://nginxv1.shuyan.com
<html>
<head><title>401 Authorization Required</title></head>
<body>
<center><h1>401 Authorization Required</h1></center>
<hr><center>nginx</center>
</body>
</html>

[root@harbor ~]# curl -k https://nginxv1.shuyan.com -ushuyan:123456
this is nginx-v1 10.244.2.54

[root@harbor ~]# curl -k https://nginxv2.shuyan.com -ushuyan:123456
this is nginx-v2 10.244.1.35

3.2.5 Rewrite and redirect with Ingress

# Delete the Ingress created above
[root@k8s-master auth]# kubectl delete -f auth.yml 

# Check the Service names
[root@k8s-master auth]# kubectl get svc 
NAME           TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
kubernetes     ClusterIP   10.96.0.1        <none>        443/TCP   6d2h
svc-nginx-v1   ClusterIP   10.107.76.175    <none>        80/TCP    2d4h
svc-nginx-v2   ClusterIP   10.100.188.171   <none>        80/TCP    2d4h

# Create the resource
[root@k8s-master ingress-rewrite]# kubectl create ingress rewrite \
--class nginx \
--rule "nginxv1.shuyan.com/=svc-nginx-v1:80" \
--dry-run=client -o yaml > ingress-rewrite-app-root.yml


# The edited configuration, with a few parameters added
[root@k8s-master ingress-rewrite]# cat ingress-rewrite-app-root.yml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/app-root: /index.html    # requests for / are redirected to this path
  name: rewrite
spec:
  ingressClassName: nginx
  rules:
  - host: nginxv1.shuyan.com    # host matched by this Ingress rule
    http:
      paths:
      - backend:
          service:
            name: svc-nginx-v1    # backend Service name
            port:
              number: 80
        path: /    
        pathType: Prefix

[root@k8s-master ingress-rewrite]# kubectl apply -f ingress-rewrite-app-root.yml 

[root@k8s-master ingress-rewrite]# kubectl get ingress
NAME      CLASS   HOSTS                ADDRESS           PORTS   AGE
rewrite   nginx   nginxv1.shuyan.com   192.168.239.241   80      20s



Test the redirect:

[root@harbor ~]# curl -L  http://nginxv1.shuyan.com    # -L follows the app-root redirect
this is nginx-v1 10.244.2.54

One problem remains: if an extra directory is inserted in the middle of the path, the backend cannot resolve it, as shown below. Path rewriting solves this.

[root@harbor ~]# curl -L  http://nginxv1.shuyan.com/shuyan/index.html
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.27.1</center>
</body>
</html>

Replace the Ingress above with one that rewrites the path:


[root@k8s-master ingress-rewrite]# kubectl create ingress rewrite \
--class nginx \
--rule "nginxv1.shuyan.com/=svc-nginx-v1:80" \
--rule "nginxv2.shuyan.com/=svc-nginx-v2:80" \
--dry-run=client -o yaml > ingress-rewrite.yml

# The manifest below has been modified slightly
[root@k8s-master ingress-rewrite]# cat ingress-rewrite.yml 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rewrite
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - host: nginxv1.shuyan.com
    http:
      paths:
      - backend:
          service:
            name: svc-nginx-v1
            port:
              number: 80
        path: /
        pathType: Prefix
  - host: nginxv2.shuyan.com
    http:
      paths:
      - backend:
          service:
            name: svc-nginx-v2
            port:
              number: 80
        path: /shuyan(/|$)(.*)    # regex: /shuyan, /shuyan/ and /shuyan/<anything> are rewritten to /<anything> (the second capture group), so /shuyan/index.html becomes /index.html
        pathType: ImplementationSpecific    # required because the path uses a regular expression


# Apply and check
[root@k8s-master ingress-rewrite]# kubectl apply -f ingress-rewrite.yml 

[root@k8s-master ingress-rewrite]# kubectl get ingress
NAME      CLASS   HOSTS                                   ADDRESS           PORTS   AGE
rewrite   nginx   nginxv1.shuyan.com,nginxv2.shuyan.com   192.168.239.241   80      8m53s

Test that the rewrite works

[root@harbor ~]# curl  http://nginxv2.shuyan.com/shuyan/index.html -L
this is nginx-v2 10.244.1.35

[root@harbor ~]# curl  http://nginxv2.shuyan.com/shuyan -L
this is nginx-v2 10.244.1.35

[root@harbor ~]# curl  http://nginxv2.shuyan.com/shuyan/ -L
this is nginx-v2 10.244.1.35
