K8S in Practice, Day 6: Configuring Service Discovery with Ingress



Part 1: What Is Ingress?

Ingress is the API object that manages external network access (typically HTTP) to the Services in a K8s cluster. It can provide load balancing, SSL termination, and name-based virtual hosting. Put simply, it is the intermediary through which traffic from outside the cluster reaches the applications inside it.

+----------+  Ingress   +---------+
| internet | ---------> | Service |
+----------+            +---------+

The Ingress Controller is what actually fulfills the ingress (that is, the entry point for traffic), usually with a load balancer, though it may also configure an edge router or additional frontends to help handle the traffic. An Ingress does not expose arbitrary ports or protocols; exposing services other than HTTP and HTTPS to the internet is normally done with a Service of type Service.Type=NodePort or Service.Type=LoadBalancer.
That said, because NodePort has to be set up on every node, its forwarding efficiency is relatively low and it is not always the best choice; hostPort also has its own use cases.

Like other K8s resources, an Ingress must have an Ingress Controller; an Ingress Resource on its own has no effect and the Ingress will not work. Ingress Controllers are not started automatically with the cluster, and there are several implementations; at present the Kubernetes project itself only supports and maintains two controllers, GCE (Google Compute Engine) and nginx.
We use ingress-nginx here.

An Ingress Resource looks much like most other K8s resources: it needs apiVersion, kind, metadata, and similar fields. A minimal Ingress resource looks like this:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        backend:
          serviceName: test
          servicePort: 80

An Ingress usually uses annotations to configure certain options; which options and annotations are supported depends on the type of Ingress Controller. The example above carries a rewrite-target annotation. The spec field holds the configuration of the load balancer or proxy server; most importantly, it holds the matching rules for all incoming requests. Note: the Ingress resource only supports rules for HTTP traffic.

Ingress rules are one of the most important parts of an Ingress, namely the rules field in the example above (an array that can hold multiple rule objects). Each HTTP rule contains the following information:

  • host: optional; restricts which host the rule applies to. If set to foo.bar.com, the rule only applies to that host. The example above specifies no host, so the rule applies to all inbound HTTP traffic reaching the given IP address;

  • paths: an array of path objects. In the example above there is a single path, /testpath, and every path has an associated backend object (representing the backing service);

  • backend: a combination of serviceName and servicePort. The load balancer sends traffic to the referenced Service only when both the host and the path of a request match.

An Ingress usually has a Default Backend: when a request matches none of the rules in our Ingress Resource, traffic is routed to this default backend. The default backend has to be set up by ourselves, but it is usually not configured inside the Ingress Resource; it should be specified on the Ingress Controller instead.
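
To see the rules and the default backend in action, a quick check could look like the sketch below (assuming the test-ingress above has been applied; <controller-ip> is a placeholder for the Ingress Controller's address):

# Inspect the rules, paths, backends and the default backend of the Ingress
kubectl describe ingress test-ingress

# A request that matches no rule is answered by the default backend
curl http://<controller-ip>/no-such-path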

【Types of Ingress】

1.Single Service Ingress

That is, a single-service Ingress. K8s currently supports exposing a single Service; with an Ingress this is done by specifying a default backend with no rules, for example:

# ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  backend:
    serviceName: testsvc
    servicePort: 80

Note: after installing Ingress, you still need to run:

# Create the ingress-nginx namespace
kubectl create namespace ingress-nginx
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/rbac.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/with-rbac.yaml

Then kubectl apply -f ingress.yaml is all it takes; kubectl get ingress test-ingress shows the details, including the IP the Ingress assigns to this default backend.
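
Put together (same file and resource names as above):

# Apply the single-service Ingress and check the address assigned to it
kubectl apply -f ingress.yaml
kubectl get ingress test-ingress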

2.Simple fanout

A fanout configuration routes traffic from a single IP to multiple Services based on the HTTP URI of the request; an Ingress lets you keep the number of load balancers to a minimum. For example:

foo.bar.com -> 178.91.123.132 -> / foo    service1:4200
                                 / bar    service2:8080

The flow above needs an Ingress like the following (two paths configured for the same host):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: simple-fanout-example
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: service1
          servicePort: 4200
      - path: /bar
        backend:
          serviceName: service2
          servicePort: 8080

3. Name-based virtual hosting

Similar to type 2, this kind of Ingress routes HTTP traffic for multiple host names to Services behind the same IP address. The flow looks like this:

foo.bar.com --|                 |-> foo.bar.com s1:80
              | 178.91.123.132  |
bar.foo.com --|                 |-> bar.foo.com s2:80

The Ingress Resource in this case looks like:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: name-virtual-host-ingress
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - backend:
          serviceName: service1
          servicePort: 80
  - host: bar.foo.com
    http:
      paths:
      - backend:
          serviceName: service2
          servicePort: 80

Here the Ingress tells the load balancer to route each request according to its Host header. If no host is specified, any web traffic that reaches the Ingress Controller's IP address is matched, without requiring name-based virtual hosting.
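
Without setting up DNS, the host matching can be exercised by passing the Host header explicitly (a sketch; <controller-ip> stands for the Ingress Controller's address):

# Routed to service1 by the foo.bar.com rule
curl -H "Host: foo.bar.com" http://<controller-ip>/

# Routed to service2 by the bar.foo.com rule
curl -H "Host: bar.foo.com" http://<controller-ip>/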

4.TLS

An Ingress can be secured by specifying a Secret that holds a TLS private key and certificate. The Ingress currently only supports a single TLS port, 443, and assumes TLS termination. The Secret must contain the keys tls.crt (the certificate) and tls.key (the private key), for example:

apiVersion: v1
kind: Secret
metadata:
  name: testsecret-tls
  namespace: default
data:
  tls.crt: base64 encoded cert
  tls.key: base64 encoded key
type: kubernetes.io/tls
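
Instead of base64-encoding the files by hand, the same Secret can also be created from the certificate and key files with kubectl (a sketch; the file paths are placeholders):

kubectl create secret tls testsecret-tls \
  --cert=path/to/tls.crt \
  --key=path/to/tls.key \
  --namespace default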

Referencing this Secret in an Ingress tells the Ingress Controller to secure the channel from the client to the load balancer with TLS. You also need to make sure the TLS Secret was created from a certificate whose CN matches the host, for example:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: tls-example-ingress
spec:
  tls:
  - hosts:
    - sslexample.foo.com
    secretName: testsecret-tls
  rules:
    - host: sslexample.foo.com
      http:
        paths:
        - path: /
          backend:
            serviceName: service1
            servicePort: 80

Note: TLS support differs slightly between the nginx and GCE controllers. In practice, once the certificate is fully configured you may also want plain HTTP requests to be forced over to HTTPS, which can be done with the following annotation:

  # When a certificate is configured, redirect HTTP to HTTPS
  nginx.ingress.kubernetes.io/ssl-redirect: "true"

If no certificate is configured but you still want to force the jump to HTTPS, use the annotation nginx.ingress.kubernetes.io/force-ssl-redirect instead. The redirect above defaults to status code 308 or 307, which older browsers may not support, so you can optionally configure the status code to 301.
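
These annotations go under metadata.annotations of the Ingress, just like the rewrite-target example earlier; they can also be attached afterwards with kubectl, e.g. a sketch against the tls-example-ingress above:

kubectl annotate ingress tls-example-ingress \
  nginx.ingress.kubernetes.io/ssl-redirect="true" --overwrite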

Part 2: Installing Ingress

1. mandatory.yaml

The official download is:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.42.0/deploy/static/provider/baremetal/deploy.yaml

Since raw.githubusercontent.com has recently been unreachable here without a proxy, we use the mandatory.yaml from the course instead:

apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx

---

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app.kubernetes.io/name: default-http-backend
    app.kubernetes.io/part-of: ingress-nginx
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: default-http-backend
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: default-http-backend
        app.kubernetes.io/part-of: ingress-nginx
    spec:
      terminationGracePeriodSeconds: 60
      containers:
        - name: default-http-backend
          # Any image is permissible as long as:
          # 1. It serves a 404 page at /
          # 2. It serves 200 on a /healthz endpoint
          image: k8s.gcr.io/defaultbackend-amd64:1.5
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 30
            timeoutSeconds: 5
          ports:
            - containerPort: 8080
          resources:
            limits:
              cpu: 10m
              memory: 20Mi
            requests:
              cpu: 10m
              memory: 20Mi

---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: default-http-backend
    app.kubernetes.io/part-of: ingress-nginx
spec:
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app.kubernetes.io/name: default-http-backend
    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      hostNetwork: true
      nodeSelector:
        app: ingress
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.19.0
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1

---

This manifest contains:

  • A new namespace: ingress-nginx
  • default-http-backend: a Service and a Deployment (the default response when a request matches no endpoint)
  • ConfigMaps: nginx-configuration, tcp-services, udp-services
  • ServiceAccount / ClusterRole / Role / RoleBinding / ClusterRoleBinding
  • nginx-ingress-controller: a Deployment

No Service is defined here for the ingress-controller, so it can only be reached through the Pod's IP.

[root@m1 ingress-nginx]# kubectl apply -f mandatory.yaml

Now we check what was created:

[root@m1 ~]# kubectl get all -n ingress-nginx

and find that the image pulls are failing. Check which images the manifest uses:

[root@m1 ingress-nginx]# grep image mandatory.yaml
          # Any image is permissible as long as:
          image: k8s.gcr.io/defaultbackend-amd64:1.5
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.19.0

Pick a worker node, s1, and try pulling directly:

docker pull k8s.gcr.io/defaultbackend-amd64:1.5

It fails, so pull from an Aliyun mirror and re-tag the image:

[root@s1 ~]# docker pull registry.cn-hangzhou.aliyuncs.com/liuyi01/defaultbackend-amd64:1.5
[root@s1 ~]# docker tag registry.cn-hangzhou.aliyuncs.com/liuyi01/defaultbackend-amd64:1.5 k8s.gcr.io/defaultbackend-amd64:1.5

Run the same commands on node s2 as well.
Checking again, the Pods are now in the Running state.
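
A quick check might look like this (Pod names and IPs will differ in your environment):

# The controller and default-http-backend Pods should be Running; note the Pod IPs
kubectl get pods -n ingress-nginx -o wide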

2. Exposing the Ingress service

The official docs recommend exposing the controller through a Service with external port 80 of type NodePort, but since NodePort would have to expose port 80 on every node, which is wasteful, we choose hostPort here (implemented below via hostNetwork) for better forwarding efficiency.
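
For reference, the NodePort alternative from the official docs would be applied roughly as sketched below; we do not use it in this walkthrough. The Service name follows the --publish-service flag in mandatory.yaml and the selector matches the controller Deployment's labels:

# NodePort alternative (sketch only, not applied in this walkthrough)
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
EOF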

Port 80 is still the recommended choice, because nginx listens on port 80 by default; otherwise we would have to append a port number to every request, which is not ideal.

Because port 80 is already occupied on both worker nodes in our lab environment, we first free it on one of them, s2; s1 stays as it is because the nginx service is tied to it:

[root@s2 ~]# cd harbor
[root@s2 harbor]# docker-compose down

Check whether ports 80 and 443 are still in use:

[root@s2 harbor]# netstat -ntlp | grep 80
[root@s2 harbor]# netstat -ntlp | grep 443

So how do we get the Ingress service exposed on node s2 specifically? We do it with a label:

[root@m1 nginx]# kubectl get nodes
NAME   STATUS   ROLES    AGE     VERSION
m1     Ready    master   5d20h   v1.14.0
m2     Ready    master   5d20h   v1.14.0
m3     Ready    master   5d20h   v1.14.0
s1     Ready    <none>   5d20h   v1.14.0
s2     Ready    <none>   5d20h   v1.14.0
[root@m1 nginx]# kubectl label node s2 app=ingress
node/s2 labeled
[root@m1 ingress-nginx]# vi mandatory.yaml
 # In the nginx-ingress-controller Deployment spec, switch to host networking and add the nodeSelector
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      hostNetwork: true
      nodeSelector:
        app: ingress

Apply the file again and wait for the controller to restart.
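
Concretely (a sketch; rollout status is just one way to wait for the new Pod):

kubectl apply -f mandatory.yaml
kubectl -n ingress-nginx rollout status deployment/nginx-ingress-controller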

kubectl get all -n ingress-nginx
[root@s2 harbor]# netstat -ntlp | grep 80
tcp        0      0 0.0.0.0:18080           0.0.0.0:*               LISTEN      17553/nginx: master 
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      17553/nginx: master 
tcp6       0      0 :::18080                :::*                    LISTEN      17553/nginx: master 
tcp6       0      0 :::80                   :::*                    LISTEN      17553/nginx: master 
[root@s2 harbor]# netstat -ntlp | grep 443
tcp        0      0 0.0.0.0:443             0.0.0.0:*               LISTEN      17553/nginx: master 
tcp6       0      0 :::443                  :::*                    LISTEN      17553/nginx: master

Now access it from the host machine hy1089:

root@hy1089:~# curl 192.168.8.181
default backend - 404

It returns "default backend - 404": the controller is serving, and since no Ingress rule matches yet, the request falls through to the default backend.

3. Deploying a Tomcat service to test the ingress-controller

[root@m1 ingress-nginx]# cat ingress-demo.yaml
#deploy
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-demo
spec:
  selector:
    matchLabels:
      app: tomcat-demo
  replicas: 1
  template:
    metadata:
      labels:
        app: tomcat-demo
    spec:
      containers:
      - name: tomcat-demo
        image: registry.cn-hangzhou.aliyuncs.com/liuyi01/tomcat:8.0.51-alpine
        ports:
        - containerPort: 8080
---
#service
apiVersion: v1
kind: Service
metadata:
  name: tomcat-demo
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8080
  selector:
    app: tomcat-demo

---
#ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tomcat-demo
spec:
  rules:
  - host: tomcat.mooc.com
    http:
      paths:
      - path: /
        backend:
          serviceName: tomcat-demo
          servicePort: 80

This creates the Tomcat Deployment and Service and exposes it through the Ingress at tomcat.mooc.com.
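
Apply the manifest and confirm the Ingress rule was created:

kubectl apply -f ingress-demo.yaml
kubectl get ingress tomcat-demo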

[root@m1 ingress-nginx]# kubectl get pods -o wide | grep tomcat
tomcat-demo-6bc7d5b6f4-6z8nt   1/1     Running   0          173m   172.22.1.23   s1     <none>           <none>

Now, on the hy1089 host:

root@hy1089:~# cat /etc/hosts
127.0.0.1	localhost
192.168.8.181   tomcat.mooc.com
192.168.8.181   api.mooc.com

#A domain that is not exposed through the Ingress
root@hy1089:~# curl api.mooc.com
default backend - 404

#The domain exposed through the Ingress
root@hy1089:~# curl tomcat.mooc.com

<!DOCTYPE html>
<html lang="en">
    <head>
        <meta charset="UTF-8" />
        <title>Apache Tomcat/8.0.51</title>
        <link href="favicon.ico" rel="icon" type="image/x-icon" />
        <link href="favicon.ico" rel="shortcut icon" type="image/x-icon" />
        <link href="tomcat.css" rel="stylesheet" type="text/css" />
    </head>
...

Summary

There was a big pitfall when deploying Harbor in the previous section: Harbor creates an extra network interface on its node, which can make Calico detect the wrong IP. The symptom is that Pods on different nodes cannot reach each other and a node can only reach its own Pods, which is also what made the Tomcat service time out with a 504.

The most common Calico problem is wrong IP detection. To take full control of the host network, Calico first needs to know the host's IP; by default it auto-detects it, but depending on your network there is some chance it picks the wrong interface. There are two ways to fix this:

1. Delete the wrongly detected interface (bridge)
2. Give Calico a startup parameter that either matches the interface name with a regular expression or probes for a usable interface with can-reach: find the IP entry in calico.yaml and add IP_AUTODETECTION_METHOD

# Auto-detect the BGP IP address.
- name: IP
  value: "autodetect"
# The lines below are the addition; four example detection modes are shown (keep only one value)
- name: IP_AUTODETECTION_METHOD
  # Exact interface name
  value: "interface=eth0"
  # Interface name prefix (regex)
  value: "interface=ens.*"
  # Several interface names (either/or)
  value: "interface=(eth0|eth1)"
  # Use the interface that can reach the given address
  value: "can-reach=192.168.0.1"

We take the second approach here; after editing calico.yaml, re-apply calico.
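
A sketch of the re-apply and a quick sanity check (assuming the standard calico.yaml manifest and its calico-node Pod naming):

kubectl apply -f calico.yaml
# calico-node Pods should come back Running, and cross-node Pod-to-Pod traffic should work again
kubectl -n kube-system get pods -o wide | grep calico-node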
