Kong Kubernetes-Native in Practice

Preface

Kong is a cloud-native, fast, scalable, and distributed Microservice Abstraction Layer (also known as an API Gateway or API Middleware). Made available as an open-source project in 2015, its core values are high performance and extensibility.
Actively maintained, Kong is widely used in production at companies ranging from startups to Global 5000 as well as government organizations.

Kong is currently the most popular cloud-native API gateway in the community. Its two defining traits, high performance and extensibility, have led to wide adoption across vendors of all sizes.

Before diving into how Kong is used, it is worth summarizing what Kong does:

If you are building for the web, mobile, or IoT (Internet of Things) you will likely end up needing common functionality to run your actual software. Kong can help by acting as a gateway (or a sidecar) for microservices requests while providing load balancing, logging, authentication, rate-limiting, transformations, and more through plugins.

In other words, when building microservices we need a set of common capabilities such as logging, load balancing, authentication and rate limiting. Kong (the API gateway) takes on exactly this role: it decouples services from these cross-cutting concerns, so developers can focus on their own service development and operations instead of this peripheral plumbing. A more direct comparison:
(figure: each service implementing its own common modules vs. Kong as a unified access layer)
Under the old service-management model, every service had to implement its own logging, authentication, rate limiting and similar modules, which both burdened developers and added redundancy to the overall system. With Kong (the API gateway) acting as the unified access layer for these common capabilities, all peripheral concerns are handled by Kong, and the overall architecture stays clean and easy to maintain.

Kong

Here we use the Kong Admin API as the entry point for a deeper look at how Kong is used.

1. Kong Admin API

By default Kong listens on the following ports:

  • :8000 on which Kong listens for incoming HTTP traffic from your clients, and forwards it to your upstream services.
  • :8443 on which Kong listens for incoming HTTPS traffic. This port has a similar behavior as the :8000 port, except that it expects HTTPS traffic only. This port can be disabled via the configuration file.
  • :8001 on which the Admin API used to configure Kong listens.
  • :8444 on which the Admin API listens for HTTPS traffic.

As illustrated below:
(figure: Kong proxy ports 8000/8443 and Admin API ports 8001/8444)

  • 1. The Proxy ports (8000 or 8443) are used to proxy requests to backend services.
  • 2. The Admin ports (8001 or 8444) are used to manage Kong's configuration, i.e. to perform CRUD operations on it (Konga is a GUI built on top of the Admin API).
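
To make the split concrete, here is a minimal sketch, assuming Kong is running locally with its default ports (the Host header below is purely illustrative):

# Admin API (8001): inspect Kong and manage its configuration
$ curl -s http://localhost:8001/status
$ curl -s http://localhost:8001/services

# Proxy (8000): send an actual client request, which Kong matches against its Routes
$ curl -s -H 'Host: my-service.example.com' http://localhost:8000/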

2. Kong Configuration Mode

Before walking through concrete usage, let us first introduce Kong's two configuration modes:

  • DB-less mode: uses declarative configuration; all configuration lives in a single file (YAML or JSON format) and no database is needed (a minimal declarative file is sketched after this list). The configuration can be changed in two ways:
    • 1. Statically: point Kong at a declarative_config file when it starts:
      $ export KONG_DATABASE=off
      $ export KONG_DECLARATIVE_CONFIG=kong.yml
      $ kong start -c kong.conf
      
    • 2. Dynamically: while Kong is running, call the Kong Admin API:
      $ http :8001/config config=@kong.yml
      
    Also, because of the declarative-configuration design, the Admin API is read-only for entities: only GET is supported; POST, PATCH, PUT, DELETE and other methods are not.
  • DB mode: uses imperative configuration; a database (PostgreSQL or Cassandra) is required, and configuration is managed through CRUD operations on the Kong Admin API.
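
For reference, a minimal declarative configuration file might look like the sketch below; the service name, upstream URL and path are assumptions, and the exact `_format_version` value depends on the Kong release:

$ cat << EOF > kong.yml
_format_version: "1.1"

services:
- name: example-service              # hypothetical upstream service
  url: http://example.internal:3000
  routes:
  - name: example-route
    paths:
    - /example
EOF

# load it into a running DB-less Kong instance
$ http :8001/config config=@kong.yml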

The two modes each have pros and cons:

  • DB-less mode

    • Pros:
      • 1. No database is required, which removes the database dependency and lowers deployment & operations cost.
    • Cons:
      • 1. Because of the declarative-configuration design, rule updates are always full updates that replace the entire configuration file (via the Kong Admin API /config endpoint); partial updates are not possible.
      • 2. Managing Kong through Konga is not supported.
      • 3. Plugin compatibility is limited; not all Kong plugins are supported (see Plugin Compatibility for details).
  • DB mode

    • Pros:
      • 1. Full CRUD through the Kong Admin API, including partial updates.
      • 2. Managing Kong through Konga is supported.
      • 3. Good plugin compatibility; all Kong plugins are supported.
    • Cons:
      • 1. A database is required, which adds a dependency and raises deployment & operations cost.

3. Kong Used As HTTP Proxy

Because DB mode is easier to demonstrate, the examples below use Kong in DB mode to show how to proxy HTTP requests.

First, a few key concepts of Kong proxying:

  • client: Refers to the downstream client making requests to Kong’s proxy port.
  • upstream service: Refers to your own API/service sitting behind Kong, to which client requests/connections are forwarded.
  • Service: Service entities, as the name implies, are abstractions of each of your own upstream services. Examples of Services would be a data transformation microservice, a billing API, etc.
  • Route: This refers to the Kong Routes entity. Routes are entrypoints into Kong, and define rules for a request to be matched and routed to a given Service.
  • Target: A target is an ip address/hostname with a port that identifies an instance of a backend service. Every upstream can have many targets, and the targets can be dynamically added. Changes are effectuated on the fly.
  • Plugin: This refers to Kong “plugins”, which are pieces of business logic that run in the proxying lifecycle. Plugins can be configured through the Admin API - either globally (all incoming traffic) or on specific Routes and Services.

An example will help make these concepts concrete:

A typical Nginx configuration:

upstream testUpstream {
    server localhost:3000 weight=100;
}

server {
    listen  80;
    location /test {
        proxy_pass http://testUpstream;
    }
}

Translated into Kong Admin API requests, this becomes:

# configure service
curl -X POST http://localhost:8001/services --data "name=test" --data "host=testUpstream"
# configure route (service.id is the id returned when the service was created)
curl -X POST http://localhost:8001/routes --data "paths[]=/test" --data "service.id=92956672-f5ea-4e9a-b096-667bf55bc40c"
# configure upstream
curl -X POST http://localhost:8001/upstreams --data "name=testUpstream"
# configure target
curl -X POST http://localhost:8001/upstreams/testUpstream/targets --data "target=localhost:3000" --data "weight=100"

This example shows that:

  • Service: Kong's service abstraction; it can map directly to a physical service or point to an Upstream for load balancing.
  • Route: Kong's routing abstraction, responsible for mapping incoming requests to the corresponding Service.
  • Upstream: an abstraction of the backend service pool, used mainly for load balancing.
  • Target: one backend instance within an Upstream, i.e. an ip (hostname) + port.

In other words, the access chain is: Route => Service => Upstream => Target.
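
The Plugin entity from the concept list is not exercised in this translation; as a hedged sketch (the plugin parameters are illustrative, and `test` is the Service created above), enabling rate limiting on that Service via the Admin API would look like:

# enable the rate-limiting plugin on the "test" Service (limits are illustrative)
curl -X POST http://localhost:8001/services/test/plugins \
  --data "name=rate-limiting" \
  --data "config.minute=5" \
  --data "config.policy=local"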

Below is a full example of Kong used as an HTTP proxy:

# step1: create nginx service
$ cat << EOF > nginx-svc.yml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
EOF

$ kubectl apply -f nginx-svc.yml
deployment.apps/nginx created
service/nginx created

$ kubectl get svc 
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
nginx        ClusterIP   172.28.255.197   <none>        80/TCP    5h18m

# step2: create kong nginx service
$ curl -s -X POST --url http://172.28.255.207:8001/services/ \
> -d 'name=nginx' \
> -d 'protocol=http' \
> -d 'host=nginxUpstream' \
> -d 'port=80' \
> -d 'path=/' \
> | python -m json.tool
{
    "client_certificate": null,
    "connect_timeout": 60000,
    "created_at": 1580560293,
    "host": "nginxUpstream",
    "id": "14100336-f5d2-48ef-a720-d341afceb466",
    "name": "nginx",
    "path": "/",
    "port": 80,
    "protocol": "http",
    "read_timeout": 60000,
    "retries": 5,
    "tags": null,
    "updated_at": 1580560293,
    "write_timeout": 60000
}
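
# (optional sketch) read back the Service just created via the Admin API
$ curl -s http://172.28.255.207:8001/services/nginx | python -m json.tool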

# step3: create kong nginx route
$ curl -s -X POST --url http://172.28.255.207:8001/services/nginx/routes \
> -d 'name=nginx' \
> -d 'hosts[]=nginx-test.duyanghao.com' \
> -d 'paths[]=/' \
> -d 'strip_path=true' \
> -d 'preserve_host=true' \
> -d 'protocols[]=http' \
> | python -m json.tool
{
    "created_at": 1580560619,
    "destinations": null,
    "headers": null,
    "hosts": [
        "nginx-test.duyanghao.com"
    ],
    "https_redirect_status_code": 426,
    "id": "bb678485-0b3e-4e8a-9a46-3e5464fedffc",
    "methods": null,
    "name": "nginx",
    "paths": [
        "/"
    ],
    "preserve_host": true,
    "protocols": [
        "http"
    ],
    "regex_priority": 0,
    "service": {
   
        "id": "14100336-f5d2-48ef-a720-d341afceb466"
    },
    "snis": null,
    "sources": null,
    "strip_path": true,
    "tags": null,
    "updated_at": 1580560619
}

# step4: create kong nginx upstream
$ curl -s -X POST --url http://172.28.255.207:8001/upstreams \
> -d 'name=nginxUpstream' \
> | python -m json.tool
{
    ...
}
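
# step5: create kong nginx target (sketch: the in-cluster DNS name and weight are assumptions)
$ curl -s -X POST --url http://172.28.255.207:8001/upstreams/nginxUpstream/targets \
> -d 'target=nginx.default.svc.cluster.local:80' \
> -d 'weight=100' \
> | python -m json.tool

# step6: verify the route through Kong's proxy port (assuming the proxy is exposed on :8000 at the same address)
$ curl -s -H 'Host: nginx-test.duyanghao.com' http://172.28.255.207:8000/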