xDS API and Dynamic Configuration - Day 02

1. Introduction to Dynamic Configuration

(1) The xDS APIs provide Envoy with a mechanism for dynamic resource configuration; they are also known as the Data Plane API;

(2) Envoy supports dynamic discovery of three categories of configuration information; the discovery services and their corresponding APIs are collectively called the xDS APIs;
- Filesystem-based discovery: specify a filesystem path to watch; (whenever the file content is modified, Envoy detects and loads it automatically, taking effect in real time with no restart required)
- Discovery by querying one or more management servers (the control plane): requests are sent as DiscoveryRequest protocol messages, and the server answers with DiscoveryResponse protocol messages; (at startup, Envoy connects to the specified management server over TCP and requests the configuration relevant to itself)
  a. gRPC service: start a gRPC stream # once the connection to the management server is established it stays open; when configuration changes on the server side, the server pushes a message and Envoy loads the new configuration. More efficient and resource-friendly than the option below.
  b. REST service: poll a REST-JSON URL # Envoy periodically polls the management server to check whether new (full-state) configuration needs to be loaded, similar to DNS master/slave zone transfers. This approach depends on network stability;

(3) v3 xDS supports independent fetching and discovery of the following resource types: # these map to the corresponding sections of an Envoy configuration
- envoy.config.listener.v3.Listener # listener
- envoy.config.route.v3.RouteConfiguration # route
- envoy.config.route.v3.ScopedRouteConfiguration # scoped route
- envoy.config.route.v3.VirtualHost # virtual host
- envoy.config.cluster.v3.Cluster # cluster
- envoy.config.endpoint.v3.ClusterLoadAssignment # endpoint
- envoy.extensions.transport_sockets.tls.v3.Secret # TLS secret
- envoy.service.runtime.v3.Runtime # runtime
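Each of these type URLs identifies the kind of resource carried in a DiscoveryResponse. A minimal sketch (illustrative values) of a filesystem DiscoveryResponse carrying one Listener:

version_info: '1'
resources:
- "@type": type.googleapis.com/envoy.config.listener.v3.Listener
  name: listener_http
  address:
    socket_address: { address: 0.0.0.0, port_value: 80 }
  # filter_chains omitted for brevity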

2. xDS API Overview

Official docs: https://www.envoyproxy.io/docs/envoy/v1.23.12/api/api

Envoy's xDS APIs are implemented by backend (management) servers and include LDS, CDS, RDS, SRDS (Scoped Route), VHDS (Virtual Host), EDS, SDS, and RTDS (Runtime);
- All of these APIs offer eventual consistency and do not interact with one another (data forwarding is unaffected);
- Some higher-level operations (e.g., performing an A/B deployment of a service) require sequencing to prevent traffic from being dropped; therefore, when a single management server serves multiple API types, the Aggregated Discovery Service (ADS) API is also needed. ADS allows all the other APIs to be marshalled over a single gRPC bidirectional stream from a single management server, enabling deterministic ordering of operations;
- In addition, each xDS API, including ADS, also supports an incremental (delta) transfer mechanism;
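The incremental variant is selected through the api_type of an ApiConfigSource; a minimal sketch (assuming a statically defined management-server cluster named xds_cluster):

dynamic_resources:
  cds_config:
    resource_api_version: V3
    api_config_source:
      api_type: DELTA_GRPC # incremental (delta) variant of the gRPC transport
      transport_api_version: V3
      grpc_services:
      - envoy_grpc:
          cluster_name: xds_cluster # assumed statically defined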

3. The Bootstrap node Configuration Section

A single Management Server instance may need to answer resource discovery requests from many different Envoy instances simultaneously;
- the configuration on the Management Server must be tailored to the different Envoy instances;
- when an Envoy instance requests configuration discovery, it must report information about itself in the request message;
  - for example id, cluster, metadata, and locality;
  - these fields are defined in the Bootstrap configuration file;

Dynamic configuration therefore requires the node configuration section.

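A minimal node section sketch (the locality and metadata values are illustrative):

node:
  id: envoy_front_proxy # unique identifier of this Envoy instance
  cluster: MageEdu_Cluster # cluster this instance belongs to
  locality: # optional placement information
    region: cn-east
    zone: zone-a
  metadata: { env: demo } # optional free-form key/value metadata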

4. xDS API Workflow

(1) For a typical HTTP routing scenario, the core resource types the xDS Management Server must configure for its clients (Envoy instances) are Listener, RouteConfiguration, Cluster, and ClusterLoadAssignment;
- each Listener resource may point to a RouteConfiguration resource, which may point to one or more Cluster resources, and each Cluster resource may point to a ClusterLoadAssignment resource;

(2) At startup, an Envoy instance requests all Listener and Cluster resources, and then fetches the RouteConfiguration (route) and ClusterLoadAssignment (endpoint) resources those Listeners and Clusters depend on;
- in this scenario, the Listener and Cluster resources each act as a "root" of the client's configuration tree (request-proxying configuration hangs off Listeners, endpoints hang off Clusters), so they can be loaded in parallel;

(3) Non-proxy clients such as gRPC, however, may request only the Listener resources they are interested in at startup, and then load the RouteConfiguration resources referenced by those particular Listeners;
after that come the Cluster resources referenced by those RouteConfigurations, and the ClusterLoadAssignment resources those Clusters depend on;
- in this scenario, the Listener resources are the single "root" of the client's entire configuration tree;

5. Dynamic Configuration Sources for Envoy Resources (ConfigSource)

Official docs: https://www.envoyproxy.io/docs/envoy/v1.23.12/api-v3/config/core/v3/config_source.proto.html#envoy-v3-api-enum-config-core-v3-apiconfigsource-apitype

A configuration source (ConfigSource) specifies where resource configuration data comes from; it supplies configuration data for resources such as Listener, Cluster, Route, Endpoint, Secret, and VirtualHost;

Currently, an Envoy resource config source can only be one of path, api_config_source, or ads;

The data behind api_config_source or ads comes from an xDS API server, i.e. the Management Server (control plane);

The config source is strictly one of three options (see the sketch after this list):
(1) path: file-based discovery; provide the absolute path of a file.
If the file content later changes, Envoy automatically loads the changed configuration and it takes effect in real time (note: after editing the file you must mv it to another name and then mv it back to the original name, or the change will not be picked up; the reason is explained in the example below).

(2) api_config_source: per-API discovery from the MS (control plane). The prerequisite is that a cluster made up of the Management Servers must be defined in advance; that cluster must be configured statically, and for security the configuration should generally be transported over TLS.
- When discovering via api_config_source, the api_type can itself take three values:
  a. REST        # REST-JSON polling
  b. GRPC        # gRPC streaming
  c. DELTA_GRPC  # incremental (delta) discovery

(3) ads: discover all types of dynamic configuration from the MS.
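As a sketch, the three mutually exclusive forms look like this (using eds_config as the example field; file and cluster names are assumptions):

# (1) filesystem
eds_config:
  path: /etc/envoy/eds.conf.d/eds.yaml

# (2) management server, per-API
eds_config:
  resource_api_version: V3
  api_config_source:
    api_type: GRPC
    transport_api_version: V3
    grpc_services:
    - envoy_grpc:
        cluster_name: xds_cluster # must be statically defined

# (3) aggregated: delegate to the bootstrap-level ads_config
eds_config:
  resource_api_version: V3
  ads: {}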

6. Dynamic Configuration Examples

Official docs: https://www.envoyproxy.io/docs/envoy/v1.23.12/start/quick-start/configuration-dynamic-filesystem

6.1 Filesystem-Based Subscription (EDS)

The advantage of this discovery approach is that it is simple and convenient, with no third-party dependencies.

6.1.1 Overview

The simplest way to provide Envoy with dynamic configuration is to place it at a file path explicitly specified in a ConfigSource; when the file content later changes, the changes are loaded and take effect in real time.

Live-reload mechanism: Envoy uses inotify (kqueue on Mac OS X) to watch the file for changes, and on each update parses the DiscoveryResponse message contained in the file.

Supported file formats:
- binary protobuf
- JSON
- YAML
- proto text — i.e., all the data formats supported by DiscoveryResponse.

Notes:
- Apart from stats counters and logs, there is no ACK/NACK mechanism for filesystem-subscription updates.
- If a configuration update is rejected, the last valid configuration for that xDS API remains in effect.

Note: after editing the configuration you must mv the file to trigger the live reload; the atomic move is what guarantees configuration consistency.
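For example (file names follow the EDS demo below):

# edit eds.yaml in place, then replace it atomically so the watcher fires
mv eds.conf.d/eds.yaml eds.conf.d/temp && mv eds.conf.d/temp eds.conf.d/eds.yaml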

6.1.2 EDS Configuration Example

6.1.2.1 Cluster Definition Template

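The cluster definition template is roughly as follows (a sketch; angle-bracketed values are placeholders), matching the webcluster definition used below:

clusters:
- name: <cluster_name>
  type: EDS # endpoints are discovered via EDS
  eds_cluster_config:
    service_name: <service_name> # optional; defaults to the cluster name
    eds_config: # ConfigSource for the endpoints
      path: <eds_file_path> # filesystem-based subscription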

6.1.2.2 EDS-Related Configuration Files
[root@k8s-harbor01 ~]# cd servicemesh_in_practise-MageEdu_N66/Dynamic-Configuration/eds-filesystem/
[root@k8s-harbor01 eds-filesystem]# ll
total 16
-rw-r--r-- 1 root root 1473 Aug  5  2022 docker-compose.yaml
drwxr-xr-x 2 root root   60 Aug  5  2022 eds.conf.d # directory with the dynamically loaded config files
-rw-r--r-- 1 root root 1222 Aug  5  2022 envoy-sidecar-proxy.yaml # Envoy sidecar proxy config
-rw-r--r-- 1 root root 1185 Aug  5  2022 front-envoy.yaml # Envoy gateway (front proxy) config
-rw-r--r-- 1 root root 1097 Aug  5  2022 README.md

# front-envoy.yaml
[root@k8s-harbor01 eds-filesystem]# cat front-envoy.yaml
node: # in the Bootstrap, dynamic configuration must include the node section
  id: envoy_front_proxy # unique node identifier, used to identify the instance in request tracking, logging, and traffic management
  cluster: MageEdu_Cluster # cluster name
# the above describes an Envoy node named "envoy_front_proxy" that belongs to the "MageEdu_Cluster" cluster.

admin: # Envoy admin interface configuration
  profile_path: /tmp/envoy.prof # where profiling data is written
  access_log_path: /tmp/admin_access.log # admin interface access log
  address: # admin interface address
    socket_address:
       address: 0.0.0.0
       port_value: 9901

static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 80 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          codec_type: AUTO
          route_config:
            name: local_route
            virtual_hosts:
            - name: web_service_01
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: webcluster }
          http_filters:
          - name: envoy.filters.http.router

  clusters: # this is the interesting part
  - name: webcluster
    connect_timeout: 0.25s
    type: EDS # declares the cluster type as EDS, so the service's endpoints can be fetched dynamically
    lb_policy: ROUND_ROBIN # round robin
    eds_cluster_config:
      service_name: webcluster
      eds_config: # EDS dynamic discovery config
        path: '/etc/envoy/eds.conf.d/eds.yaml' # load configuration from this file in real time: it tells Envoy to fetch this cluster's endpoints from the given path; the file must exist at the same path inside the container.


# eds.yaml
## the three files prepared below make it easy to demonstrate live reloading later
[root@k8s-harbor01 eds-filesystem]# cat eds.conf.d/eds.yaml
resources:
- "@type": type.googleapis.com/envoy.config.endpoint.v3.ClusterLoadAssignment
  cluster_name: webcluster
  endpoints:
  - lb_endpoints:
    - endpoint:
        address:
          socket_address:
            address: 172.31.11.11
            port_value: 80
[root@k8s-harbor01 eds-filesystem]# cat eds.conf.d/eds.yaml.v1
resources:
- "@type": type.googleapis.com/envoy.config.endpoint.v3.ClusterLoadAssignment
  cluster_name: webcluster
  endpoints:
  - lb_endpoints:
    - endpoint:
        address:
          socket_address:
            address: 172.31.11.11
            port_value: 80
[root@k8s-harbor01 eds-filesystem]# cat eds.conf.d/eds.yaml.v2
version_info: '2'
resources:
- "@type": type.googleapis.com/envoy.config.endpoint.v3.ClusterLoadAssignment
  cluster_name: webcluster
  endpoints:
  - lb_endpoints:
    - endpoint:
        address:
          socket_address:
            address: 172.31.11.11
            port_value: 80
    - endpoint:
        address:
          socket_address:
            address: 172.31.11.12
            port_value: 80


# envoy-sidecar-proxy.yaml
[root@k8s-harbor01 eds-filesystem]# cat envoy-sidecar-proxy.yaml
admin:
  profile_path: /tmp/envoy.prof
  access_log_path: /tmp/admin_access.log
  address:
    socket_address:
       address: 0.0.0.0
       port_value: 9901

static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 80 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          codec_type: AUTO
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: local_cluster }
          http_filters:
          - name: envoy.filters.http.router

  clusters:
  - name: local_cluster
    connect_timeout: 0.25s
    type: STATIC
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: local_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: 127.0.0.1, port_value: 8080 }


# docker-compose.yaml
[root@k8s-harbor01 eds-filesystem]# cat docker-compose.yaml
version: '3.3'

services:
  envoy:
    image: envoyproxy/envoy-alpine:v1.21-latest
    environment:
      - ENVOY_UID=0
      - ENVOY_GID=0
    volumes:
    - ./front-envoy.yaml:/etc/envoy/envoy.yaml
    - ./eds.conf.d/:/etc/envoy/eds.conf.d/
    networks:
      envoymesh:
        ipv4_address: 172.31.11.2
        aliases:
        - front-proxy
    depends_on:
    - webserver01-sidecar
    - webserver02-sidecar

  webserver01-sidecar:
    image: envoyproxy/envoy-alpine:v1.21-latest
    environment:
      - ENVOY_UID=0
      - ENVOY_GID=0
    volumes:
    - ./envoy-sidecar-proxy.yaml:/etc/envoy/envoy.yaml
    hostname: webserver01
    networks:
      envoymesh:
        ipv4_address: 172.31.11.11
        aliases:
        - webserver01-sidecar

  webserver01:
    image: ikubernetes/demoapp:v1.0
    environment:
      - PORT=8080
      - HOST=127.0.0.1
    network_mode: "service:webserver01-sidecar"
    depends_on:
    - webserver01-sidecar

  webserver02-sidecar:
    image: envoyproxy/envoy-alpine:v1.21-latest
    environment:
      - ENVOY_UID=0
      - ENVOY_GID=0
    volumes:
    - ./envoy-sidecar-proxy.yaml:/etc/envoy/envoy.yaml
    hostname: webserver02
    networks:
      envoymesh:
        ipv4_address: 172.31.11.12
        aliases:
        - webserver02-sidecar

  webserver02:
    image: ikubernetes/demoapp:v1.0
    environment:
      - PORT=8080
      - HOST=127.0.0.1
    network_mode: "service:webserver02-sidecar"
    depends_on:
    - webserver02-sidecar

networks:
  envoymesh:
    driver: bridge
    ipam:
      config:
        - subnet: 172.31.11.0/24

6.1.3 Start the Containers

[root@k8s-harbor01 eds-filesystem]# docker-compose up -d

[root@k8s-harbor01 eds-filesystem]# docker-compose ps
               Name                              Command               State     Ports
----------------------------------------------------------------------------------------
edsfilesystem_envoy_1                 /docker-entrypoint.sh envo ...   Up      10000/tcp
edsfilesystem_webserver01-sidecar_1   /docker-entrypoint.sh envo ...   Up      10000/tcp
edsfilesystem_webserver01_1           /bin/sh -c python3 /usr/lo ...   Up
edsfilesystem_webserver02-sidecar_1   /docker-entrypoint.sh envo ...   Up      10000/tcp
edsfilesystem_webserver02_1           /bin/sh -c python3 /usr/lo ...   Up

6.1.4 Test Requests: Verify the Endpoint Was Discovered

[root@k8s-harbor01 eds-filesystem]# curl 172.31.11.2:9901/listeners
listener_0::0.0.0.0:80

[root@k8s-harbor01 eds-filesystem]# curl 172.31.11.2:9901/clusters # the output below shows the cluster's endpoint is 172.31.11.11, dynamically loaded from eds.conf.d/eds.yaml
webcluster::observability_name::webcluster
webcluster::default_priority::max_connections::1024
webcluster::default_priority::max_pending_requests::1024
webcluster::default_priority::max_requests::1024
webcluster::default_priority::max_retries::3
webcluster::high_priority::max_connections::1024
webcluster::high_priority::max_pending_requests::1024
webcluster::high_priority::max_requests::1024
webcluster::high_priority::max_retries::3
webcluster::added_via_api::false
webcluster::172.31.11.11:80::cx_active::0
webcluster::172.31.11.11:80::cx_connect_fail::0
webcluster::172.31.11.11:80::cx_total::0
webcluster::172.31.11.11:80::rq_active::0
webcluster::172.31.11.11:80::rq_error::0
webcluster::172.31.11.11:80::rq_success::0
webcluster::172.31.11.11:80::rq_timeout::0
webcluster::172.31.11.11:80::rq_total::0
webcluster::172.31.11.11:80::hostname::
webcluster::172.31.11.11:80::health_flags::healthy
webcluster::172.31.11.11:80::weight::1
webcluster::172.31.11.11:80::region::
webcluster::172.31.11.11:80::zone::
webcluster::172.31.11.11:80::sub_zone::
webcluster::172.31.11.11:80::canary::false
webcluster::172.31.11.11:80::priority::0
webcluster::172.31.11.11:80::success_rate::-1.0
webcluster::172.31.11.11:80::local_origin_success_rate::-1.0

6.1.5 Add an Endpoint and Check the Live Reload

6.1.5.1 Add an endpoint
# the files are bind-mounted into the container, so they can be edited on the host or inside the container
[root@k8s-harbor01 eds-filesystem]# docker exec -it edsfilesystem_envoy_1 /bin/sh
/ # cd /etc/envoy/eds.conf.d/

/etc/envoy/eds.conf.d # ls
eds.yaml     eds.yaml.v1  eds.yaml.v2
/etc/envoy/eds.conf.d # cat eds.yaml.v2  # the v2 file will replace the old one
version_info: '2'
resources:
- "@type": type.googleapis.com/envoy.config.endpoint.v3.ClusterLoadAssignment
  cluster_name: webcluster
  endpoints:
  - lb_endpoints:
    - endpoint:
        address:
          socket_address:
            address: 172.31.11.11
            port_value: 80
    - endpoint:
        address:
          socket_address:
            address: 172.31.11.12
            port_value: 80
/etc/envoy/eds.conf.d # cat eds.yaml.v2 > eds.yaml
/etc/envoy/eds.conf.d # cat eds.yaml # the file after replacement
version_info: '2'
resources:
- "@type": type.googleapis.com/envoy.config.endpoint.v3.ClusterLoadAssignment
  cluster_name: webcluster
  endpoints:
  - lb_endpoints:
    - endpoint:
        address:
          socket_address:
            address: 172.31.11.11
            port_value: 80
    - endpoint:
        address:
          socket_address:
            address: 172.31.11.12
            port_value: 80

6.1.5.2 Check the live reload
/etc/envoy/eds.conf.d # exit
[root@k8s-harbor01 eds-filesystem]# curl 172.31.11.2:9901/clusters # the output shows the new endpoint has NOT been loaded yet. This is not an Envoy bug: Envoy's inotify watcher reacts to the file being moved into place (an atomic replacement), not to in-place writes such as cat > file.
webcluster::observability_name::webcluster
webcluster::default_priority::max_connections::1024
webcluster::default_priority::max_pending_requests::1024
webcluster::default_priority::max_requests::1024
webcluster::default_priority::max_retries::3
webcluster::high_priority::max_connections::1024
webcluster::high_priority::max_pending_requests::1024
webcluster::high_priority::max_requests::1024
webcluster::high_priority::max_retries::3
webcluster::added_via_api::false
webcluster::172.31.11.11:80::cx_active::0
webcluster::172.31.11.11:80::cx_connect_fail::0
webcluster::172.31.11.11:80::cx_total::0
webcluster::172.31.11.11:80::rq_active::0
webcluster::172.31.11.11:80::rq_error::0
webcluster::172.31.11.11:80::rq_success::0
webcluster::172.31.11.11:80::rq_timeout::0
webcluster::172.31.11.11:80::rq_total::0
webcluster::172.31.11.11:80::hostname::
webcluster::172.31.11.11:80::health_flags::healthy
webcluster::172.31.11.11:80::weight::1
webcluster::172.31.11.11:80::region::
webcluster::172.31.11.11:80::zone::
webcluster::172.31.11.11:80::sub_zone::
webcluster::172.31.11.11:80::canary::false
webcluster::172.31.11.11:80::priority::0
webcluster::172.31.11.11:80::success_rate::-1.0
webcluster::172.31.11.11:80::local_origin_success_rate::-1.0

# Fix: overwrite with cp, or rename with mv
[root@k8s-harbor01 eds-filesystem]# docker exec -it edsfilesystem_envoy_1 /bin/sh
/ # cd /etc/envoy/eds.conf.d/
/etc/envoy/eds.conf.d # mv eds.yaml temp && mv temp eds.yaml # replace the file atomically, generating the move event the watcher is listening for
/etc/envoy/eds.conf.d # exit

[root@k8s-harbor01 eds-filesystem]# curl 172.31.11.2:9901/clusters|grep 172.31.11 # now the new endpoint shows up
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  2026    0  2026    0     0  1408k      0 --:--:-- --:--:-- --:--:-- 1978k
webcluster::172.31.11.11:80::cx_active::0
webcluster::172.31.11.11:80::cx_connect_fail::0
webcluster::172.31.11.11:80::cx_total::0
webcluster::172.31.11.11:80::rq_active::0
webcluster::172.31.11.11:80::rq_error::0
webcluster::172.31.11.11:80::rq_success::0
webcluster::172.31.11.11:80::rq_timeout::0
webcluster::172.31.11.11:80::rq_total::0
webcluster::172.31.11.11:80::hostname::
webcluster::172.31.11.11:80::health_flags::healthy
webcluster::172.31.11.11:80::weight::1
webcluster::172.31.11.11:80::region::
webcluster::172.31.11.11:80::zone::
webcluster::172.31.11.11:80::sub_zone::
webcluster::172.31.11.11:80::canary::false
webcluster::172.31.11.11:80::priority::0
webcluster::172.31.11.11:80::success_rate::-1.0
webcluster::172.31.11.11:80::local_origin_success_rate::-1.0
webcluster::172.31.11.12:80::cx_active::0
webcluster::172.31.11.12:80::cx_connect_fail::0
webcluster::172.31.11.12:80::cx_total::0
webcluster::172.31.11.12:80::rq_active::0
webcluster::172.31.11.12:80::rq_error::0
webcluster::172.31.11.12:80::rq_success::0
webcluster::172.31.11.12:80::rq_timeout::0
webcluster::172.31.11.12:80::rq_total::0
webcluster::172.31.11.12:80::hostname::
webcluster::172.31.11.12:80::health_flags::healthy
webcluster::172.31.11.12:80::weight::1
webcluster::172.31.11.12:80::region::
webcluster::172.31.11.12:80::zone::
webcluster::172.31.11.12:80::sub_zone::
webcluster::172.31.11.12:80::canary::false
webcluster::172.31.11.12:80::priority::0
webcluster::172.31.11.12:80::success_rate::-1.0
webcluster::172.31.11.12:80::local_origin_success_rate::-1.0

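Requests through the front proxy are now balanced across both endpoints; for example:

[root@k8s-harbor01 eds-filesystem]# curl 172.31.11.2 # repeated requests alternate between webserver01 and webserver02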

6.1.6 Remove an Endpoint and Check the Live Reload

6.1.6.1 Remove an endpoint
[root@k8s-harbor01 eds.conf.d]# cat eds.yaml.v1 > eds.yaml
[root@k8s-harbor01 eds.conf.d]# mv eds.yaml temp && mv temp eds.yaml
[root@k8s-harbor01 eds.conf.d]# cat eds.yaml
resources:
- "@type": type.googleapis.com/envoy.config.endpoint.v3.ClusterLoadAssignment
  cluster_name: webcluster
  endpoints:
  - lb_endpoints:
    - endpoint:
        address:
          socket_address:
            address: 172.31.11.11
            port_value: 80
6.1.6.2 Check the live reload
[root@k8s-harbor01 eds.conf.d]# curl 172.31.11.2:9901/clusters
webcluster::observability_name::webcluster
webcluster::default_priority::max_connections::1024
webcluster::default_priority::max_pending_requests::1024
webcluster::default_priority::max_requests::1024
webcluster::default_priority::max_retries::3
webcluster::high_priority::max_connections::1024
webcluster::high_priority::max_pending_requests::1024
webcluster::high_priority::max_requests::1024
webcluster::high_priority::max_retries::3
webcluster::added_via_api::false
webcluster::172.31.11.11:80::cx_active::2
webcluster::172.31.11.11:80::cx_connect_fail::0
webcluster::172.31.11.11:80::cx_total::2
webcluster::172.31.11.11:80::rq_active::0
webcluster::172.31.11.11:80::rq_error::0
webcluster::172.31.11.11:80::rq_success::3
webcluster::172.31.11.11:80::rq_timeout::0
webcluster::172.31.11.11:80::rq_total::3
webcluster::172.31.11.11:80::hostname::
webcluster::172.31.11.11:80::health_flags::healthy
webcluster::172.31.11.11:80::weight::1
webcluster::172.31.11.11:80::region::
webcluster::172.31.11.11:80::zone::
webcluster::172.31.11.11:80::sub_zone::
webcluster::172.31.11.11:80::canary::false
webcluster::172.31.11.11:80::priority::0
webcluster::172.31.11.11:80::success_rate::-1.0
webcluster::172.31.11.11:80::local_origin_success_rate::-1.0


6.1.7 Clean Up

[root@k8s-harbor01 eds-filesystem]# docker-compose down
Stopping edsfilesystem_webserver02_1         ... done
Stopping edsfilesystem_envoy_1               ... done
Stopping edsfilesystem_webserver01_1         ... done
Stopping edsfilesystem_webserver01-sidecar_1 ... done
Stopping edsfilesystem_webserver02-sidecar_1 ... done
Removing edsfilesystem_webserver02_1         ... done
Removing edsfilesystem_envoy_1               ... done
Removing edsfilesystem_webserver01_1         ... done
Removing edsfilesystem_webserver01-sidecar_1 ... done
Removing edsfilesystem_webserver02-sidecar_1 ... done
Removing network edsfilesystem_envoymesh

6.2 Filesystem-Based Subscription (LDS and CDS)

With LDS and CDS, we can make Envoy's configuration almost fully dynamic;
- each Listener definition is kept in one file, in the standard Discovery Response format;
- each Cluster definition is likewise kept in another file, in the standard Discovery Response format;

6.2.1 Configuration Files

6.2.1.1 Envoy configuration file (Bootstrap)
[root@k8s-harbor01 eds-filesystem]# cd ../lds-cds-filesystem/
[root@k8s-harbor01 lds-cds-filesystem]# ls
conf.d  docker-compose.yaml  envoy-sidecar-proxy.yaml  front-envoy.yaml  README.md

[root@k8s-harbor01 lds-cds-filesystem]# cat front-envoy.yaml
node: # required for dynamic discovery
  id: envoy_front_proxy
  cluster: MageEdu_Cluster

admin:
  profile_path: /tmp/envoy.prof
  access_log_path: /tmp/admin_access.log
  address:
    socket_address:
       address: 0.0.0.0
       port_value: 9901

dynamic_resources: # dynamic configuration: tells Envoy where to load dynamic resources from
  lds_config: # path to the LDS config file
    path: /etc/envoy/conf.d/lds.yaml # if the LDS needs RDS-related settings, they live in this file as well
  cds_config: # path to the CDS config file
    path: /etc/envoy/conf.d/cds.yaml
6.2.1.2 LDS configuration
[root@k8s-harbor01 lds-cds-filesystem]# cat conf.d/lds.yaml
resources:
- "@type": type.googleapis.com/envoy.config.listener.v3.Listener
  name: listener_http
  address:
    socket_address: { address: 0.0.0.0, port_value: 80 }
  filter_chains:
  - filters:
      name: envoy.http_connection_manager
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
        stat_prefix: ingress_http
        route_config: # this could instead be fetched dynamically via RDS, so the routes would not need to be written here
          name: local_route
          virtual_hosts:
          - name: local_service
            domains: ["*"]
            routes:
            - match:
                prefix: "/"
              route:
                cluster: webcluster
        http_filters:
        - name: envoy.filters.http.router
6.2.1.3 CDS configuration
[root@k8s-harbor01 lds-cds-filesystem]# cat conf.d/cds.yaml
resources:
- "@type": type.googleapis.com/envoy.config.cluster.v3.Cluster
  name: webcluster
  connect_timeout: 1s
  type: STRICT_DNS # DNS-based discovery: every IP the name resolves to becomes an endpoint
  load_assignment: # this could instead be EDS, fetching endpoints dynamically
    cluster_name: webcluster
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address:
              address: webserver01 # this is actually the container (service) name, as seen in docker-compose
              port_value: 80
      - endpoint:
          address:
            socket_address:
              address: webserver02 # this is actually the container (service) name, as seen in docker-compose
              port_value: 80
6.2.1.4 Envoy sidecar proxy configuration
[root@k8s-harbor01 lds-cds-filesystem]# cat envoy-sidecar-proxy.yaml
admin:
  profile_path: /tmp/envoy.prof
  access_log_path: /tmp/admin_access.log
  address:
    socket_address:
       address: 0.0.0.0
       port_value: 9901

static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 80 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          codec_type: AUTO
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: local_cluster }
          http_filters:
          - name: envoy.filters.http.router

  clusters:
  - name: local_cluster
    connect_timeout: 0.25s
    type: STATIC
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: local_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: 127.0.0.1, port_value: 8080 }
6.2.1.5 docker-compose configuration
[root@k8s-harbor01 lds-cds-filesystem]# cat docker-compose.yaml
version: '3.3'

services:
  envoy:
    image: envoyproxy/envoy-alpine:v1.21-latest
    environment:
      - ENVOY_UID=0
      - ENVOY_GID=0
    volumes:
    - ./front-envoy.yaml:/etc/envoy/envoy.yaml
    - ./conf.d/:/etc/envoy/conf.d/
    networks:
      envoymesh:
        ipv4_address: 172.31.12.2
        aliases:
        - front-proxy
    depends_on:
    - webserver01
    - webserver01-app
    - webserver02
    - webserver02-app

  webserver01:
    image: envoyproxy/envoy-alpine:v1.21-latest
    environment:
      - ENVOY_UID=0
      - ENVOY_GID=0
    volumes:
    - ./envoy-sidecar-proxy.yaml:/etc/envoy/envoy.yaml
    hostname: webserver01
    networks:
      envoymesh:
        ipv4_address: 172.31.12.11
        aliases:
        - webserver01-sidecar

  webserver01-app:
    image: ikubernetes/demoapp:v1.0
    environment:
      - PORT=8080
      - HOST=127.0.0.1
    network_mode: "service:webserver01"
    depends_on:
    - webserver01

  webserver02:
    image: envoyproxy/envoy-alpine:v1.21-latest
    environment:
      - ENVOY_UID=0
      - ENVOY_GID=0
    volumes:
    - ./envoy-sidecar-proxy.yaml:/etc/envoy/envoy.yaml
    hostname: webserver02
    networks:
      envoymesh:
        ipv4_address: 172.31.12.12
        aliases:
        - webserver02-sidecar

  webserver02-app:
    image: ikubernetes/demoapp:v1.0
    environment:
      - PORT=8080
      - HOST=127.0.0.1
    network_mode: "service:webserver02"
    depends_on:
    - webserver02

networks:
  envoymesh:
    driver: bridge
    ipam:
      config:
        - subnet: 172.31.12.0/24

6.2.2 Start the Containers

[root@k8s-harbor01 lds-cds-filesystem]# docker-compose up -d

[root@k8s-harbor01 lds-cds-filesystem]# docker-compose ps
               Name                             Command               State     Ports
---------------------------------------------------------------------------------------
ldscdsfilesystem_envoy_1             /docker-entrypoint.sh envo ...   Up      10000/tcp
ldscdsfilesystem_webserver01-app_1   /bin/sh -c python3 /usr/lo ...   Up
ldscdsfilesystem_webserver01_1       /docker-entrypoint.sh envo ...   Up      10000/tcp
ldscdsfilesystem_webserver02-app_1   /bin/sh -c python3 /usr/lo ...   Up
ldscdsfilesystem_webserver02_1       /docker-entrypoint.sh envo ...   Up      10000/tcp

6.2.3 Verify Dynamic Discovery

6.2.3.1 Check LDS discovery
[root@k8s-harbor01 lds-cds-filesystem]# curl 172.31.12.2:9901/listeners
listener_http::0.0.0.0:80 # the listener defined in conf.d/lds.yaml
6.2.3.2 Check CDS discovery
[root@k8s-harbor01 lds-cds-filesystem]# curl 172.31.12.2:9901/clusters # the output shows a cluster named webcluster with two endpoints, 172.31.12.11 and 172.31.12.12
webcluster::observability_name::webcluster
webcluster::default_priority::max_connections::1024
webcluster::default_priority::max_pending_requests::1024
webcluster::default_priority::max_requests::1024
webcluster::default_priority::max_retries::3
webcluster::high_priority::max_connections::1024
webcluster::high_priority::max_pending_requests::1024
webcluster::high_priority::max_requests::1024
webcluster::high_priority::max_retries::3
webcluster::added_via_api::true
webcluster::172.31.12.11:80::cx_active::0
webcluster::172.31.12.11:80::cx_connect_fail::0
webcluster::172.31.12.11:80::cx_total::0
webcluster::172.31.12.11:80::rq_active::0
webcluster::172.31.12.11:80::rq_error::0
webcluster::172.31.12.11:80::rq_success::0
webcluster::172.31.12.11:80::rq_timeout::0
webcluster::172.31.12.11:80::rq_total::0
webcluster::172.31.12.11:80::hostname::webserver01
webcluster::172.31.12.11:80::health_flags::healthy
webcluster::172.31.12.11:80::weight::1
webcluster::172.31.12.11:80::region::
webcluster::172.31.12.11:80::zone::
webcluster::172.31.12.11:80::sub_zone::
webcluster::172.31.12.11:80::canary::false
webcluster::172.31.12.11:80::priority::0
webcluster::172.31.12.11:80::success_rate::-1.0
webcluster::172.31.12.11:80::local_origin_success_rate::-1.0
webcluster::172.31.12.12:80::cx_active::0
webcluster::172.31.12.12:80::cx_connect_fail::0
webcluster::172.31.12.12:80::cx_total::0
webcluster::172.31.12.12:80::rq_active::0
webcluster::172.31.12.12:80::rq_error::0
webcluster::172.31.12.12:80::rq_success::0
webcluster::172.31.12.12:80::rq_timeout::0
webcluster::172.31.12.12:80::rq_total::0
webcluster::172.31.12.12:80::hostname::webserver02
webcluster::172.31.12.12:80::health_flags::healthy
webcluster::172.31.12.12:80::weight::1
webcluster::172.31.12.12:80::region::
webcluster::172.31.12.12:80::zone::
webcluster::172.31.12.12:80::sub_zone::
webcluster::172.31.12.12:80::canary::false
webcluster::172.31.12.12:80::priority::0
webcluster::172.31.12.12:80::success_rate::-1.0
webcluster::172.31.12.12:80::local_origin_success_rate::-1.0

6.2.4 Access Test

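For example, requests against the front proxy should alternate between the two backends:

[root@k8s-harbor01 lds-cds-filesystem]# curl 172.31.12.2 # repeat a few times to observe round robin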

6.2.5 Remove an Endpoint and Check Dynamic Discovery

6.2.5.1 Remove an endpoint
[root@k8s-harbor01 lds-cds-filesystem]# cd conf.d/
[root@k8s-harbor01 conf.d]# cat cds.yaml # with one endpoint commented out
resources:
- "@type": type.googleapis.com/envoy.config.cluster.v3.Cluster
  name: webcluster
  connect_timeout: 1s
  type: STRICT_DNS
  load_assignment:
    cluster_name: webcluster
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address:
              address: webserver01
              port_value: 80
#      - endpoint:
#          address:
#            socket_address:
#              address: webserver02
#              port_value: 80

[root@k8s-harbor01 conf.d]# mv cds.yaml temp && mv temp cds.yaml
6.2.5.2 Check dynamic discovery

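The same admin query used earlier confirms that only one endpoint remains, e.g.:

[root@k8s-harbor01 conf.d]# curl 172.31.12.2:9901/clusters | grep 172.31.12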

6.2.6 Clean Up

[root@k8s-harbor01 conf.d]# cd ..
[root@k8s-harbor01 lds-cds-filesystem]# docker-compose down

6.3 Subscriptions over gRPC

gRPC subscription is the current mainstream approach.

6.3.1 Introduction to gRPC Subscriptions

Envoy supports specifying an independent gRPC ApiConfigSource for each xDS API (LDS, EDS, etc.), each pointing at an upstream cluster corresponding to a management server;
for each such API, Envoy starts an independent bidirectional gRPC stream, and different streams may go to different management servers.
API delivery uses an eventual-consistency model;
with gRPC-based subscription there is no need to mv files.

As shown in the figure below:
(1) In its initial request, Envoy sends a DiscoveryRequest to the management server (MS), which looks up its own latest version number (Version 7).
(2) The MS then sends its initial response, whose payload contains the cluster definitions (Clusters) for that latest version.
(3) On receiving the initial response, Envoy performs the initial ACK, sending a DiscoveryRequest that tells the MS it has accepted Version 7.
(4) Afterwards, as soon as a new version (Version 8) appears on the MS, the MS proactively pushes it to Envoy (including the V8 cluster configuration), and Envoy replies with a request confirming V8 (Spontaneous Update, Version=8).

(figure: gRPC subscription message flow — initial request, initial response, ACK, spontaneous update)
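The ACK in step (3) is itself a DiscoveryRequest; a sketch of its key fields (values are illustrative; field names follow the v3 xDS transport protocol):

version_info: '7' # the version being ACKed
node:
  id: envoy_front_proxy
  cluster: webcluster
resource_names: [] # empty means: subscribe to all resources of this type
type_url: type.googleapis.com/envoy.config.cluster.v3.Cluster
response_nonce: '...' # nonce copied from the corresponding DiscoveryResponse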

6.3.2 Dynamic Configuration Format over gRPC

Taking LDS as an example, it discovers and loads Listeners dynamically; the routes inside can be supplied directly by the discovered Listener, or be configured for further discovery via RDS;

Below is the LDS configuration format; CDS and the others are analogous:
dynamic_resources: # dynamic resources
  lds_config: # LDS configuration
    resource_api_version: V3 # API version of the xDS resources; Envoy 1.19 and later must use V3
    api_config_source: # API config source
      api_type: GRPC # the API can be fetched via REST or gRPC; supported types include REST, GRPC, and DELTA_GRPC
      transport_api_version: V3 # API version of the xDS transport protocol; Envoy 1.19 and later must use V3
      rate_limit_settings: {...} # rate limiting
      grpc_services: # one or more gRPC service sources
      - envoy_grpc: # Envoy's built-in gRPC client; only one of envoy_grpc and google_grpc may be used
          cluster_name: xds_cluster # name of the gRPC cluster (the statically defined Management Server cluster)
        # google_grpc: # Google's C++ gRPC client (the alternative to envoy_grpc)
        timeout: ... # gRPC timeout (for the connection to the MS)

Note: a Management Server (control plane) that provides the gRPC API services must itself be defined as a cluster in Envoy, which the Envoy instance then queries via the xDS API;
- typically, these management servers must be supplied as static resources;
- this is analogous to DHCP: the DHCP server's own address must be configured statically and cannot be obtained via DHCP;

6.3.3 Example: Subscribing to a Management Server over gRPC

gRPC-based subscription requests configuration information from a dedicated Management Server.

6.3.3.1 Front proxy configuration file
[root@k8s-harbor01 lds-cds-grpc]# pwd
/root/servicemesh_in_practise-MageEdu_N66/Dynamic-Configuration/lds-cds-grpc

[root@k8s-harbor01 lds-cds-grpc]# cat front-envoy.yaml
node: # required for dynamic configuration: declares this node's ID and the cluster it belongs to
  id: envoy_front_proxy
  cluster: webcluster

admin: # Envoy admin interface configuration
  profile_path: /tmp/envoy.prof
  access_log_path: /tmp/admin_access.log
  address:
    socket_address:
       address: 0.0.0.0
       port_value: 9901

dynamic_resources: # dynamic resources: tells Envoy where to load its configuration from
  lds_config: # LDS dynamic discovery
    resource_api_version: V3 # discovery API version v3
    api_config_source:
      api_type: GRPC # API type gRPC
      transport_api_version: V3 # xDS transport protocol API version, i.e. the version used in DiscoveryRequest/DiscoveryResponse exchanges
      grpc_services:
      - envoy_grpc:
          cluster_name: xds_cluster

  cds_config:
    resource_api_version: V3
    api_config_source:
      api_type: GRPC
      transport_api_version: V3
      grpc_services: # one or more gRPC services may be provided here.
      - envoy_grpc:
          cluster_name: xds_cluster # if multiple clusters are defined and any kind of failure occurs, Envoy cycles through them; the cluster named here must be statically defined and must not be of type EDS.

static_resources: # static resources
  clusters:
  - name: xds_cluster
    connect_timeout: 0.25s
    type: STRICT_DNS # every resolved IP is treated as an upstream endpoint
    # The extension_protocol_options field is used to provide extension-specific protocol options for upstream connections.
    typed_extension_protocol_options: # required when talking to the management server over gRPC
      envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
        "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
        explicit_http_config:
          http2_protocol_options: {} # {} means no extra options
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: xds_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: xdsserver # the number of endpoints depends on how many IPs this name resolves to
                port_value: 18000
6.3.3.2 docker-compose configuration file
[root@k8s-harbor01 lds-cds-grpc]# cat docker-compose.yaml
version: '3.3'

services:
  envoy:
    image: envoyproxy/envoy-alpine:v1.21-latest
    environment:
      - ENVOY_UID=0
      - ENVOY_GID=0
    volumes:
    - ./front-envoy.yaml:/etc/envoy/envoy.yaml
    networks:
      envoymesh:
        ipv4_address: 172.31.15.2
        aliases:
        - front-proxy
    depends_on:
    - webserver01
    - webserver02
    - xdsserver

  webserver01:
    image: ikubernetes/demoapp:v1.0
    environment:
      - PORT=8080
      - HOST=127.0.0.1
    hostname: webserver01
    networks:
      envoymesh:
        ipv4_address: 172.31.15.11

  webserver01-sidecar:
    image: envoyproxy/envoy-alpine:v1.21-latest
    environment:
      - ENVOY_UID=0
      - ENVOY_GID=0
    volumes:
    - ./envoy-sidecar-proxy.yaml:/etc/envoy/envoy.yaml
    network_mode: "service:webserver01"
    depends_on:
    - webserver01

  webserver02:
    image: ikubernetes/demoapp:v1.0
    environment:
      - PORT=8080
      - HOST=127.0.0.1
    hostname: webserver02
    networks:
      envoymesh:
        ipv4_address: 172.31.15.12

  webserver02-sidecar:
    image: envoyproxy/envoy-alpine:v1.21-latest
    environment:
      - ENVOY_UID=0
      - ENVOY_GID=0
    volumes:
    - ./envoy-sidecar-proxy.yaml:/etc/envoy/envoy.yaml
    network_mode: "service:webserver02"
    depends_on:
    - webserver02

  xdsserver: # note: this container is what the xds_cluster endpoint resolves to
    image: ikubernetes/envoy-xds-server:v0.1
    environment:
      - SERVER_PORT=18000
      - NODE_ID=envoy_front_proxy
      - RESOURCES_FILE=/etc/envoy-xds-server/config/config.yaml
    volumes:
    - ./resources:/etc/envoy-xds-server/config/ # note: a config directory is also mounted into this container
    networks:
      envoymesh:
        ipv4_address: 172.31.15.5
        aliases:
        - xdsserver
        - xds-service
    expose:
    - "18000"

networks:
  envoymesh:
    driver: bridge
    ipam:
      config:
        - subnet: 172.31.15.0/24

6.3.3.3 Files in the resources directory
[root@k8s-harbor01 lds-cds-grpc]# cd resources/
[root@k8s-harbor01 resources]# ll
total 12
-rw-r--r-- 1 root root 270 Aug  5  2022 config.yaml
-rw-r--r-- 1 root root 270 Aug  5  2022 config.yaml-v1
-rw-r--r-- 1 root root 313 Aug  5  2022 config.yaml-v2
[root@k8s-harbor01 resources]# cat config.yaml
name: myconfig
spec:
  listeners:
  - name: listener_http
    address: 0.0.0.0
    port: 80
    routes:
    - name: local_route
      prefix: /
      clusters:
      - webcluster
  clusters:
  - name: webcluster
    endpoints:
    - address: 172.31.15.11
      port: 80
      
[root@k8s-harbor01 resources]# cat config.yaml-v1
name: myconfig
spec:
  listeners:
  - name: listener_http
    address: 0.0.0.0
    port: 80
    routes:
    - name: local_route
      prefix: /
      clusters:
      - webcluster
  clusters:
  - name: webcluster
    endpoints:
    - address: 172.31.15.11
      port: 80

[root@k8s-harbor01 resources]# cat config.yaml-v2
name: myconfig
spec:
  listeners:
  - name: listener_http
    address: 0.0.0.0
    port: 80
    routes:
    - name: local_route
      prefix: /
      clusters:
      - webcluster
  clusters:
  - name: webcluster
    endpoints:
    - address: 172.31.15.11
      port: 80
    - address: 172.31.15.12
      port: 80
6.3.3.4 Sidecar configuration file
[root@k8s-harbor01 lds-cds-grpc]# cat envoy-sidecar-proxy.yaml
admin:
  profile_path: /tmp/envoy.prof
  access_log_path: /tmp/admin_access.log
  address:
    socket_address:
       address: 0.0.0.0
       port_value: 9901

static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 80 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          codec_type: AUTO
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: local_cluster }
          http_filters:
          - name: envoy.filters.http.router

  clusters:
  - name: local_cluster
    connect_timeout: 0.25s
    type: STATIC
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: local_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: 127.0.0.1, port_value: 8080 }

6.3.3.5 Start the containers
[root@k8s-harbor01 lds-cds-grpc]# docker-compose up -d

[root@k8s-harbor01 lds-cds-grpc]# docker-compose ps
              Name                            Command               State     Ports
-------------------------------------------------------------------------------------
ldscdsgrpc_envoy_1                 /docker-entrypoint.sh envo ...   Up      10000/tcp
ldscdsgrpc_webserver01-sidecar_1   /docker-entrypoint.sh envo ...   Up
ldscdsgrpc_webserver01_1           /bin/sh -c python3 /usr/lo ...   Up
ldscdsgrpc_webserver02-sidecar_1   /docker-entrypoint.sh envo ...   Up
ldscdsgrpc_webserver02_1           /bin/sh -c python3 /usr/lo ...   Up
ldscdsgrpc_xdsserver_1             /bin/sh -c /bin/envoy-xds- ...   Up      18000/tcp
6.3.3.6 Query the admin interface: check endpoint discovery
[root@k8s-harbor01 lds-cds-grpc]# curl 172.31.15.2:9901/clusters


6.3.3.7 Query the admin interface: check listener discovery
[root@k8s-harbor01 lds-cds-grpc]# curl 172.31.15.2:9901/listeners


6.3.3.8 Add an endpoint and check dynamic discovery
# add an endpoint
[root@k8s-harbor01 lds-cds-grpc]# cd resources/
[root@k8s-harbor01 resources]# cat config.yaml-v2 > config.yaml
[root@k8s-harbor01 resources]# cat config.yaml
name: myconfig
spec:
  listeners:
  - name: listener_http
    address: 0.0.0.0
    port: 80
    routes:
    - name: local_route
      prefix: /
      clusters:
      - webcluster
  clusters:
  - name: webcluster
    endpoints:
    - address: 172.31.15.11
      port: 80
    - address: 172.31.15.12
      port: 80

# check endpoint discovery
[root@k8s-harbor01 resources]# curl 172.31.15.2:9901/clusters


6.3.3.9 Remove an endpoint and check dynamic discovery
[root@k8s-harbor01 resources]# cat config.yaml-v1
name: myconfig
spec:
  listeners:
  - name: listener_http
    address: 0.0.0.0
    port: 80
    routes:
    - name: local_route
      prefix: /
      clusters:
      - webcluster
  clusters:
  - name: webcluster
    endpoints:
    - address: 172.31.15.11
      port: 80
[root@k8s-harbor01 resources]# cat config.yaml-v1 > config.yaml

[root@k8s-harbor01 resources]# curl 172.31.15.2:9901/clusters


6.3.3.10 Clean up
[root@k8s-harbor01 resources]# cd ..
[root@k8s-harbor01 lds-cds-grpc]# docker-compose down

[root@k8s-harbor01 lds-cds-grpc]# docker ps -a
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES

6.3.4 A Remaining Shortcoming

The gRPC subscription example above still has one problem:
if we update LDS and CDS together, the LDS update may be loaded first while the clusters referenced by its routes are configured later;
if traffic happens to arrive at that moment, before the CDS configuration has been loaded, that traffic is dropped.

ADS is the better fit for avoiding this problem.

6.3.5 Example: ADS (Aggregated xDS)

Official docs: https://www.envoyproxy.io/docs/envoy/v1.23.12/api-docs/xds_protocol#aggregated-discovery-service

6.3.5.1 Overview
Avoiding dropped traffic by carefully sequencing the management server's resource distribution across separate streams is challenging; ADS lets a single management server deliver all API updates over a single gRPC stream, in a deterministic order.
6.3.5.2 Front proxy configuration file
[root@k8s-harbor01 ads-grpc]# pwd
/root/servicemesh_in_practise-MageEdu_N66/Dynamic-Configuration/ads-grpc

[root@k8s-harbor01 ads-grpc]# cat front-envoy.yaml # mostly identical to the earlier examples; the key part is the ads_config section
node:
  id: envoy_front_proxy
  cluster: webcluster

admin:
  profile_path: /tmp/envoy.prof
  access_log_path: /tmp/admin_access.log
  address:
    socket_address:
       address: 0.0.0.0
       port_value: 9901

dynamic_resources:
  ads_config: # ADS configuration
    api_type: GRPC # API type
    transport_api_version: V3 # version used by the gRPC transport protocol
    grpc_services:
    - envoy_grpc:
        cluster_name: xds_cluster # ties ADS to the cluster defined below
    set_node_on_first_message_only: true # send node identity only on the stream's first request
  cds_config: # refers back to the ads_config above
    resource_api_version: V3
    ads: {} # load CDS configuration via ADS
  lds_config:
    resource_api_version: V3
    ads: {} # load LDS configuration via ADS

static_resources:
  clusters:
  - name: xds_cluster
    connect_timeout: 0.25s
    type: STRICT_DNS
    # The extension_protocol_options field is used to provide extension-specific protocol options for upstream connections.
    typed_extension_protocol_options:
      envoy.extensions.upstreams.http.v3.HttpProtocolOptions:
        "@type": type.googleapis.com/envoy.extensions.upstreams.http.v3.HttpProtocolOptions
        explicit_http_config:
          http2_protocol_options: {}
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: xds_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: xdsserver
                port_value: 18000

6.3.5.3 docker-compose configuration file
[root@k8s-harbor01 ads-grpc]# cat docker-compose.yaml
version: '3.3'

services:
  envoy:
    image: envoyproxy/envoy-alpine:v1.21-latest
    environment:
      - ENVOY_UID=0
      - ENVOY_GID=0
    volumes:
    - ./front-envoy.yaml:/etc/envoy/envoy.yaml
    networks:
      envoymesh:
        ipv4_address: 172.31.16.2
        aliases:
        - front-proxy
    depends_on:
    - webserver01
    - webserver02
    - xdsserver

  webserver01:
    image: ikubernetes/demoapp:v1.0
    environment:
      - PORT=8080
      - HOST=127.0.0.1
    hostname: webserver01
    networks:
      envoymesh:
        ipv4_address: 172.31.16.11

  webserver01-sidecar:
    image: envoyproxy/envoy-alpine:v1.21-latest
    environment:
      - ENVOY_UID=0
      - ENVOY_GID=0
    volumes:
    - ./envoy-sidecar-proxy.yaml:/etc/envoy/envoy.yaml
    network_mode: "service:webserver01"
    depends_on:
    - webserver01

  webserver02:
    image: ikubernetes/demoapp:v1.0
    environment:
      - PORT=8080
      - HOST=127.0.0.1
    hostname: webserver02
    networks:
      envoymesh:
        ipv4_address: 172.31.16.12

  webserver02-sidecar:
    image: envoyproxy/envoy-alpine:v1.21-latest
    environment:
      - ENVOY_UID=0
      - ENVOY_GID=0
    volumes:
    - ./envoy-sidecar-proxy.yaml:/etc/envoy/envoy.yaml
    network_mode: "service:webserver02"
    depends_on:
    - webserver02

  xdsserver:
    image: ikubernetes/envoy-xds-server:v0.1
    environment:
      - SERVER_PORT=18000
      - NODE_ID=envoy_front_proxy
      - RESOURCES_FILE=/etc/envoy-xds-server/config/config.yaml
    volumes:
    - ./resources:/etc/envoy-xds-server/config/
    networks:
      envoymesh:
        ipv4_address: 172.31.16.5
        aliases:
        - xdsserver
        - xds-service
    expose:
    - "18000"

networks:
  envoymesh:
    driver: bridge
    ipam:
      config:
        - subnet: 172.31.16.0/24

6.3.5.4 Files in the resources directory
[root@k8s-harbor01 ads-grpc]# ll resources/
total 12
-rw-r--r-- 1 root root 270 Aug  5  2022 config.yaml
-rw-r--r-- 1 root root 270 Aug  5  2022 config.yaml-v1
-rw-r--r-- 1 root root 313 Aug  5  2022 config.yaml-v2
[root@k8s-harbor01 ads-grpc]# cat resources/config.yaml
name: myconfig
spec:
  listeners:
  - name: listener_http
    address: 0.0.0.0
    port: 80
    routes:
    - name: local_route
      prefix: /
      clusters:
      - webcluster
  clusters:
  - name: webcluster
    endpoints:
    - address: 172.31.16.11
      port: 80
[root@k8s-harbor01 ads-grpc]# cat resources/config.yaml-v1
name: myconfig
spec:
  listeners:
  - name: listener_http
    address: 0.0.0.0
    port: 80
    routes:
    - name: local_route
      prefix: /
      clusters:
      - webcluster
  clusters:
  - name: webcluster
    endpoints:
    - address: 172.31.16.11
      port: 80
[root@k8s-harbor01 ads-grpc]# cat resources/config.yaml-v2
name: myconfig
spec:
  listeners:
  - name: listener_http
    address: 0.0.0.0
    port: 80
    routes:
    - name: local_route
      prefix: /
      clusters:
      - webcluster
  clusters:
  - name: webcluster
    endpoints:
    - address: 172.31.16.11
      port: 80
    - address: 172.31.16.12
      port: 80

6.3.5.5 Sidecar configuration file
[root@k8s-harbor01 ads-grpc]# cat envoy-sidecar-proxy.yaml
admin:
  profile_path: /tmp/envoy.prof
  access_log_path: /tmp/admin_access.log
  address:
    socket_address:
       address: 0.0.0.0
       port_value: 9901

static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 80 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          codec_type: AUTO
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: local_cluster }
          http_filters:
          - name: envoy.filters.http.router

  clusters:
  - name: local_cluster
    connect_timeout: 0.25s
    type: STATIC
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: local_cluster
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: 127.0.0.1, port_value: 8080 }

6.3.5.6 Start the containers
[root@k8s-harbor01 ads-grpc]# docker-compose up -d

[root@k8s-harbor01 ads-grpc]# docker-compose ps
            Name                           Command               State     Ports
----------------------------------------------------------------------------------
adsgrpc_envoy_1                 /docker-entrypoint.sh envo ...   Up      10000/tcp
adsgrpc_webserver01-sidecar_1   /docker-entrypoint.sh envo ...   Up
adsgrpc_webserver01_1           /bin/sh -c python3 /usr/lo ...   Up
adsgrpc_webserver02-sidecar_1   /docker-entrypoint.sh envo ...   Up
adsgrpc_webserver02_1           /bin/sh -c python3 /usr/lo ...   Up
adsgrpc_xdsserver_1             /bin/sh -c /bin/envoy-xds- ...   Up      18000/tcp

6.3.5.7 Query the admin interface: check endpoint discovery
[root@k8s-harbor01 ads-grpc]# curl 172.31.16.2:9901/clusters


6.3.5.8 Query the admin interface: check listener discovery
[root@k8s-harbor01 ads-grpc]# curl 172.31.16.2:9901/listeners
listener_http::0.0.0.0:80

6.3.5.9 Add an endpoint and check
[root@k8s-harbor01 resources]# cat config.yaml-v2 > config.yaml
[root@k8s-harbor01 resources]# curl 172.31.16.2:9901/clusters


6.3.5.10 Remove an endpoint and check
[root@k8s-harbor01 resources]# cat config.yaml-v1 > config.yaml
[root@k8s-harbor01 resources]# curl 172.31.16.2:9901/clusters


6.3.5.11 View dynamic cluster information
[root@k8s-harbor01 ads-grpc]# curl -s 172.31.16.2:9901/config_dump | jq '.configs[1].dynamic_active_clusters'


6.3.5.12 View dynamic listener information
[root@k8s-harbor01 ads-grpc]# curl -s 172.31.16.2:9901/config_dump?resource=dynamic_listeners | jq '.configs[0].active_state.listener.address'


6.3.5.13 Clean up
[root@k8s-harbor01 resources]# docker-compose down

[root@k8s-harbor01 resources]# docker ps -a
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES

6.4 REST-JSON Polling Subscriptions

Official docs: https://www.envoyproxy.io/docs/envoy/v1.23.12/api-docs/xds_protocol#rest-json-polling-subscriptions
This approach performs relatively poorly, so it is not demonstrated in detail here.

6.4.1 Overview

The singleton xDS APIs also support synchronous (long-polling) operation via REST endpoints.
The message sequence is similar to the streaming case, except that no persistent stream to the management server is maintained.
At most one outstanding request is expected at any time, which is why the response nonce is optional in REST-JSON.
DiscoveryRequest and DiscoveryResponse messages are encoded using the JSON canonical transform of proto3.
Notes:
- ADS is not supported over REST-JSON polling.
- When the polling period is set to a small value to approximate long polling, the management server must also avoid sending a DiscoveryResponse unless the underlying resources have actually changed.

6.4.2 Configuration

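A minimal REST-polling sketch (assuming a statically defined management-server cluster named xds_cluster; cluster_names and refresh_delay are the REST-specific ApiConfigSource fields):

dynamic_resources:
  cds_config:
    resource_api_version: V3
    api_config_source:
      api_type: REST # REST-JSON polling
      transport_api_version: V3
      cluster_names: [xds_cluster] # REST sources reference the management-server cluster by name
      refresh_delay: 5s # polling interval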
