Envoy HTTP Traffic Governance - Day 03

1. HTTP Connection Manager (http_connection_manager)

Envoy uses the built-in L4 filter http_connection_manager to translate the raw bytes of a connection into HTTP-level messages, which are then handed to the L7 (HTTP) filter chain for processing.

HTTP-protocol functionality is implemented by the various HTTP filters, which broadly fall into three categories: encoders, decoders, and encoder/decoders.
- router (envoy.router) is one of the most commonly used filters; based on the route table, it forwards or redirects requests, and also handles retries and generates statistics.

2. Advanced HTTP Routing

Through the HTTP router filter and its route table, Envoy supports a variety of advanced routing mechanisms, including:
- mapping domains to virtual hosts;
- path prefix, exact path, or regular-expression matching;
- TLS redirection at the virtual-host level;
- path/host redirection at the path level;
- responses generated directly by Envoy;
- explicit host rewriting;
- prefix rewriting;
- request retries and request timeouts based on HTTP headers or the route configuration;
- traffic migration based on runtime parameters;
- traffic splitting across clusters based on weights or percentages;
- route matching on arbitrary headers;
- priority-based routing;
- hash-policy-based routing;
- ...

3. HTTP Routes and Virtual Hosts

The top-level element in the route configuration is the virtual host.
- Each virtual host has a logical name (name) and a set of domains (domains); the host header of a request is matched against these domains to select a virtual host;
- once a virtual host has been selected by domain, the request is routed or redirected according to the routing mechanisms (routes) configured for it.

listeners:
- name:
  address: {...}
  filter_chains: []
  - filters:
    - name: envoy.filters.network.http_connection_manager
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
        stat_prefix: ingress_http
        codec_type: AUTO
        route_config:
          name: ...
          virtual_hosts: []
          - name: ...
            domains: [] # domains of the virtual host; during route matching, the host header of the request is checked against the entries in this list;
            routes: [] # route entries; for requests matched to this virtual host, the path is checked against the conditions defined by match in each route;
            - name: ...
              match: {...} # common embedded fields: prefix|path|safe_regex|connect_matcher; exactly one of them defines the match condition based on path prefix, exact path, regular expression, or CONNECT matcher;
              route: {...} # common embedded fields: cluster|cluster_header|weighted_clusters; routes to a cluster, a cluster named in a request header, or weighted clusters (traffic splitting);
              redirect: {} # redirects the request; cannot be used together with route or direct_response;
              direct_response: {} # responds to the request directly; cannot be used together with route or redirect;
            virtual_clusters: [] # list of virtual clusters defined for this virtual host for collecting statistics;

3.1 Virtual Hosts

Virtual hosts are the top-level elements in the route configuration; they can be configured statically via the virtual_hosts field or discovered dynamically via VHDS.
Some of the commonly used virtual host fields are sketched below (the original figure is not reproduced here).
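
A minimal sketch of commonly used route.VirtualHost fields in the v3 API; the values here are placeholders, not taken from the original figure:

virtual_hosts:
- name: vh_demo                  # logical name, used mainly in statistics
  domains: ["demo.example.com"]  # host/:authority values mapped to this virtual host
  routes: []                     # ordered list of route entries (match + route/redirect/direct_response)
  virtual_clusters: []           # optional virtual clusters for statistics collection
  request_headers_to_add: []     # headers added to requests handled by this virtual host
  response_headers_to_remove: [] # headers stripped from responses
  retry_policy: {}               # virtual-host-level retry policy (routes may override it)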

3.2 Route Configuration Examples

The example in 3.2.1 focuses on the basic matching mechanisms of match and the different routing actions, while the one in 3.2.2 focuses on matching based on headers and query parameters.

3.2.1 Basic matching mechanisms of match

virtual_hosts: # virtual host configuration
- name: vh_001 # virtual host name
  domains: ["ilinux.io", "*.ilinux.io", "ilinux.*"] # domains to match; put exact matches first and wildcard patterns after them
  routes: # route configuration
  - match: # match condition
      path: "/service/blue" # exact path match for a fixed path
    route:
      cluster: blue # requests satisfying the match are routed to the blue cluster
  - match:
      safe_regex: # regular-expression match
        google_re2: {} # use the Google RE2 regex engine
        regex: "^/service/.*blue$" # the regex: starts with /service/, any characters in between, and must end with blue
    redirect: # redirect configuration
      path_redirect: "/service/blue" # redirect target path; any path matched by the regex (e.g. /service/aaablue) is redirected to /service/blue
  - match:
      prefix: "/service/yellow"
    direct_response: # direct response (requests whose path matches /service/yellow are answered directly by Envoy with the status code and body below)
      status: 200 # status code 200
      body: # response body
        inline_string: "This page will be provided soon later.\n"
  - match:
      prefix: "/"
    route:
      cluster: red # requests matching only / are routed to the red cluster
- name: vh_002
  domains: ["*"] # this second virtual host acts as the default; it matches all domains, so it must come last
  routes:
  - match:
      prefix: "/"
    route:
      cluster: gray

3.2.2 Matching based on headers and query parameters

virtual_hosts:
  - name: vh_001
    domains: ["*"] # default virtual host; matches all domains
    routes:
    - match:
        prefix: "/" # prefix to match: /
        headers: # request headers to match
        - name: X-Canary # header name
          exact_match: "true" # header value
      route:
        cluster: demoappv12 # requests satisfying both prefix and headers are routed to this cluster
    - match:
        prefix: "/"
        query_parameters: # query parameters
        - name: "username" # parameter name
          string_match: # parameter value
            prefix: "vip_" # must start with vip_
      route:
        cluster: demoappv11 # requests satisfying prefix, name, and string_match are routed to this cluster
    - match:
        prefix: "/"
      route:
        cluster: demoappv10 # requests not matched by the two entries above are routed to this cluster

3.3 Envoy Route Matching Process

- Check the host header (or :authority) of the HTTP request and match it against the virtual hosts defined in the route configuration;
- within the matched virtual host, check the match conditions of each route entry in order and stop at the first match (short-circuit; no further entries are evaluated);
- if virtual clusters are defined, check each virtual cluster of the virtual host in order and stop at the first match.

---
listeners:
- name:
  address: {...}
  filter_chains: []
  - filters:
    - name: envoy.filters.network.http_connection_manager
      typed_config:
        "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
        stat_prefix: ingress_http
        codec_type: AUTO
        route_config:
          name: ...
          virtual_hosts: []
          - name: ...
            domains: [] # domains of the virtual host; during route matching, the host header of the request is checked against the entries in this list;
            routes: [] # route entries; for requests matched to this virtual host, the path is checked against the conditions defined by match in each route;
            - name: ...
              match: {...} # common embedded fields: prefix|path|safe_regex|connect_matcher; exactly one of them defines the match condition based on path prefix, exact path, regular expression, or CONNECT matcher;
              route: {...} # common embedded fields: cluster|cluster_header|weighted_clusters; routes to a cluster, a cluster named in a request header, or weighted clusters (traffic splitting);
            virtual_clusters: [] # list of virtual clusters defined for this virtual host for collecting statistics;
            ...
          ...

3.4 Envoy Domain Search Order

The host header of the request is compared, in turn, against the domains attribute of each virtual host defined in the route table, and the search stops at the first match.

Domain search order (priority 1 is highest, descending from there):
(1) Exact match: e.g. ilinux.io
(2) Prefix wildcard match: e.g. *.ilinux.io or *-ilinux.io, i.e. the wildcard is on the left
(3) Suffix wildcard match: e.g. ilinux.* or ilinux-*
(4) Catch-all match: i.e. "*", which has the lowest priority

3.5 Route Configuration: match and route

3.5.1 match (traffic classification)

match supports the following matching methods:
(1) prefix (prefix match): usually matches on the prefix of the requested path
(2) path (exact path match)
(3) safe_regex (regular-expression match)
(4) connect_matcher (CONNECT matcher)
Only one of the four above may be used in any given match.
Note: the regex field from earlier versions has been replaced by safe_regex; only safe_regex can be used in the v3 API.

In addition, headers (request headers) and query_parameters (query parameters) can be used to further qualify the match.
Matched requests can be handled by one of three routing mechanisms:
(1) redirect (redirection)
(2) direct_response (direct response)
(3) route

3.5.2 route (traffic destination)

Traffic can be routed to exactly one of cluster, weighted_clusters (weighted clusters), or cluster_header.
weighted_clusters: for example, with clusters A and B weighted 90 and 10, cluster A receives 90% of the requests; typically used for traffic splitting.
cluster_header: requests are routed to the cluster named in the specified request header, so only requests carrying that header reach it; typically used in special scenarios or for canary testing.

When forwarding a request, the URL can also be rewritten via prefix_rewrite (prefix rewriting) and host_rewrite (host rewriting).
Additional traffic management mechanisms can be configured as well, for example (a minimal sketch follows this list):
- Resilience: timeout, retry_policy (retry policy)
- Testing: request_mirror_policies (request mirroring policies)
- Flow control: rate_limits (rate limiting policies)
- Access control: cors (cross-origin policy)
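
A minimal sketch (not from the original notes) of a route action that combines a per-route timeout with a retry policy; demoappv11 is a placeholder cluster name:

route:
  cluster: demoappv11   # placeholder upstream cluster
  timeout: 3s           # overall upstream request timeout for this route
  retry_policy:
    retry_on: "5xx"     # retry when the upstream returns a 5xx response
    num_retries: 2      # at most two retries
    per_try_timeout: 1s # timeout applied to each individual attempt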

3.6 Route Framework Configuration

Requests that satisfy the match conditions must be handled in one of three ways:
(1) route: route the request to the specified destination
(2) redirect: redirect the request to the specified location
(3) direct_response: respond directly with the given content

A route can also add or remove request and response headers as needed:
{
  "name": "...",
  "match": "{...}", # defines the match conditions
  "route": "{...}", # defines the traffic routing target; mutually exclusive with redirect and direct_response
  "redirect": "{...}", # redirects the request; mutually exclusive with route and direct_response
  "direct_response": "{...}", # responds to the request directly with the specified content; mutually exclusive with route and redirect
  "metadata": "{...}", # provides additional metadata for the routing machinery, typically used for configuration, stats, and logging, which usually requires defining the related filter first
  "decorator": "{...}",
  "typed_per_filter_config": "{...}",
  "request_headers_to_add": [],
  "request_headers_to_remove": [],
  "response_headers_to_add": [],
  "response_headers_to_remove": [],
  "tracing": "{...}",
  "per_request_buffer_limit_bytes": "{...}"
}

3.6.1 Route matching

(1) The match condition defines the detection mechanism used to filter out qualifying requests and apply the desired handling to them, such as routing, redirection, or a direct response; exactly one of the prefix, path, or regex (safe_regex in v3) match conditions must be defined.

(2) Besides one of those three, the match can be further qualified as follows:
- case sensitivity (case_sensitive)
- matching the proportion of traffic indicated by the specified runtime key (runtime_fraction); traffic can be migrated by continuously adjusting the runtime key's value
- header-based routing: matching a specified set of headers (headers)
- parameter-based routing: matching a specified set of URL query parameters (query_parameters)
- matching only gRPC traffic (grpc)


3.6.2 Header-based route matching (route.HeaderMatcher)

The router checks the request's headers against all headers specified in the route configuration.
- The route matches if every header specified in the route is present in the request with the same value.
- If no value is specified for a header in the configuration, the check is based on the header's presence alone.

For checking a header and its value, exactly one of the following may be defined (a short sketch follows this list):
- exact_match: exact header-value match
- safe_regex_match: regular-expression header-value match
- range_match: range match; checks whether the header value falls within the specified range
- present_match: presence match; checks whether the specified header exists
- prefix_match: header-value prefix match
- suffix_match: header-value suffix match
- contains_match: checks whether the header value contains the specified string
- string_match: checks whether the header value matches the specified string matcher
- invert_match: whether to invert the match result, i.e. a satisfied condition evaluates to false and an unsatisfied one to true; defaults to false
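
As a hedged sketch (the header names and the cluster below are placeholders, not from the original notes), a route matching on header presence and on a header-value prefix might look like this:

routes:
- match:
    prefix: "/"
    headers:
    - name: x-debug            # matches as long as the header is present, whatever its value
      present_match: true
    - name: user-agent         # header value must start with "Mozilla"
      prefix_match: "Mozilla"
  route:
    cluster: demoapp_debug     # placeholder cluster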


3.6.3 Query-parameter-based route matching (route.QueryParameterMatcher)

The router checks the query string of the request path against all query parameters specified in the route configuration.
- Query parameter matching treats the query string of the request URL as a list of '&'-separated "key" or "key=value" elements.
- If query parameters are specified, all of them must match the query string in the URL.
- The match condition is specified as exactly one of value, regex, string_match, or present_match.

query_parameters:
- name: "..." # name of the parameter to match
  string_match: "{...}" # string-match check on the parameter value; exactly one of the following five checks may be used
    exact: "..." # exact match
    prefix: "..." # prefix match
    suffix: "..." # suffix match
    contains: "..." # contains match
    safe_regex: "{...}" # regular-expression match
    ignore_case: "" # ignore case
  present_match: "..." # presence match; matches as long as the parameter exists, regardless of its value


3.7 Route Configuration Examples

3.7.1 Simple route matching

[root@k8s-harbor01 ~]# cd -
/root/servicemesh_in_practise-MageEdu_N66/HTTP-Connection-Manager/httproute-simple-match
[root@k8s-harbor01 httproute-simple-match]# ll
total 12
-rw-r--r-- 1 root root 2066 Aug  5  2022 docker-compose.yaml
-rw-r--r-- 1 root root 3201 Aug  5  2022 front-envoy.yaml
-rw-r--r-- 1 root root 2983 Aug  5  2022 README.md
3.7.1.1 Environment
docker-compose starts eight services in total.
1 ingress gateway at 172.31.50.10.
7 backend services:
- light_blue and dark_blue: map to the blue cluster in Envoy
- light_red and dark_red: map to the red cluster in Envoy
- light_green and dark_green: map to the green cluster in Envoy
- gray: maps to the gray cluster in Envoy
3.7.1.2 Configuration files
# front-envoy.yaml
[root@k8s-harbor01 httproute-simple-match]# cat front-envoy.yaml
admin:
  profile_path: /tmp/envoy.prof
  access_log_path: /tmp/admin_access.log
  address:
    socket_address:
       address: 0.0.0.0
       port_value: 9901

static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 80 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          codec_type: AUTO
          route_config:
            name: local_route
            virtual_hosts:
            - name: vh_001
              domains: ["ilinux.io", "*.ilinux.io", "ilinux.*"]
              routes:
              - match:
                  path: "/service/blue"
                route:
                  cluster: blue
              - match:
                  safe_regex:
                    google_re2: {}
                    regex: "^/service/.*blue$"
                redirect:
                  path_redirect: "/service/blue"
              - match:
                  prefix: "/service/yellow"
                direct_response:
                  status: 200
                  body:
                    inline_string: "This page will be provided soon later.\n"
              - match:
                  prefix: "/"
                route:
                  cluster: red
            - name: vh_002
              domains: ["*"]
              routes:
              - match:
                  prefix: "/"
                route:
                  cluster: gray
          http_filters:
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router

  clusters:
  - name: blue
    connect_timeout: 0.25s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    http2_protocol_options: {}
    load_assignment:
      cluster_name: blue
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: blue
                port_value: 80

  - name: red
    connect_timeout: 0.25s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    http2_protocol_options: {}
    load_assignment:
      cluster_name: red
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: red
                port_value: 80

  - name: green
    connect_timeout: 0.25s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    http2_protocol_options: {}
    load_assignment:
      cluster_name: green
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: green
                port_value: 80

  - name: gray
    connect_timeout: 0.25s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    http2_protocol_options: {}
    load_assignment:
      cluster_name: gray
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: gray
                port_value: 80

# docker-compose.yaml
[root@k8s-harbor01 httproute-simple-match]# cat docker-compose.yaml
version: '3'

services:
  front-envoy:
    #image: envoyproxy/envoy-alpine:v1.21-latest
    image: envoyproxy/envoy:v1.23-latest
    environment:
      - ENVOY_UID=0
      - ENVOY_GID=0
    volumes:
      - ./front-envoy.yaml:/etc/envoy/envoy.yaml
    networks:
      envoymesh:
        ipv4_address: 172.31.50.10
    expose:
      # Expose ports 80 (for general traffic) and 9901 (for the admin server)
      - "80"
      - "9901"

  light_blue:
    image: ikubernetes/servicemesh-app:latest
    networks:
      envoymesh:
        aliases:
          - light_blue
          - blue
    environment:
      - SERVICE_NAME=light_blue
    expose:
      - "80"

  dark_blue:
    image: ikubernetes/servicemesh-app:latest
    networks:
      envoymesh:
        aliases:
          - dark_blue
          - blue
    environment:
      - SERVICE_NAME=dark_blue
    expose:
      - "80"

  light_green:
    image: ikubernetes/servicemesh-app:latest
    networks:
      envoymesh:
        aliases:
          - light_green
          - green
    environment:
      - SERVICE_NAME=light_green
    expose:
      - "80"

  dark_green:
    image: ikubernetes/servicemesh-app:latest
    networks:
      envoymesh:
        aliases:
          - dark_green
          - green
    environment:
      - SERVICE_NAME=dark_green
    expose:
      - "80"

  light_red:
    image: ikubernetes/servicemesh-app:latest
    networks:
      envoymesh:
        aliases:
          - light_red
          - red
    environment:
      - SERVICE_NAME=light_red
    expose:
      - "80"

  dark_red:
    image: ikubernetes/servicemesh-app:latest
    networks:
      envoymesh:
        aliases:
          - dark_red
          - red
    environment:
      - SERVICE_NAME=dark_red
    expose:
      - "80"

  gray:
    image: ikubernetes/servicemesh-app:latest
    networks:
      envoymesh:
        aliases:
          - gray
          - grey
    environment:
      - SERVICE_NAME=gray
    expose:
      - "80"

networks:
  envoymesh:
    driver: bridge
    ipam:
      config:
        - subnet: 172.31.50.0/24
3.7.1.3 Start the containers
[root@k8s-harbor01 httproute-simple-match]# docker-compose up -d
3.7.1.4 Test the domain matching behavior
# First, send a request whose Host header cannot be matched by virtual host vh_001
[root@k8s-harbor01 httproute-simple-match]# curl -H "Host: www.magedu.com" http://172.31.50.10/service/a # this Host cannot be matched by vh_001, so the request falls through to the default catch-all virtual host and lands in the gray cluster
Hello from App behind Envoy (service gray)! hostname: ad9f33f3c71a resolved hostname: 172.31.50.3

# Next, use a domain that vh_001 does match
[root@k8s-harbor01 httproute-simple-match]# curl -H "Host: www.ilinux.io" http://172.31.50.10/service/a # www.ilinux.io matches *.ilinux.io, but among the routes only the "/" prefix matches this path, so the request is scheduled to the red cluster
Hello from App behind Envoy (service dark_red)! hostname: 424d88d76fd9 resolved hostname: 172.31.50.2
[root@k8s-harbor01 httproute-simple-match]# curl -H "Host: www.ilinux.io" http://172.31.50.10/service/a
Hello from App behind Envoy (service light_red)! hostname: f82dd82b3e51 resolved hostname: 172.31.50.6
3.7.1.5 Test the route matching behavior
# First, request "/service/blue"
[root@k8s-harbor01 httproute-simple-match]# curl -H "Host: www.ilinux.io" http://172.31.50.10/service/blue # both the Host and the path match the first virtual host, so the request is scheduled to the blue cluster
Hello from App behind Envoy (service light_blue)! hostname: 0360309299c7 resolved hostname: 172.31.50.7

# Next, request "/service/dark_blue"
[root@k8s-harbor01 httproute-simple-match]# curl -I -H "Host: www.ilinux.io" http://172.31.50.10/service/dark_blue # the response shows that the request is redirected to /service/blue
HTTP/1.1 301 Moved Permanently 
location: http://www.ilinux.io/service/blue
date: Thu, 12 Oct 2023 06:14:58 GMT
server: envoy
transfer-encoding: chunked

# Then request "/service/yellow"
[root@k8s-harbor01 httproute-simple-match]# curl -H "Host: www.ilinux.io" http://172.31.50.10/service/yellow # the content is returned directly by Envoy
This page will be provided soon later.


3.7.1.6 Clean up
[root@k8s-harbor01 httproute-simple-match]# docker-compose  down

3.7.2 Route matching on request headers

3.7.2.1 Environment
[root@k8s-harbor01 httproute-simple-match]# cd ../httproute-headers-match/
[root@k8s-harbor01 httproute-headers-match]# ls
docker-compose.yaml  front-envoy.yaml  README.md

[root@k8s-harbor01 httproute-headers-match]# cat README.md
Six services in total: 1 envoy gateway and 5 backend services.
3.7.2.2 Configuration files
[root@k8s-harbor01 httproute-headers-match]# cat front-envoy.yaml
admin:
  profile_path: /tmp/envoy.prof
  access_log_path: /tmp/admin_access.log
  address:
    socket_address:
       address: 0.0.0.0
       port_value: 9901

static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 80 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          codec_type: AUTO
          route_config:
            name: local_route
            virtual_hosts: # virtual host configuration
            - name: vh_001 # virtual host
              domains: ["*"] # matches all domains
              routes:
              - match:
                  prefix: "/" # match the path prefix /
                  headers: # match requests carrying the header X-Canary with the value "true" (effectively canary traffic in practice)
                  - name: X-Canary
                    exact_match: "true"
                route:
                  cluster: demoappv12 # requests satisfying the match above are routed to the demoappv12 cluster
              - match:
                  prefix: "/" # match the path prefix /
                  query_parameters: # query parameters
                  - name: "username" # parameter named username
                    string_match: # string match
                      prefix: "vip_" # value must start with vip_
                route:
                  cluster: demoappv11 # requests satisfying the match above are routed to demoappv11
              - match:
                  prefix: "/"
                route:
                  cluster: demoappv10 # everything else is routed to demoappv10
          http_filters:
          - name: envoy.filters.http.router

  clusters:
  - name: demoappv10
    connect_timeout: 0.25s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: demoappv10
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: demoappv10
                port_value: 80

  - name: demoappv11
    connect_timeout: 0.25s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: demoappv11
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: demoappv11
                port_value: 80

  - name: demoappv12
    connect_timeout: 0.25s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: demoappv12
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: demoappv12
                port_value: 80

# docker-compose.yaml
[root@k8s-harbor01 httproute-headers-match]# cat docker-compose.yaml
version: '3'

services:
  front-envoy:
    image: envoyproxy/envoy-alpine:v1.21-latest
    environment:
      - ENVOY_UID=0
      - ENVOY_GID=0
    volumes:
      - ./front-envoy.yaml:/etc/envoy/envoy.yaml
    networks:
      envoymesh:
        ipv4_address: 172.31.52.10
    expose:
      # Expose ports 80 (for general traffic) and 9901 (for the admin server)
      - "80"
      - "9901"

  demoapp-v1.0-1:
    image: ikubernetes/demoapp:v1.0
    hostname: demoapp-v1.0-1
    networks:
      envoymesh:
        aliases:
          - demoappv10
    expose:
      - "80"

  demoapp-v1.0-2:
    image: ikubernetes/demoapp:v1.0
    hostname: demoapp-v1.0-2
    networks:
      envoymesh:
        aliases:
          - demoappv10
    expose:
      - "80"

  demoapp-v1.1-1:
    image: ikubernetes/demoapp:v1.1
    hostname: demoapp-v1.1-1
    networks:
      envoymesh:
        aliases:
          - demoappv11
    expose:
      - "80"

  demoapp-v1.1-2:
    image: ikubernetes/demoapp:v1.1
    hostname: demoapp-v1.1-2
    networks:
      envoymesh:
        aliases:
          - demoappv11
    expose:
      - "80"

  demoapp-v1.2-1:
    image: ikubernetes/demoapp:v1.2
    hostname: demoapp-v1.2-1
    networks:
      envoymesh:
        aliases:
          - demoappv12
    expose:
      - "80"

networks:
  envoymesh:
    driver: bridge
    ipam:
      config:
        - subnet: 172.31.52.0/24

3.7.2.3 Start the containers
[root@k8s-harbor01 httproute-headers-match]# docker-compose up -d

[root@k8s-harbor01 httproute-headers-match]# docker-compose ps
                 Name                               Command               State              Ports
-------------------------------------------------------------------------------------------------------------
httprouteheadersmatch_demoapp-v1.0-1_1   /bin/sh -c python3 /usr/lo ...   Up      80/tcp
httprouteheadersmatch_demoapp-v1.0-2_1   /bin/sh -c python3 /usr/lo ...   Up      80/tcp
httprouteheadersmatch_demoapp-v1.1-1_1   /bin/sh -c python3 /usr/lo ...   Up      80/tcp
httprouteheadersmatch_demoapp-v1.1-2_1   /bin/sh -c python3 /usr/lo ...   Up      80/tcp
httprouteheadersmatch_demoapp-v1.2-1_1   /bin/sh -c python3 /usr/lo ...   Up      80/tcp
httprouteheadersmatch_front-envoy_1      /docker-entrypoint.sh envo ...   Up      10000/tcp, 80/tcp, 9901/tcp

3.7.2.4 Test: requests without any parameters
# The requests below go straight to demoappv10
[root@k8s-harbor01 httproute-headers-match]# curl 172.31.52.10/hostname 
ServerName: demoapp-v1.0-1
[root@k8s-harbor01 httproute-headers-match]# curl 172.31.52.10/hostname
ServerName: demoapp-v1.0-2

3.7.2.5 Test: requests carrying the header "X-Canary: true"
[root@k8s-harbor01 httproute-headers-match]# curl -H "X-Canary: true" 172.31.52.10/hostname
ServerName: demoapp-v1.2-1

3.7.2.6 Test: requests with a specific query parameter
[root@k8s-harbor01 httproute-headers-match]# curl 172.31.52.10/hostname?username=vip_xxx
ServerName: demoapp-v1.1-2

[root@k8s-harbor01 httproute-headers-match]# curl 172.31.52.10/hostname?username=vip_abvc
ServerName: demoapp-v1.1-1
3.7.2.7 Clean up
[root@k8s-harbor01 httproute-headers-match]# docker-compose  down

4. Gray Release (Gradual Rollout)

4.1 Extended Envoy Traffic Governance Terms

(1) Traffic shifting (migration)
For example, a company runs two platforms and service A is on platform A. To migrate it to platform B, a copy of service A is deployed on platform B, and the load balancer or external gateway then switches service A's traffic over to the new deployment.

(2) Traffic splitting
Services A and B exist at the same time and both receive external traffic.

(3) Traffic mirroring
For example, a new service C is still being tested, so real production traffic is copied to it for testing.
The mirrored service receives requests but its responses are not returned to clients, so clients are unaffected; once testing passes, the old version can be replaced.
In short, traffic is duplicated in order to test a service's availability.

(4) Fault injection
(5) Timeouts and retries
(6) CORS (Cross-Origin Resource Sharing)

4.2 Gray Release

4.2.1 Introduction to gray release

When a new version goes live, replacing the old version outright is risky, whether in terms of product stability or user acceptance;
the common practice is therefore to keep the old and new versions online at the same time, initially directing only a small share of traffic to the new version, and gradually increasing the switched share once the new version is confirmed to be fine.

Gray release is an important technique for safely launching iterative software products into production; for Envoy, gray release is merely one typical application of traffic governance.
Some common patterns:
- canary release
- blue-green release
- A/B testing
- traffic mirroring

4.2.2 Gray-release strategies

- When a new version is to be released to production, a gray version is added first, and part of the traffic of the existing default production version is diverted to it; the diversion mechanism configured for this is the gray-release strategy;
- once the gray version is judged stable, it can be configured to take over all traffic, and the old version is taken offline.

4.3 Canary Deployment

(1) A canary deployment tests a new version by directing a portion of real production traffic to the new version of the service running in production, evaluating its performance and behavior, and quickly gathering user feedback from that small new-version deployment;

(2) its defining characteristic is adding a small number of new-version instances alongside the running service, quickly collecting feedback from them, and deciding the final delivery form based on that feedback.


4.4 Blue-Green Deployment

(1) Blue-green release provides a zero-downtime deployment approach: the new version is deployed and tested without taking the old version out of service, and traffic is switched to the new version once it is confirmed healthy;

(2) Characteristics
- The new version is deployed while the old version is kept; both versions are online at the same time and act as hot standbys for each other.
- Versions are brought online or taken offline by switching the route weight (weight) between 0 and 100, so that a problem can be rolled back to the old version quickly.


4.5 A/B Testing

In essence, an A/B test is an experiment: two or more variants of a page are shown to users at random, and statistical analysis determines which variant performs better for a given conversion goal.
For example, with two web pages, A/B testing can be used to deploy both, expose them to users, and find out which one users prefer.

A/B testing can be used to test, compare, and analyze almost anything:
- It is most often used for websites and mobile applications: two or more versions of a Web or App interface or flow are shown, over the same time window, to visitor groups with the same (or similar) attributes or composition; user-experience and business data are collected from each group, and the best version is chosen for formal adoption.
- It is mainly used for conversion-rate optimization; online businesses typically run regular A/B tests to optimize their landing pages and improve ROI.

A/B testing requires deploying two equivalent versions, A and B, online at the same time, both receiving user traffic:
- a targeted selection policy directs one part of the users to version A and the rest to version B;
- feedback from the two groups is collected separately, and the analysis determines which version is finally adopted.


4.6 Gray-Release Strategies

Commonly used strategies fall broadly into two types, "release by request content" and "release by traffic proportion" (a hedged sketch of a content-based rule follows this list):
(1) Release by request content: rules on the request content are configured, and traffic satisfying those rules is routed to the gray version; for HTTP requests, for example, traffic can be diverted by matching a specific Cookie header value.
- Cookie content:
a. Exact match: traffic goes to this version only when the expression matches it exactly;
b. Regex match: a regular expression is used to match the corresponding rule;

- Custom header: the ingress gateway automatically injects a header into a portion of the traffic, and routing then matches this header to schedule gray traffic.
a. Exact match: traffic goes to this version only when the expression matches it exactly;
b. Regex match: a regular expression is used to match the corresponding rule;

- The header key and value can be customized; the value supports exact and regex matching;

(2) Release by traffic proportion: a desired traffic weight is configured for the gray version, and service traffic is diverted to it in that proportion; for example, 10% of the traffic goes to the new version while 90% stays on the old version;
- the weights of all versions sum to 100;
- this kind of gray strategy is also sometimes referred to as A/B testing.
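
As a hedged sketch of a content-based gray-release rule (the cluster names are placeholders and the cookie-matching regex is only one possible approach, not taken from the original notes), a route that diverts requests whose Cookie contains canary=true might look like this:

routes:
- match:
    prefix: "/"
    headers:
    - name: cookie                 # match on the Cookie request header
      safe_regex_match:
        google_re2: {}
        regex: ".*canary=true.*"   # any cookie string containing canary=true
  route:
    cluster: demoapp_canary        # placeholder gray/canary cluster
- match:
    prefix: "/"
  route:
    cluster: demoapp_stable        # placeholder default cluster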

4.7 Ways to Implement Gray Release

(1) Gray release based on a load balancer (non-containerized environments)
- A traffic-distribution mechanism is configured on a load balancer at the service entry that supports traffic policies.
- Only the entry service can be gray-released; backend services cannot be covered, because the nginx proxy sits only at the entry; to gray-release backends as well, an nginx would have to be placed in front of every backend.

(2) Gray release based on Kubernetes
- Traffic is distributed according to the ratio of Pods running the old and new versions of the application.
- Old-version Pods are continuously rolled to the new version: scale up then down, scale down then up, or both at once.
- The service entry is usually a Service or an Ingress. # With Ingress, gray release is again only possible at the entry, for the same reason as above.

(3) Gray release based on the Istio service mesh
- For Envoy or Istio, gray release is simply one typical application of the traffic governance mechanisms.
- The control plane distributes the traffic policy to the Envoy sidecars of the clients that send requests to the target service.
- Traffic distribution based on request content is supported, e.g. browser type, cookies, etc.
- The service entry is usually a separately deployed Envoy Gateway.

4.8 The Gray-Release Process

# Essentially the following two steps
(1) Provision the carrying instances # pick one of the three approaches below
- Rolling update, in batches:
take part of the instances offline first, update them, then bring them back online;
or bring updated instances online first, then take the corresponding proportion of old-version instances offline and update them;

- Blue-green deployment:
prepare an additional full set of new-version instances;

- A/B testing:
a special application of the rolling-update or blue-green mechanisms;

(2) Configure the traffic policy
- Adjust the traffic policy in step with the instance update mechanism:
drain the traffic of old-version instances before taking them offline;
bring new-version instances online and assign traffic to them;
slow start;

5. Envoy Traffic Shifting and Splitting

(1) When a new version goes live, the old and new versions are kept online at the same time for the sake of product stability and user acceptance, and traffic is distributed to the different versions as needed:
- blue-green release
- A/B testing
- canary release

(2) The HTTP router can split the traffic of a route within a virtual host across two or more upstream clusters by proportion, which yields two common use cases:
a. Version upgrades: traffic is gradually shifted from one cluster to another during routing, implementing a gray release;
- done by defining the percentage of the route's traffic in the route configuration;

b. A/B testing or multivariate testing: several variants of the same service are tested at the same time, and the routed traffic must be distributed across clusters running different versions of that service;
- done by using weight-based cluster routing in the route;

In addition, combining the match conditions with specific headers also enables content-based traffic management.

5.1 Advanced Routing: Traffic Shifting

(1) By configuring a runtime object in a route, the probability of selecting that particular route (and therefore its cluster) can be varied, gradually shifting the traffic of a specific route in a virtual host from one cluster to another:
---
routes:
- match: # defines the route match parameters;
    prefix|path|safe_regex: ... # traffic filter condition; exactly one of the three must be defined;
    runtime_fraction: # additionally matches against the specified runtime key; on each evaluation of the match path, a random number must fall below the percentage indicated by this field; supports progressive adjustment;
      default_value: # default used when the runtime key is unavailable;
        numerator: # numerator, defaults to 0;
        denominator: # denominator; if smaller than the numerator the final percentage is 1 (100%); fixed values HUNDRED (default), TEN_THOUSAND, and MILLION;
      runtime_key: routing.traffic_shift.KEY # the runtime key to use; its value is user-defined;
  route:
    cluster: app1_v1
- match:
    prefix|path|safe_regex: ... # this match condition should be identical to the previous route's so that traffic can be split;
  route:
    cluster: app1_v2 # this cluster is typically a different version of the application targeted by the previous route entry;

(2) For route matching, Envoy stops at the first match it detects; traffic shifting should therefore be configured as follows.
- Configure two route entries with identical match conditions.
- Configure a runtime_fraction object in the first route entry and set the share of traffic it should receive.
- Requests outside that share are captured by the second route entry.

(3) The user then shifts traffic by repeatedly modifying the value of the runtime_fraction object through Envoy's admin interface;
- e.g.: ~]# curl -XPOST 'http://envoy_ip:admin_port/runtime_modify?key1=val1&key2=val2'

5.1.1 Traffic-shifting configuration walkthrough

Suppose a microservice application demoapp has versions 1.0 and 1.1, corresponding to the demoappv10 and demoappv11 clusters respectively;
- when the new route configuration is applied, demoappv10 carries all of the request traffic;
- traffic is shifted by repeatedly adjusting the value of the runtime parameter routing.traffic_shift.demoapp, and can eventually be moved entirely to the demoappv11 cluster, e.g.:
curl -XPOST 'http://front_envoy_ip:admin_port/runtime_modify?routing.traffic_shift.demoapp=90'

route_config:
  name: local_route
  virtual_hosts:
  - name: demoapp
    domains: ["*"]
    routes:
    - match:
        prefix: "/"
        runtime_fraction:
          default_value:
            numerator: 100 # numerator
            denominator: HUNDRED # denominator: 100
          runtime_key: routing.traffic_shift.demoapp # later, the admin interface adjusts this key's value to change the traffic distribution; the value effectively replaces the numerator
      route:
        cluster: demoappv10 # by default all traffic is scheduled to this cluster, because numerator and denominator are both 100
    - match:
        prefix: "/"
      route:
        cluster: demoappv11

5.1.2 Traffic-shifting example

5.1.2.1 Configuration files
# front-envoy.yaml
[root@k8s-harbor01 http-traffic-shifting]# cat front-envoy.yaml
admin:
  profile_path: /tmp/envoy.prof
  access_log_path: /tmp/admin_access.log
  address:
    socket_address:
       address: 0.0.0.0
       port_value: 9901

# The runtime configuration below must be added for the later curl calls to runtime_modify to work
layered_runtime: # Envoy runtime configuration
  layers: # list of runtime layers; later layers override configuration from earlier layers;
  - name: admin # name of the runtime layer
    admin_layer: {} # admin-console runtime layer, i.e. viewed via the /runtime admin endpoint and modified via the /runtime_modify admin endpoint;

static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 80 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          codec_type: AUTO
          route_config:
            name: local_route
            virtual_hosts:
            - name: demoapp
              domains: ["*"]
              routes:
              - match:
                  prefix: "/"
                  runtime_fraction: # additionally matches against the specified runtime key; on each evaluation of the match path, a random number must fall below the percentage indicated by this field; supports progressive adjustment;
                    default_value: # default value
                      numerator: 100 # numerator
                      denominator: HUNDRED # denominator
                    runtime_key: routing.traffic_shift.demoapp # key modified dynamically through the admin interface, effectively replacing the numerator
                route:
                  cluster: demoappv10 # by default, all traffic is scheduled to this cluster
              - match:
                  prefix: "/"
                route:
                  cluster: demoappv11 # this cluster only receives traffic after the numerator value is lowered via the admin interface
          http_filters:
          - name: envoy.filters.http.router

  clusters:
  - name: demoappv10
    connect_timeout: 0.25s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: demoappv10
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: demoappv10
                port_value: 80

  - name: demoappv11
    connect_timeout: 0.25s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: demoappv11
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: demoappv11
                port_value: 80


# docker-compose.yaml
[root@k8s-harbor01 http-traffic-shifting]# cat docker-compose.yaml
version: '3'

services:
  front-envoy:
    image: envoyproxy/envoy-alpine:v1.21-latest
    environment:
      - ENVOY_UID=0
      - ENVOY_GID=0
    volumes:
      - ./front-envoy.yaml:/etc/envoy/envoy.yaml
    networks:
      envoymesh:
        ipv4_address: 172.31.55.10
    expose:
      # Expose ports 80 (for general traffic) and 9901 (for the admin server)
      - "80"
      - "9901"

  demoapp-v1.0-1:
    image: ikubernetes/demoapp:v1.0
    hostname: demoapp-v1.0-1
    networks:
      envoymesh:
        aliases:
          - demoappv10
    expose:
      - "80"

  demoapp-v1.0-2:
    image: ikubernetes/demoapp:v1.0
    hostname: demoapp-v1.0-2
    networks:
      envoymesh:
        aliases:
          - demoappv10
    expose:
      - "80"

  demoapp-v1.0-3:
    image: ikubernetes/demoapp:v1.0
    hostname: demoapp-v1.0-3
    networks:
      envoymesh:
        aliases:
          - demoappv10
    expose:
      - "80"

  demoapp-v1.1-1:
    image: ikubernetes/demoapp:v1.1
    hostname: demoapp-v1.1-1
    networks:
      envoymesh:
        aliases:
          - demoappv11
    expose:
      - "80"

  demoapp-v1.1-2:
    image: ikubernetes/demoapp:v1.1
    hostname: demoapp-v1.1-2
    networks:
      envoymesh:
        aliases:
          - demoappv11
    expose:
      - "80"

networks:
  envoymesh:
    driver: bridge
    ipam:
      config:
        - subnet: 172.31.55.0/24

# Test script send-request.sh
[root@k8s-harbor01 http-traffic-shifting]# cat send-request.sh
#!/bin/bash
declare -i ver10=0
declare -i ver11=0

interval="0.2"

while true; do
        if curl -s http://$1/hostname | grep "demoapp-v1.0" &> /dev/null; then
                # $1 is the host address of the front-envoy.
                ver10=$[$ver10+1]
        else
                ver11=$[$ver11+1]
        fi
        echo "demoapp-v1.0:demoapp-v1.1 = $ver10:$ver11"
        sleep $interval
done

5.1.2.2 Start the containers
[root@k8s-harbor01 http-traffic-shifting]# docker-compose up -d

[root@k8s-harbor01 http-traffic-shifting]# docker-compose ps
                Name                              Command               State              Ports
-----------------------------------------------------------------------------------------------------------
httptrafficshifting_demoapp-v1.0-1_1   /bin/sh -c python3 /usr/lo ...   Up      80/tcp
httptrafficshifting_demoapp-v1.0-2_1   /bin/sh -c python3 /usr/lo ...   Up      80/tcp
httptrafficshifting_demoapp-v1.0-3_1   /bin/sh -c python3 /usr/lo ...   Up      80/tcp
httptrafficshifting_demoapp-v1.1-1_1   /bin/sh -c python3 /usr/lo ...   Up      80/tcp
httptrafficshifting_demoapp-v1.1-2_1   /bin/sh -c python3 /usr/lo ...   Up      80/tcp
httptrafficshifting_front-envoy_1      /docker-entrypoint.sh envo ...   Up      10000/tcp, 80/tcp, 9901/tcp

5.1.3 Traffic-shifting tests

5.1.3.1 Default traffic distribution
[root@k8s-harbor01 http-traffic-shifting]# ./send-request.sh 172.31.55.10 # as the output shows, demoapp-v1.1 receives no traffic at this point
demoapp-v1.0:demoapp-v1.1 = 1:0
demoapp-v1.0:demoapp-v1.1 = 2:0
demoapp-v1.0:demoapp-v1.1 = 3:0
demoapp-v1.0:demoapp-v1.1 = 4:0
demoapp-v1.0:demoapp-v1.1 = 5:0
demoapp-v1.0:demoapp-v1.1 = 6:0

5.1.3.2 调整流量分发比例,再请求查看流量调度比例
# 将保留给demoappv10集群的流量比例调整为90%,方法是将指定键的值定义为相应的分子数即可
[root@k8s-harbor01 http-traffic-shifting]# curl -XPOST http://172.31.55.10:9901/runtime_modify?routing.traffic_shift.demoapp=90 # 使用这里的runtime_modify,一定要在入口网关配置上开启运行时配置才行
OK

# Send requests again
[root@k8s-harbor01 http-traffic-shifting]# ./send-request.sh 172.31.55.10 # as the output shows, a portion of the traffic (10%) now reaches demoapp-v1.1
demoapp-v1.0:demoapp-v1.1 = 1:0
demoapp-v1.0:demoapp-v1.1 = 2:0
demoapp-v1.0:demoapp-v1.1 = 3:0
demoapp-v1.0:demoapp-v1.1 = 4:0
demoapp-v1.0:demoapp-v1.1 = 5:0
demoapp-v1.0:demoapp-v1.1 = 6:0
demoapp-v1.0:demoapp-v1.1 = 7:0
demoapp-v1.0:demoapp-v1.1 = 8:0
demoapp-v1.0:demoapp-v1.1 = 9:0
demoapp-v1.0:demoapp-v1.1 = 10:0
demoapp-v1.0:demoapp-v1.1 = 10:1
demoapp-v1.0:demoapp-v1.1 = 11:1
demoapp-v1.0:demoapp-v1.1 = 12:1
demoapp-v1.0:demoapp-v1.1 = 13:1
demoapp-v1.0:demoapp-v1.1 = 14:1
demoapp-v1.0:demoapp-v1.1 = 15:1
demoapp-v1.0:demoapp-v1.1 = 16:1
demoapp-v1.0:demoapp-v1.1 = 16:2
demoapp-v1.0:demoapp-v1.1 = 17:2
demoapp-v1.0:demoapp-v1.1 = 18:2
demoapp-v1.0:demoapp-v1.1 = 19:2
demoapp-v1.0:demoapp-v1.1 = 20:2
demoapp-v1.0:demoapp-v1.1 = 21:2
demoapp-v1.0:demoapp-v1.1 = 22:2
demoapp-v1.0:demoapp-v1.1 = 23:2
demoapp-v1.0:demoapp-v1.1 = 23:3
demoapp-v1.0:demoapp-v1.1 = 24:3
demoapp-v1.0:demoapp-v1.1 = 25:3
demoapp-v1.0:demoapp-v1.1 = 26:3

# Lower the old cluster's share to 10
[root@k8s-harbor01 http-traffic-shifting]# curl -XPOST http://172.31.55.10:9901/runtime_modify?routing.traffic_shift.demoapp=10
OK

# Send requests again
[root@k8s-harbor01 http-traffic-shifting]# ./send-request.sh 172.31.55.10 # now about 90% of the requests go to the new cluster
demoapp-v1.0:demoapp-v1.1 = 0:1
demoapp-v1.0:demoapp-v1.1 = 0:2
demoapp-v1.0:demoapp-v1.1 = 0:3
demoapp-v1.0:demoapp-v1.1 = 0:4
demoapp-v1.0:demoapp-v1.1 = 0:5
demoapp-v1.0:demoapp-v1.1 = 0:6
demoapp-v1.0:demoapp-v1.1 = 0:7
demoapp-v1.0:demoapp-v1.1 = 0:8
demoapp-v1.0:demoapp-v1.1 = 0:9
demoapp-v1.0:demoapp-v1.1 = 0:10
demoapp-v1.0:demoapp-v1.1 = 0:11
demoapp-v1.0:demoapp-v1.1 = 0:12
demoapp-v1.0:demoapp-v1.1 = 0:13
demoapp-v1.0:demoapp-v1.1 = 0:14
demoapp-v1.0:demoapp-v1.1 = 0:15
demoapp-v1.0:demoapp-v1.1 = 0:16
demoapp-v1.0:demoapp-v1.1 = 0:17
demoapp-v1.0:demoapp-v1.1 = 0:18
demoapp-v1.0:demoapp-v1.1 = 0:19
demoapp-v1.0:demoapp-v1.1 = 0:20
demoapp-v1.0:demoapp-v1.1 = 0:21
demoapp-v1.0:demoapp-v1.1 = 0:22
demoapp-v1.0:demoapp-v1.1 = 0:23
demoapp-v1.0:demoapp-v1.1 = 0:24
demoapp-v1.0:demoapp-v1.1 = 0:25
demoapp-v1.0:demoapp-v1.1 = 0:26
demoapp-v1.0:demoapp-v1.1 = 0:27
demoapp-v1.0:demoapp-v1.1 = 1:27
demoapp-v1.0:demoapp-v1.1 = 1:28
demoapp-v1.0:demoapp-v1.1 = 1:29
demoapp-v1.0:demoapp-v1.1 = 2:29
demoapp-v1.0:demoapp-v1.1 = 2:30
demoapp-v1.0:demoapp-v1.1 = 2:31
demoapp-v1.0:demoapp-v1.1 = 2:32
demoapp-v1.0:demoapp-v1.1 = 2:33
demoapp-v1.0:demoapp-v1.1 = 2:34
demoapp-v1.0:demoapp-v1.1 = 3:34
demoapp-v1.0:demoapp-v1.1 = 3:35
demoapp-v1.0:demoapp-v1.1 = 3:36

5.1.4 Clean up

[root@k8s-harbor01 http-traffic-shifting]# docker-compose  down

5.2 Advanced Routing: Traffic Splitting

(1) The HTTP router filter supports specifying, within a single route, multiple weighted upstream clusters, and then scheduling traffic to one of those clusters according to the weights;
---
routes:
- match: {...} # route match
  route:
    weighted_clusters: # matched requests are routed to this set of weighted clusters
      clusters: [] # one or more clusters associated with this route; required;
      - name: ... # target cluster name; the "cluster_header" field can be used instead to name the cluster; the two are mutually exclusive;
        weight: ... # cluster weight, ranging from 0 to total_weight; the weight represents the share of traffic
        metadata_match: {...} # endpoint metadata match conditions for the subset load balancer; optional; only upstream endpoints whose metadata matches this field receive traffic;
      total_weight: ... # total weight, defaults to 100;
      runtime_key_prefix: ... # optional key prefix; each cluster gets the runtime key "runtime_key_prefix+.+cluster[i].name", which can be modified at runtime to adjust its weight dynamically; when adjusting dynamically, make sure the weights still sum to the total (with two clusters, changing A requires changing B as well)
      # cluster[i].name is the name of the i-th cluster in the list; these runtime keys provide per-cluster weights;

(2) As with traffic shifting, the weight assigned to each cluster in a traffic split can also be adjusted with runtime parameters.

5.2.1 Traffic-splitting configuration walkthrough

Again using demoapp as the example, its 1.0 and 1.1 versions correspond to the demoappv10 and demoappv11 clusters respectively;
(1) Initially demoappv10 has weight 100 and therefore carries all request traffic, while demoappv11 has weight 0;
runtime parameters can later set demoappv11's weight to 100 and demoappv10's to 0, switching all traffic over to demoappv11 and simulating a blue-green deployment;


(2) The respective weight proportions can also be adjusted dynamically via runtime parameters:
curl -XPOST 'http://172.31.57.10:9901/runtime_modify?routing.traffic_split.demoapp.demoappv10=0&routing.traffic_split.demoapp.demoappv11=100'

virtual_hosts: # virtual host configuration
- name: demoapp
  domains: ["*"]
  routes:
  - match:
      prefix: "/"
    route:
      weighted_clusters: # weighted clusters
        clusters:
        - name: demoappv10
          weight: 100 # by default, all incoming traffic goes to demoappv10
        - name: demoappv11
          weight: 0
        total_weight: 100 # sum of the weights
        runtime_key_prefix: routing.traffic_split.demoapp # runtime key prefix; the weights of all clusters can be adjusted dynamically with curl

5.2.2 Traffic-splitting example

5.2.2.1 Configuration files
[root@k8s-harbor01 http-traffic-shifting]# cd ../http-traffic-splitting/
[root@k8s-harbor01 http-traffic-splitting]# ls
docker-compose.yaml  front-envoy.yaml  README.md  send-request.sh

# front-envoy.yaml
[root@k8s-harbor01 http-traffic-splitting]# cat front-envoy.yaml
admin:
  profile_path: /tmp/envoy.prof
  access_log_path: /tmp/admin_access.log
  address:
    socket_address:
       address: 0.0.0.0
       port_value: 9901

layered_runtime:
  layers:
  - name: admin
    admin_layer: {}

static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 80 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          codec_type: AUTO
          route_config:
            name: local_route
            virtual_hosts:
            - name: demoapp
              domains: ["*"]
              routes:
              - match:
                  prefix: "/"
                route:
                  weighted_clusters:
                    clusters:
                    - name: demoappv10
                      weight: 100
                    - name: demoappv11
                      weight: 0
                    total_weight: 100
                    runtime_key_prefix: routing.traffic_split.demoapp
          http_filters:
          - name: envoy.filters.http.router

  clusters:
  - name: demoappv10
    connect_timeout: 0.25s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: demoappv10
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: demoappv10
                port_value: 80

  - name: demoappv11
    connect_timeout: 0.25s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: demoappv11
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: demoappv11
                port_value: 80


# docker-compose.yaml
[root@k8s-harbor01 http-traffic-splitting]# cat docker-compose.yaml
version: '3'

services:
  front-envoy:
    image: envoyproxy/envoy-alpine:v1.21-latest
    environment:
      - ENVOY_UID=0
      - ENVOY_GID=0
    volumes:
      - ./front-envoy.yaml:/etc/envoy/envoy.yaml
    networks:
      envoymesh:
        ipv4_address: 172.31.57.10
    expose:
      # Expose ports 80 (for general traffic) and 9901 (for the admin server)
      - "80"
      - "9901"

  demoapp-v1.0-1:
    image: ikubernetes/demoapp:v1.0
    hostname: demoapp-v1.0-1
    networks:
      envoymesh:
        aliases:
          - demoappv10
    expose:
      - "80"

  demoapp-v1.0-2:
    image: ikubernetes/demoapp:v1.0
    hostname: demoapp-v1.0-2
    networks:
      envoymesh:
        aliases:
          - demoappv10
    expose:
      - "80"

  demoapp-v1.0-3:
    image: ikubernetes/demoapp:v1.0
    hostname: demoapp-v1.0-3
    networks:
      envoymesh:
        aliases:
          - demoappv10
    expose:
      - "80"

  demoapp-v1.1-1:
    image: ikubernetes/demoapp:v1.1
    hostname: demoapp-v1.1-1
    networks:
      envoymesh:
        aliases:
          - demoappv11
    expose:
      - "80"

  demoapp-v1.1-2:
    image: ikubernetes/demoapp:v1.1
    hostname: demoapp-v1.1-2
    networks:
      envoymesh:
        aliases:
          - demoappv11
    expose:
      - "80"

networks:
  envoymesh:
    driver: bridge
    ipam:
      config:
        - subnet: 172.31.57.0/24

# send-request.sh
[root@k8s-harbor01 http-traffic-splitting]# cat send-request.sh
#!/bin/bash
declare -i ver10=0
declare -i ver11=0

interval="0.2"

while true; do
        if curl -s http://$1/hostname | grep "demoapp-v1.0" &> /dev/null; then
                # $1 is the host address of the front-envoy.
                ver10=$[$ver10+1]
        else
                ver11=$[$ver11+1]
        fi
        echo "demoapp-v1.0:demoapp-v1.1 = $ver10:$ver11"
        sleep $interval
done
5.2.2.2 Start the containers
[root@k8s-harbor01 http-traffic-splitting]# docker-compose up -d

[root@k8s-harbor01 http-traffic-splitting]# docker-compose ps
                Name                               Command               State              Ports
------------------------------------------------------------------------------------------------------------
httptrafficsplitting_demoapp-v1.0-1_1   /bin/sh -c python3 /usr/lo ...   Up      80/tcp
httptrafficsplitting_demoapp-v1.0-2_1   /bin/sh -c python3 /usr/lo ...   Up      80/tcp
httptrafficsplitting_demoapp-v1.0-3_1   /bin/sh -c python3 /usr/lo ...   Up      80/tcp
httptrafficsplitting_demoapp-v1.1-1_1   /bin/sh -c python3 /usr/lo ...   Up      80/tcp
httptrafficsplitting_demoapp-v1.1-2_1   /bin/sh -c python3 /usr/lo ...   Up      80/tcp
httptrafficsplitting_front-envoy_1      /docker-entrypoint.sh envo ...   Up      10000/tcp, 80/tcp, 9901/tcp

5.2.3 Traffic-splitting tests

5.2.3.1 Default traffic split
[root@k8s-harbor01 http-traffic-splitting]# ./send-request.sh 172.31.57.10 # as shown, the traffic has not been split yet
demoapp-v1.0:demoapp-v1.1 = 1:0
demoapp-v1.0:demoapp-v1.1 = 2:0
demoapp-v1.0:demoapp-v1.1 = 3:0
demoapp-v1.0:demoapp-v1.1 = 4:0
demoapp-v1.0:demoapp-v1.1 = 5:0
demoapp-v1.0:demoapp-v1.1 = 6:0
demoapp-v1.0:demoapp-v1.1 = 7:0
demoapp-v1.0:demoapp-v1.1 = 8:0
demoapp-v1.0:demoapp-v1.1 = 9:0
demoapp-v1.0:demoapp-v1.1 = 10:0
demoapp-v1.0:demoapp-v1.1 = 11:0
demoapp-v1.0:demoapp-v1.1 = 12:0

5.2.3.2 Adjust the cluster weights
# Swap the cluster weights to simulate a blue-green deployment by appending the dot-separated cluster name to the runtime key prefix and assigning each cluster its new weight
[root@k8s-harbor01 http-traffic-splitting]# curl -XPOST 'http://172.31.57.10:9901/runtime_modify?routing.traffic_split.demoapp.demoappv10=0&routing.traffic_split.demoapp.demoappv11=100'
OK
5.2.3.3 Send requests
[root@k8s-harbor01 http-traffic-splitting]# ./send-request.sh 172.31.57.10 # the output shows that the old cluster no longer receives any traffic
demoapp-v1.0:demoapp-v1.1 = 0:1
demoapp-v1.0:demoapp-v1.1 = 0:2
demoapp-v1.0:demoapp-v1.1 = 0:3
demoapp-v1.0:demoapp-v1.1 = 0:4
demoapp-v1.0:demoapp-v1.1 = 0:5
demoapp-v1.0:demoapp-v1.1 = 0:6
demoapp-v1.0:demoapp-v1.1 = 0:7
demoapp-v1.0:demoapp-v1.1 = 0:8
demoapp-v1.0:demoapp-v1.1 = 0:9
demoapp-v1.0:demoapp-v1.1 = 0:10
demoapp-v1.0:demoapp-v1.1 = 0:11
demoapp-v1.0:demoapp-v1.1 = 0:12
demoapp-v1.0:demoapp-v1.1 = 0:13
demoapp-v1.0:demoapp-v1.1 = 0:14

5.2.3.4 Set both cluster weights to 50
[root@k8s-harbor01 http-traffic-splitting]# curl -XPOST 'http://172.31.57.10:9901/runtime_modify?routing.traffic_split.demoapp.demoappv10=50&routing.traffic_split.demoapp.demoappv11=50'
OK

5.2.3.5 Send requests
[root@k8s-harbor01 http-traffic-splitting]# ./send-request.sh 172.31.57.10
demoapp-v1.0:demoapp-v1.1 = 0:1
demoapp-v1.0:demoapp-v1.1 = 0:2
demoapp-v1.0:demoapp-v1.1 = 1:2
demoapp-v1.0:demoapp-v1.1 = 2:2
demoapp-v1.0:demoapp-v1.1 = 3:2
demoapp-v1.0:demoapp-v1.1 = 4:2
demoapp-v1.0:demoapp-v1.1 = 5:2
demoapp-v1.0:demoapp-v1.1 = 5:3
demoapp-v1.0:demoapp-v1.1 = 5:4
demoapp-v1.0:demoapp-v1.1 = 5:5

6. HTTP Traffic Mirroring

6.1 Introduction to Traffic Mirroring

(1) Traffic mirroring is also known as traffic replication or shadowing.
(2) Traffic mirroring is typically used for testing in the production environment: production traffic is copied to a test cluster or a new-version cluster, so that the new version can be tested under near-real conditions, effectively lowering the risk of bringing it online.

6.2 Applicable Scenarios

(1) Validating a new version: compare the output of the mirrored traffic with that of the production traffic in real time to verify the new version.
(2) Testing: use real traffic from production instances for simulated testing.
(3) Isolating test databases: for data-processing services, use an empty data store loaded with test data and run the mirrored traffic against it, isolating the test data; any sensitive data should additionally be masked.

6.3 Configuring HTTP Traffic Mirroring

(1) Traffic is forwarded to one cluster (the primary cluster) and at the same time to another cluster (the shadow cluster):
- there is no need to wait for the shadow cluster's response;
- regular statistics can be collected for the shadow cluster, which is commonly used for testing.
---
route:
  cluster|weighted_clusters:
    ...
  request_mirror_policies: [] # traffic mirroring policies
  - cluster: "..." # the cluster to mirror traffic to
    runtime_fraction: "{...}"
      default_value: # default used when the runtime key is unavailable;
        numerator: # numerator, defaults to 0;
        denominator: # denominator; if smaller than the numerator the final percentage is 1 (100%); fixed values HUNDRED (default), TEN_THOUSAND, and MILLION;
      runtime_key: routing.request_mirror.KEY # the runtime key to use; its value is user-defined;
    trace_sampled: {} # whether to sample the trace span; defaults to true

(2) By default the router mirrors all requests; the following parameter can also be used to configure the fraction of traffic forwarded:
runtime_key: a runtime key that explicitly defines the percentage of traffic forwarded to the shadow cluster, in the range 0-10000, where each unit represents 0.01% of the requests; if the key is defined but no value is set, it defaults to 0.

6.4 Traffic-Mirroring Configuration Example

Again using demoapp, versions 1.0 and 1.1 correspond to the demoappv10 and demoappv11 clusters:
- suppose the new version demoappv11 needs to be validated under a normal traffic load;
- all regular request traffic is carried by demoappv10;
- part or all of that traffic is additionally mirrored to the demoappv11 cluster; the initial mirroring ratio comes from the configured default value and can then be adjusted dynamically via the runtime key defined by runtime_key; for example, the following command raises the mirroring ratio to 100%:
curl -XPOST 'http://172.31.60.10:9901/runtime_modify?routing.request_mirror.demoapp=100'

6.5 Traffic-Mirroring Tests

6.5.1 Configuration files

[root@k8s-harbor01 http-traffic-splitting]# cd ../http-request-mirror/
[root@k8s-harbor01 http-request-mirror]# ls
docker-compose.yaml  front-envoy.yaml  README.md  send-request.sh

# front-envoy.yaml
[root@k8s-harbor01 http-request-mirror]# cat front-envoy.yaml
admin:
  profile_path: /tmp/envoy.prof
  access_log_path: /tmp/admin_access.log
  address:
    socket_address:
       address: 0.0.0.0
       port_value: 9901

layered_runtime:
  layers:
  - name: admin
    admin_layer: {}

static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 80 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          codec_type: AUTO
          route_config:
            name: local_route
            virtual_hosts:
            - name: demoapp
              domains: ["*"]
              routes:
              - match:
                  prefix: "/"
                route:
                  cluster: demoappv10
                  request_mirror_policies:
                  - cluster: demoappv11
                    runtime_fraction:
                      default_value:
                        numerator: 20
                        denominator: HUNDRED
                      runtime_key: routing.request_mirror.demoapp
          http_filters:
          - name: envoy.filters.http.router

  clusters:
  - name: demoappv10
    connect_timeout: 0.25s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: demoappv10
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: demoappv10
                port_value: 80

  - name: demoappv11
    connect_timeout: 0.25s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    load_assignment:
      cluster_name: demoappv11
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: demoappv11
                port_value: 80

# docker-compose.yaml
[root@k8s-harbor01 http-request-mirror]# cat docker-compose.yaml
version: '3'

services:
  front-envoy:
    image: envoyproxy/envoy-alpine:v1.21-latest
    environment:
      - ENVOY_UID=0
      - ENVOY_GID=0
    volumes:
      - ./front-envoy.yaml:/etc/envoy/envoy.yaml
    networks:
      envoymesh:
        ipv4_address: 172.31.60.10
    expose:
      # Expose ports 80 (for general traffic) and 9901 (for the admin server)
      - "80"
      - "9901"

  demoapp-v1.0-1:
    image: ikubernetes/demoapp:v1.0
    hostname: demoapp-v1.0-1
    networks:
      envoymesh:
        aliases:
          - demoappv10
    expose:
      - "80"

  demoapp-v1.0-2:
    image: ikubernetes/demoapp:v1.0
    hostname: demoapp-v1.0-2
    networks:
      envoymesh:
        aliases:
          - demoappv10
    expose:
      - "80"

  demoapp-v1.0-3:
    image: ikubernetes/demoapp:v1.0
    hostname: demoapp-v1.0-3
    networks:
      envoymesh:
        aliases:
          - demoappv10
    expose:
      - "80"

  demoapp-v1.1-1:
    image: ikubernetes/demoapp:v1.1
    hostname: demoapp-v1.1-1
    networks:
      envoymesh:
        aliases:
          - demoappv11
    expose:
      - "80"

  demoapp-v1.1-2:
    image: ikubernetes/demoapp:v1.1
    hostname: demoapp-v1.1-2
    networks:
      envoymesh:
        aliases:
          - demoappv11
    expose:
      - "80"

networks:
  envoymesh:
    driver: bridge
    ipam:
      config:
        - subnet: 172.31.60.0/24

# send-request.sh
[root@k8s-harbor01 http-request-mirror]# cat send-request.sh
#!/bin/bash
interval="0.5"

while true; do
        curl -s http://$1/hostname
                # $1 is the host address of the front-envoy.
        sleep $interval
done

6.5.2 Start the containers

[root@k8s-harbor01 http-request-mirror]# docker-compose up  # run in the foreground so the logs can be watched during the test

[root@k8s-harbor01 ~]# cd servicemesh_in_practise-MageEdu_N66/HTTP-Connection-Manager/http-request-mirror/ # open a new terminal window
[root@k8s-harbor01 http-request-mirror]# docker-compose  ps
               Name                             Command               State              Ports
---------------------------------------------------------------------------------------------------------
httprequestmirror_demoapp-v1.0-1_1   /bin/sh -c python3 /usr/lo ...   Up      80/tcp
httprequestmirror_demoapp-v1.0-2_1   /bin/sh -c python3 /usr/lo ...   Up      80/tcp
httprequestmirror_demoapp-v1.0-3_1   /bin/sh -c python3 /usr/lo ...   Up      80/tcp
httprequestmirror_demoapp-v1.1-1_1   /bin/sh -c python3 /usr/lo ...   Up      80/tcp
httprequestmirror_demoapp-v1.1-2_1   /bin/sh -c python3 /usr/lo ...   Up      80/tcp
httprequestmirror_front-envoy_1      /docker-entrypoint.sh envo ...   Up      10000/tcp, 80/tcp, 9901/tcp

6.5.3 Request tests

6.5.3.1 Default requests
[root@k8s-harbor01 http-request-mirror]# ./send-request.sh 172.31.60.10 # the script sends an HTTP request every 0.5 seconds

(screenshot of the request output omitted)

6.5.3.2 Check the backend logs

The mirrored backend does respond after receiving a request, and the response is sent back to the ingress gateway, but the gateway does not forward it to the client.
(screenshot of the backend logs omitted)

6.5.4 Clean up

[root@k8s-harbor01 http-request-mirror]# docker-compose  down

7. Other Configurable Route Management Mechanisms

The settings below are only used when Envoy is deployed on its own.

The route filter can additionally perform the following operations (a hedged sketch follows this list):
- metadata_match: endpoint metadata match conditions used by the subset load balancer;
- prefix_rewrite: prefix rewriting, i.e. rewriting the path of the downstream request to a different path when forwarding to the upstream host;
- host_rewrite: host header rewriting;
- auto_host_rewrite: automatic host header rewriting; only applies to clusters of type strict_dns or logical_dns;
- timeout: upstream timeout, 15s by default; # commonly used, important parameter
- idle_timeout: idle timeout of the route; no timeout if unspecified;
- retry_policy: retry policy, taking precedence over the virtual-host-level retry policy; # commonly used, important parameter
- cors: Cross-Origin Resource Sharing; # commonly used, important parameter
- priority: route priority;
- rate_limits: rate limiting;
- hash_policy: the hash policy table used for ring-hash load balancing when the upstream cluster uses a ring-hash algorithm; the hash is typically computed over a specified header, a cookie, or the source IP address of the request;
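
As a hedged sketch combining a few of these options on one route (the cluster and header names are placeholders; host_rewrite_literal is the v3 field name for a literal host rewrite), such a route action might look like:

route:
  cluster: demoappv11                      # placeholder upstream cluster
  prefix_rewrite: "/api"                   # rewrite the matched prefix before forwarding upstream
  host_rewrite_literal: "demoapp.internal" # rewrite the host header to a fixed value
  timeout: 5s                              # upstream request timeout for this route
  idle_timeout: 60s                        # per-route idle timeout
  hash_policy:                             # used with ring-hash/Maglev load balancing on the upstream cluster
  - header:
      header_name: "x-user-id"             # hash on this request header, e.g. to pin a user to an endpoint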