Deploying a Consul Cluster with Docker Swarm

1. Deployment environment

IP             hostname    Deployed instance   Node labels
172.16.1.101   manager01   consul_server1      -
172.16.1.102   manager02   consul_server2      -
172.16.1.103   manager03   consul_server3      -
172.16.1.104   worker01    consul_client       -
172.16.1.105   worker02    consul_client       -
172.16.1.106   worker03    consul_client       -

2. Add labels to the relevant Swarm nodes so the Consul instances are scheduled onto the intended servers

# Run on any one of the manager nodes
docker node update --label-add consul.cluster=server1 manager01
docker node update --label-add consul.cluster=server2 manager02
docker node update --label-add consul.cluster=server3 manager03
docker node update --label-add consul.cluster=client worker01
docker node update --label-add consul.cluster=client worker02
docker node update --label-add consul.cluster=client worker03

After running the commands above, the node labels on each node are as follows:

IP             hostname    Deployed instance   Node labels
172.16.1.101   manager01   consul_server1      consul.cluster=server1
172.16.1.102   manager02   consul_server2      consul.cluster=server2
172.16.1.103   manager03   consul_server3      consul.cluster=server3
172.16.1.104   worker01    consul_client       consul.cluster=client
172.16.1.105   worker02    consul_client       consul.cluster=client
172.16.1.106   worker03    consul_client       consul.cluster=client
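
The labels can be confirmed from any manager node, for example:

# Print a node's labels as JSON
docker node inspect manager01 --format '{{ json .Spec.Labels }}'
# should print something like: {"consul.cluster":"server1"}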

Example: in the deploy section of the compose file, use placement constraints to pin the consul_client service to the worker01-03 nodes, like this:

deploy:
  placement:
    constraints:
      - node.labels.consul.cluster == client

2.1. Deployment architecture


  1. Consul clients and servers, as well as the servers among themselves, communicate over an overlay network;
  2. Consul Server1 publishes the UI service in ingress mode on port 8500, so the UI can be reached at any Swarm node's IP on port 8500;
  3. Consul clients and app services communicate in host mode (see the sketch after this list):

That is, each Consul client exposes port 8510 in host mode, and every app service reaches the Consul client at the IP of its own host on port 8510 for service registration and discovery.
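
For illustration, a minimal sketch of an app service that follows this pattern. The service name demo-app, its image, and the use of Swarm's {{.Node.Hostname}} environment template are assumptions for the example; it also presumes that node hostnames resolve to the node IPs from inside the containers:

version: '3.7'

services:
  demo-app:
    image: demo-app:latest            # hypothetical application image
    environment:
      # Swarm expands {{.Node.Hostname}} per task, so each replica points at
      # the Consul client published in host mode on its own node (port 8510)
      - CONSUL_HTTP_ADDR={{.Node.Hostname}}:8510
    deploy:
      placement:
        constraints:
          - node.labels.consul.cluster == client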

3. Create the data-volume directories for consul_server and consul_client

# Run the following on each of the six nodes (managers use /consul_server/data, workers use /consul_client/data)
mkdir -p /{consul_server,consul_client}/data/
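
Alternatively, if passwordless SSH to all six hosts is available, the directories can be created in one pass from a single machine; a sketch, assuming the hostnames from section 1 are resolvable:

for h in manager01 manager02 manager03 worker01 worker02 worker03; do
  ssh "$h" 'mkdir -p /{consul_server,consul_client}/data/'
done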

4. Deployment files

The deployment involves the following file:

  1. Compose file: consul-compose.yml

It defines all of the services that make up the Consul cluster.

4.1. Compose file: consul-compose.yml

version: '3.7'

services:
  consul-server1:
    image: consul:1.9.3
    container_name: consul-server1
    volumes:
      - /consul_server/data:/consul/data
    networks:
      - middle
    ports:
      - '8500:8500'
    deploy:
      mode: global
      placement:
        constraints:
          - node.labels.consul.cluster == server1
    command: >
      agent -server -ui
      -node=consul-server1
      -bootstrap-expect=3
      -client=0.0.0.0
      -data-dir=/consul/data
      -datacenter=dc1
      -bind '{{ GetPrivateInterfaces | include "network" "192.168.1.0/24" | attr "address" }}'

  consul-server2:
    image: consul:1.9.3
    container_name: consul-server2
    depends_on:
      - consul-server1
    volumes:
      - /consul_server/data:/consul/data
    networks:
      - middle
    deploy:
      mode: global
      placement:
        constraints:
          - node.labels.consul.cluster == server2
    command: >
      agent -server -ui
      -node=consul-server2
      -bootstrap-expect=3
      -client=0.0.0.0
      -retry-join=consul-server1
      -data-dir=/consul/data
      -datacenter=dc1
      -bind '{{ GetPrivateInterfaces | include "network" "192.168.1.0/24" | attr "address" }}'

  consul-server3:
    image: consul:1.9.3
    container_name: consul-server3
    depends_on:
      - consul-server1
    volumes:
      - /consul_server/data:/consul/data
    networks:
      - middle
    deploy:
      mode: global
      placement:
        constraints:
          - node.labels.consul.cluster == server3
    command: >
      agent -server -ui
      -node=consul-server3
      -bootstrap-expect=3
      -client=0.0.0.0
      -retry-join=consul-server1
      -data-dir=/consul/data
      -datacenter=dc1
      -bind '{{ GetPrivateInterfaces | include "network" "192.168.1.0/24" | attr "address" }}'

  consul-client:
    image: consul:1.9.3
    networks:
      - middle
    depends_on:
      - consul-server1
      - consul-server2
      - consul-server3
    ports:
      - target: 8500
        published: 8510
        protocol: tcp
        mode: host
    volumes:
      - /consul_client/data:/consul/data
      - /var/run/docker.sock:/var/run/docker.sock
      - /usr/bin/docker:/usr/bin/docker
    deploy:
      mode: global
      placement:
        constraints:
          - node.labels.consul.cluster == client
    # -client=0.0.0.0 is added so the HTTP API listens on all interfaces;
    # without it the host-published port 8510 cannot reach the agent
    entrypoint: >
      sh -c "docker-entrypoint.sh agent
      -client=0.0.0.0
      -node=consul_client_`docker info --format {{.Name}}`
      -retry-join=consul-server1
      -retry-join=consul-server2
      -retry-join=consul-server3
      -data-dir=/consul/data
      -datacenter=dc1
      -bind '{{ GetPrivateInterfaces | include \"network\" \"192.168.1.0/24\" | attr \"address\" }}'"

networks:
  middle:
    driver: overlay
    ipam:
      driver: default
      config:
        - subnet: "192.168.1.0/24"

Notes on the script:

  1. In a multi-NIC setup, bind to the IP on the specified subnet:
-bind '{{ GetPrivateInterfaces | include "network" "192.168.1.0/24" | attr "address" }}'
  2. Obtain the hostname of the host machine:
-node=consul_client_`docker info --format {{.Name}}`

The end result is, for example, -node=consul_client_worker01.
Using the docker info command inside the container requires the following two bind mounts:

/var/run/docker.sock:/var/run/docker.sock
/usr/bin/docker:/usr/bin/docker
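
To see what the back-quoted substitution expands to, run it directly on a node; on worker01, for example, it should print the hostname that ends up in the node name:

docker info --format '{{.Name}}'
# -> worker01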

5. Deployment

There are two ways to deploy the compose file: via the command line, or via Portainer.

5.1. Deploy via the command line

Log in to any manager node and place the consul-compose.yml file in a directory of your choice, e.g. /usr/local/deploy/consul/:

cd /usr/local/deploy/consul/
ls ./
consul-compose.yml

Then run the docker stack deploy command:

cd /usr/local/deploy/consul/
docker stack deploy -c consul-compose.yml consul
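
Once the stack is up, the services and the cluster state can be checked as below; the IP is just the manager01 address from section 1:

# Each service should report 1/1 replicas
docker stack services consul

# The Raft peer list should contain the three server nodes
curl http://172.16.1.101:8500/v1/status/peers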

5.2. Deploy via Portainer

  1. Click the Stacks menu, then the Add Stack button; enter a name for the stack (e.g. consul) in the Name field, choose Web editor under Build method, and paste the contents of consul-compose.yml into the web editor;
  2. Click the Deploy the stack button at the bottom of the page.