Deploying a ZooKeeper cluster and a Kafka cluster across multiple hosts with docker-compose

一、Deploy the ZooKeeper cluster

1. Create the docker-compose files

Host 1:

version: '3.4'
 
services:
  zookeeper:
    image: harbor-test.aitdcoin.com/sgpexchange/zookeeper:v3.4.13
    restart: always
    hostname: zookeeper
    container_name: zookeeper
    ports:
    - 2181:2181
    - 2888:2888
    - 3888:3888
    volumes:
    - "./data:/opt/zookeeper-3.4.13/data"
    - "./conf/zoo.cfg:/opt/zookeeper-3.4.13/conf/zoo.cfg"
    environment:
      TZ: Asia/Shanghai
    networks:
      - zookeeper-net
 
networks:
  zookeeper-net:
    driver: bridge

Host 2:

version: '3.4'
 
services:
  zookeeper:
    image: harbor-test.aitdcoin.com/sgpexchange/zookeeper:v3.4.13
    restart: always
    hostname: zookeeper
    container_name: zookeeper
    ports:
    - 2181:2181
    - 2888:2888
    - 3888:3888
    volumes:
    - "./data:/opt/zookeeper-3.4.13/data"
    - "./conf/zoo.cfg:/opt/zookeeper-3.4.13/conf/zoo.cfg"
    environment:
      TZ: Asia/Shanghai
    networks:
      - zookeeper-net
 
networks:
  zookeeper-net:
    driver: bridge

Host 3:

version: '3.4'
 
services:
  zookeeper:
    image: harbor-test.aitdcoin.com/sgpexchange/zookeeper:v3.4.13
    restart: always
    hostname: zookeeper
    container_name: zookeeper
    ports:
    - 2181:2181
    - 2888:2888
    - 3888:3888
    volumes:
    - "./data:/opt/zookeeper-3.4.13/data"
    - "./conf/zoo.cfg:/opt/zookeeper-3.4.13/conf/zoo.cfg"
    environment:
      #ZOO_MY_ID: 3 
      #ZOO_SERVERS: server.1=192.168.8.51:2888:3888 server.2=192.168.8.61:2888:3888 server.3=0.0.0.0:2888:3888
      TZ: Asia/Shanghai
    networks:
      - zookeeper-net
 
networks:
  zookeeper-net:
    driver: bridge

All three hosts need this compose file. You can choose the hostname yourself, but make sure the IP addresses and the myid value are updated on each host.

2. Edit the configuration file

[root@sgpexchangeintermediate-192-168-8-62 zookeeper]# vim conf/zoo.cfg 

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=/opt/zookeeper-3.4.13/data
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
autopurge.purgeInterval=1

minSessionTimeout=16000
maxSessionTimeout=30000
server.1=192.168.8.51:2888:3888
server.2=192.168.8.61:2888:3888
server.3=0.0.0.0:2888:3888

Update the myid file (each host gets its own id):

[root@sgpexchangeintermediate-192-168-8-62 zookeeper]# cat data/myid 
3
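The three hosts share an identical zoo.cfg except for one detail: each host's own server.N entry binds to 0.0.0.0, and its data/myid must contain that host's id. The following is a minimal sketch of how those per-host values could be generated; gen_myid and gen_servers are hypothetical helpers, not part of the ZooKeeper distribution, and the IPs are the ones from the zoo.cfg above.

```shell
#!/bin/sh
# Ensemble members, taken from the zoo.cfg above.
ZK1=192.168.8.51; ZK2=192.168.8.61; ZK3=192.168.8.62

gen_myid() {            # $1 = this host's id (1..3)
    echo "$1"           # contents of data/myid
}

gen_servers() {         # $1 = this host's id; its own entry binds to 0.0.0.0
    id=$1
    i=1
    for ip in "$ZK1" "$ZK2" "$ZK3"; do
        [ "$i" -eq "$id" ] && ip=0.0.0.0
        echo "server.$i=$ip:2888:3888"
        i=$((i + 1))
    done
}

# Example: host three (192.168.8.62)
gen_myid 3
gen_servers 3
```

Redirect the two functions into data/myid and the tail of conf/zoo.cfg on each host instead of editing the files by hand.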

3. Start ZooKeeper on each node

[root@sgpexchangemysql-192-168-8-61 zookeeper]# docker-compose up -d
Creating network "zookeeper_zookeeper-net" with driver "bridge"
Creating zookeeper ... done

4. Test cluster connectivity

# Connect from any one host and create a test znode
[root@sgpexchangeintermediate-192-168-8-62 zookeeper]# docker exec -it zookeeper /bin/bash
root@zookeeper3:/opt/zookeeper-3.4.13# cd bin/
root@zookeeper3:/opt/zookeeper-3.4.13/bin# ./zkCli.sh -server 192.168.8.51:2181
Connecting to 192.168.8.51:2181


WatchedEvent state:SyncConnected type:None path:null
[zk: 192.168.8.51:2181(CONNECTED) 0] create /project zookeeper
Node already exists: /project
[zk: 192.168.8.51:2181(CONNECTED) 1] get /project
zookeeper_project
cZxid = 0x9
ctime = Thu May 13 11:24:46 UTC 2021
mZxid = 0x9
mtime = Thu May 13 11:24:46 UTC 2021
pZxid = 0x9
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 17
numChildren = 0
# Query the znode from the second node
[zk: 192.168.8.51:2181(CONNECTED) 2] quit
root@zookeeper3:/opt/zookeeper-3.4.13/bin# ./zkCli.sh -server 192.168.8.61:2181
Connecting to 192.168.8.61:2181

[zk: 192.168.8.61:2181(CONNECTED) 1] get /project
zookeeper_project
cZxid = 0x2
ctime = Thu May 13 11:14:34 UTC 2021
mZxid = 0x2
mtime = Thu May 13 11:14:34 UTC 2021
pZxid = 0x2
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 17
numChildren = 0
[zk: 192.168.8.61:2181(CONNECTED) 2] quit
root@zookeeper3:/opt/zookeeper-3.4.13/bin# ./zkCli.sh -server 192.168.8.62:2181
Connecting to 192.168.8.62:2181

# Query the new znode from the third node
[zk: 192.168.8.62:2181(CONNECTED) 0] get /project
zookeeper_project
cZxid = 0xb
ctime = Thu May 13 11:26:47 UTC 2021
mZxid = 0xb
mtime = Thu May 13 11:26:47 UTC 2021
pZxid = 0xb
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 17
numChildren = 0
[zk: 192.168.8.62:2181(CONNECTED) 1] 

Alternatively, check the status of each ZooKeeper node:

[root@sgpexchangeintermediate-192-168-8-62 zookeeper]# docker exec -it zookeeper /bin/bash
root@zookeeper:/opt/zookeeper-3.4.13# cd bin/
root@zookeeper:/opt/zookeeper-3.4.13/bin# ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper-3.4.13/bin/../conf/zoo.cfg
Mode: leader

[root@sgpexchangemysql-192-168-8-61 zookeeper]# docker exec -it zookeeper /bin/bash
root@zookeeper:/opt/zookeeper-3.4.13# cd bin/
root@zookeeper:/opt/zookeeper-3.4.13/bin# ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper-3.4.13/bin/../conf/zoo.cfg
Mode: follower

[root@sgpexchangenginx-192-168-8-51 zookeeper]# docker exec -it zookeeper /bin/bash
root@zookeeper:/opt/zookeeper-3.4.13# cd bin/
root@zookeeper:/opt/zookeeper-3.4.13/bin# ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper-3.4.13/bin/../conf/zoo.cfg
Mode: follower
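The per-node checks above can also be done in one pass from any machine that can reach the ensemble, using ZooKeeper's standard stat four-letter command (enabled by default in 3.4.x). A sketch, assuming nc (netcat) is installed on the machine running it; unreachable nodes are reported with a placeholder:

```shell
#!/bin/sh
# Poll every ensemble member's role via the ZooKeeper 'stat' command.
ZK_HOSTS="192.168.8.51 192.168.8.61 192.168.8.62"

check_modes() {
    for ip in $ZK_HOSTS; do
        # Extract the "Mode:" line; fall back when the node cannot be reached.
        mode=$(echo stat | nc -w 1 "$ip" 2181 2>/dev/null | grep '^Mode:') || mode="Mode: unreachable"
        echo "$ip -> $mode"
    done
}

check_modes
```

On a healthy ensemble one node reports Mode: leader and the other two Mode: follower, matching the zkServer.sh output above.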

二、Deploy the Kafka cluster

1. Write the docker-compose files

Host 1:

[root@sgpexchangeinetermediate-192-168-8-63 kafka]# vim docker-compose.yml 

version: '2'

services:
  kafka1:
    image: harbor-test.sgpexchange.com/sgpexchange/kafka:v2.7.0
    restart: always
    hostname: kafka
    container_name: kafka
    ports:
    - 9092:9092
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_ZOOKEEPER_CONNECT: 192.168.8.51:2181,192.168.8.61:2181,192.168.8.62:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.8.63:9092
      TZ: Asia/Shanghai
    volumes:
    - ./logs:/kafka
    #external_links:
    #- 192.168.8.51
    #- 192.168.8.61
    #- 192.168.8.61
    networks:
      kafka-net:

networks:
  kafka-net:
    driver: bridge

Host 2:

[root@sgpexchangeintermediate-192-168-8-62 kafka]# vim docker-compose.yml 

version: '2'

services:
  kafka1:
    image: harbor-test.sgpexchange.com/sgpexchange/kafka:v2.7.0
    restart: always
    hostname: kafka
    container_name: kafka
    ports:
    - 9092:9092
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_ZOOKEEPER_CONNECT: 192.168.8.51:2181,192.168.8.61:2181,192.168.8.62:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.8.62:9092
      TZ: Asia/Shanghai
    volumes:
    - ./logs:/kafka
    #external_links:
    #- 192.168.8.63
    #- 192.168.8.61
    #- 192.168.8.61
    networks:
      kafka-net:

networks:
  kafka-net:
    driver: bridge

Host 3:

[root@sgpexchangenginx-192-168-8-61 kafka]# vim docker-compose.yml 

version: '2'

services:
  kafka1:
    image: harbor-test.sgpexchange.com/sgpexchange/kafka:v2.7.0
    restart: always
    hostname: kafka
    container_name: kafka
    ports:
    - 9092:9092
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_ZOOKEEPER_CONNECT: 192.168.8.51:2181,192.168.8.61:2181,192.168.8.62:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.8.61:9092
      TZ: Asia/Shanghai
    volumes:
    - ./logs:/kafka
    #external_links:
    #- 192.168.8.51
    #- 192.168.8.61
    #- 192.168.8.61
    networks:
      kafka-net:

networks:
  kafka-net:
    driver: bridge
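Note that the three compose files differ only in KAFKA_ADVERTISED_LISTENERS, which must advertise each broker's own host IP. A sketch of that per-host templating; kafka_env is a hypothetical helper, and the ZooKeeper connect string here follows the ensemble defined in zoo.cfg in section one:

```shell
#!/bin/sh
# Print the environment block for a given Kafka host; only the advertised
# listener changes from host to host.
kafka_env() {
    host_ip=$1
    cat <<EOF
KAFKA_ZOOKEEPER_CONNECT: 192.168.8.51:2181,192.168.8.61:2181,192.168.8.62:2181
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://${host_ip}:9092
TZ: Asia/Shanghai
EOF
}

kafka_env 192.168.8.62   # environment block for host two
```

Pasting each broker's own IP into the advertised listener matters: clients resolve brokers through these advertised addresses, so two brokers advertising the same IP would shadow each other.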

2. Start Kafka on each node

[root@sgpexchangenginx-192-168-8-61 kafka]# docker-compose up -d
Creating network "kafka_kafka-net" with driver "bridge"
Pulling kafka1 (harbor-test.aitdcoin.com/sgpexchange/kafka:v2.7.0)...
v2.7.0: Pulling from sgpexchange/kafka
e7c96db7181b: Pull complete
f910a506b6cb: Pull complete
b6abafe80f63: Pull complete
fc8281eb3951: Pull complete
d50ae6888ae4: Pull complete
e59aa960c952: Pull complete
Digest: sha256:da7dd66b9e6429ac122a07a259d34c7c43f47bf3fb933b1b5ff1e46f676b6941
Status: Downloaded newer image for harbor-test.aitdcoin.com/sgpexchange/kafka:v2.7.0
Creating kafka ... done
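Once all brokers are up, the cluster can be smoke-tested by creating a topic replicated across all three brokers. A sketch, assuming the image puts the Kafka CLI scripts (kafka-topics.sh) on PATH inside the container, as common Kafka images do; the topic name cluster-test is illustrative. Run it on any one of the Kafka hosts:

```shell
#!/bin/sh
# Create a topic with replication factor 3 and confirm every partition has
# three replicas; failures fall through to a message instead of aborting.
smoke_test() {
    if ! command -v docker >/dev/null 2>&1; then
        echo "docker not available here; run on a Kafka host"
        return 0
    fi
    docker exec kafka kafka-topics.sh --create \
        --bootstrap-server 192.168.8.63:9092 \
        --replication-factor 3 --partitions 3 --topic cluster-test \
        || echo "kafka container not reachable"
    # Each partition should list three replicas with an in-sync leader.
    docker exec kafka kafka-topics.sh --describe \
        --bootstrap-server 192.168.8.63:9092 --topic cluster-test \
        || true
}

smoke_test
```

If topic creation fails with fewer than three available replicas, one of the brokers is not registered in ZooKeeper; check its logs and its KAFKA_ZOOKEEPER_CONNECT value.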

 

Author: yunson_Liu