ZooKeeper elects its leader by majority vote (a quorum of more than half the ensemble), so an odd number of nodes is recommended: an even node adds votes but no extra fault tolerance.
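The quorum arithmetic behind that recommendation can be sketched in a few lines of shell: a quorum is floor(N/2)+1 votes, so a 4-node ensemble tolerates no more failures than a 3-node one.

```shell
# Quorum for an N-node ensemble is floor(N/2) + 1 votes;
# the ensemble survives N - quorum(N) node failures.
quorum()    { echo $(( $1 / 2 + 1 )); }
tolerated() { echo $(( $1 - ($1 / 2 + 1) )); }

for n in 3 4 5; do
  echo "$n nodes: quorum $(quorum $n), tolerates $(tolerated $n) failure(s)"
done
# 3 and 4 nodes both tolerate only 1 failure; 5 nodes tolerate 2.
```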
1 Compose file
version: '3.7'
services:
  zoo1:
    image: zookeeper:3.8.0
    restart: always
    hostname: zoo1
    container_name: zookeeper-cluster-1
    ports:
      - 12181:2181
      - 18080:8080
    volumes:
      - "/home/data/cluster/zookeeper/zookeeper-1/data:/data"
      - "/home/data/cluster/zookeeper/zookeeper-1/datalog:/datalog"
      - "/home/data/cluster/zookeeper/zookeeper-1/logs:/logs"
    environment:
      ZOO_MY_ID: 1
      ALLOW_ANONYMOUS_LOGIN: "yes"
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181
      ZOO_4LW_COMMANDS_WHITELIST: "*"
    networks:
      brzk-kafka:
        ipv4_address: 172.19.0.11
  zoo2:
    image: zookeeper:3.8.0
    restart: always
    hostname: zoo2
    container_name: zookeeper-cluster-2
    ports:
      - 22181:2181
      - 28080:8080
    volumes:
      - "/home/data/cluster/zookeeper/zookeeper-2/data:/data"
      - "/home/data/cluster/zookeeper/zookeeper-2/datalog:/datalog"
      - "/home/data/cluster/zookeeper/zookeeper-2/logs:/logs"
    environment:
      ZOO_MY_ID: 2
      ALLOW_ANONYMOUS_LOGIN: "yes"
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181
      ZOO_4LW_COMMANDS_WHITELIST: "*"
    networks:
      brzk-kafka:
        ipv4_address: 172.19.0.12
  zoo3:
    image: zookeeper:3.8.0
    restart: always
    hostname: zoo3
    container_name: zookeeper-cluster-3
    ports:
      - 32181:2181
      - 38080:8080
    volumes:
      - "/home/data/cluster/zookeeper/zookeeper-3/data:/data"
      - "/home/data/cluster/zookeeper/zookeeper-3/datalog:/datalog"
      - "/home/data/cluster/zookeeper/zookeeper-3/logs:/logs"
    environment:
      ZOO_MY_ID: 3
      ALLOW_ANONYMOUS_LOGIN: "yes"
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181
      ZOO_4LW_COMMANDS_WHITELIST: "*"
    networks:
      brzk-kafka:
        ipv4_address: 172.19.0.13
networks:
  brzk-kafka:
    ipam:
      driver: default
      config:
        - subnet: "172.19.0.0/24"
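Before the first start, it helps to pre-create the host directories referenced by the volumes entries, so they are not created root-owned by the Docker daemon. A minimal sketch, assuming the paths above; ZK_BASE is only a convenience variable, not part of the compose file.

```shell
# Pre-create the bind-mount directories from the volumes: entries.
# ZK_BASE is a convenience variable (assumption), matching the compose file paths.
ZK_BASE="${ZK_BASE:-/home/data/cluster/zookeeper}"
mkdir -p "$ZK_BASE"/zookeeper-{1,2,3}/{data,datalog,logs}
ls -d "$ZK_BASE"/zookeeper-1/*
```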
2 Run
docker-compose --project-name myzkcomposeprj up -d
3 Inspect the cluster
Run the client inside one of the containers:
docker exec --interactive --tty zookeeper-cluster-1 bin/zkCli.sh -server :2181 config | grep ^server
[hostuser@host-machine]$ docker exec --interactive --tty zookeeper-cluster-3 bin/zkCli.sh -server :2181 config | grep ^server
server.1=zoo1:2888:3888:participant;0.0.0.0:2181
server.2=zoo2:2888:3888:participant;0.0.0.0:2181
server.3=zoo3:2888:3888:participant;0.0.0.0:2181
Check each node's role:
docker exec zookeeper-cluster-2 bash -c 'echo srvr | nc localhost 2181' | grep "Mode"
[hostuser@host-machine]$ docker exec zookeeper-cluster-1 bash -c 'echo "srvr" | nc localhost 2181' | grep "Mode"
Mode: follower
[hostuser@host-machine]$ docker exec zookeeper-cluster-2 bash -c 'echo "srvr" | nc localhost 2181' | grep "Mode"
Mode: follower
[hostuser@host-machine]$ docker exec zookeeper-cluster-3 bash -c 'echo "srvr" | nc localhost 2181' | grep "Mode"
Mode: leader
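Pulling the role out of the srvr reply is a one-line filter. The sketch below runs the filter against a captured reply (fields taken from the outputs above) so it can be tried without a running cluster; on a live cluster, pipe the docker exec ... nc command through the same awk.

```shell
# Sample `srvr` reply (trimmed, from the outputs shown above);
# on a live node, pipe `echo srvr | nc localhost 2181` through the same awk.
srvr_reply='Zookeeper version: 3.8.0-5a02a05eddb59aee6ac762f7ea82e92a68eb9c0f
Latency min/avg/max: 1/8.5/25
Mode: follower
Node count: 5'

printf '%s\n' "$srvr_reply" | awk -F': ' '/^Mode/ {print $2}'   # → follower
```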
You can also query from the host via the mapped ports:
echo "srvr" | nc localhost 32181
[hostuser@host-machine]$ echo "stat" | nc localhost 32181
Zookeeper version: 3.8.0-5a02a05eddb59aee6ac762f7ea82e92a68eb9c0f, built on 2022-02-25 08:49 UTC
Clients:
/172.19.0.1:36848[0](queued=0,recved=1,sent=0)
Latency min/avg/max: 1/8.5/25
Received: 7
Sent: 6
Connections: 1
Outstanding: 0
Zxid: 0x10000000e
Mode: leader
Node count: 5
Proposal sizes last/min/max: 48/48/48
You can also check with bin/zkServer.sh status inside a container:
[hostuser@host-machine]$ docker exec zookeeper-cluster-3 bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: leader
With ZOO_4LW_COMMANDS_WHITELIST: "*" set in the environment, the stat four-letter-word command works as well:
docker exec zookeeper-cluster-2 bash -c 'echo "stat" | nc localhost 2181' | grep "Mode"
[hostuser@host-machine]$ docker exec zookeeper-cluster-2 bash -c 'echo "stat" | nc localhost 2181' | grep "Mode"
Mode: follower
A newer method is the AdminServer HTTP interface:
http://localhost:8080/commands/stat
root@zoo2:/apache-zookeeper-3.8.0-bin# wget --quiet --output-document=/dev/stdout http://localhost:8080/commands/stat | grep "server_state"
"server_state" : "follower",
Or from the host, via the mapped port:
[hostuser@host-machine]$ curl --silent http://localhost:28080/commands/stat | grep "server_state"
"server_state" : "follower",
4 Entering the cluster with zkCli.sh
Enter any container node, e.g. docker exec --interactive --tty zookeeper-cluster-2 bash, then use the ZooKeeper client to connect to a different node: zkCli.sh -server zoo3:2181
root@zoo2:/apache-zookeeper-3.8.0-bin# zkCli.sh -server zoo3:2181
Connecting to zoo3:2181
...
Welcome to ZooKeeper!
JLine support is enabled
2022-05-21 16:38:14,563 [myid:zoo3:2181] - INFO [main-SendThread(zoo3:2181):o.a.z.ClientCnxn$SendThread@1171] - Opening socket connection to server zoo3/172.19.0.13:2181.
...
[zk: zoo3:2181(CONNECTED) 0] create /test hello
Created /test
This creates the znode /test on that node with the associated string hello.
[zk: zoo3:2181(CONNECTED) 0] ls /
[test, zookeeper]
[zk: zoo3:2181(CONNECTED) 1] get /test
hello
[zk: zoo3:2181(CONNECTED) 2] quit
Connect to another node to verify:
root@zoo2:/apache-zookeeper-3.8.0-bin# zkCli.sh -server zoo1:2181
...
[zk: zoo1:2181(CONNECTED) 0] get /test
hello
[zk: zoo1:2181(CONNECTED) 1] ls /
[test, zookeeper]
The znode created via zoo3 is visible here as well.
5 Java API
Ref
why-pipeline-content-to-command-nc-wont-work stackoverflow
what-command-could-be-issued-to-check-whether-a-zookeeper-server-is-a-leader serverfault
zookeeper AdminServer Doc
zookeeper_api
ZooKeeper-API-Java-Examples-Watcher
Official Java Example
Using docker compose to build zookeeper cluster
ZooKeeper cluster with Docker Compose
setting-up-an-apache-zookeeper-cluster-in-docker
bitnami zookeeper 3.8.0 docker-compose-cluster.yml