Setting up NFS
To keep configuration, data, and logs managed in one place, NFS serves as the cluster's shared storage. For detailed installation steps, see the companion article 《Docker Swarm集群 使用NFS共享存储》 (Docker Swarm cluster with NFS shared storage).
This walkthrough creates a three-node ZooKeeper ensemble. My NFS exports look like this:
```
[root@KD111111003041 ~]# nano /etc/exports
/root/kafkadata 111.111.3.0/24(rw,async,insecure,anonuid=1000,anongid=1000,no_root_squash)
/root/kafkadata/zoo1/data 111.111.3.0/24(rw,async,insecure,anonuid=1000,anongid=1000,no_root_squash)
/root/kafkadata/zoo1/datalog 111.111.3.0/24(rw,async,insecure,anonuid=1000,anongid=1000,no_root_squash)
/root/kafkadata/zoo2/data 111.111.3.0/24(rw,async,insecure,anonuid=1000,anongid=1000,no_root_squash)
/root/kafkadata/zoo2/datalog 111.111.3.0/24(rw,async,insecure,anonuid=1000,anongid=1000,no_root_squash)
/root/kafkadata/zoo3/data 111.111.3.0/24(rw,async,insecure,anonuid=1000,anongid=1000,no_root_squash)
/root/kafkadata/zoo3/datalog 111.111.3.0/24(rw,async,insecure,anonuid=1000,anongid=1000,no_root_squash)
/root/kafkadata/zoo1/conf 111.111.3.0/24(rw,async,insecure,anonuid=1000,anongid=1000,no_root_squash)
/root/kafkadata/zoo2/conf 111.111.3.0/24(rw,async,insecure,anonuid=1000,anongid=1000,no_root_squash)
/root/kafkadata/zoo3/conf 111.111.3.0/24(rw,async,insecure,anonuid=1000,anongid=1000,no_root_squash)
```
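All nine directories have to exist on the NFS server before the exports can take effect; a brace expansion is one way to create the whole tree in one shot:

```
# On the NFS server: create data/datalog/conf for all three ZooKeeper nodes
mkdir -p /root/kafkadata/zoo{1,2,3}/{data,datalog,conf}
```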
Apply the new exports:
```
exportfs -rv
```
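From any Swarm node you can confirm that all nine paths are actually visible before moving on (111.111.3.41 is the NFS server in this setup):

```
# List the exports the NFS server is offering
showmount -e 111.111.3.41
```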
Building the ZooKeeper cluster
Environment: a Docker Swarm cluster made up of seven machines
IP | Docker role |
---|---|
111.111.3.41 | Manager |
111.111.3.42 | Manager |
111.111.3.43 | Manager |
111.111.3.44 | Worker |
111.111.3.45 | Manager |
111.111.3.46 | Manager |
111.111.3.47 | Worker |
Creating the docker-compose file
Define three ZooKeeper services. Two things to watch out for:

1. The image used here is ZooKeeper 3.5+, whose ZOO_SERVERS format differs slightly from 3.4's. Most tutorials online were written for 3.4, and this pit cost me a long time.
2. The mapped data and datalog paths must be different directories, otherwise the cluster fails when it is restarted.
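For reference, here is the same three-node ensemble written in both formats; 3.5+ appends the client port after a semicolon:

```
# ZooKeeper 3.4.x style (no client port in the server list)
ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
# ZooKeeper 3.5+ style (client port after the semicolon), as used below
ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181
```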
```yaml
version: '3.8'
services:
  zoo1:
    image: zookeeper
    networks:
      - zookeeper
    hostname: zoo1
    ports: # host:container port mapping
      - 2181:2181
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=0.0.0.0:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=zoo3:2888:3888;2181
    volumes:
      - "zoo1conf:/conf"
      - "zoo1data:/data"
      - "zoo1datalog:/datalog"
    deploy:
      mode: replicated
      replicas: 1
  zoo2:
    image: zookeeper
    networks:
      - zookeeper
    hostname: zoo2
    ports: # host:container port mapping
      - 2182:2181
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=0.0.0.0:2888:3888;2181 server.3=zoo3:2888:3888;2181
    volumes:
      - "zoo2conf:/conf"
      - "zoo2data:/data"
      - "zoo2datalog:/datalog"
    deploy:
      mode: replicated
      replicas: 1
  zoo3:
    image: zookeeper
    networks:
      - zookeeper
    hostname: zoo3
    ports: # host:container port mapping
      - 2183:2181
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zoo1:2888:3888;2181 server.2=zoo2:2888:3888;2181 server.3=0.0.0.0:2888:3888;2181
    volumes:
      - "zoo3conf:/conf"
      - "zoo3data:/data"
      - "zoo3datalog:/datalog"
    deploy:
      mode: replicated
      replicas: 1
volumes:
  zoo1data:
    driver: local
    driver_opts:
      type: "nfs"
      o: "addr=111.111.3.41,rw"
      device: ":/root/kafkadata/zoo1/data"
  zoo1datalog:
    driver: local
    driver_opts:
      type: "nfs"
      o: "addr=111.111.3.41,rw"
      device: ":/root/kafkadata/zoo1/datalog"
  zoo1conf:
    driver: local
    driver_opts:
      type: "nfs"
      o: "addr=111.111.3.41,rw"
      device: ":/root/kafkadata/zoo1/conf"
  zoo2data:
    driver: local
    driver_opts:
      type: "nfs"
      o: "addr=111.111.3.41,rw"
      device: ":/root/kafkadata/zoo2/data"
  zoo2datalog:
    driver: local
    driver_opts:
      type: "nfs"
      o: "addr=111.111.3.41,rw"
      device: ":/root/kafkadata/zoo2/datalog"
  zoo2conf:
    driver: local
    driver_opts:
      type: "nfs"
      o: "addr=111.111.3.41,rw"
      device: ":/root/kafkadata/zoo2/conf"
  zoo3data:
    driver: local
    driver_opts:
      type: "nfs"
      o: "addr=111.111.3.41,rw"
      device: ":/root/kafkadata/zoo3/data"
  zoo3datalog:
    driver: local
    driver_opts:
      type: "nfs"
      o: "addr=111.111.3.41,rw"
      device: ":/root/kafkadata/zoo3/datalog"
  zoo3conf:
    driver: local
    driver_opts:
      type: "nfs"
      o: "addr=111.111.3.41,rw"
      device: ":/root/kafkadata/zoo3/conf"
networks:
  zookeeper:
    driver: overlay
```
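Each named volume above is just Docker's local driver mounting an NFS export. The zoo1data definition, for example, is equivalent to the hand-rolled command below (shown for illustration only; the stack creates the volumes for you):

```
# Manual equivalent of the zoo1data volume from the compose file
docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=111.111.3.41,rw \
  --opt device=:/root/kafkadata/zoo1/data \
  zoo1data
```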
Deploying and verifying the ZooKeeper cluster
```
[root@KD111111003041 ~]# docker stack deploy -c zookeeperCompose.yml zookeeper
Creating network zookeeper_zookeeper
Creating service zookeeper_zoo3
Creating service zookeeper_zoo1
Creating service zookeeper_zoo2
```
Check the running state:
```
[root@KD111111003041 ~]# docker service ls
ID             NAME             MODE         REPLICAS   IMAGE              PORTS
pyxs8uix8wwo   zookeeper_zoo1   replicated   1/1        zookeeper:latest   *:2181->2181/tcp
v5h6gckw4hmb   zookeeper_zoo2   replicated   1/1        zookeeper:latest   *:2182->2181/tcp
plim4a7s42yg   zookeeper_zoo3   replicated   1/1        zookeeper:latest   *:2183->2181/tcp
```
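Because Swarm's routing mesh publishes ports 2181-2183 on every node, a client can reach the whole ensemble through any single node's IP. The manager's address from this setup is used below purely as an example:

```
# Connect string spanning all three members via one node's published ports
zkCli.sh -server 111.111.3.41:2181,111.111.3.41:2182,111.111.3.41:2183
```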
Verify that the ensemble formed
Checking task placement (see the command below) shows the three services running on nodes 43, 46, and 47.
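One way to find the placement, from any manager:

```
# Show which node each replica was scheduled on
docker service ps zookeeper_zoo1 zookeeper_zoo2 zookeeper_zoo3
```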
Node 43 is the leader:
```
[root@KD111111003043 ~]# docker ps
CONTAINER ID   IMAGE              COMMAND                  CREATED          STATUS          PORTS                                    NAMES
71cc7f30e7a0   zookeeper:latest   "/docker-entrypoint.…"   29 minutes ago   Up 29 minutes   2181/tcp, 2888/tcp, 3888/tcp, 8080/tcp   zookeeper_zoo3.1.ilt743ba0u612n7t2zb55abk9
[root@KD111111003043 ~]# docker exec -it zookeeper_zoo3.1.ilt743ba0u612n7t2zb55abk9 zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: leader
```
Node 46:
```
[root@KD111111003046 ~]# docker ps
CONTAINER ID   IMAGE              COMMAND                  CREATED          STATUS          PORTS                                    NAMES
473354d4875e   zookeeper:latest   "/docker-entrypoint.…"   31 minutes ago   Up 31 minutes   2181/tcp, 2888/tcp, 3888/tcp, 8080/tcp   zookeeper_zoo1.1.q555a2ddd3ukptakt7l1l7mm9
[root@KD111111003046 ~]# docker exec -it zookeeper_zoo1.1.q555a2ddd3ukptakt7l1l7mm9 zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower
```
Node 47:
```
[root@KD111111003047 ~]# docker ps
CONTAINER ID   IMAGE              COMMAND                  CREATED          STATUS          PORTS                                    NAMES
a5a6bb6af2a1   zookeeper:latest   "/docker-entrypoint.…"   32 minutes ago   Up 32 minutes   2181/tcp, 2888/tcp, 3888/tcp, 8080/tcp   zookeeper_zoo2.1.lvlh910o36mio851ep0nhp2q2
620e7a9c9a31   nginx:latest       "/docker-entrypoint.…"   4 days ago       Up 4 days       80/tcp                                   app_nginx-test5.2.oxsyphx6qnvhfzcxnkeqq3ece
[root@KD111111003047 ~]# docker exec -it zookeeper_zoo2.1.lvlh910o36mio851ep0nhp2q2 zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower
```
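As a final sanity check, you can verify that data actually replicates across the ensemble: write a znode through one member and read it back through another. The container names below are taken from the output above; yours will differ:

```
# On node 46: create a test znode via zoo1
docker exec -it zookeeper_zoo1.1.q555a2ddd3ukptakt7l1l7mm9 zkCli.sh create /swarm-test "hello"
# On node 47: read it back via zoo2 — the value should come back from the other member
docker exec -it zookeeper_zoo2.1.lvlh910o36mio851ep0nhp2q2 zkCli.sh get /swarm-test
```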