Preface
Everything below is based on my own testing and was fully working as of May 2020.
How Kafka works: https://blog.csdn.net/qq_34168515/article/details/106433825
Prerequisites: a virtual machine with more than 4 GB of RAM, running CentOS 7, is recommended.
It is also recommended to switch both the yum repository and the Docker registry mirror to Aliyun, otherwise downloads will be painfully slow (for reasons you can guess).
Below are the addresses of the three-node ZooKeeper cluster and three-node Kafka cluster set up in this article:
hostname | ip addr | port (host:container) | listener
---|---|---|---
zoo1 | 172.19.0.11 | 2184:2181 | —
zoo2 | 172.19.0.12 | 2185:2181 | —
zoo3 | 172.19.0.13 | 2186:2181 | —
kafka1 | 172.19.0.14 | 9092:9092 | kafka1
kafka2 | 172.19.0.15 | 9093:9092 | kafka2
kafka3 | 172.19.0.16 | 9094:9092 | kafka3
host machine | 192.168.102.137 | — | —
1. Install Docker
Omitted here; there are plenty of guides online. Just remember to switch both the yum repository and the Docker registry mirror to Aliyun.
1.1 Create a Docker network
docker network create --driver bridge --subnet 172.19.0.0/25 --gateway 172.19.0.1 br200530
This network will be shared by the ZooKeeper cluster and the Kafka cluster.
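To double-check the subnet and gateway, you can inspect the network (the IPAM section of the output should show "Subnet": "172.19.0.0/25" and "Gateway": "172.19.0.1"):

# Show the full configuration of the bridge network created above
docker network inspect br200530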
1.2 List Docker networks
[root@localhost docker-kafka]# docker network list
NETWORK ID NAME DRIVER SCOPE
386116bb9b09 br200530 bridge local
7c38e96cb246 bridge bridge local
bf929b7394aa host host local
199262ea3eba none null local
2. ZooKeeper Cluster Setup
Note: the latest ZooKeeper release is 3.6, but 3.4 is recommended here. With 3.6 you have to configure the client port yourself, which is a hassle; 3.4 listens on 2181 out of the box, which is more convenient.
First, pull the ZooKeeper image:
docker pull zookeeper:3.4.10
2.1 After installing Docker, you also need to install docker-compose
Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application's services; then a single command creates and starts all of the configured services.
Installing Docker Compose
Docker Compose is hosted on GitHub, which can be slow or unreliable to reach.
You can instead install Docker Compose quickly by running the following commands (via the DaoCloud mirror):
curl -L https://get.daocloud.io/docker/compose/releases/download/1.25.5/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
[root@localhost bin]# docker-compose -v
docker-compose version 1.25.5, build 8a1c60f6
2.2 Create or recreate the cluster from a YAML file with docker-compose up -d
Explanation:
1. The host directory /data/cancan/Development/volume/zkcluster/zoo1/data is mounted into the container at /data (the host directories can be pre-created, as sketched below).
2. The host directory /data/cancan/Development/volume/zkcluster/zoo1/datalog is mounted into the container at /datalog.
3. Docker needs permission to operate on host files, hence privileged: true.
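Before the first docker-compose up, the host directories for the bind mounts can be created up front (Docker will usually create missing bind-mount directories itself, so this is only a precaution against ownership and permission surprises):

# Create data and datalog directories for all three ZooKeeper nodes
mkdir -p /data/cancan/Development/volume/zkcluster/zoo{1,2,3}/{data,datalog}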
docker-compose.yml for the ZooKeeper cluster:
version: '3.4'  # compose file format version (not the ZooKeeper version)
services:
  zoo1:
    image: zookeeper:3.4.10
    restart: always
    hostname: zoo1
    container_name: zoo1
    privileged: true
    ports:
      - 2184:2181
    volumes:
      - "/data/cancan/Development/volume/zkcluster/zoo1/data:/data"
      - "/data/cancan/Development/volume/zkcluster/zoo1/datalog:/datalog"
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
    networks:
      br200530:
        ipv4_address: 172.19.0.11
  zoo2:
    image: zookeeper:3.4.10
    restart: always
    hostname: zoo2
    container_name: zoo2
    privileged: true
    ports:
      - 2185:2181
    volumes:
      - "/data/cancan/Development/volume/zkcluster/zoo2/data:/data"
      - "/data/cancan/Development/volume/zkcluster/zoo2/datalog:/datalog"
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
    networks:
      br200530:
        ipv4_address: 172.19.0.12
  zoo3:
    image: zookeeper:3.4.10
    restart: always
    hostname: zoo3
    container_name: zoo3
    privileged: true
    ports:
      - 2186:2181
    volumes:
      - "/data/cancan/Development/volume/zkcluster/zoo3/data:/data"
      - "/data/cancan/Development/volume/zkcluster/zoo3/datalog:/datalog"
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
    networks:
      br200530:
        ipv4_address: 172.19.0.13
networks:
  br200530:
    external:
      name: br200530
2.3 Run the following commands from the directory containing docker-compose.yml
- Create the ZooKeeper cluster
[root@localhost docker-zk]# docker-compose up -d
Creating zoo2 ... done
Creating zoo1 ... done
Creating zoo3 ... done
- Check status
[root@localhost docker-zk]# docker-compose ps
Name Command State Ports
------------------------------------------------------------------------------------------
zoo1 /docker-entrypoint.sh zkSe ... Up 0.0.0.0:2184->2181/tcp, 2888/tcp, 3888/tcp
zoo2 /docker-entrypoint.sh zkSe ... Up 0.0.0.0:2185->2181/tcp, 2888/tcp, 3888/tcp
zoo3 /docker-entrypoint.sh zkSe ... Up 0.0.0.0:2186->2181/tcp, 2888/tcp, 3888/tcp
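At this point you can verify that the ensemble has elected a leader. A quick sanity check, assuming the official zookeeper:3.4.10 image keeps zkServer.sh on the PATH (repeat for zoo2 and zoo3; exactly one node should report Mode: leader, the others Mode: follower):

# Ask the zoo1 node for its role in the ensemble
docker exec -it zoo1 zkServer.sh status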
- Stop (avoid unless necessary)
[root@localhost docker-zk]# docker-compose stop
Stopping zoo3 ... done
Stopping zoo1 ... done
Stopping zoo2 ... done
- Remove (avoid unless necessary)
[root@localhost docker-zk]# docker-compose rm
Going to remove zoo3, zoo1, zoo2
Are you sure? [yN] y
Removing zoo3 ... done
Removing zoo1 ... done
Removing zoo2 ... done
3. Kafka Cluster Setup
3.1 Create the Kafka docker-compose.yml
docker-compose.yml for the Kafka cluster:
version: '2.1'  # compose file format version (not the Kafka version)
services:
  kafka1:
    image: wurstmeister/kafka
    restart: always
    hostname: kafka1
    container_name: kafka1
    privileged: true
    ports:
      - 9092:9092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_LISTENERS: PLAINTEXT://kafka1:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.102.137:9092
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181,zoo2:2181,zoo3:2181
      KAFKA_DEFAULT_REPLICATION_FACTOR: 3
      KAFKA_NUM_PARTITIONS: 3
      JMX_PORT: 9988
    volumes:
      - /data/cancan/Development/volume/kfkcluster/kafka1/logs:/kafka
    external_links:
      - zoo1
      - zoo2
      - zoo3
    networks:
      br200530:
        ipv4_address: 172.19.0.14
  kafka2:
    image: wurstmeister/kafka
    restart: always
    hostname: kafka2
    container_name: kafka2
    privileged: true
    ports:
      - 9093:9092
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_LISTENERS: PLAINTEXT://kafka2:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.102.137:9093
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181,zoo2:2181,zoo3:2181
      KAFKA_DEFAULT_REPLICATION_FACTOR: 3
      KAFKA_NUM_PARTITIONS: 3
      JMX_PORT: 9988
    volumes:
      - /data/cancan/Development/volume/kfkcluster/kafka2/logs:/kafka
    external_links:
      - zoo1
      - zoo2
      - zoo3
    networks:
      br200530:
        ipv4_address: 172.19.0.15
  kafka3:
    image: wurstmeister/kafka
    restart: always
    hostname: kafka3
    container_name: kafka3
    privileged: true
    ports:
      - 9094:9092
    environment:
      KAFKA_BROKER_ID: 3
      KAFKA_LISTENERS: PLAINTEXT://kafka3:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.102.137:9094
      KAFKA_ZOOKEEPER_CONNECT: zoo1:2181,zoo2:2181,zoo3:2181
      KAFKA_DEFAULT_REPLICATION_FACTOR: 3
      KAFKA_NUM_PARTITIONS: 3
      JMX_PORT: 9988
    volumes:
      - /data/cancan/Development/volume/kfkcluster/kafka3/logs:/kafka
    external_links:
      - zoo1
      - zoo2
      - zoo3
    networks:
      br200530:
        ipv4_address: 172.19.0.16
  kafka-manager:
    image: sheepkiller/kafka-manager:latest
    restart: always
    container_name: kafka-manager
    hostname: kafka-manager
    ports:
      - "9000:9000"
    links: # link to containers created by this compose file
      - kafka1
      - kafka2
      - kafka3
    external_links: # link to containers outside this compose file
      - zoo1
      - zoo2
      - zoo3
    environment:
      ZK_HOSTS: zoo1:2181,zoo2:2181,zoo3:2181
      KAFKA_BROKERS: kafka1:9092,kafka2:9092,kafka3:9092
      APPLICATION_SECRET: letmein
      KM_ARGS: -Djava.net.preferIPv4Stack=true
    networks:
      br200530:
        ipv4_address: 172.19.0.20
networks:
  br200530:
    external:
      name: br200530
3.2 Run the following commands from the directory containing docker-compose.yml
- Create
[root@localhost docker-kafka]# docker-compose up -d
Creating kafka1 ... done
Creating kafka2 ... done
Creating kafka3 ... done
Creating kafka-manager ... done
- Check status
[root@localhost docker-kafka]# docker-compose ps
Name Command State Ports
-------------------------------------------------------------------------
kafka-manager ./start-kafka-manager.sh Up 0.0.0.0:9000->9000/tcp
kafka1 start-kafka.sh Up 0.0.0.0:9092->9092/tcp
kafka2 start-kafka.sh Up 0.0.0.0:9093->9092/tcp
kafka3 start-kafka.sh Up 0.0.0.0:9094->9092/tcp
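You can also confirm that all three brokers registered themselves in ZooKeeper. A sketch using the zkCli.sh shipped in the zookeeper image; it should print the broker IDs, e.g. [1, 2, 3]:

# List the broker IDs that Kafka registered under /brokers/ids
docker exec -it zoo1 zkCli.sh -server localhost:2181 ls /brokers/ids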
3.3 Notes
- This compose file brings up a three-broker Kafka cluster (kafka1, kafka2, kafka3) plus a kafka-manager web UI.
- Kafka startup parameters explained:
# Broker ID; must be unique for each broker in the cluster
KAFKA_BROKER_ID: 1
# Listener that Kafka binds to inside the container
KAFKA_LISTENERS: PLAINTEXT://kafka1:9092
# Listener advertised to clients outside the container; replaces KAFKA_ADVERTISED_HOST_NAME and KAFKA_ADVERTISED_PORT
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.102.137:9092
# ZooKeeper ensemble addresses
KAFKA_ZOOKEEPER_CONNECT: zoo1:2181,zoo2:2181,zoo3:2181
# Default replication factor for new topics; usually set to the number of brokers
KAFKA_DEFAULT_REPLICATION_FACTOR: 3
# Default number of partitions for new topics
KAFKA_NUM_PARTITIONS: 3
# JMX monitoring port
JMX_PORT: 9988
- ZooKeeper cluster address (as seen from the host): 192.168.102.137:2184,192.168.102.137:2185,192.168.102.137:2186
- Kafka cluster address (as seen from the host): 192.168.102.137:9092,192.168.102.137:9093,192.168.102.137:9094
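A quick way to confirm that the advertised listeners are reachable from the host is to query broker metadata with one of the scripts from the Kafka distribution used in section 4 below (a sketch; any single broker address works as the bootstrap server):

# Prints the API versions supported by each broker; failure here usually means
# KAFKA_ADVERTISED_LISTENERS points at an unreachable address
./kafka-broker-api-versions.sh --bootstrap-server 192.168.102.137:9092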
4. Testing
Reference: https://blog.csdn.net/qq_34168515/article/details/106433825
4.1 Download a Kafka distribution on the host machine.
For example:
[root@localhost docker-kafka]# cd ../kafka/kafka_2.11-2.1.0/bin/
[root@localhost bin]# ls
connect-distributed.sh kafka-consumer-groups.sh kafka-preferred-replica-election.sh kafka-streams-application-reset.sh zookeeper-security-migration.sh
connect-standalone.sh kafka-consumer-perf-test.sh kafka-producer-perf-test.sh kafka-topics.sh zookeeper-server-start.sh
kafka-acls.sh kafka-delegation-tokens.sh kafka-reassign-partitions.sh kafka-verifiable-consumer.sh zookeeper-server-stop.sh
kafka-broker-api-versions.sh kafka-delete-records.sh kafka-replica-verification.sh kafka-verifiable-producer.sh zookeeper-shell.sh
kafka-configs.sh kafka-dump-log.sh kafka-run-class.sh trogdor.sh
kafka-console-consumer.sh kafka-log-dirs.sh kafka-server-start.sh windows
kafka-console-producer.sh kafka-mirror-maker.sh kafka-server-stop.sh zookeeper.out
4.2 Manually create a topic named cancan
[root@localhost bin]# ./kafka-topics.sh --zookeeper localhost:2184,localhost:2185,localhost:2186 --topic cancan --create --partitions 3 --replication-factor 3
Created topic "cancan".
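To see how the three partitions and their replicas were distributed across the brokers, you can describe the topic (the exact leader and replica assignment will vary from run to run):

# Shows leader, replicas, and ISR for each of the 3 partitions of cancan
./kafka-topics.sh --zookeeper localhost:2184,localhost:2185,localhost:2186 --topic cancan --describe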
4.3 List topics to confirm that cancan exists
[root@localhost bin]# ./kafka-topics.sh --zookeeper localhost:2184,localhost:2185,localhost:2186 --list
__consumer_offsets
cancan
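As an end-to-end test, you can produce and consume a few messages with the console tools (run the two commands in separate terminals, type some lines into the producer, and watch them appear in the consumer):

# Terminal 1: console producer (Kafka 2.1 still uses --broker-list here)
./kafka-console-producer.sh --broker-list 192.168.102.137:9092,192.168.102.137:9093,192.168.102.137:9094 --topic cancan

# Terminal 2: console consumer, reading the topic from the beginning
./kafka-console-consumer.sh --bootstrap-server 192.168.102.137:9092,192.168.102.137:9093,192.168.102.137:9094 --topic cancan --from-beginning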
4.4 Open the kafka-manager web UI in a browser
http://192.168.102.137:9000