Background
This post documents how to set up a Kafka + ZooKeeper development environment with Docker.
Tools used:
docker
docker-compose
Environment:
CentOS 7.2
Installing docker-compose
Docker itself must already be installed; its installation is not covered here.
Download the docker-compose binary:
sudo curl -L "https://github.com/docker/compose/releases/download/1.24.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
On a cloud server in mainland China this download can be slow; it may be faster to download the binary to your local machine (through a proxy if necessary) and upload it to the server.
Make the binary executable:
chmod +x /usr/local/bin/docker-compose
Create a symlink:
ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
Verify the installation:
docker-compose --version
docker-compose version 1.24.1, build 4667896b
Writing the docker-compose file
Save the following as zk-single-kafka-single.yml:
version: '2.1'
services:
  zoo1:
    image: zookeeper:3.4.9
    hostname: zoo1
    ports:
      - "2181:2181"
    environment:
      ZOO_MY_ID: 1
      ZOO_PORT: 2181
      ZOO_SERVERS: server.1=zoo1:2888:3888
    volumes:
      - ./zk-single-kafka-single/zoo1/data:/data
      - ./zk-single-kafka-single/zoo1/datalog:/datalog

  kafka1:
    image: confluentinc/cp-kafka:5.3.1
    hostname: kafka1
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_LISTENERS: LISTENER_DOCKER_INTERNAL://kafka1:19092,LISTENER_DOCKER_EXTERNAL://${DOCKER_HOST_IP:-127.0.0.1}:9092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: LISTENER_DOCKER_INTERNAL:PLAINTEXT,LISTENER_DOCKER_EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: LISTENER_DOCKER_INTERNAL
      KAFKA_ZOOKEEPER_CONNECT: "zoo1:2181"
      KAFKA_BROKER_ID: 1
      KAFKA_LOG4J_LOGGERS: "kafka.controller=INFO,kafka.producer.async.DefaultEventHandler=INFO,state.change.logger=INFO"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    volumes:
      - ./zk-single-kafka-single/kafka1/data:/var/lib/kafka/data
    depends_on:
      - zoo1
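The volumes entries above map container data onto host directories. docker-compose will create them on first run, but you can also create them up front so the layout is visible before starting (the paths below simply mirror the compose file):

```shell
# Create the host directories referenced by the volume mounts
# in the compose file above.
mkdir -p zk-single-kafka-single/zoo1/data \
         zk-single-kafka-single/zoo1/datalog \
         zk-single-kafka-single/kafka1/data
ls -R zk-single-kafka-single
```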
Run the following command to bring the environment up (it pulls and starts one ZooKeeper container and one Kafka container):
docker-compose -f zk-single-kafka-single.yml up -d
A few common docker-compose commands:
docker-compose up                    # start all containers in the foreground
docker-compose up -d                 # start all containers in the background
docker-compose up --no-recreate -d   # do not recreate containers that already exist
docker-compose up -d test2           # start only the container named test2
docker-compose stop                  # stop containers
docker-compose start                 # start stopped containers
docker-compose down                  # stop and remove containers
Verify that the containers are running:
docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
63e2f16de66a confluentinc/cp-kafka:5.3.1 "/etc/confluent/docke" 52 minutes ago Up 37 minutes 0.0.0.0:9092->9092/tcp kafka_zk_docker_file_kafka1_1
224ec7f6a303 zookeeper:3.4.9 "/docker-entrypoint.s" 52 minutes ago Up 37 minutes 2888/tcp, 0.0.0.0:2181->2181/tcp, 3888/tcp kafka_zk_docker_file_zoo1_1
Testing message production and consumption from the Kafka CLI
Enter the Kafka container:
docker exec -it kafka_zk_docker_file_kafka1_1 /bin/bash
Describe all topics:
kafka-topics --describe --zookeeper zoo1:2181
Create a topic:
kafka-topics --create --topic test --partitions 3 --zookeeper zoo1:2181 --replication-factor 1
Created topic test.
This creates a topic named test with 3 partitions and a replication factor of 1.
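With 3 partitions, every keyed message is routed to exactly one partition; Kafka's default partitioner hashes the key (murmur2) modulo the partition count. A simplified sketch of that routing idea, using CRC32 as a stand-in hash rather than Kafka's actual murmur2:

```python
import zlib

def pick_partition(key: bytes, num_partitions: int) -> int:
    """Route a keyed message to a partition: hash(key) mod partition count.
    Kafka's default partitioner uses murmur2; CRC32 stands in here."""
    return zlib.crc32(key) % num_partitions

# The same key always lands on the same partition of the
# 3-partition "test" topic, which is what preserves per-key ordering.
assert pick_partition(b"user-42", 3) == pick_partition(b"user-42", 3)
print(pick_partition(b"user-42", 3))
```

Messages without a key are instead spread across partitions (round-robin in older clients, sticky batching in newer ones).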
Subscribe a consumer to the topic
Open a second shell window and run:
kafka-console-consumer --bootstrap-server localhost:9092 --topic test
Send messages to the topic from a producer:
kafka-console-producer --broker-list localhost:9092 --topic test
>send hello from console -producer
>
Here kafka-console-producer sent one message, "send hello from console -producer", to the topic named test.
Back in the consumer window, the message arrives:
kafka-console-consumer --bootstrap-server localhost:9092 --topic test
send hello from console -producer
Load testing with Kafka's built-in tools
Kafka ships with performance-test scripts; run them from inside the Kafka container.
Run the producer (write) test:
kafka-producer-perf-test --topic test_perf --num-records 100000 --record-size 1000 --throughput 1000 --producer-props bootstrap.servers=localhost:9092
Parameter breakdown:
--topic            topic name; test_perf here
--num-records      total number of messages to send; 100000 here
--record-size      size of each record in bytes; 1000 here
--throughput       target records sent per second; 1000 here
--producer-props bootstrap.servers=localhost:9092   producer configuration
So the test sends 100,000 messages at a target TPS of 1000/s. The results:
5001 records sent, 1000.2 records/sec (0.95 MB/sec), 2.0 ms avg latency, 295.0 ms max latency.
5001 records sent, 1000.2 records/sec (0.95 MB/sec), 0.2 ms avg latency, 16.0 ms max latency.
5001 records sent, 1000.2 records/sec (0.95 MB/sec), 0.3 ms avg latency, 12.0 ms max latency.
5000 records sent, 1000.0 records/sec (0.95 MB/sec), 0.2 ms avg latency, 18.0 ms max latency.
5000 records sent, 999.8 records/sec (0.95 MB/sec), 0.2 ms avg latency, 16.0 ms max latency.
5000 records sent, 1000.0 records/sec (0.95 MB/sec), 0.2 ms avg latency, 16.0 ms max latency.
5000 records sent, 1000.0 records/sec (0.95 MB/sec), 0.3 ms avg latency, 17.0 ms max latency.
5002 records sent, 1000.2 records/sec (0.95 MB/sec), 0.2 ms avg latency, 17.0 ms max latency.
5001 records sent, 1000.0 records/sec (0.95 MB/sec), 0.2 ms avg latency, 11.0 ms max latency.
5001 records sent, 1000.2 records/sec (0.95 MB/sec), 0.2 ms avg latency, 15.0 ms max latency.
5002 records sent, 1000.2 records/sec (0.95 MB/sec), 0.2 ms avg latency, 15.0 ms max latency.
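The ~0.95 MB/sec reported on each line follows directly from the parameters: 1000-byte records at 1000 records/sec, with "MB" here meaning MiB (1048576 bytes):

```python
record_size = 1000        # bytes per record (--record-size)
records_per_sec = 1000    # target rate (--throughput)

# Throughput in MiB/sec, as reported by kafka-producer-perf-test.
mb_per_sec = record_size * records_per_sec / (1024 * 1024)
print(f"{mb_per_sec:.2f} MB/sec")  # 0.95 MB/sec, matching the output above
```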
Run the consumer (read) test
Note that the consumer test must be run only after the producer test has finished.
kafka-consumer-perf-test --broker-list localhost:9092 --topic test_perf --fetch-size 1048576 --messages 100000 --threads 1
Parameter breakdown:
--broker-list   Kafka connection string; localhost:9092 here
--topic         topic name; test_perf here, i.e. the messages written by the producer test above
--fetch-size    size of each fetch in bytes; 1048576 here, i.e. 1 MB
--messages      total number of messages to consume; 100000 (100k) here
The results:
start.time, end.time, data.consumed.in.MB, MB.sec, data.consumed.in.nMsg, nMsg.sec, rebalance.time.ms, fetch.time.ms, fetch.MB.sec, fetch.nMsg.sec
2021-01-12 13:41:24:120, 2021-01-12 13:41:29:114, 96.1370, 19.2505, 100073, 20038.6464, 3040, 1954, 49.2001, 51214.4319
Consumer TPS: 20038.6464 messages/sec.
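That TPS figure can be recomputed from the output row itself: messages consumed divided by elapsed wall-clock time. A quick check, assuming the timestamp format shown above (milliseconds after the final colon):

```python
from datetime import datetime

row = ("2021-01-12 13:41:24:120, 2021-01-12 13:41:29:114, 96.1370, 19.2505, "
       "100073, 20038.6464, 3040, 1954, 49.2001, 51214.4319")
fields = [f.strip() for f in row.split(", ")]

fmt = "%Y-%m-%d %H:%M:%S:%f"          # last field is milliseconds
start = datetime.strptime(fields[0], fmt)
end = datetime.strptime(fields[1], fmt)
messages = int(fields[4])             # data.consumed.in.nMsg column

elapsed = (end - start).total_seconds()  # 4.994 s
tps = messages / elapsed
print(f"{tps:.4f} msg/sec")              # ≈ 20038.6464, the nMsg.sec column
```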