Reference: https://cloud.tencent.com/developer/article/1587057
Kafka throughput testing
(1) Testing Kafka producer throughput
[root@hadoop01 kafka_2.12-2.6.2]# bin/kafka-producer-perf-test.sh --num-records 10000000 --record-size 1000 --topic test01 --throughput 100000 --producer-props bootstrap.servers=hadoop01:9092,hadoop02:9092,hadoop03:9092
Parameter descriptions:
--topic
The Kafka topic to write to; here, test01.
--num-records
Total number of messages to send; here, 10000000 (10 million).
--record-size
Size of each message, in bytes; here, 1000.
--throughput
Target number of records to send per second (a throttle); here, 100000 (100,000).
--producer-props bootstrap.servers=hadoop01:9092,hadoop02:9092,hadoop03:9092
The broker addresses of the Kafka cluster.
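Taken together, these flags define the requested load. A quick sketch of the arithmetic (pure Python, values copied from the command line above; the tool reports MB using 1 MB = 1048576 bytes):

```python
# What load do the flags above request?
record_size = 1000          # --record-size, bytes per message
throughput = 100_000        # --throughput, target records/sec (throttle)
num_records = 10_000_000    # --num-records, total messages

# Target data rate if the throttle were fully reached.
target_mb_per_sec = record_size * throughput / (1024 * 1024)
# Minimum possible run time at the full throttled rate.
min_duration_sec = num_records / throughput

print(f"target load: {target_mb_per_sec:.2f} MB/sec")  # -> 95.37 MB/sec
print(f"minimum run time: {min_duration_sec:.0f} s")   # -> 100 s
```

Since the measured rate below (~37,500 records/sec) stays well under the 100,000/sec throttle, the throttle never limited this particular run; the cluster itself was the bottleneck.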
The test output looks like this:
......
187488 records sent, 37497.6 records/sec (35.76 MB/sec), 878.5 ms avg latency, 939.0 ms max latency.
199120 records sent, 39816.0 records/sec (37.97 MB/sec), 822.1 ms avg latency, 896.0 ms max latency.
194448 records sent, 38889.6 records/sec (37.09 MB/sec), 838.8 ms avg latency, 920.0 ms max latency.
191696 records sent, 38331.5 records/sec (36.56 MB/sec), 850.8 ms avg latency, 927.0 ms max latency.
193680 records sent, 38728.3 records/sec (36.93 MB/sec), 853.1 ms avg latency, 927.0 ms max latency.
193184 records sent, 38636.8 records/sec (36.85 MB/sec), 846.1 ms avg latency, 910.0 ms max latency.
190416 records sent, 38083.2 records/sec (36.32 MB/sec), 859.9 ms avg latency, 906.0 ms max latency.
10000000 records sent, 37516.835680 records/sec (35.78 MB/sec), 868.30 ms avg latency, 1887.00 ms max latency, 854 ms 50th, 1006 ms 95th, 1343 ms 99th, 1691 ms 99.9th.
Result analysis:
In this run, 10 million messages were written in total, at an average of 37516.835680 messages/sec (35.78 MB/sec written to Kafka), with an average write latency of 868.30 ms and a maximum latency of 1887.00 ms.
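The two summary numbers are consistent with each other; a quick cross-check (values copied from the final summary line):

```python
# records/sec * record size should reproduce the reported MB/sec
# (kafka-producer-perf-test reports MB as 1048576 bytes).
records_per_sec = 37516.835680   # from the summary line
record_size = 1000               # bytes, from --record-size

mb_per_sec = records_per_sec * record_size / (1024 * 1024)
print(f"{mb_per_sec:.2f} MB/sec")  # -> 35.78 MB/sec, matching the report
```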
(2) Testing Kafka consumer throughput
[root@hadoop01 kafka_2.12-2.6.2]# bin/kafka-consumer-perf-test.sh --broker-list hadoop01:9092 --topic test01 --messages 1000000 --fetch-size 1048576 --threads 10
Parameter descriptions:
--topic
The topic to consume from; here, test01.
--fetch-size
The amount of data to fetch per request; here, 1048576 bytes (1 MB).
--messages
Total number of messages to consume; here, 1000000 (1 million).
--threads
Number of consumer threads; here, 10.
Note:
The topic must already contain the 1 million messages (e.g. from the producer test above) before running this command; otherwise it fails at runtime with errors like the following:
[2022-06-20 11:20:04,964] WARN [Consumer clientId=consumer-perf-consumer-31301-1, groupId=perf-consumer-31301] Error while fetching metadata with correlation id 2 : {test01=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
WARNING: Exiting before consuming the expected number of messages: timeout (10000 ms) exceeded. You can use the --timeout option to increase the timeout.
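The LEADER_NOT_AVAILABLE warning also appears when the topic does not exist yet and is being auto-created. To avoid it entirely, you can create the topic explicitly before producing. A sketch (the partition and replication counts are illustrative choices for a 3-broker cluster, not values from the test above; this requires a running cluster, so it is shown uncommented but untested):

```shell
# Create test01 up front; 3 partitions / 3 replicas are illustrative
# assumptions for a 3-broker cluster.
bin/kafka-topics.sh --create \
  --bootstrap-server hadoop01:9092,hadoop02:9092,hadoop03:9092 \
  --topic test01 \
  --partitions 3 \
  --replication-factor 3
```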
Normal output:
start.time, end.time, data.consumed.in.MB, MB.sec, data.consumed.in.nMsg, nMsg.sec, rebalance.time.ms, fetch.time.ms, fetch.MB.sec, fetch.nMsg.sec
2022-06-20 11:27:30:288, 2022-06-20 11:27:39:483, 953.6743, 103.7166, 1000000, 108754.7580, 1655695650570, -1655695641375, -0.0000, -0.0006
Result analysis:
In this run, 1 million messages (953.6743 MB of data in total) were consumed, at 103.7166 MB/sec and 108754.7580 messages/sec. The rebalance.time.ms and fetch.time.ms columns are clearly misreported in this output (an epoch timestamp and a negative duration) and can be ignored.
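The consumer summary numbers can be recomputed from the tool's own start/end timestamps. A sketch (timestamps and message count copied from the output above; the 1000-byte record size is carried over from the producer test, which the 953.6743 MB total confirms):

```python
from datetime import datetime

# Recompute the consumer-perf summary from its start/end timestamps.
fmt = "%Y-%m-%d %H:%M:%S:%f"  # the tool prints milliseconds after a colon
start = datetime.strptime("2022-06-20 11:27:30:288", fmt)
end = datetime.strptime("2022-06-20 11:27:39:483", fmt)
elapsed = (end - start).total_seconds()      # 9.195 s

messages = 1_000_000
data_mb = messages * 1000 / (1024 * 1024)    # 1000-byte records -> 953.6743 MB

print(f"{elapsed:.3f} s elapsed")
print(f"{messages / elapsed:.4f} msg/sec")   # ~108754.7580, matching the report
print(f"{data_mb / elapsed:.4f} MB/sec")     # ~103.7166, matching the report
```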
- When producing at around 5,000 messages/sec, a write latency of at most 1 ms is within the acceptable range and indicates that writes are timely.
- When consuming, a processing rate of 200,000+ messages/sec against a backlog of 10 million pending messages is considered a good result.
- By measuring Kafka's performance at the 100 thousand, 1 million, and 10 million message scales, you can estimate whether the cluster is capable of handling message volumes in the hundreds of millions.