Performance testing Kafka with its built-in test scripts

Stress testing

Use the scripts that ship with Kafka to stress-test the cluster. While the test runs, watch which resource becomes the bottleneck (CPU, memory, or network I/O). In most deployments network I/O is the first to saturate.

Producer stress test

bin/kafka-producer-perf-test.sh  --topic test --record-size 100 --num-records 100000 --throughput 1000 --producer-props bootstrap.servers=dw-node01:9092,dw-node02:9092,dw-node03:9092
  • record-size size of each message, in bytes
  • num-records total number of messages to send
  • throughput target number of messages produced per second
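To probe the cluster's ceiling rather than hold a fixed rate, the same script can be run unthrottled. The sketch below reuses the topic and broker list from the command above; the extra producer configs (acks, batch.size, linger.ms) are illustrative tuning knobs, not values from the original test:

```shell
# Unthrottled producer test: --throughput -1 disables the rate limiter,
# so the script sends records as fast as the producer allows.
bin/kafka-producer-perf-test.sh \
  --topic test \
  --record-size 100 \
  --num-records 100000 \
  --throughput -1 \
  --producer-props bootstrap.servers=dw-node01:9092,dw-node02:9092,dw-node03:9092 \
      acks=1 batch.size=16384 linger.ms=5
```

Comparing the unthrottled MB/sec against the throttled run shows how much headroom the cluster has.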

Test results

5001 records sent, 1000.2 records/sec (0.10 MB/sec), 0.9 ms avg latency, 3.0 max latency.
5002 records sent, 1000.2 records/sec (0.10 MB/sec), 0.9 ms avg latency, 2.0 max latency.
5001 records sent, 1000.2 records/sec (0.10 MB/sec), 0.9 ms avg latency, 6.0 max latency.
5002 records sent, 1000.2 records/sec (0.10 MB/sec), 0.9 ms avg latency, 4.0 max latency.
5002 records sent, 1000.2 records/sec (0.10 MB/sec), 0.9 ms avg latency, 13.0 max latency.
5001 records sent, 1000.2 records/sec (0.10 MB/sec), 0.9 ms avg latency, 3.0 max latency.
5002 records sent, 1000.0 records/sec (0.10 MB/sec), 0.9 ms avg latency, 2.0 max latency.
5003 records sent, 1000.4 records/sec (0.10 MB/sec), 0.9 ms avg latency, 2.0 max latency.
5002 records sent, 1000.2 records/sec (0.10 MB/sec), 0.9 ms avg latency, 2.0 max latency.
5002 records sent, 1000.2 records/sec (0.10 MB/sec), 0.9 ms avg latency, 2.0 max latency.
5001 records sent, 1000.2 records/sec (0.10 MB/sec), 0.9 ms avg latency, 2.0 max latency.
5002 records sent, 1000.2 records/sec (0.10 MB/sec), 0.9 ms avg latency, 14.0 max latency.
5001 records sent, 1000.2 records/sec (0.10 MB/sec), 0.9 ms avg latency, 8.0 max latency.
5002 records sent, 1000.2 records/sec (0.10 MB/sec), 0.9 ms avg latency, 2.0 max latency.
100000 records sent, 999.890012 records/sec (0.10 MB/sec), 1.21 ms avg latency, 539.00 ms max latency, 1 ms 50th, 1 ms 95th, 2 ms 99th, 69 ms 99.9th.

In this run 100,000 messages were written in total at about 0.10 MB of data per second (999.89 messages/sec on average); the average write latency was 1.21 ms and the maximum latency was 539 ms.
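The reported throughput is consistent with the test parameters: 1000 records/sec at 100 bytes per record is roughly 0.10 MB/sec. A quick sanity check:

```shell
# records/sec * record-size (bytes) -> MB/sec (1 MB = 1024*1024 bytes)
awk 'BEGIN { printf "%.2f MB/sec\n", 1000 * 100 / 1024 / 1024 }'
# prints "0.10 MB/sec"
```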

Consumer stress test

For the consumer test, if none of the four resources (I/O, CPU, memory, network) can be scaled up, consider increasing the number of partitions to improve throughput.
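Raising the partition count can be done online with the topic tool; a sketch, assuming the same topic and ZooKeeper address used elsewhere in this article (the target count of 6 is an example):

```shell
# Raise the partition count of topic "test" to 6.
# Note: Kafka only allows increasing partitions, never decreasing them,
# and repartitioning changes key-to-partition mapping for keyed messages.
bin/kafka-topics.sh --alter \
  --zookeeper dw-node01:2181 \
  --topic test \
  --partitions 6
```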

bin/kafka-consumer-perf-test.sh --zookeeper dw-node01:2181 --topic test --fetch-size 10000 --messages 10000000 --threads 1
  • zookeeper ZooKeeper connection string (used by the legacy consumer)
  • topic name of the topic to consume from
  • fetch-size amount of data to fetch per request, in bytes
  • messages total number of messages to consume
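On newer Kafka versions the ZooKeeper-based consumer is gone, so the script connects to the brokers directly; depending on the exact version the option is --broker-list or --bootstrap-server. A sketch with the broker addresses assumed earlier:

```shell
# Same consumer test against the brokers directly (newer Kafka versions);
# swap --broker-list for --bootstrap-server on recent releases.
bin/kafka-consumer-perf-test.sh \
  --broker-list dw-node01:9092,dw-node02:9092,dw-node03:9092 \
  --topic test \
  --fetch-size 10000 \
  --messages 10000000 \
  --threads 1
```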

Test results

start.time, end.time, data.consumed.in.MB, MB.sec, data.consumed.in.nMsg, nMsg.sec
2020-07-20 12:01:03:058, 2020-07-20 12:01:03:902, 9.5367, 11.2995, 100000, 118483.4123

start.time and end.time mark when the test started and finished; data.consumed.in.MB (9.5367) is the total amount of data consumed; MB.sec (11.2995) is the consumption throughput in MB per second; data.consumed.in.nMsg (100000) is the total number of messages consumed; nMsg.sec (118483.4123) is the number of messages consumed per second.
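The per-second figures follow directly from the totals and the elapsed time (12:01:03:058 to 12:01:03:902, i.e. 0.844 s); a quick check of nMsg.sec:

```shell
# 100000 messages consumed in 0.844 s -> messages per second
awk 'BEGIN { printf "%.4f\n", 100000 / 0.844 }'
# prints "118483.4123"
```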
