Kafka: Producer and Consumer Performance Testing

1)Kafka Performance Testing

Kafka ships with official scripts for benchmarking:

⚫ Producer benchmark: kafka-producer-perf-test.sh

⚫ Consumer benchmark: kafka-consumer-perf-test.sh

2)Kafka Producer Performance Test

(1)Create a topic named test with 3 partitions and 3 replicas

[root@hadoop102 kafka]$ bin/kafka-topics.sh --bootstrap-server hadoop102:9092 --create --replication-factor 3 --partitions 3 --topic test

(2)Both scripts live under /opt/module/kafka/bin. Run the producer test:

[root@hadoop105 kafka]$ bin/kafka-producer-perf-test.sh --topic test --record-size 1024 --num-records 1000000 --throughput 10000 --producer-props bootstrap.servers=hadoop102:9092,hadoop103:9092,hadoop104:9092 batch.size=16384 linger.ms=0

Parameter notes:

⚫ record-size is the size of one record in bytes; this test uses 1 KiB.

⚫ num-records is the total number of records to send; this test uses 1,000,000.

⚫ throughput is the records-per-second cap; -1 means no throttling, i.e. produce as fast as possible, which measures the producer's maximum throughput. This test caps it at 10,000 records/sec.

⚫ producer-props introduces producer-side configs; batch.size is set to 16 KiB here.

Output:

37021 records sent, 7401.2 records/sec (7.23 MB/sec), 1136.0 ms avg latency, 
1453.0 ms max latency.
50535 records sent, 10107.0 records/sec (9.87 MB/sec), 1199.5 ms avg 
latency, 1404.0 ms max latency.
47835 records sent, 9567.0 records/sec (9.34 MB/sec), 1350.8 ms avg latency, 
1570.0 ms max latency.
...
42390 records sent, 8444.2 records/sec (8.25 MB/sec), 3372.6 ms avg latency, 
4008.0 ms max latency.
37800 records sent, 7558.5 records/sec (7.38 MB/sec), 4079.7 ms avg latency, 
4758.0 ms max latency.
33570 records sent, 6714.0 records/sec (6.56 MB/sec), 4549.0 ms avg latency, 
5049.0 ms max latency.
1000000 records sent, 9180.713158 records/sec (8.97 MB/sec), 1894.78 ms avg latency, 5049.00 ms max latency, 1335 ms 50th, 4128 ms 95th, 4719 ms 99th, 5030 ms 99.9th.
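The parameter notes above mention --throughput -1 for measuring maximum throughput. A minimal sketch of that variant, assuming the same hadoop102-104 cluster used throughout this doc; it only prints the command unless RUN=1 is set on a node with Kafka installed:

```shell
# Compose an unthrottled run: --throughput -1 removes the rate cap, so the
# producer sends as fast as it can, measuring its maximum throughput.
BROKERS=hadoop102:9092,hadoop103:9092,hadoop104:9092
CMD="bin/kafka-producer-perf-test.sh --topic test --record-size 1024 --num-records 1000000 --throughput -1 --producer-props bootstrap.servers=$BROKERS batch.size=16384 linger.ms=0"
# Print by default; set RUN=1 to execute it against the cluster.
if [ "${RUN:-0}" = "1" ]; then $CMD; else printf '%s\n' "$CMD"; fi
```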

(3)Tuning batch.size

①batch.size defaults to 16 KiB. This run sets it to 32 KiB.

[root@hadoop105 kafka]$ bin/kafka-producer-perf-test.sh --topic test --record-size 1024 --num-records 1000000 --throughput 10000 --producer-props bootstrap.servers=hadoop102:9092,hadoop103:9092,hadoop104:9092 batch.size=32768 linger.ms=0

Output:

49922 records sent, 9978.4 records/sec (9.74 MB/sec), 64.2 ms avg latency, 
340.0 ms max latency.
49940 records sent, 9988.0 records/sec (9.75 MB/sec), 15.3 ms avg latency, 
31.0 ms max latency.
50018 records sent, 10003.6 records/sec (9.77 MB/sec), 16.4 ms avg latency, 
52.0 ms max latency.
...
49960 records sent, 9992.0 records/sec (9.76 MB/sec), 17.2 ms avg latency, 
40.0 ms max latency.
50090 records sent, 10016.0 records/sec (9.78 MB/sec), 16.9 ms avg latency, 
47.0 ms max latency.
1000000 records sent, 9997.600576 records/sec (9.76 MB/sec), 20.20 ms avg latency, 340.00 ms max latency, 16 ms 50th, 30 ms 95th, 168 ms 99th, 249 ms 99.9th.

②batch.size defaults to 16 KiB. This run sets it to 4 KiB.

[root@hadoop105 kafka]$ bin/kafka-producer-perf-test.sh --topic test --record-size 1024 --num-records 1000000 --throughput 10000 --producer-props bootstrap.servers=hadoop102:9092,hadoop103:9092,hadoop104:9092 batch.size=4096 linger.ms=0

Output:

15598 records sent, 3117.1 records/sec (3.04 MB/sec), 1878.3 ms avg latency, 
3458.0 ms max latency.
17748 records sent, 3549.6 records/sec (3.47 MB/sec), 5072.5 ms avg latency, 
6705.0 ms max latency.
18675 records sent, 3733.5 records/sec (3.65 MB/sec), 6800.9 ms avg latency, 
7052.0 ms max latency.
...
19125 records sent, 3825.0 records/sec (3.74 MB/sec), 6416.5 ms avg latency, 
7023.0 ms max latency.
1000000 records sent, 3660.201531 records/sec (3.57 MB/sec), 6576.68 ms avg latency, 7677.00 ms max latency, 6745 ms 50th, 7298 ms 95th, 7507 ms 99th, 7633 ms 99.9th.

(4)Tuning linger.ms

linger.ms defaults to 0 ms. This run sets it to 50 ms.

[root@hadoop105 kafka]$ bin/kafka-producer-perf-test.sh --topic test --record-size 1024 --num-records 1000000 --throughput 10000 --producer-props bootstrap.servers=hadoop102:9092,hadoop103:9092,hadoop104:9092 batch.size=4096 linger.ms=50

Output:

16804 records sent, 3360.1 records/sec (3.28 MB/sec), 1841.6 ms avg latency, 
3338.0 ms max latency.
18972 records sent, 3793.6 records/sec (3.70 MB/sec), 4877.7 ms avg latency, 
6453.0 ms max latency.
19269 records sent, 3852.3 records/sec (3.76 MB/sec), 6477.9 ms avg latency, 
6686.0 ms max latency.
...
17073 records sent, 3414.6 records/sec (3.33 MB/sec), 6987.7 ms avg latency, 
7353.0 ms max latency.
19326 records sent, 3865.2 records/sec (3.77 MB/sec), 6756.5 ms avg latency, 
7357.0 ms max latency.
1000000 records sent, 3842.754486 records/sec (3.75 MB/sec), 6272.49 ms avg latency, 7437.00 ms max latency, 6308 ms 50th, 6880 ms 95th, 7289 ms 99th, 7387 ms 99.9th.

(5)Tuning the compression type

①The default compression.type is none. This run sets it to snappy.

[root@hadoop105 kafka]$ bin/kafka-producer-perf-test.sh --topic test --record-size 1024 --num-records 1000000 --throughput 10000 --producer-props bootstrap.servers=hadoop102:9092,hadoop103:9092,hadoop104:9092 batch.size=4096 linger.ms=50 compression.type=snappy

Output:

17244 records sent, 3446.0 records/sec (3.37 MB/sec), 5207.0 ms avg latency, 
6861.0 ms max latency.
18873 records sent, 3774.6 records/sec (3.69 MB/sec), 6865.0 ms avg latency, 
7094.0 ms max latency.
18378 records sent, 3674.1 records/sec (3.59 MB/sec), 6579.2 ms avg latency, 
6738.0 ms max latency.
...
17631 records sent, 3526.2 records/sec (3.44 MB/sec), 6671.3 ms avg latency, 
7566.0 ms max latency.
19116 records sent, 3823.2 records/sec (3.73 MB/sec), 6739.4 ms avg latency, 
7630.0 ms max latency.
1000000 records sent, 3722.925028 records/sec (3.64 MB/sec), 6467.75 ms avg latency, 7727.00 ms max latency, 6440 ms 50th, 7308 ms 95th, 7553 ms 99th, 7665 ms 99.9th.

②The default compression.type is none. This run sets it to zstd.

[root@hadoop105 kafka]$ bin/kafka-producer-perf-test.sh --topic test --record-size 1024 --num-records 1000000 --throughput 10000 --producer-props bootstrap.servers=hadoop102:9092,hadoop103:9092,hadoop104:9092 batch.size=4096 linger.ms=50 compression.type=zstd

Output:

23820 records sent, 4763.0 records/sec (4.65 MB/sec), 1580.2 ms avg latency, 
2651.0 ms max latency.
29340 records sent, 5868.0 records/sec (5.73 MB/sec), 3666.0 ms avg latency, 
4752.0 ms max latency.
28950 records sent, 5788.8 records/sec (5.65 MB/sec), 5785.2 ms avg latency, 
6865.0 ms max latency.
...
29580 records sent, 5916.0 records/sec (5.78 MB/sec), 6907.6 ms avg latency, 
7432.0 ms max latency.
29925 records sent, 5981.4 records/sec (5.84 MB/sec), 6948.9 ms avg latency, 
7541.0 ms max latency.
1000000 records sent, 5733.583318 records/sec (5.60 MB/sec), 6824.75 ms avg latency, 7595.00 ms max latency, 7067 ms 50th, 7400 ms 95th, 7500 ms 99th, 7552 ms 99.9th.

③The default compression.type is none. This run sets it to gzip.

[root@hadoop105 kafka]$ bin/kafka-producer-perf-test.sh --topic test --record-size 1024 --num-records 1000000 --throughput 10000 --producer-props bootstrap.servers=hadoop102:9092,hadoop103:9092,hadoop104:9092 batch.size=4096 linger.ms=50 compression.type=gzip

Output:

27170 records sent, 5428.6 records/sec (5.30 MB/sec), 1374.0 ms avg latency, 
2311.0 ms max latency.
31050 records sent, 6210.0 records/sec (6.06 MB/sec), 3183.8 ms avg latency, 
4228.0 ms max latency.
32145 records sent, 6427.7 records/sec (6.28 MB/sec), 5028.1 ms avg latency, 
6042.0 ms max latency.
...
31710 records sent, 6342.0 records/sec (6.19 MB/sec), 6457.1 ms avg latency, 
6777.0 ms max latency.
31755 records sent, 6348.5 records/sec (6.20 MB/sec), 6498.7 ms avg latency, 
6780.0 ms max latency.
32760 records sent, 6548.1 records/sec (6.39 MB/sec), 6375.7 ms avg latency, 
6822.0 ms max latency.
1000000 records sent, 6320.153706 records/sec (6.17 MB/sec), 6155.42 ms avg latency, 6943.00 ms max latency, 6437 ms 50th, 6774 ms 95th, 6863 ms 99th, 6912 ms 99.9th.

④The default compression.type is none. This run sets it to lz4.

[root@hadoop105 kafka]$ bin/kafka-producer-perf-test.sh --topic test --record-size 1024 --num-records 1000000 --throughput 10000 --producer-props bootstrap.servers=hadoop102:9092,hadoop103:9092,hadoop104:9092 batch.size=4096 linger.ms=50 compression.type=lz4

Output:

16696 records sent, 3339.2 records/sec (3.26 MB/sec), 1924.5 ms avg latency, 
3355.0 ms max latency.
19647 records sent, 3928.6 records/sec (3.84 MB/sec), 4841.5 ms avg latency, 
6320.0 ms max latency.
20142 records sent, 4028.4 records/sec (3.93 MB/sec), 6203.2 ms avg latency, 
6378.0 ms max latency.
...
20130 records sent, 4024.4 records/sec (3.93 MB/sec), 6073.6 ms avg latency, 
6396.0 ms max latency.
19449 records sent, 3889.8 records/sec (3.80 MB/sec), 6195.6 ms avg latency, 
6500.0 ms max latency.
19872 records sent, 3972.8 records/sec (3.88 MB/sec), 6274.5 ms avg latency, 
6565.0 ms max latency.
1000000 records sent, 3956.087430 records/sec (3.86 MB/sec), 6085.62 ms avg latency, 6745.00 ms max latency, 6212 ms 50th, 6524 ms 95th, 6610 ms 99th, 6695 ms 99.9th.

(6)Tuning the producer buffer size

The producer-side buffer (buffer.memory) defaults to 32 MiB. This run sets it to 64 MiB.

[root@hadoop105 kafka]$ bin/kafka-producer-perf-test.sh --topic test --record-size 1024 --num-records 1000000 --throughput 10000 --producer-props bootstrap.servers=hadoop102:9092,hadoop103:9092,hadoop104:9092 batch.size=4096 linger.ms=50 buffer.memory=67108864

Output:

20170 records sent, 4034.0 records/sec (3.94 MB/sec), 1669.5 ms avg latency, 
3040.0 ms max latency.
21996 records sent, 4399.2 records/sec (4.30 MB/sec), 4407.9 ms avg latency, 
5806.0 ms max latency.
22113 records sent, 4422.6 records/sec (4.32 MB/sec), 7189.0 ms avg latency, 
8623.0 ms max latency.
...
19818 records sent, 3963.6 records/sec (3.87 MB/sec), 12416.0 ms avg 
latency, 12847.0 ms max latency.
20331 records sent, 4062.9 records/sec (3.97 MB/sec), 12400.4 ms avg 
latency, 12874.0 ms max latency.
19665 records sent, 3933.0 records/sec (3.84 MB/sec), 12303.9 ms avg 
latency, 12838.0 ms max latency.
1000000 records sent, 4020.100503 records/sec (3.93 MB/sec), 11692.17 ms avg latency, 13796.00 ms max latency, 12238 ms 50th, 12949 ms 95th, 13691 ms 99th, 13766 ms 99.9th.

3)Kafka Consumer Performance Test

(1)In /opt/module/kafka/config/consumer.properties, set the number of records fetched per poll to 500

max.poll.records=500

(2)Consume 1,000,000 messages for the benchmark

[atguigu@hadoop105 kafka]$ bin/kafka-consumer-perf-test.sh --bootstrap-server hadoop102:9092,hadoop103:9092,hadoop104:9092 --topic test --messages 1000000 --consumer.config config/consumer.properties

Parameter notes:

⚫ --bootstrap-server specifies the Kafka cluster address

⚫ --topic specifies the topic name

⚫ --messages is the total number of messages to consume; this test uses 1,000,000.

Output:

start.time, end.time, data.consumed.in.MB, MB.sec, data.consumed.in.nMsg, nMsg.sec, rebalance.time.ms, fetch.time.ms, fetch.MB.sec, fetch.nMsg.sec
2022-01-20 09:58:26:171, 2022-01-20 09:58:33:321, 977.0166, 136.6457, 1000465, 139925.1748, 415, 6735, 145.0656, 148547.1418

(3)Raise the per-poll record count to 2000

①In /opt/module/kafka/config/consumer.properties, set the per-poll record count to 2000

max.poll.records=2000

②Run again

[root@hadoop105 kafka]$ bin/kafka-consumer-perf-test.sh --bootstrap-server hadoop102:9092,hadoop103:9092,hadoop104:9092 --topic test --messages 1000000 --consumer.config config/consumer.properties

Output:

start.time, end.time, data.consumed.in.MB, MB.sec, data.consumed.in.nMsg, nMsg.sec, rebalance.time.ms, fetch.time.ms, fetch.MB.sec, fetch.nMsg.sec
2022-01-20 10:18:06:268, 2022-01-20 10:18:12:863, 977.5146, 148.2206, 1000975, 151777.8620, 358, 6237, 156.7283, 160489.8188

(4)Raise fetch.max.bytes to 100 MiB

①In /opt/module/kafka/config/consumer.properties, set the maximum fetch size to 100 MiB

fetch.max.bytes=104857600
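The value 104857600 is simply 100 MiB expressed in bytes, which is how fetch.max.bytes must be given:

```shell
# fetch.max.bytes takes bytes; 100 MiB = 100 * 1024 * 1024.
echo $((100 * 1024 * 1024))
# prints 104857600
```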

②Run again

[root@hadoop105 kafka]$ bin/kafka-consumer-perf-test.sh --bootstrap-server hadoop102:9092,hadoop103:9092,hadoop104:9092 --topic test --messages 1000000 --consumer.config config/consumer.properties

Output:

start.time, end.time, data.consumed.in.MB, MB.sec, data.consumed.in.nMsg, nMsg.sec, rebalance.time.ms, fetch.time.ms, fetch.MB.sec, fetch.nMsg.sec
2022-01-20 10:26:13:203, 2022-01-20 10:26:19:662, 977.5146, 151.3415, 1000975, 154973.6801, 362, 6097, 160.3272, 164175.004
