Kafka Performance Testing Tools

Introduction

Kafka ships with its own performance test scripts, which can measure producer-side and consumer-side throughput:
kafka-producer-perf-test.sh
kafka-consumer-perf-test.sh
Both scripts can be found in Kafka's bin directory.

Producer

bin/kafka-producer-perf-test.sh

usage: producer-performance [-h] --topic TOPIC --num-records NUM-RECORDS
                            --record-size RECORD-SIZE --throughput THROUGHPUT
                            [--producer-props PROP-NAME=PROP-VALUE [PROP-NAME=PROP-VALUE ...]]
                            [--producer.config CONFIG-FILE]

This tool is used to verify the producer performance.

optional arguments:
  -h, --help             show this help message and exit
  --topic TOPIC          produce messages to this topic
  --num-records NUM-RECORDS
                         number of messages to produce
  --record-size RECORD-SIZE
                         message size in bytes
  --throughput THROUGHPUT
                         throttle maximum message throughput to *approximately* THROUGHPUT messages/sec
  --producer-props PROP-NAME=PROP-VALUE [PROP-NAME=PROP-VALUE ...]
                         kafka producer related configuration properties like bootstrap.servers, client.id etc.
                         These configs take precedence over those passed via --producer.config.
  --producer.config CONFIG-FILE
                         producer config properties file.

Example:

bin/kafka-producer-perf-test.sh --topic store --record-size 1000 --throughput 2000 --num-records 10000 --producer-props bootstrap.servers=cdh01:9092  client.id=store_client
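
Producer settings can also be supplied through --producer.config instead of --producer-props. Below is a minimal sketch of such a properties file, assuming a hypothetical file name store_producer.properties; the broker address matches the example above, and the tuning values (acks, batch.size, linger.ms, compression.type) are illustrative rather than recommendations:

# store_producer.properties (hypothetical example)
bootstrap.servers=cdh01:9092
client.id=store_client
# wait for the partition leader to acknowledge each write
acks=1
# batch and compress records to raise throughput
batch.size=16384
linger.ms=5
compression.type=snappy

The file would then be referenced like this:

bin/kafka-producer-perf-test.sh --topic store --record-size 1000 --throughput 2000 --num-records 10000 --producer.config store_producer.properties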

Consumer

bin/kafka-consumer-perf-test.sh

Option                                 Description
------                                 -----------
--batch-size <Integer: size>           Number of messages to write in a
                                         single batch. (default: 200)
--broker-list <host>                   A broker list to use for connecting
                                         if using the new consumer.
--compression-codec <Integer:          If set, messages are sent compressed
  supported codec: NoCompressionCodec    (default: 0)
  as 0, GZIPCompressionCodec as 1,
  SnappyCompressionCodec as 2,
  LZ4CompressionCodec as 3>
--consumer.config <config file>        Consumer config properties file.
--date-format <date format>            The date format to use for formatting
                                         the time field. See java.text.
                                         SimpleDateFormat for options.
                                         (default: yyyy-MM-dd HH:mm:ss:SSS)
--fetch-size <Integer: size>           The amount of data to fetch in a
                                         single request. (default: 1048576)
--from-latest                          If the consumer does not already have
                                         an established offset to consume
                                         from, start with the latest message
                                         present in the log rather than the
                                         earliest message.
--group <gid>                          The group id to consume on. (default:
                                         perf-consumer-77417)
--help                                 Print usage.
--hide-header                          If set, skips printing the header for
                                         the stats
--message-size <Integer: size>         The size of each message. (default:
                                         100)
--messages <Long: count>               REQUIRED: The number of messages to
                                         send or consume
--new-consumer                         Use the new consumer implementation.
--num-fetch-threads <Integer: count>   Number of fetcher threads. (default: 1)
--reporting-interval <Integer:         Interval in milliseconds at which to
  interval_ms>                           print progress info. (default: 5000)
--show-detailed-stats                  If set, stats are reported for each
                                         reporting interval as configured by
                                         reporting-interval
--socket-buffer-size <Integer: size>   The size of the tcp RECV size.
                                         (default: 2097152)
--threads <Integer: count>             Number of processing threads.
                                         (default: 10)
--topic <topic>                        REQUIRED: The topic to consume from.
--zookeeper <urls>                     The connection string for the
                                         zookeeper connection in the form
                                         host:port. Multiple URLS can be
                                         given to allow fail-over. This
                                         option is only used with the old
                                         consumer.

Example:

bin/kafka-consumer-perf-test.sh --topic store --zookeeper cdh01:2181 --messages 10000
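
The example above drives the old ZooKeeper-based consumer. The option table also lists --new-consumer and --broker-list, so on versions that ship the new consumer a roughly equivalent run connects through the brokers instead; a sketch, assuming the same broker host as before (verify the exact flags with --help on your version):

bin/kafka-consumer-perf-test.sh --new-consumer --broker-list cdh01:9092 --topic store --messages 10000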