Kafka fetching topic metadata failed

### Reproducing the problem

    [2015-10-20 17:03:43][INFO] [Connected to 127.0.0.1:1092 for producing] [kafka.utils.Logging$class.info(Logging.scala:68)] [group1_byd0030-1445331791972-ce151558-leader-finder-thread]
    [2015-10-20 17:03:43][INFO] [Disconnecting from 127.0.0.1:1092] [kafka.utils.Logging$class.info(Logging.scala:68)] [group1_byd0030-1445331791972-ce151558-leader-finder-thread]
    [2015-10-20 17:03:43][WARN] [Fetching topic metadata with correlation id 0 for topics [Set(bitmap_topic)] from broker [id:0,host:127.0.0.1,port:1092] failed] [kafka.utils.Logging$class.warn(Logging.scala:89)] [group1_byd0030-1445331791972-ce151558-leader-finder-thread]
    java.nio.channels.ClosedChannelException
        at kafka.network.BlockingChannel.send(BlockingChannel.scala:100)
        at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:73)
        at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:72)
        at kafka.producer.SyncProducer.send(SyncProducer.scala:113)
        at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:58)
        at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:93)
        at kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:66)
        at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)

The error above appears when starting the consumer. From the log we can see that the client connected to Kafka successfully but was then disconnected, and no reason for the disconnect is given.
### Solution

We can use Kafka's bundled test script kafka-console-consumer.sh, whose error reporting is much more explicit:

    [jinsx@byd0030 bin]$ ./kafka-console-consumer.sh  --consumer.config ../config/consumer.properties --zookeeper 127.0.0.1:1081 --topic bitmap_topic
    [2015-10-21 11:12:33,186] ERROR Error processing message, stopping consumer:  (kafka.tools.ConsoleConsumer$)
    kafka.common.MessageSizeTooLargeException: Found a message larger than the maximum fetch size of this consumer on topic bitmap_topic partition 2 at fetch offset 0. Increase the fetch size, or decrease the maximum message size the broker will allow.
        at kafka.consumer.ConsumerIterator.makeNext(ConsumerIterator.scala:90)
        at kafka.consumer.ConsumerIterator.makeNext(ConsumerIterator.scala:33)
        at kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:66)
        at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:58)
        at scala.collection.Iterator$class.foreach(Iterator.scala:727)
        at kafka.utils.IteratorTemplate.foreach(IteratorTemplate.scala:32)
        at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
        at kafka.consumer.KafkaStream.foreach(KafkaStream.scala:25)
        at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:166)
        at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)

This time the error says the message body is too large and tells us either to increase the fetch size, which is the "fetch.message.max.bytes" setting in the consumer configuration, or to decrease the maximum message size the broker will allow, which is the "message.max.bytes" setting in server.properties.
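Concretely, the consumer's per-partition fetch limit must be at least as large as the biggest message the broker accepts. A minimal sketch of the two config files (the 10 MB value here is an arbitrary assumption, not from the original post; pick a limit that fits your messages):

```properties
# consumer.properties (Kafka 0.8.x high-level consumer)
# Maximum bytes fetched per partition per request; must cover the largest message.
fetch.message.max.bytes=10485760

# server.properties (broker)
# Largest message the broker will accept from producers.
message.max.bytes=10485760
# On replicated topics, followers fetch with this limit, so it should
# also be >= message.max.bytes or replication of large messages stalls.
replica.fetch.max.bytes=10485760
```

After changing server.properties the broker must be restarted; the consumer picks up its setting on the next start.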

### Conclusion

Errors like this usually come down to configuration. The official Kafka configuration documentation lists every setting; carefully compare your broker, consumer, and producer configurations against it, and combined with scripts such as kafka-console-consumer.sh and kafka-console-producer.sh you can track down the error easily.
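Following that approach, a quick console round trip exercises both the producer and consumer paths (addresses and topic follow the log output above and are assumptions about this setup; the flags are those of the Kafka 0.8-era console tools):

```shell
# Produce one small test message; the console producer talks to the broker directly.
echo "ping" | ./kafka-console-producer.sh --broker-list 127.0.0.1:1092 --topic bitmap_topic

# Consume from the beginning via ZooKeeper; misconfigurations surface here
# as full stack traces rather than a bare ClosedChannelException.
./kafka-console-consumer.sh --zookeeper 127.0.0.1:1081 \
    --consumer.config ../config/consumer.properties \
    --topic bitmap_topic --from-beginning
```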

Reposted from: https://my.oschina.net/chzhuo/blog/519775
