### Reproducing the Problem
```
[2015-10-20 17:03:43][INFO] [Connected to 127.0.0.1:1092 for producing] [kafka.utils.Logging$class.info(Logging.scala:68)] [group1_byd0030-1445331791972-ce151558-leader-finder-thread]
[2015-10-20 17:03:43][INFO] [Disconnecting from 127.0.0.1:1092] [kafka.utils.Logging$class.info(Logging.scala:68)] [group1_byd0030-1445331791972-ce151558-leader-finder-thread]
[2015-10-20 17:03:43][WARN] [Fetching topic metadata with correlation id 0 for topics [Set(bitmap_topic)] from broker [id:0,host:127.0.0.1,port:1092] failed] [kafka.utils.Logging$class.warn(Logging.scala:89)] [group1_byd0030-1445331791972-ce151558-leader-finder-thread]
java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:100)
at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:73)
at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:72)
at kafka.producer.SyncProducer.send(SyncProducer.scala:113)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:58)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:93)
at kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:66)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
```
The error above is thrown when starting the consumer. From it we can see that the client successfully connected to Kafka but was then disconnected, with no reason given for the disconnection.
### Solution

We can use kafka-console-consumer.sh, a test script that ships with Kafka; its error messages are much more explicit:
```
[jinsx@byd0030 bin]$ ./kafka-console-consumer.sh --consumer.config ../config/consumer.properties --zookeeper 127.0.0.1:1081 --topic bitmap_topic
[2015-10-21 11:12:33,186] ERROR Error processing message, stopping consumer: (kafka.tools.ConsoleConsumer$)
kafka.common.MessageSizeTooLargeException: Found a message larger than the maximum fetch size of this consumer on topic bitmap_topic partition 2 at fetch offset 0. Increase the fetch size, or decrease the maximum message size the broker will allow.
at kafka.consumer.ConsumerIterator.makeNext(ConsumerIterator.scala:90)
at kafka.consumer.ConsumerIterator.makeNext(ConsumerIterator.scala:33)
at kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:66)
at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:58)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at kafka.utils.IteratorTemplate.foreach(IteratorTemplate.scala:32)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at kafka.consumer.KafkaStream.foreach(KafkaStream.scala:25)
at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:166)
at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)
```
This error clearly says the message body is too large, and tells us to either increase the fetch size, i.e. the `fetch.message.max.bytes` setting in the consumer configuration, or decrease the maximum message size the broker will allow, i.e. the `message.max.bytes` setting in server.properties.
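As a sketch, the two settings might look like this (the byte values below are illustrative, not recommendations; the key point is that the consumer's fetch size must be at least as large as the largest message the broker accepts):

```properties
# consumer.properties — raise the consumer's maximum fetch size per partition
fetch.message.max.bytes=10485760

# server.properties — or lower the largest message the broker will accept
message.max.bytes=1000000
```

Only one side needs to change; pick whichever limit actually reflects your intended maximum message size.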
### Conclusion

Errors like this are usually configuration problems. The official Kafka configuration documentation lists every configuration item; carefully compare your broker, consumer, and producer settings against it, and combine that with scripts such as kafka-console-consumer.sh and kafka-console-producer.sh to track the error down quickly.
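The consistency check between the two config files can even be scripted. Below is a minimal sketch: the file paths and values are made up for illustration (the script creates its own sample configs in /tmp), so point the `grep` calls at your real server.properties and consumer.properties instead.

```shell
#!/bin/sh
# Sample configs for illustration only — replace with your real files.
cat > /tmp/server.properties <<'EOF'
message.max.bytes=10485760
EOF
cat > /tmp/consumer.properties <<'EOF'
fetch.message.max.bytes=1048576
EOF

# Extract the broker's max message size and the consumer's max fetch size.
broker_max=$(grep '^message.max.bytes' /tmp/server.properties | cut -d= -f2)
consumer_max=$(grep '^fetch.message.max.bytes' /tmp/consumer.properties | cut -d= -f2)

# If the consumer fetches less than the broker can accept, large messages
# will trigger MessageSizeTooLargeException, as seen above.
if [ "$consumer_max" -lt "$broker_max" ]; then
  echo "WARNING: fetch.message.max.bytes ($consumer_max) < message.max.bytes ($broker_max)"
fi
```

Running this against the sample values prints the warning, since 1048576 is smaller than 10485760.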