Troubleshooting notes: Kafka consumer receiving no data

Three new Kafka brokers had just been installed and started on the cluster. After packaging the code and uploading it to the cluster, the consumer never received any data.
Debugging locally in IDEA showed that the program was stuck in the following method, spinning in an endless loop:

    /**
     * Block until the coordinator for this group is known and is ready to receive requests.
     * (i.e. wait until we have established contact with the server-side GroupCoordinator)
     */
    public void ensureCoordinatorReady() {
        while (coordinatorUnknown()) {                                   // no GroupCoordinator known yet
            RequestFuture<Void> future = sendGroupCoordinatorRequest();  // send the lookup request
            client.poll(future);                                         // block until the async call completes

            if (future.failed()) {
                if (future.isRetriable())
                    client.awaitMetadataUpdate();
                else
                    throw future.exception();
            } else if (coordinator != null && client.connectionFailed(coordinator)) {
                // we found the coordinator, but the connection has failed, so mark
                // it dead and backoff before retrying discovery
                coordinatorDead();
                time.sleep(retryBackoffMs);                              // back off, then retry
            }
        }
    }
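
For context, this method sits on the consumer's normal poll path, so an ordinary consumer hits it on its first poll. The sketch below is roughly the kind of consumer that reproduces the hang; the broker address, topic name, and group id are placeholders, not values from this cluster.

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class BlockedConsumerDemo {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "broker1:9092");   // placeholder broker address
            props.put("group.id", "my-group");                // placeholder group id
            props.put("key.deserializer",
                      "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                      "org.apache.kafka.common.serialization.StringDeserializer");

            KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
            consumer.subscribe(Collections.singletonList("my-topic"));   // placeholder topic
            while (true) {
                // The first poll() joins the group, which starts with the coordinator
                // lookup shown above; if that lookup never succeeds, the consumer hangs
                // here and no records ever come back.
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("%s-%d@%d: %s%n",
                            record.topic(), record.partition(), record.offset(), record.value());
                }
            }
        }
    }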

Roughly, the flow works like this:

  • The consumer locates one broker in the cluster to act as the group coordinator
  • Each consumer in the group then sends a request to the coordinator to join the consumer group and apply to be its leader
  • One consumer ends up as the consumer group leader; the others become followers
  • The leader computes the partition assignment and sends it back to the coordinator
  • The followers fetch their partition assignment from the coordinator

The problem was at the very first step: the consumer could not establish contact with the server-side GroupCoordinator, so the program just kept waiting. The coordinator for a group is the broker that leads one particular partition of the internal __consumer_offsets topic, chosen by hashing the group id, which is why the next thing to check was that topic.
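
A minimal sketch of that mapping (the group id is a placeholder; Kafka internally uses this kind of non-negative hash modulo the __consumer_offsets partition count, 50 by default):

    public class CoordinatorPartition {
        public static void main(String[] args) {
            String groupId = "my-group";      // hypothetical group id
            int offsetsPartitions = 50;       // offsets.topic.num.partitions default
            // Non-negative hash of the group id modulo the partition count; the broker
            // leading this __consumer_offsets partition is the group's coordinator, so
            // if that broker is unreachable, the coordinator lookup never succeeds.
            int partition = (groupId.hashCode() & 0x7fffffff) % offsetsPartitions;
            System.out.println("group " + groupId + " -> __consumer_offsets partition " + partition);
        }
    }
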
Looking at the __consumer_offsets topic, all 50 partitions sat on the broker with id 152:

bin/kafka-topics.sh --describe --zookeeper localhost:2182 --topic __consumer_offsets
Topic:__consumer_offsets    PartitionCount:50    ReplicationFactor:1    Configs:segment.bytes=104857600,cleanup.policy=compact,compression.type=producer
    Topic: __consumer_offsets    Partition: 0    Leader: 152    Replicas: 152   Isr:152
    Topic: __consumer_offsets    Partition: 1    Leader: 152    Replicas: 152   Isr:152
    Topic: __consumer_offsets    Partition: 2    Leader: 152    Replicas: 152   Isr:152
    Topic: __consumer_offsets    Partition: 3    Leader: 152   
......

But there was no broker with id 152 in the cluster. Recalling that brokers had been added to and removed from this cluster in the past, the initial conclusion was that 152 was an old broker: it had been removed and new brokers added later, but the assignment recorded in ZooKeeper was never updated.
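
One way to confirm this is to look at ZooKeeper directly. A sketch, assuming the standalone ZooKeeper client (Kafka's bin/zookeeper-shell.sh works similarly) and no chroot prefix:

    bin/zkCli.sh -server localhost:2182
    ls /brokers/ids                          # the live broker ids; 152 no longer appears
    get /brokers/topics/__consumer_offsets   # the stored replica assignment still points at 152
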
So the fix was: shut down the brokers, open the ZooKeeper client, delete the __consumer_offsets node under /brokers/topics, and then restart the brokers, roughly as sketched below. Note that at this point __consumer_offsets does not exist in ZooKeeper yet; it is only recreated once a consumer starts up.
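
A sketch of those steps (older ZooKeeper shells use rmr for recursive deletion, newer ones use deleteall; the port matches the one used above):

    # stop all Kafka brokers first
    bin/zkCli.sh -server localhost:2182
    rmr /brokers/topics/__consumer_offsets   # 'deleteall' on newer ZooKeeper versions
    quit
    # restart the brokers; __consumer_offsets is recreated once a consumer connects
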
Checking __consumer_offsets again afterwards, the partitions were now evenly distributed across the three brokers:

 bin/kafka-topics.sh --zookeeper localhost:2182 --describe --topic __consumer_offsets
Topic:__consumer_offsets	PartitionCount:50	ReplicationFactor:3	Configs:segment.bytes=104857600,cleanup.policy=compact,compression.type=producer
	Topic: __consumer_offsets	Partition: 0	Leader: 420	Replicas: 420,421,422	Isr: 420,422,421
	Topic: __consumer_offsets	Partition: 1	Leader: 421	Replicas: 421,422,420	Isr: 422,420,421
	Topic: __consumer_offsets	Partition: 2	Leader: 422	Replicas: 422,420,421	Isr: 422,420,421
	Topic: __consumer_offsets	Partition: 3	Leader: 420	Replicas: 420,422,421	Isr: 420,422,421
	Topic: __consumer_offsets	Partition: 4	Leader: 421	Replicas: 421,420,422	Isr: 420,422,421
	Topic: __consumer_offsets	Partition: 5	Leader: 422	Replicas: 422,421,420	Isr: 422,420,421
	Topic: __consumer_offsets	Partition: 6	Leader: 420	Replicas: 420,421,422	Isr: 420,422,421
	Topic: __consumer_offsets	Partition: 7	Leader: 421	Replicas: 421,422,420	Isr: 422,420,421
	Topic: __consumer_offsets	Partition: 8	Leader: 422	Replicas: 422,420,421	Isr: 422,420,421
	Topic: __consumer_offsets	Partition: 9	Leader: 420	Replicas: 420,422,421	Isr: 420,422,421
	Topic: __consumer_offsets	Partition: 10	Leader: 421	Replicas: 421,420,422	Isr: 420,422,421
	Topic: __consumer_offsets	Partition: 11	Leader: 422	Replicas: 422,421,420	Isr: 422,420,421
	Topic: __consumer_offsets	Partition: 12	Leader: 420	Replicas: 420,421,422	Isr: 420,422,421
	Topic: __consumer_offsets	Partition: 13	Leader: 421	Replicas: 421,422,420	Isr: 422,420,421
	Topic: __consumer_offsets	Partition: 14	Leader: 422	Replicas: 422,420,421	Isr: 422,420,421
	Topic: __consumer_offsets	Partition: 15	Leader: 420	Replicas: 420,422,421	Isr: 420,422,421
	Topic: __consumer_offsets	Partition: 16	Leader: 421	Replicas: 421,420,422	Isr: 420,422,421
	Topic: __consumer_offsets	Partition: 17	Leader: 422	Replicas: 422,421,420	Isr: 422,420,421
	Topic: __consumer_offsets	Partition: 18	Leader: 420	Replicas: 420,421,422	Isr: 420,422,421
	Topic: __consumer_offsets	Partition: 19	Leader: 421	Replicas: 421,422,420	Isr: 422,420,421
	Topic: __consumer_offsets	Partition: 20	Leader: 422	Replicas: 422,420,421	Isr: 422,420,421
	Topic: __consumer_offsets	Partition: 21	Leader: 420	Replicas: 420,422,421	Isr: 420,422,421
	Topic: __consumer_offsets	Partition: 22	Leader: 421	Replicas: 421,420,422	Isr: 420,422,421
	Topic: __consumer_offsets	Partition: 23	Leader: 422	Replicas: 422,421,420	Isr: 422,420,421
	Topic: __consumer_offsets	Partition: 24	Leader: 420	Replicas: 420,421,422	Isr: 420,422,421
	Topic: __consumer_offsets	Partition: 25	Leader: 421	Replicas: 421,422,420	Isr: 422,420,421
	Topic: __consumer_offsets	Partition: 26	Leader: 422	Replicas: 422,420,421	Isr: 422,420,421
	Topic: __consumer_offsets	Partition: 27	Leader: 420	Replicas: 420,422,421	Isr: 420,422,421
	Topic: __consumer_offsets	Partition: 28	Leader: 421	Replicas: 421,420,422	Isr: 420,422,421
	Topic: __consumer_offsets	Partition: 29	Leader: 422	Replicas: 422,421,420	Isr: 422,420,421
	Topic: __consumer_offsets	Partition: 30	Leader: 420	Replicas: 420,421,422	Isr: 420,422,421
	Topic: __consumer_offsets	Partition: 31	Leader: 421	Replicas: 421,422,420	Isr: 422,420,421
	Topic: __consumer_offsets	Partition: 32	Leader: 422	Replicas: 422,420,421	Isr: 422,420,421
	Topic: __consumer_offsets	Partition: 33	Leader: 420	Replicas: 420,422,421	Isr: 420,422,421
	Topic: __consumer_offsets	Partition: 34	Leader: 421	Replicas: 421,420,422	Isr: 420,422,421
	Topic: __consumer_offsets	Partition: 35	Leader: 422	Replicas: 422,421,420	Isr: 422,420,421
	Topic: __consumer_offsets	Partition: 36	Leader: 420	Replicas: 420,421,422	Isr: 420,422,421
	Topic: __consumer_offsets	Partition: 37	Leader: 421	Replicas: 421,422,420	Isr: 422,420,421
	Topic: __consumer_offsets	Partition: 38	Leader: 422	Replicas: 422,420,421	Isr: 422,420,421
	Topic: __consumer_offsets	Partition: 39	Leader: 420	Replicas: 420,422,421	Isr: 420,422,421
	Topic: __consumer_offsets	Partition: 40	Leader: 421	Replicas: 421,420,422	Isr: 420,422,421
	Topic: __consumer_offsets	Partition: 41	Leader: 422	Replicas: 422,421,420	Isr: 422,420,421
	Topic: __consumer_offsets	Partition: 42	Leader: 420	Replicas: 420,421,422	Isr: 420,422,421
	Topic: __consumer_offsets	Partition: 43	Leader: 421	Replicas: 421,422,420	Isr: 422,420,421
	Topic: __consumer_offsets	Partition: 44	Leader: 422	Replicas: 422,420,421	Isr: 422,420,421
	Topic: __consumer_offsets	Partition: 45	Leader: 420	Replicas: 420,422,421	Isr: 420,422,421
	Topic: __consumer_offsets	Partition: 46	Leader: 421	Replicas: 421,420,422	Isr: 420,422,421
	Topic: __consumer_offsets	Partition: 47	Leader: 422	Replicas: 422,421,420	Isr: 422,420,421
	Topic: __consumer_offsets	Partition: 48	Leader: 420	Replicas: 420,421,422	Isr: 420,422,421
	Topic: __consumer_offsets	Partition: 49	Leader: 421	Replicas: 421,422,420	Isr: 422,420,421

After restarting the consumer application, it consumed data normally. Problem solved.
