ERROR org.apache.kafka.clients.consumer.internals.ConsumerCoordinator: cause and solution!

 

Kafka version: kafka_2.12-2.3.0

The exact error:

2019-12-10 15:27:36.006[main] ERROR org.apache.kafka.clients.consumer.internals.ConsumerCoordinator[843] [Consumer clientId=consumer-1, groupId=01] Offset commit failed on partition test-0 at offset 175256: The coordinator is not aware of this member.
2019-12-10 15:27:36.010[main] WARN  org.apache.kafka.clients.consumer.internals.ConsumerCoordinator[737] [Consumer clientId=consumer-1, groupId=01] Asynchronous auto-commit of offsets {test-0=OffsetAndMetadata{offset=175256, metadata=''}} failed: Commit cannot be completed since the group has already rebalanced and assigned the partitions to another member. This means that the time between subsequent calls to poll() was longer than the configured max.poll.interval.ms, which typically implies that the poll loop is spending too much time message processing. You can address this either by increasing the session timeout or by reducing the maximum size of batches returned in poll() with max.poll.records.
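The warning itself points at the poll settings: if the rebalance really were caused by slow processing between poll() calls, the consumer properties it names could be tuned, for example as below (illustrative values only; the actual root cause in this case turns out to be different, as explained next).

// Settings named in the warning above (illustrative values, not recommendations)
// Allow more time between poll() calls before the consumer is kicked out of the group
props.put("max.poll.interval.ms", "600000");
// Return fewer records per poll() so each batch is processed faster
props.put("max.poll.records", "100");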
Cause:

Note that it isn't possible to mix manual partition assignment (i.e. using assign) with dynamic partition assignment through topic subscription (i.e. using subscribe).

In other words: manual partition assignment (i.e. assign) and dynamic partition assignment through topic subscription (i.e. subscribe) cannot be mixed.

Reproducing the error:

First, start a consumer that uses assign, configured as follows:

import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

Properties props = new Properties();
props.put("bootstrap.servers", "10.20.87.23:9092");
props.put("group.id", "01");
props.put("enable.auto.commit", "true");
props.put("auto.commit.interval.ms", "1000");
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
// Manually assign partition 0 of topic "test" to this consumer
String topic = "test";
TopicPartition partition0 = new TopicPartition(topic, 0);
consumer.assign(Arrays.asList(partition0));

Then start a second consumer that uses subscribe, configured as follows:

import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

Properties props = new Properties();
props.put("bootstrap.servers", "10.20.87.23:9092");
props.put("group.id", "01");
props.put("enable.auto.commit", "true");
props.put("auto.commit.interval.ms", "1000");
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
// Subscribe to the same topic with the same group.id, letting the coordinator assign partitions
consumer.subscribe(Arrays.asList("test"));
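Both snippets omit the consumption loop; a minimal sketch of the poll loop either consumer would run (not part of the original post) looks like this:

// Minimal poll loop (a sketch; additionally requires java.time.Duration,
// ConsumerRecord and ConsumerRecords imports)
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
    for (ConsumerRecord<String, String> record : records) {
        System.out.printf("offset=%d, key=%s, value=%s%n", record.offset(), record.key(), record.value());
    }
}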

As soon as the second consumer starts, the error above appears. Both consumers keep running normally, but each of them consumes all of partition 0.

Solution:

Change the second consumer's group.id to 02 (that is, make it different from the group.id of the first, assign-based consumer), then restart it and the error disappears.
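Concretely, only one line of the second consumer's configuration changes:

// Second consumer: use a group.id different from the assign-based consumer's "01"
props.put("group.id", "02");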
