Operations with @KafkaListener

  • With @KafkaListener you can customize batch consumption and multi-threaded consumption: define your own factory beans that build different listener containers, as shown below.

Multi-threaded vs. single-threaded consumption

@KafkaListener(
        id = "concurrencyConsumer",
        topics = "#{'${kafka.listener.multiple.partition.topic}'.split(',')}",
        containerFactory = "ackConcurrencyContainerFactory")
public void consumerListener(List<ConsumerRecord> consumerRecords, Acknowledgment ack) {
    LogRecord.handle(consumerRecords, ack);
}

@KafkaListener(
        id = "singleConsumer",
        topics = "#{'${kafka.listener.single.partition.topic}'.split(',')}",
        containerFactory = "ackSingleContainerFactory")
public void inputPersonfileNewCluster(List<ConsumerRecord> consumerRecords, Acknowledgment ack) {
    LogRecord.handle(consumerRecords, ack);
}
    
  • Here, id is user-defined and topics comes from the configuration file. The only difference between the two listeners is the containerFactory attribute: it holds the name of a custom factory bean that creates the listener container.
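The topics attribute above uses a SpEL expression to split a comma-separated property into a list of topic names. A hypothetical configuration fragment matching the property keys used in the listeners (the topic names test1 and test are the ones created later in this post) might look like:

```properties
# comma-separated list of topics for the concurrent listener
kafka.listener.multiple.partition.topic=test1
# topic(s) for the single-threaded listener
kafka.listener.single.partition.topic=test
```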

The container factory: ConcurrentKafkaListenerContainerFactory

  • The bean below is the factory that creates the concurrency-capable listener container. The key line is factory.setConcurrency(concurrency): it sets the concurrency level, which must not exceed the topic's partition count, otherwise the surplus consumer threads will never receive any messages.
@Bean("ackConcurrencyContainerFactory")
public ConcurrentKafkaListenerContainerFactory ackContainerFactory() {
    ConcurrentKafkaListenerContainerFactory factory = new ConcurrentKafkaListenerContainerFactory();
    factory.setConsumerFactory(new DefaultKafkaConsumerFactory(consumerProps()));
    factory.getContainerProperties().setAckMode(AbstractMessageListenerContainer.AckMode.MANUAL_IMMEDIATE);
    factory.setBatchListener(true);
    factory.setConcurrency(concurrency);
    return factory;
}
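The factory above calls a consumerProps() method that the post doesn't show. A minimal sketch of what it could return, as plain Java with string property keys (the broker address, group id, and max.poll.records value are placeholders, not from the original post; enable.auto.commit must be false because the factory uses MANUAL_IMMEDIATE acknowledgment):

```java
import java.util.HashMap;
import java.util.Map;

public class ConsumerProps {

    // Sketch of the consumerProps() referenced by the container factory.
    public static Map<String, Object> consumerProps() {
        Map<String, Object> props = new HashMap<>();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder broker address
        props.put("group.id", "test-group");                // placeholder group id
        props.put("enable.auto.commit", "false");           // required: offsets are acked manually
        props.put("max.poll.records", "500");               // upper bound on records per poll (batch size)
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(consumerProps().get("enable.auto.commit"));
    }
}
```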
  • Create a topic test1 with 4 partitions and a topic test with a single partition, set the concurrency to 4, and start the application. The logs look like this:
INFO|2019-04-17 21:53:04.079|[singleConsumer-0-L-1     ]|c.i.m.c.LogRecord.logRecode.  23|消费数据: topic = test,partition = 0, offset = 0,value = fo": "{\"c,Time: Wed Apr 17 21:53:04 CST 2019
INFO|2019-04-17 21:53:04.080|[concurrencyConsumer-0-L-1]|c.i.m.c.LogRecord.logRecode.  23|消费数据: topic = test1,partition = 0, offset = 0,value = fo": "{\"c,Time: Wed Apr 17 21:53:04 CST 2019
INFO|2019-04-17 21:53:04.137|[concurrencyConsumer-2-L-1]|c.i.m.c.LogRecord.logRecode.  23|消费数据: topic = test1,partition = 2, offset = 0,value = fo": "{\"c,Time: Wed Apr 17 21:53:04 CST 2019
INFO|2019-04-17 21:53:04.153|[concurrencyConsumer-3-L-1]|c.i.m.c.LogRecord.logRecode.  23|消费数据: topic = test1,partition = 3, offset = 0,value = fo": "{\"c,Time: Wed Apr 17 21:53:04 CST 2019
INFO|2019-04-17 21:53:04.168|[concurrencyConsumer-1-L-1]|c.i.m.c.LogRecord.logRecode.  23|消费数据: topic = test1,partition = 1, offset = 0,value = fo": "{\"c,Time: Wed Apr 17 21:53:04 CST 2019
INFO|2019-04-17 21:53:07.848|[singleConsumer-0-L-1     ]|c.i.m.c.LogRecord.logRecode.  23|消费数据: topic = test,partition = 0, offset = 10000,value = fo": "{\"c,Time: Wed Apr 17 21:53:07 CST 2019
INFO|2019-04-17 21:53:07.914|[concurrencyConsumer-0-L-1]|c.i.m.c.LogRecord.logRecode.  23|消费数据: topic = test1,partition = 0, offset = 10000,value = fo": "{\"c,Time: Wed Apr 17 21:53:07 CST 2019
INFO|2019-04-17 21:53:07.962|[concurrencyConsumer-2-L-1]|c.i.m.c.LogRecord.logRecode.  23|消费数据: topic = test1,partition = 2, offset = 10000,value = fo": "{\"c,Time: Wed Apr 17 21:53:07 CST 2019
INFO|2019-04-17 21:53:08.200|[concurrencyConsumer-3-L-1]|c.i.m.c.LogRecord.logRecode.  23|消费数据: topic = test1,partition = 3, offset = 10000,value = fo": "{\"c,Time: Wed Apr 17 21:53:08 CST 2019
INFO|2019-04-17 21:53:08.506|[concurrencyConsumer-1-L-1]|c.i.m.c.LogRecord.logRecode.  23|消费数据: topic = test1,partition = 1, offset = 10000,value = fo": "{\"c,Time: Wed Apr 17 21:53:08 CST 2019
INFO|2019-04-17 21:53:11.559|[concurrencyConsumer-0-L-1]|c.i.m.c.LogRecord.logRecode.  23|消费数据: topic = test1,partition = 0, offset = 20000,value = fo": "{\"c,Time: Wed Apr 17 21:53:11 CST 2019
INFO|2019-04-17 21:53:11.618|[concurrencyConsumer-2-L-1]|c.i.m.c.LogRecord.logRecode.  23|消费数据: topic = test1,partition = 2, offset = 20000,value = fo": "{\"c,Time: Wed Apr 17 21:53:11 CST 2019
INFO|2019-04-17 21:53:11.862|[concurrencyConsumer-3-L-1]|c.i.m.c.LogRecord.logRecode.  23|消费数据: topic = test1,partition = 3, offset = 20000,value = fo": "{\"c,Time: Wed Apr 17 21:53:11 CST 2019
INFO|2019-04-17 21:53:12.131|[concurrencyConsumer-1-L-1]|c.i.m.c.LogRecord.logRecode.  23|消费数据: topic = test1,partition = 1, offset = 20000,value = fo": "{\"c,Time: Wed Apr 17 21:53:12 CST 2019
INFO|2019-04-17 21:53:13.311|[singleConsumer-0-L-1     ]|c.i.m.c.LogRecord.logRecode.  23|消费数据: topic = test,partition = 0, offset = 20000,value = fo": "{\"c,Time: Wed Apr 17 21:53:13 CST 2019
INFO|2019-04-17 21:53:14.986|[concurrencyConsumer-0-L-1]|c.i.m.c.LogRecord.logRecode.  23|消费数据: topic = test1,partition = 0, offset = 30000,value = fo": "{\"c,Time: Wed Apr 17 21:53:14 CST 2019
INFO|2019-04-17 21:53:15.047|[concurrencyConsumer-2-L-1]|c.i.m.c.LogRecord.logRecode.  23|消费数据: topic = test1,partition = 2, offset = 30000,value = fo": "{\"c,Time: Wed Apr 17 21:53:15 CST 2019
INFO|2019-04-17 21:53:15.256|[concurrencyConsumer-3-L-1]|c.i.m.c.LogRecord.logRecode.  23|消费数据: topic = test1,partition = 3, offset = 30000,value = fo": "{\"c,Time: Wed Apr 17 21:53:15 CST 2019
INFO|2019-04-17 21:53:15.676|[concurrencyConsumer-1-L-1]|c.i.m.c.LogRecord.logRecode.  23|消费数据: topic = test1,partition = 1, offset = 30000,value = fo": "{\"c,Time: Wed Apr 17 21:53:15 CST 2019
INFO|2019-04-17 21:53:17.396|[singleConsumer-0-L-1     ]|c.i.m.c.LogRecord.logRecode.  23|消费数据: topic = test,partition = 0, offset = 30000,value = fo": "{\"c,Time: Wed Apr 17 21:53:17 CST 2019
INFO|2019-04-17 21:53:19.376|[concurrencyConsumer-0-L-1]|c.i.m.c.LogRecord.logRecode.  23|消费数据: topic = test1,partition = 0, offset = 40000,value = fo": "{\"c,Time: Wed Apr 17 21:53:19 CST 2019
INFO|2019-04-17 21:53:19.586|[concurrencyConsumer-2-L-1]|c.i.m.c.LogRecord.logRecode.  23|消费数据: topic = test1,partition = 2, offset = 40000,value = fo": "{\"c,Time: Wed Apr 17 21:53:19 CST 2019
INFO|2019-04-17 21:53:19.708|[concurrencyConsumer-3-L-1]|c.i.m.c.LogRecord.logRecode.  23|消费数据: topic = test1,partition = 3, offset = 40000,value = fo": "{\"c,Time: Wed Apr 17 21:53:19 CST 2019
INFO|2019-04-17 21:53:20.099|[concurrencyConsumer-1-L-1]|c.i.m.c.LogRecord.logRecode.  23|消费数据: topic = test1,partition = 1, offset = 40000,value = fo": "{\"c,Time: Wed Apr 17 21:53:20 CST 2019
INFO|2019-04-17 21:53:21.908|[singleConsumer-0-L-1     ]|c.i.m.c.LogRecord.logRecode.  23|消费数据: topic = test,partition = 0, offset = 40000,value = fo": "{\"c,Time: Wed Apr 17 21:53:21 CST 2019
  • As the logs show, the concurrent consumer group for topic test1 runs four threads (the printed thread names differ), while topic test is consumed by a single thread. Consumption is fast, so a log line is printed only once every 10,000 records.
  • I also verified that with the concurrency set to 6, consumer threads 5 and 6 never print any consumption logs. Earlier in the output, however, the lines below appear: both consumers join the consumer group but never consume anything afterwards. Note that the thread names and clientIds are both numbered incrementally:
INFO|2019-04-17 21:55:04.030|[concurrencyConsumer-4-C-1]|o.a.k.c.c.i.AbstractCoordinator.sendJoinGroupRequest. 486|[Consumer clientId=consumer-5, groupId=test-group-xn-03] (Re-)joining group
INFO|2019-04-17 21:55:04.030|[concurrencyConsumer-5-C-1]|o.a.k.c.c.i.AbstractCoordinator.sendJoinGroupRequest. 486|[Consumer clientId=consumer-6, groupId=test-group-xn-03] (Re-)joining group

Batch consumption vs. single-record consumption

  • The setup is similar, so I won't repeat it: call factory.setBatchListener(true) and also set the consumer property max.poll.records, which caps the number of records returned per poll when consuming in batches.
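A minimal sketch of such a batch-enabled factory bean, assuming the same spring-kafka API as the factory shown earlier (the bean name and the max.poll.records value of 500 are assumptions, not from the original post):

```java
// Sketch only: bean name and max.poll.records value are placeholders.
@Bean("batchContainerFactory")
public ConcurrentKafkaListenerContainerFactory<String, String> batchContainerFactory() {
    Map<String, Object> props = consumerProps();
    // Cap the number of records a single poll() returns, i.e. the maximum
    // size of the List<ConsumerRecord> handed to the batch listener.
    props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 500);

    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(new DefaultKafkaConsumerFactory<>(props));
    factory.setBatchListener(true); // deliver records to the listener as a batch
    return factory;
}
```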

Code

https://gitee.com/mozping/total-learn/tree/master/mozping-msg-queue/msg-queue-consumer-test

