The sample project is built with Gradle and depends on spring-cloud-starter-stream-kafka:2.0.2.RELEASE.
In the configuration file: for the output channel outputChannel, define the producer's partition key and partition count.
For the input channel inputChannel, define whether the consumer has partitioning enabled, and its consumer group.
For the common kafka.binder settings, define the minimum partition count (min-partition-count) and automatic partition creation (auto-add-partitions), plus the instance count (instance-count) and the instance index (instance-index, which must be less than the instance count).
Other settings, such as binders, define the Kafka connection information.
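The settings described above can be sketched in an application.yml like the following. This is a minimal sketch, not the repository's actual file: the channel names match this project, but the group name, broker address, and partition-key expression are assumed values.

```yaml
spring:
  cloud:
    stream:
      # Total number of app instances and this instance's index (must be < instance-count)
      instance-count: 2
      instance-index: 0
      bindings:
        outputChannel:
          destination: inputTopic
          producer:
            # SpEL expression used to derive the partition key from each message (assumed)
            partition-key-expression: payload
            partition-count: 2
        inputChannel:
          destination: inputTopic
          group: myGroup            # consumer group name (assumed)
          consumer:
            partitioned: true       # enable partition-aware consumption
      kafka:
        binder:
          brokers: localhost:9092   # Kafka connection info (assumed)
          min-partition-count: 2
          auto-add-partitions: true
```

With partitioned: true on the consumer side, each instance only consumes the partitions that match its instance-index, which is what makes the multi-instance demo below work.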
Project structure: the channel package defines the input channel InboundChannel and the output channel OutboundChannel, bound to inputChannel and outputChannel respectively.
The service package defines the message-listening service ReceiverService and the message-sending service SenderService, wired to InboundChannel and OutboundChannel respectively.
The controller package defines the message-sending endpoint "/send"?msg=xxx, which calls SenderService and ultimately produces messages to the Kafka topic inputTopic.
Source code: https://github.com/stringhuang/test-mq-partition.git
Build: /usr/local/Cellar/gradle/5.6.1/bin/gradle clean build -x test
Run the first instance on port 8943: java -jar build/libs/test-mq-partition-0.0.1-SNAPSHOT.jar --server.port=8943 --spring.cloud.stream.instance-index=0
We find that both partitions are assigned to this one instance (inputTopic-1, inputTopic-0):
2019-12-16 09:17:15.812 INFO 77976 --- [container-0-C-1] o.s.c.s.b.k.KafkaMessageChannelBinder$1 : partitions assigned: [inputTopic-1, inputTopic-0]
Next, run a second instance on port 8944: java -jar build/libs/test-mq-partition-0.0.1-SNAPSHOT.jar --server.port=8944 --spring.cloud.stream.instance-index=1
We find that for the instance on port 8943, the partitions are first revoked and then reassigned: inputTopic-1.
The instance on port 8944 is assigned partition inputTopic-0:
2019-12-16 09:17:24.821 INFO 77978 --- [container-0-C-1] o.s.c.s.b.k.KafkaMessageChannelBinder$1 : partitions assigned: [inputTopic-0]
Call the "/send"?msg=xxx endpoint to send two messages:
curl http://localhost:8943/send/?msg=daluo
curl http://localhost:8943/send/?msg=henry
The logs show that the instance on port 8943 consumes the message "daluo", while the instance on port 8944 consumes "henry":
2019-12-16 09:20:52.488 INFO 77976 --- [container-0-C-1] c.z.t.service.ReceiverService : ReceiverClass in Instance 8943 has received message: daluo
2019-12-16 09:20:55.409 INFO 77978 --- [container-0-C-1] c.z.t.service.ReceiverService : ReceiverClass in Instance 8944 has received message: henry
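Why do "daluo" and "henry" land on different partitions? By default, Spring Cloud Stream hashes the computed partition key and takes it modulo the partition count. The plain-Java sketch below illustrates that idea; it is not the framework's actual code, and it assumes the message payload string itself serves as the partition key.

```java
// Minimal sketch of default-style partition selection: hash the partition key
// and take it modulo the partition count. Illustration only, not framework code.
public class PartitionSelectSketch {
    static int selectPartition(Object key, int partitionCount) {
        // Math.abs guards against negative hash codes
        return Math.abs(key.hashCode()) % partitionCount;
    }

    public static void main(String[] args) {
        System.out.println("daluo -> partition " + selectPartition("daluo", 2)); // partition 1
        System.out.println("henry -> partition " + selectPartition("henry", 2)); // partition 0
    }
}
```

Under this scheme, "daluo" hashes to partition 1 and "henry" to partition 0, which lines up with the log output above: the instance holding inputTopic-1 received "daluo" and the one holding inputTopic-0 received "henry".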
With that, we have grouped, partitioned consumption of Kafka!
When a single consumer instance cannot keep up, you can follow this example and deploy multiple consumer instances to avoid a backlog of Kafka messages.
If we change partition-count: in the configuration from 2 to 4 and run the program again, we find:
The instance on port 8943 is assigned partitions inputTopic-3 and inputTopic-2:
2019-12-16 09:25:49.197 INFO 78071 --- [container-0-C-1] o.s.c.s.b.k.KafkaMessageChannelBinder$1 : partitions assigned: [inputTopic-3, inputTopic-2]
The instance on port 8944 is assigned partitions inputTopic-1 and inputTopic-0:
2019-12-16 09:25:49.202 INFO 78074 --- [container-0-C-1] o.s.c.s.b.k.KafkaMessageChannelBinder$1 : partitions assigned: [inputTopic-1, inputTopic-0]
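The split into [3, 2] and [1, 0] is consistent with range-style assignment (the spirit of Kafka's default RangeAssignor): partitions are divided into contiguous blocks, one block per consumer. The sketch below is an illustration of that idea, not Kafka's actual assignor; note that which consumer receives which block depends on member ordering inside the group, which is why the instance-index-0 process can end up with [2, 3] rather than [0, 1].

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of range-style partition assignment: split N partitions into
// contiguous blocks, one block per consumer. Illustration only.
public class RangeAssignSketch {
    static List<List<Integer>> rangeAssign(int partitions, int consumers) {
        List<List<Integer>> result = new ArrayList<>();
        int base = partitions / consumers;   // minimum block size per consumer
        int extra = partitions % consumers;  // first `extra` consumers get one more
        int next = 0;
        for (int c = 0; c < consumers; c++) {
            int count = base + (c < extra ? 1 : 0);
            List<Integer> block = new ArrayList<>();
            for (int i = 0; i < count; i++) block.add(next++);
            result.add(block);
        }
        return result;
    }

    public static void main(String[] args) {
        // 4 partitions over 2 consumers -> two contiguous blocks of 2
        System.out.println(rangeAssign(4, 2));
    }
}
```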
In the Kafka installation directory, running "./bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic inputTopic" shows that the partition count of topic inputTopic has automatically grown from 2 to 4.
So spring-cloud-starter-stream-kafka is quite powerful: with just a few lines of configuration, it gives us grouped, partitioned message consumption in a distributed architecture!