Notes on Integrating Kafka with Spring Boot
Kafka quickstart guide: http://kafka.apache.org/quickstart
The current stable version of spring-kafka is 1.2.0.RELEASE: http://docs.spring.io/spring-kafka/docs/1.2.0.RELEASE/reference/html/_introduction.html
1. First download Kafka, extract it, then start ZooKeeper and Kafka:
cd kafka_2.11-0.10.2.0
bin/zookeeper-server-start.sh config/zookeeper.properties
bin/kafka-server-start.sh config/server.properties
Testing showed that the Java program automatically creates the topic when it sends a message, so there is no need to create topics separately with a command. (If you publish with the console producer, however, the topic must be created first using the command-line tools.)
2. Follow the spring-kafka integration documentation.
1) Add the dependencies to pom.xml:
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.10.2.0</version>
</dependency>
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <version>1.2.0.RELEASE</version>
</dependency>
2) Configuration. With Spring Boot 1.5.1 you can configure everything directly in application.yml; the official documentation says only two items are required:
spring:
  kafka:
    consumer:
      group-id: foo
      auto-offset-reset: earliest

group-id is the consumer group the client belongs to (note that if a group has more members than the topic has partitions, the extra members in that group will never receive any messages). Setting auto-offset-reset to earliest lets the consumer read messages that existed before it joined (so it is not actually a required setting).
A more complete configuration can also go in the properties file, but the set of keys supported there is limited; if you need full control, it is better to configure beans directly.
spring:
  kafka:
    producer:
      retries: 0
      batch-size: 16384
      buffer-memory: 33554432
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
      bootstrap-servers: localhost:9092
    consumer:
      bootstrap-servers: localhost:9092
      group-id: foo
      auto-offset-reset: earliest
      enable-auto-commit: true
      auto-commit-interval: 100
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
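As a sketch of the bean-based alternative mentioned above (assuming spring-kafka 1.2.0; the class name KafkaProducerConfig is hypothetical, and the values mirror the YAML producer settings):

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

// Hypothetical bean-based producer configuration; mirrors the YAML above
// but gives access to every producer property, not just the supported keys.
@Configuration
public class KafkaProducerConfig {

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.RETRIES_CONFIG, 0);
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, 16384);
        props.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 33554432);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        return new DefaultKafkaProducerFactory<>(props);
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}
```

A consumer-side ConsumerFactory / listener container factory can be configured the same way.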
3. The Java part is simply the code from the spring-kafka documentation pasted in; you can run the producer and the consumer together in the same application.
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

import lombok.extern.slf4j.Slf4j;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.builder.SpringApplicationBuilder;
import org.springframework.boot.web.support.SpringBootServletInitializer;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;

@SpringBootApplication
@Slf4j
public class MyApplication extends SpringBootServletInitializer implements CommandLineRunner {

    @Override
    protected SpringApplicationBuilder configure(SpringApplicationBuilder application) {
        return application.sources(MyApplication.class);
    }

    public static void main(String[] args) {
        SpringApplication.run(MyApplication.class, args);
    }

    @Autowired
    private KafkaTemplate<String, String> template;

    private final CountDownLatch latch = new CountDownLatch(4);

    @Override
    public void run(String... args) throws Exception {
        this.template.send("myTopic", "foo1");
        this.template.send("myTopic", "foo2");
        this.template.send("myTopic", "foo3");
        this.template.send("myTopic", "hi", "foo4");
        this.template.send("myTopic2", "2", "foo5");
        latch.await(60, TimeUnit.SECONDS);
        log.info("All received");
    }

    @KafkaListener(topics = "myTopic")
    public void listen(ConsumerRecord<?, ?> cr) throws Exception {
        log.info(cr.toString());
        latch.countDown();
    }
}
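The latch in the demo is plain java.util.concurrent synchronization: run() blocks until the listener has counted down the four messages sent to myTopic (or 60 seconds elapse). A minimal stand-alone sketch of that pattern, independent of Kafka (class and method names are invented for illustration):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class LatchDemo {

    // Returns true if all 4 simulated "messages" are counted down before the timeout.
    static boolean waitForMessages() {
        CountDownLatch latch = new CountDownLatch(4);

        // Simulate a listener receiving 4 messages on another thread.
        Thread listener = new Thread(() -> {
            for (int i = 0; i < 4; i++) {
                latch.countDown();
            }
        });
        listener.start();

        try {
            // Blocks until the count reaches zero, or 5 seconds pass.
            return latch.await(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(waitForMessages());  // prints true
    }
}
```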
As you can see, sending a message is very simple: autowire a KafkaTemplate, then call template.send(topic, data) or template.send(topic, key, data).
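Because sends are asynchronous, send() also returns a ListenableFuture, so delivery success or failure can be observed. A sketch, assuming the spring-kafka 1.2.0 API (the wrapper class SendWithCallback is invented for illustration):

```java
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.SendResult;
import org.springframework.util.concurrent.ListenableFuture;
import org.springframework.util.concurrent.ListenableFutureCallback;

// Hypothetical helper showing how to observe the result of an asynchronous send.
public class SendWithCallback {

    private final KafkaTemplate<String, String> template;

    public SendWithCallback(KafkaTemplate<String, String> template) {
        this.template = template;
    }

    public void sendAndLog(String topic, String data) {
        ListenableFuture<SendResult<String, String>> future = template.send(topic, data);
        future.addCallback(new ListenableFutureCallback<SendResult<String, String>>() {
            @Override
            public void onSuccess(SendResult<String, String> result) {
                // RecordMetadata carries the partition and offset the broker assigned.
                System.out.println("sent to offset " + result.getRecordMetadata().offset());
            }

            @Override
            public void onFailure(Throwable ex) {
                System.err.println("send failed: " + ex.getMessage());
            }
        });
    }
}
```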
Consuming messages on the client side is just as simple: annotate a method with @KafkaListener and set the topics to listen on. topics is an array, so one listener can be bound to several topics; changing the code above to @KafkaListener(topics = {"myTopic","myTopic2"}) listens on both topics at once. Note, however, that a topic must already exist before you start listening on it.
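For example, the listener could be pulled into its own component and bound to both topics, with the record's topic, key, and value logged (a sketch using the same API as the demo; the class name MultiTopicListener is invented):

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

// Hypothetical listener bound to two topics at once via the topics array.
@Component
public class MultiTopicListener {

    @KafkaListener(topics = {"myTopic", "myTopic2"})
    public void listen(ConsumerRecord<?, ?> cr) {
        System.out.println("topic=" + cr.topic()
                + " key=" + cr.key()
                + " value=" + cr.value());
    }
}
```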
4. The examples above create the producer and consumer in code; you can also send and receive messages from a terminal:
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic myTopic --from-beginning
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic myTopic
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic myTopic2
Messages sent from the console producer window are received both in the console consumer window and by the Spring Boot application. Note again that a topic used by the console producer must be created with the command-line tools first; but since the Java program above has already sent messages to myTopic and myTopic2, both topics were created by it, so no manual creation is needed here.
Demo source code: http://git.oschina.net/liufang1991/kafkademo