Integrating Kafka with Spring Boot

Preparing the local Kafka environment

Installing ZooKeeper

  1. Kafka depends on ZooKeeper, so download and install ZooKeeper first.
    ZooKeeper homepage
    Download source: ZooKeeper download mirror

  2. Basic configuration:
    (1) In the /conf directory, rename the sample file zoo_sample.cfg to zoo.cfg.
    (2) Edit zoo.cfg and point dataDir= at a directory of your choice.

  3. Start: run zkServer.sh start from the /bin directory.
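For reference, a minimal single-node zoo.cfg might look like the following; the dataDir path is a placeholder, and tickTime/clientPort are the defaults shipped in zoo_sample.cfg:

```properties
# Basic time unit (ms) used for heartbeats and timeouts
tickTime=2000
# Where ZooKeeper stores its snapshots -- change to your own path
dataDir=D:/data/zookeeper
# Port that clients (including Kafka) connect to
clientPort=2181
# Optional: move the AdminServer off its default port 8080 if that port is taken
admin.serverPort=9876
```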

Problems encountered while installing ZooKeeper

  1. On startup, ZooKeeper binds port 8080 by default (the AdminServer), so startup fails if that port is already in use. Check the log files under /logs for details.

Installing Kafka

  1. Download: Kafka download mirror
  2. In the /config directory, edit server.properties and point log.dirs= at a directory of your choice.
  3. Start (on Windows), from /bin/windows/:
kafka-server-start.bat ../../config/server.properties
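Similarly, the lines of server.properties most relevant to a single local broker are sketched below; the path and addresses are assumptions for a typical local setup:

```properties
# Unique id of this broker within the cluster
broker.id=0
# Where Kafka stores its partition log segments -- change to your own path
log.dirs=D:/data/kafka-logs
# Address this broker listens on for clients
listeners=PLAINTEXT://localhost:9092
# ZooKeeper connection string; must match the ZooKeeper started above
zookeeper.connect=localhost:2181
```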

Problems encountered while installing Kafka

  1. The first startup attempt failed:
    (screenshot: startup failure)

  2. A web search suggests the classpath cannot be resolved. Edit kafka-run-class.bat and change:

set COMMAND=%JAVA% %KAFKA_HEAP_OPTS% %KAFKA_JVM_PERFORMANCE_OPTS% %KAFKA_JMX_OPTS% %KAFKA_LOG4J_OPTS% -cp %CLASSPATH% %KAFKA_OPTS% %*

    so that %CLASSPATH% is wrapped in double quotes: -cp "%CLASSPATH%"
  3. Starting again succeeds.
    (screenshot: successful startup)

Verifying Kafka locally

  1. Create a topic:
kafka-topics.bat --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
  2. Start a console producer:
kafka-console-producer.bat --broker-list localhost:9092 --topic test
  3. Start a console consumer:
kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic test --from-beginning
  4. Type a message in the producer; the consumer prints it:
    (screenshot: console output)

Spring Boot integration with Kafka

Adding the Kafka dependency

  1. spring-kafka and kafka-clients versions must be mutually compatible; see the official Spring version compatibility recommendations for details.

  2. Maven dependency:

     <dependency>
         <groupId>org.springframework.kafka</groupId>
         <artifactId>spring-kafka</artifactId>
     </dependency>
  3. Only the dependency above is needed. Adding an explicit kafka-clients dependency caused a runtime error, and removing it restored normal behavior (the local Kafka broker is 2.1.1); this was not investigated further.
     <dependency>
         <groupId>org.apache.kafka</groupId>
         <artifactId>kafka-clients</artifactId>
         <version>2.1.1</version>
     </dependency>
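Rather than declaring kafka-clients directly, Spring Boot's dependency management exposes a kafka.version property that can be overridden if a specific client version is truly required; note that spring-kafka 2.3.x is built against the 2.3.x clients, so pinning an older client such as 2.1.1 risks exactly this kind of runtime failure (the version below is only an illustration):

```xml
<properties>
    <!-- Overrides the kafka-clients version managed by the Spring Boot BOM -->
    <kafka.version>2.3.1</kafka.version>
</properties>
```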

The exception when kafka-clients 2.1.1 was added (the NoSuchMethodError shows that this client jar does not declare the Producer.close(java.time.Duration) overload that spring-kafka 2.3.5.RELEASE calls):

2020-03-23 16:51:02.152  INFO 26408 --- [ad | producer-1] org.apache.kafka.clients.Metadata        : Cluster ID: hGdpKkVBQDW3BUvzmG60eQ
2020-03-23 16:51:02.220 ERROR 26408 --- [ad | producer-1] o.apache.kafka.common.utils.KafkaThread  : Uncaught exception in thread 'kafka-producer-network-thread | producer-1':

java.lang.NoSuchMethodError: org.apache.kafka.clients.producer.Producer.close(Ljava/time/Duration;)V
	at org.springframework.kafka.core.KafkaTemplate.closeProducer(KafkaTemplate.java:382) ~[spring-kafka-2.3.5.RELEASE.jar:2.3.5.RELEASE]
	at org.springframework.kafka.core.KafkaTemplate.lambda$buildCallback$4(KafkaTemplate.java:433) ~[spring-kafka-2.3.5.RELEASE.jar:2.3.5.RELEASE]
	at org.apache.kafka.clients.producer.KafkaProducer$InterceptorCallback.onCompletion(KafkaProducer.java:1304) ~[kafka-clients-2.1.1.jar:na]
	at org.apache.kafka.clients.producer.internals.ProducerBatch.completeFutureAndFireCallbacks(ProducerBatch.java:227) ~[kafka-clients-2.1.1.jar:na]
	at org.apache.kafka.clients.producer.internals.ProducerBatch.done(ProducerBatch.java:196) ~[kafka-clients-2.1.1.jar:na]
	at org.apache.kafka.clients.producer.internals.Sender.completeBatch(Sender.java:677) ~[kafka-clients-2.1.1.jar:na]
	at org.apache.kafka.clients.producer.internals.Sender.completeBatch(Sender.java:649) ~[kafka-clients-2.1.1.jar:na]
	at org.apache.kafka.clients.producer.internals.Sender.handleProduceResponse(Sender.java:557) ~[kafka-clients-2.1.1.jar:na]
	at org.apache.kafka.clients.producer.internals.Sender.access$100(Sender.java:74) ~[kafka-clients-2.1.1.jar:na]
	at org.apache.kafka.clients.producer.internals.Sender$1.onComplete(Sender.java:786) ~[kafka-clients-2.1.1.jar:na]
	at org.apache.kafka.clients.ClientResponse.onComplete(ClientResponse.java:109) ~[kafka-clients-2.1.1.jar:na]
	at org.apache.kafka.clients.NetworkClient.completeResponses(NetworkClient.java:557) ~[kafka-clients-2.1.1.jar:na]
	at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:549) ~[kafka-clients-2.1.1.jar:na]
	at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:311) ~[kafka-clients-2.1.1.jar:na]
	at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:235) ~[kafka-clients-2.1.1.jar:na]
	at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_131]
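What a NoSuchMethodError like this means: spring-kafka 2.3.5 was compiled against a client whose Producer has a close(java.time.Duration) overload, but the kafka-clients 2.1.1 jar on the classpath does not declare it, so the call fails at link time while the app is running. A stdlib-only sketch of probing for such an overload via reflection (OverloadCheck is a hypothetical helper, not part of any Kafka API):

```java
import java.time.Duration;

public class OverloadCheck {
    // True if `cls` declares a public method `name` taking a single Duration.
    public static boolean hasDurationOverload(Class<?> cls, String name) {
        try {
            cls.getMethod(name, Duration.class);
            return true;
        } catch (NoSuchMethodException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Duration.plus(Duration) exists in the JDK; Thread has no
        // close(Duration), just as the 2.1.1 Producer has none.
        System.out.println(hasDurationOverload(Duration.class, "plus"));
        System.out.println(hasDurationOverload(Thread.class, "close"));
    }
}
```

The same idea is how a missing overload can be detected before a hard failure; spring-kafka itself simply assumes the newer client is present.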

Configuration (application.yml):

spring:
  kafka:
    producer:
      bootstrap-servers: 127.0.0.1:9092
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
    consumer:
      group-id: foo
      auto-offset-reset: earliest
      bootstrap-servers: localhost:9092

Producer class:

@RestController
public class SimpleSendController {

    @Autowired
    private KafkaTemplate<Object, Object> kafkaTemplate;

    @PostMapping("/send/{message}")
    public String send(@PathVariable String message) {
        kafkaTemplate.send("test", "topic1:" + message);
        return message;
    }
}

Consumer class:

@Component
@Slf4j
public class KafkaSimpleListener {
    @KafkaListener(topics = {"test"})
    public void listen(String data) {
        log.info("KafkaListener: " + data);
    }
}

Result:

    (screenshots: console consumer output and application log)

Code link

Link to the complete project from this article
