Starting Kafka and quickly integrating it with Spring Boot

I share Java-related tips from time to time — a little progress every day. Feel free to follow along and discuss!

Getting started:

First, start Kafka on Windows. The steps are as follows.

Download the Kafka and ZooKeeper packages (download-page screenshots omitted).

In the ZooKeeper configuration, change the data directory shown in the screenshot to the path you configured, then edit the file as shown.

Starting ZooKeeper:

In the conf directory, copy the sample configuration file, rename the copy to zoo.cfg, and then start ZooKeeper.
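For reference, the renamed zoo.cfg only needs a few lines for a standalone setup; the dataDir below is an assumed example path — substitute your own:

```properties
# minimal standalone ZooKeeper configuration (example values)
tickTime=2000
dataDir=C:/zookeeper/data
clientPort=2181
```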

When starting Kafka, I could not get it to launch by double-clicking the start script on my machine, so I started it from a command window instead:

.\bin\windows\kafka-server-start.bat .\config\server.properties

Run this from the Kafka installation directory itself; there is no need to cd into bin first. (Screenshot of the broker starting up omitted.)
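Putting the two startup steps together, the command sequence on Windows looks roughly like this (run each from the respective installation's root directory; exact script names can vary by version, and with auto topic creation enabled — the default — the last step is optional):

```shell
:: 1. start ZooKeeper first (script lives in the ZooKeeper bin directory)
bin\zkServer.cmd

:: 2. then start the Kafka broker
.\bin\windows\kafka-server-start.bat .\config\server.properties

:: 3. optionally pre-create the topic used below
::    (newer Kafka versions use --bootstrap-server localhost:9092 instead of --zookeeper)
.\bin\windows\kafka-topics.bat --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic userLog
```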

Next, integrate with Spring Boot (project-setup screenshots omitted).

After starting the application, you can see the messages sent during initialization being consumed, as in the console screenshot.

The code in detail:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.1.5.RELEASE</version>
        <relativePath/>
    </parent>
    <groupId>com.cxy</groupId>
    <artifactId>skafka</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>skafka</name>
    <description>Demo project for Spring Boot</description>

    <properties>
        <java.version>1.8</java.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.kafka</groupId>
            <artifactId>spring-kafka</artifactId>
        </dependency>
        <dependency>
            <groupId>com.alibaba</groupId>
            <artifactId>fastjson</artifactId>
            <version>1.2.56</version>
        </dependency>
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <optional>true</optional>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework.kafka</groupId>
            <artifactId>spring-kafka-test</artifactId>
            <scope>test</scope>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
</project>

Startup class:

package com.cxy.skafka;

import com.cxy.skafka.component.UserLogProducer;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

import javax.annotation.PostConstruct;

@SpringBootApplication
public class SkafkaApplication {

    @Autowired
    private UserLogProducer userLogProducer;

    public static void main(String[] args) {
        SpringApplication.run(SkafkaApplication.class, args);
    }

    // send ten test messages once the application context is up
    @PostConstruct
    public void init() {
        for (int i = 0; i < 10; i++) {
            userLogProducer.sendlog(String.valueOf(i));
        }
    }
}

Model:

package com.cxy.skafka.model;

import lombok.Data;
import lombok.experimental.Accessors;

/**
 * @ClassName: Userlog
 * @Auther: cxy
 * @Date: 2020/11/1 16:47
 * @version: V1.0
 */
@Data
@Accessors
public class Userlog {
    private String username;
    private String userid;
    private String state;
}

Producer:
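The UserLogProducer class itself did not survive in the original post (the properties file appears in its place). A minimal sketch, consistent with how the startup class calls it (userLogProducer.sendlog(...)) and with the Userlog model above, could look like the following — the payload field values are illustrative assumptions, not the author's original code:

```java
package com.cxy.skafka.component;

import com.alibaba.fastjson.JSON;
import com.cxy.skafka.model.Userlog;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

@Component
public class UserLogProducer {

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    // serialize a Userlog to JSON and publish it to the "userLog" topic
    public void sendlog(String userid) {
        Userlog userlog = new Userlog();
        userlog.setUsername("user-" + userid);   // assumed example value
        userlog.setUserid(userid);
        userlog.setState("0");                   // assumed example value
        kafkaTemplate.send("userLog", JSON.toJSONString(userlog));
    }
}
```

KafkaTemplate is auto-configured by spring-kafka from the spring.kafka.producer.* properties below, so no extra configuration class is needed for this to work.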

Consumer:

package com.cxy.skafka.component;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

import java.util.Optional;

/**
 * @ClassName: UserLogConsumer
 * @Auther: cxy
 * @Date: 2020/11/1 16:54
 * @version: V1.0
 */
@Component
public class UserLogConsumer {

    @KafkaListener(topics = {"userLog"})
    public void consumer(ConsumerRecord<?, ?> consumerRecord) {
        // the record value may be null (e.g. a tombstone record), so guard with Optional
        Optional<?> kafkaMsg = Optional.ofNullable(consumerRecord.value());
        if (kafkaMsg.isPresent()) {
            Object msg = kafkaMsg.get();
            System.err.println(msg);
        }
    }
}

Configuration file:

server.port=8080
# Kafka broker address
spring.kafka.bootstrap-servers=localhost:9092
# number of retries when a send fails
spring.kafka.producer.retries=0
# batch size per send, in bytes
spring.kafka.producer.batch-size=16384
# total producer send buffer, in bytes
spring.kafka.producer.buffer-memory=335554432
# key and value serializers for outgoing messages
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer
#=============== consumer =======================
# default consumer group id
spring.kafka.consumer.group-id=user-log-group
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.enable-auto-commit=true
spring.kafka.consumer.auto-commit-interval=100
# key and value deserializers for incoming messages
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer

Once the application starts, you get the behavior shown above.
