Integrating Kafka with Spring Boot on Windows

(Installing kafka-manager on Windows would provide a web UI for management; not covered here.)

Offset Explorer, the Kafka GUI client formerly known as Kafka Tool: basic usage

For a ZooKeeper GUI client, see the ZooViewer tutorial on CSDN (reposted).

1 Install ZooKeeper

1.1 Download the archive:

      https://archive.apache.org/dist/zookeeper/zookeeper-3.4.13/zookeeper-3.4.13.tar.gz

1.2 Extract it to E:\zookeeper-3.4.13\zookeeper-3.4.13

1.3 In zookeeper-3.4.13\conf, rename zoo_sample.cfg to zoo.cfg

1.4 Create data and log folders under the extracted directory, then open zoo.cfg and set the following paths (use your own extraction path)

1.5 dataDir=E:\\zookeeper-3.4.13\\zookeeper-3.4.13\\data

       dataLogDir=E:\\zookeeper-3.4.13\\zookeeper-3.4.13\\log
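With the two folders from step 1.4 in place, the edited zoo.cfg might look like the fragment below. Only the two directory lines are changed; tickTime, initLimit, syncLimit and clientPort are the defaults shipped in zoo_sample.cfg:

```properties
tickTime=2000
initLimit=10
syncLimit=5
clientPort=2181
dataDir=E:\\zookeeper-3.4.13\\zookeeper-3.4.13\\data
dataLogDir=E:\\zookeeper-3.4.13\\zookeeper-3.4.13\\log
```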

1.6 Add the following system environment variables:

   ZOOKEEPER_HOME: E:\zookeeper-3.4.13\zookeeper-3.4.13 (the ZooKeeper directory)

   Path: append ";%ZOOKEEPER_HOME%\bin;" to the existing value

1.7 Run ZooKeeper: open cmd and execute zkServer.cmd (on Windows; zkServer.sh is the Linux script)

   

2 Install Kafka



2.1 Download the binary release:  https://mirror.bit.edu.cn/apache/kafka/2.4.1/kafka_2.11-2.4.1.tgz

(Note: do not download the source release, or startup fails with "Error: could not find or load main class kafka.Kafka".

 If the error persists, double-check that you extracted a binary release.)

2.2 Extract it to E:\kafka_2.11-2.4.1\kafka_2.11-2.4.1

2.3 Open E:\kafka_2.11-2.4.1\kafka_2.11-2.4.1\config

2.4 In server.properties, change the value of log.dirs to log.dirs=./logs

2.5 Open cmd

2.6 Change into the Kafka directory: E:\kafka_2.11-2.4.1\kafka_2.11-2.4.1

2.7 Run:  .\bin\windows\kafka-server-start.bat .\config\server.properties

(If a restart fails, try emptying E:\kafka_2.11-2.4.1\kafka_2.11-2.4.1\logs; after clearing it, Kafka usually starts normally again.)

2.8 Verify the installation


 

1 Create a topic

In E:\kafka_2.11-2.4.1\kafka_2.11-2.4.1\bin\windows

run kafka-topics.bat --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test2
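The same topic can also be created programmatically with the Kafka AdminClient. The sketch below assumes the kafka-clients dependency is on the classpath and a broker is running on localhost:9092; the class name is made up for illustration:

```java
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopicDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // try-with-resources closes the AdminClient and its network threads
        try (AdminClient admin = AdminClient.create(props)) {
            // topic "test2", 1 partition, replication factor 1 -- same as the CLI command above
            admin.createTopics(Collections.singletonList(new NewTopic("test2", 1, (short) 1)))
                 .all().get(); // block until the broker confirms creation
        }
    }
}
```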


 

2 Start a producer:

In E:\kafka_2.11-2.4.1\kafka_2.11-2.4.1\bin\windows

    run kafka-console-producer.bat --broker-list localhost:9092 --topic test2

3 Start a consumer:

In E:\kafka_2.11-2.4.1\kafka_2.11-2.4.1\bin\windows

    run kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic test2 --from-beginning

If messages typed into the producer window appear in the consumer window, the test has passed.

3 Integrate Kafka with Spring Boot

3.1 Add the dependency to pom.xml

   <!-- spring-kafka dependency -->
            <dependency>
                <groupId>org.springframework.kafka</groupId>
                <artifactId>spring-kafka</artifactId>
                <version>2.8.6</version>
            </dependency>

3.2 Configuration file


# Spring
spring:
  kafka:
    bootstrap-servers: localhost:9092 # Kafka cluster address; separate multiple brokers with commas
    # Producer
    producer:
      # Retry count; with a value greater than 0 the client re-sends records that failed to send
      retries: 3
      batch-size: 16384 # batch size, 16 KB
      buffer-memory: 33554432 # buffer memory, 32 MB
      acks: 1
      # Serializers for the message key and value
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
    # Consumer
    consumer:
      # Consumer group
      group-id: TestGroup
      # Whether offsets are committed automatically
      enable-auto-commit: false
      # Offset reset policy
      # none: if no previous offset is found for the group (neither automatically nor manually maintained), throw an exception
      # earliest: if a committed offset exists for a partition, resume from it; otherwise consume from the beginning
      # latest: if a committed offset exists for a partition, resume from it; otherwise consume only new records
      auto-offset-reset: earliest
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
    # Listener
    listener:
      # record: commit after each record is handled by the listener (ListenerConsumer)
      # batch: commit after each batch returned by poll() is handled
      # time: commit after a handled batch once more than TIME has passed since the last commit
      # count: commit after a handled batch once at least COUNT records have been handled
      # count_time: commit when either the TIME or the COUNT condition is met
      # manual: commit after Acknowledgment.acknowledge() is called for a handled batch
      # manual_immediate: commit immediately when Acknowledgment.acknowledge() is called (generally recommended)
      ack-mode: manual_immediate

3.3 Create the KafkaProducer class

package com.ruoyi.websocket.kafka;

import org.apache.commons.lang3.StringUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

@Component
public class KafkaProducer {

    private static final Logger logger = LoggerFactory.getLogger(KafkaProducer.class);

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    /** Send a message to the test2 topic; falls back to a default message when none is given. */
    public void send(String message) {
        if (StringUtils.isEmpty(message)) {
            message = "Monkey King, Monkey King, Master has been captured by a demon!!!";
        }
        logger.info("Sending message --> message = {}", message);
        kafkaTemplate.send("test2", message);
    }
}
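In spring-kafka 2.8.x, KafkaTemplate.send() returns a ListenableFuture, so the producer can also log delivery results asynchronously. A hedged sketch of the send call, reusing the kafkaTemplate and logger fields of the class above:

```java
kafkaTemplate.send("test2", message).addCallback(
        // success: the broker has acknowledged the record
        result -> logger.info("Delivered to partition {} at offset {}",
                result.getRecordMetadata().partition(),
                result.getRecordMetadata().offset()),
        // failure: log the cause and decide whether to retry or alert
        ex -> logger.error("Delivery failed: {}", ex.getMessage()));
```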

 If the topic test2 does not exist when kafkaTemplate.send("test2", message) is used,

the following error is reported:

 Topic(s) [test2] is/are not present and missingTopicsFatal is true
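One way to avoid this is to declare the topic as a bean, so that Spring Boot's auto-configured KafkaAdmin creates it at startup if it is missing. A sketch (the class name KafkaTopicConfig is made up); alternatively, setting spring.kafka.listener.missing-topics-fatal: false merely suppresses the startup check without creating the topic:

```java
package com.ruoyi.websocket.kafka;

import org.apache.kafka.clients.admin.NewTopic;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.TopicBuilder;

@Configuration
public class KafkaTopicConfig {

    // NewTopic beans are picked up by KafkaAdmin and created on startup if absent
    @Bean
    public NewTopic test2Topic() {
        return TopicBuilder.name("test2")
                .partitions(1)
                .replicas(1) // matches the CLI command used in the setup steps
                .build();
    }
}
```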

3.4 Create the KafkaConsumer class

package com.ruoyi.websocket.kafka;

import java.util.Optional;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.support.Acknowledgment;
import org.springframework.stereotype.Component;

@Component
public class KafkaConsumer {

    private static final Logger logger = LoggerFactory.getLogger(KafkaConsumer.class);

    @KafkaListener(topics = {"test2"})
    public void listen(ConsumerRecord<?, ?> record, Acknowledgment ack) {
        Optional<?> kafkaMessage = Optional.ofNullable(record.value());
        if (kafkaMessage.isPresent()) {
            Object message = kafkaMessage.get();
            logger.info("----------------- record = {}", record);
            logger.info("------------------ message = {}", message);
        }
        // ack-mode is manual_immediate in the config, so the offset
        // must be committed explicitly or it will never advance
        ack.acknowledge();
    }
}

3.5 Test endpoint

package com.ruoyi.websocket.kafka;

import com.ruoyi.common.core.base.controller.CommonController;
import io.swagger.annotations.ApiOperation;
import io.swagger.annotations.ApiParam;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

import javax.servlet.http.HttpServletRequest;

/**
 * Kafka test controller
 */
@RestController
@RequestMapping("/kafka")
public class KafkaController extends CommonController
{
    private Logger logger = LoggerFactory.getLogger(getClass());

    @Autowired
    private KafkaProducer kafkaProducer;

    @ApiOperation(value = "Send a message via Kafka", nickname = "kafkaSend")
    @GetMapping(value = "kafkaSend", produces = { "application/json; charset=utf-8" })
    public String kafkaSend(HttpServletRequest request,
                            @ApiParam(name = "message", value = "message", required = false) String message
    ){
        try {
            kafkaProducer.send(message);
        } catch (RuntimeException e) {
            logger.error(e.getMessage());
            return buildFailedResult("failure", null);
        }
        return buildSuccessResult("success", null);
    }

}

3.6 Call the endpoint to verify

http://localhost/dev-api/websocket/kafka/kafkaSend?message=你好吗

Note: the exception below means the Spring Boot and spring-kafka versions are incompatible

(with Spring Boot 2.0.2.RELEASE, spring-kafka 2.2.7.RELEASE failed; replacing it with 2.1.7.RELEASE fixed it):

Error starting ApplicationContext. To display the conditions report re-run your application with 'debug' enabled.
2020-08-28 17:45:42.766 [restartedMain] ERROR org.springframework.boot.SpringApplication - Application run failed
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'kafkaListenerContainerFactoryConfigurer' defined in class path resource [org/springframework/boot/autoconfigure/kafka/KafkaAnnotationDrivenConfiguration.class]: Post-processing of merged bean definition failed; nested exception is java.lang.IllegalStateException: Failed to introspect Class [org.springframework.boot.autoconfigure.kafka.ConcurrentKafkaListenerContainerFactoryConfigurer] from ClassLoader [sun.misc.Launcher$AppClassLoader@18b4aac2]
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.doCreateBean(AbstractAutowireCapableBeanFactory.java:556)
	at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.createBean(AbstractAutowireCapableBeanFactory.java:501)
	at org.springframework.beans.factory.support.AbstractBeanFactory.lambda$doGetBean$0(AbstractBeanFactory.java:317)
	at org.springframework.beans.factory.support.DefaultSingletonBeanRegistry.getSingleton(DefaultSingletonBeanRegistry.java:228)
	at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:315)
	at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:199)
	at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:760)
	at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:869)
	at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:550)
	at org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext.refresh(ServletWebServerApplicationContext.java:140)
	at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:759)
	at org.springframework.boot.SpringApplication.refreshContext(SpringApplication.java:395)
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:327)
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:1255)
	at org.springframework.boot.SpringApplication.run(SpringApplication.java:1243)
	at com.kry.xr.XrApplication.main(XrApplication.java:19)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.springframework.boot.devtools.restart.RestartLauncher.run(RestartLauncher.java:49)
Caused by: java.lang.IllegalStateException: Failed to introspect Class [org.springframework.boot.autoconfigure.kafka.ConcurrentKafkaListenerContainerFactoryConfigurer] from ClassLoader [sun.misc.Launcher$AppClassLoader@18b4aac2]
	at org.springframework.util.ReflectionUtils.getDeclaredMethods(ReflectionUtils.java:659)

Kafka GUI client (Kafka Tool)

Download: Offset Explorer
