A hands-on guide to writing a multi-threaded Kafka consumer

Spring Boot Kafka multi-threaded consumer

1. Open IDEA and go to File -> New -> Project, then click Next.
2. Select Spring Initializr and, on the right, pick the Custom service URL (https://start.aliyun.com/); try not to use the default service, or project generation will be slow. Click Next.
3. Group: com.example
Artifact: demo_kafka
Click Next.
4. Under Messaging, check Spring for Apache Kafka, then click Next.
5. Choose the project location and click Finish.
6. Click New Window to open the project.
7. Allow auto-import of dependencies.
8. Edit the configuration file (option 1; alternatively, you can set the Properties directly in code later, which is the option 2 used below).
Open application.properties on the left:

#============== kafka ===================
# Kafka broker address; multiple addresses can be listed, comma-separated
spring.kafka.bootstrap-servers=h01:9092

#=============== producer =======================

spring.kafka.producer.retries=0
# maximum batch size, in bytes, per send
spring.kafka.producer.batch-size=16384
spring.kafka.producer.buffer-memory=33554432

# serializers for the message key and value
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer
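
If you want to sanity-check these producer settings, a throwaway bean that sends one message at startup will do it. This is a minimal sketch of mine, not part of the tutorial's code, and it assumes the h01 mapping described next is already in place (it sends to the test01 topic consumed below):

package com.example.demo_kafka;

import org.springframework.boot.CommandLineRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.KafkaTemplate;

@Configuration
public class ProducerConfigCheck {

    // Hypothetical helper: sends one message at startup so you can confirm
    // the spring.kafka.producer.* settings above are picked up.
    @Bean
    CommandLineRunner producerSmokeTest(KafkaTemplate<String, String> kafkaTemplate) {
        return args -> kafkaTemplate.send("test01", "producer config smoke test");
    }
}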

Here the h01 hostname needs an entry in C:\Windows\System32\drivers\etc\hosts (the file is named hosts, not host):

# Copyright (c) 1993-2009 Microsoft Corp.
#
# This is a sample HOSTS file used by Microsoft TCP/IP for Windows.
#
# This file contains the mappings of IP addresses to host names. Each
# entry should be kept on an individual line. The IP address should
# be placed in the first column followed by the corresponding host name.
# The IP address and the host name should be separated by at least one
# space.
#
# Additionally, comments (such as these) may be inserted on individual
# lines or following the machine name denoted by a '#' symbol.
#
# For example:
#
#      102.54.94.97     rhino.acme.com          # source server
#       38.25.63.10     x.acme.com              # x client host

# localhost name resolution is handled within DNS itself.
#	127.0.0.1       localhost
#	::1             localhost
172.*.*.*	     h01
# replace with the IP address of the machine running your own Kafka broker

Reason: without this, resolving the server by hostname occasionally fails.
Solution: configure the hosts file to bind the IP address to the hostname, creating the mapping.
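
To confirm the mapping took effect, you can run ping h01 in a terminal, or resolve the name from Java; a quick throwaway check (HostCheck is just an illustrative scratch class):

import java.net.InetAddress;

public class HostCheck {
    public static void main(String[] args) throws Exception {
        // Should print the IP you bound to h01 in the hosts file;
        // an UnknownHostException means the mapping is not in effect.
        System.out.println(InetAddress.getByName("h01").getHostAddress());
    }
}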

9. Write ConsumerHandler.java.
This uses option 2: the Kafka consumer configuration is hard-coded in the class.

package com.example.demo_kafka;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.springframework.stereotype.Component;

import javax.annotation.PostConstruct;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

@Component
public class ConsumerHandler {
    private KafkaConsumer<String, String> consumer;
    private ExecutorService executors;

    public static Properties initConfig() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "h01:9092");
        props.put("group.id", "test01");
        props.put("enable.auto.commit", "true");
        props.put("auto.commit.interval.ms", "1000");
        props.put("session.timeout.ms", "30000");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        return props;
    }

    @PostConstruct
    public void initKafkaConfig() {
        Properties properties = initConfig();
        consumer = new KafkaConsumer<>(properties);
        consumer.subscribe(Collections.singleton("test01"));
//        consumer.subscribe(Collections.singleton("test001"));
    }

    public void execute(int workerNum) {
        // Guard so a repeated call does not leak a second thread pool.
        if (executors != null) {
            return;
        }
        executors = new ThreadPoolExecutor(workerNum, workerNum * 2, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(1000), new ThreadPoolExecutor.CallerRunsPolicy());
        // Single polling thread hands records off to the worker pool.
        // Note: this loop never returns, so whatever thread calls execute()
        // is dedicated to polling from here on.
        while (true) {
            ConsumerRecords<String, String> consumerRecords = consumer.poll(Duration.ofMillis(100));
            if (!consumerRecords.isEmpty()) {
                for (ConsumerRecord<String, String> record : consumerRecords) {
                    executors.submit(new Worker(record));
                }
            }
        }
    }
}
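
One caveat with this setup: enable.auto.commit=true means offsets are committed from the poll thread regardless of whether the worker threads have finished, so in-flight messages can be lost if the process dies. If that matters to you, one option is to turn auto-commit off and commit after dispatching each batch. A rough sketch of an alternative execute() loop body follows; it assumes enable.auto.commit is set to "false" in initConfig(), and note it still commits before the workers complete, so it narrows the window rather than closing it:

while (true) {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
    if (!records.isEmpty()) {
        for (ConsumerRecord<String, String> record : records) {
            executors.submit(new Worker(record));
        }
        // Synchronously commit the offsets returned by the last poll.
        consumer.commitSync();
    }
}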

10. Write Worker.java.

package com.example.demo_kafka;

import org.apache.kafka.clients.consumer.ConsumerRecord;

public class Worker implements Runnable {

    private final ConsumerRecord<String, String> consumerRecord;

    public Worker(ConsumerRecord<String, String> record) {
        this.consumerRecord = record;
    }

    @Override
    public void run() {
        // Put your message-processing logic here; this demo just prints the message.
        System.out.println(Thread.currentThread().getId() + "   " + Thread.currentThread().getName()
                + " consumed partition " + consumerRecord.partition()
                + " message with offset: " + consumerRecord.offset() + " =======> " + consumerRecord.value());
    }
}
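
In my project the payload is JSON (see the sample message in the commented-out producer in step 11), so a more realistic run() would parse the value instead of printing it. A hedged sketch using Jackson's ObjectMapper (jackson-databind ships with most Spring Boot starters; the field names match the sample message below):

import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

// Sketch of a run() body that parses the detection JSON instead of printing it.
@Override
public void run() {
    try {
        JsonNode node = new ObjectMapper().readTree(consumerRecord.value());
        // e.g. {"conf": 0.791015625, "label_id": 0, "label_name": "trash", ...}
        System.out.println("label=" + node.path("label_name").asText()
                + " conf=" + node.path("conf").asDouble());
    } catch (Exception e) {
        e.printStackTrace();
    }
}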

11. Write DemoKafkaApplication.java.

package com.example.demo_kafka;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.support.SendResult;
import org.springframework.scheduling.annotation.EnableScheduling;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.util.concurrent.ListenableFuture;

@EnableScheduling
@SpringBootApplication
public class DemoKafkaApplication {

    public static void main(String[] args) {
        SpringApplication.run(DemoKafkaApplication.class, args);
    }

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;
    @Autowired
    private ConsumerHandler consumers;

    // Nominally fires every ten seconds, but execute() never returns, so in
    // practice this starts the consumer loop once on the scheduler thread.
    @Scheduled(cron = "0/10 * * * * ?")
    public void test() {
        System.err.println("starting scheduled consumption");
        consumers.execute(10);
    }

//    @Scheduled(cron = "0/10 * * * * ?")
//    public void test01() {
//        System.err.println("scheduled production");
//        // Note: this sends to topic_test, not the test01 topic consumed above.
//        for (int i = 0; i < 10000; i++) {
//            try {
//                String message = "{\"x1\": 2133, \"x2\": 2477, \"y1\": 1568, \"y2\": 1888, \"conf\": 0.791015625, \"label_id\": 0, \"label_name\": \"trash\"}";
//                ListenableFuture<SendResult<String, String>> topic_test = kafkaTemplate.send("topic_test", message);
//                System.err.println(message);
//            } catch (Exception e) {
//                e.printStackTrace();
//            }
//        }
//    }
}
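
A final caveat on the scheduling approach: because execute() contains while (true), the first cron trigger never returns, and Spring's default scheduler is single-threaded, so test() effectively fires once and then monopolizes that thread. If you would rather start the consumer loop explicitly on its own thread at startup, here is a minimal alternative sketch (ConsumerStarter is my name, not part of the original project):

package com.example.demo_kafka;

import org.springframework.boot.ApplicationArguments;
import org.springframework.boot.ApplicationRunner;
import org.springframework.stereotype.Component;

// Hypothetical alternative to the @Scheduled trigger: start the
// consumer loop once, on a dedicated thread, when the app boots.
@Component
public class ConsumerStarter implements ApplicationRunner {

    private final ConsumerHandler consumers;

    public ConsumerStarter(ConsumerHandler consumers) {
        this.consumers = consumers;
    }

    @Override
    public void run(ApplicationArguments args) {
        // execute() blocks forever, so give it its own thread instead of
        // tying up the scheduler.
        new Thread(() -> consumers.execute(10), "kafka-consumer-loop").start();
    }
}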

12. Finally, just run DemoKafkaApplication.java.
This demo consumes messages sent from my own real-time garbage-detection project, which is why the overridden run() simply prints each message; adjust the processing logic and parameters to fit your own code.
The printed output looks like this (console screenshot omitted).
I am currently developing an end-to-end garbage-detection project. The stack includes an improved YOLOv5, Linux, Nginx, Kafka, Zookeeper, Spring Boot, and JavaScript, covering the whole pipeline in one place. If you are interested or have similar experience, feel free to contact me and we can trade notes. I look forward to making progress together!
