Note: if you find any mistakes in this article, please point them out so that nobody is misled!
Note: the main body of the article follows; the examples below are for reference.
1. Import the Maven Dependency
Maven coordinates. The version number may also be omitted, in which case it is resolved automatically.
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
    <version>2.8.2</version>
</dependency>
2. Installing and Using Kafka
Installation
Docker is used throughout here, since it is the most convenient option.
- Install ZooKeeper
- Pull the image
docker pull wurstmeister/zookeeper
- Start the container
docker run -d --name zookeeper --publish 2181:2181 -v /etc/localtime:/etc/localtime wurstmeister/zookeeper
- Install Kafka
- Pull the image
docker pull wurstmeister/kafka
- Start the container (replace the IP addresses with your own server address)
docker run -d --name kafka --publish 9092:9092 \
--link zookeeper \
--env KAFKA_ZOOKEEPER_CONNECT=192.168.96.135:2181 \
--env KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://192.168.96.135:9092 \
--env KAFKA_ADVERTISED_HOST_NAME=192.168.96.135 \
--env KAFKA_ADVERTISED_PORT=9092 \
-v /etc/localtime:/etc/localtime \
wurstmeister/kafka
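The two docker run commands above can equivalently be captured in a compose file, which is easier to tear down and recreate. A sketch, assuming the same wurstmeister images and the server address 192.168.96.135 used above:

```yaml
version: '3'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092:9092"
    environment:
      KAFKA_ZOOKEEPER_CONNECT: 192.168.96.135:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://192.168.96.135:9092
    depends_on:
      - zookeeper
```

Start both containers with `docker-compose up -d`.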
Usage
- application.yml
spring:
  application:
    name: ThreadStudyDemo
  main:
    allow-bean-definition-overriding: true
  ########### Kafka cluster ###########
  kafka:
    bootstrap-servers: 192.168.96.135:9092
    ########### Producer configuration ###########
    producer:
      # Number of send retries
      retries: 2
      # Ack level: how many partition replicas must be backed up before the broker acks the producer (0, 1, or all/-1)
      acks: 1
      # Batch size in bytes
      batch-size: 16384
      properties:
        # Custom partitioner
        # partitioner.class: com.felix.kafka.producer.CustomizePartitioner
        # Send delay: the producer submits to Kafka once it has accumulated batch-size
        # bytes or once linger.ms has elapsed. With linger.ms = 0 every record is
        # submitted as soon as it is received, and batch-size then effectively has no impact.
        linger:
          ms: 0
      # Producer buffer size in bytes
      buffer-memory: 33554432
      # Serializer classes provided by Kafka
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
    ########### Consumer configuration ###########
    consumer:
      properties:
        # Session timeout: a rebalance is triggered if the consumer sends no heartbeat within this window
        session:
          timeout:
            ms: 120000
        # Consume request timeout
        request:
          timeout:
            ms: 180000
        # Default consumer group ID
        group:
          id: defaultConsumerGroup
      # Whether to auto-commit offsets
      enable-auto-commit: true
      # Auto-commit interval (how long after a message is received before its offset is committed)
      auto-commit-interval: 1000
      # How to reset the offset when Kafka has no initial offset or the offset is out of range:
      # earliest: reset to the smallest offset in the partition
      # latest: reset to the latest offset (consume only data newly produced to the partition)
      # none: throw an exception as soon as any partition has no committed offset
      auto-offset-reset: latest
      # Deserializer classes provided by Kafka
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      # Maximum number of records consumed per batch poll
      # max-poll-records: 50
    listener:
      # Don't fail application startup when a listened-to topic does not exist
      missing-topics-fatal: false
      # Enable batch consumption
      # type: batch
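For reference, the Spring Boot producer keys above map onto the raw Kafka client properties shown below (a sketch; the values simply mirror the YAML):

```properties
bootstrap.servers=192.168.96.135:9092
retries=2
acks=1
batch.size=16384
linger.ms=0
buffer.memory=33554432
key.serializer=org.apache.kafka.common.serialization.StringSerializer
value.serializer=org.apache.kafka.common.serialization.StringSerializer
```

This is also why the nested entries under `properties:` (such as `linger:` / `ms:`) work: Spring flattens them into dotted Kafka property names like `linger.ms`.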
- config
import org.apache.kafka.clients.admin.NewTopic;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.listener.ConsumerAwareListenerErrorHandler;

import javax.annotation.Resource;

/**
 * @author Mr.Tigger
 * @date 2022/3/3 19:09
 */
@Configuration
public class KafkaConfig {

    @Resource
    private ConsumerFactory<String, Object> consumerFactory;

    /**
     * description: Create a topic named sendMsgTopic with 8 partitions and a replication factor of 1
     *
     * @return org.apache.kafka.clients.admin.NewTopic
     * @author Tigger
     */
    @Bean
    public NewTopic createTopic() {
        return new NewTopic("sendMsgTopic", 8, (short) 1);
    }

    @Bean
    public NewTopic createTopics() {
        return new NewTopic("sendMessageTopic", 8, (short) 1);
    }

    /**
     * description: To change the partition count, just change the value here and restart the project.
     * Changing the partition count does not lose data, but the count can only be increased, never decreased.
     *
     * @return org.apache.kafka.clients.admin.NewTopic
     * @author Tigger
     */
    // @Bean
    // public NewTopic updateTopic() {
    //     return new NewTopic("sendMsgTopic", 10, (short) 2);
    // }

    // Error handler for failed consumption, registered with @Bean
    @Bean
    public ConsumerAwareListenerErrorHandler consumerAwareErrorHandler() {
        return (message, exception, consumer) -> {
            System.out.println("Consumption error: " + message.getPayload());
            return "Consumption error: " + message.getPayload();
        };
    }

    /**
     * Methods annotated with @KafkaListener are not registered as beans in the IoC container;
     * they are registered with the KafkaListenerEndpointRegistry,
     * which itself is registered as a bean in the Spring IoC container.
     **/
    // Listener container factory (disables auto-start of @KafkaListener methods)
    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, Object> delayContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, Object> container =
                new ConcurrentKafkaListenerContainerFactory<>();
        container.setConsumerFactory(consumerFactory);
        // Do not start @KafkaListener containers automatically
        container.setAutoStartup(false);
        return container;
    }
}
- Basic usage (for more detailed parameter usage, see the reference links at the end of the article)
Send messages from a controller.
@Resource
private KafkaTemplate<String, Object> kafkaTemplate;

/**
 * description: Simple send with a callback reporting whether delivery succeeded
 *
 * @param message message content
 * @return the original message
 * @author Tigger
 */
@GetMapping("callback")
public String callbackConsumptionMsg(String message) {
    kafkaTemplate.send("sendMsgTopic", "pic", message).addCallback(
            success -> {
                if (success != null) {
                    // Topic the message was sent to
                    String topic = success.getRecordMetadata().topic();
                    // Partition the message was sent to
                    int partition = success.getRecordMetadata().partition();
                    // Offset of the message within the partition
                    long offset = success.getRecordMetadata().offset();
                    System.out.println("Message sent successfully: " + topic + "-" + partition + "-" + offset);
                }
            },
            failure -> System.out.println("Message send failed: " + failure.getMessage())
    );
    return message;
}
Consuming messages with @KafkaListener
First annotate the class containing the method with @Component so that Spring manages it.
errorHandler = "consumerAwareErrorHandler" refers to the consumption error handler configured in the config class above, bean method name: consumerAwareErrorHandler.
/**
 * description: Consume picture messages
 * 1) id: consumer ID
 * 2) groupId: consumer group ID
 * 3) topics: topics to listen to; more than one may be given
 * 4) topicPartitions: more fine-grained listening; a specific topic, partition, and offset can be targeted.
 *    partitionOffsets pins exact positions: partition says which partition, initialOffset the starting offset, e.g.
 *    partitionOffsets = @PartitionOffset(partition = "1", initialOffset = "1")
 *
 * The listener below consumes only partition 5 of sendMsgTopic.
 * Note: topics and topicPartitions cannot be used at the same time.
 *
 * @param record queue message
 * @author Tigger
 */
@KafkaListener(id = "manyConsumptionMsg",
        groupId = "defaultConsumerGroup",
        topicPartitions = {@TopicPartition(topic = "sendMsgTopic", partitions = "5")},
        errorHandler = "consumerAwareErrorHandler")
public void manyConsumptionMsg(ConsumerRecord<?, ?> record) {
    System.out.println("Consumed picture message---" + "topic:" + record.topic()
            + "|partition:" + record.partition() + "|offset:" + record.offset()
            + "|value:" + record.value());
}
- Scheduled usage: consumer methods
First annotate the class with @EnableScheduling to enable scheduling, and with @Component so that Spring manages it.
containerFactory = "delayContainerFactory" refers to the listener container factory configured in the config class above, bean method name: delayContainerFactory.
@Resource
private KafkaListenerEndpointRegistry registry;

/**
 * description: Test with a different topic
 *
 * @param record queue message
 * @author Tigger
 */
@KafkaListener(id = "sendMessageTopic",
        groupId = "defaultConsumerGroup",
        topicPartitions = {@TopicPartition(topic = "sendMessageTopic", partitions = "5")},
        errorHandler = "consumerAwareErrorHandler",
        containerFactory = "delayContainerFactory")
public void sendMessageTopic(ConsumerRecord<?, ?> record) {
    System.out.println("Consumed picture message---" + "topic:" + record.topic()
            + "|partition:" + record.partition() + "|offset:" + record.offset()
            + "|value:" + record.value());
}

// Start the listener on a schedule
@Scheduled(cron = "0 23 16 4 * ?")
public void startListener() {
    System.out.println("Starting the listener on schedule...");
    // "sendMessageTopic" is the listener id set in the @KafkaListener annotation; it identifies this listener
    if (!Objects.requireNonNull(registry.getListenerContainer("sendMessageTopic")).isRunning()) {
        Objects.requireNonNull(registry.getListenerContainer("sendMessageTopic")).start();
    }
    // registry.getListenerContainer("sendMessageTopic").resume();
}

// Pause the listener on a schedule (pause() only suspends consumption; use stop() to fully stop the container)
@Scheduled(cron = "0 25 16 4 * ?")
public void shutDownListener() {
    System.out.println("Pausing the listener on schedule...");
    Objects.requireNonNull(registry.getListenerContainer("sendMessageTopic")).pause();
}
3. Using a Spring Thread Pool Together with Kafka
- config
package com.hqll.news.config;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor;

import java.util.concurrent.ThreadPoolExecutor;

/**
 * @author wwwh
 * @date 2022/2/18 10:24
 */
@Configuration
public class ThreadTaskPoolConfig {

    /**
     * description: Spring annotation style.
     * The bean is explicitly named "taskExecutor" so that it matches @Async("taskExecutor") used later.
     *
     * @return org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor
     * @author wh
     */
    @Bean("taskExecutor")
    public ThreadPoolTaskExecutor threadPoolTaskExecutor() {
        ThreadPoolTaskExecutor threadPoolTaskExecutor = new ThreadPoolTaskExecutor();
        // Number of available CPU threads
        int cpuNum = Runtime.getRuntime().availableProcessors();
        // Core pool size
        threadPoolTaskExecutor.setCorePoolSize(cpuNum * 2);
        // Maximum pool size
        threadPoolTaskExecutor.setMaxPoolSize(cpuNum * 2);
        // Maximum keep-alive time for idle threads, in seconds
        threadPoolTaskExecutor.setKeepAliveSeconds(10);
        // Queue capacity
        threadPoolTaskExecutor.setQueueCapacity(cpuNum * 2 * 10);
        // Thread name prefix
        threadPoolTaskExecutor.setThreadNamePrefix("ThreadPoolTask");
        // When the executor is shut down, wait for currently scheduled tasks to complete
        threadPoolTaskExecutor.setWaitForTasksToCompleteOnShutdown(true);
        /*
         * description: Configure the rejection policy
         *
         * The rejectedExecutionHandler field configures the rejection policy; common policies are:
         *
         * AbortPolicy: handler for rejected tasks that throws RejectedExecutionException
         * CallerRunsPolicy: handler that runs the rejected task directly on the thread calling execute
         * DiscardOldestPolicy: handler that discards the oldest unhandled request, then retries execute
         * DiscardPolicy: handler that silently discards the rejected task
         */
        threadPoolTaskExecutor.setRejectedExecutionHandler(new ThreadPoolExecutor.CallerRunsPolicy());
        // Initialize the pool
        threadPoolTaskExecutor.initialize();
        return threadPoolTaskExecutor;
    }
}
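The effect of the CallerRunsPolicy chosen above can be shown with the plain JDK, without Spring or Kafka. In this sketch (class name and sizes are mine, chosen to force rejections) tasks that overflow the queue run on the submitting thread instead of being dropped, so the submitter is throttled but no task is lost:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class CallerRunsDemo {

    public static int runTasks(int taskCount) {
        AtomicInteger executed = new AtomicInteger();
        // Deliberately tiny pool: 1 worker, queue capacity 1, so most submissions are "rejected"
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 10, TimeUnit.SECONDS,
                new ArrayBlockingQueue<>(1),
                new ThreadPoolExecutor.CallerRunsPolicy());
        for (int i = 0; i < taskCount; i++) {
            // Rejected tasks are executed right here, on the submitting thread
            pool.execute(executed::incrementAndGet);
        }
        pool.shutdown();
        try {
            pool.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return executed.get();
    }

    public static void main(String[] args) {
        // All 20 tasks execute; the rejected ones simply ran on the main thread
        System.out.println("executed: " + runTasks(20));
    }
}
```

With AbortPolicy the same loop would throw RejectedExecutionException; with DiscardPolicy some tasks would silently vanish.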
- Create a shared thread-based consumption method
First annotate the class with @Component so that Spring manages it.
@Async("taskExecutor"): @Async marks the method as asynchronous; "taskExecutor" is the name of the thread pool bean configured in the config class.
A CountDownLatch is used together with the async threads so that the main thread continues only after the other threads have finished.
package com.hqll.news.util;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.scheduling.annotation.Async;
import org.springframework.stereotype.Component;

import java.util.concurrent.CountDownLatch;

/**
 * @author wwwh
 * @date 2022/2/18 13:16
 */
@Component
public class ThreadUtil {

    @Async("taskExecutor")
    public void outAsyncTest(ConsumerRecord<?, ?> record, CountDownLatch countDownLatch) {
        System.out.println("Consumed picture message---" + "topic:" + record.topic()
                + "|partition:" + record.partition() + "|offset:" + record.offset()
                + "|value:" + record.value());
        System.out.println(Thread.currentThread().getName() + ": executed...");
        countDownLatch.countDown();
    }
}
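The latch pattern ThreadUtil relies on can also be demonstrated standalone. In this sketch (class and method names are mine; incrementing a counter stands in for consuming a record) the main thread blocks in await() until every worker thread has called countDown():

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

public class LatchDemo {

    public static int processAll(int workers) {
        CountDownLatch latch = new CountDownLatch(workers);
        AtomicInteger finished = new AtomicInteger();
        for (int i = 0; i < workers; i++) {
            new Thread(() -> {
                finished.incrementAndGet(); // stand-in for consuming one record
                latch.countDown();          // signal this worker is done
            }).start();
        }
        try {
            latch.await(); // returns only after all workers have counted down
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return finished.get();
    }

    public static void main(String[] args) {
        System.out.println("workers finished: " + processAll(8));
    }
}
```

Note that the latch must be created with a count equal to the number of workers, exactly as the batch listener below does with recordList.size().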
- Putting them together
- Kafka single-message consumption
@Autowired
private ThreadUtil threadUtil;

/**
 * description: Consume picture messages
 * (the annotation parameters are explained on the listener shown earlier)
 *
 * @param record queue message
 * @author Tigger
 */
@KafkaListener(id = "manyConsumptionMsg",
        groupId = "defaultConsumerGroup",
        topicPartitions = {@TopicPartition(topic = "sendMsgTopic", partitions = "5")},
        errorHandler = "consumerAwareErrorHandler")
public void manyConsumptionMsg(ConsumerRecord<?, ?> record) {
    // Latch of 1: wait for the single async worker to finish
    CountDownLatch countDownLatch = new CountDownLatch(1);
    threadUtil.outAsyncTest(record, countDownLatch);
    // Runnable-based alternative
    // kafkaThread.outTest(record);
    try {
        countDownLatch.await();
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    System.out.println("Main thread resumes: the other threads have finished!");
}
- Kafka batch-message consumption
First change the Kafka YAML config: uncomment `type: batch` under `listener` (and optionally `max-poll-records`).
@KafkaListener(id = "manyConsumptionMsgList",
        groupId = "defaultConsumerGroup",
        topicPartitions = {@TopicPartition(topic = "sendMsgTopic", partitions = "5")},
        errorHandler = "consumerAwareErrorHandler")
public void manyConsumptionMsgList(List<ConsumerRecord<?, ?>> recordList) {
    System.out.println("Number of messages in this batch: " + recordList.size());
    // Latch counting one async worker per record
    CountDownLatch countDownLatch = new CountDownLatch(recordList.size());
    for (ConsumerRecord<?, ?> record : recordList) {
        threadUtil.outAsyncTest(record, countDownLatch);
    }
    try {
        countDownLatch.await();
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    System.out.println("Main thread resumes: the other threads have finished!");
}
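For completeness, batch consumption only requires enabling the two settings that were commented out in the earlier application.yml; a minimal fragment with them turned on:

```yaml
spring:
  kafka:
    consumer:
      # Maximum number of records returned per poll
      max-poll-records: 50
    listener:
      # Deliver records to the listener as a List
      type: batch
```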
4. References
Hope this helps!