1. Implementing Message Forwarding with ReplyTemplate
Purpose
Forwarding can be used to decouple business logic: system A fetches a message from Topic-A, processes it, and forwards the result to Topic-B; system B listens on Topic-B and processes the message further. The message could be order data, for example: system A handles review of user-submitted orders, while system B handles the order's logistics information, and so on.
Implementation
Spring-Kafka integrates two ways of forwarding messages:
- Setting a reply topic (KafkaHeaders.REPLY_TOPIC) in the message headers. This one is special: it is a request/reply pattern, implemented by the ReplyingKafkaTemplate class.
- Manual forwarding, using the @SendTo annotation to forward a listener method's return value to a topic.
The @SendTo approach
- Configure a ReplyTemplate on the ConcurrentKafkaListenerContainerFactory
- Annotate the listener method with @SendTo
Here we configure a ReplyTemplate on the listener container factory (ConcurrentKafkaListenerContainerFactory); the ReplyTemplate is the template used to forward messages. Under the hood, @SendTo simply uses this ReplyTemplate to forward the listener method's return value to the corresponding topic. We could achieve the same thing in code with KafkaTemplate.send(), but the annotation reduces boilerplate and speeds up development.
@Bean
public ConcurrentKafkaListenerContainerFactory<Integer, String> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<Integer, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    factory.setReplyTemplate(kafkaTemplate());
    return factory;
}
@Component
public class ForwardListener {

    private static final Logger log = LoggerFactory.getLogger(ForwardListener.class);

    @KafkaListener(id = "forward", topics = "topic.quick.target")
    @SendTo("topic.quick.real")
    public String forward(String data) {
        log.info("topic.quick.target forward " + data + " to topic.quick.real");
        return "topic.quick.target send msg : " + data;
    }
}
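As noted above, @SendTo essentially uses the configured ReplyTemplate to forward the listener's return value; the manual equivalent is to call KafkaTemplate.send() yourself. A hedged sketch of that equivalent (the class name ManualForwardListener and listener id are illustrative, not from the original project):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

// Hypothetical manual equivalent of the @SendTo listener above:
// instead of returning a value, the listener forwards explicitly.
@Component
public class ManualForwardListener {

    private static final Logger log = LoggerFactory.getLogger(ManualForwardListener.class);

    @Autowired
    private KafkaTemplate<Integer, String> kafkaTemplate;

    @KafkaListener(id = "manualForward", topics = "topic.quick.target")
    public void forward(String data) {
        log.info("topic.quick.target forward " + data + " to topic.quick.real");
        // The explicit send replaces the @SendTo("topic.quick.real") annotation.
        kafkaTemplate.send("topic.quick.real", "topic.quick.target send msg : " + data);
    }
}
```

The trade-off is visible here: the manual version gives you control over partitioning, keys, and error handling on the forward, at the cost of a few more lines.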
Let's write a quick test while we are at it. After it runs, the topic.quick.real topic receives one record, and its value is exactly what the forward method returned.
@Autowired
private KafkaTemplate kafkaTemplate;

@Test
public void testForward() {
    kafkaTemplate.send("topic.quick.target", "test @SendTo");
}
The ReplyingKafkaTemplate approach
This approach differs from @SendTo. @SendTo simply forwards the listener method's return value to the target topic. With ReplyingKafkaTemplate the return value is also forwarded to a topic, but once the forward succeeds, that record is consumed by the original requester.
How does that work? Recall the request/reply pattern, which we use all the time: you call some third-party API, it returns a response, and you process that response according to your business logic. The ReplyingKafkaTemplate flow is the same idea. First the producer sends a message to Topic-A; the listener on Topic-A processes it and forwards the result to Topic-B; once the forward to Topic-B succeeds, the record is received by the ReplyingKafkaTemplate. The producer therefore ends up with the processed data.
The ReplyingKafkaTemplate code is not complicated either, yet it provides more functionality. The flow:
- Configure a ReplyTemplate on the ConcurrentKafkaListenerContainerFactory
- Configure a listener on topic.quick.request
- Register a listener container of type KafkaMessageListenerContainer that listens on topic.quick.reply; this listener does no work of its own and is handed over to the ReplyingKafkaTemplate
- Create a ReplyingKafkaTemplate bean from the ProducerFactory and that KafkaMessageListenerContainer, with a reply timeout of 10 seconds
@Bean
public ConcurrentKafkaListenerContainerFactory<Integer, String> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<Integer, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    factory.setReplyTemplate(kafkaTemplate());
    return factory;
}
@KafkaListener(id = "replyConsumer", topics = "topic.quick.request", containerFactory = "kafkaListenerContainerFactory")
@SendTo
public String replyListen(String msgData) {
    log.info("topic.quick.request receive : " + msgData);
    return "topic.quick.reply reply : " + msgData;
}
@Bean
public KafkaMessageListenerContainer<String, String> replyContainer(@Autowired ConsumerFactory consumerFactory) {
    ContainerProperties containerProperties = new ContainerProperties("topic.quick.reply");
    return new KafkaMessageListenerContainer<>(consumerFactory, containerProperties);
}

@Bean
public ReplyingKafkaTemplate<String, String, String> replyingKafkaTemplate(@Autowired ProducerFactory producerFactory, KafkaMessageListenerContainer replyContainer) {
    ReplyingKafkaTemplate<String, String, String> template = new ReplyingKafkaTemplate<>(producerFactory, replyContainer);
    template.setReplyTimeout(10000);
    return template;
}
Sending is slightly more involved, although in a real project it can be wrapped in a utility class:
- Create a ProducerRecord for the message, and add a KafkaHeaders.REPLY_TOPIC header to its headers; this header names the topic the reply should be forwarded to.
- Send with replyingKafkaTemplate.sendAndReceive(), which returns a RequestReplyFuture. That future bundles a future for the send result and a future for the reply; with replyingKafkaTemplate, both the send and the reply are asynchronous.
- Call RequestReplyFuture.getSendFuture().get() to obtain the send result.
- Call RequestReplyFuture.get() to obtain the reply.
@Autowired
private ReplyingKafkaTemplate replyingKafkaTemplate;

@Test
public void testReplyingKafkaTemplate() throws ExecutionException, InterruptedException, TimeoutException {
    ProducerRecord<String, String> record = new ProducerRecord<>("topic.quick.request", "this is a message");
    record.headers().add(new RecordHeader(KafkaHeaders.REPLY_TOPIC, "topic.quick.reply".getBytes()));
    RequestReplyFuture<String, String, String> replyFuture = replyingKafkaTemplate.sendAndReceive(record);
    SendResult<String, String> sendResult = replyFuture.getSendFuture().get();
    System.out.println("Sent ok: " + sendResult.getRecordMetadata());
    ConsumerRecord<String, String> consumerRecord = replyFuture.get();
    System.out.println("Return value: " + consumerRecord.value());
    Thread.sleep(20000);
}
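As suggested above, the request/reply call can be wrapped in a utility class. A minimal sketch, assuming the ReplyingKafkaTemplate bean configured earlier; the class name KafkaRequestReplyUtil and its method are illustrative, not from the original project:

```java
import java.util.concurrent.ExecutionException;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.header.internals.RecordHeader;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.requestreply.ReplyingKafkaTemplate;
import org.springframework.kafka.requestreply.RequestReplyFuture;
import org.springframework.kafka.support.KafkaHeaders;
import org.springframework.stereotype.Component;

// Hypothetical utility wrapping the request/reply flow shown above.
@Component
public class KafkaRequestReplyUtil {

    @Autowired
    private ReplyingKafkaTemplate<String, String, String> replyingKafkaTemplate;

    // Sends a message to requestTopic and blocks until the reply forwarded
    // to replyTopic arrives (or the template's reply timeout elapses).
    public String sendAndWait(String requestTopic, String replyTopic, String message)
            throws ExecutionException, InterruptedException {
        ProducerRecord<String, String> record = new ProducerRecord<>(requestTopic, message);
        record.headers().add(new RecordHeader(KafkaHeaders.REPLY_TOPIC, replyTopic.getBytes()));
        RequestReplyFuture<String, String, String> future = replyingKafkaTemplate.sendAndReceive(record);
        future.getSendFuture().get();                        // wait for the send to succeed
        ConsumerRecord<String, String> reply = future.get(); // wait for the reply record
        return reply.value();
    }
}
```

A caller would then just write something like kafkaRequestReplyUtil.sendAndWait("topic.quick.request", "topic.quick.reply", "hello"), keeping the header and future plumbing out of business code.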
Note:
Since ReplyingKafkaTemplate is itself backed by a listener container, the round trip can be relatively slow; take care to use it only in scenarios where that latency is acceptable.
2. Starting a KafkaListener on a Schedule
Why start a listener on a schedule?
Here is one scenario where a scheduled start is useful:
Suppose that, in a single-machine setup, we use Kafka to buffer data for persistence. Users are active from 10 a.m. until midnight, so persisting a large volume of data in that window could hurt database performance and degrade the user experience. Instead, we can run the persistence work during the low-activity window, from just after midnight until 10 a.m. the next day.
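The window described above (persist only between midnight and 10 a.m.) can be captured in a small pure-JDK helper. This is an illustrative sketch, not part of the original setup; the class and method names are made up:

```java
import java.time.LocalTime;

// Hypothetical helper mirroring the "persist only between 00:00 and 10:00"
// window described above.
public class QuietWindow {

    private static final LocalTime WINDOW_START = LocalTime.MIDNIGHT;   // 00:00
    private static final LocalTime WINDOW_END = LocalTime.of(10, 0);    // 10:00

    // Returns true when the given time falls inside the low-traffic window
    // [00:00, 10:00), i.e. when the durable listener should be running.
    public static boolean isInQuietWindow(LocalTime now) {
        return !now.isBefore(WINDOW_START) && now.isBefore(WINDOW_END);
    }
}
```

The two scheduled tasks below are simply this predicate turned into cron expressions: one fires when the window opens, the other when it closes.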
Using KafkaListenerEndpointRegistry
One thing worth mentioning here: a method annotated with @KafkaListener is not registered as a bean in the IoC container; instead, its listener container is registered with the KafkaListenerEndpointRegistry. The registry itself is registered as a bean in the Spring IoC container (not via annotations, of course), as you can see in its source:
public class KafkaListenerEndpointRegistry implements DisposableBean, SmartLifecycle, ApplicationContextAware, ApplicationListener<ContextRefreshedEvent> {

    protected final Log logger = LogFactory.getLog(this.getClass());
    private final Map<String, MessageListenerContainer> listenerContainers = new ConcurrentHashMap();
    private int phase = 2147483547;
    private ConfigurableApplicationContext applicationContext;
    private boolean contextRefreshed;
    ......
}
So how do we start a KafkaListener on a schedule?
- Disable the listener's auto-startup (AutoStartup)
- Write two scheduled tasks: one at midnight, one at 10 a.m.
- Start the KafkaListener in the midnight task and stop it in the 10 a.m. task
Pay attention to how the container is started. When the application boots, the listener container is not running, and resume() means resume (after a pause), not start. So we must check whether the container is running: if it is, call resume(); otherwise, call start().
@Component
@EnableScheduling
public class TaskListener {

    private static final Logger log = LoggerFactory.getLogger(TaskListener.class);

    @Autowired
    private KafkaListenerEndpointRegistry registry;

    @Autowired
    private ConsumerFactory consumerFactory;

    @Bean
    public ConcurrentKafkaListenerContainerFactory delayContainerFactory() {
        ConcurrentKafkaListenerContainerFactory container = new ConcurrentKafkaListenerContainerFactory();
        container.setConsumerFactory(consumerFactory);
        // Disable auto-startup
        container.setAutoStartup(false);
        return container;
    }

    @KafkaListener(id = "durable", topics = "topic.quick.durable", containerFactory = "delayContainerFactory")
    public void durableListener(String data) {
        // Persist the data here
        log.info("topic.quick.durable receive : " + data);
    }

    // Scheduled task: start listening at midnight every day
    @Scheduled(cron = "0 0 0 * * ?")
    public void startListener() {
        log.info("start listening");
        // If the container has never been started, start it; otherwise resume it
        if (!registry.getListenerContainer("durable").isRunning()) {
            registry.getListenerContainer("durable").start();
        } else {
            registry.getListenerContainer("durable").resume();
        }
    }

    // Scheduled task: pause listening at 10 a.m. every day
    @Scheduled(cron = "0 0 10 * * ?")
    public void shutDownListener() {
        log.info("stop listening");
        registry.getListenerContainer("durable").pause();
    }
}
Now tweak the scheduler annotations, changing the cron expressions to times a few minutes from now, write some data in, start the Spring Boot application, and quietly wait for the moment to arrive.
// This cron expression fires at 16:24
@Scheduled(cron = "0 24 16 * * ?")
@Test
public void testTask() {
    for (int i = 0; i < 10; i++) {
        kafkaTemplate.send("topic.quick.durable", "this is durable message");
    }
}
In the log below you can see that the listener container started at 16:24 and successfully fetched the records from the topic, and that at 16:28 the container was paused. At that point, run the producer test again and check whether the listener still receives data; it certainly will not.
2018-09-12 16:24:00.003  INFO 2872 --- [pool-1-thread-1] com.viu.kafka.listen.TaskListener        : start listening
2018-09-12 16:24:00.004 INFO 2872 --- [pool-1-thread-1] o.a.k.clients.consumer.ConsumerConfig : ConsumerConfig values:
auto.commit.interval.ms = 1000
auto.offset.reset = latest
bootstrap.servers = [localhost:9092]
check.crcs = true
client.id =
connections.max.idle.ms = 540000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = durable
heartbeat.interval.ms = 3000
interceptor.classes = null
internal.leave.group.on.close = true
isolation.level = read_uncommitted
key.deserializer = class org.apache.kafka.common.serialization.IntegerDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 305000
retry.backoff.ms = 100
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
session.timeout.ms = 15000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
2018-09-12 16:24:00.007 INFO 2872 --- [pool-1-thread-1] o.a.kafka.common.utils.AppInfoParser : Kafka version : 1.0.2
2018-09-12 16:24:00.007 INFO 2872 --- [pool-1-thread-1] o.a.kafka.common.utils.AppInfoParser : Kafka commitId : 2a121f7b1d402825
2018-09-12 16:24:00.007 INFO 2872 --- [pool-1-thread-1] o.s.s.c.ThreadPoolTaskScheduler : Initializing ExecutorService
2018-09-12 16:24:00.012 INFO 2872 --- [ durable-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-9, groupId=durable] Discovered group coordinator admin-PC:9092 (id: 2147483647 rack: null)
2018-09-12 16:24:00.013 INFO 2872 --- [ durable-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-9, groupId=durable] Revoking previously assigned partitions []
2018-09-12 16:24:00.014 INFO 2872 --- [ durable-0-C-1] o.s.k.l.KafkaMessageListenerContainer : partitions revoked: []
2018-09-12 16:24:00.014 INFO 2872 --- [ durable-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-9, groupId=durable] (Re-)joining group
2018-09-12 16:24:00.021 INFO 2872 --- [ durable-0-C-1] o.a.k.c.c.internals.AbstractCoordinator : [Consumer clientId=consumer-9, groupId=durable] Successfully joined group with generation 6
2018-09-12 16:24:00.021 INFO 2872 --- [ durable-0-C-1] o.a.k.c.c.internals.ConsumerCoordinator : [Consumer clientId=consumer-9, groupId=durable] Setting newly assigned partitions [topic.quick.durable-0]
2018-09-12 16:24:00.024 INFO 2872 --- [ durable-0-C-1] o.s.k.l.KafkaMessageListenerContainer : partitions assigned: [topic.quick.durable-0]
2018-09-12 16:24:00.042 INFO 2872 --- [ durable-0-C-1] com.viu.kafka.listen.TaskListener : topic.quick.durable receive : this is durable message
2018-09-12 16:24:00.043 INFO 2872 --- [ durable-0-C-1] com.viu.kafka.listen.TaskListener : topic.quick.durable receive : this is durable message
2018-09-12 16:24:00.043 INFO 2872 --- [ durable-0-C-1] com.viu.kafka.listen.TaskListener : topic.quick.durable receive : this is durable message
2018-09-12 16:24:00.043 INFO 2872 --- [ durable-0-C-1] com.viu.kafka.listen.TaskListener : topic.quick.durable receive : this is durable message
2018-09-12 16:24:00.043 INFO 2872 --- [ durable-0-C-1] com.viu.kafka.listen.TaskListener : topic.quick.durable receive : this is durable message
2018-09-12 16:24:00.043 INFO 2872 --- [ durable-0-C-1] com.viu.kafka.listen.TaskListener : topic.quick.durable receive : this is durable message
2018-09-12 16:24:00.043 INFO 2872 --- [ durable-0-C-1] com.viu.kafka.listen.TaskListener : topic.quick.durable receive : this is durable message
2018-09-12 16:24:00.043 INFO 2872 --- [ durable-0-C-1] com.viu.kafka.listen.TaskListener : topic.quick.durable receive : this is durable message
2018-09-12 16:24:00.043 INFO 2872 --- [ durable-0-C-1] com.viu.kafka.listen.TaskListener : topic.quick.durable receive : this is durable message
2018-09-12 16:24:00.043 INFO 2872 --- [ durable-0-C-1] com.viu.kafka.listen.TaskListener : topic.quick.durable receive : this is durable message
2018-09-12 16:28:00.023  INFO 2872 --- [pool-1-thread-1] com.viu.kafka.listen.TaskListener        : stop listening