Background:
Sometimes a system needs to talk to several message brokers at once. A typical case: you must consume messages from another team's service, but that service only publishes to Kafka, while your own system is built on RabbitMQ (or some other broker).
What?
What is Spring Cloud Stream? Let's look at the official description:
Spring Cloud Stream is a framework for building highly scalable event-driven microservices connected with shared messaging systems.
The framework provides a flexible programming model built on already established and familiar Spring idioms and best practices, including support for persistent pub/sub semantics, consumer groups, and stateful partitions.
The core building blocks of Spring Cloud Stream:
Destination Binders: the components responsible for integrating with external messaging systems.
Destination Bindings: the bridge between the external messaging systems and the application code (producers/consumers) provided by the user.
Message: the canonical data structure that producers and consumers use to communicate with Destination Binders.
The Spring Cloud Stream application model (see the architecture diagram in the official docs):
Why?
As for why you would use Spring Cloud Stream, in my humble opinion there are two main reasons:
1. Spring Cloud Stream is a high-level abstraction over various message brokers. It hides differences in how the underlying middleware operates and reduces the coupling between your code and any specific broker.
2. It is part of the Spring family, so developers already familiar with the Spring ecosystem can pick it up easily.
How?
That was a lot of background on Spring Cloud Stream, but it is not our focus; the official documentation covers it well. Let's get to the actual topic.
step1: First, start both brokers locally. Here I run Kafka and RabbitMQ (mainly because these are the only two I have used 😄).
The two containers are orchestrated with docker-compose (how to write compose files is left for you to explore).
kafka:
version: '3'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
    container_name: "zookeeper"
    restart: always
  kafka:
    image: wurstmeister/kafka:2.12-2.3.0
    container_name: "kafka"
    ports:
      - "9092:9092"
    environment:
      - TZ=CST-8
      - KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181
      # optional: create topics automatically
      - KAFKA_AUTO_CREATE_TOPICS_ENABLE=true
      - KAFKA_ADVERTISED_HOST_NAME=${IP}
      - KAFKA_ADVERTISED_PORT=9092
      - KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://${IP}:9092
      - KAFKA_LISTENERS=PLAINTEXT://:9092
      # optional: JVM heap size
      - KAFKA_HEAP_OPTS=-Xmx1G -Xms1G
      # optional: keep data for 7 days (the default)
      - KAFKA_LOG_RETENTION_HOURS=168
    volumes:
      # map kafka's data files out of the container
      - ${DATA_PATH}/kafka:/kafka
      - /var/run/docker.sock:/var/run/docker.sock
    restart: always
rabbitmq:
version: '3'
services:
  rabbitmq:
    hostname: rabbitmq
    environment:
      RABBITMQ_DEFAULT_VHOST: "/"
      RABBITMQ_DEFAULT_USER: "root"
      RABBITMQ_DEFAULT_PASS: "root"
    image: "rabbitmq:3.7.16-management"
    restart: always
    volumes:
      - "./data:/var/lib/rabbitmq"
      - "./log:/var/log/rabbitmq/log"
    ports:
      - "15672:15672"
      - "4369:4369"
      - "5672:5672"
      - "25672:25672"
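With the two compose files saved, the brokers can be brought up roughly as below. The file names `kafka-compose.yml` and `rabbitmq-compose.yml` are my assumption (name yours however you like), and `IP` / `DATA_PATH` must be set to match the `${IP}` and `${DATA_PATH}` placeholders in the Kafka compose file:

```shell
# assumed file names and example values; substitute your own
export IP=127.0.0.1
export DATA_PATH=/tmp/mq-data

docker-compose -f kafka-compose.yml up -d
docker-compose -f rabbitmq-compose.yml up -d

# verify the zookeeper, kafka and rabbitmq containers are running
docker ps --format '{{.Names}}'
```

If everything started, the RabbitMQ management UI should also be reachable at http://localhost:15672 (root/root, per the compose file).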
step2:
Build the service.
First create a Spring Boot project and add the Spring Cloud Stream dependencies:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.1.5.RELEASE</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>
    <groupId>com.example</groupId>
    <artifactId>demo</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>demo</name>
    <description>Demo project for Spring Boot</description>

    <properties>
        <java.version>1.8</java.version>
        <!-- the Greenwich release train matches Spring Boot 2.1.x -->
        <spring-cloud.version>Greenwich.SR5</spring-cloud.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-stream</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-stream-binder-rabbit</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-stream-binder-kafka</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.12</version>
            <scope>test</scope>
        </dependency>
    </dependencies>

    <dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>org.springframework.cloud</groupId>
                <artifactId>spring-cloud-dependencies</artifactId>
                <version>${spring-cloud.version}</version>
                <type>pom</type>
                <scope>import</scope>
            </dependency>
        </dependencies>
    </dependencyManagement>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
</project>
Next, the configuration file (application.yml):
server:
  port: 12345
  servlet:
    context-path: /test
spring:
  cloud:
    stream:
      binders:
        localRabbit:
          type: rabbit
          environment:
            spring:
              rabbitmq:
                addresses: localhost
                port: 5672
                username: root
                password: root
                virtual-host: /
        localKafka:
          type: kafka
          # kafka binder properties live under spring.cloud.stream.kafka.binder
          environment:
            spring:
              cloud:
                stream:
                  kafka:
                    binder:
                      zk-nodes: localhost:2181
                      brokers: localhost:9092
      bindings:
        rabbitInput:
          binder: localRabbit
          content-type: application/json
          destination: localRabbit
          group: test
        rabbitOutput:
          binder: localRabbit
          content-type: application/json
          destination: localRabbit
          group: test
        kafkaInput:
          binder: localKafka
          content-type: application/json
          destination: localKafka
          group: test
        kafkaOutput:
          binder: localKafka
          content-type: application/json
          destination: localKafka
          group: test
      default-binder: localKafka
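A few notes on these keys, as I understand them: `destination` maps to a Kafka topic or a RabbitMQ exchange, `group` puts consumers into a Kafka consumer group or a shared RabbitMQ queue, and any binding that omits `binder:` falls back to `default-binder`. As a purely hypothetical example (the `extraOutput` binding and `extraTopic` destination are not part of this demo), a Kafka binding relying on the default binder could be as short as:

```yaml
# hypothetical binding: no `binder:` key, so it uses
# default-binder (localKafka) automatically
extraOutput:
  content-type: application/json
  destination: extraTopic
```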
Finally, declare the two channel interfaces that match the bindings configured above:
package com.example.demo.config;

import org.springframework.cloud.stream.annotation.Input;
import org.springframework.messaging.SubscribableChannel;

// the channel names must match the binding names in application.yml
public interface InputChannel {

    String RABBIT_INPUT = "rabbitInput";

    @Input(RABBIT_INPUT)
    SubscribableChannel rabbitInput();

    String KAFKA_INPUT = "kafkaInput";

    @Input(KAFKA_INPUT)
    SubscribableChannel kafkaInput();
}
package com.example.demo.config;

import org.springframework.cloud.stream.annotation.Output;
import org.springframework.messaging.MessageChannel;

// output channels, again named after the bindings in application.yml
public interface OutputChannel {

    String RABBIT_OUTPUT = "rabbitOutput";

    @Output(RABBIT_OUTPUT)
    MessageChannel rabbitOutput();

    String KAFKA_OUTPUT = "kafkaOutput";

    @Output(KAFKA_OUTPUT)
    MessageChannel kafkaOutput();
}
Next, the producer:
package com.example.demo.producer;

import com.example.demo.config.OutputChannel;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.messaging.Message;
import org.springframework.messaging.MessageChannel;
import org.springframework.stereotype.Component;

@Component
public class MessageProducer {

    @Autowired
    private OutputChannel channel;

    public boolean sendMessage(MessageChannel channel, Message<?> message) {
        return channel.send(message);
    }

    public boolean sendMessageToKafka(Message<?> message) {
        return sendMessage(channel.kafkaOutput(), message);
    }

    public boolean sendMessageToRabbit(Message<?> message) {
        return sendMessage(channel.rabbitOutput(), message);
    }
}
The listener:
package com.example.demo.listener;

import com.example.demo.config.InputChannel;
import com.example.demo.producer.MessageProducer;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.messaging.Message;
import org.springframework.stereotype.Component;

@Component
public class MessageListener {

    @Autowired
    private MessageProducer producer;

    @StreamListener(InputChannel.RABBIT_INPUT)
    public void rabbitListen(Message<byte[]> message) {
        String s = new String(message.getPayload());
        System.out.println("rabbit listener received msg:" + s);
    }

    // this is the bridge: every message consumed from Kafka
    // is re-published to RabbitMQ
    @StreamListener(InputChannel.KAFKA_INPUT)
    public void kafkaListen(Message<byte[]> message) {
        String s = new String(message.getPayload());
        System.out.println("kafka listener received msg:" + s);
        producer.sendMessageToRabbit(message);
    }
}
The main class:
package com.example.demo;

import com.example.demo.config.InputChannel;
import com.example.demo.config.OutputChannel;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.stream.annotation.EnableBinding;

@SpringBootApplication
@EnableBinding(value = {InputChannel.class, OutputChannel.class})
public class DemoApplication {

    public static void main(String[] args) {
        SpringApplication.run(DemoApplication.class, args);
    }
}
Now we can write a controller to test the whole thing:
package com.example.demo.controller;

import com.example.demo.producer.MessageProducer;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class TestController {

    @Autowired
    private MessageProducer producer;

    @PostMapping("/msg")
    public boolean sendMessage(@RequestBody String message) {
        return producer.sendMessageToKafka(MessageBuilder.withPayload(message).build());
    }
}
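With the application running, the whole chain can be exercised with a single request (the port and the `/test` context path come from application.yml; the JSON body is just an example):

```shell
curl -X POST http://localhost:12345/test/msg \
     -H 'Content-Type: application/json' \
     -d '{"hello": "world"}'
```

The controller publishes the body to Kafka; the Kafka listener consumes it and re-publishes it to RabbitMQ; the Rabbit listener then prints it. So if everything is wired up, the console should show both the "kafka listener received msg:" and the "rabbit listener received msg:" lines.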
Now start the application.
And that's it: we have successfully integrated multiple message brokers in a single service, with messages consumed from Kafka forwarded on to RabbitMQ.