I. The architecture of Spring Cloud Stream
1. The source (emitter)
When a service is ready to publish a message, it does so through a source. A source is a Spring-annotated interface that takes a plain Java object (POJO) representing the message to be published. The source accepts the message, serializes it (the default serialization is JSON), and publishes it to a channel.
2. The channel
A channel is an abstraction over a queue: it holds a message after a message producer publishes it and until a message consumer consumes it. A channel name is always associated with a target queue name.
3. The binder
The binder is the part of the Spring Cloud Stream framework containing the Spring code that talks to a specific messaging platform.
4. The sink
In Spring Cloud Stream, a service receives messages from a queue through a sink. The sink listens on a channel for incoming messages and deserializes each message back into a POJO.
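These four pieces can be sketched, very loosely, as a plain-Java pipeline. No Spring is involved and all names are illustrative; the binder would be the piece that replaces the in-memory queue below with a real broker such as Kafka:

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class StreamSketch {
    // Channel: an abstraction over a queue that holds messages between producer and consumer
    static Queue<String> channel = new ArrayDeque<>();

    // Source (emitter): takes a POJO, serializes it (JSON by default in Spring Cloud Stream),
    // and publishes it to the channel
    static void emit(String orgId) {
        String json = "{\"organizationId\":\"" + orgId + "\"}";
        channel.add(json);
    }

    // Sink: listens on the channel and deserializes the message back into a POJO
    static String sink() {
        String json = channel.poll();
        return json.substring(json.indexOf(':') + 2, json.length() - 2); // crude "deserialization"
    }

    public static void main(String[] args) {
        emit("org-42");
        System.out.println(sink()); // org-42
    }
}
```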
II. Writing a simple message producer and consumer
A message is passed from service A (e.g. OrganizationService) to service B (the licensing service); all service B does is print a log message to the console.
1. Write the message producer in service A
1) Add the dependencies to service A's pom.xml:
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-stream-kafka</artifactId>
</dependency>
<!-- https://mvnrepository.com/artifact/org.springframework.cloud/spring-cloud-stream -->
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream</artifactId>
</dependency>
2) Add the @EnableBinding annotation to the bootstrap class
This tells Spring Cloud Stream to bind the application to a message broker.
Service A, e.g. OrganizationService:
package com.example.organization;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.messaging.Source;
@SpringBootApplication
@EnableBinding(Source.class)
public class OrganizationServiceApplication {
    // other code omitted
    public static void main(String[] args) {
        SpringApplication.run(OrganizationServiceApplication.class, args);
    }
}
The Source.class passed to @EnableBinding tells Spring Cloud Stream that this service will communicate with the message broker through the set of channels defined on the Source class. Channels sit on top of message queues. Spring Cloud Stream comes with a default channel set that can be configured to talk to the message broker.
Publishing a message to the message broker, SimpleSourceBean:
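For reference, the default Source channel set is a very small interface: one "output" channel the service publishes to. The sketch below mirrors its shape using stand-in types so it compiles without Spring on the classpath (the real types live under org.springframework.cloud.stream):

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Stand-ins for Spring's @Output annotation and MessageChannel type
@Retention(RetentionPolicy.RUNTIME)
@interface Output { String value(); }
interface MessageChannel { boolean send(Object message); }

// Mirrors the shape of org.springframework.cloud.stream.messaging.Source:
// a single channel named "output" that the service publishes to
interface Source {
    String OUTPUT = "output";
    @Output(Source.OUTPUT)
    MessageChannel output();
}

public class SourceShape {
    public static void main(String[] args) {
        // A trivial in-memory channel standing in for the broker-backed one Spring injects
        MessageChannel inMemory = message -> { System.out.println("sent: " + message); return true; };
        Source source = () -> inMemory;
        source.output().send("hello");
    }
}
```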
package com.example.organization.events.source;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.stream.messaging.Source;
import org.springframework.messaging.support.MessageBuilder;
import org.springframework.stereotype.Component;
// some imports omitted
@Component
public class SimpleSourceBean {
    private Source source;
    private static final Logger logger = LoggerFactory.getLogger(SimpleSourceBean.class);
    @Autowired
    public SimpleSourceBean(Source source){ // inject a Source interface for the service to use
        this.source = source;
    }
    public void publishOrgChange(String action, String orgId){
        logger.debug("Sending Kafka message {} for Organization Id: {}", action, orgId);
        OrganizationChangeModel change = new OrganizationChangeModel(
                OrganizationChangeModel.class.getTypeName(),
                action,
                orgId,
                UserContext.getCorrelationId()); // the message to publish is a plain Java POJO
        source.output().send(MessageBuilder.withPayload(change).build()); // when ready to send, use the send() method of the channel defined on the Source class
    }
}
OrganizationChangeModel:
package com.example.organization.events.models;
public class OrganizationChangeModel {
    private String type;
    private String action;
    private String organizationId;
    private String correlationId;
    public OrganizationChangeModel(String type, String action, String organizationId, String correlationId) {
        this.type = type;
        this.action = action;
        this.organizationId = organizationId;
        this.correlationId = correlationId;
    }
    public String getType() {
        return type;
    }
    public void setType(String type) {
        this.type = type;
    }
    public String getAction() {
        return action;
    }
    public void setAction(String action) {
        this.action = action;
    }
    public String getOrganizationId() {
        return organizationId;
    }
    public void setOrganizationId(String organizationId) {
        this.organizationId = organizationId;
    }
    public String getCorrelationId() {
        return correlationId;
    }
    public void setCorrelationId(String correlationId) {
        this.correlationId = correlationId;
    }
}
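Because the default serialization is JSON, a SAVE event for a hypothetical organization id would travel over the channel as a payload shaped roughly like the one built by hand below (in the real service Jackson produces it from the POJO; the ids here are made up):

```java
public class ChangePayloadSketch {
    public static void main(String[] args) {
        // Hand-built approximation of the JSON Jackson would produce from OrganizationChangeModel
        String json = "{"
                + "\"type\":\"com.example.organization.events.models.OrganizationChangeModel\","
                + "\"action\":\"SAVE\","
                + "\"organizationId\":\"hypothetical-org-id\","
                + "\"correlationId\":\"hypothetical-corr-id\""
                + "}";
        System.out.println(json);
    }
}
```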
Spring Cloud Stream configuration for publishing the message, application.yml:
spring:
  cloud:
    stream:
      bindings: # spring.cloud.stream.bindings is the start of the configuration the service needs to publish messages to the Spring Cloud Stream message broker
        output: # output is the channel name; it maps to the source.output() channel in the code
          destination: orgChangeTopic # destination is the message queue (topic) the message is written to
          content-type: application/json # content-type gives a hint about what type of messages will be sent and received
      kafka:
        binder: # spring.cloud.stream.kafka.binder tells Spring to use Kafka as the message bus for the service (RabbitMQ could be used instead)
          zkNodes: localhost # zkNodes and brokers tell Spring Cloud Stream the network locations of Zookeeper and Kafka
          brokers: localhost
Publishing the message in service A, e.g. in OrganizationService:
package com.example.organization.services;
import com.example.organization.events.source.SimpleSourceBean;
import com.example.organization.model.Organization;
import com.example.organization.repository.OrganizationRepository;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import java.util.UUID;
@Service
public class OrganizationService {
    @Autowired
    private OrganizationRepository orgRepository;
    @Autowired
    SimpleSourceBean simpleSourceBean; // SimpleSourceBean is injected into OrganizationService
    public Organization getOrg(String organizationId) {
        return orgRepository.findById(organizationId);
    }
    public void saveOrg(Organization org){
        org.setId(UUID.randomUUID().toString());
        orgRepository.save(org);
        simpleSourceBean.publishOrgChange("SAVE", org.getId()); // call simpleSourceBean.publishOrgChange() after the state change
    }
    public void updateOrg(Organization org){
        orgRepository.save(org);
        simpleSourceBean.publishOrgChange("UPDATE", org.getId());
    }
    public void deleteOrg(String orgId){
        orgRepository.delete(orgId);
        simpleSourceBean.publishOrgChange("DELETE", orgId);
    }
}
2. Write the message consumer in service B (e.g. licensing-service)
1) Add the following dependencies to service B's pom.xml:
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-starter-stream-kafka</artifactId>
</dependency>
<!-- https://mvnrepository.com/artifact/org.springframework.cloud/spring-cloud-stream -->
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream</artifactId>
</dependency>
2) Annotate service B's bootstrap class with @EnableBinding
The difference from service A is the value passed to @EnableBinding:
@EnableBinding(Sink.class) // use the channels defined on the Sink interface to listen for incoming messages
public class LicensingServiceApplication {
    private static final Logger logger = LoggerFactory.getLogger(LicensingServiceApplication.class);
    // other code omitted
    public static void main(String[] args) {
        SpringApplication.run(LicensingServiceApplication.class, args);
    }
    // Spring Cloud Stream executes this method every time a message arrives on the input channel
    // Spring Cloud Stream automatically deserializes the message coming off the channel into a POJO named OrganizationChangeModel
    @StreamListener(Sink.INPUT)
    public void loggerSink(OrganizationChangeModel orgChange) {
        logger.debug("Received an event for organization id {}", orgChange.getOrganizationId());
    }
}
Because service B (LicensingService) is the consumer of the message, Sink.class is the value passed to @EnableBinding.
Likewise, the actual mapping from the message broker's topic to the input channel is done in service B's (LicensingService's) configuration, application.yml:
spring:
  cloud:
    stream:
      bindings:
        input: # spring.cloud.stream.bindings.input maps the input channel to the orgChangeTopic queue
          destination: orgChangeTopic
          content-type: application/json
          group: licensingGroup # the group property guarantees a message is processed only once by the service; it names the consumer group that will consume the messages
      kafka:
        binder:
          zkNodes: localhost
          brokers: localhost
There may be multiple services, each with multiple instances, listening to the same message queue, but only one service instance within a group of instances should consume and process a message. The group property identifies the consumer group the service belongs to: as long as the service instances share the same group name, Spring Cloud Stream and the underlying message broker guarantee that only one copy of a message will be consumed by the instances belonging to that group. For service B (LicensingService), the group property value is licensingGroup.
3) Seeing the message service in action
Once the call to service A (OrganizationService) completes, the log output appears in service B (LicensingService).
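The guarantee described above can be illustrated with a toy simulation (plain Java, not broker code; the "reportingGroup" is a hypothetical second consumer group): every group receives its own copy of each message, but inside a group only one instance handles it.

```java
import java.util.*;

public class ConsumerGroupSketch {
    public static void main(String[] args) {
        // Two instances of the licensing service share one group; a second, hypothetical
        // group shows that each group still gets its own copy of every message.
        Map<String, List<String>> groups = new LinkedHashMap<>();
        groups.put("licensingGroup", Arrays.asList("licensing-1", "licensing-2"));
        groups.put("reportingGroup", Collections.singletonList("reporting-1"));

        List<String> messages = Arrays.asList("SAVE org-1", "UPDATE org-1");
        Map<String, Integer> next = new HashMap<>(); // round-robin cursor per group
        for (String msg : messages) {
            for (Map.Entry<String, List<String>> e : groups.entrySet()) {
                List<String> instances = e.getValue();
                int i = next.merge(e.getKey(), 1, Integer::sum) % instances.size();
                // exactly one instance per group handles each message
                System.out.println(e.getKey() + " -> " + instances.get(i) + " : " + msg);
            }
        }
    }
}
```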
III. A Spring Cloud Stream use case: distributed caching
1. Using Redis to cache lookups
1) Configure service B (LicensingService) to include the Spring Data Redis dependencies
Add the dependencies to the LicensingService pom.xml:
<!-- https://mvnrepository.com/artifact/org.springframework.data/spring-data-redis -->
<dependency>
<groupId>org.springframework.data</groupId>
<artifactId>spring-data-redis</artifactId>
<version>1.7.4.RELEASE</version>
</dependency>
<!-- https://mvnrepository.com/artifact/redis.clients/jedis -->
<dependency>
<groupId>redis.clients</groupId>
<artifactId>jedis</artifactId>
<version>2.9.0</version>
</dependency>
<!-- https://mvnrepository.com/artifact/org.apache.commons/commons-pool2 -->
<dependency>
<groupId>org.apache.commons</groupId>
<artifactId>commons-pool2</artifactId>
<version>2.0</version>
</dependency>
2) Construct a database connection to the Redis server
In service B (LicensingService), the bootstrap class:
package com.example.licenses;
import com.example.licenses.config.ServiceConfig;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.messaging.Sink;
import org.springframework.context.annotation.Bean;
import org.springframework.data.redis.connection.jedis.JedisConnectionFactory;
@EnableBinding(Sink.class)
public class LicensingServiceApplication {
    @Autowired
    private ServiceConfig serviceConfig;
    // other code omitted
    // Expose a JedisConnectionFactory as a Spring bean. Once connected to Redis,
    // the connection is used to create a Spring RedisTemplate object.
    @Bean
    public JedisConnectionFactory jedisConnectionFactory() {
        JedisConnectionFactory jedisConnFactory = new JedisConnectionFactory();
        jedisConnFactory.setHostName(serviceConfig.getRedisServer());
        jedisConnFactory.setPort(serviceConfig.getRedisPort());
        return jedisConnFactory;
    }
}
ServiceConfig, which reads the values from the configuration:
package com.example.licenses.config;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;
@Component
public class ServiceConfig {
    @Value("${example.property}")
    private String exampleProperty = "";
    @Value("${redis.server}")
    private String redisServer = "";
    @Value("${redis.port}")
    private String redisPort = "";
    public String getExampleProperty(){
        return exampleProperty;
    }
    public String getRedisServer(){
        return redisServer;
    }
    public Integer getRedisPort(){
        return Integer.parseInt(redisPort);
    }
}
3) Define the Spring Data Redis repository
Because Spring Data is used to access the Redis store, a repository interface needs to be defined.
For example, OrganizationRedisRepository:
package com.example.licenses.repository;
import com.example.licenses.model.Organization;
public interface OrganizationRedisRepository {
    void saveOrganization(Organization org);
    void updateOrganization(Organization org);
    void deleteOrganization(String organizationId);
    Organization findOrganization(String organizationId);
}
The implementation class, OrganizationRedisRepositoryImpl:
package com.example.licenses.repository;
import com.example.licenses.model.Organization;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.core.HashOperations;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.stereotype.Repository;
import javax.annotation.PostConstruct;
// The @Repository annotation tells Spring this class is a repository used with Spring Data
@Repository
public class OrganizationRedisRepositoryImpl implements OrganizationRedisRepository {
    private static final String HASH_NAME = "organization";
    private RedisTemplate<String, Organization> redisTemplate;
    private HashOperations<String, String, Organization> hashOperations; // HashOperations holds a set of helper methods for running data operations against the Redis server
    public OrganizationRedisRepositoryImpl(){
        super();
    }
    @Autowired
    public OrganizationRedisRepositoryImpl(RedisTemplate<String, Organization> redisTemplate) {
        this.redisTemplate = redisTemplate;
    }
    @PostConstruct
    private void init() {
        hashOperations = redisTemplate.opsForHash();
    }
    // All interactions with Redis store a single Organization object, keyed by its organization id
    @Override
    public void saveOrganization(Organization org) {
        hashOperations.put(HASH_NAME, org.getId(), org);
    }
    @Override
    public void updateOrganization(Organization org) {
        hashOperations.put(HASH_NAME, org.getId(), org);
    }
    @Override
    public void deleteOrganization(String organizationId) {
        hashOperations.delete(HASH_NAME, organizationId);
    }
    @Override
    public Organization findOrganization(String organizationId) {
        return hashOperations.get(HASH_NAME, organizationId);
    }
}
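The repository above stores every Organization in a single Redis hash named "organization", keyed by organization id. The in-memory sketch below mimics those hash operations (roughly HSET/HGET/HDEL) with a plain Map; it is illustrative only, no Redis involved:

```java
import java.util.HashMap;
import java.util.Map;

public class HashOpsSketch {
    // One named hash -> (field -> value), like the "organization" hash in the repository
    static Map<String, Map<String, String>> store = new HashMap<>();

    static void put(String hash, String key, String value) {
        store.computeIfAbsent(hash, h -> new HashMap<>()).put(key, value);
    }
    static String get(String hash, String key) {
        return store.getOrDefault(hash, new HashMap<>()).get(key);
    }
    static void delete(String hash, String key) {
        store.getOrDefault(hash, new HashMap<>()).remove(key);
    }

    public static void main(String[] args) {
        put("organization", "org-1", "{\"id\":\"org-1\",\"name\":\"Acme\"}");
        System.out.println(get("organization", "org-1")); // prints the stored record
        delete("organization", "org-1");
        System.out.println(get("organization", "org-1")); // null after deletion
    }
}
```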
4) Use Redis in service B (licensing-service) to store and read service A's (organization) data
OrganizationRestTemplateClient implements the caching logic:
package com.example.licenses.clients;
import com.example.licenses.model.Organization;
import com.example.licenses.repository.OrganizationRedisRepository;
import com.example.licenses.utils.UserContext;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.HttpMethod;
import org.springframework.http.ResponseEntity;
import org.springframework.stereotype.Component;
import org.springframework.web.client.RestTemplate;
@Component
public class OrganizationRestTemplateClient {
    @Autowired
    RestTemplate restTemplate;
    @Autowired
    OrganizationRedisRepository orgRedisRepo;
    private static final Logger logger = LoggerFactory.getLogger(OrganizationRestTemplateClient.class);
    private Organization checkRedisCache(String organizationId) {
        try {
            return orgRedisRepo.findOrganization(organizationId);
        } catch (Exception ex) {
            logger.error("Error encountered while trying to retrieve organization {} from the Redis cache. Exception {}", organizationId, ex);
            return null;
        }
    }
    private void cacheOrganizationObject(Organization org) {
        try {
            orgRedisRepo.saveOrganization(org);
        } catch (Exception ex) {
            logger.error("Unable to cache organization {} in Redis. Exception {}", org.getId(), ex);
        }
    }
    public Organization getOrganization(String organizationId) {
        logger.debug("In Licensing Service.getOrganization: {}", UserContext.getCorrelationId());
        Organization org = checkRedisCache(organizationId);
        if (org != null) {
            logger.debug("I have successfully retrieved an organization {} from the redis cache: {}", organizationId, org);
            return org;
        }
        logger.debug("Unable to locate organization from the redis cache: {}.", organizationId);
        ResponseEntity<Organization> restExchange =
                restTemplate.exchange(
                        "http://zuulservice/api/organization/v1/organizations/{organizationId}",
                        HttpMethod.GET,
                        null, Organization.class, organizationId);
        /* Cache the record retrieved from the remote call */
        org = restExchange.getBody();
        if (org != null) {
            cacheOrganizationObject(org);
        }
        return org;
    }
}
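getOrganization() above is the classic cache-aside pattern: check the cache, fall back to the remote call on a miss, then populate the cache for the next lookup. A minimal generic sketch of that flow (hypothetical names, no Spring):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class CacheAsideSketch {
    static int remoteCalls = 0;

    public static void main(String[] args) {
        Map<String, String> cache = new HashMap<>();
        // Stand-in for the restTemplate.exchange(...) call to the organization service
        Function<String, String> remote = id -> { remoteCalls++; return "org-record-" + id; };

        Function<String, String> getOrganization = id -> {
            String cached = cache.get(id);   // 1. check the cache
            if (cached != null) return cached;
            String fresh = remote.apply(id); // 2. miss -> call the remote service
            cache.put(id, fresh);            // 3. populate the cache for next time
            return fresh;
        };

        getOrganization.apply("org-1"); // miss: hits the remote service
        getOrganization.apply("org-1"); // hit: served from cache
        System.out.println("remote calls: " + remoteCalls); // remote calls: 1
    }
}
```

The events arriving in the message handler (section 2 below the custom-channel discussion) exist precisely to evict stale entries from such a cache.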
2. Custom channels
To define multiple channels for an application, or to customize channel names, you can define your own interface and expose as many input and output channels as the application needs.
1) Define a custom input channel for service B (licensing-service), CustomChannels:
package com.example.licenses.events;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.messaging.SubscribableChannel;
public interface CustomChannels {
    // @Input is a method-level annotation
    // Each channel exposed through @Input must return a SubscribableChannel
    @Input("inboundOrgChanges")
    SubscribableChannel orgs();
}
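The channel name lives in the annotation, not the method name. The plain-Java sketch below shows (very loosely) how such a name can be read back via reflection, which is the kind of lookup that connects the annotated method to the matching spring.cloud.stream.bindings.&lt;channelName&gt; entry in the configuration; the stand-in @Input annotation is ours, not Spring's:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Method;

// Stand-in for Spring's @Input annotation
@Retention(RetentionPolicy.RUNTIME)
@interface Input { String value(); }

interface CustomChannels {
    @Input("inboundOrgChanges")
    Object orgs();
}

public class ChannelNameSketch {
    public static void main(String[] args) throws Exception {
        Method m = CustomChannels.class.getMethod("orgs");
        // The framework resolves the binding by the annotation value, not the method name
        String channelName = m.getAnnotation(Input.class).value();
        System.out.println(channelName); // inboundOrgChanges
    }
}
```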
2) Modify service B (licensing-service) to use the custom input channel
Map the custom input channel name to the Kafka topic in service B's configuration, application.yml:
spring:
  cloud:
    stream:
      bindings:
        inboundOrgChanges: # the channel name changes from input to inboundOrgChanges
          destination: orgChangeTopic
          content-type: application/json
          group: licensingGroup
      kafka:
        binder:
          zkNodes: localhost
          brokers: localhost
3) Use the new custom channel in OrganizationChangeHandler, clearing the cache when a message is received.
OrganizationChangeHandler:
package com.example.licenses.events.handlers;
import com.example.licenses.events.CustomChannels;
import com.example.licenses.events.models.OrganizationChangeModel;
import com.example.licenses.repository.OrganizationRedisRepository;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
@EnableBinding(CustomChannels.class)
public class OrganizationChangeHandler {
    @Autowired
    private OrganizationRedisRepository organizationRedisRepository;
    private static final Logger logger = LoggerFactory.getLogger(OrganizationChangeHandler.class);
    @StreamListener("inboundOrgChanges") // pass the channel name inboundOrgChanges to @StreamListener instead of Sink.INPUT
    public void loggerSink(OrganizationChangeModel orgChange) {
        logger.debug("Received a message of type " + orgChange.getType());
        switch (orgChange.getAction()) {
            case "GET":
                logger.debug("Received a GET event from the organization service for organization id {}", orgChange.getOrganizationId());
                break;
            case "SAVE":
                logger.debug("Received a SAVE event from the organization service for organization id {}", orgChange.getOrganizationId());
                break;
            case "UPDATE":
                logger.debug("Received an UPDATE event from the organization service for organization id {}", orgChange.getOrganizationId());
                organizationRedisRepository.deleteOrganization(orgChange.getOrganizationId()); // evict the stale cache entry
                break;
            case "DELETE":
                logger.debug("Received a DELETE event from the organization service for organization id {}", orgChange.getOrganizationId());
                organizationRedisRepository.deleteOrganization(orgChange.getOrganizationId());
                break;
            default:
                logger.error("Received an UNKNOWN event from the organization service of type {}", orgChange.getType());
                break;
        }
    }
}