This post records a problem I ran into.
Project requirement: each microservice subscribes to different Kafka topics depending on the physical IP of the machine it actually starts on.
Spring Cloud Stream Kafka dependencies:
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-streams</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-bus</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-stream</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-stream-binder-kafka</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-stream-binder-kafka-streams</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>
Channel interface:
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.Output;
import org.springframework.messaging.MessageChannel;
import org.springframework.messaging.SubscribableChannel;
import org.springframework.stereotype.Component;

/**
 * Custom channel interface.
 * Created by tym on 2019-6-3 17:35:57.
 */
@Component
public interface CommonStreams {

    /** Name of the inbound (receiving) channel */
    String INPUT = "channel-in";

    /** Name of the outbound (sending) channel */
    String OUTPUT = "channel-out";

    /**
     * Inbound message channel.
     */
    @Input(INPUT)
    SubscribableChannel inboundGreetings();

    /**
     * Outbound message channel.
     */
    @Output(OUTPUT)
    MessageChannel outboundGreetings();
}
This defines the channels used for receiving and sending messages.
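For reference, the sending side could look like this (a sketch, not part of the original project; Producer is a hypothetical class name):

```java
import com.alibaba.fastjson.JSONObject;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.messaging.support.MessageBuilder;

/**
 * Hypothetical producer: sends a JSON payload through the outbound channel.
 */
@EnableBinding(CommonStreams.class)
public class Producer {

    private final CommonStreams commonStreams;

    public Producer(CommonStreams commonStreams) {
        this.commonStreams = commonStreams;
    }

    public void send(JSONObject payload) {
        // publishes to whatever topic "channel-out" is currently bound to
        commonStreams.outboundGreetings().send(MessageBuilder.withPayload(payload).build());
    }
}
```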
Consumer:
import com.alibaba.fastjson.JSONObject;
import com.ecs.common.stream.CommonStreams;
import lombok.extern.slf4j.Slf4j;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.messaging.Message;

/**
 * Consumer.
 * Created by tym on 2019-6-3 17:35:57.
 */
@Slf4j
@EnableBinding(CommonStreams.class)
public class Consumer {

    /**
     * Message listener.
     * @param message the incoming message
     */
    @StreamListener(CommonStreams.INPUT)
    public void process(Message<JSONObject> message) {
        // receive the task
        JSONObject messageObj = message.getPayload();
    }
}
@StreamListener(CommonStreams.INPUT) binds the inbound channel to the message-handling method.
Configuration file:
cloud:
  stream:
    kafka:
      binder:
        # Kafka brokers
        brokers: ${maven.kafka.binder.brokers}
    # channel-to-topic mapping (replaced at runtime by the custom logic below)
    bindings:
      channel-in:
        destination: _test,test3
        contentType: application/json
      channel-out:
        destination: _test
        contentType: application/json
channel-in and channel-out correspond to the custom channel names. destination is the topic(s) to subscribe to.
This style of configuration assumes you know in advance which topics each service's consumer will subscribe to. If the same microservice, deployed on different machines, needs to subscribe to different topics, static configuration forces you to edit the config file, repackage and redeploy for every machine; that is both tedious and error-prone.
In my project I instead store the mapping between every IP address and microservice ID (a custom per-service ID; currently each microservice has exactly one consumer) in a configuration table. When the service starts, it takes its local physical IP and its own ID (from the config file) and looks up the topic to subscribe to in that table.
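For illustration, the cache key used for that lookup could be assembled from the node ID and host IP like this (a hypothetical stand-in; the real project uses its own KeyUtil.makeKey and its key layout may differ):

```java
// Hypothetical stand-in for the project's KeyUtil: joins the key parts
// with ':' into one Redis-style lookup key.
public class TopicKeyUtil {

    public static String makeKey(String... parts) {
        return String.join(":", parts);
    }

    public static void main(String[] args) {
        // e.g. headKey, connectionTag, nodeMidKey, node, hostAddress
        System.out.println(makeKey("rule", "eth0", "node", "1001", "192.168.1.10"));
        // prints rule:eth0:node:1001:192.168.1.10
    }
}
```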
Concrete implementation (each consumer calls the following method on startup):
public void bindChannelAndTopic(String node) throws SocketException {
    // node is the microservice's ID
    log.info("init bind rule, current node: {}", node);
    // locate the local IPv4 address
    Enumeration<NetworkInterface> allNetInterfaces = NetworkInterface.getNetworkInterfaces();
    // model class that wraps the binding (queue) attributes
    RuleWorkNodeTopicModel ruleWorkNodeTopicModel = null;
    RuleWorkNodeTopicModel tempModel = null;
    while (allNetInterfaces.hasMoreElements()) {
        NetworkInterface netInterface = allNetInterfaces.nextElement();
        Enumeration<InetAddress> addresses = netInterface.getInetAddresses();
        while (addresses.hasMoreElements()) {
            InetAddress ip = addresses.nextElement();
            if (ip instanceof Inet4Address && !ip.isLoopbackAddress()) {
                String hostAddress = ip.getHostAddress();
                if (hostAddress.indexOf(connectionTag) == -1) {
                    // build the lookup key and try the cache first
                    String key = KeyUtil.makeKey(headKey, connectionTag, nodeMidKey, node, hostAddress);
                    log.debug("key used to resolve the bound topic from the ip: {}", key);
                    tempModel = redisTemplateUtil.get(key, RuleWorkNodeTopicModel.class);
                    if (tempModel != null) {
                        ruleWorkNodeTopicModel = tempModel;
                        currentIp = hostAddress;
                    } else {
                        // cache miss: fall back to the database
                        RuleWorkNodeTopicModel parameter = new RuleWorkNodeTopicModel();
                        parameter.setIp(hostAddress);
                        parameter.setNode(node);
                        List<RuleWorkNodeTopicModel> ruleWorkNodeTopicModels = ruleWorkNodeTopicDao.selectByParameter(parameter);
                        if (ruleWorkNodeTopicModels != null && !ruleWorkNodeTopicModels.isEmpty()) {
                            ruleWorkNodeTopicModel = ruleWorkNodeTopicModels.get(0);
                            // write the result back to the cache
                            redisTemplateUtil.save(key, ruleWorkNodeTopicModel);
                            currentIp = hostAddress;
                        }
                    }
                }
            }
        }
    }
    // bind the matching topic for this environment
    if (ruleWorkNodeTopicModel != null) {
        // read the resolved binding attributes
        String channel = ruleWorkNodeTopicModel.getChannel();
        String destinationStr = ruleWorkNodeTopicModel.getTopic();
        currentTopic = destinationStr;
        String groupId = ruleWorkNodeTopicModel.getGroupId();
        String contentType = ruleWorkNodeTopicModel.getContentType();
        log.debug("binding to initialize - channel: {}, destination: {}, groupId: {}, contentType: {}",
                channel, destinationStr, groupId, contentType);
        Map<String, BindingProperties> map = bindingServiceProperties.getBindings();
        // overwrite the values Spring Cloud Stream read from the config file
        BindingProperties bindingProperties = map.get(channel);
        bindingProperties.setDestination(destinationStr);
        bindingProperties.setContentType(contentType);
        bindingProperties.setGroup(groupId);
    }
}
The core idea: after Spring Cloud Stream has read the configuration file into BindingProperties, but before the channels are actually bound to topics, replace the binding attributes.
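The timing of that replacement matters: it must run after BindingServiceProperties has been populated but before the binding service performs the actual channel-to-topic binding. One possible way to guarantee that ordering (a sketch under assumptions, not necessarily how the original project hooks in; DynamicTopicBinder and the resolved topic name are hypothetical) is a BeanPostProcessor that rewrites the properties as soon as the BindingServiceProperties bean is initialized:

```java
import org.springframework.beans.factory.config.BeanPostProcessor;
import org.springframework.cloud.stream.config.BindingProperties;
import org.springframework.cloud.stream.config.BindingServiceProperties;
import org.springframework.stereotype.Component;

/**
 * Hypothetical hook: BeanPostProcessors run as each bean is initialized,
 * so rewriting BindingServiceProperties here happens before the beans
 * that depend on it perform the actual channel-to-topic binding.
 */
@Component
public class DynamicTopicBinder implements BeanPostProcessor {

    @Override
    public Object postProcessAfterInitialization(Object bean, String beanName) {
        if (bean instanceof BindingServiceProperties) {
            BindingServiceProperties props = (BindingServiceProperties) bean;
            BindingProperties binding = props.getBindings().get("channel-in");
            if (binding != null) {
                // in practice, resolved via the IP/node lookup described above
                binding.setDestination("topic-resolved-at-startup");
            }
        }
        return bean;
    }
}
```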