Preface: this article is original; if you spot a mistake, corrections are welcome in the comments!
I. Integrating Logback with Spring Boot
1. Dependency:
Logback already ships with spring-boot-starter-web, so no extra dependency is needed.
2. Configuration:
Create logback-spring.xml under the resources directory (src/main/resources):
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <appender name="consoleApp" class="ch.qos.logback.core.ConsoleAppender">
        <layout class="ch.qos.logback.classic.PatternLayout">
            <pattern>
                %date{yyyy-MM-dd HH:mm:ss.SSS} %-5level[%thread]%logger{56}.%method:%L -%msg%n
            </pattern>
        </layout>
    </appender>
    <appender name="fileInfoApp" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <!-- deny ERROR so this file only receives INFO/WARN and below -->
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>ERROR</level>
            <onMatch>DENY</onMatch>
            <onMismatch>ACCEPT</onMismatch>
        </filter>
        <encoder>
            <pattern>
                %date{yyyy-MM-dd HH:mm:ss.SSS} %-5level[%thread]%logger{56}.%method:%L -%msg%n
            </pattern>
        </encoder>
        <!-- rolling policy -->
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <!-- absolute path; for time-based rolling the pattern must contain a %d{...} date token -->
            <fileNamePattern>absolute-path</fileNamePattern>
        </rollingPolicy>
    </appender>
    <appender name="fileErrorApp" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <!-- ThresholdFilter accepts ERROR and above -->
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <level>ERROR</level>
        </filter>
        <encoder>
            <pattern>
                %date{yyyy-MM-dd HH:mm:ss.SSS} %-5level[%thread]%logger{56}.%method:%L -%msg%n
            </pattern>
        </encoder>
        <!-- rolling policy -->
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <!-- absolute path -->
            <fileNamePattern>absolute-path</fileNamePattern>
            <!-- maximum number of archived files to keep; older ones are deleted.
                 E.g. with monthly rolling and maxHistory set to 1, only the most
                 recent month's file is kept and earlier files are removed -->
            <maxHistory>1</maxHistory>
        </rollingPolicy>
    </appender>
    <root level="INFO">
        <appender-ref ref="consoleApp"/>
        <appender-ref ref="fileInfoApp"/>
        <appender-ref ref="fileErrorApp"/>
    </root>
</configuration>
3. Using it in a class:
Logger logger = LoggerFactory.getLogger(this.getClass());
logger.debug("..."); / logger.info("..."); / logger.warn("..."); / logger.error("...");
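Putting the pieces together, here is a minimal sketch of a class that obtains and uses an SLF4J logger (the class and messages are illustrative, not from the original post):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class OrderService {
    // one static logger per class is the conventional SLF4J pattern
    private static final Logger logger = LoggerFactory.getLogger(OrderService.class);

    public void placeOrder(String orderId) {
        logger.info("placing order {}", orderId); // {} placeholders avoid string concatenation
        try {
            // ... business logic ...
        } catch (Exception e) {
            // passing the exception as the last argument logs its stack trace
            logger.error("failed to place order {}", orderId, e);
        }
    }
}
```

With the configuration above, the INFO line goes to the console and fileInfoApp, while the ERROR line also reaches fileErrorApp.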
II. Integrating Elasticsearch with Spring Boot
1. Installation:
See my blog post: https://blog.csdn.net/weixin_43934607/article/details/100538881
2. Usage:
- Dependency:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-elasticsearch</artifactId>
</dependency>
- Configuration:
spring.data.elasticsearch.cluster-name=elasticsearch
spring.data.elasticsearch.cluster-nodes=ip-address:9300
spring.data.elasticsearch.repositories.enabled=true
3. Create an entity class that implements Serializable and annotate it:
@Document(indexName = "...", type = "...")
public class User implements Serializable {
}
4. Create a dao-layer interface for the entity that extends ElasticsearchRepository (no methods need to be written):
@Component
public interface UserRepository extends ElasticsearchRepository<User, Long> {
}
5. Inject the dao-layer interface into a Controller and call .save(entity) to store a document.
6. Use the same interface's .search(QueryBuilder) to query:
QueryBuilder builder = QueryBuilders.matchQuery("name", name);
Iterable<User> list = userRepository.search(builder);
Note: a later post covers Elasticsearch in detail.
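The save and search steps above can be sketched in one controller, assuming the User entity and UserRepository shown earlier (the endpoint paths are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

import org.elasticsearch.index.query.QueryBuilders;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.*;

@RestController
public class UserController {
    @Autowired
    private UserRepository userRepository;

    // indexes the document into Elasticsearch
    @PostMapping("/user")
    public User save(@RequestBody User user) {
        return userRepository.save(user);
    }

    // full-text match on the "name" field
    @GetMapping("/user/search")
    public List<User> search(@RequestParam String name) {
        List<User> result = new ArrayList<>();
        userRepository.search(QueryBuilders.matchQuery("name", name)).forEach(result::add);
        return result;
    }
}
```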
III. Integrating ActiveMQ with Spring Boot
1. Installation
See my blog post: https://blog.csdn.net/weixin_43934607/article/details/100538881
2. Dependencies:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-activemq</artifactId>
</dependency>
<dependency>
    <groupId>org.apache.activemq</groupId>
    <artifactId>activemq-pool</artifactId>
    <version>5.14.5</version>
</dependency>
3. Configuration:
# JMS broker; if it runs on another machine, remember to open the firewall and port
spring.activemq.broker-url=tcp://xxx.xxx.xxx.xxx:61616
# cluster (failover) configuration
#spring.activemq.broker-url=failover:(tcp://localhost:61616,tcp://localhost:61617)
spring.activemq.user=admin
spring.activemq.password=admin
# the following setting requires the activemq-pool dependency above
spring.activemq.pool.enabled=false
4. Annotate the application class with
@EnableJms
To support both point-to-point (queue) and publish/subscribe (topic) messaging, add a topic-enabled container factory bean:
@Bean
public JmsListenerContainerFactory<?> jmsListenerContainerTopic(ConnectionFactory activeMQConnectionFactory) {
    DefaultJmsListenerContainerFactory bean = new DefaultJmsListenerContainerFactory();
    bean.setPubSubDomain(true);
    bean.setConnectionFactory(activeMQConnectionFactory);
    return bean;
}
5. Create a producer:
- Create a service class.
- Inject JmsMessagingTemplate:
@Autowired
private JmsMessagingTemplate jmsTemplate;
- Create an ordinary method that sends the message (it can also send asynchronously with @Async).
- Call the API to send:
jmsTemplate.convertAndSend(new ActiveMQQueue("queueName"), text)
or jmsTemplate.convertAndSend(new ActiveMQTopic("topicName"), text)
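The producer steps above can be sketched as a single service class (the queue and topic names are illustrative assumptions):

```java
import org.apache.activemq.command.ActiveMQQueue;
import org.apache.activemq.command.ActiveMQTopic;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jms.core.JmsMessagingTemplate;
import org.springframework.stereotype.Service;

@Service
public class MsgService {
    @Autowired
    private JmsMessagingTemplate jmsTemplate;

    // point-to-point: exactly one consumer receives each message
    public void sendToQueue(String text) {
        jmsTemplate.convertAndSend(new ActiveMQQueue("demo.queue"), text);
    }

    // publish/subscribe: every subscribed consumer receives the message
    public void sendToTopic(String text) {
        jmsTemplate.convertAndSend(new ActiveMQTopic("demo.topic"), text);
    }
}
```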
6. Create a consumer:
- Create a "...Consumer" class and register it with Spring.
- Configure which messages the class receives with an annotation:
- point-to-point:
@JmsListener(destination = "queueName")
- topic:
@JmsListener(destination = "topicName", containerFactory = "jmsListenerContainerTopic")
- Create a method with a String parameter (the received payload) and a void return type.
- If the consumer should itself send a message, change the return type to String and add the annotation
@SendTo("queueOrTopicName")
- The consumer can also spawn worker threads to improve throughput.
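A consumer class following these steps might look like this (the destination names are hypothetical and chosen to match a producer using "demo.queue"/"demo.topic"; this is a sketch, not code from the original post):

```java
import org.springframework.jms.annotation.JmsListener;
import org.springframework.messaging.handler.annotation.SendTo;
import org.springframework.stereotype.Component;

@Component
public class DemoConsumer {
    // point-to-point listener
    @JmsListener(destination = "demo.queue")
    @SendTo("reply.queue") // the String return value is forwarded to this queue
    public String receiveFromQueue(String text) {
        System.out.println("queue received: " + text);
        return "processed: " + text;
    }

    // topic listener; requires the topic-enabled container factory from step 4
    @JmsListener(destination = "demo.topic", containerFactory = "jmsListenerContainerTopic")
    public void receiveFromTopic(String text) {
        System.out.println("topic received: " + text);
    }
}
```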
IV. Integrating RocketMQ with Spring Boot
1. Installation
See my blog post: https://blog.csdn.net/weixin_43934607/article/details/100538881
2. Usage
- Dependencies:
<dependency>
    <groupId>org.apache.rocketmq</groupId>
    <artifactId>rocketmq-common</artifactId>
    <version>4.4.0</version>
</dependency>
<dependency>
    <groupId>org.apache.rocketmq</groupId>
    <artifactId>rocketmq-client</artifactId>
    <version>4.4.0</version>
</dependency>
- Configuration (kept in the properties file so it is easy to change later):
# consumer group name
apache.rocketmq.consumer.PushConsumer=Consumer
# producer group name
apache.rocketmq.producer.producerGroup=Producer
# NameServer address
apache.rocketmq.namesrvAddr=192.168.56.129:9876
3. Sending messages
- Create a message-producer class (e.g. MsgProducer) and register it with Spring.
- Inject the settings from the properties file:
@Value("${apache.rocketmq.producer.producerGroup}")
private String producerGroup;
@Value("${apache.rocketmq.namesrvAddr}")
private String namesrvAddr;
- Annotate the init method with
@PostConstruct // starts the producer when the application initializes
- Create a DefaultMQProducer and set its producerGroup, namesrvAddr and vipChannelEnabled:
mqProducer = new DefaultMQProducer(producerGroup);
mqProducer.setNamesrvAddr(namesrvAddr);
mqProducer.setVipChannelEnabled(false);
- Call the DefaultMQProducer's start() to start the producer.
Full producer code:
@Component
@PropertySource({"classpath:application.properties"})
public class MsgProducer {
    @Value("${apache.rocketmq.producer.producerGroup}")
    private String producerGroup;
    @Value("${apache.rocketmq.namesrvAddr}")
    private String namesrvAddr;
    private DefaultMQProducer mqProducer;

    public DefaultMQProducer getMqProducer() {
        return mqProducer;
    }

    @PostConstruct
    public void initMQ() {
        mqProducer = new DefaultMQProducer(producerGroup);
        mqProducer.setNamesrvAddr(namesrvAddr);
        mqProducer.setVipChannelEnabled(false);
        try {
            mqProducer.start();
        } catch (MQClientException e) {
            e.printStackTrace();
        }
    }

    @PreDestroy // runs when the application shuts down
    public void destroy() {
        mqProducer.shutdown();
    }
}
- Call the producer from a Controller and send a message:
- build a message: new Message(topic, tag, msg)
- get the configured DefaultMQProducer
- call .send(Message) and inspect the returned SendResult
Note: if the topic does not exist, RocketMQ creates it automatically by default. Run sh mqbroker -m in bin/ to check whether autoCreateTopicEnable is on (auto-creation only takes effect when the RocketMQ version installed on Linux exactly matches the version of the client dependency).
Code:
@Autowired
private MsgProducer msgProducer;

@GetMapping("/send")
@ResponseBody
public String sendMsg() throws InterruptedException, RemotingException, MQClientException, MQBrokerException {
    Message message = new Message("t1", "t11", "sendMsg".getBytes());
    SendResult result = msgProducer.getMqProducer().send(message);
    System.out.println("sent msgId: " + result.getMsgId());
    return "success";
}
4. Receiving messages
- Create a ...Consumer class and register it with Spring.
- Read the consumerGroup and namesrvAddr from the properties file:
@Value("${apache.rocketmq.consumer.PushConsumer}")
private String consumerGroup;
@Value("${apache.rocketmq.namesrvAddr}")
private String namesrvAddr;
- Annotate the consumer-initialization method with @PostConstruct.
- Create the consumer object:
new DefaultMQPushConsumer(consumerGroup)
- Set namesrvAddr, the subscription, and ConsumeFromWhere (where consumption starts):
DefaultMQPushConsumer consumer = new DefaultMQPushConsumer(consumerGroup);
consumer.setNamesrvAddr(namesrvAddr);
// subscribe the consumer to a Topic and Tag; * means all tags
consumer.subscribe("testTopic", "*");
// CONSUME_FROM_LAST_OFFSET: default; start from the tail of the queue, skipping history
// CONSUME_FROM_FIRST_OFFSET: start from the head, consuming all history still stored on the broker
consumer.setConsumeFromWhere(ConsumeFromWhere.CONSUME_FROM_LAST_OFFSET);
- Register a listener that processes each message and returns the consumption status:
consumer.registerMessageListener(new MessageListenerConcurrently() {
    @Override
    public ConsumeConcurrentlyStatus consumeMessage(List<MessageExt> list, ConsumeConcurrentlyContext consumeConcurrentlyContext) {
        try {
            // process list.get(0) here
        } catch (Exception e) {
            return ConsumeConcurrentlyStatus.RECONSUME_LATER;
        }
        return ConsumeConcurrentlyStatus.CONSUME_SUCCESS;
    }
});
- Call consumer.start() to start the consumer.
Full consumer code:
@Component
@PropertySource({"classpath:application.properties"})
public class MsgConsumer {
    @Value("${apache.rocketmq.consumer.PushConsumer}")
    private String consumerGroup;
    @Value("${apache.rocketmq.namesrvAddr}")
    private String namesrvAddr;
    private DefaultMQPushConsumer consumer; // field, so destroy() can reach it

    @PostConstruct
    public void init() throws MQClientException {
        consumer = new DefaultMQPushConsumer(consumerGroup);
        consumer.setNamesrvAddr(namesrvAddr);
        // subscribe the consumer to a Topic and Tag; * means all tags
        consumer.subscribe("testTopic", "*");
        // CONSUME_FROM_LAST_OFFSET: default; start from the tail of the queue, skipping history
        // CONSUME_FROM_FIRST_OFFSET: start from the head, consuming all history still stored on the broker
        consumer.setConsumeFromWhere(ConsumeFromWhere.CONSUME_FROM_FIRST_OFFSET);
        consumer.registerMessageListener(new MessageListenerConcurrently() {
            @Override
            public ConsumeConcurrentlyStatus consumeMessage(List<MessageExt> list, ConsumeConcurrentlyContext consumeConcurrentlyContext) {
                try {
                    System.out.println("received: " + new String(list.get(0).getBody()));
                } catch (Exception e) {
                    return ConsumeConcurrentlyStatus.RECONSUME_LATER;
                }
                return ConsumeConcurrentlyStatus.CONSUME_SUCCESS;
            }
        });
        consumer.start();
    }

    @PreDestroy
    public void destroy() {
        consumer.shutdown();
    }
}
Note: later posts cover advanced RocketMQ usage in detail.