Preface
A while back my team lead gave me the task of adding user operation logging to our project, to be finished as soon as possible. After some thought, I chose Spring Boot + Filter + Redis Stream + MyBatis to implement a message queue that records user operation logs. The code has been tested and uploaded to Gitee; feel free to download it if you are interested.
I. Why Redis Stream?
Compared with mainstream message queues such as RabbitMQ, RocketMQ, and Kafka, Redis Stream is decidedly lightweight: it needs little configuration and has a low learning curve. The only thing to watch is that it requires Redis 5.0 or later.
II. Why a Filter?
User operation logging could also be implemented with Spring AOP, but AOP has a steep learning curve (you need to understand pointcuts and aspects) and requires larger changes to the code.
All I need is to intercept each request and response (ServletRequest request, ServletResponse response) and apply the appropriate filtering to them, and a Filter satisfies that need completely.
If you want to understand the differences and relationships among Spring filters, interceptors, and AOP, the article linked below sums them up well:
Spring: differences and relationships among filters, interceptors, and AOP
III. Implementation steps
1. Create the project
You can let IDEA generate and initialize the project.
If the connection times out while creating the project, choose Custom and enter the Aliyun mirror: https://start.aliyun.com/
I chose Java version 8 and a Maven project.
Project structure after creation:
Add the pom dependencies:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.study</groupId>
<artifactId>log</artifactId>
<version>0.0.1-SNAPSHOT</version>
<name>log</name>
<description>Demo project for Spring Boot</description>
<properties>
<java.version>1.8</java.version>
<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
<project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
<spring-boot.version>2.3.12.RELEASE</spring-boot.version>
</properties>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
<exclusions>
<exclusion>
<groupId>org.junit.vintage</groupId>
<artifactId>junit-vintage-engine</artifactId>
</exclusion>
</exclusions>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-web</artifactId>
</dependency>
<!-- mysql begin -->
<dependency>
<groupId>mysql</groupId>
<artifactId>mysql-connector-java</artifactId>
</dependency>
<dependency>
<groupId>org.mybatis.spring.boot</groupId>
<artifactId>mybatis-spring-boot-starter</artifactId>
<version>2.2.2</version>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>
<dependency>
<groupId>com.alibaba</groupId>
<artifactId>fastjson</artifactId>
<version>2.0.6</version>
</dependency>
<!-- commons-pool2, required when Spring Boot 2.x integrates Redis -->
<dependency>
<groupId>org.apache.commons</groupId>
<artifactId>commons-pool2</artifactId>
<version>2.4.3</version>
</dependency>
</dependencies>
<dependencyManagement>
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-dependencies</artifactId>
<version>${spring-boot.version}</version>
<type>pom</type>
<scope>import</scope>
</dependency>
</dependencies>
</dependencyManagement>
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.8.1</version>
<configuration>
<source>1.8</source>
<target>1.8</target>
<encoding>UTF-8</encoding>
</configuration>
</plugin>
<plugin>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-maven-plugin</artifactId>
<version>${spring-boot.version}</version>
<configuration>
<mainClass>com.study.log.LogApplication</mainClass>
</configuration>
<executions>
<execution>
<id>repackage</id>
<goals>
<goal>repackage</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
</project>
In the pom above, besides the dependencies generated by IDEA's initializer, I added the MySQL driver, MyBatis, Redis, Alibaba's fastjson, and commons-pool2 (needed when Spring Boot 2.x integrates Redis).
Importing the dependencies can hit a few pitfalls that make the application fail to start:
Pitfall 1: the Spring Boot version is incompatible with the spring-boot-starter-data-redis version.
Pitfall 2: commons-pool2, which Spring Boot 2.x needs to integrate Redis, is missing:
org.springframework.context.ApplicationContextException: Unable to start web server; nested exception is org.springframework.boot.web.server.WebServerException: Unable to start embedded Tomcat
Caused by: java.lang.ClassNotFoundException: org.apache.commons.pool2.impl.GenericObjectPoolConfig
Solutions:
Pitfall 1: upgrade or downgrade Spring Boot as appropriate. I use Spring Boot 2.3.12.RELEASE and do not pin a version for spring-boot-starter-data-redis, so Maven resolves a compatible version automatically.
Pitfall 2: add the commons-pool2 dependency shown in the pom above.
The configuration file:
server:
  port: 8888
  servlet:
    context-path: /log
spring:
  datasource:
    driver-class-name: com.mysql.cj.jdbc.Driver
    url: jdbc:mysql://localhost:3306/izp?createDatabaseIfNotExist=true&serverTimezone=Asia/Shanghai&useUnicode=true&characterEncoding=utf8&useSSL=false
    username: root
    password: root
    hikari:
      maximum-pool-size: 30
      connection-timeout: 3000
  redis:
    host: 127.0.0.1
    port: 6379
    database: 0
    timeout: 1800000
    lettuce:
      pool:
        max-active: 20
        max-wait: -1
        max-idle: 5
        min-idle: 0
redis-stream:
  # stream names (comma-separated)
  names: izp_log_redis_stream,mystream2
  # stream consumer group names
  groups: group1
mybatis:
  mapper-locations: classpath:com/study/log/xml/*Mapper.xml
  type-aliases-package: com.study.log.po.*
  configuration:
    log-impl: org.apache.ibatis.logging.stdout.StdOutImpl
2. Create a Redis Stream utility class
package com.study.log.utils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.domain.Range;
import org.springframework.data.redis.connection.stream.*;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.stereotype.Component;
import java.time.Duration;
import java.util.List;
import java.util.Map;
/**
 * @Description Redis Stream utility (add/read/delete stream messages)
 * @author haochuanwan
 * @date 2022/10/19 14:22
 **/
@Component
public class RedisStreamUtil {
private static final Logger log = LoggerFactory.getLogger(RedisStreamUtil.class);
private final RedisTemplate<String,String> redisTemplate;
@Autowired
public RedisStreamUtil(RedisTemplate<String,String> redisTemplate){
this.redisTemplate = redisTemplate;
}
/**
 * XADD: append a message to the stream
 * @param key the stream key
 * @param message the message payload to store
 */
public RecordId addStream(String key, Map<String,Object> message){
log.info("message appended to stream: {}", message);
RecordId add = redisTemplate.opsForStream().add(key, message);
log.info("append succeeded, recordId: {}, key: {}", add, key);
return add;
}
public void addGroup(String key, String groupName){
redisTemplate.opsForStream().createGroup(key,groupName);
}
public void delField(String key, String fieldId){
redisTemplate.opsForStream().delete(key,fieldId);
}
/**
 * Read all messages from the stream, deleting each record after it is read
 * @param key the stream key
 * @return the records read, or null
 */
public List<MapRecord<String,Object,Object>> getAllStream(String key){
List<MapRecord<String, Object, Object>> range = redisTemplate.opsForStream().range(key, Range.closed("-", "+"));
if(range == null){
return null;
}
for(MapRecord<String,Object,Object> mapRecord : range){
redisTemplate.opsForStream().delete(key,mapRecord.getId());
}
return range;
}
public void getStream(String key){
List<MapRecord<String, Object, Object>> read = redisTemplate.opsForStream().read(StreamReadOptions.empty().block(Duration.ofMillis(1000*30)).count(2), StreamOffset.latest(key));
System.out.println(read);
}
public void getStreamByGroup(String key, String groupName,String consumerName){
List<MapRecord<String, Object, Object>> read = redisTemplate.opsForStream().read(Consumer.from(groupName, consumerName), StreamReadOptions.empty(), StreamOffset.create(key, ReadOffset.lastConsumed()));
log.info("group read :{}",read);
}
public boolean hasKey(String key){
Boolean aBoolean = redisTemplate.hasKey(key);
return aBoolean==null?false:aBoolean;
}
}
3. Create the message producer
Code (example):
package com.study.log.filter;
import com.alibaba.fastjson.JSONObject;
import com.study.log.utils.DateTimeUtils;
import com.study.log.utils.RedisStreamUtil;
import com.study.log.utils.RequestUtil;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.core.annotation.Order;
import org.springframework.stereotype.Component;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.filter.OncePerRequestFilter;
import org.springframework.web.util.ContentCachingRequestWrapper;
import org.springframework.web.util.ContentCachingResponseWrapper;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebFilter;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;
/**
 * @Description Produce messages (one per logged request)
 * @author haochuanwan
 * @date 2022/11/2 11:22
 **/
@Component
@WebFilter(urlPatterns = "/*")
@Order(-999)
public class IzpLogFilter extends OncePerRequestFilter {
private final RedisStreamUtil redisStreamUtil;
@Autowired
public IzpLogFilter(RedisStreamUtil redisStreamUtil){
this.redisStreamUtil = redisStreamUtil;
}
private final static Logger log = LoggerFactory.getLogger(IzpLogFilter.class);
private final static String IZP_LOG_REDIS_STREAM = "izp_log_redis_stream";
@Override
protected void doFilterInternal(HttpServletRequest httpServletRequest, HttpServletResponse httpServletResponse, FilterChain filterChain) throws ServletException, IOException {
// skip GET requests
if (RequestMethod.GET.name().equals(httpServletRequest.getMethod())) {
// GET requests are not logged; continue the filter chain
filterChain.doFilter(httpServletRequest, httpServletResponse);
return;
}
// wrap request and response so that reading the body here does not consume it for the controller
ContentCachingRequestWrapper req = new ContentCachingRequestWrapper(httpServletRequest);
ContentCachingResponseWrapper resp = new ContentCachingResponseWrapper(httpServletResponse);
//HTTP request method
String httpType = httpServletRequest.getMethod();
//service type derived from the first URI segment
String serverType = httpServletRequest.getRequestURI().split("/")[1];
//client IP from the request (getRemoteAddr() alone is unreliable behind proxies)
// String ip = httpServletRequest.getRemoteAddr();
String ip = RequestUtil.getIpAddress(httpServletRequest);
//full request URL
String requestURL = httpServletRequest.getRequestURL().toString();
String interfacePath = httpServletRequest.getServletPath();
// continue the filter chain
filterChain.doFilter(req, resp);
/*
 * getParameterMap() below returns form parameters but not a JSON body.
 * If the map is empty, fall back to reading the cached request body;
 * if that is also empty, an empty string is stored.
 */
// request payload
Map reqParameterMap = req.getParameterMap();
String urlInputPara = "";
if (reqParameterMap.size() != 0) {
urlInputPara = JSONObject.toJSONString(reqParameterMap);
} else {
byte[] reqBody = req.getContentAsByteArray();
urlInputPara = new String(reqBody, StandardCharsets.UTF_8);
}
log.info(" urlInputPara = {}", urlInputPara);
//response body
byte[] respBody = resp.getContentAsByteArray();
String responseBody = new String(respBody, StandardCharsets.UTF_8);
log.info("response body = {}", responseBody);
//put all of the above into the Redis stream as a map
Map logMap = new HashMap();
logMap.put("httpType", httpType);
logMap.put("serverType", serverType);
logMap.put("requestURL", requestURL);
logMap.put("interfacePath", interfacePath);
logMap.put("urlInputPara", urlInputPara);
logMap.put("responseBody", responseBody);
// logMap.put("operatorId", String.valueOf(user.getId()));
// logMap.put("operator", String.valueOf(user.getName()));
logMap.put("ip", ip);
logMap.put("operationTime", DateTimeUtils.getNowDateTimeStr());
redisStreamUtil.addStream(IZP_LOG_REDIS_STREAM, logMap);
// Finally remember to respond to the client with the cached data.
resp.copyBodyToResponse();
}
}
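The parameter-extraction precedence in the filter above (form/query parameters first, cached body as the fallback) can be sketched as plain Java. PayloadExtractor, its hand-rolled serialization, and the sample data are all illustrative; the real filter serializes the map with fastjson's JSONObject.toJSONString instead:

```java
import java.nio.charset.StandardCharsets;
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the filter's payload-precedence rule: log the form/query parameter
// map when it is non-empty, otherwise fall back to the cached request body.
public class PayloadExtractor {

    public static String extractPayload(Map<String, String[]> parameterMap, byte[] cachedBody) {
        if (parameterMap != null && !parameterMap.isEmpty()) {
            // form/query parameters win when present
            StringBuilder sb = new StringBuilder("{");
            boolean first = true;
            for (Map.Entry<String, String[]> e : parameterMap.entrySet()) {
                if (!first) {
                    sb.append(",");
                }
                sb.append("\"").append(e.getKey()).append("\":\"")
                        .append(String.join(",", e.getValue())).append("\"");
                first = false;
            }
            return sb.append("}").toString();
        }
        // empty map: the payload was a request body (e.g. JSON), read from the cache
        return new String(cachedBody, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        Map<String, String[]> form = new LinkedHashMap<>();
        form.put("bookName", new String[]{"test"});
        byte[] body = "{\"detail\":\"json body\"}".getBytes(StandardCharsets.UTF_8);

        System.out.println(extractPayload(form, body));                   // {"bookName":"test"}
        System.out.println(extractPayload(Collections.emptyMap(), body)); // {"detail":"json body"}
    }
}
```

The same two-branch decision appears verbatim in doFilterInternal above; separating it out only makes the precedence easier to test.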
At first I obtained the user's parameters in the filter by consuming the input stream (request.getInputStream()), but that left the downstream controller without parameters: an @RequestBody parameter is itself bound by reading the request stream, so the second read finds nothing and fails with HttpMessageNotReadableException: Required request body is missing.
Cause: a stream is backed by data held (at least partly) in memory, and each read advances the current mark position; the next read continues from that position. Once the stream has been fully consumed, a second read therefore returns nothing.
Solution:
Use ContentCachingRequestWrapper and ContentCachingResponseWrapper. They solve the problem that the HttpServletRequest input stream can only be read once, with one caveat: request.getInputStream() must not be called before filterChain.doFilter.
ContentCachingRequestWrapper req = new ContentCachingRequestWrapper(httpServletRequest);
ContentCachingResponseWrapper resp = new ContentCachingResponseWrapper(httpServletResponse);
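The read-once behaviour described above can be reproduced with a plain InputStream, outside the servlet API (StreamReadOnceDemo and its sample body are illustrative):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

// Plain-Java illustration of why the controller sees an empty body after a
// filter has consumed request.getInputStream(): the stream is exhausted by
// the first full read.
public class StreamReadOnceDemo {

    // Drain the stream to a String, the way request-body binding does.
    public static String readAll(InputStream in) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        byte[] chunk = new byte[256];
        int n;
        while ((n = in.read(chunk)) != -1) {
            buf.write(chunk, 0, n);
        }
        return new String(buf.toByteArray(), StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws IOException {
        byte[] body = "{\"bookName\":\"demo\"}".getBytes(StandardCharsets.UTF_8);
        InputStream stream = new ByteArrayInputStream(body);

        String first = readAll(stream);   // the "filter" reads the body
        String second = readAll(stream);  // the "controller" reads again

        System.out.println(first);            // the full JSON body
        System.out.println(second.isEmpty()); // true: nothing left to read

        // ContentCachingRequestWrapper avoids this by keeping a copy of the
        // bytes as they are read, exposed later via getContentAsByteArray().
    }
}
```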
4. Fix garbled Redis keys and values
package com.study.log.config;
import com.fasterxml.jackson.annotation.JsonAutoDetect;
import com.fasterxml.jackson.annotation.PropertyAccessor;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.springframework.cache.CacheManager;
import org.springframework.cache.annotation.CachingConfigurerSupport;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.cache.RedisCacheConfiguration;
import org.springframework.data.redis.cache.RedisCacheManager;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.serializer.Jackson2JsonRedisSerializer;
import org.springframework.data.redis.serializer.RedisSerializationContext;
import org.springframework.data.redis.serializer.RedisSerializer;
import org.springframework.data.redis.serializer.StringRedisSerializer;
import java.time.Duration;
/**
 * @Description Fix garbled Redis keys and values
 * @author haochuanwan
 * @date 2022/11/2 11:22
 **/
@EnableCaching // enable caching
@Configuration // configuration class
public class RedisConfig extends CachingConfigurerSupport {
@Bean
public RedisTemplate<String, Object> redisTemplate(RedisConnectionFactory factory) {
RedisTemplate<String, Object> template = new RedisTemplate<>();
RedisSerializer<String> redisSerializer = new StringRedisSerializer();
Jackson2JsonRedisSerializer jackson2JsonRedisSerializer = new Jackson2JsonRedisSerializer(Object.class);
ObjectMapper om = new ObjectMapper();
om.setVisibility(PropertyAccessor.ALL, JsonAutoDetect.Visibility.ANY);
om.enableDefaultTyping(ObjectMapper.DefaultTyping.NON_FINAL);
jackson2JsonRedisSerializer.setObjectMapper(om);
template.setConnectionFactory(factory);
//key serializer
template.setKeySerializer(redisSerializer);
//value serializer
template.setValueSerializer(jackson2JsonRedisSerializer);
//hash value serializer
template.setHashValueSerializer(jackson2JsonRedisSerializer);
return template;
}
@Bean
public CacheManager cacheManager(RedisConnectionFactory factory) {
RedisSerializer<String> redisSerializer = new StringRedisSerializer();
Jackson2JsonRedisSerializer jackson2JsonRedisSerializer = new Jackson2JsonRedisSerializer(Object.class);
//avoid cache query conversion exceptions
ObjectMapper om = new ObjectMapper();
om.setVisibility(PropertyAccessor.ALL, JsonAutoDetect.Visibility.ANY);
om.enableDefaultTyping(ObjectMapper.DefaultTyping.NON_FINAL);
jackson2JsonRedisSerializer.setObjectMapper(om);
// configure serialization (fixes garbled values); entry TTL is 600 seconds
RedisCacheConfiguration config = RedisCacheConfiguration.defaultCacheConfig()
.entryTtl(Duration.ofSeconds(600))
.serializeKeysWith(RedisSerializationContext.SerializationPair.fromSerializer(redisSerializer))
.serializeValuesWith(RedisSerializationContext.SerializationPair.fromSerializer(jackson2JsonRedisSerializer))
.disableCachingNullValues();
RedisCacheManager cacheManager = RedisCacheManager.builder(factory)
.cacheDefaults(config)
.build();
return cacheManager;
}
}
5. Create the message listener
package com.study.log.redistool;
import com.study.log.po.LogAuthServer;
import com.study.log.service.LogAuthServerService;
import com.study.log.utils.DateTimeUtils;
import com.study.log.utils.RedisStreamUtil;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.data.redis.connection.stream.MapRecord;
import org.springframework.data.redis.stream.StreamListener;
import org.springframework.stereotype.Component;
/**
 * @Description Consume messages from the stream
 * @author haochuanwan
 * @date 2022/10/17 11:22
 **/
@Component
public class ListenerMessage implements StreamListener<String, MapRecord<String, String, String>> {
private static final Logger log = LoggerFactory.getLogger(ListenerMessage.class);
private final RedisStreamUtil redisStreamUtil;
public ListenerMessage(RedisStreamUtil redisStreamUtil){
this.redisStreamUtil = redisStreamUtil;
}
@Override
public void onMessage(MapRecord<String, String, String> entries) {
try{
log.info("received a message from Redis");
int count = insertLog(entries);
// persist to the database; delete the message once persistence succeeds
if(count > 0){
redisStreamUtil.delField("izp_log_redis_stream",entries.getId().getValue());
}
}catch (Exception e){
log.error("error message:{}",e.getMessage());
}
}
private int insertLog(MapRecord<String, String, String> entries) {
int count = 0;
String serverType = entries.getValue().get("serverType");
if("log".equals(serverType)){
LogAuthServer logAuthServer = new LogAuthServer();
logAuthServer.setHttpType( entries.getValue().get("httpType"));
logAuthServer.setServerType( entries.getValue().get("serverType"));
logAuthServer.setRequestUrl(entries.getValue().get("requestURL"));
logAuthServer.setInterfacePath(entries.getValue().get("interfacePath"));
logAuthServer.setUrlInputPara(entries.getValue().get("urlInputPara"));
logAuthServer.setResponseBody(entries.getValue().get("responseBody"));
logAuthServer.setIp(entries.getValue().get("ip"));
logAuthServer.setOperationTime(entries.getValue().get("operationTime"));
logAuthServer.setCreateTime(DateTimeUtils.getNowDateTimeStr());
LogAuthServerService logAuthServerService = SpringJobBeanFactory.getBean(LogAuthServerService.class);
count = logAuthServerService.insertSelective(logAuthServer);
}
return count;
}
}
When the listener needs to call your own business methods, an injected object may end up being null.
Cause: the listener and filter instances used here are not managed by the Spring container (the listener, for example, is created with new in the configuration below), so Spring's annotation-based injection cannot supply the objects they need.
Solution: write a small bean factory that fetches beans from Spring's WebApplicationContext.
6. A custom bean factory
package com.study.log.redistool;
import org.springframework.beans.BeansException;
import org.springframework.context.ApplicationContext;
import org.springframework.context.ApplicationContextAware;
import org.springframework.stereotype.Component;
/**
 * @ClassName SpringJobBeanFactory
 * @Description Works around failed bean injection in the listener
 * @Author haochuanwan
 * @Date 2022/10/20 15:30
 **/
@Component
public class SpringJobBeanFactory implements ApplicationContextAware {
private static ApplicationContext applicationContext;
@Override
public void setApplicationContext(ApplicationContext applicationContext) throws BeansException {
SpringJobBeanFactory.applicationContext=applicationContext;
}
public static ApplicationContext getApplicationContext() {
return applicationContext;
}
@SuppressWarnings("unchecked")
public static <T> T getBean(String name) throws BeansException {
if (applicationContext == null){
return null;
}
return (T)applicationContext.getBean(name);
}
public static <T> T getBean(Class<T> name) throws BeansException {
if (applicationContext == null){
return null;
}
return applicationContext.getBean(name);
}
}
7. Register the listener
package com.study.log.config;
import com.study.log.redistool.ListenerMessage;
import com.study.log.utils.RedisStreamUtil;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.connection.stream.*;
import org.springframework.data.redis.stream.StreamMessageListenerContainer;
import org.springframework.data.redis.stream.Subscription;
import java.time.Duration;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
/**
 * @Description Register the stream listener container
 * @author haochuanwan
 * @date 2022/11/2 11:22
 **/
@Configuration
public class RedisStreamConfig {
private static final Logger log = LoggerFactory.getLogger(RedisStreamConfig.class);
//the listener
private final ListenerMessage streamListener;
//Redis stream helper
private final RedisStreamUtil redisStreamUtil;
//stream names
@Value("${redis-stream.names}")
private String[] redisStreamNames;
//consumer group names
@Value("${redis-stream.groups}")
private String[] groups;
/**
 * Inject the helper and create the listener
 */
@Autowired
public RedisStreamConfig(RedisStreamUtil redisStreamUtil){
this.redisStreamUtil = redisStreamUtil;
this.streamListener = new ListenerMessage(redisStreamUtil);
}
@Bean
public List<Subscription> subscription(RedisConnectionFactory factory){
log.info("starting to subscribe to stream messages.......");
List<Subscription> resultList = new ArrayList<>();
StreamMessageListenerContainer.StreamMessageListenerContainerOptions<String, MapRecord<String, String, String>> options = StreamMessageListenerContainer
.StreamMessageListenerContainerOptions
.builder()
.pollTimeout(Duration.ofSeconds(1))
.build();
for (String redisStreamName : redisStreamNames) {
initStream(redisStreamName,groups[0]);
StreamMessageListenerContainer<String, MapRecord<String, String, String>> listenerContainer = StreamMessageListenerContainer.create(factory,options);
Subscription subscription = listenerContainer.receiveAutoAck(Consumer.from(groups[0], this.getClass().getName()),
StreamOffset.create(redisStreamName, ReadOffset.lastConsumed()), streamListener);
resultList.add(subscription);
listenerContainer.start();
}
return resultList;
}
private void initStream(String key, String group){
log.info("initializing the redis stream......");
boolean hasKey = redisStreamUtil.hasKey(key);
if(!hasKey){
log.info("key does not exist; creating the stream");
Map<String,Object> map = new HashMap<>();
map.put("field","value");
RecordId recordId = redisStreamUtil.addStream(key, map);
redisStreamUtil.addGroup(key,group);
//delete the placeholder record used to create the stream
redisStreamUtil.delField(key,recordId.getValue());
log.info("stream:{}-group:{} initialize success",key,group);
}else{
log.info("key exists; subscribing directly");
}
}
}
8. Table creation DDL
books
CREATE TABLE `books` (
`bookID` int NOT NULL AUTO_INCREMENT COMMENT 'book id',
`bookName` varchar(100) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL COMMENT 'book name',
`bookCounts` int NOT NULL COMMENT 'quantity',
`detail` varchar(200) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL COMMENT 'description',
`sfyx` varchar(1) CHARACTER SET utf8 COLLATE utf8_general_ci NOT NULL COMMENT 'valid flag (1: valid, 0: invalid)',
`createdtime` timestamp NULL DEFAULT NULL COMMENT 'creation time',
PRIMARY KEY (`bookID`) USING BTREE,
KEY `bookID` (`bookID`) USING BTREE
) ENGINE=InnoDB AUTO_INCREMENT=19 DEFAULT CHARSET=utf8mb3 ROW_FORMAT=DYNAMIC;
log_auth_server
CREATE TABLE `log_auth_server` (
`id` int NOT NULL AUTO_INCREMENT COMMENT 'primary key',
`http_type` varchar(255) CHARACTER SET utf8 COLLATE utf8_general_ci DEFAULT NULL COMMENT 'HTTP request method',
`server_type` varchar(255) CHARACTER SET utf8 COLLATE utf8_general_ci DEFAULT NULL COMMENT 'service type',
`request_url` varchar(1000) CHARACTER SET utf8 COLLATE utf8_general_ci DEFAULT NULL COMMENT 'request URL',
`interface_path` varchar(255) CHARACTER SET utf8 COLLATE utf8_general_ci DEFAULT NULL COMMENT 'full interface path',
`url_input_para` varchar(2000) CHARACTER SET utf8 COLLATE utf8_general_ci DEFAULT NULL COMMENT 'request parameters',
`response_body` varchar(2000) CHARACTER SET utf8 COLLATE utf8_general_ci DEFAULT NULL COMMENT 'response body',
`operator_id` int DEFAULT NULL COMMENT 'operator id',
`operator` varchar(255) CHARACTER SET utf8 COLLATE utf8_general_ci DEFAULT NULL COMMENT 'operator',
`ip` varchar(255) CHARACTER SET utf8 COLLATE utf8_general_ci DEFAULT NULL COMMENT 'operation IP',
`operation_time` datetime DEFAULT NULL COMMENT 'operation time',
`create_time` datetime DEFAULT CURRENT_TIMESTAMP COMMENT 'creation time',
PRIMARY KEY (`id`) USING BTREE
) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=utf8mb3 ROW_FORMAT=DYNAMIC COMMENT='system-management service log';
9. Pull the code and test
With that, the main part of the project is complete. Gitee repository: https://gitee.com/fengqingyunnongyuanshanheng/study.git
If you pull this demo, starting it is simple: adjust the settings in application.yml, make sure your Redis version is above 5, and start the application.
Send a test message through the test endpoint below:
@RequestMapping(value = "/books",method = RequestMethod.POST)
public Integer saveLog(@RequestBody Books books){
books.setSfyx("1");
books.setCreatedtime(DateTimeUtils.getNowDateTimeStr());
int i = booksService.insertSelective(books);
return i;
}
I recommend calling the test endpoint with Postman:
http://localhost:8888/log/books
Request body:
{
"bookName": "today is Friday",
"bookCounts": 10,
"detail": "today is Friday"
}
10. Other problems you may run into
1. Messages are produced normally but never consumed
Check that the key written to the stream matches the redis-stream names entry in the configuration file, and that the redisStreamNames and groups injected into the listener configuration also match the configuration file.
2. The keys of consumed messages are garbled
Check whether the project contains the Redis serializer configuration that fixes garbled text; if it does, check that the type of the data written to the stream matches the type the listener expects.
IV. Summary
I ran into plenty of problems while implementing user operation logging: first the filter consumed the request stream so the controller could not read the request body, then the Redis keys came back garbled, and when I tried to persist to the database the auto-injected service was null. Every time I thought I was done, another problem popped up, and my mood rose and fell with each one.
Still, the feature is finished, and for a novice like me that is quite satisfying, so I am writing this post to record it.
Finally, if you run into problems importing the project, feel free to leave a comment; my skills are limited, but I will do my best to reply when I see it.