Working Notes: Using Redis as a Cache for a MySQL Database

 

A new requirement came in: use Redis as a cache in front of MySQL, and cache entire tables.

Redis is a simple key/value store, while MySQL is a traditional relational database; it is the difference between one dimension and two. So how do you map a table into Redis?

Redis has rich support for data structures, and using a String key with a Hash value is a good fit here: each field in the Hash corresponds to a column in MySQL, so one Hash maps to one row of the table. Using something like "table name + ID" as the Redis key then makes it possible to load an entire MySQL table into Redis.

There are two steps: first, load the MySQL data into Redis; second, read the data back from Redis and deserialize it into objects.

For the first step there were two options. One was to periodically bulk-load the whole table from inside the application; since the table holds more than 100,000 rows and the cache TTL is fairly short, this could not be done smoothly, so I dropped it. The other was to bulk-import the MySQL data into Redis with a script on the server, which is the option I took.

The SQL that reads the rows from MySQL and emits Redis protocol:

-- Emits one HMSET command per row in Redis protocol (RESP):
-- *8 = 1 command name + 1 key + 3 field/value pairs
SELECT CONCAT(
   '*8\r\n',
   '$',LENGTH(redis_cmd),'\r\n',redis_cmd,'\r\n',
   '$',LENGTH(redis_key),'\r\n',redis_key,'\r\n',
   '$',LENGTH(hkey1),'\r\n',hkey1,'\r\n','$',LENGTH(hval1),'\r\n',hval1,'\r\n',
   '$',LENGTH(hkey2),'\r\n',hkey2,'\r\n','$',LENGTH(hval2),'\r\n',hval2,'\r\n',
   '$',LENGTH(hkey3),'\r\n',hkey3,'\r\n','$',LENGTH(hval3),'\r\n',hval3,'\r\n'
) FROM (
   SELECT 'HMSET' AS redis_cmd,
   CONCAT_WS(':','Device::DeviceServiceImpl:findById',dev_id) AS redis_key,
   'devId' AS hkey1, dev_id AS hval1,
   'creTime' AS hkey2, cre_time AS hval2,
   'updTime' AS hkey3, upd_time AS hval3
   FROM t_device
) AS t

The script that runs the query and imports the result into Redis:

#!/bin/bash
# Stream the generated Redis protocol straight into redis-cli --pipe
mysql -h 127.0.0.1 -uadmin -padmin -Dtestdb --skip-column-names --raw < mysql_redis_device_findById_nokey.sql | redis-cli -h 127.0.0.1 --pipe
# Then put a short TTL (100 s) on every imported key
redis-cli -h 127.0.0.1 keys "Device::DeviceServiceImpl:findById*" | xargs -t -i redis-cli -h 127.0.0.1 expire {} 100

echo "batch import is over"

After the batch import, a row of data is stored in Redis roughly in the following form: the first line is a field (column) name, the next line is its value, and so on down the list.
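For example, for a hypothetical row with dev_id = 1001 (the values below are made up purely for illustration), an HGETALL on the imported key would list the fields like this:

127.0.0.1:6379> HGETALL Device::DeviceServiceImpl:findById:1001
1) "devId"
2) "1001"
3) "creTime"
4) "2019-06-01 10:20:30"
5) "updTime"
6) "2019-06-02 11:22:33"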

That completes step one. On to step two: reading and writing the cache from the application.

Spring Data Redis's annotation-based caching serializes values with JdkSerializationRedisSerializer by default and talks to Redis with the GET and SET commands. You can see this in the source; here is the relevant part of DefaultRedisCacheWriter:

	@Override
	public void put(String name, byte[] key, byte[] value, @Nullable Duration ttl) {

		Assert.notNull(name, "Name must not be null!");
		Assert.notNull(key, "Key must not be null!");
		Assert.notNull(value, "Value must not be null!");

		execute(name, connection -> {

			if (shouldExpireWithin(ttl)) {
				connection.set(key, value, Expiration.from(ttl.toMillis(), TimeUnit.MILLISECONDS), SetOption.upsert());
			} else {
				connection.set(key, value);
			}

			return "OK";
		});
	}

	/*
	 * (non-Javadoc)
	 * @see org.springframework.data.redis.cache.RedisCacheWriter#get(java.lang.String, byte[])
	 */
	@Override
	public byte[] get(String name, byte[] key) {

		Assert.notNull(name, "Name must not be null!");
		Assert.notNull(key, "Key must not be null!");

		return execute(name, connection -> connection.get(key));
	}

AOP really is powerful: these annotation-driven features are essentially implemented with AOP, and if you trace the source you will find that @Cacheable and friends ultimately end up in the two DefaultRedisCacheWriter methods above.
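As a concrete illustration of that path, a service method annotated roughly as below produces cache keys of the form Device::DeviceServiceImpl:findById:<id>, i.e. the same keys the import script writes. This is a sketch: Device and DeviceMapper are placeholders for whatever entity and DAO your project actually uses.

	import org.springframework.beans.factory.annotation.Autowired;
	import org.springframework.cache.annotation.Cacheable;
	import org.springframework.stereotype.Service;

	@Service
	public class DeviceServiceImpl {

		@Autowired
		private DeviceMapper deviceMapper; // hypothetical DAO

		// Cache name "Device" + Spring's default "::" prefix + the SpEL key below
		// yields Redis keys like "Device::DeviceServiceImpl:findById:1001".
		// Device must implement Serializable for JdkSerializationRedisSerializer.
		@Cacheable(cacheNames = "Device", key = "'DeviceServiceImpl:findById:' + #devId")
		public Device findById(Long devId) {
			// Only reached on a cache miss; the result is then written back
			// through RedisCacheWriter.put(...) shown above.
			return deviceMapper.selectById(devId);
		}
	}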

Our script wrote the data with HMSET, so reading it back here with GET will not work; and even if GET could hand us the hash we just imported, the cache would treat it as a serialized object and try to deserialize it with JdkSerializationRedisSerializer, which is bound to fail. The way out is to rewrite this DefaultRedisCacheWriter so that get and put serialize and deserialize the hash representation instead.

I created a class named MapRedisCacheWriter; everything else is copied from DefaultRedisCacheWriter, and only the put and get methods are modified.

	@Override
	public void put(String name, byte[] key, byte[] value, @Nullable Duration ttl) {

		Assert.notNull(name, "Name must not be null!");
		Assert.notNull(key, "Key must not be null!");
		Assert.notNull(value, "Value must not be null!");

		// value arrives already serialized by JdkSerializationRedisSerializer;
		// restore the object and flatten it into a field/value hash.
		Object entityValue = jdkSerialization.deserialize(value);
		HashMapper<Object, byte[], byte[]> mapper = new ObjectHashMapper();
		Map<byte[], byte[]> mapValue = mapper.toHash(entityValue);

		execute(name, connection -> {

			if (shouldExpireWithin(ttl)) {
//				connection.set(key, value, Expiration.from(ttl.toMillis(), TimeUnit.MILLISECONDS), SetOption.upsert());
				connection.hMSet(key, mapValue);
				connection.pExpire(key, ttl.toMillis()); // ttl is in milliseconds, so use pExpire
			} else {
//				connection.set(key, value);
				connection.hMSet(key, mapValue);
			}

			return "OK";
		});
	}

The added logic: use JdkSerializationRedisSerializer to deserialize the value that the cache abstraction has already serialized, restoring the original object; then use ObjectHashMapper to turn that object into a Map<byte[], byte[]>; finally write the hash to Redis with hMSet.

get is modified in the same way:

	@Override
	public byte[] get(String name, byte[] key) {

		Assert.notNull(name, "Name must not be null!");
		Assert.notNull(key, "Key must not be null!");
		
		Map<byte[],byte[]> mapValue = execute(name,connection -> connection.hGetAll(key));
		HashMapper<Object, byte[], byte[]> mapper = new ObjectHashMapper();
		Object objectValue = mapper.fromHash(mapValue);
		
		return jdkSerialization.serialize(objectValue);
//		return execute(name, connection -> connection.get(key));
	}

The added logic: fetch the cached data with hGetAll into a Map<byte[], byte[]>, convert it back into an object with ObjectHashMapper, and finally serialize that object with JdkSerializationRedisSerializer, which is the byte[] form the cache abstraction expects back.

Then modify RedisCacheConfig (shown in my previous post) so that the cacheManager is built with MapRedisCacheWriter:

    @Bean
    public CacheManager redisCacheManager(RedisConnectionFactory redisConnectionFactory) {
    	
    	Map<String,RedisCacheConfiguration> initializeConfigs = initConfig();
        return new RedisCacheManager(
        	new MapRedisCacheWriter(redisConnectionFactory),
        	RedisCacheConfiguration.defaultCacheConfig().entryTtl(Duration.ofSeconds(300)),
            initializeConfigs
        );
    }

This changes the underlying storage from <String, serialized Object> to <String, Hash>, so the data imported by the script and the data written by the application are now interchangeable. At runtime, however, I got this error:

No converter found capable of converting from type [xxxxxx] to type [xxxxxx]

Roughly speaking, this error means that while converting a column's stored value into the corresponding Java type, no usable converter was found. The problem is in the line Object objectValue = mapper.fromHash(mapValue);. Let's trace the ObjectHashMapper source:

	public ObjectHashMapper(@Nullable org.springframework.data.convert.CustomConversions customConversions) {

		MappingRedisConverter mappingConverter = new MappingRedisConverter(new RedisMappingContext(),
				new NoOpIndexResolver(), new NoOpReferenceResolver());
		mappingConverter.setCustomConversions(customConversions == null ? new RedisCustomConversions() : customConversions);
		mappingConverter.afterPropertiesSet();

		converter = mappingConverter;
	}

The constructor creates a MappingRedisConverter; this is its constructor:

	public MappingRedisConverter(@Nullable RedisMappingContext mappingContext, @Nullable IndexResolver indexResolver,
			@Nullable ReferenceResolver referenceResolver) {

		this.mappingContext = mappingContext != null ? mappingContext : new RedisMappingContext();

		this.entityInstantiators = new EntityInstantiators();
		this.conversionService = new DefaultConversionService();
		this.customConversions = new RedisCustomConversions();
		this.typeMapper = new DefaultTypeMapper<>(new RedisTypeAliasAccessor(this.conversionService));

		this.indexResolver = indexResolver != null ? indexResolver : new PathIndexResolver(this.mappingContext);
		this.referenceResolver = referenceResolver;
	}

That constructor in turn creates a DefaultConversionService(), and this class is where the usable converters are registered:

	public DefaultConversionService() {
		addDefaultConverters(this);
	}

	public static void addDefaultConverters(ConverterRegistry converterRegistry) {
		addScalarConverters(converterRegistry);
		addCollectionConverters(converterRegistry);

		converterRegistry.addConverter(new ByteBufferConverter((ConversionService) converterRegistry));
		converterRegistry.addConverter(new StringToTimeZoneConverter());
		converterRegistry.addConverter(new ZoneIdToTimeZoneConverter());
		converterRegistry.addConverter(new ZonedDateTimeToCalendarConverter());

		converterRegistry.addConverter(new ObjectToObjectConverter());
		converterRegistry.addConverter(new IdToEntityConverter((ConversionService) converterRegistry));
		converterRegistry.addConverter(new FallbackObjectToStringConverter());
		converterRegistry.addConverter(new ObjectToOptionalConverter((ConversionService) converterRegistry));
	}
	public static void addCollectionConverters(ConverterRegistry converterRegistry) {
		ConversionService conversionService = (ConversionService) converterRegistry;

		converterRegistry.addConverter(new ArrayToCollectionConverter(conversionService));
		converterRegistry.addConverter(new CollectionToArrayConverter(conversionService));

		converterRegistry.addConverter(new ArrayToArrayConverter(conversionService));
		converterRegistry.addConverter(new CollectionToCollectionConverter(conversionService));
		converterRegistry.addConverter(new MapToMapConverter(conversionService));

		converterRegistry.addConverter(new ArrayToStringConverter(conversionService));
		converterRegistry.addConverter(new StringToArrayConverter(conversionService));

		converterRegistry.addConverter(new ArrayToObjectConverter(conversionService));
		converterRegistry.addConverter(new ObjectToArrayConverter(conversionService));

		converterRegistry.addConverter(new CollectionToStringConverter(conversionService));
		converterRegistry.addConverter(new StringToCollectionConverter(conversionService));

		converterRegistry.addConverter(new CollectionToObjectConverter(conversionService));
		converterRegistry.addConverter(new ObjectToCollectionConverter(conversionService));

		converterRegistry.addConverter(new StreamConverter(conversionService));
	}

	private static void addScalarConverters(ConverterRegistry converterRegistry) {
		converterRegistry.addConverterFactory(new NumberToNumberConverterFactory());

		converterRegistry.addConverterFactory(new StringToNumberConverterFactory());
		converterRegistry.addConverter(Number.class, String.class, new ObjectToStringConverter());

		converterRegistry.addConverter(new StringToCharacterConverter());
		converterRegistry.addConverter(Character.class, String.class, new ObjectToStringConverter());

		converterRegistry.addConverter(new NumberToCharacterConverter());
		converterRegistry.addConverterFactory(new CharacterToNumberFactory());

		converterRegistry.addConverter(new StringToBooleanConverter());
		converterRegistry.addConverter(Boolean.class, String.class, new ObjectToStringConverter());

		converterRegistry.addConverterFactory(new StringToEnumConverterFactory());
		converterRegistry.addConverter(new EnumToStringConverter((ConversionService) converterRegistry));

		converterRegistry.addConverterFactory(new IntegerToEnumConverterFactory());
		converterRegistry.addConverter(new EnumToIntegerConverter((ConversionService) converterRegistry));

		converterRegistry.addConverter(new StringToLocaleConverter());
		converterRegistry.addConverter(Locale.class, String.class, new ObjectToStringConverter());

		converterRegistry.addConverter(new StringToCharsetConverter());
		converterRegistry.addConverter(Charset.class, String.class, new ObjectToStringConverter());

		converterRegistry.addConverter(new StringToCurrencyConverter());
		converterRegistry.addConverter(Currency.class, String.class, new ObjectToStringConverter());

		converterRegistry.addConverter(new StringToPropertiesConverter());
		converterRegistry.addConverter(new PropertiesToStringConverter());

		converterRegistry.addConverter(new StringToUUIDConverter());
		converterRegistry.addConverter(UUID.class, String.class, new ObjectToStringConverter());
	}

You will notice that the two types from the error message are not covered by any of these converters, hence the failure.

The fix is to provide our own versions of the hash mapper, the mapping converter and the conversion service: RedisHashMapper, TimestampMappingRedisConverter and RedisDefaultConversionService below.

Create a RedisDefaultConversionService class that extends DefaultConversionService:

public class RedisDefaultConversionService extends DefaultConversionService {
	public RedisDefaultConversionService(){
		super();
		addTimeBytesConverter();
	}
	
	private void addTimeBytesConverter(){
		addConverter(new BytesTimeConverter());
		addConverter(new TimeBytesConverter());
		addConverter(new BytesDateConverter());
		addConverter(new DateBytesConverter());

	}
}

The constructor calls addTimeBytesConverter(), which registers bidirectional byte[] <-> Timestamp and byte[] <-> Date converters, because those are the two conversions missing in my project; register whatever your own project is missing.

The implementation of one of them:

public class BytesTimeConverter implements Converter<byte[], Timestamp> {

	@Override
	public Timestamp convert(byte[] source) {
		// The stored bytes hold epoch milliseconds as a UTF-8 string;
		// turn them into a java.sql.Timestamp.
		String timeStr = new String(source, StandardCharsets.UTF_8);
		SimpleDateFormat dfs = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
		Date date = new Date(Long.parseLong(timeStr));
		String dateStr = dfs.format(date);
		return Timestamp.valueOf(dateStr);
	}

}
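The opposite direction registered above, TimeBytesConverter (Timestamp -> byte[]), is not shown here. A minimal sketch, assuming the value is stored as an epoch-milliseconds string so that the converter above can parse it back, would look like this:

public class TimeBytesConverter implements Converter<Timestamp, byte[]> {

	@Override
	public byte[] convert(Timestamp source) {
		// Assumption: store epoch milliseconds as a UTF-8 string, matching
		// the Long.parseLong in the byte[] -> Timestamp converter above.
		return String.valueOf(source.getTime()).getBytes(StandardCharsets.UTF_8);
	}

}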

Create a TimestampMappingRedisConverter class; the implementation is copied entirely from MappingRedisConverter, except that the constructor uses RedisDefaultConversionService instead of DefaultConversionService:

	public TimestampMappingRedisConverter(@Nullable RedisMappingContext mappingContext, @Nullable IndexResolver indexResolver,
			@Nullable ReferenceResolver referenceResolver) {

		this.mappingContext = mappingContext != null ? mappingContext : new RedisMappingContext();

		this.entityInstantiators = new EntityInstantiators();
		this.conversionService = new RedisDefaultConversionService();
		this.customConversions = new RedisCustomConversions();
		this.typeMapper = new DefaultTypeMapper<>(new RedisTypeAliasAccessor(this.conversionService));

		this.indexResolver = indexResolver != null ? indexResolver : new PathIndexResolver(this.mappingContext);
		this.referenceResolver = referenceResolver;
	}

Create a RedisHashMapper class; the implementation is copied entirely from ObjectHashMapper, except that the constructor uses TimestampMappingRedisConverter instead of MappingRedisConverter:

	public RedisHashMapper(@Nullable org.springframework.data.convert.CustomConversions customConversions) {

		TimestampMappingRedisConverter mappingConverter = new TimestampMappingRedisConverter(new RedisMappingContext(),
				new NoOpIndexResolver(), new NoOpReferenceResolver());
		mappingConverter.setCustomConversions(customConversions == null ? new RedisCustomConversions() : customConversions);
		mappingConverter.afterPropertiesSet();

		converter = mappingConverter;
	}

Finally, in MapRedisCacheWriter, swap ObjectHashMapper out for RedisHashMapper:


		HashMapper<Object, byte[], byte[]> mapper = new RedisHashMapper();
		Object objectValue = mapper.fromHash(mapValue);
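
The corresponding change on the put side is the same swap, where the hash is built before hMSet:

		HashMapper<Object, byte[], byte[]> mapper = new RedisHashMapper();
		Map<byte[], byte[]> mapValue = mapper.toHash(entityValue);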
		

After this, the converter-related errors no longer appear.

I have not measured the performance cost of converting back and forth through the Map, though.
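If you want a rough idea, a quick and unscientific sketch like the one below would do; Device and its constructor are placeholders for your own entity, and a serious measurement should use a proper harness such as JMH:

	// Rough timing of the hash round trip used by the cache writer.
	HashMapper<Object, byte[], byte[]> mapper = new RedisHashMapper();
	Device device = new Device(1001L, new Timestamp(System.currentTimeMillis()),
			new Timestamp(System.currentTimeMillis())); // hypothetical constructor

	long start = System.nanoTime();
	for (int i = 0; i < 100_000; i++) {
		Map<byte[], byte[]> hash = mapper.toHash(device);
		mapper.fromHash(hash);
	}
	System.out.println("avg toHash/fromHash round trip: "
			+ (System.nanoTime() - start) / 100_000 + " ns");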
