Contents
Redis
Downloading and Installing Redis
Download page: Index of /releases/ (redis.io)
Steps to install Redis on a Linux system:
1. Upload the Redis archive to the Linux machine
2. Extract the archive, command: tar -zxvf redis-4.0.0.tar.gz -C /usr/local
3. Install gcc, Redis's build dependency, command: yum install gcc-c++
4. Enter /usr/local/redis-4.0.0 and compile, command: make
5. Enter Redis's src directory and install, command: make install
Starting the server
Enter Redis's src directory and run ./redis-server
Running Redis in the background
Edit the config file with vim redis.conf: type /dae to find the daemonize no line, press i and change no to yes, then press Esc and type :wq to save. Then start the server with that config: [redis-4.0.0]# src/redis-server ./redis.conf
Common Redis Commands
String commands
Commonly used string commands in Redis:
●SET key value — set the value of the given key
●GET key — get the value of the given key
●SETEX key seconds value — set the value of the given key with an expiration of seconds seconds
●SETNX key value — set the value only if the key does not exist
More commands are documented at the Redis Chinese site: https://www.redis.net.cn
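The check-and-set semantics of SETNX map directly onto Java's Map.putIfAbsent. A minimal in-memory sketch (the class and method names are illustrative, not a real Redis API):

```java
import java.util.HashMap;
import java.util.Map;

public class StringCommandSketch {
    // in-memory stand-in for the Redis keyspace (illustrative only)
    static final Map<String, String> store = new HashMap<>();

    // SET key value
    static void set(String key, String value) { store.put(key, value); }

    // GET key
    static String get(String key) { return store.get(key); }

    // SETNX key value: write only when the key is absent; true means the write happened
    static boolean setnx(String key, String value) {
        return store.putIfAbsent(key, value) == null;
    }

    public static void main(String[] args) {
        set("username", "zhangsan");
        System.out.println(setnx("username", "lisi")); // false: the key already exists
        System.out.println(get("username"));           // still zhangsan
    }
}
```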
Hash commands
A Redis hash is a map of string fields to string values, which makes it well suited to storing objects. Common commands:
●HSET key field value — set field in the hash stored at key to value
●HGET key field — get the value of the given field in the hash
●HDEL key field — delete the given field from the hash
●HKEYS key — get all fields in the hash
●HVALS key — get all values in the hash
●HGETALL key — get all fields and values of the hash stored at key
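A hash is effectively a field-to-value map nested under a key, i.e. a two-level lookup. A rough in-memory sketch of HSET/HGET/HKEYS (the names are illustrative, not a Redis client API):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class HashSketch {
    // key -> (field -> value); each Redis hash is itself a small map
    static final Map<String, Map<String, String>> hashes = new HashMap<>();

    // HSET key field value
    static void hset(String key, String field, String value) {
        hashes.computeIfAbsent(key, k -> new HashMap<>()).put(field, value);
    }

    // HGET key field
    static String hget(String key, String field) {
        Map<String, String> h = hashes.get(key);
        return h == null ? null : h.get(field);
    }

    // HKEYS key
    static Set<String> hkeys(String key) {
        Map<String, String> h = hashes.get(key);
        return h == null ? Collections.emptySet() : h.keySet();
    }

    public static void main(String[] args) {
        hset("user:1", "name", "zhangsan");
        hset("user:1", "age", "13");
        System.out.println(hget("user:1", "name")); // zhangsan
        System.out.println(hkeys("user:1").size()); // 2
    }
}
```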
List commands
A Redis list is a simple list of strings, ordered by insertion. Common commands:
●LPUSH key value1 [value2] — insert one or more values at the head of the list
●LRANGE key start stop — get the elements in the given range of the list
●RPOP key — remove and return the last element of the list
●LLEN key — get the length of the list
●BRPOP key1 [key2] timeout — remove and return the last element of the list; if the list is empty, block until the timeout expires or an element becomes available
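LPUSH and RPOP used together give first-in-first-out queue behavior: elements enter at the head and leave at the tail. A deque makes this concrete (an in-memory sketch, not a Redis client):

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class ListSketch {
    // a Redis list behaves like a deque: LPUSH adds at the head, RPOP removes at the tail
    static final Deque<String> list = new ArrayDeque<>();

    // LPUSH key value1 [value2]: each value in turn becomes the new head
    static void lpush(String... values) {
        for (String v : values) list.addFirst(v);
    }

    // RPOP key
    static String rpop() { return list.pollLast(); }

    // LLEN key
    static int llen() { return list.size(); }

    public static void main(String[] args) {
        lpush("zhangsan", "lisi", "wangwu");
        System.out.println(rpop()); // zhangsan: first in, first out
        System.out.println(llen()); // 2
    }
}
```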
Set commands
A Redis set is an unordered collection of strings. Members are unique, so duplicates cannot appear in a set. Common commands:
●SADD key member1 [member2] — add one or more members to the set
●SMEMBERS key — return all members of the set
●SCARD key — get the number of members in the set
●SINTER key1 [key2] — return the intersection of the given sets
●SUNION key1 [key2] — return the union of the given sets
●SDIFF key1 [key2] — return the difference of the given sets
●SREM key member1 [member2] — remove one or more members from the set
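The three set-algebra commands correspond to retainAll, addAll, and removeAll on copies of java.util.Set. An in-memory sketch (the method names mirror the commands but are not a Redis API):

```java
import java.util.HashSet;
import java.util.Set;

public class SetSketch {
    // SINTER: intersection of a and b
    static Set<String> sinter(Set<String> a, Set<String> b) {
        Set<String> r = new HashSet<>(a);
        r.retainAll(b);
        return r;
    }

    // SUNION: union of a and b
    static Set<String> sunion(Set<String> a, Set<String> b) {
        Set<String> r = new HashSet<>(a);
        r.addAll(b);
        return r;
    }

    // SDIFF: members of a that are not in b
    static Set<String> sdiff(Set<String> a, Set<String> b) {
        Set<String> r = new HashSet<>(a);
        r.removeAll(b);
        return r;
    }

    public static void main(String[] args) {
        Set<String> s1 = Set.of("a", "b", "c");
        Set<String> s2 = Set.of("b", "c", "d");
        System.out.println(sinter(s1, s2)); // b and c
        System.out.println(sunion(s1, s2)); // a, b, c, d
        System.out.println(sdiff(s1, s2));  // a only
    }
}
```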
Sorted set commands
A Redis sorted set is a collection of unique string members, each associated with a double score. Redis uses the score to keep members ordered from smallest to largest. Members are unique, but scores may repeat.
Common commands:
●ZADD key score1 member1 [score2 member2] — add one or more members to the sorted set, or update the score of existing members
●ZRANGE key start stop [WITHSCORES] — return the members in the given index range of the sorted set
●ZINCRBY key increment member — add increment to the score of the given member
●ZREM key member [member ...] — remove one or more members from the sorted set
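The sorted-set rules above (unique members, repeatable scores, ordering by ascending score) can be mimicked with a member-to-score map that is sorted on read. A sketch with illustrative names, not a Redis client:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ZsetSketch {
    // member -> score; members are unique, scores may repeat
    static final Map<String, Double> scores = new HashMap<>();

    // ZADD: add a member or update its score
    static void zadd(String member, double score) { scores.put(member, score); }

    // ZINCRBY: add increment to the member's score
    static void zincrby(String member, double increment) {
        scores.merge(member, increment, Double::sum);
    }

    // ZRANGE 0 -1: all members, ordered by ascending score
    static List<String> zrangeAll() {
        List<Map.Entry<String, Double>> entries = new ArrayList<>(scores.entrySet());
        entries.sort(Map.Entry.comparingByValue());
        List<String> members = new ArrayList<>();
        for (Map.Entry<String, Double> e : entries) members.add(e.getKey());
        return members;
    }

    public static void main(String[] args) {
        zadd("a", 10.0);
        zadd("b", 11.0);
        zincrby("a", 5.0); // a's score becomes 15.0, so a now sorts after b
        System.out.println(zrangeAll()); // [b, a]
    }
}
```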
Generic commands
●KEYS pattern — find all keys matching the given pattern
●EXISTS key — check whether the given key exists
●TYPE key — return the type of the value stored at key
●TTL key — return the remaining time to live (TTL) of the given key, in seconds
●DEL key — delete the key if it exists
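EXISTS, TTL, and DEL interact with expiration: a key whose deadline has passed behaves as if it had been deleted. A sketch of that rule using an auxiliary deadline map (illustrative names; real Redis expires keys server-side):

```java
import java.util.HashMap;
import java.util.Map;

public class TtlSketch {
    static final Map<String, String> store = new HashMap<>();
    static final Map<String, Long> expireAt = new HashMap<>(); // key -> absolute deadline in ms

    // SETEX key seconds value
    static void setex(String key, long seconds, String value) {
        store.put(key, value);
        expireAt.put(key, System.currentTimeMillis() + seconds * 1000);
    }

    // EXISTS key, expiring lazily on access
    static boolean exists(String key) {
        Long deadline = expireAt.get(key);
        if (deadline != null && System.currentTimeMillis() >= deadline) {
            del(key); // past its deadline: treat as deleted
        }
        return store.containsKey(key);
    }

    // TTL key: remaining seconds; -2 when the key is gone, -1 when it never expires (as in Redis)
    static long ttl(String key) {
        if (!exists(key)) return -2;
        Long deadline = expireAt.get(key);
        if (deadline == null) return -1;
        return Math.max(0, (deadline - System.currentTimeMillis()) / 1000);
    }

    // DEL key
    static void del(String key) {
        store.remove(key);
        expireAt.remove(key);
    }

    public static void main(String[] args) {
        setex("code", 60, "1234");
        System.out.println(exists("code"));  // true
        System.out.println(ttl("missing")); // -2
    }
}
```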
Using Redis from Java
Introduction
There are many Java clients for Redis; the officially recommended ones are:
● Jedis
● Lettuce
● Redisson
Spring integrates these clients through Spring Data Redis, and Spring Boot projects additionally provide a matching starter, spring-boot-starter-data-redis.
Jedis
1. Add the dependency:
<dependency> <groupId>redis.clients</groupId> <artifactId>jedis</artifactId> <version>2.8.0</version> </dependency>
//create a Jedis client
Jedis jedis = new Jedis("localhost", 6379);
//set a key/value pair
jedis.set("username","zhangsan");
//get the value by key
String username = jedis.get("username");
System.out.println(username);
//delete a key
//jedis.del("username");
//set a hash field
jedis.hset("hash","age","13");
String hValue = jedis.hget("hash", "age");
System.out.println(hValue);
//iterate over all keys
Set<String> keys = jedis.keys("*");
for (String key : keys) {
System.out.println(key);
}
//close the connection
jedis.close();
Spring Data Redis
Spring Data Redis provides a highly encapsulated class, RedisTemplate, which groups the Jedis client's many APIs into operation interfaces, one per data type:
●ValueOperations: simple key-value operations
●SetOperations: set operations
●ZSetOperations: sorted-set (zset) operations
●HashOperations: hash (map) operations
●ListOperations: list operations
1. Add the dependency:
<dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-data-redis</artifactId> </dependency>
2. Configuration file (under the spring: root):
spring:
  redis:
    host: localhost
    port: 6379
    #password: 123456
    jedis:
      # Redis connection pool settings
      pool:
        max-active: 8 # maximum number of connections
        max-wait: 1ms # maximum blocking wait time of the pool
        max-idle: 4 # maximum idle connections in the pool
        min-idle: 0 # minimum idle connections in the pool
    database: 0 # use database 0
3. Because Spring serializes keys before writing them to the Redis server, add a Redis configuration class:
import org.springframework.cache.annotation.CachingConfigurerSupport;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.serializer.StringRedisSerializer;
/**
 * Redis configuration class
 */
@Configuration
public class RedisConfig extends CachingConfigurerSupport {
@Bean
public RedisTemplate<Object, Object> redisTemplate(RedisConnectionFactory connectionFactory) {
RedisTemplate<Object, Object> redisTemplate = new RedisTemplate<>();
//the default key serializer is JdkSerializationRedisSerializer; use a String serializer instead
redisTemplate.setKeySerializer(new StringRedisSerializer());
redisTemplate.setHashKeySerializer(new StringRedisSerializer());
redisTemplate.setConnectionFactory(connectionFactory);
return redisTemplate;
}
}
Code examples
@Autowired
private RedisTemplate redisTemplate;
//string operations
@Test
public void testString(){
//set a key/value pair
redisTemplate.opsForValue().set("city","beijing");
//get the value
String city = (String) redisTemplate.opsForValue().get("city");
System.out.println(city);
//set a key with an expiration
redisTemplate.opsForValue().set("city1","shenzhen", 10L, TimeUnit.SECONDS);
//set the value only when the key has none; otherwise do nothing
Boolean aBoolean = redisTemplate.opsForValue().setIfAbsent("city1", "guangzhou");
System.out.println(aBoolean);
}
//hash operations
@Test
public void testHash() {
HashOperations hashOperations = redisTemplate.opsForHash();
//set a field
hashOperations.put("002","name","zhangsan");
//get a field
String name = (String) hashOperations.get("002", "name");
System.out.println(name);
//get all fields
Set keys = hashOperations.keys("002");
for (Object key : keys) {
System.out.println(key);
}
//get all values
List values = hashOperations.values("002");
for (Object value : values) {
System.out.println(value);
}
}
//list operations
@Test
public void testList() {
ListOperations listOperations = redisTemplate.opsForList();
//push one element
listOperations.leftPush("list1","zhangsan");
//push several elements
listOperations.leftPushAll("list1", "lisi", "wangwu", "qianliu");
//read the whole list
List list = listOperations.range("list1", 0, -1);
System.out.println(list);
//get the list length
Long list1 = listOperations.size("list1");
int length = list1.intValue();
//pop each element (read and remove)
for (int i = 0; i < length; i++) {
String lis1 = (String) listOperations.rightPop("list1");
System.out.println(lis1);
}
}
//set operations
@Test
public void testSet() {
SetOperations setOperations = redisTemplate.opsForSet();
//add members; duplicates are dropped
setOperations.add("list2","a","b","c","d","a");
//read the members
Set<String> list2 = setOperations.members("list2");
for (String value : list2) {
System.out.println(value);
}
//remove members
setOperations.remove("list2","a","b");
}
//sorted set (zset) operations
@Test
public void testZset() {
ZSetOperations zSetOperations = redisTemplate.opsForZSet();
//add members with scores
zSetOperations.add("zlist","a",10.0);
zSetOperations.add("zlist","b",11.0);
zSetOperations.add("zlist","c",12.0);
//read the members
Set<String> zlist = zSetOperations.range("zlist", 0, -1);
System.out.println(zlist);
for (String o : zlist) {
System.out.println(o);
}
//increment a member's score
zSetOperations.incrementScore("zlist","b",18);
Set<String> zlist1 = zSetOperations.range("zlist", 0, -1);
System.out.println(zlist1);
//remove a member
zSetOperations.remove("zlist","a");
}
//generic operations that work across data types
@Test
public void testCommon() {
//get all keys in redis
Set<String> keys = redisTemplate.keys("*");
System.out.println(keys);
//check whether a key exists
Boolean list1 = redisTemplate.hasKey("list1");
System.out.println(list1);
//delete a key
redisTemplate.delete("zlist");
//get the data type of the value stored at a key
DataType dataType = redisTemplate.type("list1");
System.out.println(dataType);
}
If you store a List<Object> in Redis, you need proper serialization and deserialization.
RedisConfig.java
package com.chj.library.utils;
import com.fasterxml.jackson.annotation.JsonAutoDetect;
import com.fasterxml.jackson.annotation.PropertyAccessor;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.springframework.cache.annotation.CachingConfigurerSupport;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.serializer.StringRedisSerializer;
import org.springframework.stereotype.Component;
/**
 * Redis configuration class
 */
@Component
public class RedisConfig {
@Bean
public RedisTemplate<String, String> redisTemplate(RedisConnectionFactory redisConnectionFactory) {
RedisTemplate<String, String> redisTemplate = new RedisTemplate<>();
redisTemplate.setConnectionFactory(redisConnectionFactory);
//Jackson2JsonRedisSerializer could serialize/deserialize the redis values:
//Jackson2JsonRedisSerializer serializer = new Jackson2JsonRedisSerializer(Object.class);
//here FastJson2JsonRedisSerializer is used to serialize/deserialize the redis values
FastJson2JsonRedisSerializer serializer = new FastJson2JsonRedisSerializer(Object.class);
ObjectMapper mapper = new ObjectMapper();
mapper.setVisibility(PropertyAccessor.ALL, JsonAutoDetect.Visibility.ANY);
mapper.enableDefaultTyping(ObjectMapper.DefaultTyping.NON_FINAL);
serializer.setObjectMapper(mapper);
redisTemplate.setValueSerializer(serializer);
//use StringRedisSerializer to serialize/deserialize the redis keys
redisTemplate.setKeySerializer(new StringRedisSerializer());
redisTemplate.afterPropertiesSet();
return redisTemplate;
}
}
FastJson2JsonRedisSerializer.java
package com.chj.library.utils;
import com.alibaba.fastjson.JSON;
import com.alibaba.fastjson.parser.ParserConfig;
import com.alibaba.fastjson.serializer.SerializerFeature;
import com.fasterxml.jackson.databind.JavaType;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.type.TypeFactory;
import org.springframework.data.redis.serializer.RedisSerializer;
import org.springframework.data.redis.serializer.SerializationException;
import org.springframework.util.Assert;
import java.nio.charset.Charset;
public class FastJson2JsonRedisSerializer<T> implements RedisSerializer<T> {
private ObjectMapper objectMapper = new ObjectMapper();
public static final Charset DEFAULT_CHARSET = Charset.forName("UTF-8");
private Class<T> clazz;
static {
ParserConfig.getGlobalInstance().setAutoTypeSupport(true);
//if deserialization fails with an "autoType is not support" error, register your bean package below:
// ParserConfig.getGlobalInstance().addAccept("com.xxxxx.xxx");
}
public FastJson2JsonRedisSerializer(Class<T> clazz) {
super();
this.clazz = clazz;
}
public byte[] serialize(T t) throws SerializationException {
if (t == null) {
return new byte[0];
}
return JSON.toJSONString(t, SerializerFeature.WriteClassName).getBytes(DEFAULT_CHARSET);
}
public T deserialize(byte[] bytes) throws SerializationException {
if (bytes == null || bytes.length <= 0) {
return null;
}
String str = new String(bytes, DEFAULT_CHARSET);
return JSON.parseObject(str, clazz);
}
public void setObjectMapper(ObjectMapper objectMapper) {
Assert.notNull(objectMapper, "'objectMapper' must not be null");
this.objectMapper = objectMapper;
}
protected JavaType getJavaType(Class<?> clazz) {
return TypeFactory.defaultInstance().constructType(clazz);
}
}
Alternatively, use fastjson to convert to a JSON string manually and parse it back:
List<Book> bookList = bookService.list();
String s = JSON.toJSONString(bookList);
List<Book> bookList1 = JSON.parseArray(s, Book.class);
System.out.println(bookList1);
Spring Cache
Introduction to Spring Cache
Spring Cache is a framework that implements annotation-based caching: adding a simple annotation is enough to enable caching.
Spring Cache provides an abstraction layer whose underlying cache implementation can be swapped out. Concretely, the different caching technologies are unified behind the CacheManager interface.
CacheManager is Spring's abstraction over the various caching technologies.
Dependency:
<dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-cache</artifactId> </dependency>
A different CacheManager implementation is needed for each caching technology:
CacheManager | Description |
EhCacheCacheManager | uses EhCache as the caching technology |
GuavaCacheManager | uses Google's Guava Cache as the caching technology |
RedisCacheManager | uses Redis as the caching technology |
Common Spring Cache annotations
Annotation | Description |
@EnableCaching | enables caching annotations; put it on the application class |
@Cacheable | before the method runs, Spring checks the cache; if data is present it is returned directly, otherwise the method is invoked and its return value is cached |
@CachePut | puts the method's return value into the cache |
@CacheEvict | removes one or more entries from the cache; allEntries=true removes all |
In a Spring Boot project, using a caching technology only requires adding its dependency and annotating the startup class with @EnableCaching.
For example, to use Redis as the cache, just add the Spring Data Redis Maven dependency.
Using Spring Cache
Steps to use Spring Cache in a Spring Boot project (with Redis as the caching technology):
1. Add the Maven dependencies:
spring-boot-starter-data-redis, spring-boot-starter-cache
2. Configure application.yml:
spring:
  cache:
    redis:
      time-to-live: 1800000 # cache TTL: 30 minutes
3. Annotate the startup class with @EnableCaching to enable caching annotations.
4. Annotate controller methods with @Cacheable, @CacheEvict, etc. to perform cache operations.
/**
 * @Cacheable: before the method runs, Spring checks the cache; if data is present it is returned directly, otherwise the method is invoked and its return value cached
 * key: the cache key
 * condition: cache only when the condition holds
 * unless: do not cache when the condition holds
 * @param id
 * @return
 */
@GetMapping({"{id}"})
@Cacheable(value = "userCache",key = "#p0",unless = "#result==null")
public User getOne(@PathVariable int id){
User user = userService.getById(id);
return user;
}
/**
 * @CachePut: puts the method's return value into the cache
 * value: the cache name; each cache name can hold multiple keys
 * key: the cache key
 * userCache=30: cache entries under this name expire after 30 seconds
 * @param user
 * @return
 */
@CachePut(value = "userCache=30",key = "#user.id")
@PostMapping
public User save(User user){
userService.save(user);
return user;
}
@CacheEvict(value = "userCache",key = "#user.id")
//@CacheEvict(value = "userCache",key = "#p0.id")
@PutMapping()
public R<String> update(@RequestBody User user){
boolean flag = userService.updateById(user);
if (!flag){
return R.error("update failed");
}
return R.success("update succeeded");
}
/**
 * @CacheEvict: evicts the given cache entries
 * value: the cache name; each cache name can hold multiple keys
 * key: the cache key
 * @param id
 * @return
 */
@CacheEvict(value = "userCache",allEntries = true)
//@CachePut(value = "deleteCache",key = "#p0")
@DeleteMapping("/{id}")
public R<String> delete(@PathVariable Integer id){
boolean flag = userService.removeById(id);
if (!flag){
return R.error("delete failed");
}
return R.success("delete succeeded");
}
Switching cache provider: Ehcache
● Add the Ehcache dependency (the cache provider implementation):
<dependency>
<groupId>net.sf.ehcache</groupId>
<artifactId>ehcache</artifactId>
</dependency>
● Add the configuration file ehcache.xml:
<?xml version="1.0" encoding="UTF-8"?>
<ehcache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:noNamespaceSchemaLocation="http://ehcache.org/ehcache.xsd"
updateCheck="false">
<diskStore path="D:\ehcache" />
<!-- default cache policy -->
<!-- eternal: whether entries live forever; true disables expiry and conflicts with the timeouts, so it is usually false -->
<!-- diskPersistent: whether to enable disk persistence -->
<!-- maxElementsInMemory: maximum number of cached elements -->
<!-- overflowToDisk: whether to spill to disk once maxElementsInMemory is exceeded -->
<!-- timeToIdleSeconds: maximum idle time; too long and the cache overflows easily, too short and caching has no effect; useful for short-lived data such as verification codes -->
<!-- timeToLiveSeconds: maximum lifetime -->
<!-- memoryStoreEvictionPolicy: eviction policy -->
<defaultCache
eternal="false"
diskPersistent="false"
maxElementsInMemory="1000"
overflowToDisk="false"
timeToIdleSeconds="60"
timeToLiveSeconds="60"
memoryStoreEvictionPolicy="LRU" />
<cache
name="phoneCode"
eternal="false"
diskPersistent="false"
maxElementsInMemory="1000"
overflowToDisk="false"
timeToIdleSeconds="10"
timeToLiveSeconds="10"
memoryStoreEvictionPolicy="LRU" />
</ehcache>
● Configure the yml:
spring:
  cache:
    type: ehcache
    ehcache:
      config: ehcache.xml
Switching cache provider: Redis
● Add the Redis dependency:
<dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-data-redis</artifactId> </dependency>
● Configure the yml:
spring:
  redis:
    host: localhost
    port: 6379
  cache:
    type: redis
    redis:
      use-key-prefix: true # whether to prefix keys
      key-prefix: aa # the prefix
      cache-null-values: false # whether null values may be cached
      time-to-live: 10 # cache TTL
Switching cache provider: JetCache
● JetCache wraps Spring Cache and, on top of it, adds multi-level caching, cache statistics, automatic refresh, asynchronous calls, data reports, and more
● JetCache provides a multi-level solution combining a local cache with a remote cache
◆ local cache (local)
■ LinkedHashMap
■ Caffeine
◆ remote cache (remote)
■ Redis
■ Tair
● Add the JetCache dependency:
<dependency> <groupId>com.alicp.jetcache</groupId> <artifactId>jetcache-starter-redis</artifactId> <version>2.6.2</version> </dependency>
● 配置yml
jetcache:
remote:
default:
type: redis
host: localhost
port: 6379
poolConfig:
maxTotal: 50 # pool size
● Usage
1. Annotate the application class:
@SpringBootApplication
//enable method-level cache annotations
@EnableMethodCache(basePackages = "com.spring")
//@EnableCaching
//switch that enables JetCache's @CreateCache caching
@EnableCreateCacheAnnotation
public class CacheApplication {
public static void main(String[] args) {
SpringApplication.run(CacheApplication.class, args);
}
}
2. Using the local cache
@CreateCache(name="jetCache",expire = 10,timeUnit = TimeUnit.SECONDS)//create the cache
private Cache<String,String> jetCache;
@Override
public String getCode(String tele) {
Integer code = codeUtils.generateValidateCode(6);
String s = code.toString();
jetCache.put(tele,s);//put into the cache
System.out.println(code);
return s;
}
@Override
public boolean check(SMSCode smsCode) {
String cacheCode = jetCache.get(smsCode.getTele());//read from the cache
return smsCode.getCode().equals(cacheCode);
}
3. Using method caching
@Autowired
private BookDao bookDao;
@Override
@Cached(name="book_",key="#id",expire = 60,cacheType = CacheType.REMOTE)//cache entries expire after 60 seconds
@CacheRefresh(refresh = 10)//refresh every 10 seconds
public Book getById(Integer id) {
return bookDao.selectById(id);
}
@Override
public List<Book> getAll() {
return bookDao.selectList(null);
}
@Override
public boolean save(Book book) {
int insert = bookDao.insert(book);
return insert != 0;
}
@Override
@CacheUpdate(name="book_",key="#book.id",value = "#book")//update the cache entry
public boolean update(Book book) {
int i = bookDao.updateById(book);
return i != 0;
}
@Override
@CacheInvalidate(name="book_",key="#book.id")//invalidate the cache entry
public boolean delete(Integer id) {
int i = bookDao.deleteById(id);
return i != 0;
}
Cache statistics report
jetcache:
statIntervalMinutes: 15
Note ● cached objects must be serializable
@Data
public class Book implements Serializable{}
jetcache:
  remote:
    default:
      type: redis
      keyConvertor: fastjson
      valueEncoder: java
      valueDecoder: java
Switching cache provider: J2Cache
● J2Cache is a cache integration framework: it provides no caching of its own but lets different caches be combined and used together
● Add the J2Cache dependencies:
<dependency>
<groupId>net.sf.ehcache</groupId>
<artifactId>ehcache</artifactId>
</dependency>
<dependency>
<groupId>net.oschina.j2cache</groupId>
<artifactId>j2cache-core</artifactId>
<version>2.8.2-release</version>
</dependency>
<dependency>
<groupId>net.oschina.j2cache</groupId>
<artifactId>j2cache-spring-boot2-starter</artifactId>
<version>2.8.0-release</version>
</dependency>
● Configure the yml:
j2cache:
config-location: j2cache.properties
● Add ehcache.xml and j2cache.properties
ehcache.xml
<?xml version="1.0" encoding="UTF-8"?>
<ehcache xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:noNamespaceSchemaLocation="http://ehcache.org/ehcache.xsd"
updateCheck="false">
<diskStore path="D:\ehcache" />
<!-- default cache policy -->
<!-- eternal: whether entries live forever; true disables expiry and conflicts with the timeouts, so it is usually false -->
<!-- diskPersistent: whether to enable disk persistence -->
<!-- maxElementsInMemory: maximum number of cached elements -->
<!-- overflowToDisk: whether to spill to disk once maxElementsInMemory is exceeded -->
<!-- timeToIdleSeconds: maximum idle time; too long and the cache overflows easily, too short and caching has no effect; useful for short-lived data such as verification codes -->
<!-- timeToLiveSeconds: maximum lifetime -->
<!-- memoryStoreEvictionPolicy: eviction policy -->
<defaultCache
eternal="false"
diskPersistent="false"
maxElementsInMemory="1000"
overflowToDisk="false"
timeToIdleSeconds="60"
timeToLiveSeconds="60"
memoryStoreEvictionPolicy="LRU" />
<cache
name="phoneCode"
eternal="false"
diskPersistent="false"
maxElementsInMemory="1000"
overflowToDisk="false"
timeToIdleSeconds="10"
timeToLiveSeconds="10"
memoryStoreEvictionPolicy="LRU" />
</ehcache>
j2cache.properties
# level-1 cache
j2cache.L1.provider_class = ehcache
ehcache.configXml = ehcache.xml
# whether to enable the level-2 cache
j2cache.l2-cache-open = false
# level-2 cache
j2cache.L2.provider_class = net.oschina.j2cache.cache.support.redis.SpringRedisProvider
j2cache.L2.config_section = redis
redis.hosts = localhost:6379
# how level-1 data reaches the level-2 cache
j2cache.broadcast = net.oschina.j2cache.cache.support.redis.SpringRedisPubSubPolicy
redis.mode = single
#single -> single redis server
#sentinel -> master-slave servers
#cluster -> cluster servers
#sharded -> sharded servers
# key prefix
redis.namespace = sms
● Usage
@Autowired
private CacheChannel cacheChannel;
@Override
public String getCode(String tele) {
String code = codeUtils.generateValidateCode(6);
cacheChannel.set("sms",tele,code);//put into the cache
System.out.println(code);
return code;
}
@Override
public boolean check(SMSCode smsCode) {
String code = cacheChannel.get("sms", smsCode.getTele()).asString();//read from the cache
return smsCode.getCode().equals(code);
}
MongoDB
Downloading and installing MongoDB
Windows download:
https://www.mongodb.com/try/download
● Installing MongoDB on Windows
◆ extract the archive, then set up the data directory
● Starting MongoDB on Windows
◆ start the server:
mongod --dbpath=..\data\db
◆ start the client:
mongo --host=127.0.0.1 --port=27017
GUI client: Robo 3T
Common MongoDB queries
1. Basic queries
◆ query everything: db.collection.find()
◆ first document only: db.collection.findOne()
◆ limit the result count: db.collection.find().limit(10) // return 10 documents
◆ skip documents: db.collection.find().skip(20) // skip 20 documents
◆ count: db.collection.count()
◆ sort: db.collection.find().sort({age:1}) // ascending by age
◆ projection: db.collection.find(condition, {name:1,age:1}) // keep only the name and age fields
2. Conditional queries
◆ basic form: db.collection.find({condition})
◆ fuzzy match: db.collection.find({field:/regex/}) // like SQL LIKE, but with the full power of regular expressions
◆ comparison: db.collection.find({field:{$gt:value}}) // like SQL numeric comparisons, e.g. age > 18
◆ membership: db.collection.find({field:{$in:[v1,v2]}}) // like SQL IN
◆ combined conditions: db.collection.find({$and:[{cond1},{cond2}]}) // like SQL AND ($or for OR)
Using MongoDB from Spring
1. Add the dependency:
<dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-data-mongodb</artifactId> </dependency>
2. Configure the yml:
spring: data: mongodb: uri: mongodb://localhost/chj
3. Tests
package com.chj.mongodb;
import com.chj.mongodb.domain.Book;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;
import org.springframework.data.domain.Sort;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.query.Criteria;
import org.springframework.data.mongodb.core.query.Query;
import org.springframework.data.mongodb.core.query.Update;
import java.util.List;
@SpringBootTest
class MongodbApplicationTests {
@Autowired
private MongoTemplate mongoTemplate;
@Test
void testSave() {
Book book=new Book();
book.setId(3);
book.setName("test");
book.setType("test");
book.setDescription("test");
mongoTemplate.save(book);
}
@Test
void testFind() {
//query all documents mapped to Book
List<Book> books = mongoTemplate.findAll(Book.class);
System.out.println(books);
//exact-match query
Query query = new Query();
query.addCriteria(Criteria.where("id").is(1));
Book one = mongoTemplate.findOne(query, Book.class);
System.out.println(one);
//fuzzy (regex) query
Query query1 = new Query();
query1.addCriteria(Criteria.where("name").regex("spring"));
List<Book> books1 = mongoTemplate.find(query1, Book.class);
System.out.println(books1);
//multi-condition query
Criteria criteria=new Criteria();
criteria.and("name").regex("springNa");
criteria.and("type").is("springTy2");
Query query2 = new Query(criteria);
//pagination
//query2.skip(0).limit(5);
//sorting
query2.with(Sort.by(
Sort.Order.desc("id")
));
List<Book> books2 = mongoTemplate.find(query2, Book.class);
System.out.println(books2);
}
@Test
void testUpdate(){
Query query = new Query();
query.addCriteria(Criteria.where("id").is(1));
Update update=Update.update("name","springUpdate");
mongoTemplate.upsert(query,update,Book.class);
}
@Test
void testDelete(){
Query query = new Query();
query.addCriteria(Criteria.where("id").is(2));
mongoTemplate.remove(query,Book.class);
}
@Test
void testDeleteAll(){
mongoTemplate.dropCollection(Book.class);
}
}
Elasticsearch (ES)
◆ Elasticsearch is a distributed full-text search engine
Downloading and installing ES
● Windows download:
https://www.elastic.co/cn/downloads/elasticsearch
● After installing, run elasticsearch.bat to start the server; it listens on port 9200
● With the server running, ES can be operated over HTTP
● Create/query/delete an index:
PUT http://localhost:9200/books
GET http://localhost:9200/books
DELETE http://localhost:9200/books
● IK analyzer
◆ download: https://github.com/medcl/elasticsearch-analysis-ik/releases
Extract it into the ES plugins directory and restart the ES server.
Create the mapping properties with Postman (the // annotations below are explanatory only; remove them before sending, since JSON does not allow comments):
{
"mappings":{
"properties":{
"id":{
"type":"keyword"
},
"name":{
"type":"text",
"analyzer":"ik_max_word",//使用分词器
"copy_to":"all"
},
"type":{
"type":"keyword"
},
"description":{
"type":"text",
"analyzer":"ik_max_word",
"copy_to":"all" //拷贝到all
},
"all":{ //合并name和description的数据,实际不存在
"type":"text",
"analyzer":"ik_max_word"
}
}
}
}
● Create documents
POST http://localhost:9200/books/_doc # let ES generate the id
POST http://localhost:9200/books/_create/1 # use the given id; fails if the document already exists
POST http://localhost:9200/books/_doc/1 # use the given id; creates if absent, updates (bumping the version) if present
Query documents
single document: GET http://localhost:9200/books/_doc/1
all documents: GET http://localhost:9200/books/_search
conditional query: GET http://localhost:9200/books/_search?q=name:springboot
Delete documents
single document: DELETE http://localhost:9200/books/_doc/1
Update documents
replace the whole document: PUT http://localhost:9200/books/_doc/1
partial update: POST http://localhost:9200/books/_update/1
{ "doc":{
"name":"update one"}}
Integrating ES with Spring
Low-level client
1. Add the dependency:
<!-- https://mvnrepository.com/artifact/org.springframework.boot/spring-boot-starter-data-elasticsearch -->
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-elasticsearch</artifactId>
<version>2.2.5.RELEASE</version>
</dependency>
2. Configure the yml:
spring: elasticsearch: rest: uris: http://localhost:9200
3. Inject:
@Autowired private ElasticsearchRestTemplate elasticsearchRestTemplate;
High-level client
● Spring Boot does not track ES releases closely, so ES itself provides the High Level REST Client for operating ES
● Add the dependency:
<dependency>
<groupId>org.elasticsearch.client</groupId>
<artifactId>elasticsearch-rest-high-level-client</artifactId>
</dependency>
● Configuration: none
private RestHighLevelClient client;
@Test
void testCreateIndex() throws IOException {
//create the client
HttpHost host = HttpHost.create("http://localhost:9200");
RestClientBuilder builder= RestClient.builder(host);
client = new RestHighLevelClient(builder);
//create the index
CreateIndexRequest request=new CreateIndexRequest("books");
client.indices().create(request, RequestOptions.DEFAULT);
//close the client
client.close();
}
Improved client setup
private RestHighLevelClient client;
@BeforeEach
void setUp() {
//create the client
HttpHost host = HttpHost.create("http://localhost:9200");
RestClientBuilder builder= RestClient.builder(host);
client = new RestHighLevelClient(builder);
}
@AfterEach
void tearDown() throws IOException {
//close the client
client.close();
}
@Test
void testCreateIndex1() throws IOException {
//create the index
CreateIndexRequest request=new CreateIndexRequest("books");
client.indices().create(request, RequestOptions.DEFAULT);
}
Creating an index with mapping properties
@Test
void testCreateIndexByKey() throws IOException {
//create the index
CreateIndexRequest request = new CreateIndexRequest("books");
request.source("{\n" +
" \"mappings\":{\n" +
" \"properties\":{\n" +
" \"id\":{\n" +
" \"type\":\"keyword\" \n" +
" },\n" +
" \"name\":{\n" +
" \"type\":\"text\",\n" +
" \"analyzer\":\"ik_max_word\",//使用分词器\n" +
" \"copy_to\":\"all\"\n" +
" },\n" +
" \"type\":{\n" +
" \"type\":\"keyword\"\n" +
" },\n" +
" \"description\":{\n" +
" \"type\":\"text\",\n" +
" \"analyzer\":\"ik_max_word\",\n" +
" \"copy_to\":\"all\" //拷贝到all\n" +
" },\n" +
" \"all\":{ //合并name和description的数据,实际不存在\n" +
" \"type\":\"text\",\n" +
" \"analyzer\":\"ik_max_word\"\n" +
" }\n" +
" }\n" +
" }\n" +
"}", XContentType.JSON);
client.indices().create(request, RequestOptions.DEFAULT);
}
Adding documents
/**
 * create a document
 * @throws IOException
 */
@Test
public void testCreateDoc() throws IOException {
Book book = bookService.getById(2);
IndexRequest request = new IndexRequest("books").id(book.getId().toString());
String json = JSON.toJSONString(book);
request.source(json,XContentType.JSON);
client.index(request,RequestOptions.DEFAULT);
}
/**
 * use client.bulk(), a container for batched requests,
 * to add documents in bulk
 * @throws IOException
 */
@Test
public void testCreateDocAll() throws IOException {
BulkRequest bulkRequest= new BulkRequest();
List<Book> list = bookService.list();
for (Book book : list) {
IndexRequest indexRequest = new IndexRequest("books").id(book.getId().toString());
String json = JSON.toJSONString(book);
indexRequest.source(json,XContentType.JSON);
bulkRequest.add(indexRequest);
}
client.bulk(bulkRequest,RequestOptions.DEFAULT);
}
Querying documents
/**
 * query a single document
 * @throws IOException
 */
@Test
void testGet() throws IOException {
GetRequest request=new GetRequest("books","2");
GetResponse response = client.get(request, RequestOptions.DEFAULT);
String json = response.getSourceAsString();
System.out.println(json);
}
/**
 * conditional query
 * @throws IOException
 */
@Test
void testSearch() throws IOException {
SearchRequest search=new SearchRequest("books");
//omit the next three lines to query everything
SearchSourceBuilder builder = new SearchSourceBuilder();
builder.query(QueryBuilders.termQuery("name","java"));
search.source(builder);
SearchResponse response = client.search(search,RequestOptions.DEFAULT);
SearchHits hits = response.getHits();
for (SearchHit hit : hits) {
String json = hit.getSourceAsString();
System.out.println(json);
}
}
Updating documents
/**
 * update a document
 * @throws IOException
 */
@Test
public void testUpdateById() throws IOException {
UpdateRequest updateRequest=new UpdateRequest("books","17");
Book byId = bookService.getById(17);
byId.setDescription("testEsUpdate");
updateRequest.doc(JSON.toJSONString(byId),XContentType.JSON);
client.update(updateRequest,RequestOptions.DEFAULT);
}
/**
 * query by condition, then update each hit
 * @throws IOException
 */
@Test
public void testUpdate() throws IOException {
SearchRequest search=new SearchRequest("books");
SearchSourceBuilder builder = new SearchSourceBuilder();
builder.query(QueryBuilders.termQuery("name","suiafbui"));
search.source(builder);
SearchResponse response = client.search(search, RequestOptions.DEFAULT);
SearchHits hits = response.getHits();
for (SearchHit hit : hits) {
String sourceAsString = hit.getSourceAsString();
Book book = JSON.parseObject(sourceAsString, Book.class);
book.setName("testUpdateByQuery");
UpdateRequest updateRequest=new UpdateRequest("books",book.getId().toString());
updateRequest.doc(JSON.toJSONString(book),XContentType.JSON);
client.update(updateRequest,RequestOptions.DEFAULT);
}
}
Deleting documents
/**
 * delete a document
 * @throws IOException
 */
@Test
public void testDelete() throws IOException {
DeleteRequest deleteRequest = new DeleteRequest("books","18");
client.delete(deleteRequest,RequestOptions.DEFAULT);
//delete the entire index
DeleteIndexRequest deleteIndexRequest = new DeleteIndexRequest("books");
client.indices().delete(deleteIndexRequest,RequestOptions.DEFAULT);
}