Redis

Getting started with Redis

A look at the components a large e-commerce site such as Taobao uses

#1. Basic product information
	Name, price, ...
	A relational database such as MySQL or Oracle handles this
#2. Product descriptions and reviews (large amounts of text)
	A document database such as MongoDB
#3. Images
	Distributed file systems: FastDFS
	- Taobao's own TFS
	- Google's GFS
	- Hadoop HDFS
	- Alibaba Cloud OSS
	- Tencent Cloud COS
#4. Keyword search
	Search engines
	Solr, Elasticsearch, ISearch
#5. Bursty data on hot products
	In-memory databases
	- Redis, Tair, Memcache, ...
#6. Product transactions and external payment interfaces
	- Third-party applications

The four categories of NoSQL

KV key-value stores:

Sina: Redis

Meituan: Redis + Tair

Alibaba, Baidu: Redis + Memcache

Document stores (BSON format):

MongoDB

MongoDB is a database built on distributed file storage, used to store large numbers of documents

Column stores:

HBase

Distributed file systems

Graph databases:

Neo4j, InfoGrid

Overview

Redis (Remote Dictionary Server) is an open-source, in-memory key-value database written in ANSI C. It is network-accessible, supports optional persistence alongside pure in-memory operation, keeps an operation log, and provides APIs for many languages.

What can Redis do?

1. In-memory storage with persistence; memory is lost on power-off, so persistence matters (RDB, AOF)

2. High performance, suitable as a cache

3. Publish/subscribe

4. Geospatial analysis (maps, nearby queries)

5. Timers, counters, page-view counts

Features

1. Rich data types

2. Persistence

3. Clustering

4. Transactions

Resources used while learning

1. Official site: https://redis.io/

2. Chinese site: http://redis.cn/

Installing on Linux

1. Download the Linux tarball

2. Install the build environment

yum install gcc-c++
make
make install

3. Edit redis.conf

daemonize yes

4. Start the server

redis-server /path/to/redis.conf

5. Connect with redis-cli

redis-cli -p 6379
keys *  # list all keys

6. Check that the process is running

ps -ef | grep redis

7. Shut down

shutdown  # stop the server (run inside redis-cli)
exit      # quit the cli

redis-benchmark performance testing

# Test: 100 concurrent connections, 100k requests in total
redis-benchmark -h localhost -p 6379 -c 100 -n  100000
====== SET ======
  100000 requests completed in 1.13 seconds  # 100k writes finished in 1.13s
  100 parallel clients                       # 100 concurrent clients
  3 bytes payload                            # 3 bytes written per request
  keep alive: 1                              # connections kept alive (single server under test)
  host configuration "save": 900 1 300 10 60 10000
  host configuration "appendonly": no
  multi-thread: no

# All requests handled within 3 ms
79.99% <= 1 milliseconds
99.80% <= 2 milliseconds
99.90% <= 3 milliseconds
100.00% <= 3 milliseconds
# 88183.43 requests handled per second
88183.43 requests per second
====== GET ====== (same format as SET)
  100000 requests completed in 1.18 seconds
  100 parallel clients
  3 bytes payload
  keep alive: 1
  host configuration "save": 900 1 300 10 60 10000
  host configuration "appendonly": no
  multi-thread: no

80.23% <= 1 milliseconds
98.86% <= 2 milliseconds
99.06% <= 3 milliseconds
99.11% <= 4 milliseconds
99.44% <= 5 milliseconds
99.74% <= 6 milliseconds
99.90% <= 8 milliseconds
100.00% <= 9 milliseconds
100.00% <= 9 milliseconds
84889.65 requests per second

Basics

Redis ships with 16 databases by default (databases 16 in the config file); database 0 is used by default

# Switch database
select 1
# Show the size of the current database
dbsize
# Clear the current database
flushdb
# Clear all databases
flushall

Redis executes commands on a single thread, but Redis 6+ adds multi-threaded network I/O

Why is single-threaded Redis still fast?

Redis keeps all data in memory, so commands are CPU-cheap and the bottleneck is memory and network bandwidth; a single thread avoids context switches and locking, which makes it the most efficient choice here.

The five basic data types

Basic Redis-key commands

# When unsure about a command, check the docs on the official site
# set name 1          set a value
# get name            get a value
# exists name         check whether a key exists (1 if it exists, else 0)
# move name 1         move the key name from the current database to database 1
# EXPIRE name 10      make the key name expire after 10 seconds
# ttl name            show the remaining seconds for the key name
# type name           show the type of the key name

String (strings)

A String value can hold a number as well as text

# append key1 gf       append "gf" to key1, like Java's String a = "1"; a += "gf"; if key1 does not exist this behaves like set
# strlen key1          length of key1's value
# incr views           increment views by 1
# decr views           decrement views by 1
# incrby views 10      increment views by 10
# decrby views 10      decrement views by 10
# getrange key1 0 3    substring from index 0 to 3; 0 -1 returns the whole string
# SETRANGE key2 1 xxx  overwrite the string starting at index 1
# setex (set with expire)      SETEX key1 10 name  sets key1 to name with a 10s expiry
# setnx (set if not exist)     (often used for distributed locks) setnx key1 "redis" creates key1 only if it does not exist (returns 1 on success, 0 otherwise)
# mset k1 v1 k2 v2 k3 v3  batch set: key value key value ...
# mget k1 k2 k3        batch get k1, k2, k3
# MSETNX k1 v4 k4 v4   like setnx, but fails entirely if any of the keys already exists
# set user:1 {name:zhangsan,age:3}  store a JSON string as the value of user:1
# getset db redis      get then set: returns nil and sets the value if the key is missing, otherwise returns the old value and stores the new one
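Because numeric string values support INCR/INCRBY server-side, a counter needs no read-modify-write on the client. Conceptually the server does something like the following plain-Java sketch of the semantics (the class name is my own, not part of Redis or Jedis):

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of Redis INCRBY semantics: values are stored as strings,
// parsed as integers on demand, and incremented atomically on the server.
public class IncrDemo {
    private final Map<String, String> store = new HashMap<>();

    // Mirrors INCRBY key delta: a missing key starts at 0.
    public synchronized long incrBy(String key, long delta) {
        long current = Long.parseLong(store.getOrDefault(key, "0"));
        long next = current + delta;
        store.put(key, Long.toString(next));
        return next;
    }

    public static void main(String[] args) {
        IncrDemo db = new IncrDemo();
        db.incrBy("views", 1);                       // INCR views      -> 1
        db.incrBy("views", 10);                      // INCRBY views 10 -> 11
        System.out.println(db.incrBy("views", -1));  // DECR views      -> 10
    }
}
```

The `synchronized` keyword stands in for the single-threaded command loop: on the real server each INCR is atomic without any client-side locking.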

List

In Redis a list can be used as a stack, a queue, or a blocking queue.

All list commands start with l (or act on the list type)

# LPUSH list one       push one onto the head (left) of the list key list; the newest element ends up at index 0
# RPUSH list one       push one onto the tail (right); the newest element ends up last
# LRANGE list 0 -1     get every value in the list
# LRANGE list 0 0      get the first value
# LPOP list            remove the first value
# RPOP list            remove the last value
# LINDEX list 0        get the value at an index, here index 0
# LLEN list            list length (like Java's list.size())
# LREM list 2 12       remove values equal to 12; removes up to 2 occurrences if duplicated
# LTRIM list 1 2       keep only the elements between index 1 and 2, deleting the rest
# RPOPLPUSH mylist list  move the last element of mylist to the head of list
# LSET list 0 item     overwrite the element at index 0 with item
# LINSERT list before "world" see   insert see before the value world
# LINSERT list after "world" see1   insert see1 after the value world
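The stack/queue behavior above maps directly onto java.util.Deque, which can help when reasoning about which end each command touches. A conceptual sketch, not a Redis client (the class name is my own):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Conceptual model of a Redis list: LPUSH/LPOP act on the head (left),
// RPUSH/RPOP on the tail (right). LPUSH+LPOP behaves like a stack,
// LPUSH+RPOP like a queue.
public class ListDemo {

    // Runs a short command sequence and returns [LPOP result, RPOP result].
    public static List<String> simulate() {
        Deque<String> list = new ArrayDeque<>();
        list.addFirst("one");   // LPUSH list one
        list.addFirst("two");   // LPUSH list two  (now at index 0)
        list.addLast("zero");   // RPUSH list zero (now last)
        String left = list.removeFirst();  // LPOP list -> "two"
        String right = list.removeLast();  // RPOP list -> "zero"
        return List.of(left, right);
    }

    public static void main(String[] args) {
        System.out.println(simulate()); // [two, zero]
    }
}
```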

Set

Set values cannot repeat

Set commands start with s

# sadd set hello       create a set named set and add hello
# SMEMBERS set         get every member of the key set
# SISMEMBER set hello  does the set contain hello? 1 if yes, 0 if no
# SCARD set            number of members (size)
# SREM set hello       remove hello from the set
# SRANDMEMBER set 1    return 1 random member (2 returns two, and so on)
# SPOP set 1           remove 1 random member (2 removes two)
# smove set set2 hello2  move hello2 out of set and into set2
# SDIFF set set2       members of set that are not in set2 (difference)
1) "a"
2) "b"
# SINTER set set2      members present in both set and set2 (intersection)
1) "c"
# SUNION set set2      all members of set and set2 (union)
1) "a"
2) "b"
3) "c"
4) "d"
5) "e"
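SDIFF, SINTER, and SUNION are plain set algebra, and java.util.Set has direct equivalents, which makes the direction of SDIFF (first set minus the rest) easy to check. A sketch with my own class name, not a Redis API:

```java
import java.util.HashSet;
import java.util.Set;

// Redis set algebra expressed with java.util.Set operations.
public class SetAlgebraDemo {

    public static Set<String> diff(Set<String> a, Set<String> b) {   // SDIFF a b
        Set<String> r = new HashSet<>(a);
        r.removeAll(b);   // keep only members of a not in b
        return r;
    }

    public static Set<String> inter(Set<String> a, Set<String> b) {  // SINTER a b
        Set<String> r = new HashSet<>(a);
        r.retainAll(b);   // keep only members present in both
        return r;
    }

    public static Set<String> union(Set<String> a, Set<String> b) {  // SUNION a b
        Set<String> r = new HashSet<>(a);
        r.addAll(b);      // all members of either
        return r;
    }

    public static void main(String[] args) {
        Set<String> s1 = Set.of("a", "b", "c");
        Set<String> s2 = Set.of("c", "d", "e");
        System.out.println(diff(s1, s2));   // members of s1 not in s2 (order varies)
        System.out.println(inter(s1, s2));  // members in both
        System.out.println(union(s1, s2));  // every member of either
    }
}
```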


Hash

A map collection: a key-value map stored under one key

Hash commands start with h

# hset myhash field1 hello   create myhash and set its field field1 to hello
# hget myhash field1         get field1 from myhash
# HMSET myhash field1 hello1 field2 world  batch set
# hmget myhash field1 field2 batch get
# HGETALL myhash             get all fields and values
# HDEL myhash field1         delete the given field from the hash
# HLEN myhash                number of fields (size)
# HEXISTS myhash field2      does the field exist? (0: no, 1: yes)
# hkeys myhash               all fields
# hvals myhash               all values
# HINCRBY myhash field1 1    increment by 1; use -1 to decrement
# HSETNX myhash field3 2     set only if the field does not already exist

ZSet

A set where every member carries a score: set k1 v1 vs. zadd k1 score1 v1

# ZADD zset 1 one       add one to zset with score 1 (the score orders the members)
# ZRANGE zset 0 -1      get all members of zset
# ZRANGEBYSCORE zset -inf +inf withscores  ascending query (-inf/+inf mean negative/positive infinity; withscores also returns the scores)
# ZREVRANGEBYSCORE zset +inf -inf withscores  descending query
# ZREM zset one         remove the given member
# ZCARD zset            size
# ZCOUNT zset 0 2       number of members with scores in the given range
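The zset model is just "member plus score, queried in score order". A plain-Java sketch of the semantics (real Redis uses a skip list plus a hash internally; the class name is my own):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Conceptual model of a zset: every member has a score, and range
// queries return members sorted by score.
public class ZSetDemo {
    private final Map<String, Double> scores = new HashMap<>();

    public void zadd(double score, String member) {  // ZADD key score member
        scores.put(member, score);
    }

    // ZRANGEBYSCORE key min max: members with min <= score <= max, ascending.
    public List<String> zrangeByScore(double min, double max) {
        return scores.entrySet().stream()
                .filter(e -> e.getValue() >= min && e.getValue() <= max)
                .sorted(Map.Entry.comparingByValue())
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        ZSetDemo z = new ZSetDemo();
        z.zadd(2, "two");
        z.zadd(1, "one");
        z.zadd(3, "three");
        // like ZRANGEBYSCORE zset -inf +inf
        System.out.println(z.zrangeByScore(Double.NEGATIVE_INFINITY,
                                           Double.POSITIVE_INFINITY));
        // [one, two, three]
    }
}
```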

Three special data types

geospatial

Geolocation: locating friends, people nearby, ride distances, the distance between two places.

There are only 6 commands

geospatial is built on zset underneath, so zset commands can be used to modify and query it

# Add a location (longitude first, then latitude, then a member name)
geoadd china:city 114.085947 22.547 shengzheng
# Get the longitude and latitude of beijin under china:city
GEOPOS china:city beijin
# Units: m = meters, km = kilometers, ft = feet, mi = miles
# Straight-line distance between beijin and shanghai
GEODIST china:city beijin shanghai km
# georadius: find members within a radius of a given longitude/latitude
#            lon lat radius unit; withcoord returns coordinates; count limits the results to 1
georadius china:city 110 30 1000 km withcoord count 1
# GEORADIUSBYMEMBER: find members within a radius of an existing member
#            everything within 4000 km of beijin
GEORADIUSBYMEMBER china:city beijin 4000 km

hyperloglog

HyperLogLog is an algorithm for cardinality estimation!

Page UV (one person visiting a site many times still counts as one person)

The traditional approach stores user ids in a set, which gets expensive with many ids; the goal is counting, not storing ids.

# Create 2 hyperloglogs
pfadd mykey a b c d e f g h j
PFADD mykey2 f g h i j k l m n
# Merge the two into one (union of distinct values) and estimate the cardinality
PFMERGE mykey3 mykey mykey2
# Get the count
PFCOUNT mykey3
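PFCOUNT approximates, within a small error and a few kilobytes per key, the number that the exact set-based UV counter would produce. The exact version it replaces looks like this (a plain-Java sketch; the class name is my own):

```java
import java.util.HashSet;
import java.util.Set;

// Exact UV counting with plain sets: this is what PFADD/PFMERGE/PFCOUNT
// approximate. The difference is memory: this stores every id, while a
// HyperLogLog key stays around 12 KB regardless of cardinality.
public class UvDemo {

    // PFMERGE dest a b followed by PFCOUNT dest, computed exactly.
    public static int mergedCount(Set<String> a, Set<String> b) {
        Set<String> merged = new HashSet<>(a);
        merged.addAll(b);       // union of distinct visitors
        return merged.size();   // exact cardinality
    }

    public static void main(String[] args) {
        Set<String> day1 = Set.of("a", "b", "c", "d", "e", "f", "g", "h", "j");
        Set<String> day2 = Set.of("f", "g", "h", "i", "j", "k", "l", "m", "n");
        System.out.println(mergedCount(day1, day2)); // 14 distinct visitors
    }
}
```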

bitmap

Values can only be 0 or 1

Bit-level storage

Useful for user stats such as active vs. inactive users

# Bit storage: recording a week of check-ins; the offset encodes the day (e.g. 0 = Sunday), the bit encodes the status
SETBIT sing 0 0
SETBIT sing 1 0
SETBIT sing 2 0
SETBIT sing 3 0
SETBIT sing 4 1
# Get the bit at offset 0 of sing
GETBIT sing 0
# Count the bits set to 1
BITCOUNT sing
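The check-in idea above maps onto java.util.BitSet almost one-to-one, which makes the semantics easy to verify offline (a conceptual sketch with my own class name, not a Redis client):

```java
import java.util.BitSet;

// Conceptual model of a Redis bitmap: SETBIT key offset 1 sets one bit,
// GETBIT reads one bit, BITCOUNT counts the bits set to 1.
public class SignInDemo {

    // Days the user checked in become 1 bits; returns the BITCOUNT.
    public static int checkedInDays(int... days) {
        BitSet week = new BitSet(7);
        for (int day : days) {
            week.set(day);          // SETBIT sing <day> 1
        }
        return week.cardinality();  // BITCOUNT sing
    }

    public static void main(String[] args) {
        System.out.println(checkedInDays(4));    // 1 (matches the session above)
        System.out.println(checkedInDays(0, 4)); // 2
    }
}
```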

Transactions

All succeed together or fail together: atomicity!

A single Redis command is atomic, but a Redis transaction does not guarantee atomicity

The essence of a Redis transaction: a batch of commands! All commands in a transaction are serialized and executed in order

One shot, in order, without interleaving.

Redis transactions have no concept of isolation levels.

A Redis transaction:

Open the transaction (multi)

Queue commands (the commands to run)

Execute the transaction (exec) or abandon it (discard)

Locking: Redis supports optimistic locking via watch

# Open a transaction
MULTI
OK # OK means the transaction is open
# Queue some commands
set k1 v1
QUEUED  # QUEUED means the command was added to the queue
sadd k2 v2
QUEUED
set k3 v3
QUEUED
get k3
QUEUED
# Execute the transaction
exec
# Returned results
1) OK
2) (integer) 1
3) OK
4) "v3"
# MULTI opens a transaction
multi
# OK
# Queue some commands
set k1 v1
QUEUED
set k2 v2
QUEUED
set k4 v4
QUEUED
# Abandon the transaction; nothing queued takes effect
DISCARD

# If a queued command is malformed (like a compile-time error in Java), exec throws EXECABORT Transaction discarded because of previous errors. and the whole transaction is discarded
MULTI
set k1 v1
QUEUED
127.0.0.1:6379> set k3
(error) ERR wrong number of arguments for 'set' command
127.0.0.1:6379> exec
(error) EXECABORT Transaction discarded because of previous errors.
# If a command is well-formed but fails at runtime (like a runtime exception in Java), only that command fails; the rest still run
# On transaction errors see https://blog.csdn.net/liuxiao723846/article/details/26488525
MULTI
set k1 v1
QUEUED
sadd k1 v2
QUEUED
set k1 3
QUEUED
exec
1) OK
2) (error) WRONGTYPE Operation against a key holding the wrong kind of value
3) OK

Monitoring (Watch)

Pessimistic locking:

Assumes something can go wrong at any time, so it locks around every operation!

Optimistic locking:

Assumes things will not go wrong, so it takes no lock! When updating, it checks whether anyone modified the data in the meantime

# Normal case: run a transaction while watching the money key
set money 100
OK
set out 0
OK
# Watch the money key
watch money
OK
# The data did not change in the meantime, so the transaction succeeds
MULTI
OK
DECRBY money 20
QUEUED
INCRBY out 20
QUEUED
exec
1) (integer) 80
2) (integer) 20
# Testing what WATCH does under concurrency
## Watch money before the transaction
watch money
OK
# Open the transaction
multi
OK
# Change the values
DECRBY money 10
QUEUED
INCRBY out 10
QUEUED
# If another client changed money before we run exec, the transaction fails
exec
(nil)
# The other client
# Read
get money
"80"
# Change the value
set money 1000
OK

# Recovering after WATCH aborts the transaction
# Just stop watching, re-watch to pick up the latest value, and run the transaction again (unwatch stops monitoring)
unwatch
OK
127.0.0.1:6379> watch money
OK
127.0.0.1:6379> multi
OK
127.0.0.1:6379> DECRBY money 10
QUEUED
127.0.0.1:6379> incrby out 10
QUEUED
127.0.0.1:6379> exec
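WATCH turns EXEC into a check-and-set: if the watched key changed, EXEC returns nil and the client retries. That retry pattern has the same shape as a compare-and-set loop, sketched here in plain Java with AtomicLong standing in for the watched key (my own class name, not a Redis API):

```java
import java.util.concurrent.atomic.AtomicLong;

// Optimistic-lock retry loop, the same shape as WATCH/MULTI/EXEC:
// snapshot the watched value, compute, attempt the update, retry on conflict.
public class OptimisticDebit {

    public static long debit(AtomicLong money, long amount) {
        while (true) {
            long seen = money.get();        // WATCH money (take a snapshot)
            long next = seen - amount;      // the queued DECRBY
            if (money.compareAndSet(seen, next)) { // EXEC: succeeds only if unchanged
                return next;
            }
            // EXEC returned nil: another client changed money; loop and re-read
        }
    }

    public static void main(String[] args) {
        AtomicLong money = new AtomicLong(100);
        System.out.println(debit(money, 20)); // 80
    }
}
```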

jedis

Operating Redis from Java

What is Jedis?

Jedis is the officially recommended Java client for Redis; if you want to use Redis from Java you should know Jedis well

1. Import the dependencies

 <dependencies>
        <!-- Jedis client -->
        <!-- https://mvnrepository.com/artifact/redis.clients/jedis -->
        <dependency>
            <groupId>redis.clients</groupId>
            <artifactId>jedis</artifactId>
            <version>3.2.0</version>
        </dependency>
        <!-- fastjson, for JSON (de)serialization -->
        <dependency>
            <groupId>com.alibaba</groupId>
            <artifactId>fastjson</artifactId>
            <version>1.2.62</version>
        </dependency>
    </dependencies>

2. Write a test

  • Connect to Redis
  • Run commands
  • Close the connection when done
   		// new a Jedis object; if a remote connection fails, adjust redis.conf (see https://www.cnblogs.com/l48x4264l46/p/11055692.html)
        Jedis jedis=new Jedis("127.0.0.1",6379);
        // every command lives on the jedis object; PONG means the connection works
        System.out.println(jedis.ping());

Common API

String

   // new a Jedis object
        Jedis jedis=new Jedis("114.55.254.22",6379);
        // every command lives on the jedis object
        System.out.println(jedis.ping());
        System.out.println("flush the database: "+jedis.flushDB());
        System.out.println("does username exist: "+jedis.exists("username"));
        System.out.println("set <'username','gf'>: "+jedis.set("username","gf"));
        System.out.println("set <'password','123'>: "+jedis.set("password","123"));
        System.out.println("all keys in the database ---------------------");
        System.out.println( jedis.keys("*"));
        System.out.println("delete key password: "+jedis.del("password"));
        System.out.println("does password exist: "+jedis.exists("password"));
        System.out.println("type of username: "+jedis.type("username"));
        System.out.println("rename key: "+jedis.rename("username","name"));
        System.out.println("get name: "+jedis.get("name"));
        System.out.println("select database by index: "+jedis.select(0));
        System.out.println("delete all keys in the current database: "+jedis.flushDB());
        System.out.println("number of keys in the current database: "+jedis.dbSize());
        System.out.println("delete the keys in every database: "+jedis.flushAll());
        System.out.println("flush the database: "+jedis.flushDB());
        System.out.println(jedis.set("k1","v1"));
        System.out.println(jedis.set("k2","v1"));
        System.out.println(jedis.set("k3","v1"));
        System.out.println("does k1 exist: "+jedis.exists("k1"));
        System.out.println("delete k2: "+jedis.del("k2"));
        System.out.println("get k2: "+jedis.get("k2"));
        System.out.println("overwrite k1: "+jedis.set("k1","va1"));
        System.out.println("get k1: "+jedis.get("k1"));
        System.out.println("append to k3: "+jedis.append("k3","end"));
        System.out.println("get k3: "+jedis.get("k3"));
        System.out.println("mset several pairs: "+jedis.mset("k4","v4","k5","v5"));
        System.out.println("mget several keys: "+jedis.mget("k4","k5"));
        System.out.println("mget several keys: "+jedis.mget("k4","k5","k6"));
        System.out.println("del several keys: "+jedis.del("k4","k5","k6"));
        System.out.println("mget several keys: "+jedis.mget("k4","k5","k1"));
        System.out.println("flush the database: "+jedis.flushDB());
        System.out.println("----------------------------setnx: set without overwriting an existing value---------------------------------");
        System.out.println(jedis.setnx("k1","value1"));
        System.out.println(jedis.setnx("k2","value2"));
        System.out.println(jedis.setnx("k2","value3"));
        System.out.println(jedis.mget("k1","k2"));
        System.out.println("----------------------------setex: set with an expiry---------------------------------");
        System.out.println(jedis.setex("k3",2,"value3"));
        System.out.println(jedis.get("k3"));
        try {
            TimeUnit.SECONDS.sleep(3);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        System.out.println(jedis.get("k3"));
        System.out.println("----------------------------getSet: return the old value, store the new one---------------------------------");
        System.out.println(jedis.getSet("k2","key2GetSet"));
        System.out.println(jedis.get("k2"));
        System.out.println("substring of k2's value: "+jedis.getrange("k2",2,4));

List

// new a Jedis object
Jedis jedis=new Jedis("114.55.254.22",6379);
jedis.flushDB();
System.out.println("----------------build a list--------------------");
jedis.lpush("collections","ArrayList","Vector","HashMap");
jedis.lpush("collections","HashSet");
jedis.lpush("collections","TreeSet");
jedis.lpush("collections","TreeMap");
// -1 means everything
System.out.println("contents of collections: "+jedis.lrange("collections",0,-1));
System.out.println("contents of collections from 0 to 3: "+jedis.lrange("collections",0,3));
System.out.println("------------------------------------");
// remove the given value; the second argument is how many occurrences to remove (for duplicate values); values pushed last are removed first, like popping a stack
System.out.println("remove n occurrences of a value: "+jedis.lrem("collections",2,"TreeMap"));
System.out.println("contents of collections: "+jedis.lrange("collections",0,-1));
System.out.println("trim everything outside indexes 0-3: "+jedis.ltrim("collections",0,3));
System.out.println("contents of collections: "+jedis.lrange("collections",0,-1));
System.out.println("pop the leftmost element of collections: "+jedis.lpop("collections"));
System.out.println("contents of collections: "+jedis.lrange("collections",0,-1));
System.out.println("pop the rightmost element of collections: "+jedis.rpop("collections"));
System.out.println("contents of collections: "+jedis.lrange("collections",0,-1));
System.out.println("overwrite index 1 of collections: "+jedis.lset("collections",1,"LinkedList"));
System.out.println("contents of collections: "+jedis.lrange("collections",0,-1));
System.out.println("------------------------------------");
System.out.println("length of collections: "+jedis.llen("collections"));
System.out.println("element at index 2 of collections: "+jedis.lindex("collections",2));
System.out.println("------------------------------------");
jedis.lpush("sortedList","23","1","2","4","3","11");
System.out.println("sortedList before sorting: "+jedis.lrange("sortedList",0,-1));
System.out.println("sortedList sorted: "+jedis.sort("sortedList"));

Set

 // new a Jedis object
 Jedis jedis = new Jedis("114.55.254.22", 6379);
 // every command lives on the jedis object
 System.out.println(jedis.ping());
 System.out.println("flush the database: " + jedis.flushDB());
 System.out.println("----------------add elements to a set (no duplicates)----------------------");
 System.out.println(jedis.sadd("eleSet","e1","e2","e3","e4","e5","e6"));
 System.out.println(jedis.sadd("eleSet","e7"));
 System.out.println(jedis.sadd("eleSet","e7"));
 System.out.println("all elements of eleSet: "+jedis.smembers("eleSet"));
 System.out.println("remove e1: "+jedis.srem("eleSet","e1"));
 System.out.println("all elements of eleSet: "+jedis.smembers("eleSet"));
 System.out.println("remove e6 and e7: "+jedis.srem("eleSet","e6","e7"));
 System.out.println("all elements of eleSet: "+jedis.smembers("eleSet"));
 System.out.println("pop a random element of eleSet: "+jedis.spop("eleSet"));
 System.out.println("all elements of eleSet: "+jedis.smembers("eleSet"));
 System.out.println("number of elements in eleSet: "+jedis.scard("eleSet"));
 System.out.println("is e3 in eleSet: "+jedis.sismember("eleSet","e3"));
 System.out.println("is e1 in eleSet: "+jedis.sismember("eleSet","e1"));
 System.out.println("-----------------------------------------");
 System.out.println(jedis.sadd("eleSet1", "e1", "e2", "e3", "e4", "e5", "e6"));
 System.out.println(jedis.sadd("eleSet2", "e1", "e2", "e3", "e7", "e8", "e9"));
 System.out.println("move e1 from eleSet1 into eleSet3: "+jedis.smove("eleSet1","eleSet3","e1"));
 System.out.println("move e3 from eleSet1 into eleSet3: "+jedis.smove("eleSet1","eleSet3","e3"));
 System.out.println("elements of eleSet1: "+jedis.smembers("eleSet1"));
 System.out.println("elements of eleSet3: "+jedis.smembers("eleSet3"));
 System.out.println("-------------------set algebra----------------------");
 System.out.println("elements of eleSet1: "+jedis.smembers("eleSet1"));
 System.out.println("elements of eleSet2: "+jedis.smembers("eleSet2"));
 System.out.println("intersection of eleSet1 and eleSet2: "+jedis.sinter("eleSet1","eleSet2"));
 System.out.println("union of eleSet1 and eleSet2: "+jedis.sunion("eleSet1","eleSet2"));
// elements eleSet1 has that eleSet2 does not
 System.out.println("difference of eleSet1 and eleSet2: "+jedis.sdiff("eleSet1","eleSet2"));
 System.out.println("intersection of eleSet1 and eleSet2, stored into eleSet4: "
         +jedis.sinterstore("eleSet4","eleSet1","eleSet2"));
 System.out.println("elements of eleSet4: "+jedis.smembers("eleSet4"));

Hash

// new a Jedis object
Jedis jedis = new Jedis("114.55.254.22", 6379);
jedis.flushDB();
Map<String, String> map = new HashMap<String, String>();
map.put("k1", "v1");
map.put("k2", "v2");
map.put("k3", "v3");
map.put("k4", "v4");
map.put("k5", "v5");
// create a hash named hash from the map
jedis.hmset("hash", map);
// add the field k6 with value v6 to hash
jedis.hset("hash","k6","v6");
System.out.println("all field-value pairs of hash: "+jedis.hgetAll("hash"));
System.out.println("all fields of hash: "+jedis.hkeys("hash"));
System.out.println("all values of hash: "+jedis.hvals("hash"));
System.out.println("increment k7 by an integer, creating it if missing: "
        +jedis.hincrBy("hash","k7",1));
System.out.println("all field-value pairs of hash: "+jedis.hgetAll("hash"));
System.out.println("increment k7 by an integer, creating it if missing: "
        +jedis.hincrBy("hash","k7",1));
System.out.println("delete one or more fields: "+jedis.hdel("hash","k1","k2"));
System.out.println("all field-value pairs of hash: "+jedis.hgetAll("hash"));
System.out.println("number of field-value pairs in hash: "+jedis.hlen("hash"));
System.out.println("does k1 exist: "+jedis.hexists("hash","k1"));
System.out.println("does k3 exist: "+jedis.hexists("hash","k3"));
System.out.println("get values from hash: "+jedis.hmget("hash","k3"));
System.out.println("get values from hash: "+jedis.hmget("hash","k3","k4"));

Zset

// new a Jedis object
Jedis jedis=new Jedis("114.55.254.22",6379);
// every command lives on the jedis object
System.out.println(jedis.ping());
System.out.println("flush the database: "+jedis.flushDB());
System.out.println("zadd one with score 1: "+jedis.zadd("zset", 1, "one"));
System.out.println("zadd two with score 2: "+jedis.zadd("zset", 2, "two"));
System.out.println("zadd three with score 3: "+jedis.zadd("zset", 3, "three"));
System.out.println("all elements of zset: "+jedis.zrange("zset", 0, -1));
System.out.println("ascending query (-inf/+inf mean negative/positive infinity): "+jedis.zrangeByScore("zset", "-inf", "+inf"));
System.out.println("descending query: "+jedis.zrevrangeByScore("zset", "+inf", "-inf"));
System.out.println("count in the score range 0-1: "+jedis.zcount("zset",0,1));
System.out.println("remove the member one: "+jedis.zrem("zset", "one"));
System.out.println("all elements of zset: "+jedis.zrange("zset", 0, -1));
System.out.println("size: "+jedis.zcard("zset"));
// the other commands mirror set

Transactions

// testing WATCH with multiple threads
// practical WATCH usage: https://www.jianshu.com/p/93cd65d07b56
new Thread(() -> {
    try {
        TimeUnit.SECONDS.sleep(3);
    } catch (InterruptedException e) {
        e.printStackTrace();
    }
    Jedis jedis1 = new Jedis("114.55.254.22", 6379);
    System.out.println(Thread.currentThread().getName()+":"+jedis1.get("user1"));
    jedis1.set("user1","0.4");
},"a").start();


new Thread(() -> {
    // new a Jedis object
    Jedis jedis = new Jedis("114.55.254.22", 6379);
    JSONObject user = new JSONObject();
    user.put("name", "zhangSan");
    user.put("age", "1");
    String s = user.toJSONString();
    jedis.watch("user1");
    System.out.println("read "+jedis.get("user1"));
    // open the transaction
    Transaction multi = jedis.multi();
    try {
        multi.set("user1", s);
  		// sleep 3s to simulate a delay
        try {
            TimeUnit.SECONDS.sleep(3);
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
        multi.exec();
    } catch (Exception e) {
        // on failure, abandon the transaction
        multi.discard();
        e.printStackTrace();
    } finally {
        // this necessarily prints the unmodified value: the watched key changed before exec, so our update failed

        System.out.println("final value: "+jedis.get("user1"));
        // close the connection
        jedis.close();
    }
}).start();

Spring Boot integration

In Spring Boot 2.x the underlying Redis client changed from Jedis to Lettuce;

jedis: uses direct connections; sharing one instance across threads is unsafe, so to stay safe you need a Jedis connection pool.

lettuce: built on Netty; instances can be shared across threads, so there are no thread-safety issues and fewer threads are needed.

// Redis auto-configuration source
@Bean
@ConditionalOnMissingBean(name = "redisTemplate")// we can define our own redisTemplate to replace the default
public RedisTemplate<Object, Object> redisTemplate(RedisConnectionFactory redisConnectionFactory)
      throws UnknownHostException {
    // the default RedisTemplate has few settings; Redis objects all need serialization, and none is configured here
    // both generics are Object, so casts are needed when using it
   RedisTemplate<Object, Object> template = new RedisTemplate<>();
   template.setConnectionFactory(redisConnectionFactory);
   return template;
}

@Bean
@ConditionalOnMissingBean
// String is the most common type, so there is a dedicated StringRedisTemplate
public StringRedisTemplate stringRedisTemplate(RedisConnectionFactory redisConnectionFactory)
      throws UnknownHostException {
   StringRedisTemplate template = new StringRedisTemplate();
   template.setConnectionFactory(redisConnectionFactory);
   return template;
}

Import the dependency

<!-- Redis starter -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>

Basic yml configuration

spring:
  # Redis settings
  redis:
    host: 127.0.0.1
    port: 6379

Test

@Resource
private RedisTemplate redisTemplate;

@Test
void contextLoads() {
    // pick the type with redisTemplate.opsForXxx; what follows the dot mirrors jedis
    // opsForValue  operates on strings (String type)
    // opsForList   operates on lists
    // opsForSet    operates on sets
    // opsForZSet   operates on zsets
    // opsForHash   operates on hashes
    // besides the type operations, common methods hang off the template directly, e.g. redisTemplate.watch(); redisTemplate.multi();
    redisTemplate.opsForValue().set("mykey","张三");
    System.out.println(redisTemplate.opsForValue().get("mykey"));
    // get the raw connection for things like flushAll() and flushDb()
    RedisConnection connection = redisTemplate.getConnectionFactory().getConnection();
    connection.flushAll();
    connection.flushDb();
}

Why values look garbled in redis-cli

// the default RedisTemplate serializes with the default JDK serializer
@Nullable
private RedisSerializer keySerializer = null;
@Nullable
private RedisSerializer valueSerializer = null;
@Nullable
private RedisSerializer hashKeySerializer = null;
@Nullable
private RedisSerializer hashValueSerializer = null;

 if (this.defaultSerializer == null) {
            this.defaultSerializer = new JdkSerializationRedisSerializer(this.classLoader != null ? this.classLoader : this.getClass().getClassLoader());
        }
// JDK serialization can mangle Chinese text, so we define our own configuration class
 if (this.enableDefaultSerializer) {
            if (this.keySerializer == null) {
                this.keySerializer = this.defaultSerializer;
                defaultUsed = true;
            }

            if (this.valueSerializer == null) {
                this.valueSerializer = this.defaultSerializer;
                defaultUsed = true;
            }

            if (this.hashKeySerializer == null) {
                this.hashKeySerializer = this.defaultSerializer;
                defaultUsed = true;
            }

            if (this.hashValueSerializer == null) {
                this.hashValueSerializer = this.defaultSerializer;
                defaultUsed = true;
            }
        }


Saving objects

// Caused by: java.lang.IllegalArgumentException: DefaultSerializer requires a Serializable payload but received an object of type [com.gf.pojo.User] -- passing an object directly fails because it was never serialized, so objects must be serialized before being stored
// define our own RedisTemplate
import com.fasterxml.jackson.annotation.JsonAutoDetect;
import com.fasterxml.jackson.annotation.PropertyAccessor;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.jsontype.impl.LaissezFaireSubTypeValidator;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.data.redis.serializer.Jackson2JsonRedisSerializer;
import org.springframework.data.redis.serializer.StringRedisSerializer;

import java.net.UnknownHostException;

/**
 * @author Administrator
 */
@Configuration
public class RedisConfig {

    /**
     * Our own RedisTemplate
     * @param redisConnectionFactory
     * @return
     * @throws UnknownHostException
     */
    @Bean
    public RedisTemplate<String, Object> redisTemplate(RedisConnectionFactory redisConnectionFactory) throws UnknownHostException {
        RedisTemplate<String, Object> template = new RedisTemplate<>();
        // connection factory
        template.setConnectionFactory(redisConnectionFactory);
        // configure the serializers
        // Jackson JSON serialization
        Jackson2JsonRedisSerializer<Object> Jackson2JsonRedisSerializer = new Jackson2JsonRedisSerializer<>(Object.class);
        // customize the ObjectMapper
        ObjectMapper om = new ObjectMapper();
        // serialize fields, getters and setters of any visibility, including private and public
        om.setVisibility(PropertyAccessor.ALL, JsonAutoDetect.Visibility.ANY);
        // record type information for non-final classes (final classes such as String or Integer would throw);
        // om.enableDefaultTyping(ObjectMapper.DefaultTyping.NON_FINAL) is deprecated -- activateDefaultTyping stores
        // an @class property so values can be deserialized back (see https://blog.csdn.net/zzhongcy/article/details/105813105)
        om.activateDefaultTyping(LaissezFaireSubTypeValidator.instance, ObjectMapper.DefaultTyping.NON_FINAL);
        Jackson2JsonRedisSerializer.setObjectMapper(om);
        // serialize values as JSON
        template.setValueSerializer(Jackson2JsonRedisSerializer);
        StringRedisSerializer stringRedisSerializer = new StringRedisSerializer();
        // serialize and deserialize keys with StringRedisSerializer
        template.setKeySerializer(stringRedisSerializer);
        // hash key and value serializers
        template.setHashKeySerializer(stringRedisSerializer);
        template.setHashValueSerializer(Jackson2JsonRedisSerializer);
        template.afterPropertiesSet();
        return template;
    }
}

Redis utility class

package com.gf.utils;

import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.stereotype.Component;
import org.springframework.util.CollectionUtils;

import javax.annotation.Resource;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.TimeUnit;

@Component
public final class RedisUtil {

    @Resource
    private RedisTemplate<String, Object> redisTemplate;

    // =============================common============================

    /**
     * Set a key's time-to-live
     *
     * @param key  key
     * @param time time (seconds)
     */
    public boolean expire(String key, long time) {
        try {
            if (time > 0) {
                redisTemplate.expire(key, time, TimeUnit.SECONDS);
            }
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * Get a key's remaining time-to-live
     *
     * @param key key, must not be null
     * @return time (seconds); 0 means the key never expires
     */
    public long getExpire(String key) {
        return redisTemplate.getExpire(key, TimeUnit.SECONDS);
    }


    /**
     * Check whether a key exists
     *
     * @param key key
     * @return true if it exists, false otherwise
     */
    public boolean hasKey(String key) {
        try {
            return redisTemplate.hasKey(key);
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }


    /**
     * Delete cached keys
     *
     * @param key one or more keys
     */
    @SuppressWarnings("unchecked")
    public void del(String... key) {
        if (key != null && key.length > 0) {
            if (key.length == 1) {
                redisTemplate.delete(key[0]);
            } else {
                redisTemplate.delete(CollectionUtils.arrayToList(key));
            }
        }
    }


    // ============================String=============================

    /**
     * Plain cache get
     *
     * @param key key
     * @return value
     */
    public Object get(String key) {
        return key == null ? null : redisTemplate.opsForValue().get(key);
    }

    /**
     * Plain cache put
     *
     * @param key   key
     * @param value value
     * @return true on success, false on failure
     */

    public boolean set(String key, Object value) {
        try {
            redisTemplate.opsForValue().set(key, value);
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }


    /**
     * Plain cache put with a time-to-live
     *
     * @param key   key
     * @param value value
     * @param time  time (seconds); must be greater than 0, otherwise the key never expires
     * @return true on success, false on failure
     */

    public boolean set(String key, Object value, long time) {
        try {
            if (time > 0) {
                redisTemplate.opsForValue().set(key, value, time, TimeUnit.SECONDS);
            } else {
                set(key, value);
            }
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }


    /**
     * Increment
     *
     * @param key   key
     * @param delta how much to add (greater than 0)
     */
    public long incr(String key, long delta) {
        if (delta < 0) {
            throw new RuntimeException("delta must be greater than 0");
        }
        return redisTemplate.opsForValue().increment(key, delta);
    }


    /**
     * Decrement
     *
     * @param key   key
     * @param delta how much to subtract (greater than 0)
     */
    public long decr(String key, long delta) {
        if (delta < 0) {
            throw new RuntimeException("delta must be greater than 0");
        }
        return redisTemplate.opsForValue().increment(key, -delta);
    }


    // ================================Map=================================

    /**
     * HashGet
     *
     * @param key  key, must not be null
     * @param item field, must not be null
     */
    public Object hget(String key, String item) {
        return redisTemplate.opsForHash().get(key, item);
    }

    /**
     * Get all field-value pairs for a hash key
     *
     * @param key key
     * @return the field-value pairs
     */
    public Map<Object, Object> hmget(String key) {
        return redisTemplate.opsForHash().entries(key);
    }

    /**
     * HashSet: store multiple field-value pairs
     *
     * @param key key
     * @param map the field-value pairs
     */
    public boolean hmset(String key, Map<String, Object> map) {
        try {
            redisTemplate.opsForHash().putAll(key, map);
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }


    /**
     * HashSet with a time-to-live
     *
     * @param key  key
     * @param map  the field-value pairs
     * @param time time (seconds)
     * @return true on success, false on failure
     */
    public boolean hmset(String key, Map<String, Object> map, long time) {
        try {
            redisTemplate.opsForHash().putAll(key, map);
            if (time > 0) {
                expire(key, time);
            }
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }


    /**
     * Put one field into a hash, creating the hash if it does not exist
     *
     * @param key   key
     * @param item  field
     * @param value value
     * @return true on success, false on failure
     */
    public boolean hset(String key, String item, Object value) {
        try {
            redisTemplate.opsForHash().put(key, item, value);
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * Put one field into a hash, creating the hash if it does not exist
     *
     * @param key   key
     * @param item  field
     * @param value value
     * @param time  time (seconds); note: if the hash already has a TTL, this replaces it
     * @return true on success, false on failure
     */
    public boolean hset(String key, String item, Object value, long time) {
        try {
            redisTemplate.opsForHash().put(key, item, value);
            if (time > 0) {
                expire(key, time);
            }
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }


    /**
     * Delete fields from a hash
     *
     * @param key  key, must not be null
     * @param item one or more fields, must not be null
     */
    public void hdel(String key, Object... item) {
        redisTemplate.opsForHash().delete(key, item);
    }


    /**
     * Check whether a hash has the given field
     *
     * @param key  key, must not be null
     * @param item field, must not be null
     * @return true if it exists, false otherwise
     */
    public boolean hHasKey(String key, String item) {
        return redisTemplate.opsForHash().hasKey(key, item);
    }


    /**
     * Hash increment; creates the field if missing and returns the new value
     *
     * @param key  key
     * @param item field
     * @param by   how much to add (greater than 0)
     */
    public double hincr(String key, String item, double by) {
        return redisTemplate.opsForHash().increment(key, item, by);
    }


    /**
     * Hash decrement
     *
     * @param key  key
     * @param item field
     * @param by   how much to subtract (greater than 0)
     */
    public double hdecr(String key, String item, double by) {
        return redisTemplate.opsForHash().increment(key, item, -by);
    }


    // ============================set=============================

    /**
     * Get all members of the set stored at key.
     *
     * @param key key
     */
    public Set<Object> sGet(String key) {
        try {
            return redisTemplate.opsForSet().members(key);
        } catch (Exception e) {
            e.printStackTrace();
            return null;
        }
    }


    /**
     * Check whether a set contains the given value.
     *
     * @param key   key
     * @param value value
     * @return true if present, false otherwise
     */
    public boolean sHasKey(String key, Object value) {
        try {
            return redisTemplate.opsForSet().isMember(key, value);
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }


    /**
     * Add values to a set.
     *
     * @param key    key
     * @param values one or more values
     * @return number of values added
     */
    public long sSet(String key, Object... values) {
        try {
            return redisTemplate.opsForSet().add(key, values);
        } catch (Exception e) {
            e.printStackTrace();
            return 0;
        }
    }


    /**
     * Add values to a set and set an expire time.
     *
     * @param key    key
     * @param time   expire time in seconds
     * @param values one or more values
     * @return number of values added
     */
    public long sSetAndTime(String key, long time, Object... values) {
        try {
            Long count = redisTemplate.opsForSet().add(key, values);
            if (time > 0)
                expire(key, time);
            return count;
        } catch (Exception e) {
            e.printStackTrace();
            return 0;
        }
    }


    /**
     * Get the size of a set.
     *
     * @param key key
     */
    public long sGetSetSize(String key) {
        try {
            return redisTemplate.opsForSet().size(key);
        } catch (Exception e) {
            e.printStackTrace();
            return 0;
        }
    }


    /**
     * Remove values from a set.
     *
     * @param key    key
     * @param values one or more values
     * @return number of values removed
     */

    public long setRemove(String key, Object... values) {
        try {
            Long count = redisTemplate.opsForSet().remove(key, values);
            return count;
        } catch (Exception e) {
            e.printStackTrace();
            return 0;
        }
    }

    // ===============================list=================================

    /**
     * Get a range of elements from a list.
     *
     * @param key   key
     * @param start start index
     * @param end   end index; 0 to -1 returns all elements
     */
    public List<Object> lGet(String key, long start, long end) {
        try {
            return redisTemplate.opsForList().range(key, start, end);
        } catch (Exception e) {
            e.printStackTrace();
            return null;
        }
    }


    /**
     * Get the length of a list.
     *
     * @param key key
     */
    public long lGetListSize(String key) {
        try {
            return redisTemplate.opsForList().size(key);
        } catch (Exception e) {
            e.printStackTrace();
            return 0;
        }
    }


    /**
     * Get an element of a list by index.
     *
     * @param key   key
     * @param index index: for index >= 0, 0 is the head, 1 the second element, and so on;
     *              for index < 0, -1 is the tail, -2 the second to last, and so on
     */
    public Object lGetIndex(String key, long index) {
        try {
            return redisTemplate.opsForList().index(key, index);
        } catch (Exception e) {
            e.printStackTrace();
            return null;
        }
    }


    /**
     * Push a value onto the tail of a list.
     *
     * @param key   key
     * @param value value
     */
    public boolean lSet(String key, Object value) {
        try {
            redisTemplate.opsForList().rightPush(key, value);
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }


    /**
     * Push a value onto the tail of a list and set an expire time.
     *
     * @param key   key
     * @param value value
     * @param time  expire time in seconds
     */
    public boolean lSet(String key, Object value, long time) {
        try {
            redisTemplate.opsForList().rightPush(key, value);
            if (time > 0)
                expire(key, time);
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }

    }


    /**
     * Push all elements of a List onto the tail of a cached list.
     *
     * @param key   key
     * @param value values
     * @return true on success, false on failure
     */
    public boolean lSet(String key, List<Object> value) {
        try {
            redisTemplate.opsForList().rightPushAll(key, value);
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }

    }


    /**
     * Push all elements of a List onto the tail of a cached list and set an expire time.
     *
     * @param key   key
     * @param value values
     * @param time  expire time in seconds
     * @return true on success, false on failure
     */
    public boolean lSet(String key, List<Object> value, long time) {
        try {
            redisTemplate.opsForList().rightPushAll(key, value);
            if (time > 0)
                expire(key, time);
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }


    /**
     * Update the element at the given index of a list.
     *
     * @param key   key
     * @param index index
     * @param value value
     * @return true on success, false on failure
     */

    public boolean lUpdateIndex(String key, long index, Object value) {
        try {
            redisTemplate.opsForList().set(key, index, value);
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }


    /**
     * Remove count occurrences of value from a list.
     *
     * @param key   key
     * @param count number of occurrences to remove
     * @param value value
     * @return number of elements removed
     */

    public long lRemove(String key, long count, Object value) {
        try {
            Long remove = redisTemplate.opsForList().remove(key, count, value);
            return remove;
        } catch (Exception e) {
            e.printStackTrace();
            return 0;
        }

    }

}
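One subtlety worth noting about the utility class above: `RedisTemplate` operations return boxed types (`Boolean`, `Long`) that can be `null` when the connection fails, and unboxing `null` throws a `NullPointerException`. The blanket `catch (Exception e)` in each method silently turns that into the failure value (`false` / `0`). The following self-contained sketch (the class and method names are hypothetical, not part of the util) reproduces that pattern without needing a Redis server:

```java
// Toy demonstration of the util's try/catch unboxing pattern:
// a possibly-null Boolean is unboxed inside try/catch, so a null
// result collapses to false instead of propagating an exception.
public class CatchPatternDemo {

    // Mirrors the shape of sHasKey(): return a boxed Boolean as a primitive.
    static boolean unboxOrFalse(Boolean raw) {
        try {
            return raw; // unboxing null throws NullPointerException
        } catch (Exception e) {
            return false; // any failure collapses to "not present"
        }
    }

    public static void main(String[] args) {
        System.out.println(unboxOrFalse(Boolean.TRUE)); // prints true
        System.out.println(unboxOrFalse(null));         // prints false
    }
}
```

The trade-off of this design is that callers cannot distinguish "the value is absent" from "Redis was unreachable"; both return `false`.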

redis.conf

Redis is started by pointing it at this configuration file.

Units

# Redis configuration file example.
#
# Note that in order to read the configuration file, Redis must be
# started with the file path as first argument:
#
# ./redis-server /path/to/redis.conf

# Note on units: when memory size is needed, it is possible to specify
# it in the usual form of 1k 5GB 4M and so forth:
# memory sizes can be given with these unit suffixes
# 1k => 1000 bytes
# 1kb => 1024 bytes
# 1m => 1000000 bytes
# 1mb => 1024*1024 bytes
# 1g => 1000000000 bytes
# 1gb => 1024*1024*1024 bytes
# the unit suffixes are case-insensitive
# units are case insensitive so 1GB 1Gb 1gB are all the same.

################################## INCLUDES ###################################

Includes (include)

# Include one or more other config files here.  This is useful if you
# have a standard template that goes to all Redis servers but also need
# to customize a few per-server settings.  Include files can include
# other files, so use this wisely.
#
# Notice option "include" won't be rewritten by command "CONFIG REWRITE"
# from admin or Redis Sentinel. Since Redis always uses the last processed
# line as value of a configuration directive, you'd better put includes
# at the beginning of this file to avoid overwriting config change at runtime.
#
# If instead you are interested in using includes to override configuration
# options, it is better to use include as the last line.
# multiple configuration files can be included
# include /path/to/local.conf
# include /path/to/other.conf
                                

Network (NETWORK)

################################## NETWORK #####################################

# By default, if no "bind" configuration directive is specified, Redis listens
# for connections from all the network interfaces available on the server.
# It is possible to listen to just one or multiple selected interfaces using
# the "bind" configuration directive, followed by one or more IP addresses.
#
# Examples:
#
# bind 192.168.1.100 10.0.0.1
# bind 127.0.0.1 ::1
#
# ~~~ WARNING ~~~ If the computer running Redis is directly exposed to the
# internet, binding to all the interfaces is dangerous and will expose the
# instance to everybody on the internet. So by default we uncomment the
# following bind directive, that will force Redis to listen only into
# the IPv4 loopback interface address (this means Redis will be able to
# accept connections only from clients running into the same computer it
# is running).
#
# IF YOU ARE SURE YOU WANT YOUR INSTANCE TO LISTEN TO ALL THE INTERFACES
# JUST COMMENT THE FOLLOWING LINE.
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#when left uncommented, only local connections are accepted; comment it out to listen on all interfaces
#bind 127.0.0.1

# Protected mode is a layer of security protection, in order to avoid that
# Redis instances left open on the internet are accessed and exploited.
#
# When protected mode is on and if:
#
# 1) The server is not binding explicitly to a set of addresses using the
#    "bind" directive.
# 2) No password is configured.
#
# The server only accepts connections from clients connecting from the
# IPv4 and IPv6 loopback addresses 127.0.0.1 and ::1, and from Unix domain
# sockets.
#
# By default protected mode is enabled. You should disable it only if
# you are sure you want clients from other hosts to connect to Redis
# even if no authentication is configured, nor a specific set of interfaces
# are explicitly listed using the "bind" directive.
#protected mode is enabled by default; disabled here to allow remote connections
protected-mode no

# Accept connections on the specified port, default is 6379 (IANA #815344).
# If port 0 is specified Redis will not listen on a TCP socket.
#the default port is 6379 and can be changed
port 6379

# TCP listen() backlog.
#
# In high requests-per-second environments you need an high backlog in order
# to avoid slow clients connections issues. Note that the Linux kernel
# will silently truncate it to the value of /proc/sys/net/core/somaxconn so
# make sure to raise both the value of somaxconn and tcp_max_syn_backlog
# in order to get the desired effect.
tcp-backlog 511

# Unix socket.
#
# Specify the path for the Unix socket that will be used to listen for
# incoming connections. There is no default, so Redis will not listen
# on a unix socket when not specified.
#
# unixsocket /tmp/redis.sock
# unixsocketperm 700

# Close the connection after a client is idle for N seconds (0 to disable)
timeout 0

# TCP keepalive.
#
# If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence
# of communication. This is useful for two reasons:
#
# 1) Detect dead peers.
# 2) Take the connection alive from the point of view of network
#    equipment in the middle.
#
# On Linux, the specified value (in seconds) is the period used to send ACKs.
# Note that to close the connection the double of the time is needed.
# On other kernels the period depends on the kernel configuration.
#
# A reasonable value for this option is 300 seconds, which is the new
# Redis default starting with Redis 3.2.1.
tcp-keepalive 300
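Putting the section above together, a minimal sketch of the network settings for accepting remote connections might look like this (whether to open the instance to other hosts is a deployment decision; these values are illustrative, not a recommendation):

```conf
# listen on all interfaces by leaving the default bind line commented out
# bind 127.0.0.1
# allow remote clients even without a bind list (pair this with requirepass!)
protected-mode no
port 6379
tcp-keepalive 300
```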

General settings (GENERAL)

################################# GENERAL #####################################

# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
#run as a daemon; the default is no, set to yes here
daemonize yes

# If you run Redis from upstart or systemd, Redis can interact with your
# supervision tree. Options:
#   supervised no      - no supervision interaction
#   supervised upstart - signal upstart by putting Redis into SIGSTOP mode
#   supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET
#   supervised auto    - detect upstart or systemd method based on
#                        UPSTART_JOB or NOTIFY_SOCKET environment variables
# Note: these supervision methods only signal "process is ready."
#       They do not enable continuous liveness pings back to your supervisor.
#how the daemon is supervised; usually left unchanged
supervised no

# If a pid file is specified, Redis writes it where specified at startup
# and removes it at exit.
#
# When the server runs non daemonized, no pid file is created if none is
# specified in the configuration. When the server is daemonized, the pid file
# is used even if not specified, defaulting to "/var/run/redis.pid".
#
# Creating a pid file is best effort: if Redis is not able to create it
# nothing bad happens, the server will start and run normally.
#when running in the background, a pid file should be specified
pidfile /www/server/redis/redis.pid

# Specify the server verbosity level.
# This can be one of:
# debug (a lot of information, useful for development/testing)
# verbose (many rarely useful info, but not a mess like the debug level)
# notice (moderately verbose, what you want in production probably)
# warning (only very important / critical messages are logged)
#log verbosity level
loglevel notice

# Specify the log file name. Also the empty string can be used to force
# Redis to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
#log file path and name; empty by default, set here
logfile "/www/server/redis/redis.log"

# To enable logging to the system logger, just set 'syslog-enabled' to yes,
# and optionally update the other syslog parameters to suit your needs.
# syslog-enabled no

# Specify the syslog identity.
# syslog-ident redis

# Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7.
# syslog-facility local0

# Set the number of databases. The default database is DB 0, you can select
# a different one on a per-connection basis using SELECT <dbid> where
# dbid is a number between 0 and 'databases'-1
#number of databases; the default is 16
databases 16

# By default Redis shows an ASCII art logo only when started to log to the
# standard output and if the standard output is a TTY. Basically this means
# that normally a logo is displayed only in interactive sessions.
#
# However it is possible to force the pre-4.0 behavior and always show a
# ASCII art logo in startup logs by setting the following option to yes.
#whether to show the ASCII art logo at startup; enabled by default
always-show-logo yes

Snapshotting (SNAPSHOTTING), used by persistence: if the configured number of write operations occurs within the configured time window, data is persisted to an .rdb/.aof file

################################ SNAPSHOTTING  ################################
# Redis is an in-memory database; without persistence, data is lost on power-off
# Save the DB on disk:
#
#   save <seconds> <changes>
#
#   Will save the DB if both the given number of seconds and the given
#   number of write operations against the DB occurred.
#
#   In the example below the behaviour will be to save:
#   after 900 sec (15 min) if at least 1 key changed
#   after 300 sec (5 min) if at least 10 keys changed
#   after 60 sec if at least 10000 keys changed
#
#   Note: you can disable saving completely by commenting out all "save" lines.
#
#   It is also possible to remove all the previously configured save
#   points by adding a save directive with a single empty string argument
#   like in the following example:
#
#   save ""
#persistence rules; these can be customized
#persist if at least 1 key changed within 900 s
save 900 1
#persist if at least 10 keys changed within 300 s
save 300 10
#persist if at least 10000 keys changed within 60 s
save 60 10000

# By default Redis will stop accepting writes if RDB snapshots are enabled
# (at least one save point) and the latest background save failed.
# This will make the user aware (in a hard way) that data is not persisting
# on disk properly, otherwise chances are that no one will notice and some
# disaster will happen.
#
# If the background saving process will start working again Redis will
# automatically allow writes again.
#
# However if you have setup your proper monitoring of the Redis server
# and persistence, you may want to disable this feature so that Redis will
# continue to work as usual even if there are problems with disk,
# permissions, and so forth.
#whether to stop accepting writes when a background save fails; enabled by default
stop-writes-on-bgsave-error yes

# Compress string objects using LZF when dump .rdb databases?
# For default that's set to 'yes' as it's almost always a win.
# If you want to save some CPU in the saving child set it to 'no' but
# the dataset will likely be bigger if you have compressible values or keys.
#compress the rdb (persistence) file? enabled by default; uses some CPU
rdbcompression yes

# Since version 5 of RDB a CRC64 checksum is placed at the end of the file.
# This makes the format more resistant to corruption but there is a performance
# hit to pay (around 10%) when saving and loading RDB files, so you can disable it
# for maximum performances.
#
# RDB files created with checksum disabled have a checksum of zero that will
# tell the loading code to skip the check.
#checksum the rdb file when saving; enabled by default
rdbchecksum yes

# The filename where to dump the DB
#name of the rdb dump file (the directory is set by 'dir' below)
dbfilename dump.rdb

# The working directory.
#
# The DB will be written inside this directory, with the filename specified
# above using the 'dbfilename' configuration directive.
#
# The Append Only File will also be created inside this directory.
#
# Note that you must specify a directory here, not a file name.
dir /www/server/redis/
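To illustrate the save-point syntax above, a hedged sketch: a single custom save point, and the way to disable RDB snapshots entirely (the 300/100 values are an arbitrary example, not a recommendation):

```conf
# custom save point: persist if at least 100 keys changed within 300 s
save 300 100
# or disable RDB snapshots completely by clearing all save points:
# save ""
```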

Replication (REPLICATION): master-replica replication settings

################################# REPLICATION #################################

# Master-Replica replication. Use replicaof to make a Redis instance a copy of
# another Redis server. A few things to understand ASAP about Redis replication.
#
#   +------------------+      +---------------+
#   |      Master      | ---> |    Replica    |
#   | (receive writes) |      |  (exact copy) |
#   +------------------+      +---------------+
#
# 1) Redis replication is asynchronous, but you can configure a master to
#    stop accepting writes if it appears to be not connected with at least
#    a given number of replicas.
# 2) Redis replicas are able to perform a partial resynchronization with the
#    master if the replication link is lost for a relatively small amount of
#    time. You may want to configure the replication backlog size (see the next
#    sections of this file) with a sensible value depending on your needs.
# 3) Replication is automatic and does not need user intervention. After a
#    network partition replicas automatically try to reconnect to masters
#    and resynchronize with them.
# configure the master to replicate from: master ip and master port
# replicaof <masterip> <masterport>

# If the master is password protected (using the "requirepass" configuration
# directive below) it is possible to tell the replica to authenticate before
# starting the replication synchronization process, otherwise the master will
# refuse the replica request.
# the master's password
# masterauth <master-password>

# When a replica loses its connection with the master, or when the replication
# is still in progress, the replica can act in two different ways:
#
# 1) if replica-serve-stale-data is set to 'yes' (the default) the replica will
#    still reply to client requests, possibly with out of date data, or the
#    data set may just be empty if this is the first synchronization.
#
# 2) if replica-serve-stale-data is set to 'no' the replica will reply with
#    an error "SYNC with master in progress" to all the kind of commands
#    but to INFO, replicaOF, AUTH, PING, SHUTDOWN, REPLCONF, ROLE, CONFIG,
#    SUBSCRIBE, UNSUBSCRIBE, PSUBSCRIBE, PUNSUBSCRIBE, PUBLISH, PUBSUB,
#    COMMAND, POST, HOST: and LATENCY.
#
replica-serve-stale-data yes

# You can configure a replica instance to accept writes or not. Writing against
# a replica instance may be useful to store some ephemeral data (because data
# written on a replica will be easily deleted after resync with the master) but
# may also cause problems if clients are writing to it because of a
# misconfiguration.
#
# Since Redis 2.6 by default replicas are read-only.
#
# Note: read only replicas are not designed to be exposed to untrusted clients
# on the internet. It's just a protection layer against misuse of the instance.
# Still a read only replica exports by default all the administrative commands
# such as CONFIG, DEBUG, and so forth. To a limited extent you can improve
# security of read only replicas using 'rename-command' to shadow all the
# administrative / dangerous commands.
replica-read-only yes

# Replication SYNC strategy: disk or socket.
#
# -------------------------------------------------------
# WARNING: DISKLESS REPLICATION IS EXPERIMENTAL CURRENTLY
# -------------------------------------------------------
#
# New replicas and reconnecting replicas that are not able to continue the replication
# process just receiving differences, need to do what is called a "full
# synchronization". An RDB file is transmitted from the master to the replicas.
# The transmission can happen in two different ways:
#
# 1) Disk-backed: The Redis master creates a new process that writes the RDB
#                 file on disk. Later the file is transferred by the parent
#                 process to the replicas incrementally.
# 2) Diskless: The Redis master creates a new process that directly writes the
#              RDB file to replica sockets, without touching the disk at all.
#
# With disk-backed replication, while the RDB file is generated, more replicas
# can be queued and served with the RDB file as soon as the current child producing
# the RDB file finishes its work. With diskless replication instead once
# the transfer starts, new replicas arriving will be queued and a new transfer
# will start when the current one terminates.
#
# When diskless replication is used, the master waits a configurable amount of
# time (in seconds) before starting the transfer in the hope that multiple replicas
# will arrive and the transfer can be parallelized.
#
# With slow disks and fast (large bandwidth) networks, diskless replication
# works better.
repl-diskless-sync no

# When diskless replication is enabled, it is possible to configure the delay
# the server waits in order to spawn the child that transfers the RDB via socket
# to the replicas.
#
# This is important since once the transfer starts, it is not possible to serve
# new replicas arriving, that will be queued for the next RDB transfer, so the server
# waits a delay in order to let more replicas arrive.
#
# The delay is specified in seconds, and by default is 5 seconds. To disable
# it entirely just set it to 0 seconds and the transfer will start ASAP.
repl-diskless-sync-delay 5

# Replicas send PINGs to server in a predefined interval. It's possible to change
# this interval with the repl_ping_replica_period option. The default value is 10
# seconds.
#
# repl-ping-replica-period 10

# The following option sets the replication timeout for:
#
# 1) Bulk transfer I/O during SYNC, from the point of view of replica.
# 2) Master timeout from the point of view of replicas (data, pings).
# 3) Replica timeout from the point of view of masters (REPLCONF ACK pings).
#
# It is important to make sure that this value is greater than the value
# specified for repl-ping-replica-period otherwise a timeout will be detected
# every time there is low traffic between the master and the replica.
#
# repl-timeout 60

# Disable TCP_NODELAY on the replica socket after SYNC?
#
# If you select "yes" Redis will use a smaller number of TCP packets and
# less bandwidth to send data to replicas. But this can add a delay for
# the data to appear on the replica side, up to 40 milliseconds with
# Linux kernels using a default configuration.
#
# If you select "no" the delay for data to appear on the replica side will
# be reduced but more bandwidth will be used for replication.
#
# By default we optimize for low latency, but in very high traffic conditions
# or when the master and replicas are many hops away, turning this to "yes" may
# be a good idea.
repl-disable-tcp-nodelay no

# Set the replication backlog size. The backlog is a buffer that accumulates
# replica data when replicas are disconnected for some time, so that when a replica
# wants to reconnect again, often a full resync is not needed, but a partial
# resync is enough, just passing the portion of data the replica missed while
# disconnected.
#
# The bigger the replication backlog, the longer the time the replica can be
# disconnected and later be able to perform a partial resynchronization.
#
# The backlog is only allocated once there is at least a replica connected.
#
# repl-backlog-size 1mb

# After a master has no longer connected replicas for some time, the backlog
# will be freed. The following option configures the amount of seconds that
# need to elapse, starting from the time the last replica disconnected, for
# the backlog buffer to be freed.
#
# Note that replicas never free the backlog for timeout, since they may be
# promoted to masters later, and should be able to correctly "partially
# resynchronize" with the replicas: hence they should always accumulate backlog.
#
# A value of 0 means to never release the backlog.
#
# repl-backlog-ttl 3600

# The replica priority is an integer number published by Redis in the INFO output.
# It is used by Redis Sentinel in order to select a replica to promote into a
# master if the master is no longer working correctly.
#
# A replica with a low priority number is considered better for promotion, so
# for instance if there are three replicas with priority 10, 100, 25 Sentinel will
# pick the one with priority 10, that is the lowest.
#
# However a special priority of 0 marks the replica as not able to perform the
# role of master, so a replica with priority of 0 will never be selected by
# Redis Sentinel for promotion.
#
# By default the priority is 100.
replica-priority 100

# It is possible for a master to stop accepting writes if there are less than
# N replicas connected, having a lag less or equal than M seconds.
#
# The N replicas need to be in "online" state.
#
# The lag in seconds, that must be <= the specified value, is calculated from
# the last ping received from the replica, that is usually sent every second.
#
# This option does not GUARANTEE that N replicas will accept the write, but
# will limit the window of exposure for lost writes in case not enough replicas
# are available, to the specified number of seconds.
#
# For example to require at least 3 replicas with a lag <= 10 seconds use:
#
# min-replicas-to-write 3
# min-replicas-max-lag 10
#
# Setting one or the other to 0 disables the feature.
#
# By default min-replicas-to-write is set to 0 (feature disabled) and
# min-replicas-max-lag is set to 10.

# A Redis master is able to list the address and port of the attached
# replicas in different ways. For example the "INFO replication" section
# offers this information, which is used, among other tools, by
# Redis Sentinel in order to discover replica instances.
# Another place where this info is available is in the output of the
# "ROLE" command of a master.
#
# The listed IP and address normally reported by a replica is obtained
# in the following way:
#
#   IP: The address is auto detected by checking the peer address
#   of the socket used by the replica to connect with the master.
#
#   Port: The port is communicated by the replica during the replication
#   handshake, and is normally the port that the replica is using to
#   listen for connections.
#
# However when port forwarding or Network Address Translation (NAT) is
# used, the replica may be actually reachable via different IP and port
# pairs. The following two options can be used by a replica in order to
# report to its master a specific set of IP and port, so that both INFO
# and ROLE will report those values.
#
# There is no need to use both the options if you need to override just
# the port or the IP address.
#
# replica-announce-ip 5.5.5.5
# replica-announce-port 1234
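Based on the directives above, a minimal replica-side sketch might look like this (the master address 192.168.1.10 and the password are hypothetical placeholders):

```conf
# follow the master at this address
replicaof 192.168.1.10 6379
# authenticate against the master if it has requirepass set
masterauth mymasterpassword
# keep the replica read-only (the default since Redis 2.6)
replica-read-only yes
```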

Security (SECURITY)

################################## SECURITY ###################################

# Require clients to issue AUTH <PASSWORD> before processing any other
# commands.  This might be useful in environments in which you do not trust
# others with access to the host running redis-server.
#
# This should stay commented out for backward compatibility and because most
# people do not need auth (e.g. they run their own servers).
#
# Warning: since Redis is pretty fast an outside user can try up to
# 150k passwords per second against a good box. This means that you should
# use a very strong password otherwise it will be very easy to break.
# set a password; there is no password by default
# requirepass foobared

# Command renaming.
#
# It is possible to change the name of dangerous commands in a shared
# environment. For instance the CONFIG command may be renamed into something
# hard to guess so that it will still be available for internal-use tools
# but not available for general clients.
#
# Example:
#
# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
#
# It is also possible to completely kill a command by renaming it into
# an empty string:
#
# rename-command CONFIG ""
#
# Please note that changing the name of commands that are logged into the
# AOF file or transmitted to replicas may cause problems.
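Besides editing the file, the password can also be set and inspected at runtime with CONFIG, as in this illustrative redis-cli session (the password value is a placeholder; a runtime change is lost on restart unless also written to redis.conf):

```
127.0.0.1:6379> config set requirepass "123456"
OK
127.0.0.1:6379> auth 123456
OK
127.0.0.1:6379> config get requirepass
1) "requirepass"
2) "123456"
```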
 

Limits (CLIENTS)

################################### CLIENTS ####################################

# Set the max number of connected clients at the same time. By default
# this limit is set to 10000 clients, however if the Redis server is not
# able to configure the process file limit to allow for the specified limit
# the max number of allowed clients is set to the current file limit
# minus 32 (as Redis reserves a few file descriptors for internal uses).
#
# Once the limit is reached Redis will close all the new connections sending
# an error 'max number of clients reached'.
#maximum number of clients that can connect; the default is 10000
# maxclients 10000
############################## MEMORY MANAGEMENT ################################

# Set a memory usage limit to the specified amount of bytes.
# When the memory limit is reached Redis will try to remove keys
# according to the eviction policy selected (see maxmemory-policy).
#
# If Redis can't remove keys according to the policy, or if the policy is
# set to 'noeviction', Redis will start to reply with errors to commands
# that would use more memory, like SET, LPUSH, and so on, and will continue
# to reply to read-only commands like GET.
#
# This option is usually useful when using Redis as an LRU or LFU cache, or to
# set a hard memory limit for an instance (using the 'noeviction' policy).
#
# WARNING: If you have replicas attached to an instance with maxmemory on,
# the size of the output buffers needed to feed the replicas are subtracted
# from the used memory count, so that network problems / resyncs will
# not trigger a loop where keys are evicted, and in turn the output
# buffer of replicas is full with DELs of keys evicted triggering the deletion
# of more keys, and so forth until the database is completely emptied.
#
# In short... if you have replicas attached it is suggested that you set a lower
# limit for maxmemory so that there is some free RAM on the system for replica
# output buffers (but this is not needed if the policy is 'noeviction').
# maximum amount of memory Redis may use
# maxmemory <bytes>

# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
# is reached. You can select among five behaviors:
#
# volatile-lru -> Evict using approximated LRU among the keys with an expire set.
# allkeys-lru -> Evict any key using approximated LRU.
# volatile-lfu -> Evict using approximated LFU among the keys with an expire set.
# allkeys-lfu -> Evict any key using approximated LFU.
# volatile-random -> Remove a random key among the ones with an expire set.
# allkeys-random -> Remove a random key, any key.
# volatile-ttl -> Remove the key with the nearest expire time (minor TTL)
# noeviction -> Don't evict anything, just return an error on write operations.
#
# LRU means Least Recently Used
# LFU means Least Frequently Used
#
# Both LRU, LFU and volatile-ttl are implemented using approximated
# randomized algorithms.
#
# Note: with any of the above policies, Redis will return an error on write
#       operations, when there are no suitable keys for eviction.
#
#       At the date of writing these commands are: set setnx setex append
#       incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
#       sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
#       zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
#       getset mset msetnx exec sort
#
# The default is:
# Policy once the memory limit is reached: evict keys or return errors
# 1. volatile-lru: LRU eviction, but only among keys with an expire set
# 2. allkeys-lru: LRU eviction among all keys
# 3. volatile-random: randomly remove keys with an expire set
# 4. allkeys-random: randomly remove any key
# 5. volatile-ttl: remove the keys closest to expiring
# 6. noeviction: never evict; return an error on writes (the default)
# maxmemory-policy noeviction
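To make the LRU idea concrete, here is a minimal in-process sketch in Python (an exact LRU; real Redis uses the approximated, sampling-based LRU described above, and the class name here is purely illustrative):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal exact-LRU store; evicts the least recently used key at capacity."""
    def __init__(self, maxkeys):
        self.maxkeys = maxkeys
        self.data = OrderedDict()

    def set(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)     # touched: now most recently used
        self.data[key] = value
        if len(self.data) > self.maxkeys:
            self.data.popitem(last=False)  # evict the least recently used key

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)         # reads also refresh recency
        return self.data[key]

cache = LRUCache(2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")     # "a" becomes the most recently used key
cache.set("c", 3)  # over capacity: "b" (least recently used) is evicted
```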

# LRU, LFU and minimal TTL algorithms are not precise algorithms but approximated
# algorithms (in order to save memory), so you can tune it for speed or
# accuracy. For default Redis will check five keys and pick the one that was
# used less recently, you can change the sample size using the following
# configuration directive.
#
# The default of 5 produces good enough results. 10 Approximates very closely
# true LRU but costs more CPU. 3 is faster but not very accurate.
#
# maxmemory-samples 5

# Starting from Redis 5, by default a replica will ignore its maxmemory setting
# (unless it is promoted to master after a failover or manually). It means
# that the eviction of keys will be just handled by the master, sending the
# DEL commands to the replica as keys evict in the master side.
#
# This behavior ensures that masters and replicas stay consistent, and is usually
# what you want, however if your replica is writable, or you want the replica to have
# a different memory setting, and you are sure all the writes performed to the
# replica are idempotent, then you may change this default (but be sure to understand
# what you are doing).
#
# Note that since the replica by default does not evict, it may end using more
# memory than the one set via maxmemory (there are certain buffers that may
# be larger on the replica, or data structures may sometimes take more memory and so
# forth). So make sure you monitor your replicas and make sure they have enough
# memory to never hit a real out-of-memory condition before the master hits
# the configured maxmemory setting.
#
# replica-ignore-maxmemory yes

APPEND ONLY MODE (AOF)

############################## APPEND ONLY MODE ###############################

# By default Redis asynchronously dumps the dataset on disk. This mode is
# good enough in many applications, but an issue with the Redis process or
# a power outage may result into a few minutes of writes lost (depending on
# the configured save points).
#
# The Append Only File is an alternative persistence mode that provides
# much better durability. For instance using the default data fsync policy
# (see later in the config file) Redis can lose just one second of writes in a
# dramatic event like a server power outage, or a single write if something
# wrong with the Redis process itself happens, but the operating system is
# still running correctly.
#
# AOF and RDB persistence can be enabled at the same time without problems.
# If the AOF is enabled on startup Redis will load the AOF, that is the file
# with the better durability guarantees.
#
# Please check http://redis.io/topics/persistence for more information.
# AOF is disabled by default; Redis persists with RDB out of the box, which is sufficient in most cases
appendonly no

# The name of the append only file (default: "appendonly.aof")
# Name of the append-only file
appendfilename "appendonly.aof"

# The fsync() call tells the Operating System to actually write data on disk
# instead of waiting for more data in the output buffer. Some OS will really flush
# data on disk, some other OS will just try to do it ASAP.
#
# Redis supports three different modes:
#
# no: don't fsync, just let the OS flush the data when it wants. Faster.
# always: fsync after every write to the append only log. Slow, Safest.
# everysec: fsync only one time every second. Compromise.
#
# The default is "everysec", as that's usually the right compromise between
# speed and data safety. It's up to you to understand if you can relax this to
# "no" that will let the operating system flush the output buffer when
# it wants, for better performances (but if you can live with the idea of
# some data loss consider the default persistence mode that's snapshotting),
# or on the contrary, use "always" that's very slow but a bit safer than
# everysec.
#
# More details please check the following article:
# http://antirez.com/post/redis-persistence-demystified.html
#
# If unsure, use "everysec".

# always: fsync on every write; slow but safest
# appendfsync always
# everysec: fsync once per second; may lose up to one second of writes (the default)
appendfsync everysec
# no: never fsync; let the OS flush when it wants
# appendfsync no

Redis Persistence

Redis is an in-memory database. If the in-memory dataset is never saved to disk, it is lost as soon as the server process exits, so Redis provides persistence.

RDB(Redis DataBase)

What is RDB?

At a specified interval, Redis writes a point-in-time snapshot of the in-memory dataset to disk. On recovery, the snapshot file is read straight back into memory.

Redis forks a separate child process to do the persistence. The child first writes the data to a temporary file; once the snapshot is complete, the temporary file replaces the previous snapshot file. The main process performs no disk I/O during the whole procedure, which keeps performance very high. If you need to recover a large dataset and are not very sensitive to losing the most recent writes, RDB is more efficient than AOF. RDB's drawback is that data written after the last snapshot can be lost (in production we can back up the RDB files).

Trigger mechanisms

1. A configured save rule is satisfied

2. The flushall command is executed

3. Redis is shut down
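For reference, the save rules mentioned in point 1 are the `save` lines in redis.conf; the values below are the classic defaults (they also appear in the benchmark output earlier):

```
# Take an RDB snapshot if at least N keys changed within the time window:
save 900 1       # 1 change within 900 seconds
save 300 10      # 10 changes within 300 seconds
save 60 10000    # 10000 changes within 60 seconds
```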

How to restore from an RDB file

1. Just place the RDB file in the directory Redis starts from; Redis checks for it automatically and restores its data

2. To see where that directory is, run config get dir

Advantages:

1. Well suited to large-scale data recovery

2. Fine when the integrity requirements on recent data are not strict

Disadvantages:

1. Snapshots only happen at configured intervals

2. If Redis crashes unexpectedly, data modified since the last snapshot is lost

3. Forking the child process takes extra memory

AOF (Append Only File)

What is AOF?

Record every write command we run; on recovery, replay the whole file once.

AOF logs, in the form of a journal, every write operation Redis executed (reads are not logged). The file is append-only and never rewritten in place. On startup Redis reads this file and replays the commands to rebuild the data; in other words, restarting Redis re-executes the logged writes once to complete recovery. AOF stores the appendonly.aof file.

AOF is off by default; enable it manually by changing appendonly to yes, which takes effect after a restart.

If the AOF file is corrupted, Redis will not start. Repair the file with redis-check-aof --fix:

# ./redis-check-aof --fix <path>/appendonly.aof
./redis-check-aof --fix ../appendonly.aof
0x              6e: Expected prefix '*', got: 'a'
AOF analyzed: size=116, ok_up_to=110, diff=6
This will shrink the AOF from 116 bytes, with 6 bytes, to 110 bytes
# enter Y
Continue? [y/N]: Y
# repair succeeded
Successfully truncated AOF

Sync modes

# always: fsync on every write; slow but safest
# appendfsync always
# everysec: the default; fsync once per second, may lose up to one second of writes
appendfsync everysec
# no: never fsync
# appendfsync no

Rewrite rules

no-appendfsync-on-rewrite no

# Automatic rewrite of the append only file.
# Redis is able to automatically rewrite the log file implicitly calling
# BGREWRITEAOF when the AOF log size grows by the specified percentage.
#
# This is how it works: Redis remembers the size of the AOF file after the
# latest rewrite (if no rewrite has happened since the restart, the size of
# the AOF at startup is used).
#
# This base size is compared to the current size. If the current size is
# bigger than the specified percentage, the rewrite is triggered. Also
# you need to specify a minimal size for the AOF file to be rewritten, this
# is useful to avoid rewriting the AOF file even if the percentage increase
# is reached but it is still pretty small.
#
# Specify a percentage of zero in order to disable the automatic AOF
# rewrite feature.
# If the AOF exceeds 64mb and has grown 100% since the last rewrite, fork a child process to rewrite it
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb

Advantages:

With the default one-second fsync, at most one second of writes is lost, so durability is better

Disadvantages:

The AOF file is far larger than the corresponding RDB file, and repairing/recovering from it is slower than RDB

AOF also runs slower than RDB

Redis Publish/Subscribe

Redis publish/subscribe (pub/sub) is a messaging pattern: publishers (pub) send messages and subscribers (sub) receive them.

A Redis client can subscribe to any number of channels.

Demo

# Subscriber (PSUBSCRIBE subscribes to multiple channels by pattern)
# subscribe gf: subscribe to channel gf and receive its messages in real time
SUBSCRIBE gf
Reading messages... (press Ctrl-C to quit)
# messages received:
1) "message" # message type
2) "gf" # which channel it came from
3) "zheshiyighahahah" # payload
1) "message"
2) "gf"
3) "heihehiehi"
# Publisher
# send a message to channel gf
publish gf heihehiehi
(integer) 1
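The interaction above can be sketched as a toy in-process broker in Python (no real Redis server involved; names are illustrative, but the message triple mirrors what redis-cli prints):

```python
from collections import defaultdict

class PubSub:
    """Toy in-process publish/subscribe broker mimicking SUBSCRIBE/PUBLISH."""
    def __init__(self):
        self.channels = defaultdict(list)  # channel -> list of subscriber callbacks

    def subscribe(self, channel, callback):
        self.channels[channel].append(callback)

    def publish(self, channel, message):
        for callback in self.channels[channel]:
            callback(("message", channel, message))  # same triple redis-cli shows
        return len(self.channels[channel])           # PUBLISH returns receiver count

received = []
broker = PubSub()
broker.subscribe("gf", received.append)
count = broker.publish("gf", "heihehiehi")  # one subscriber receives the message
```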

Redis Master-Slave Replication

Concept

Master-slave replication copies the data of one Redis server to other Redis servers. The former is called the master node (master/leader), the latter the slave nodes. Replication is one-way, from master to slave only; the master handles writes, the slaves handle reads.

By default every Redis server is a master. A master can have any number of slaves (or none), but each slave has exactly one master.

What replication gives you:

1. Data redundancy: replication is a hot backup of your data, a second layer of redundancy on top of persistence.

2. Failure recovery: when the master fails, a slave can keep serving requests.

3. Load balancing: combined with read/write splitting, the master takes writes and the slaves take reads, spreading the load.

4. Foundation for high availability: replication is the basis on which Sentinel mode and Cluster are built.

Environment setup

Only the slaves need configuring; the master needs no changes.

# Show replication info for the current instance
info replication
# Replication
role:master #this node is a master
connected_slaves:0 #no slaves attached
master_replid:eeea2eb94554d44c5ae4f8de56ded697be859725
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:0
second_repl_offset:-1
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
127.0.0.1:6379>  

Copy redis.conf three times and change, in each copy:

1: the port

2: the log file name

3: the pidfile name

4: the RDB file name

# After the edits, start all three and confirm three redis-server processes are running
[root@izbp1ghrkgusjy2buhhq4qz redis]# ps -ef|grep redis
root     31658     1  0 15:55 ?        00:00:00 ./redis-server *:6379
root     31666     1  0 15:55 ?        00:00:00 ./redis-server *:6380
root     31672     1  0 15:56 ?        00:00:00 ./redis-server *:6381
root     31678 31600  0 15:56 pts/3    00:00:00 grep --color=auto redis

Building one master with two slaves

Only the slaves need configuring; the master needs no changes.

# On each slave, point it at the master's IP and port
SLAVEOF 127.0.0.1 6379
# Then check the slave's info
info replication
# Replication
role:slave #this node is a slave
master_host:127.0.0.1 #master IP
master_port:6379 #master port
master_link_status:up
master_last_io_seconds_ago:9
master_sync_in_progress:0
slave_repl_offset:14
slave_priority:100
slave_read_only:1
connected_slaves:0
master_replid:12077cd62be896f0e581cfae7daefd97f93355df
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:14
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:14

A real master-slave setup should be configured in the config files so it is permanent; configuring it with the SLAVEOF command only lasts until restart.
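For such a permanent setup, the equivalent of the SLAVEOF command is a directive in each slave's redis.conf (values here are illustrative; newer Redis versions also accept `replicaof` as the directive name):

```
# In the slave's redis.conf; survives restarts, unlike the SLAVEOF command
slaveof 127.0.0.1 6379
# If the master requires a password:
# masterauth <master-password>
```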

Everything written on the master is replicated to and kept by the slaves.

With command-based configuration, a slave that disconnects reverts to being a master when it comes back; but as soon as it is made a slave again, it immediately picks up the master's data.

Full resynchronization: the slave receives the master's whole database file, saves it, and loads it

Partial (incremental) resynchronization: the master then streams each new write to the slave to keep it in sync

Every time a slave reconnects to the master, one full resynchronization is performed automatically

# If the master goes down, promote a slave by hand (if the old master comes back, it must be re-attached)
# make the current slave a master
> SLAVEOF no one

Sentinel Mode

When the master goes down, instead of reconfiguring a slave by hand, Sentinel mode elects a new master automatically.

Sentinel is a special mode: Redis ships a sentinel program that runs as an independent process. It periodically sends commands to the Redis instances and uses the replies to decide whether they are down.

Demo

1. Create the sentinel config file sentinel.conf
# sentinel monitor <monitored-name> <master-ip> <port> 2
# (the 2 means at least two sentinels must agree the master is down before it counts as objectively down; otherwise it is only subjectively down)
sentinel monitor mymaster 127.0.0.1 6379 2
2. Start the sentinel
# run redis-sentinel with the path to sentinel.conf
./redis-sentinel ../sentinel.conf

If the master drops out, a new master is elected from the slaves; when the old master recovers, it can only rejoin as a slave.
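Putting the two steps together, a minimal sentinel.conf might look like this (illustrative values):

```
port 26379
# two sentinels must agree before the master counts as objectively down
sentinel monitor mymaster 127.0.0.1 6379 2
# consider the master subjectively down after 30 s without a PING reply
sentinel down-after-milliseconds mymaster 30000
# reconfigure at most one slave at a time during a failover
sentinel parallel-syncs mymaster 1
```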

Advantages:

1. Sentinels cluster, and build on master-slave replication

2. Master and slaves can switch roles, so failures can be failed over

3. Sentinel mode is an upgrade of plain master-slave mode

Disadvantages:

1. Redis is hard to scale online; once cluster capacity hits its limit, growing it is painful

2. Sentinel configuration is fiddly, with many options

Full Sentinel configuration

# Example sentinel.conf

# *** IMPORTANT ***
#
# By default Sentinel will not be reachable from interfaces different than
# localhost, either use the 'bind' directive to bind to a list of network
# interfaces, or disable protected mode with "protected-mode no" by
# adding it to this configuration file.
#
# Before doing that MAKE SURE the instance is protected from the outside
# world via firewalling or other means.
#
# For example you may use one of the following:
#
# bind 127.0.0.1 192.168.1.1
#
# protected-mode no

# port <sentinel-port>
# The port that this sentinel instance will run on
# Port this sentinel instance listens on (default 26379); in a sentinel cluster each sentinel needs its own port
port 26379

# By default Redis Sentinel does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis-sentinel.pid when
# daemonized.
daemonize no

# When running daemonized, Redis Sentinel writes a pid file in
# /var/run/redis-sentinel.pid by default. You can specify a custom pid file
# location here.
pidfile "/var/run/redis-sentinel.pid"

# Specify the log file name. Also the empty string can be used to force
# Sentinel to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
logfile ""

# sentinel announce-ip <ip>
# sentinel announce-port <port>
#
# The above two configuration directives are useful in environments where,
# because of NAT, Sentinel is reachable from outside via a non-local address.
#
# When announce-ip is provided, the Sentinel will claim the specified IP address
# in HELLO messages used to gossip its presence, instead of auto-detecting the
# local address as it usually does.
#
# Similarly when announce-port is provided and is valid and non-zero, Sentinel
# will announce the specified TCP port.
#
# The two options don't need to be used together, if only announce-ip is
# provided, the Sentinel will announce the specified IP and the server port
# as specified by the "port" option. If only announce-port is provided, the
# Sentinel will announce the auto-detected local IP and the specified port.
#
# Example:
#
# sentinel announce-ip 1.2.3.4

# dir <working-directory>
# Every long running process should have a well-defined working directory.
# For Redis Sentinel to chdir to /tmp at startup is the simplest thing
# for the process to don't interfere with administrative tasks such as
# unmounting filesystems.
# Sentinel's working directory
dir "/tmp"

# sentinel monitor <master-name> <ip> <redis-port> <quorum>
#
# Tells Sentinel to monitor this master, and to consider it in O_DOWN
# (Objectively Down) state only if at least <quorum> sentinels agree.
#
# Note that whatever is the ODOWN quorum, a Sentinel will require to
# be elected by the majority of the known Sentinels in order to
# start a failover, so no failover can be performed in minority.
#
# Replicas are auto-discovered, so you don't need to specify replicas in
# any way. Sentinel itself will rewrite this configuration file adding
# the replicas using additional configuration options.
# Also note that the configuration file is rewritten when a
# replica is promoted to master.
#
# Note: master name should not include special characters or spaces.
# The valid charset is A-z 0-9 and the three characters ".-_".
sentinel myid ed1798aba4ccdc556a6fcd10ab569b6bb898cbc6


# sentinel auth-pass <master-name> <password>
# Set the password to use to authenticate with the master and replicas.
# Useful if there is a password set in the Redis instances to monitor.
#
# Note that the master password is also used for replicas, so it is not
# possible to set a different password in masters and replicas instances
# if you want to be able to monitor these instances with Sentinel.
#
# However you can have Redis instances without the authentication enabled
# mixed with Redis instances requiring the authentication (as long as the
# password set is the same for all the instances requiring the password) as
# the AUTH command will have no effect in Redis instances with authentication
# switched off.
#
# Example:
# When the Redis instances require a password, clients connecting to them (including Sentinel) must supply it
# Set the password Sentinel uses for master and slaves; master and slaves must share the same password
# sentinel auth-pass mymaster MySUPER--secret-0123passw0rd

# sentinel auth-user <master-name> <username>
#
# This is useful in order to authenticate to instances having ACL capabilities,
# that is, running Redis 6.0 or greater. When just auth-pass is provided the
# Sentinel instance will authenticate to Redis using the old "AUTH <pass>"
# method. When also an username is provided, it will use "AUTH <user> <pass>".
# In the Redis servers side, the ACL to provide just minimal access to
# Sentinel instances, should be configured along the following lines:
#
#     user sentinel-user >somepassword +client +subscribe +publish \
#                        +ping +info +multi +slaveof +config +client +exec on

# Milliseconds after which an unresponsive master is considered down by this sentinel (default 30 s)
# sentinel down-after-milliseconds <master-name> <milliseconds>
# sentinel down-after-milliseconds mymaster 3000
# Number of milliseconds the master (or any attached replica or sentinel) should
# be unreachable (as in, not acceptable reply to PING, continuously, for the
# specified period) in order to consider it in S_DOWN state (Subjectively
# Down).
#
# Default is 30 seconds.
sentinel deny-scripts-reconfig yes

# requirepass <password>
#
# You can configure Sentinel itself to require a password, however when doing
# so Sentinel will try to authenticate with the same password to all the
# other Sentinels. So you need to configure all your Sentinels in a given
# group with the same "requirepass" password. Check the following documentation
# for more info: https://redis.io/topics/sentinel

# How many slaves may sync with the new master at the same time during a failover
# The smaller the number, the longer the failover takes
# but the larger it is, the more slaves are unavailable because of replication
# Set it to 1 to guarantee only one slave at a time is unable to serve requests
# sentinel parallel-syncs <master-name> <numreplicas>
# sentinel parallel-syncs mymaster 1
# How many replicas we can reconfigure to point to the new replica simultaneously
# during the failover. Use a low number if you use the replicas to serve query
# to avoid that all the replicas will be unreachable at about the same
# time while performing the synchronization with the master.
# IP and port of the Redis master this sentinel monitors
# master-name: a name you choose; only A-z, 0-9 and the three characters ".-_" are allowed
# quorum: how many sentinels must agree the master is unreachable before it counts as objectively down
# sentinel monitor <master-name> <ip> <port> <quorum>
sentinel monitor mymaster 127.0.0.1 6381 2

# Failover timeout. It is used as:
# - the interval between two failovers against the same master by the same sentinel
# - the time allowed for a slave syncing from a wrong master to be redirected to the correct one
# - the time allowed to cancel an in-progress failover
# - the maximum time to point all slaves at the new master during a failover; past it, slaves are still reconfigured correctly, just no longer following the parallel-syncs schedule
# Default is 3 minutes
# sentinel failover-timeout <master-name> <milliseconds>
# sentinel failover-timeout mymaster 180000
# Specifies the failover timeout in milliseconds. It is used in many ways:
#
# - The time needed to re-start a failover after a previous failover was
#   already tried against the same master by a given Sentinel, is two
#   times the failover timeout.
#
# - The time needed for a replica replicating to a wrong master according
#   to a Sentinel current configuration, to be forced to replicate
#   with the right master, is exactly the failover timeout (counting since
#   the moment a Sentinel detected the misconfiguration).
#
# - The time needed to cancel a failover that is already in progress but
#   did not produced any configuration change (SLAVEOF NO ONE yet not
#   acknowledged by the promoted replica).
#
# - The maximum time a failover in progress waits for all the replicas to be
#   reconfigured as replicas of the new master. However even after this time
#   the replicas will be reconfigured by the Sentinels anyway, but not with
#   the exact parallel-syncs progression as specified.
#
# Default is 3 minutes.
sentinel config-epoch mymaster 1

# SCRIPTS EXECUTION
#
# sentinel notification-script and sentinel reconfig-script are used in order
# to configure scripts that are called to notify the system administrator
# or to reconfigure clients after a failover. The scripts are executed
# with the following rules for error handling:
#
# If script exits with "1" the execution is retried later (up to a maximum
# number of times currently set to 10).
#
# If script exits with "2" (or an higher value) the script execution is
# not retried.
#
# If script terminates because it receives a signal the behavior is the same
# as exit code 1.
#
# A script has a maximum running time of 60 seconds. After this limit is
# reached the script is terminated with a SIGKILL and the execution retried.

# NOTIFICATION SCRIPT
#
# sentinel notification-script <master-name> <script-path>
#
# Call the specified notification script for any sentinel event that is
# generated in the WARNING level (for instance -sdown, -odown, and so forth).
# This script should notify the system administrator via email, SMS, or any
# other messaging system, that there is something wrong with the monitored
# Redis systems.
#
# The script is called with just two arguments: the first is the event type
# and the second the event description.
#
# The script must exist and be executable in order for sentinel to start if
# this option is provided.
#
# Example:
# Scripts run when a given event happens, e.g. to e-mail the administrator when something is wrong
# If the script exits with 1 it is retried later (up to 10 times)
# If the script exits with 2 or higher it is not retried
# If the script is terminated by a signal it is treated like exit code 1
# A script may run for at most 60 s; past that it is killed with SIGKILL and retried
#
# Notification script: called for every WARNING-level sentinel event; it should notify the administrator (mail, SMS, ...). It is called with two arguments: the event type
# and the event description. If a script path is configured here, the script must exist and be executable or sentinel will not start
# (shell scripting)
# sentinel notification-script mymaster /var/redis/notify.sh
#
# CLIENTS RECONFIGURATION SCRIPT
#
# sentinel client-reconfig-script <master-name> <script-path>
#
# When the master changed because of a failover a script can be called in
# order to perform application-specific tasks to notify the clients that the
# configuration has changed and the master is at a different address.
#
# The following arguments are passed to the script:
#
# <master-name> <role> <state> <from-ip> <from-port> <to-ip> <to-port>
#
# <state> is currently always "failover"
# <role> is either "leader" or "observer"
#
# The arguments from-ip, from-port, to-ip, to-port are used to communicate
# the old address of the master and the new address of the elected replica
# (now a master).
#
# This script should be resistant to multiple invocations.
#
# Example:
# Client reconfiguration script
# Called when the master changes because of a failover, to tell clients the master address has changed
# The script is called with the following arguments:
# <master-name> <role> <state> <from-ip> <from-port> <to-ip> <to-port>
# The script should be safe to invoke multiple times
# sentinel client-reconfig-script mymaster /var/redis/reconfig.sh

# SECURITY
#
# By default SENTINEL SET will not be able to change the notification-script
# and client-reconfig-script at runtime. This avoids a trivial security issue
# where clients can set the script to anything and trigger a failover in order
# to get the program executed.

sentinel leader-epoch mymaster 1

# REDIS COMMANDS RENAMING
#
# Sometimes the Redis server has certain commands, that are needed for Sentinel
# to work correctly, renamed to unguessable strings. This is often the case
# of CONFIG and SLAVEOF in the context of providers that provide Redis as
# a service, and don't want the customers to reconfigure the instances outside
# of the administration console.
#
# In such case it is possible to tell Sentinel to use different command names
# instead of the normal ones. For example if the master "mymaster", and the
# associated replicas, have "CONFIG" all renamed to "GUESSME", I could use:
#
# SENTINEL rename-command mymaster CONFIG GUESSME
#
# After such configuration is set, every time Sentinel would use CONFIG it will
# use GUESSME instead. Note that there is no actual need to respect the command
# case, so writing "config guessme" is the same in the example above.
#
# SENTINEL SET can also be used in order to perform this configuration at runtime.
#
# In order to set a command back to its original name (undo the renaming), it
# is possible to just rename a command to itself:
#
# SENTINEL rename-command mymaster CONFIG CONFIG
# Generated by CONFIG REWRITE
protected-mode no
user default on nopass ~* +@all
sentinel known-replica mymaster 127.0.0.1 6380
sentinel known-replica mymaster 127.0.0.1 6379
sentinel current-epoch 1                                                                                                                     

Redis Cache Penetration and Avalanche

Cache Penetration

A user queries for data that is not in Redis, so the query falls through to the database, which also has nothing, so the query fails. When many users miss the cache like this, all the requests hit the database and put heavy pressure on it; that is cache penetration.

Solutions

Bloom filter

A Bloom filter is a data structure that stores hashes of all the parameters that could legitimately be queried. Requests are checked against it at the access layer and rejected if they do not match, which shields the storage layer from the pressure.
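A toy Bloom filter can be sketched in a few lines of Python (illustrative only; production setups typically use a ready-made module such as RedisBloom):

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: no false negatives, small false-positive rate."""
    def __init__(self, size=1024, hashes=3):
        self.size = size
        self.hashes = hashes
        self.bits = 0  # bit array packed into one big int

    def _positions(self, item):
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def might_contain(self, item):
        return all(self.bits & (1 << pos) for pos in self._positions(item))

bf = BloomFilter()
for product_id in ("p1001", "p1002", "p1003"):  # IDs that really exist
    bf.add(product_id)

bf.might_contain("p1001")  # True: the query may proceed to cache/DB
bf.might_contain("p9999")  # almost certainly False: reject at the access layer
```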

Cache empty objects

When the storage layer misses, cache the empty result anyway with a short expiry; subsequent reads of that key are then served from the cache.

The downsides: many empty entries cost memory, and the cached emptiness can be out of sync with the database.
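A minimal sketch of the empty-object approach, using a plain dict in place of Redis (all names are illustrative; with real Redis you would SET a sentinel value with a short EX expiry):

```python
import time

NULL = object()            # sentinel marking "key known to be absent"
cache = {}                 # key -> (payload, expires_at)
db = {"p1001": "iPhone"}   # stand-in for the relational database

def get_product(key, null_ttl=60):
    hit = cache.get(key)
    if hit is not None and hit[1] > time.time():
        return None if hit[0] is NULL else hit[0]
    value = db.get(key)    # cache miss: hit the database once
    if value is None:
        cache[key] = (NULL, time.time() + null_ttl)  # cache the miss briefly
        return None
    cache[key] = (value, time.time() + 3600)
    return value

get_product("p9999")  # miss: DB queried, empty result cached
get_product("p9999")  # served from cache, DB untouched this time
```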

Cache Breakdown

Cache breakdown refers to a single extremely hot key absorbing heavy concurrent traffic. The instant that key expires, the continuing flood of concurrent requests goes straight through to the database.

Solutions

Never expire hot keys

From the cache's point of view there is no expiry, so the problems caused by a hot key expiring never occur.

Add a mutex lock

Distributed lock: use a distributed lock so that, for each key, only one thread at a time may query the backing store; the other threads, lacking the lock, simply wait. This shifts the high-concurrency pressure onto the distributed lock, which is therefore put under serious strain.
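A single-process sketch of the mutex idea using threading.Lock (a real deployment would use a distributed lock, e.g. Redis SET with NX and EX; the names here are illustrative):

```python
import threading

cache = {}
lock = threading.Lock()
db_queries = 0

def load_from_db(key):
    global db_queries
    db_queries += 1  # the expensive query; we want it to run exactly once
    return f"value-of-{key}"

def get(key):
    value = cache.get(key)
    if value is not None:
        return value
    with lock:                  # only one thread rebuilds the hot key
        value = cache.get(key)  # double-check: another thread may have won
        if value is None:
            value = load_from_db(key)
            cache[key] = value
    return value

threads = [threading.Thread(target=get, args=("hot",)) for _ in range(50)]
for t in threads: t.start()
for t in threads: t.join()
# Despite 50 concurrent callers, the database was queried exactly once.
```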

Cache Avalanche

A large portion of the cache expires within the same window, or the Redis cluster goes down.

Solutions

Make Redis highly available

Run several Redis instances so that if one dies the others keep serving.

Rate limiting and degradation

After a cache miss, use locks or queues to bound the number of threads that read the database and rebuild the cache; for example, allow only one thread per key to query the database and write the cache while the other threads wait.

Data warm-up

Warm-up means touching the data before going live, so the heavily accessed part of it is already loaded into the cache; before an expected traffic spike, trigger the cache loading manually and give the keys different expire times so their expirations are spread out as evenly as possible.
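The "different expire times" advice can be sketched as TTL jitter (the numbers are illustrative):

```python
import random

BASE_TTL = 3600  # one hour base expiry

def jittered_ttl(base=BASE_TTL, spread=600):
    """Spread expirations over [base, base + spread) so keys don't expire together."""
    return base + random.randrange(spread)

# During warm-up, each preloaded key gets a slightly different lifetime:
ttls = [jittered_ttl() for _ in range(1000)]
```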
