Redis Study Notes

Redis

NoSQL

What is NoSQL

NoSQL (Not Only SQL) means "not only SQL" and refers broadly to non-relational databases. With the rise of Web 2.0 sites, traditional relational databases struggled to keep up, especially with very large-scale, highly concurrent, purely dynamic SNS-style Web 2.0 sites, exposing many problems that were hard to overcome, while non-relational databases developed very rapidly thanks to their own characteristics. NoSQL databases arose to meet the challenges posed by large-scale data sets and diverse data types, above all the storage problems of big-data applications.

(For example, Google and Facebook collect trillions of bits of data for their users every day.) These data stores require no fixed schema and can scale horizontally without extra work.

Why use NoSQL

Today we can easily access and capture data through third-party platforms such as Google and Facebook. Users' personal information, social networks, geolocation, user-generated content, and operation logs have all multiplied. If we want to mine this user data, SQL databases are no longer a good fit, whereas NoSQL databases have evolved to handle such large data sets well.

Advantages of NoSQL

1. Easy to scale

There are many kinds of NoSQL databases, but they share one trait: they drop the relational features of relational databases. With no relationships between pieces of data, scaling out becomes very easy, which in turn brings scalability at the architectural level.

2. Diverse, flexible data models

NoSQL does not require you to define fields for your data up front; you can store custom data formats at any time. In a relational database, adding or removing columns is a painful operation, and on a very large table, adding a column can be a nightmare.

3. Traditional RDBMS vs NoSQL

RDBMS vs NoSQL

RDBMS

  • Highly organized, structured data
  • Structured Query Language (SQL)
  • Data and relationships stored in separate tables
  • Data manipulation language, data definition language
  • Strict consistency
  • Transaction support

NoSQL

  • Stands for more than just SQL
  • No declarative query language
  • No predefined schema
    - Key-value stores, column stores, document stores, graph databases
  • Eventual consistency rather than ACID properties
  • Unstructured and unpredictable data
  • CAP theorem
  • High performance, high availability, and scalability

Well-known NoSQL databases

Redis
Memcached
MongoDB

Problems NoSQL solves

The 3 V's of the big-data era
  • Volume (massive data)
  • Variety (diverse data)
  • Velocity (real-time data)
The 3 "highs" of internet demand
  • High concurrency
  • High scalability
  • High performance

A NoSQL case study

  • In an e-commerce project
  1. Basic product information
  2. Product descriptions, details, and reviews (text-heavy)
  3. Product images
  4. Product keywords
  5. Bursty, high-frequency hot-spot product data
  6. Product transactions, price calculation, loyalty points
  • Difficulties
  1. Diversity of data types
  2. Diversity and repeated restructuring of data sources
  3. Reworking a data source without large-scale rework of the data service platform
  • Solution

UDSL (Unified Data Service Layer): solves the problem of multiple data sources and data kinds, and of the differing mappings between databases. UDSL inserts a proxy layer between the web application cluster and the underlying data sources.

Responsibilities:

  1. Map between the various databases
  2. Provide a unified API
  3. Cache hot data

(UDSL architecture diagram)

NoSQL data models

What is BSON

BSON (/ˈbiːsən/) is a computer data-interchange format used mainly as the storage and network-transfer format in the MongoDB database. It is a binary representation capable of expressing simple data structures, associative arrays (called "objects" or "documents" in MongoDB), and MongoDB's various data types. The name derives from JSON and means Binary JSON. Like JSON, it supports embedded document objects and array objects.

{
	"customer":
	{
		"id":1136,
		"name":"Z3",
		"billingAddress":
		[{
			"city":"beijing"
		}],
		"orders":
		[{
			"id":17,
			"customerId":1136,
			"orderItems":
			[{
				"productId":27,
				"price":77.5,
				"productName":"thinking in java"
			}],
			"shippingAddress":
			[{
				"city":"beijing"
			}],
			"orderPayment":
			[{
				"ccinfo":"111-222-333",
				"txnid":"asdfadcd334",
				"billingAddress":
				{
					"city":"beijing"
				}
			}]
		}]
	}
}
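The aggregate above can be read with a single lookup instead of joins. A minimal Python sketch, using a trimmed copy of the example document:

```python
import json

# A trimmed copy of the denormalized customer aggregate above: one document
# holds the customer, the order, and its items together.
doc = json.loads("""
{
  "customer": {
    "id": 1136,
    "name": "Z3",
    "orders": [{
      "id": 17,
      "customerId": 1136,
      "orderItems": [{"productId": 27, "price": 77.5, "productName": "thinking in java"}]
    }]
  }
}
""")

# One pointer walk replaces what would be a multi-table join in an RDBMS.
first_item = doc["customer"]["orders"][0]["orderItems"][0]
print(first_item["productName"])  # thinking in java
```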
Why process data with BSON
  • Join queries are discouraged under high concurrency; internet companies use redundant (denormalized) data to avoid joins
  • Distributed transactions cannot sustain much concurrency
Aggregate models
  • Key-value pairs
  • BSON
  • Column family
    • As the name implies, data is stored by column. The biggest advantages are convenient storage of structured and
      semi-structured data, easy data compression, and a large I/O advantage for queries touching only one or a few columns.

(column-family model diagram)

  • Graph

(graph model diagram)

The four main categories of NoSQL databases

Key-value stores

Databases in this class are mainly built on a hash table containing keys and pointers to particular pieces of data. The key/value model's advantage for IT systems is that it is simple and easy to deploy. But if the DBA only needs to query or update part of a value, key/value becomes inefficient.

Examples: BerkeleyDB, Redis, Tair, Memcached.

Document databases

Document databases were inspired by the Lotus Notes office software, and they resemble the key-value stores above. Their data model is versioned documents: semi-structured documents stored in a specific format such as JSON. A document database can be seen as an upgraded key-value database that allows nested key-value structures inside values, and its queries are more efficient than a plain key-value database's.

Examples: CouchDB, MongoDB. There is also a Chinese document database, SequoiaDB, which is open source.

Column stores

As the name implies, data is stored by column. The biggest advantages are convenient storage of structured and semi-structured data, easy data compression, and a large I/O advantage for queries touching only one or a few columns.

These databases are usually used for massive data in distributed storage. Keys still exist, but they point to multiple columns, which are organized into column families.

Examples: Cassandra, HBase, Riak.

Graph databases

Unlike row-and-column, rigidly structured SQL databases, graph databases use a flexible graph model and can scale across multiple servers. NoSQL databases have no standard query language (SQL), so querying requires specifying a data model. Many NoSQL databases offer RESTful data interfaces or query APIs.

Examples: Neo4j, InfoGrid, Infinite Graph.

Comparison

(RDBMS vs NoSQL comparison chart)

ACID properties of relational database transactions

  1. A (Atomicity)
    Atomicity is easy to understand: all operations in a transaction either complete in full or not at all. The transaction succeeds only if every operation succeeds; if any operation fails, the whole transaction fails and must be rolled back. For example, a bank transfer of 100 yuan from account A to account B has two steps: 1) withdraw 100 from A; 2) deposit 100 into B. Both steps must complete together or not at all; if only the first completes, 100 yuan simply vanishes.
  2. C (Consistency)
    Consistency is also easy to understand: the database always remains in a consistent state, and running a transaction does not violate the database's existing integrity constraints.
  3. I (Isolation)
    Isolation means concurrent transactions do not interfere with each other: if the data a transaction accesses is being modified by another transaction, it is unaffected by that other transaction until the latter commits. For example, while a transfer of 100 yuan from A to B is still in progress, B will not yet see the extra 100 yuan when checking the account balance.
  4. D (Durability)
    Durability means that once a transaction commits, its changes are stored permanently in the database and survive even a crash.
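The atomicity example above can be sketched in a few lines. This is a minimal illustration, assuming an in-memory dict of balances stands in for the database:

```python
# Minimal sketch of atomicity: transfer 100 from A to B, or roll back entirely.
def transfer(accounts, src, dst, amount):
    snapshot = dict(accounts)        # remember the state for rollback
    try:
        accounts[src] -= amount      # step 1: debit the source account
        if accounts[src] < 0:
            raise ValueError("insufficient funds")
        accounts[dst] += amount      # step 2: credit the destination
    except Exception:
        accounts.clear()
        accounts.update(snapshot)    # rollback: undo any partial work
        return False
    return True

accounts = {"A": 500, "B": 0}
transfer(accounts, "A", "B", 100)    # both steps succeed together
print(accounts)                      # {'A': 400, 'B': 100}
```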

NoSQL transactions: CAP

  1. C (Consistency)

    Every read sees the result of every previously completed write; in a distributed setting, the data at every node is consistent.

  2. A (Availability)

    Every request receives a response, whether it succeeds or fails.

  3. P (Partition tolerance)

    The system keeps operating even when the network partitions (e.g. links go down) and parts of it are isolated from each other.

Choosing between consistency (C) and availability (A)

Transactional consistency requirements
  Many real-time web systems do not require strict database transactions; their read-consistency requirements are low, and in some scenarios write consistency matters little either. Eventual consistency is acceptable.

Write and read real-time requirements
  In a relational database, a row inserted one moment can certainly be read the next. Many web applications do not need that level of immediacy: if a post becomes visible to my subscribers a few seconds, or even tens of seconds, after I publish it, that is perfectly acceptable.

Complex SQL queries, especially multi-table joins
  Any web system with large data volumes avoids joins across large tables, as well as complex analytical report queries; SNS sites in particular design their requirements and products so that such queries never arise. What remains is mostly single-table primary-key lookups and simple single-table conditional paging queries, so SQL's power is greatly diluted.

  • The CAP theorem says a distributed storage system can guarantee at most two of the three properties above. Since today's network hardware inevitably suffers delays and packet loss, partition tolerance is something we must have, so we can only trade consistency against availability; no NoSQL system can guarantee all three.

CA: traditional databases such as Oracle. A single-node cluster satisfying consistency and availability, usually with limited scalability.
AP: the choice of most website architectures. Satisfies availability and partition tolerance, usually at the cost of weaker consistency.
CP: Redis, MongoDB. Satisfies consistency and partition tolerance, usually with lower availability.

  • Note: a distributed architecture forces a trade-off.
    Strike a balance between consistency and availability. Most web applications do not actually need strong consistency, so giving up strong consistency (C) in exchange for availability while keeping partition tolerance (P) is the direction today's distributed database products are taking.

What is BASE

BASE is a solution proposed to address the loss of availability caused by relational databases' strong-consistency requirements.

BASE is an acronym for three terms:

  • Basically Available
  • Soft state
  • Eventually consistent

The idea is to relax the system's requirement for data consistency at any given moment in exchange for gains in overall scalability and performance. Large systems, because of their geographic distribution and extreme performance demands, cannot achieve those goals with distributed transactions; BASE is the alternative approach that makes them attainable.
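The three BASE terms can be illustrated with a toy two-replica store. This is a minimal sketch under simplified assumptions (one primary replica, one asynchronous follower, a list standing in for the replication channel):

```python
# Minimal sketch of eventual consistency: a write is acknowledged by one
# replica immediately and propagated asynchronously to the other.
class Replica:
    def __init__(self):
        self.data = {}

replicas = [Replica(), Replica()]
pending = []  # simulated async replication queue

def write(key, value):
    replicas[0].data[key] = value     # acknowledged at once: Basically Available
    pending.append((key, value))      # replica 1 lags behind: Soft state

def replicate():
    while pending:
        key, value = pending.pop(0)
        replicas[1].data[key] = value # replicas converge: Eventually consistent

write("x", 1)
stale = replicas[1].data.get("x")     # None: the follower has not caught up yet
replicate()                           # after replication, both replicas agree
```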

Redis

What is Redis

Redis: REmote DIctionary Server.
It is completely open source and free, written in C, and released under the BSD license: a high-performance, distributed, in-memory key/value NoSQL database that runs in memory and supports persistence. It is one of the most popular NoSQL databases today and is also called a data-structure server.

Features of Redis

  • Redis supports persistence: it can save in-memory data to disk and reload it on restart
  • Redis supports not just simple key-value data, but also list, set, zset, hash, and other data structures
  • Redis supports data replication, i.e. master-slave replication

Redis particulars

Single process

Redis handles client requests with a single-process model; responses to read/write events are implemented by wrapping epoll, so Redis's actual processing speed depends entirely on the efficiency of the main process.

epoll is the Linux kernel's improved mechanism for handling large numbers of file descriptors: an enhanced version of the Linux multiplexed I/O interfaces select/poll. It significantly improves a program's CPU utilization when there are many concurrent connections but only a few are active.

Default of 16 databases

There are 16 databases by default, indexed like an array starting from 0; database 0 is selected initially.

## redis.conf contents

# Set the number of databases. The default database is DB 0, you can select
# a different one on a per-connection basis using SELECT <dbid> where
# dbid is a number between 0 and 'databases'-1
databases 16
Default port 6379
## redis.conf contents

# Accept connections on the specified port, default is 6379 (IANA #815344).
# If port 0 is specified Redis will not listen on a TCP socket.
port 6379

Redis official website

Installing Redis

[root@localhost ~]$ cd /opt
# Download
[root@localhost opt]$ wget http://download.redis.io/releases/redis-5.0.7.tar.gz
# Extract
[root@localhost opt]$ tar -zxvf redis-5.0.7.tar.gz
[root@localhost opt]$ cd redis-5.0.7
# Compile
[root@localhost redis-5.0.7]$ make
  • Note:

    If gcc (the C compiler) is not installed, run yum install gcc-c++ with network access, then run make distclean in the redis-5.0.7 directory to clean up the failed build before running make again.

Starting Redis

# Start command; Ctrl+C to exit
[root@localhost redis-5.0.7]$ src/redis-server 

# Log output
5679:C 28 Jan 2020 19:14:21.626 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
5679:C 28 Jan 2020 19:14:21.626 # Redis version=5.0.7, bits=64, commit=00000000, modified=0, pid=5679, just started
5679:C 28 Jan 2020 19:14:21.626 # Warning: no config file specified, using the default config. In order to specify a config file use src/redis-server /path/to/redis.conf
5679:M 28 Jan 2020 19:14:21.626 * Increased maximum number of open files to 10032 (it was originally set to 1024).
                _._                                                  
           _.-``__ ''-._                                             
      _.-``    `.  `_.  ''-._           Redis 5.0.7 (00000000/0) 64 bit
  .-`` .-```.  ```\/    _.,_ ''-._                                   
 (    '      ,       .-`  | `,    )     Running in standalone mode
 |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379
 |    `-._   `._    /     _.-'    |     PID: 5679
  `-._    `-._  `-./  _.-'    _.-'                                   
 |`-._`-._    `-.__.-'    _.-'_.-'|                                  
 |    `-._`-._        _.-'_.-'    |           http://redis.io        
  `-._    `-._`-.__.-'_.-'    _.-'                                   
 |`-._`-._    `-.__.-'    _.-'_.-'|                                  
 |    `-._`-._        _.-'_.-'    |                                  
  `-._    `-._`-.__.-'_.-'    _.-'                                   
      `-._    `-.__.-'    _.-'                                       
          `-._        _.-'                                           
              `-.__.-'                                               

5679:M 28 Jan 2020 19:14:21.627 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
5679:M 28 Jan 2020 19:14:21.627 # Server initialized
5679:M 28 Jan 2020 19:14:21.627 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
5679:M 28 Jan 2020 19:14:21.627 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
5679:M 28 Jan 2020 19:14:21.627 * Ready to accept connections

Using the Redis client

# Open a new terminal window and connect to redis with the redis client
# or: $ src/redis-cli -p 6379
[helin@localhost redis-5.0.7]$ src/redis-cli
# Test the connection
127.0.0.1:6379> ping
PONG
# 127.0.0.1:6379> set key value [expiration EX seconds|PX milliseconds] [NX|XX]
127.0.0.1:6379> set key1 value1
OK
127.0.0.1:6379> get key1
"value1"

Starting Redis in the background

# Back up redis.conf, then edit it
[root@localhost redis-5.0.7]$ cp redis.conf redis.conf.bak
[root@localhost redis-5.0.7]$ vim redis.conf

################################# GENERAL #####################################

# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
daemonize no

# To start redis as a daemon, change no to yes and save

# Start redis-server again, specifying the config file path
[root@localhost redis-5.0.7]$ src/redis-server /opt/redis-5.0.7/redis.conf
5741:C 28 Jan 2020 19:46:47.702 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
5741:C 28 Jan 2020 19:46:47.702 # Redis version=5.0.7, bits=64, commit=00000000, modified=0, pid=5741, just started
5741:C 28 Jan 2020 19:46:47.702 # Configuration loaded
[root@localhost redis-5.0.7]$

Closing the Redis client

# Option 1: force-quit with Ctrl+C
# Option 2: shutdown [NOSAVE|SAVE] saves (or not) and disconnects; the argument is optional and defaults to save
# This also shuts down the redis server
127.0.0.1:6379> shutdown save
# exit leaves the client
not connected> exit
[helin@localhost redis-5.0.7]$
# Option 3: quit
# Closes the connection between the current client and the redis server. The connection is closed as soon as all pending replies (if any) have been written to the client
redis 127.0.0.1:6379> QUIT
OK

The five main Redis data types

String

String is the most basic Redis type; you can think of it as identical to the Memcached model: one key maps to one value.

The String type is binary-safe, meaning a Redis String can contain any data, such as a JPG image or a serialized object.

A String value in Redis can be at most 512 MB.

Hash

A Redis hash is a collection of key-value pairs.

It is a map of String fields to values, and is especially suitable for storing objects.

It is similar to Map<String,Object> in Java.

List

A Redis list is a simple list of strings, ordered by insertion. You can push an element onto the head (left) or tail (right) of the list. Under the hood it is a linked list.

Set

A Redis Set is an unordered collection of String values, implemented with a hash table.

ZSet (sorted set)

A ZSet (sorted set), like a Set, is a collection of String elements with no duplicate members.

The difference is that every element is associated with a score of type double.

Redis uses the scores to order the members from smallest to largest. Members of a ZSet are unique, but scores may repeat.
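These ZSet rules can be sketched with a plain dict. This is a minimal illustration of the semantics only (member uniqueness, score updates, ascending iteration), not how Redis actually implements sorted sets:

```python
# Minimal sketch of ZSet semantics: members are unique, each carries a
# numeric score, and iteration order is by ascending score.
def zadd(zset, score, member):
    zset[member] = score                    # re-adding a member updates its score

def zrange(zset):
    # sort by score, breaking ties by member name for determinism
    return sorted(zset, key=lambda m: (zset[m], m))

z = {}
zadd(z, 60, "v1")
zadd(z, 80, "v3")
zadd(z, 70, "v2")
zadd(z, 70, "v2b")                          # a duplicate score is allowed
zadd(z, 100, "v1")                          # a duplicate member updates the score
print(zrange(z))                            # ['v2', 'v2b', 'v3', 'v1']
```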

Redis commands

Redis key commands
exists: check whether a given key exists
# Check whether key1 exists
# 127.0.0.1:6379> EXISTS key [key ...]
127.0.0.1:6379> EXISTS key1
(integer) 1
127.0.0.1:6379> EXISTS key111
(integer) 0

del: delete an existing key
# Delete key11; nonexistent keys are ignored.
# 127.0.0.1:6379> DEL key [key ...]
127.0.0.1:6379[1]> DEL key11
(integer) 1

keys: list all keys in the current database that match a pattern
# Only queries keys in the current database
# 127.0.0.1:6379> keys pattern
127.0.0.1:6379> keys *
1) "key2"
2) "key1"
127.0.0.1:6379[1]> KEYS ke*
1) "key11"

move: move a key from the current database to the given database
# Move key1 from database 0 to database 1
# 127.0.0.1:6379> MOVE key db
127.0.0.1:6379> MOVE key1 1
(integer) 1
127.0.0.1:6379> SELECT 1
OK
127.0.0.1:6379[1]> EXISTS key1
(integer) 1

ttl: return a key's remaining time to live (seconds)
# Check key2's remaining TTL; -1 means it never expires
# 127.0.0.1:6379> TTL key
127.0.0.1:6379> ttl key2
(integer) -1

expire: set a key's time to live (seconds)
# Give key2 a 10-second TTL; once expired, key2 is unavailable
# 127.0.0.1:6379> EXPIRE key seconds
127.0.0.1:6379> EXPIRE key2 10
(integer) 1
# 10 seconds later, key2's remaining TTL is -2, meaning it has expired and is no longer accessible
127.0.0.1:6379> ttl key2
(integer) -2
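The three TTL return values above (-1, a positive remainder, -2) can be sketched as follows. This is a minimal simulation with lazy eviction, not Redis's actual expiry mechanism:

```python
import time

# Minimal sketch of EXPIRE/TTL semantics: TTL is -1 for a key with no
# expiry, the remaining seconds while the key lives, and -2 once it is gone.
store, expires = {}, {}

def expire(key, seconds):
    expires[key] = time.time() + seconds

def ttl(key):
    if key in expires and expires[key] <= time.time():
        store.pop(key, None)                # lazily evict the expired key
        expires.pop(key, None)
    if key not in store:
        return -2                           # expired or never existed
    if key not in expires:
        return -1                           # exists with no expiry set
    return int(expires[key] - time.time())  # remaining whole seconds

store["key2"] = "value2"
print(ttl("key2"))                          # -1: no expiry set
expire("key2", 10)
print(ttl("key2"))                          # remaining seconds, at most 10
expires["key2"] = time.time() - 1           # pretend the 10 seconds have passed
print(ttl("key2"))                          # -2: expired and removed
```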

persist: remove a given key's time to live
# Set key3 to expire in 1000 seconds
127.0.0.1:6379> EXPIRE key3 1000
(integer) 1
127.0.0.1:6379> ttl key3
(integer) 984
# Remove the TTL; the key never expires
# 127.0.0.1:6379> PERSIST key
127.0.0.1:6379> PERSIST key3
(integer) 1
127.0.0.1:6379> ttl key3
(integer) -1

type: return the type of the value stored at a key
# Check the type of key3's value
# 127.0.0.1:6379> TYPE key
127.0.0.1:6379> TYPE key3
string
127.0.0.1:6379> type list1
list

rename: rename a key
# Rename key4 to key44
# 127.0.0.1:6379> RENAME key newkey
127.0.0.1:6379> RENAME key4 key44
OK
# If the key does not exist, an error is returned
127.0.0.1:6379> RENAME key444 key44
(error) ERR no such key

randomkey: return a random key from the current database
# Return a random key from the current database
# 127.0.0.1:6379> RANDOMKEY
127.0.0.1:6379> RANDOMKEY
"key44"

Redis String commands
set: set a key's value, overwriting any existing value
# 127.0.0.1:6379> SET key value [expiration EX seconds|PX milliseconds] [NX|XX]
127.0.0.1:6379> set key1 value1
OK

get: get a key's value.
# Returns nil if the key does not exist; errors if the value is not a String
# 127.0.0.1:6379> GET key
127.0.0.1:6379> GET key1
"value1"
# Returns nil if the key does not exist
127.0.0.1:6379> GET key111
(nil)
# Errors if the value is not a String
127.0.0.1:6379> GET list1
(error) WRONGTYPE Operation against a key holding the wrong kind of value

append: append a value to the given key
# Append to key1's value
# 127.0.0.1:6379> APPEND key value
127.0.0.1:6379> GET key1
"value1"
127.0.0.1:6379> APPEND key1 key1
(integer) 10
127.0.0.1:6379> GET key1
"value1key1"
# If the key does not exist, append behaves like set
127.0.0.1:6379> EXISTS key11111
(integer) 0
127.0.0.1:6379> APPEND key11111 value11111
(integer) 10

strlen: get the length of a key's value
# Get key11111's length.
# 127.0.0.1:6379> STRLEN key
127.0.0.1:6379> STRLEN key11111
(integer) 10
# Errors if the value is not a string
127.0.0.1:6379> STRLEN list1
(error) WRONGTYPE Operation against a key holding the wrong kind of value

incr: increment the integer value of a key by 1
# Add 1 to the value
# 127.0.0.1:6379> INCR key
127.0.0.1:6379> SET keyNum 1
OK
127.0.0.1:6379> INCR keyNum
(integer) 2
127.0.0.1:6379> get keyNum
"2"
# If the key does not exist, it is first set to 0 (as if by set key10000 0) and then incremented
127.0.0.1:6379> INCR key10000
(integer) 1
127.0.0.1:6379> GET key10000
"1"
# Errors if the value is not numeric
127.0.0.1:6379> INCR key1
(error) ERR value is not an integer or out of range
# The value must be an integer
127.0.0.1:6379> set keyDouble 1.1111
OK
127.0.0.1:6379> INCR keyDouble
(error) ERR value is not an integer or out of range
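The INCR behaviors above (missing key starts from 0, non-integers error) can be sketched like this. A minimal illustration with a plain dict as the store:

```python
# Minimal sketch of INCR semantics: a missing key is treated as 0 before
# incrementing, values are stored as strings, and non-integers raise an error.
def incr(store, key, delta=1):
    raw = store.get(key, "0")            # a missing key starts from 0
    try:
        value = int(raw) + delta
    except ValueError:
        raise ValueError("value is not an integer or out of range")
    store[key] = str(value)              # values are stored back as strings
    return value

store = {"keyNum": "1"}
print(incr(store, "keyNum"))             # 2
print(incr(store, "key10000"))           # 1: created as 0, then incremented
```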

incrby: increment the integer value of a key by a given amount
# Add 10 to the value; otherwise the same as incr
# 127.0.0.1:6379> INCRBY key increment
127.0.0.1:6379> INCRBY key10000 10
(integer) 11
127.0.0.1:6379> INCRBY key10000 10
(integer) 21
127.0.0.1:6379> get key10000
"21"

decr: decrement the integer value of a key by 1
# Subtract 1 from the value; otherwise the same as incr
# 127.0.0.1:6379> DECR key
127.0.0.1:6379> DECR key10000
(integer) 20

decrby: decrement the integer value of a key by a given amount
# Subtract 10 from the value; otherwise the same as incr
# 127.0.0.1:6379> DECRBY key increment
127.0.0.1:6379> DECRBY key10000 10
(integer) 10
127.0.0.1:6379> DECRBY key10000 5
(integer) 5
127.0.0.1:6379> GET key10000
"5"

getrange: get a substring of a key's value
# Get the characters at positions 1-2 (start and end are zero-based, inclusive indexes: position 0 is 'v', 1 is 'a', 2 is 'l')
# 127.0.0.1:6379> GETRANGE key start end
127.0.0.1:6379> get key44
"value4"
127.0.0.1:6379> GETRANGE key44 1 2
"al"
# end=-1 returns the whole string
127.0.0.1:6379> GETRANGE key44 0 -1
"value4"
# A nonexistent key yields an empty string
127.0.0.1:6379> GETRANGE key4444 1 2
""
# Errors if the value is not a String
127.0.0.1:6379> GETRANGE list1 1 2
(error) WRONGTYPE Operation against a key holding the wrong kind of value

setrange: overwrite part of a key's value starting at an offset
# Overwrite key44's value starting from character 0
# 127.0.0.1:6379> SETRANGE key offset value
127.0.0.1:6379> SETRANGE key44 0 start
(integer) 6
127.0.0.1:6379> get key44
"start4"
# If the key does not exist it is created; the gap before the offset is padded with NUL bytes (\x00)
127.0.0.1:6379> SETRANGE ke000 4 value000
(integer) 12
127.0.0.1:6379> GET ke000
"\x00\x00\x00\x00value000"
# Errors if the value is not a String
127.0.0.1:6379> SETRANGE list1 1 li1
(error) WRONGTYPE Operation against a key holding the wrong kind of value
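The range behaviors above can be sketched together. This is a minimal illustration of GETRANGE's inclusive end index (with only the -1 shorthand special-cased) and SETRANGE's NUL padding:

```python
# Minimal sketch of GETRANGE (inclusive end; -1 means through the last
# character) and SETRANGE (offsets past the end are padded with NUL bytes).
def getrange(s, start, end):
    end = len(s) - 1 if end == -1 else end
    return s[start:end + 1]                # Redis's end index is inclusive

def setrange(s, offset, value):
    if offset > len(s):
        s += "\x00" * (offset - len(s))    # pad the gap with NUL bytes
    return s[:offset] + value + s[offset + len(value):]

print(getrange("value4", 1, 2))            # al
print(setrange("value4", 0, "start"))      # start4
```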

setex: set a key's value together with an expiry (seconds).
# Give key1 a 1000-second TTL and set its value to key1.1, overwriting the old value
# 127.0.0.1:6379> SETEX key seconds value
127.0.0.1:6379> SETEX key1 1000 key1.1
OK
127.0.0.1:6379> get key1
"key1.1"
127.0.0.1:6379> ttl key1
(integer) 986
# If the key does not exist, the value is simply set
127.0.0.1:6379> SETEX key100 100 key100
OK
127.0.0.1:6379> get key100
"key100"
127.0.0.1:6379> ttl key100
(integer) 86
# If the existing value is not a String, it is overwritten anyway (the type is ignored)
127.0.0.1:6379> SETEX list1 100 key1
OK
127.0.0.1:6379> GET list1
"key1"
127.0.0.1:6379> type list1
string

setnx: set a key's value only if the key does not exist
# The value is set only when the key does not exist
# 127.0.0.1:6379> SETNX key value
127.0.0.1:6379> SETNX key100 key100
(integer) 1
127.0.0.1:6379> GET key100
"key100"
127.0.0.1:6379> SETNX key1 key100
(integer) 0

mset: set one or more key-value pairs at once
# Set several key-value pairs at once
# 127.0.0.1:6379> MSET key value [key value ...]
127.0.0.1:6379> mset key100 value100 key101 value101
OK

mget: get the values of one or more keys
# A nonexistent key yields nil
# 127.0.0.1:6379> MGET key [key ...]
127.0.0.1:6379> MGET key1 key100 key101 key102
1) "key1.1"
2) "value100"
3) "value101"
4) (nil)
# If the key is not a String, nil is returned as well
127.0.0.1:6379> MGET list1
1) (nil)

msetnx: set multiple key-value pairs only if none of the keys exist
# The pairs are set only when none of the given keys exist
# 127.0.0.1:6379> MSETNX key value [key value ...]
127.0.0.1:6379> MSETNX key200 value200 key201 value201
(integer) 1
127.0.0.1:6379> MSETNX key1 va1 key300 value300
(integer) 0
127.0.0.1:6379> get key300
(nil)

getset: set a key's value and return the old value
# Set key1 to a new value and return its previous value
# 127.0.0.1:6379> GETSET key value
127.0.0.1:6379> GETSET key1 k1
"key1.1"
# If the key does not exist, the value is still set, but nil is returned
127.0.0.1:6379> GETSET kk kv
(nil)
127.0.0.1:6379> GET kk
"kv"
# Errors if the existing value is not a String
127.0.0.1:6379> GETSET list1 ll
(error) WRONGTYPE Operation against a key holding the wrong kind of value

Redis List commands
lpush: push one or more values onto the head (left) of a list
# Pushed from the left, so traversal order is the reverse of insertion order: last in, first out (a stack)
# 127.0.0.1:6379> LPUSH key value [value ...]
127.0.0.1:6379> LPUSH list1 0 1 2 3
(integer) 4

lrange: get a range of elements from a list
# Get the 2nd through 3rd elements of the list
# 127.0.0.1:6379> LRANGE key start stop
127.0.0.1:6379> LRANGE list1 1 2
1) "2"
2) "1"
# If the range exceeds the list, the overlapping part is returned
127.0.0.1:6379> LRANGE list1 1 6
1) "2"
2) "1"
3) "0"
# If there is no overlap, an empty list is returned
127.0.0.1:6379> LRANGE list1 5 6
(empty list or set)
# end=-1 returns the whole list
127.0.0.1:6379> LRANGE list1 0 -1
1) "3"
2) "2"
3) "1"
4) "0"
5) "4"
6) "5"
7) "6"

rpush: push one or more values onto the tail (right) of a list
# Pushed from the right, so traversal order matches insertion order: first in, first out (a queue)
# 127.0.0.1:6379> RPUSH key value [value ...]
127.0.0.1:6379> RPUSH list1 4 5 6
(integer) 7
127.0.0.1:6379> LRANGE list1 0 9
1) "3"
2) "2"
3) "1"
4) "0"
5) "4"
6) "5"
7) "6"

lpop: remove and return the first element of a list
# Remove and return list1's first element
# 127.0.0.1:6379> LPOP key
127.0.0.1:6379> LPOP list1
"3"

rpop: remove and return the last element of a list
# Remove and return list1's last element
# 127.0.0.1:6379> RPOP key
127.0.0.1:6379> RPOP list1
"6"

lindex: get a list element by index
# 127.0.0.1:6379> LINDEX key index
127.0.0.1:6379> LRANGE list1 0 -1
1) "2"
2) "1"
3) "0"
4) "4"
5) "5"
# Return the element at index 0
127.0.0.1:6379> LINDEX list1 0
"2"
# If there is no element at index 9, nil is returned
127.0.0.1:6379> LINDEX list1 9
(nil)


llen: get the length of a list
# Get the length
# 127.0.0.1:6379> LLEN key
127.0.0.1:6379> LLEN list1
(integer) 5
# If the key does not exist, 0 is returned
127.0.0.1:6379> LLEN list3
(integer) 0
# Errors if the key is not a list
127.0.0.1:6379> LLEN k1
(error) WRONGTYPE Operation against a key holding the wrong kind of value

lrem: remove a given number of occurrences of a value from a list
127.0.0.1:6379> LRANGE list2 0 -1
1) "1"
2) "1"
3) "1"
4) "2"
5) "2"
6) "3"
7) "4"
# Remove a total of 2 occurrences of 1; when count > 0 the list is scanned from head to tail
# 127.0.0.1:6379> LREM key count value
127.0.0.1:6379> LREM list2 2 1
(integer) 2
127.0.0.1:6379> LRANGE list2 0 -1
1) "1"
2) "2"
3) "2"
4) "3"
5) "4"
# When count < 0 the list is scanned from tail to head
127.0.0.1:6379> LRANGE list2 0 -1
 1) "1"
 2) "1"
 3) "1"
 4) "1"
 5) "1"
 6) "2"
 7) "2"
 8) "3"
 9) "4"
10) "1"
11) "1"
12) "1"
127.0.0.1:6379> LREM list2 -2 1
(integer) 2
127.0.0.1:6379> LRANGE list2 0 -1
 1) "1"
 2) "1"
 3) "1"
 4) "1"
 5) "1"
 6) "2"
 7) "2"
 8) "3"
 9) "4"
10) "1"
# When count = 0, all occurrences of the value are removed
127.0.0.1:6379> LREM list2 0 1
(integer) 6
127.0.0.1:6379> LRANGE list2 0 -1
1) "2"
2) "2"
3) "3"
4) "4"
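The three count cases above can be sketched in one function. A minimal illustration of LREM's scan direction and removal limit on a Python list:

```python
# Minimal sketch of LREM count semantics: count > 0 removes from head to
# tail, count < 0 removes from tail to head, count = 0 removes everything.
def lrem(lst, count, value):
    removed = 0
    if count >= 0:
        limit = count if count > 0 else len(lst)  # 0 means "no limit"
        i = 0
        while i < len(lst) and removed < limit:
            if lst[i] == value:
                del lst[i]                        # scan head to tail
                removed += 1
            else:
                i += 1
    else:
        i = len(lst) - 1
        while i >= 0 and removed < -count:
            if lst[i] == value:
                del lst[i]                        # scan tail to head
                removed += 1
            i -= 1
    return removed

lst = ["1", "1", "1", "2", "2", "3", "4"]
print(lrem(lst, 2, "1"))    # 2
print(lst)                  # ['1', '2', '2', '3', '4']
```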

ltrim: trim a list to the given range, removing everything outside it
127.0.0.1:6379> LRANGE list3 0 -1
1) "1"
2) "2"
3) "3"
4) "4"
5) "5"
6) "6"
7) "7"
8) "8"
9) "9"
# Keep only the elements from the third element through the last one; everything else is removed
# 127.0.0.1:6379> LTRIM key start stop
127.0.0.1:6379> LTRIM list3 2 -1
OK
127.0.0.1:6379> LRANGE list3 0 -1
1) "3"
2) "4"
3) "5"
4) "6"
5) "7"
6) "8"
7) "9"
# If the range has no overlap with the list, the list is emptied
127.0.0.1:6379> LRANGE list3 0 -1
1) "3"
2) "4"
3) "5"
4) "6"
5) "7"
6) "8"
7) "9"
127.0.0.1:6379> LTRIM list3 4 -5
OK
127.0.0.1:6379> LRANGE list3 0 -1
(empty list or set)
# Keep the 1st through 5th elements, in order
127.0.0.1:6379> LRANGE list3 0 -1
1) "1"
2) "2"
3) "3"
4) "4"
5) "5"
6) "6"
7) "7"
8) "8"
9) "9"
127.0.0.1:6379> LTRIM list3 0 4
OK
127.0.0.1:6379> LRANGE list3 0 -1
1) "1"
2) "2"
3) "3"
4) "4"
5) "5"

rpoplpush: remove the last element of one list, push it onto the head of another, and return it
127.0.0.1:6379> LRANGE list 0 -1
1) "4"
2) "5"
3) "6"
127.0.0.1:6379> LRANGE list1 0 -1
1) "2"
2) "1"
3) "0"
4) "4"
5) "5"
# Remove list's last element 6, push it onto the head of list1, and return 6
# 127.0.0.1:6379> RPOPLPUSH source destination
127.0.0.1:6379> RPOPLPUSH list list1
"6"
127.0.0.1:6379> LRANGE list 0 -1
1) "4"
2) "5"
127.0.0.1:6379> LRANGE list1 0 -1
1) "6"
2) "2"
3) "1"
4) "0"
5) "4"
6) "5"
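The move above can be sketched in a few lines. A minimal illustration on Python lists (head of the list is index 0, as in the transcripts):

```python
# Minimal sketch of RPOPLPUSH: pop the last element of the source list,
# push it onto the head of the destination, and return it.
def rpoplpush(source, destination):
    if not source:
        return None                  # an empty source moves nothing
    value = source.pop()             # remove from the tail of source
    destination.insert(0, value)     # insert at the head of destination
    return value

lst = ["4", "5", "6"]
lst1 = ["2", "1"]
print(rpoplpush(lst, lst1))          # 6
print(lst, lst1)                     # ['4', '5'] ['6', '2', '1']
```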

lset: set a list element's value by index
127.0.0.1:6379> LRANGE list1 0 -1
1) "6"
2) "2"
3) "1"
4) "0"
5) "4"
6) "5"
# Set list1's second element to x
# 127.0.0.1:6379> LSET key index value
127.0.0.1:6379> LSET list1 1 x
OK
127.0.0.1:6379> LRANGE list1 0 -1
1) "6"
2) "x"
3) "1"
4) "0"
5) "4"
6) "5"
# Errors if the list is empty or the key does not exist
127.0.0.1:6379> LSET list111 0 1
(error) ERR no such key
# Errors if the index is out of range
127.0.0.1:6379> LSET list1 100 1
(error) ERR index out of range

linsert: insert an element before or after a given list element
127.0.0.1:6379> LRANGE list1 0 -1
1) "6"
2) "x"
3) "1"
4) "0"
5) "4"
6) "5"
# Insert the element before in front of list1's element x
# 127.0.0.1:6379> LINSERT key BEFORE|AFTER pivot value
127.0.0.1:6379> LINSERT list1 before x before
(integer) 7
127.0.0.1:6379> LRANGE list1 0 -1
1) "6"
2) "before"
3) "x"
4) "1"
5) "0"
6) "4"
7) "5"
# Insert the element after behind list1's element x
127.0.0.1:6379> LINSERT list1 after x after
(integer) 8
127.0.0.1:6379> LRANGE list1 0 -1
1) "6"
2) "before"
3) "x"
4) "after"
5) "1"
6) "0"
7) "4"
8) "5"

Redis Set commands
sadd: add one or more members to a set
# Duplicate values are ignored; if the key does not exist, it is created and the members added
# 127.0.0.1:6379> SADD key member [member ...]
127.0.0.1:6379> SADD set1 0 1 2 3 4 5 5 5 6 3
(integer) 7
127.0.0.1:6379> SMEMBERS set1
1) "0"
2) "1"
3) "2"
4) "3"
5) "4"
6) "5"
7) "6"
# With no member argument, an arity error is returned (and a WRONGTYPE error if the key's value is not a set)
127.0.0.1:6379> SADD k1
(error) ERR wrong number of arguments for 'sadd' command

smembers: get all members of a set
# Get all of the set's members
# 127.0.0.1:6379> SMEMBERS key
127.0.0.1:6379> SMEMBERS set1
1) "0"
2) "1"
3) "2"
4) "3"
5) "4"
6) "5"
7) "6"
# If the key does not exist, an empty set is returned
127.0.0.1:6379> SMEMBERS set2
(empty list or set)

scard: get the number of members in a set
# Get the number of members in the set
# 127.0.0.1:6379> SCARD key
127.0.0.1:6379> SCARD set1
(integer) 7
# A nonexistent key returns 0
127.0.0.1:6379> SCARD set2
(integer) 0

srem: remove one or more members from a set
# Remove the members 0, 2, 4, 6 from the set
# 127.0.0.1:6379> SREM key member [member ...]
127.0.0.1:6379> SREM set1 0 2 4 6
(integer) 4
127.0.0.1:6379> SMEMBERS set1
1) "1"
2) "3"
3) "5"
# Nonexistent members are ignored
127.0.0.1:6379> SREM set1 -1 9
(integer) 0

srandmember: return one or more random members of a set
# If count > 0, return count random distinct members
# 127.0.0.1:6379> SRANDMEMBER key [count]
127.0.0.1:6379> SRANDMEMBER set1
"3"
127.0.0.1:6379> SRANDMEMBER set1 2
1) "3"
2) "1"
# If count > 0 and exceeds the set's size, the whole set is returned
127.0.0.1:6379> SRANDMEMBER set1 3
1) "1"
2) "3"
3) "5"
# If count < 0, an array of |count| members is returned, possibly with repeats
127.0.0.1:6379> SRANDMEMBER set1 -3
1) "1"
2) "1"
3) "5"
# If count = 0, an empty set is returned
127.0.0.1:6379> SRANDMEMBER set1 0
(empty list or set)
# If the key does not exist, nil is returned
127.0.0.1:6379> SRANDMEMBER set222
(nil)

spop: remove and return one or more random members of a set
# Randomly pop members
# 127.0.0.1:6379> SPOP key [count]
127.0.0.1:6379> SPOP set1
"5"
127.0.0.1:6379> SPOP set1 2
1) "6"
2) "0"
127.0.0.1:6379> SMEMBERS set1
1) "1"
2) "2"
3) "3"
4) "4"
5) "7"
6) "8"
7) "9"
# count < 0 is an error
127.0.0.1:6379> SPOP set1 -2
(error) ERR index out of range
# count = 0 returns an empty set
127.0.0.1:6379> SPOP set1 0
(empty list or set)
# A nonexistent key returns nil
127.0.0.1:6379> SPOP set1111
(nil)

smove: move a given member from a source set to a destination set
127.0.0.1:6379> SMEMBERS set1
1) "1"
2) "2"
3) "3"
4) "4"
5) "5"
6) "6"
127.0.0.1:6379> SMEMBERS set2
1) "b"
2) "a"
3) "e"
4) "f"
5) "c"
6) "d"
# Move member 1 from set1 to set2
# 127.0.0.1:6379> SMOVE source destination member
127.0.0.1:6379> SMOVE set1 set2 1 
(integer) 1
127.0.0.1:6379> SMEMBERS set1
1) "2"
2) "3"
3) "4"
4) "5"
5) "6"
127.0.0.1:6379> SMEMBERS set2
1) "e"
2) "f"
3) "a"
4) "b"
5) "c"
6) "d"
7) "1"
# A nonexistent member is ignored
127.0.0.1:6379> SMOVE set1 set2 1
(integer) 0
# If the destination set does not exist, it is created and the member added
127.0.0.1:6379> SMOVE set1 set3 2
(integer) 1
127.0.0.1:6379> SMEMBERS set3
1) "2"
# If the source set does not exist, the command does nothing
127.0.0.1:6379> SMOVE set22 set3 1
(integer) 0

sdiff: set difference
# Difference of several sets (members of the first set that are in none of the others)
# 127.0.0.1:6379> SDIFF key [key ...]
127.0.0.1:6379> SADD set1 1 2 3 4 5
(integer) 5
127.0.0.1:6379> SADD set2 3 4 5 6
(integer) 4
127.0.0.1:6379> SADD set3 s b d 3 4
(integer) 5
127.0.0.1:6379> SDIFF set1 set2 set3
1) "1"
2) "2"
# Nonexistent keys are treated as empty sets
127.0.0.1:6379> SDIFF set1 set22
1) "1"
2) "2"
3) "3"
4) "4"
5) "5"
127.0.0.1:6379> SDIFF set22 set1
(empty list or set)

sinter: set intersection
# Same sets as above; nonexistent keys are treated as empty sets
# 127.0.0.1:6379> SINTER key [key ...]
127.0.0.1:6379> SINTER set1 set2 set3
1) "3"
2) "4"

sunion: set union
# Same sets as above; nonexistent keys are treated as empty sets
# 127.0.0.1:6379> SUNION key [key ...]
127.0.0.1:6379> SUNION set1 set2 set3
1) "4"
2) "b"
3) "6"
4) "2"
5) "3"
6) "1"
7) "s"
8) "5"
9) "d"

Redis Hash commands
hset: set a field's value in a hash
# If hash1 does not exist it is created; an existing field is overwritten
# 127.0.0.1:6379> HSET key field value
127.0.0.1:6379> HSET hash1 f1 v1
(integer) 1

hget: get the value of a given field in a hash
# Get the value of f1 in hash1
# 127.0.0.1:6379> HGET key field
127.0.0.1:6379> HGET hash1 f1
"v1"
# If the field does not exist, nil is returned
127.0.0.1:6379> HGET hash1 f2
(nil)
# If the key does not exist, nil is returned as well
127.0.0.1:6379> HGET hash2 f2
(nil)

hmset: set multiple field-value pairs in a hash at once
# Set several pairs at once; duplicate fields are overwritten
# 127.0.0.1:6379> HMSET key field value [field value ...]
127.0.0.1:6379> HMSET hash1 f2 v2 f3 v3 f1 v1
OK
# If the key does not exist, a new hash is created and saved
127.0.0.1:6379> HMSET hash2 f2 v2 f3 v3 f1 v1
OK

hmget: get the values of multiple fields in a hash at once
# Get several values at once; a nonexistent field yields nil
# 127.0.0.1:6379> HMGET key field [field ...]
127.0.0.1:6379> HMGET hash1 f1 f2 f3 f4
1) "v1"
2) "v2"
3) "v3"
4) (nil)
# If the key does not exist, every result is nil
127.0.0.1:6379> HMGET hash3 f1 f2 f3 f4
1) (nil)
2) (nil)
3) (nil)
4) (nil)

hgetall: get all field-value pairs in a hash
# Return all fields and values
# 127.0.0.1:6379> HGETALL key
127.0.0.1:6379> HGETALL hash1
1) "f1"
2) "v1"
3) "f2"
4) "v2"
5) "f3"
6) "v3"
# If the key does not exist, an empty hash is returned
127.0.0.1:6379> HGETALL hash3
(empty list or set)

hdel: delete one or more field-value pairs from a hash
# Delete the f1 and f2 pairs from hash2; f5 does not exist, so it is ignored
# 127.0.0.1:6379> HDEL key field [field ...]
127.0.0.1:6379> HDEL hash2 f1 f2 f5
(integer) 2
127.0.0.1:6379> HGETALL hash2
1) "f3"
2) "v3"
# A nonexistent key is ignored as well
127.0.0.1:6379> HDEL hash3 f1
(integer) 0

hlen: get the number of fields in a hash
# hash1 holds three field-value pairs, so hlen is 3
# 127.0.0.1:6379> HLEN key
127.0.0.1:6379> HLEN hash1
(integer) 3

hexists: check whether a given field exists in a hash
# Returns 1 if the field exists and 0 otherwise; a nonexistent key or field both return 0
# 127.0.0.1:6379> HEXISTS key field
127.0.0.1:6379> HEXISTS hash1 f1
(integer) 1
127.0.0.1:6379> HEXISTS hash1 f6
(integer) 0
127.0.0.1:6379> HEXISTS hash3 f6
(integer) 0

hkeys: get all fields in a hash
# Return all of the hash's fields; if the key does not exist, an empty hash is returned
# 127.0.0.1:6379> HKEYS key
127.0.0.1:6379> HKEYS hash1
1) "f1"
2) "f2"
3) "f3"
127.0.0.1:6379> HKEYS hash3
(empty list or set)

hvals: get all values in a hash
# Return all of the hash's values; if the key does not exist, an empty hash is returned
# 127.0.0.1:6379> HVALS key
127.0.0.1:6379> HVALS hash1
1) "v1"
2) "v2"
3) "v3"
127.0.0.1:6379> HVALS hash3
(empty list or set)

hincrby: increment a given hash field by an integer
# Add to fnum in hash1; the increment may be a positive or negative integer
# 127.0.0.1:6379> HINCRBY key field increment
127.0.0.1:6379> HGET hash1 fnum
"10"
127.0.0.1:6379> HINCRBY hash1 fnum 1
(integer) 11
127.0.0.1:6379> HINCRBY hash1 fnum -2
(integer) 9
# If the field or the key does not exist, it is created and set
127.0.0.1:6379> HINCRBY hash1 fnum1 11
(integer) 11
127.0.0.1:6379> HINCRBY hash11 fnum1 11
(integer) 11

hincrbyfloat: increment a given hash field by a float
# Add to fnum in hash1; the increment may be a float; otherwise the same as hincrby
# 127.0.0.1:6379> HINCRBYFLOAT key field increment
127.0.0.1:6379> HGET hash1 fnum
"10"
127.0.0.1:6379> HINCRBYFLOAT hash1 fnum 1.1
"11.1"
127.0.0.1:6379> HINCRBYFLOAT hash1 fnum -2
"9.1"

hsetnx: set a hash field only if it does not exist
# The value is set only when the field does not exist; a nonexistent key is created and set
# 127.0.0.1:6379> HSETNX key field value
127.0.0.1:6379> HSETNX hash1 f1 vvv
(integer) 0
127.0.0.1:6379> HSETNX hash1 f11 vvv
(integer) 1
127.0.0.1:6379> HSETNX hash3 f11 vvv
(integer) 1

Redis ZSet commands
zadd: add one or more members with their scores to a zset
# Add several score/member pairs to z1; if a member already exists, its score is updated
# 127.0.0.1:6379> ZADD key [NX|XX] [CH] [INCR] score member [score member ...]
127.0.0.1:6379> ZADD z1 60 v1 70 v2 80 v3 90 v4 100 v5
(integer) 5

zrange: get the members in an index range of a zset
# Get the members in the given index range
# 127.0.0.1:6379> ZRANGE key start stop [WITHSCORES]
127.0.0.1:6379> ZRANGE z1 0 2
1) "v1"
2) "v2"
3) "v3"
# With the withscores option, the scores are returned as well
127.0.0.1:6379> ZRANGE z1 0 -1 withscores
 1) "v1"
 2) "60"
 3) "v2"
 4) "70"
 5) "v3"
 6) "80"
 7) "v4"
 8) "90"
 9) "v5"
10) "100"

zrangebyscore: get zset members within a score range
# Get the members with scores between 60 and 80, ordered by score ascending
# 127.0.0.1:6379> ZRANGEBYSCORE key min max [WITHSCORES] [LIMIT offset count]
127.0.0.1:6379> ZRANGEBYSCORE z1 60 80 
1) "v1"
2) "v2"
3) "v3"
# A '(' prefix makes a bound exclusive; a bare number is inclusive (the default)
127.0.0.1:6379> ZRANGEBYSCORE z1 60 (80 
1) "v1"
2) "v2"
# withscores returns the scores as well
127.0.0.1:6379> ZRANGEBYSCORE z1 60 (80 withscores
1) "v1"
2) "60"
3) "v2"
4) "70"
# limit 0 1 takes 1 element starting at offset 0 of the result, similar to MySQL's LIMIT
127.0.0.1:6379> ZRANGEBYSCORE z1 60 (80 withscores limit 0 1
1) "v1"
2) "60"
127.0.0.1:6379> ZRANGEBYSCORE z1 60 (80 withscores limit 1 1
1) "v2"
2) "70"
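The inclusive/exclusive bounds and LIMIT pagination above can be sketched as follows. A minimal illustration, with a plain dict of member-to-score standing in for the zset:

```python
# Minimal sketch of ZRANGEBYSCORE bounds: a bare number is inclusive,
# a '(' prefix makes the bound exclusive, and LIMIT offset count paginates.
def parse_bound(bound):
    text = str(bound)
    if text.startswith("("):
        return float(text[1:]), True   # exclusive bound
    return float(text), False          # inclusive bound (the default)

def zrangebyscore(zset, minimum, maximum, offset=0, count=None):
    lo, lo_excl = parse_bound(minimum)
    hi, hi_excl = parse_bound(maximum)
    hits = [m for m, s in sorted(zset.items(), key=lambda kv: kv[1])
            if (s > lo if lo_excl else s >= lo)
            and (s < hi if hi_excl else s <= hi)]
    return hits[offset:offset + count] if count is not None else hits[offset:]

z = {"v1": 60, "v2": 70, "v3": 80, "v4": 90, "v5": 100}
print(zrangebyscore(z, 60, 80))        # ['v1', 'v2', 'v3']
print(zrangebyscore(z, 60, "(80"))     # ['v1', 'v2']
```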

zrem: remove one or more members from a zset
# Remove v1 from z2; a nonexistent member or key is ignored
# 127.0.0.1:6379> ZREM key member [member ...]
127.0.0.1:6379> ZREM z2 v1
(integer) 1
127.0.0.1:6379> ZREM z2 v9
(integer) 0
127.0.0.1:6379> ZREM z8 v9
(integer) 0

zcard: get the number of members in a zset
# z1 has 5 members; z8 does not exist
# 127.0.0.1:6379> ZCARD key
127.0.0.1:6379> ZCARD z1
(integer) 5
127.0.0.1:6379> ZCARD z8
(integer) 0
# Errors if the value is not a zset
127.0.0.1:6379> ZCARD k1
(error) WRONGTYPE Operation against a key holding the wrong kind of value

zcount: count zset members within a score range
# Count the members with 60 <= score < 80
# 127.0.0.1:6379> ZCOUNT key min max
127.0.0.1:6379> ZCOUNT z1 60 (80
(integer) 2

zrank: return a given member's ascending rank in a zset
127.0.0.1:6379> ZRANGE z1 0 -1 withscores
 1) "v1"
 2) "60"
 3) "v2"
 4) "70"
 5) "v3"
 6) "80"
 7) "v4"
 8) "90"
 9) "v5"
10) "100"
# Get a member's ascending rank: v1 is rank 0, v2 rank 1, v4 rank 3
# 127.0.0.1:6379> ZRANK key member
127.0.0.1:6379> ZRANK z1 v2
(integer) 1
127.0.0.1:6379> ZRANK z1 v4
(integer) 3
# If the member or the key does not exist, nil is returned
127.0.0.1:6379> ZRANK z1 v9
(nil)
127.0.0.1:6379> ZRANK z10 v9
(nil)

zscore: get a member's score in a zset
# Get v3's score in z1; if the member or key does not exist, nil is returned
# 127.0.0.1:6379> ZSCORE key member
127.0.0.1:6379> ZSCORE z1 v3
"80"
127.0.0.1:6379> ZSCORE z1 v8
(nil)
127.0.0.1:6379> ZSCORE z10 v8
(nil)

zrevrange: get the members in an index range of a zset, by descending score (cf. zrange)
# Scores descending; otherwise the same as zrange
# 127.0.0.1:6379> ZREVRANGE key start stop [WITHSCORES]
127.0.0.1:6379> ZREVRANGE z1 0 -1
1) "v5"
2) "v4"
3) "v3"
4) "v2"
5) "v1"
127.0.0.1:6379> ZREVRANGE z1 0 -1 withscores
 1) "v5"
 2) "100"
 3) "v4"
 4) "90"
 5) "v3"
 6) "80"
 7) "v2"
 8) "70"
 9) "v1"
10) "60"

zrevrank: return a given member's descending rank in a zset (cf. zrank)
127.0.0.1:6379> ZRANGE z1 0 -1
1) "v1"
2) "v2"
3) "v3"
4) "v4"
5) "v5"
# Get a member's descending rank: v5 is rank 0, v2 rank 3
# 127.0.0.1:6379> ZREVRANK key member
127.0.0.1:6379> ZREVRANK z1 v2
(integer) 3

zrevrangebyscore: return zset members within a score range, by descending score (cf. zrangebyscore)
# Get the members with 60 <= score < 80 in descending order; otherwise the same as zrangebyscore
# 127.0.0.1:6379> ZREVRANGEBYSCORE key max min [WITHSCORES] [LIMIT offset count]
127.0.0.1:6379> ZREVRANGEBYSCORE z1 (80 60
1) "v2"
2) "v1"

Redis connection commands
select: switch to a given database
# Switch to database 1
# 127.0.0.1:6379> SELECT index
127.0.0.1:6379> SELECT 1
OK
127.0.0.1:6379[1]>

ping: test the connection to the redis server
# 127.0.0.1:6379[1]> PING [message]
127.0.0.1:6379[1]> ping
PONG
# If 111 is passed as an argument, 111 is echoed back
127.0.0.1:6379[1]> PING 111
"111"

Redis server commands
dbsize: get the number of keys in the current database
# 127.0.0.1:6379[1]> DBSIZE
127.0.0.1:6379[1]> DBSIZE
(integer) 1

info: return assorted information and statistics about the Redis server
# A section argument such as replication restricts INFO to part of the output
# 127.0.0.1:6379[1]> INFO [section]
127.0.0.1:6379> INFO replication
# Replication
role:master
connected_slaves:0
master_replid:d2e247ba3ffab4fea530cdff13d038f0bb7302d3
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:0
second_repl_offset:-1
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0


# Show all server information
127.0.0.1:6379> INFO
# Server	# general Redis server info
redis_version:5.0.7	# Redis server version
redis_git_sha1:00000000	# Git SHA1
redis_git_dirty:0	# Git dirty flag
redis_build_id:c390211643a1c73e
redis_mode:standalone
os:Linux 3.10.0-862.el7.x86_64 x86_64	# host operating system
arch_bits:64	# architecture (32 or 64 bit)
multiplexing_api:epoll	# event-handling mechanism used by Redis
atomicvar_api:atomic-builtin
gcc_version:4.8.5	# GCC version used to compile Redis
process_id:1519	# PID of the server process
run_id:f4499b9a69aa039fa346062a8b190f81482a86c7 # random identifier of this Redis server (used by Sentinel and Cluster)
tcp_port:6379	# TCP/IP listening port
uptime_in_seconds:10979	# seconds since the server started
uptime_in_days:0	# days since the server started
hz:10
configured_hz:10
lru_clock:3323307	# clock incrementing every minute, used for LRU management
executable:/opt/redis-5.0.7/src/redis-server
config_file:/opt/redis-5.0.7/redis.conf

# Clients	# Connected clients information
connected_clients:1		# Number of client connections (excluding connections from replicas)
client_recent_max_input_buffer:2	# Largest input buffer among current client connections
client_recent_max_output_buffer:0	# Longest output list among current client connections
blocked_clients:0	# Number of clients waiting on a blocking command (BLPOP, BRPOP, BRPOPLPUSH)

# Memory		# Memory information
used_memory:854888		# Total number of bytes allocated by the Redis allocator
used_memory_human:834.85K	# The same total in human-readable form
used_memory_rss:14274560	# Memory allocated as seen by the operating system (the resident set size); matches the output of top and ps
used_memory_rss_human:13.61M	
used_memory_peak:874992		# Peak memory consumption (in bytes)
used_memory_peak_human:854.48K	# Peak memory consumption in human-readable form
used_memory_peak_perc:97.70%
used_memory_overhead:841206
used_memory_startup:791400
used_memory_dataset:13682
used_memory_dataset_perc:21.55%
allocator_allocated:1078752
allocator_active:1331200
allocator_resident:4038656
total_system_memory:1910050816
total_system_memory_human:1.78G
used_memory_lua:37888	# Number of bytes used by the Lua engine
used_memory_lua_human:37.00K
used_memory_scripts:0
used_memory_scripts_human:0B
number_of_cached_scripts:0
maxmemory:0
maxmemory_human:0B
maxmemory_policy:noeviction
allocator_frag_ratio:1.23
allocator_frag_bytes:252448
allocator_rss_ratio:3.03
allocator_rss_bytes:2707456
rss_overhead_ratio:3.53
rss_overhead_bytes:10235904
mem_fragmentation_ratio:17.57	# Ratio between used_memory_rss and used_memory
mem_fragmentation_bytes:13461920
mem_not_counted_for_evict:0
mem_replication_backlog:0
mem_clients_slaves:0
mem_clients_normal:49694
mem_aof_buffer:0
mem_allocator:jemalloc-5.1.0	# Memory allocator chosen at compile time: libc, jemalloc or tcmalloc
active_defrag_running:0
lazyfree_pending_objects:0

# Persistence	# RDB and AOF related information
loading:0
rdb_changes_since_last_save:0
rdb_bgsave_in_progress:0
rdb_last_save_time:1580376361
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:0
rdb_current_bgsave_time_sec:-1
rdb_last_cow_size:4304896
aof_enabled:0
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_last_write_status:ok
aof_last_cow_size:0

# Stats		# General statistics
total_connections_received:4
total_commands_processed:39
instantaneous_ops_per_sec:0
total_net_input_bytes:1157
total_net_output_bytes:47010
instantaneous_input_kbps:0.00
instantaneous_output_kbps:0.00
rejected_connections:0
sync_full:0
sync_partial_ok:0
sync_partial_err:0
expired_keys:0
expired_stale_perc:0.00
expired_time_cap_reached_count:0
evicted_keys:0
keyspace_hits:1
keyspace_misses:0
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:528
migrate_cached_sockets:0
slave_expires_tracked_keys:0
active_defrag_hits:0
active_defrag_misses:0
active_defrag_key_hits:0
active_defrag_key_misses:0

# Replication		# Master/replica replication information
role:master
connected_slaves:0
master_replid:d2e247ba3ffab4fea530cdff13d038f0bb7302d3
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:0
second_repl_offset:-1
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0

# CPU		# CPU usage statistics
used_cpu_sys:4.949190
used_cpu_user:3.907255
used_cpu_sys_children:0.015324
used_cpu_user_children:0.000000

# Cluster		# Redis Cluster information
cluster_enabled:0

# Keyspace		# Per-database statistics
db0:keys=2,expires=0,avg_ttl=0
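
INFO output is easy to consume programmatically: each "# Section" header starts a group, followed by key:value lines. A minimal parser sketch (illustrative, not an official client API):

```python
def parse_info(text):
    """Parse redis INFO output ('# Section' headers, 'key:value' lines) into nested dicts."""
    sections, current = {}, None
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith("#"):
            current = line[1:].strip()   # a new section begins
            sections[current] = {}
        elif ":" in line and current is not None:
            key, _, value = line.partition(":")  # split only on the first colon
            sections[current][key] = value
    return sections

info = parse_info("# Replication\nrole:master\nconnected_slaves:0\n")
print(info["Replication"]["role"])  # master
```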

slaveof: turn the current server into a replica (slave server) of the specified server
# Make the server at IP 127.0.0.1, port 6379 the master
# If this server is already a replica of some master server, SLAVEOF host port makes it stop
# syncing with the old master, discard the old dataset, and start syncing with the new master
# 127.0.0.1:6379> SLAVEOF host port
127.0.0.1:6379> SLAVEOF 127.0.0.1 6379
OK

# Running SLAVEOF NO ONE on a replica turns replication off and promotes it back to a master;
# the dataset it has already synchronized is not discarded
127.0.0.1:6379> SLAVEOF no one
OK

save: synchronously save the dataset to disk
# Saves a snapshot of all data in the current Redis instance to disk as an RDB file
127.0.0.1:6379> SAVE
OK

bgsave: asynchronously save the dataset to disk
# BGSAVE returns OK immediately, then Redis forks a child process; the parent process keeps serving client requests while the child saves the data to disk and exits
127.0.0.1:6379> BGSAVE
OK

flushdb: delete all keys in the current database
# Flushes only the currently selected database
# 127.0.0.1:6379[1]> FLUSHDB [ASYNC]
127.0.0.1:6379[1]> FLUSHDB
OK

flushall: delete all keys in all databases
# Flushes every database
# 127.0.0.1:6379[1]> FLUSHALL [ASYNC]
127.0.0.1:6379[1]> FLUSHALL
OK

shutdown: disconnect the client and shut down the Redis server
# SAVE persists the data before shutting down, NOSAVE does not; the default is SAVE
# 127.0.0.1:6379[1]> SHUTDOWN [NOSAVE|SAVE]
127.0.0.1:6379[1]> SHUTDOWN
# The connection is now closed and redis-server has stopped; type exit to leave the client
not connected> exit
[helin@localhost redis-5.0.7]$

config get: read the value of a configuration parameter
# Get the access password
# 127.0.0.1:6379> CONFIG GET parameter
127.0.0.1:6379> config get requirepass
1) "requirepass"
2) ""
# Get the directory Redis was started from
127.0.0.1:6379> CONFIG GET dir
1) "dir"
2) "/root"


config set: change a Redis configuration parameter (here, setting a password) without restarting
# Set the access password
# 127.0.0.1:6379> CONFIG SET parameter value
127.0.0.1:6379> config set requirepass 123456
OK
# Once a password is set, every connection must authenticate before running other commands
127.0.0.1:6379> auth 123456
OK

Redis transaction commands
multi: mark the start of a transaction block
# The commands inside the block are queued in order,
# then executed atomically by the EXEC command.
127.0.0.1:6379> MULTI
OK

exec: execute all commands in the transaction block
# Returns the replies of all queued commands in execution order; returns nil if the transaction was aborted
127.0.0.1:6379> MULTI
OK
127.0.0.1:6379> INCR user_id
QUEUED
127.0.0.1:6379> INCR user_id
QUEUED
127.0.0.1:6379> INCR user_id
QUEUED
127.0.0.1:6379> PING
QUEUED
127.0.0.1:6379> EXEC
1) (integer) 1
2) (integer) 2
3) (integer) 3
4) PONG

watch: watch one (or more) keys
# If any watched key is modified by another command before the transaction executes,
# the transaction is aborted
# 127.0.0.1:6379> WATCH key [key ...]
127.0.0.1:6379> WATCH z1
OK
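
The interaction of WATCH, MULTI and EXEC is optimistic locking: EXEC only runs the queued commands if no watched key changed in the meantime. A toy in-memory model of that check (illustrative only, not the real Redis protocol):

```python
class TinyTxStore:
    """Toy model of WATCH/MULTI/EXEC optimistic locking."""
    def __init__(self):
        self.data, self.watched, self.queue = {}, {}, []

    def watch(self, key):
        self.watched[key] = self.data.get(key)  # remember the value seen at WATCH time

    def queue_set(self, key, value):
        self.queue.append((key, value))         # MULTI: commands are queued, not run

    def exec(self):
        # EXEC aborts (returns None, like nil) if any watched key changed.
        if any(self.data.get(k) != v for k, v in self.watched.items()):
            self.queue.clear(); self.watched.clear()
            return None
        for k, v in self.queue:
            self.data[k] = v
        self.queue.clear(); self.watched.clear()
        return "OK"

store = TinyTxStore()
store.data["z1"] = 1
store.watch("z1")
store.queue_set("z1", 2)
store.data["z1"] = 99   # another client modifies the watched key
print(store.exec())     # None -> transaction aborted, like EXEC returning (nil)
```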

unwatch: cancel the WATCH on all keys
# Removes the watches that WATCH placed on every key
127.0.0.1:6379> UNWATCH
OK

discard: abort the transaction
# Discards all commands queued in the transaction block
127.0.0.1:6379> DISCARD
OK

Redis publish/subscribe
publish: post a message to the specified channel
# Publish "hello world" to mychannel
# 127.0.0.1:6379> PUBLISH channel message
127.0.0.1:6379> PUBLISH mychannel "hello world"
(integer) 1

subscribe: listen for messages on one or more channels
# Subscribe to mychannel
# 127.0.0.1:6379> SUBSCRIBE channel [channel ...]
127.0.0.1:6379> SUBSCRIBE mychannel 
Reading messages... (press Ctrl-C to quit)
1) "subscribe"
2) "mychannel"
3) (integer) 1
1) "message"
2) "mychannel"
3) "a"

pubsub: inspect the state of the pub/sub system
# An introspection command made up of several differently formatted subcommands
# 127.0.0.1:6379> PUBSUB <subcommand> [argument [argument ...]]
127.0.0.1:6379> PUBSUB CHANNELS
(empty list or set)

psubscribe: subscribe to all channels matching one or more glob patterns
# Patterns use * as a wildcard: it* matches every channel starting with it
# (it.news, it.blog, it.tweets, and so on), news.* matches every channel
# starting with news. (news.it, news.global.today, and so on), and so forth
# 127.0.0.1:6379> PSUBSCRIBE pattern [pattern ...]
127.0.0.1:6379> PSUBSCRIBE mychannel
Reading messages... (press Ctrl-C to quit)
1) "psubscribe"
2) "mychannel"
3) (integer) 1
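
Python's fnmatch can approximate the glob matching PSUBSCRIBE performs, which is handy for checking which channels a pattern would cover (an approximation for illustration; Redis has its own glob matcher):

```python
from fnmatch import fnmatchcase

# PSUBSCRIBE-style glob matching: '*' matches any run of characters.
patterns = ["it*", "news.*"]
channels = ["it.news", "it.blog", "news.it", "news.global.today", "sports"]

# For each channel, which of the subscribed patterns match it?
matches = {c: [p for p in patterns if fnmatchcase(c, p)] for c in channels}
print(matches["it.news"])   # ['it*']
print(matches["sports"])    # []
```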

unsubscribe: stop listening on one or more channels
# Unsubscribe from mychannel
# 127.0.0.1:6379> UNSUBSCRIBE channel [channel ...]
127.0.0.1:6379> UNSUBSCRIBE mychannel 
1) "unsubscribe"
2) "mychannel"
3) (integer) 0

Redis configuration file

Units
# Note on units: when memory size is needed, it is possible to specify
# it in the usual form of 1k 5GB 4M and so forth:
# 
# 1k => 1000 bytes
# 1kb => 1024 bytes
# 1m => 1000000 bytes
# 1mb => 1024*1024 bytes
# 1g => 1000000000 bytes
# 1gb => 1024*1024*1024 bytes
#
# units are case insensitive so 1GB 1Gb 1gB are all the same.
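
A small sketch of the unit rules above, assuming only the suffixes listed in the config comment (decimal k/m/g versus binary kb/mb/gb, case-insensitive):

```python
def parse_memory(value):
    """Parse redis.conf-style memory sizes: 1k=1000, 1kb=1024, case-insensitive."""
    units = {"k": 1000, "kb": 1024, "m": 1000**2, "mb": 1024**2,
             "g": 1000**3, "gb": 1024**3, "b": 1}
    v = value.strip().lower()
    num = v.rstrip("kmgb")          # strip the unit suffix, keep the digits
    suffix = v[len(num):]
    return int(num) * units.get(suffix, 1)

print(parse_memory("1kb"))   # 1024
print(parse_memory("1k"))    # 1000
```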

Includes
################################## INCLUDES ###################################

# Include one or more other config files here.  This is useful if you
# have a standard template that goes to all Redis servers but also need
# to customize a few per-server settings.  Include files can include
# other files, so use this wisely.

# Notice option "include" won't be rewritten by command "CONFIG REWRITE"
# from admin or Redis Sentinel. Since Redis always uses the last processed
# line as value of a configuration directive, you'd better put includes
# at the beginning of this file to avoid overwriting config change at runtime.
#
# If instead you are interested in using includes to override configuration
# options, it is better to use include as the last line.
#
# include /path/to/local.conf
# include /path/to/other.conf

Network
################################## NETWORK #####################################

# By default, if no "bind" configuration directive is specified, Redis listens
# for connections from all the network interfaces available on the server.
# It is possible to listen to just one or multiple selected interfaces using
# the "bind" configuration directive, followed by one or more IP addresses.
#
# Examples:
#
# bind 192.168.1.100 10.0.0.1
# bind 127.0.0.1 ::1
#
# ~~~ WARNING ~~~ If the computer running Redis is directly exposed to the
# internet, binding to all the interfaces is dangerous and will expose the
# instance to everybody on the internet. So by default we uncomment the
# following bind directive, that will force Redis to listen only into
# the IPv4 loopback interface address (this means Redis will be able to
# accept connections only from clients running into the same computer it
# is running).
#
# IF YOU ARE SURE YOU WANT YOUR INSTANCE TO LISTEN TO ALL THE INTERFACES
# JUST COMMENT THE FOLLOWING LINE.
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# The host address(es) Redis binds to
bind 127.0.0.1

# Protected mode is a layer of security protection, in order to avoid that
# Redis instances left open on the internet are accessed and exploited.
#
# When protected mode is on and if:
#
# 1) The server is not binding explicitly to a set of addresses using the
#    "bind" directive.
# 2) No password is configured.
#
# The server only accepts connections from clients connecting from the
# IPv4 and IPv6 loopback addresses 127.0.0.1 and ::1, and from Unix domain
# sockets.
#
# By default protected mode is enabled. You should disable it only if
# you are sure you want clients from other hosts to connect to Redis
# even if no authentication is configured, nor a specific set of interfaces
# are explicitly listed using the "bind" directive.
protected-mode yes

# Accept connections on the specified port, default is 6379 (IANA #815344).
# If port 0 is specified Redis will not listen on a TCP socket.
# Default port
port 6379

# TCP listen() backlog.
#
# In high requests-per-second environments you need an high backlog in order
# to avoid slow clients connections issues. Note that the Linux kernel
# will silently truncate it to the value of /proc/sys/net/core/somaxconn so
# make sure to raise both the value of somaxconn and tcp_max_syn_backlog
# in order to get the desired effect.
#
# tcp-backlog is a connection queue: its total length = the not-yet-completed
# three-way-handshake queue plus the completed three-way-handshake queue.
# In high-concurrency environments you need a large backlog to avoid slow-client
# connection problems. Note that the Linux kernel silently truncates this value to
# /proc/sys/net/core/somaxconn, so raise both somaxconn and tcp_max_syn_backlog
# to get the desired effect
tcp-backlog 511
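
Because the kernel caps the listen() backlog at net.core.somaxconn, the effective queue length is simply the smaller of the two values:

```python
def effective_backlog(tcp_backlog, somaxconn):
    """The kernel silently truncates the requested listen() backlog to somaxconn."""
    return min(tcp_backlog, somaxconn)

# Default redis tcp-backlog 511 against an untuned kernel (somaxconn=128):
print(effective_backlog(511, 128))  # 128
```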

# Unix socket.
#
# Specify the path for the Unix socket that will be used to listen for
# incoming connections. There is no default, so Redis will not listen
# on a unix socket when not specified.
#
# unixsocket /tmp/redis.sock
# unixsocketperm 700

# Close the connection after a client is idle for N seconds (0 to disable)
# Number of seconds a client may be idle before the connection is closed; 0 disables this feature
timeout 0

# TCP keepalive.
#
# If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence
# of communication. This is useful for two reasons:
#
# 1) Detect dead peers.
# 2) Take the connection alive from the point of view of network
#    equipment in the middle.
#
# On Linux, the specified value (in seconds) is the period used to send ACKs.
# Note that to close the connection the double of the time is needed.
# On other kernels the period depends on the kernel configuration.
#
# A reasonable value for this option is 300 seconds, which is the new
# Redis default starting with Redis 3.2.1.
# In seconds; 0 disables keepalive probing. A value of 60 is a common recommendation
tcp-keepalive 300

General
################################# GENERAL #####################################

# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
daemonize yes

# If you run Redis from upstart or systemd, Redis can interact with your
# supervision tree. Options:
#   supervised no      - no supervision interaction
#   supervised upstart - signal upstart by putting Redis into SIGSTOP mode
#   supervised systemd - signal systemd by writing READY=1 to $NOTIFY_SOCKET
#   supervised auto    - detect upstart or systemd method based on
#                        UPSTART_JOB or NOTIFY_SOCKET environment variables
# Note: these supervision methods only signal "process is ready."
#       They do not enable continuous liveness pings back to your supervisor.
supervised no

# If a pid file is specified, Redis writes it where specified at startup
# and removes it at exit.
#
# When the server runs non daemonized, no pid file is created if none is
# specified in the configuration. When the server is daemonized, the pid file
# is used even if not specified, defaulting to "/var/run/redis.pid".
#
# Creating a pid file is best effort: if Redis is not able to create it
# nothing bad happens, the server will start and run normally.
# When running as a daemon, Redis writes its pid to /var/run/redis.pid by default; pidfile overrides the location
pidfile /var/run/redis_6379.pid

# Specify the server verbosity level.
# This can be one of:
# debug (a lot of information, useful for development/testing)
# verbose (many rarely useful info, but not a mess like the debug level)
# notice (moderately verbose, what you want in production probably)
# warning (only very important / critical messages are logged)
# Log levels (debug, verbose, notice, warning)
loglevel notice

# Specify the log file name. Also the empty string can be used to force
# Redis to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
# Log destination; the default (empty string) is standard output. If Redis runs as a
# daemon while logging to standard output, logs are sent to /dev/null
logfile ""

# To enable logging to the system logger, just set 'syslog-enabled' to yes,
# and optionally update the other syslog parameters to suit your needs.
# Whether to send logs to syslog
# syslog-enabled no

# Specify the syslog identity.
# Identity string used in syslog messages
# syslog-ident redis

# Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7.
# Syslog facility: must be USER or LOCAL0-LOCAL7
# syslog-facility local0

# Set the number of databases. The default database is DB 0, you can select
# a different one on a per-connection basis using SELECT <dbid> where
# dbid is a number between 0 and 'databases'-1
# Number of databases (default 16)
databases 16

# By default Redis shows an ASCII art logo only when started to log to the
# standard output and if the standard output is a TTY. Basically this means
# that normally a logo is displayed only in interactive sessions.
#
# However it is possible to force the pre-4.0 behavior and always show a
# ASCII art logo in startup logs by setting the following option to yes.
# Show the ASCII logo by default
always-show-logo yes

Snapshotting
################################ SNAPSHOTTING  ################################
#
# Save the DB on disk:
#
#   save <seconds> <changes>
#
#   Will save the DB if both the given number of seconds and the given
#   number of write operations against the DB occurred.
#
#   In the example below the behaviour will be to save:
#   after 900 sec (15 min) if at least 1 key changed
#   after 300 sec (5 min) if at least 10 keys changed
#   after 60 sec if at least 10000 keys changed
#
#   Note: you can disable saving completely by commenting out all "save" lines.
#
#   It is also possible to remove all the previously configured save
#   points by adding a save directive with a single empty string argument
#   like in the following example:
#
#   save ""
# Sync data to the data file when the given number of write operations occurs within the
# given time window; multiple save rules can be combined
# save <seconds> <changes>: this many changes within this many seconds
# To disable RDB persistence, set no save directives at all, or pass save a single empty string argument
save 900 1
save 300 10
save 60 10000
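
The three save rules above can be sketched as a simple predicate: a snapshot is triggered when any single rule is satisfied. This is a toy model of the decision, not the actual serverCron code:

```python
# Mirrors "save 900 1", "save 300 10", "save 60 10000" from the config above.
SAVE_RULES = [(900, 1), (300, 10), (60, 10000)]

def should_save(elapsed_seconds, changes, rules=SAVE_RULES):
    """Snapshot if, for any rule, at least <changes> writes happened
    and at least <seconds> have passed since the last save."""
    return any(elapsed_seconds >= secs and changes >= n for secs, n in rules)

print(should_save(901, 1))    # True  (matches "save 900 1")
print(should_save(61, 9999))  # False (too few changes for any rule)
```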

# By default Redis will stop accepting writes if RDB snapshots are enabled
# (at least one save point) and the latest background save failed.
# This will make the user aware (in a hard way) that data is not persisting
# on disk properly, otherwise chances are that no one will notice and some
# disaster will happen.
#
# If the background saving process will start working again Redis will
# automatically allow writes again.
#
# However if you have setup your proper monitoring of the Redis server
# and persistence, you may want to disable this feature so that Redis will
# continue to work as usual even if there are problems with disk,
# permissions, and so forth.
#
# Whether to stop accepting writes when a background save fails.
# Set to no if you can tolerate the inconsistency or have other means to detect and handle it
stop-writes-on-bgsave-error yes

# Compress string objects using LZF when dump .rdb databases?
# For default that's set to 'yes' as it's almost always a win.
# If you want to save some CPU in the saving child set it to 'no' but
# the dataset will likely be bigger if you have compressible values or keys.
#
# rdbcompression: whether snapshots written to disk are compressed.
# If enabled, Redis compresses them with the LZF algorithm.
# Disable it to save CPU during saves, at the cost of much larger database files
rdbcompression yes

# Since version 5 of RDB a CRC64 checksum is placed at the end of the file.
# This makes the format more resistant to corruption but there is a performance
# hit to pay (around 10%) when saving and loading RDB files, so you can disable it
# for maximum performances.
#
# RDB files created with checksum disabled have a checksum of zero that will
# tell the loading code to skip the check.
#
# rdbchecksum: after writing a snapshot, Redis can validate the data with a CRC64 checksum.
# This costs roughly 10% performance; disable it for maximum performance
rdbchecksum yes

# The filename where to dump the DB
# Local database filename, default dump.rdb
dbfilename dump.rdb

# The working directory.
#
# The DB will be written inside this directory, with the filename specified
# above using the 'dbfilename' configuration directive.
#
# The Append Only File will also be created inside this directory.
#
# Note that you must specify a directory here, not a file name.
# Directory where the local database is stored
dir ./

Replication
################################# REPLICATION #################################

# Master-Replica replication. Use replicaof to make a Redis instance a copy of
# another Redis server. A few things to understand ASAP about Redis replication.
#
#   +------------------+      +---------------+
#   |      Master      | ---> |    Replica    |
#   | (receive writes) |      |  (exact copy) |
#   +------------------+      +---------------+
#
# 1) Redis replication is asynchronous, but you can configure a master to
#    stop accepting writes if it appears to be not connected with at least
#    a given number of replicas.
# 2) Redis replicas are able to perform a partial resynchronization with the
#    master if the replication link is lost for a relatively small amount of
#    time. You may want to configure the replication backlog size (see the next
#    sections of this file) with a sensible value depending on your needs.
# 3) Replication is automatic and does not need user intervention. After a
#    network partition replicas automatically try to reconnect to masters
#    and resynchronize with them.
#
# When this instance is a replica (slave), set the master's IP address and port;
# on startup Redis will automatically sync data from the master
# replicaof <masterip> <masterport>

# If the master is password protected (using the "requirepass" configuration
# directive below) it is possible to tell the replica to authenticate before
# starting the replication synchronization process, otherwise the master will
# refuse the replica request.
#
# Password the replica uses to connect when the master is password-protected
# masterauth <master-password>

# When a replica loses its connection with the master, or when the replication
# is still in progress, the replica can act in two different ways:
#
# 1) if replica-serve-stale-data is set to 'yes' (the default) the replica will
#    still reply to client requests, possibly with out of date data, or the
#    data set may just be empty if this is the first synchronization.
#
# 2) if replica-serve-stale-data is set to 'no' the replica will reply with
#    an error "SYNC with master in progress" to all the kind of commands
#    but to INFO, replicaOF, AUTH, PING, SHUTDOWN, REPLCONF, ROLE, CONFIG,
#    SUBSCRIBE, UNSUBSCRIBE, PSUBSCRIBE, PUNSUBSCRIBE, PUBLISH, PUBSUB,
#    COMMAND, POST, HOST: and LATENCY.
#
replica-serve-stale-data yes

# You can configure a replica instance to accept writes or not. Writing against
# a replica instance may be useful to store some ephemeral data (because data
# written on a replica will be easily deleted after resync with the master) but
# may also cause problems if clients are writing to it because of a
# misconfiguration.
#
# Since Redis 2.6 by default replicas are read-only.
#
# Note: read only replicas are not designed to be exposed to untrusted clients
# on the internet. It's just a protection layer against misuse of the instance.
# Still a read only replica exports by default all the administrative commands
# such as CONFIG, DEBUG, and so forth. To a limited extent you can improve
# security of read only replicas using 'rename-command' to shadow all the
# administrative / dangerous commands.
replica-read-only yes

# Replication SYNC strategy: disk or socket.
#
# -------------------------------------------------------
# WARNING: DISKLESS REPLICATION IS EXPERIMENTAL CURRENTLY
# -------------------------------------------------------
#
# New replicas and reconnecting replicas that are not able to continue the replication
# process just receiving differences, need to do what is called a "full
# synchronization". An RDB file is transmitted from the master to the replicas.
# The transmission can happen in two different ways:
#
# 1) Disk-backed: The Redis master creates a new process that writes the RDB
#                 file on disk. Later the file is transferred by the parent
#                 process to the replicas incrementally.
# 2) Diskless: The Redis master creates a new process that directly writes the
#              RDB file to replica sockets, without touching the disk at all.
#
# With disk-backed replication, while the RDB file is generated, more replicas
# can be queued and served with the RDB file as soon as the current child producing
# the RDB file finishes its work. With diskless replication instead once
# the transfer starts, new replicas arriving will be queued and a new transfer
# will start when the current one terminates.
#
# When diskless replication is used, the master waits a configurable amount of
# time (in seconds) before starting the transfer in the hope that multiple replicas
# will arrive and the transfer can be parallelized.
#
# With slow disks and fast (large bandwidth) networks, diskless replication
# works better.
repl-diskless-sync no

# When diskless replication is enabled, it is possible to configure the delay
# the server waits in order to spawn the child that transfers the RDB via socket
# to the replicas.
#
# This is important since once the transfer starts, it is not possible to serve
# new replicas arriving, that will be queued for the next RDB transfer, so the server
# waits a delay in order to let more replicas arrive.
#
# The delay is specified in seconds, and by default is 5 seconds. To disable
# it entirely just set it to 0 seconds and the transfer will start ASAP.
repl-diskless-sync-delay 5

# Replicas send PINGs to server in a predefined interval. It's possible to change
# this interval with the repl_ping_replica_period option. The default value is 10
# seconds.
#
# repl-ping-replica-period 10

# The following option sets the replication timeout for:
#
# 1) Bulk transfer I/O during SYNC, from the point of view of replica.
# 2) Master timeout from the point of view of replicas (data, pings).
# 3) Replica timeout from the point of view of masters (REPLCONF ACK pings).
#
# It is important to make sure that this value is greater than the value
# specified for repl-ping-replica-period otherwise a timeout will be detected
# every time there is low traffic between the master and the replica.
#
# repl-timeout 60

# Disable TCP_NODELAY on the replica socket after SYNC?
#
# If you select "yes" Redis will use a smaller number of TCP packets and
# less bandwidth to send data to replicas. But this can add a delay for
# the data to appear on the replica side, up to 40 milliseconds with
# Linux kernels using a default configuration.
#
# If you select "no" the delay for data to appear on the replica side will
# be reduced but more bandwidth will be used for replication.
#
# By default we optimize for low latency, but in very high traffic conditions
# or when the master and replicas are many hops away, turning this to "yes" may
# be a good idea.
repl-disable-tcp-nodelay no

# Set the replication backlog size. The backlog is a buffer that accumulates
# replica data when replicas are disconnected for some time, so that when a replica
# wants to reconnect again, often a full resync is not needed, but a partial
# resync is enough, just passing the portion of data the replica missed while
# disconnected.
#
# The bigger the replication backlog, the longer the time the replica can be
# disconnected and later be able to perform a partial resynchronization.
#
# The backlog is only allocated once there is at least a replica connected.
#
# repl-backlog-size 1mb

# After a master has no longer connected replicas for some time, the backlog
# will be freed. The following option configures the amount of seconds that
# need to elapse, starting from the time the last replica disconnected, for
# the backlog buffer to be freed.
#
# Note that replicas never free the backlog for timeout, since they may be
# promoted to masters later, and should be able to correctly "partially
# resynchronize" with the replicas: hence they should always accumulate backlog.
#
# A value of 0 means to never release the backlog.
#
# repl-backlog-ttl 3600

# The replica priority is an integer number published by Redis in the INFO output.
# It is used by Redis Sentinel in order to select a replica to promote into a
# master if the master is no longer working correctly.
#
# A replica with a low priority number is considered better for promotion, so
# for instance if there are three replicas with priority 10, 100, 25 Sentinel will
# pick the one with priority 10, that is the lowest.
#
# However a special priority of 0 marks the replica as not able to perform the
# role of master, so a replica with priority of 0 will never be selected by
# Redis Sentinel for promotion.
#
# By default the priority is 100.
replica-priority 100
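
Sentinel's use of replica-priority as described above can be sketched as a selection function: the lowest positive priority wins, and priority 0 means "never promote" (a simplified model; real Sentinel also weighs replication offset and run id):

```python
def pick_replica(replicas):
    """Pick the promotion candidate: lowest priority > 0 wins; 0 is excluded."""
    candidates = [r for r in replicas if r["priority"] > 0]
    return min(candidates, key=lambda r: r["priority"])["name"] if candidates else None

replicas = [{"name": "a", "priority": 10},
            {"name": "b", "priority": 100},
            {"name": "c", "priority": 25},
            {"name": "d", "priority": 0}]   # priority 0: never promoted
print(pick_replica(replicas))  # a
```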

# It is possible for a master to stop accepting writes if there are less than
# N replicas connected, having a lag less or equal than M seconds.
#
# The N replicas need to be in "online" state.
#
# The lag in seconds, that must be <= the specified value, is calculated from
# the last ping received from the replica, that is usually sent every second.
#
# This option does not GUARANTEE that N replicas will accept the write, but
# will limit the window of exposure for lost writes in case not enough replicas
# are available, to the specified number of seconds.
#
# For example to require at least 3 replicas with a lag <= 10 seconds use:
#
# min-replicas-to-write 3
# min-replicas-max-lag 10
#
# Setting one or the other to 0 disables the feature.
#
# By default min-replicas-to-write is set to 0 (feature disabled) and
# min-replicas-max-lag is set to 10.

# A Redis master is able to list the address and port of the attached
# replicas in different ways. For example the "INFO replication" section
# offers this information, which is used, among other tools, by
# Redis Sentinel in order to discover replica instances.
# Another place where this info is available is in the output of the
# "ROLE" command of a master.
#
# The listed IP and address normally reported by a replica is obtained
# in the following way:
#
#   IP: The address is auto detected by checking the peer address
#   of the socket used by the replica to connect with the master.
#
#   Port: The port is communicated by the replica during the replication
#   handshake, and is normally the port that the replica is using to
#   listen for connections.
#
# However when port forwarding or Network Address Translation (NAT) is
# used, the replica may be actually reachable via different IP and port
# pairs. The following two options can be used by a replica in order to
# report to its master a specific set of IP and port, so that both INFO
# and ROLE will report those values.
#
# There is no need to use both the options if you need to override just
# the port or the IP address.
#
# replica-announce-ip 5.5.5.5
# replica-announce-port 1234

Security
################################## SECURITY ###################################

# Require clients to issue AUTH <PASSWORD> before processing any other
# commands.  This might be useful in environments in which you do not trust
# others with access to the host running redis-server.
#
# This should stay commented out for backward compatibility and because most
# people do not need auth (e.g. they run their own servers).
#
# Warning: since Redis is pretty fast an outside user can try up to
# 150k passwords per second against a good box. This means that you should
# use a very strong password otherwise it will be very easy to break.
#
# Sets the Redis connection password. When configured, clients must authenticate with
# AUTH <password> after connecting. Disabled by default
# requirepass foobared

# Command renaming.
#
# It is possible to change the name of dangerous commands in a shared
# environment. For instance the CONFIG command may be renamed into something
# hard to guess so that it will still be available for internal-use tools
# but not available for general clients.
#
# Example:
#
# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52
#
# It is also possible to completely kill a command by renaming it into
# an empty string:
#
# rename-command CONFIG ""
#
# Please note that changing the name of commands that are logged into the
# AOF file or transmitted to replicas may cause problems.

Clients
################################### CLIENTS ####################################

# Set the max number of connected clients at the same time. By default
# this limit is set to 10000 clients, however if the Redis server is not
# able to configure the process file limit to allow for the specified limit
# the max number of allowed clients is set to the current file limit
# minus 32 (as Redis reserves a few file descriptors for internal uses).
#
# Once the limit is reached Redis will close all the new connections sending
# an error 'max number of clients reached'.
#
# Sets how many clients may connect to Redis at the same time, 10000 by default. If Redis
# cannot raise the process file-handle limit accordingly, the limit becomes the current
# file-handle limit minus 32, since Redis reserves a few handles for internal use. Once the
# limit is reached, new connection requests are rejected with "max number of clients reached"
# maxclients 10000

Memory management
############################## MEMORY MANAGEMENT ################################

# Set a memory usage limit to the specified amount of bytes.
# When the memory limit is reached Redis will try to remove keys
# according to the eviction policy selected (see maxmemory-policy).
#
# If Redis can't remove keys according to the policy, or if the policy is
# set to 'noeviction', Redis will start to reply with errors to commands
# that would use more memory, like SET, LPUSH, and so on, and will continue
# to reply to read-only commands like GET.
#
# This option is usually useful when using Redis as an LRU or LFU cache, or to
# set a hard memory limit for an instance (using the 'noeviction' policy).
#
# WARNING: If you have replicas attached to an instance with maxmemory on,
# the size of the output buffers needed to feed the replicas are subtracted
# from the used memory count, so that network problems / resyncs will
# not trigger a loop where keys are evicted, and in turn the output
# buffer of replicas is full with DELs of keys evicted triggering the deletion
# of more keys, and so forth until the database is completely emptied.
#
# In short... if you have replicas attached it is suggested that you set a lower
# limit for maxmemory so that there is some free RAM on the system for replica
# output buffers (but this is not needed if the policy is 'noeviction').
#
# Sets the amount of memory Redis may use. Once the limit is reached, Redis
# evicts keys according to maxmemory-policy. If no key can be evicted under
# that policy (or the policy is "noeviction"), commands that allocate memory,
# such as SET and LPUSH, return an error, while read-only commands such as
# GET keep working. If this is a master with replicas attached, leave some
# headroom below maxmemory for the replica output buffers; that is only
# unnecessary with the "noeviction" policy.
# maxmemory <bytes>

# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
# is reached. You can select among five behaviors:
#
# volatile-lru -> Evict using approximated LRU among the keys with an expire set.
# allkeys-lru -> Evict any key using approximated LRU.
# volatile-lfu -> Evict using approximated LFU among the keys with an expire set.
# allkeys-lfu -> Evict any key using approximated LFU.
# volatile-random -> Remove a random key among the ones with an expire set.
# allkeys-random -> Remove a random key, any key.
# volatile-ttl -> Remove the key with the nearest expire time (minor TTL)
# noeviction -> Don't evict anything, just return an error on write operations.
#
# LRU means Least Recently Used
# LFU means Least Frequently Used
#
# Both LRU, LFU and volatile-ttl are implemented using approximated
# randomized algorithms.
#
# Note: with any of the above policies, Redis will return an error on write
#       operations, when there are no suitable keys for eviction.
#
#       At the date of writing these commands are: set setnx setex append
#       incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
#       sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
#       zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
#       getset mset msetnx exec sort
#
# The default is:
#
# Eviction policies:
# (1) volatile-lru: evict with approximated LRU, only among keys with an expire set
# (2) allkeys-lru: evict any key with approximated LRU
# (3) volatile-lfu: evict with approximated LFU, only among keys with an expire set
# (4) allkeys-lfu: evict any key with approximated LFU
# (5) volatile-random: evict a random key among keys with an expire set
# (6) allkeys-random: evict a random key, any key
# (7) volatile-ttl: evict the key with the nearest expire time (smallest TTL)
# (8) noeviction: evict nothing; return an error on write operations
# maxmemory-policy noeviction

# LRU, LFU and minimal TTL algorithms are not precise algorithms but approximated
# algorithms (in order to save memory), so you can tune it for speed or
# accuracy. For default Redis will check five keys and pick the one that was
# used less recently, you can change the sample size using the following
# configuration directive.
#
# The default of 5 produces good enough results. 10 Approximates very closely
# true LRU but costs more CPU. 3 is faster but not very accurate.
#
# Sets the sample size. The LRU, LFU and minimal-TTL algorithms are
# approximations rather than exact algorithms: Redis inspects this many keys
# and evicts the best candidate among them.
# maxmemory-samples 5
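The sampling trade-off can be illustrated with a small sketch (plain Python, not Redis source): pick `maxmemory-samples` random keys and evict the least recently used among them — the larger the sample, the closer the result is to true LRU.

```python
import random

def evict_sampled_lru(last_access, samples=5, rng=random):
    """Approximate LRU: inspect `samples` random keys and evict the one with
    the oldest access time, mimicking Redis' maxmemory-samples behaviour."""
    candidates = rng.sample(list(last_access), min(samples, len(last_access)))
    victim = min(candidates, key=lambda k: last_access[k])
    del last_access[victim]
    return victim

keys = {f"k{i}": i for i in range(100)}              # access time == i; k0 is oldest
victim = evict_sampled_lru(dict(keys), samples=100)  # full sample -> exact LRU picks k0
```

With `samples=5` the evicted key is merely *likely* to be old, which is why the comment above calls 5 "good enough" and 10 "very close to true LRU".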

# Starting from Redis 5, by default a replica will ignore its maxmemory setting
# (unless it is promoted to master after a failover or manually). It means
# that the eviction of keys will be just handled by the master, sending the
# DEL commands to the replica as keys evict in the master side.
#
# This behavior ensures that masters and replicas stay consistent, and is usually
# what you want, however if your replica is writable, or you want the replica to have
# a different memory setting, and you are sure all the writes performed to the
# replica are idempotent, then you may change this default (but be sure to understand
# what you are doing).
#
# Note that since the replica by default does not evict, it may end using more
# memory than the one set via maxmemory (there are certain buffers that may
# be larger on the replica, or data structures may sometimes take more memory and so
# forth). So make sure you monitor your replicas and make sure they have enough
# memory to never hit a real out-of-memory condition before the master hits
# the configured maxmemory setting.
#
# replica-ignore-maxmemory yes

Append only mode
############################## APPEND ONLY MODE ###############################

# By default Redis asynchronously dumps the dataset on disk. This mode is
# good enough in many applications, but an issue with the Redis process or
# a power outage may result into a few minutes of writes lost (depending on
# the configured save points).
#
# The Append Only File is an alternative persistence mode that provides
# much better durability. For instance using the default data fsync policy
# (see later in the config file) Redis can lose just one second of writes in a
# dramatic event like a server power outage, or a single write if something
# wrong with the Redis process itself happens, but the operating system is
# still running correctly.
#
# AOF and RDB persistence can be enabled at the same time without problems.
# If the AOF is enabled on startup Redis will load the AOF, that is the file
# with the better durability guarantees.
#
# Please check http://redis.io/topics/persistence for more information.
#
# Controls whether every write operation is logged. By default Redis writes
# data to disk asynchronously according to the save points above, so without
# AOF a power loss can discard recent writes that existed only in memory.
# Default: no.
appendonly no

# The name of the append only file (default: "appendonly.aof")
# Name of the AOF log file; the default is appendonly.aof
appendfilename "appendonly.aof"

# The fsync() call tells the Operating System to actually write data on disk
# instead of waiting for more data in the output buffer. Some OS will really flush
# data on disk, some other OS will just try to do it ASAP.
#
# Redis supports three different modes:
#
# no: don't fsync, just let the OS flush the data when it wants. Faster.
# always: fsync after every write to the append only log. Slow, Safest.
# everysec: fsync only one time every second. Compromise.
#
# The default is "everysec", as that's usually the right compromise between
# speed and data safety. It's up to you to understand if you can relax this to
# "no" that will let the operating system flush the output buffer when
# it wants, for better performances (but if you can live with the idea of
# some data loss consider the default persistence mode that's snapshotting),
# or on the contrary, use "always" that's very slow but a bit safer than
# everysec.
#
# More details please check the following article:
# http://antirez.com/post/redis-persistence-demystified.html
#
# If unsure, use "everysec".

#
# fsync policy, three options:
# no:       let the operating system flush the data when it wants (fast)
# always:   call fsync() after every write (slow, safe)
# everysec: fsync once per second (the default compromise)
#
# appendfsync always
appendfsync everysec
# appendfsync no

# When the AOF fsync policy is set to always or everysec, and a background
# saving process (a background save or AOF log background rewriting) is
# performing a lot of I/O against the disk, in some Linux configurations
# Redis may block too long on the fsync() call. Note that there is no fix for
# this currently, as even performing fsync in a different thread will block
# our synchronous write(2) call.
#
# In order to mitigate this problem it's possible to use the following option
# that will prevent fsync() from being called in the main process while a
# BGSAVE or BGREWRITEAOF is in progress.
#
# This means that while another child is saving, the durability of Redis is
# the same as "appendfsync none". In practical terms, this means that it is
# possible to lose up to 30 seconds of log in the worst scenario (with the
# default Linux settings).
#
# If you have latency problems turn this to "yes". Otherwise leave it as
# "no" that is the safest pick from the point of view of durability.

#
# Whether to skip fsync() in the main process while a BGSAVE or BGREWRITEAOF
# is in progress. Keep the default "no" for the best durability guarantee.
no-appendfsync-on-rewrite no

# Automatic rewrite of the append only file.
# Redis is able to automatically rewrite the log file implicitly calling
# BGREWRITEAOF when the AOF log size grows by the specified percentage.
#
# This is how it works: Redis remembers the size of the AOF file after the
# latest rewrite (if no rewrite has happened since the restart, the size of
# the AOF at startup is used).
#
# This base size is compared to the current size. If the current size is
# bigger than the specified percentage, the rewrite is triggered. Also
# you need to specify a minimal size for the AOF file to be rewritten, this
# is useful to avoid rewriting the AOF file even if the percentage increase
# is reached but it is still pretty small.
#
# Specify a percentage of zero in order to disable the automatic AOF
# rewrite feature.

#
# Thresholds for the automatic AOF rewrite (both must be satisfied)
auto-aof-rewrite-percentage 100	# trigger when the file has grown 100% (2x) beyond its size after the last rewrite
auto-aof-rewrite-min-size 64mb  # and is at least 64 MB (large deployments often raise this to several GB)
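The two thresholds combine as sketched below (illustrative pseudologic, not Redis source): a rewrite fires only when the minimum size is reached and the growth percentage is exceeded.

```python
def should_rewrite_aof(current_size, base_size, growth_pct=100, min_size=64 * 1024 * 1024):
    """Mimic Redis' automatic BGREWRITEAOF trigger: the AOF must be at least
    `min_size` bytes AND have grown `growth_pct`% beyond the size recorded
    after the last rewrite (growth_pct 0 disables the automatic rewrite)."""
    if growth_pct == 0 or current_size < min_size:
        return False
    growth = (current_size - base_size) * 100 / max(base_size, 1)
    return growth >= growth_pct

MB = 1024 * 1024
assert not should_rewrite_aof(current_size=32 * MB, base_size=10 * MB)  # below the 64mb floor
assert should_rewrite_aof(current_size=130 * MB, base_size=64 * MB)     # more than 100% growth
```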

# An AOF file may be found to be truncated at the end during the Redis
# startup process, when the AOF data gets loaded back into memory.
# This may happen when the system where Redis is running
# crashes, especially when an ext4 filesystem is mounted without the
# data=ordered option (however this can't happen when Redis itself
# crashes or aborts but the operating system still works correctly).
#
# Redis can either exit with an error when this happens, or load as much
# data as possible (the default now) and start if the AOF file is found
# to be truncated at the end. The following option controls this behavior.
#
# If aof-load-truncated is set to yes, a truncated AOF file is loaded and
# the Redis server starts emitting a log to inform the user of the event.
# Otherwise if the option is set to no, the server aborts with an error
# and refuses to start. When the option is set to no, the user requires
# to fix the AOF file using the "redis-check-aof" utility before to restart
# the server.
#
# Note that if the AOF file will be found to be corrupted in the middle
# the server will still exit with an error. This option only applies when
# Redis will try to read more data from the AOF file but not enough bytes
# will be found.
aof-load-truncated yes

# When rewriting the AOF file, Redis is able to use an RDB preamble in the
# AOF file for faster rewrites and recoveries. When this option is turned
# on the rewritten AOF file is composed of two different stanzas:
#
#   [RDB file][AOF tail]
#
# When loading Redis recognizes that the AOF file starts with the "REDIS"
# string and loads the prefixed RDB file, and continues loading the AOF
# tail.
aof-use-rdb-preamble yes

★ Common redis.conf settings

Parameter reference

redis.conf directives explained:

  1. Redis does not run as a daemon by default; set this to yes to enable it

daemonize no

  2. When running as a daemon, Redis writes its pid to /var/run/redis.pid by default; override the path with pidfile

pidfile /var/run/redis.pid

  3. Listening port, 6379 by default. The author explained the choice in a blog post: 6379 is MERZ on a phone keypad, after the Italian showgirl Alessia Merz

port 6379

  4. Host address to bind

bind 127.0.0.1

  5. Close the connection after a client has been idle for this many seconds; 0 disables the timeout

timeout 300

  6. Log level; Redis supports four levels: debug, verbose, notice and warning. Default verbose

loglevel verbose

  7. Log output, standard output by default. Note that if Redis runs as a daemon while logging is set to standard output, the log is sent to /dev/null

logfile stdout

  8. Number of databases, 16 by default; connections start on database 0 and can switch with SELECT <dbid>

databases 16

  9. Save the dataset to disk when the given number of write operations occurs within the given number of seconds; several conditions may be combined

save <seconds> <changes>

The default configuration file ships three conditions:

save 900 1

save 300 10

save 60 10000

i.e. 1 change within 900 seconds (15 minutes), 10 changes within 300 seconds (5 minutes), or 10000 changes within 60 seconds.

  10. Whether to LZF-compress the data when dumping it, yes by default; turning it off saves CPU time but makes the database file much larger

rdbcompression yes

  11. Local database file name, dump.rdb by default

dbfilename dump.rdb

  12. Directory where the local database file is stored

dir ./

  13. When this instance is a slave, set the master's IP address and port; on startup the slave synchronizes from the master automatically

slaveof <masterip> <masterport>

  14. Password the slave uses to connect when the master is password-protected

masterauth <master-password>

  15. Connection password; when set, clients must authenticate with the AUTH command when connecting. Disabled by default

requirepass foobared

  16. Maximum number of simultaneous client connections, unlimited by default (bounded only by the maximum number of file descriptors the Redis process may open); maxclients 0 also means no limit. When the limit is reached, Redis closes new connections and returns the error max number of clients reached

maxclients 128

  17. Maximum memory limit. Redis loads its data into memory on startup; when the limit is reached it first tries to purge expired or expiring keys, and if that is still not enough, write operations fail while reads keep working. (With the old VM mechanism, keys stayed in memory while values could be swapped out)

maxmemory <bytes>

  18. Whether every write operation is logged. Redis writes data to disk asynchronously by default, so without this a power loss can discard recent writes: Redis itself only syncs the data file according to the save conditions above, so some data exists only in memory for a while. Default no

appendonly no

  19. Name of the append-only log file, appendonly.aof by default

appendfilename appendonly.aof

  20. fsync policy for the append-only log, three options:

no: let the operating system flush the data when it wants (fast)

always: call fsync() after every write (slow, safe)

everysec: sync once per second (the default compromise)

appendfsync everysec

  21. Whether the virtual memory mechanism is enabled, no by default. Briefly, VM stores data in pages and swaps cold, rarely accessed pages to disk while hot pages are paged back into memory. (Note: the vm-* options below are obsolete and were removed from later Redis versions)

vm-enabled no

  22. Path of the virtual memory swap file, /tmp/redis.swap by default; it must not be shared between Redis instances

vm-swap-file /tmp/redis.swap

  23. Data beyond vm-max-memory is kept in the swap file. No matter how small vm-max-memory is, all index data (the keys) stays in memory; with vm-max-memory 0, every value effectively lives on disk. Default 0

vm-max-memory 0

  24. The swap file is split into many pages; an object may span several pages, but a page cannot be shared by several objects, so size vm-page-size to your data: the author suggests 32 or 64 bytes for many small objects, a larger page size for large objects, and the default when unsure

vm-page-size 32

  25. Number of pages in the swap file. The page table (a bitmap marking pages free or in use) is kept in memory and costs 1 byte of RAM for every 8 pages on disk.

vm-pages 134217728

  26. Number of threads accessing the swap file, best kept at or below the number of CPU cores; 0 makes all swap-file access serial, which can cause long delays. Default 4

vm-max-threads 4

  27. Whether to merge small reply packets into one packet before answering the client; enabled by default

glueoutputbuf yes

  28. Use a special compact hash encoding while the number of entries and the largest element stay below these thresholds

hash-max-zipmap-entries 64

hash-max-zipmap-value 512

  29. Whether incremental rehashing is active; enabled by default

activerehashing yes

  30. Include other configuration files, so several instances on one host can share a common file while each keeps its own specific settings

include /path/to/local.conf

Redis persistence

RDB (Redis DataBase)
  • About RDB

Writes point-in-time snapshots of the in-memory dataset to disk at configured intervals;
on restore, the snapshot file is read straight back into memory

Redis forks a separate child process to do the persistence: the data is first written to
a temporary file (dump.rdb), and only when the snapshot is complete does it replace the previous dump file.
The main process performs no I/O at all during the whole procedure, which keeps performance very high.
If you need to restore large datasets and are not very sensitive to losing the most recent writes,
RDB is more efficient than AOF. Its drawback is that everything written after the last snapshot may be lost.

  • About fork

fork creates a copy of the current process. The new process has the same data (variables,
environment variables, program counter, ...) as the original, but is a brand-new process running as its child

  • Where dump.rdb lives

The directory reported by config get dir (instances started from different directories may produce dump.rdb in different places)

  • How snapshots are triggered
  1. The default save points in redis.conf (the snapshotting section)
  2. The SAVE or BGSAVE command
  3. The FLUSHALL command; however the resulting dump.rdb is empty and therefore useless
  • Restoring from RDB

Find the directory with CONFIG GET dir, move the backup file (dump.rdb) there and start the server
If dump.rdb is corrupted, repair it with redis-check-rdb --fix (named redis-check-dump in old versions)

  • Strengths
  1. Well suited to restoring large datasets
  2. Fine when completeness and consistency requirements are relaxed
  • Weaknesses
  1. Backups happen at intervals, so if Redis goes down unexpectedly, every
    change since the last snapshot is lost
  2. fork clones the in-memory data, so plan for roughly a 2x memory footprint during a snapshot
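The snapshot trigger conditions from redis.conf (`save 900 1`, `save 300 10`, `save 60 10000`) read as "this many dirty changes within this many elapsed seconds"; a minimal sketch of the check (illustrative, not Redis source):

```python
def should_bgsave(save_points, elapsed_seconds, dirty_changes):
    """Return True if any configured `save <seconds> <changes>` point is
    satisfied: at least `changes` writes happened AND at least `seconds`
    have passed since the last successful save."""
    return any(elapsed_seconds >= s and dirty_changes >= c
               for s, c in save_points)

DEFAULT_SAVE_POINTS = [(900, 1), (300, 10), (60, 10000)]
assert should_bgsave(DEFAULT_SAVE_POINTS, elapsed_seconds=901, dirty_changes=1)
assert not should_bgsave(DEFAULT_SAVE_POINTS, elapsed_seconds=61, dirty_changes=9999)
```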
AOF(Append Only File)
  • About AOF

Logs every write operation: all write commands Redis executes are appended to the appendonly.aof file (reads are not logged);
the file is append-only and never rewritten in place. On startup Redis reads the file back to rebuild the data, in other words
on restart it replays the logged write commands from start to finish to complete the recovery

  • Restoring from AOF
  • Normal restore
    1. Enable it: in the append only mode section of redis.conf, change the default appendonly no to yes
    2. Find the directory with CONFIG GET dir and copy a good aof file there
    3. Restart to reload
  • Recovering from a corrupted aof file (power loss or other mishap)
    1. appendonly yes must still be set
    2. Back up the damaged aof file
    3. Repair it with redis-check-aof --fix (redis-server loads the aof file in preference to the rdb file on startup, so with a corrupted, unrepaired aof file the server can neither fall back to the rdb file nor start)
    4. Restart to reload
  • rewrite
  • Description

Because AOF only appends, the file keeps growing. To avoid this, a rewrite mechanism was added:
when the AOF exceeds the configured thresholds, Redis compacts its contents down to
the minimal set of commands that can rebuild the data. It can also be triggered manually with bgrewriteaof

  • How it works

When the AOF keeps growing and gets too large, Redis forks a new process to rewrite the file (again writing a temporary file first and renaming it at the end).
The child walks the forked process's in-memory data and emits one write command per record. The rewrite never reads the old aof file;
it regenerates a fresh aof file from the entire in-memory database expressed as commands, much like taking a snapshot

  • Trigger

Redis remembers the AOF size after the last rewrite; with the default configuration, a rewrite triggers once the file has doubled that size and exceeds 64M

  • Strengths
  1. Sync on every change: appendfsync always, synchronous persistence; every data change is recorded to disk immediately. Worst performance, best data integrity
  2. Sync every second: appendfsync everysec, asynchronous; records once per second, so up to one second of data is lost on a crash
  3. No sync: appendfsync no, never syncs from Redis
  • Weaknesses
  1. For the same dataset, the aof file is far larger than the rdb file, and restoring is slower than rdb
  2. AOF runs slower than RDB; the every-second policy performs well, and no-sync performs about the same as RDB
Summary
  • Comparison
  • RDB persistence takes point-in-time snapshots of your data at configured intervals
  • AOF persistence logs every write the server receives; on restart those commands are
    replayed to rebuild the original data. AOF appends each write in the Redis protocol,
    and Redis can rewrite the AOF in the background so the file does not grow without bound
  • Advice on choosing persistence
  • Cache only: if you only need the data while the server is running, you can disable persistence entirely.

  • Enabling both:

    1. In this case Redis loads the AOF on restart to rebuild the data,
      because the AOF usually holds a more complete dataset than the RDB file.
    2. RDB data lags behind, and with both enabled a restart only consults the AOF. So should you run AOF alone?
      The author advises against it: RDB is better suited to backups (the ever-changing AOF is hard to back up)
      and to fast restarts, and it avoids any latent AOF bugs; keep it as a last-resort safety net.
  • Performance advice

Since the RDB file serves only as a fallback, persist it on the slaves only, and a snapshot every 15 minutes is enough; keep just the save 900 1 rule.

If you enable AOF, the upside is losing no more than about two seconds of data even in the worst case, and the start-up script only needs to load its own AOF. The costs are continuous I/O, and the near-unavoidable pause at the end of an AOF rewrite while the writes made during the rewrite are flushed into the new file. Keep rewrites as rare as the disk allows: the default 64M rewrite base size is far too small, 5G or more is reasonable, and the default trigger of 100% growth over the base size can be tuned likewise.

If you do not enable AOF, high availability through master-slave replication alone also works. It saves a lot of I/O and avoids the jitter rewrites cause. The price: if master and slave go down together, a dozen or so minutes of data are lost, and the start-up script must compare the RDB files on master and slave and load the newer one. Sina Weibo runs this architecture

Redis transactions

What a Redis transaction is

Executes several commands in one go; it is essentially a group of commands. All commands in a transaction are serialized and executed sequentially, one after another, without commands from other clients being interleaved

Transaction commands
Characteristics of Redis transactions
  • Normal execution
127.0.0.1:6379> MULTI
OK
127.0.0.1:6379> set k1 v1
QUEUED
127.0.0.1:6379> set k2 v2
QUEUED
127.0.0.1:6379> get k2
QUEUED
127.0.0.1:6379> set k3 v3
QUEUED
127.0.0.1:6379> EXEC
1) OK
2) OK
3) "v2"
4) OK

  • Discarding a transaction
127.0.0.1:6379> MULTI
OK
127.0.0.1:6379> set k1 11
QUEUED
127.0.0.1:6379> set k2 22
QUEUED
127.0.0.1:6379> DISCARD
OK

  • An error while queueing aborts the whole transaction
127.0.0.1:6379> MULTI
OK
127.0.0.1:6379> set k1 11
QUEUED
127.0.0.1:6379> set k2 22
QUEUED
127.0.0.1:6379> LRANGE k1
(error) ERR wrong number of arguments for 'lrange' command
127.0.0.1:6379> set k3 33
QUEUED
127.0.0.1:6379> EXEC
(error) EXECABORT Transaction discarded because of previous errors.

  • A command failing at execution time does not affect the other commands
127.0.0.1:6379> MULTI
OK
127.0.0.1:6379> set k1 v1
QUEUED
127.0.0.1:6379> set k2 v2
QUEUED
127.0.0.1:6379> INCR k1
QUEUED
127.0.0.1:6379> set k3 v3
QUEUED
127.0.0.1:6379> EXEC
1) OK
2) OK
3) (error) ERR value is not an integer or out of range
4) OK

  • Summary
  1. Isolated operations: all commands in a transaction are serialized and executed in order, and the transaction is not interrupted by command requests from other clients while it runs
  2. No isolation levels: queued commands are not actually executed before the transaction commits; since nothing runs before EXEC, the classic headache of "reads inside a transaction must see its updates while reads outside must not" simply does not arise
  3. No atomicity guarantee: if one command in a Redis transaction fails during execution, the following commands still run; there is no rollback
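The two failure modes above (a queue-time error discards the whole transaction; a run-time error only fails that one command) can be mimicked with a toy sketch (plain Python, not a Redis client; the command set is made up):

```python
def exec_transaction(store, commands):
    """Toy MULTI/EXEC: `commands` are tuples supporting 'set' and 'incr'.
    An unknown command fails at queue time and discards the whole
    transaction (like EXECABORT); a type error in 'incr' fails only that
    command at run time, as in real Redis."""
    if any(name not in ("set", "incr") for name, *_ in commands):
        return "EXECABORT"                       # queue-time error: nothing runs
    results = []
    for name, key, *args in commands:
        if name == "set":
            store[key] = args[0]
            results.append("OK")
        else:  # incr
            try:
                store[key] = int(store.get(key, 0)) + 1
                results.append(store[key])
            except ValueError:                   # run-time error: the rest still ran
                results.append("ERR value is not an integer")
    return results

db = {}
print(exec_transaction(db, [("set", "k1", "v1"), ("lrange", "k1")]))                  # EXECABORT, db untouched
print(exec_transaction(db, [("set", "k1", "v1"), ("incr", "k1"), ("set", "k3", "v3")]))
```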
Redis WATCH
  • Pessimistic locking

A pessimistic lock (Pessimistic Lock) assumes, as the name suggests, that someone else will modify the data every time it is read, so it takes a lock on every access and other readers block until the lock holder releases it. Traditional relational databases rely on many such mechanisms: row locks, table locks, read locks, write locks, all acquired before the operation

  • Optimistic locking

An optimistic lock (Optimistic Lock) assumes, as the name suggests, that nobody else will modify the data, so it takes no lock on read; only at update time does it check whether someone changed the data in the meantime, typically using a version number. Optimistic locking suits read-heavy workloads and improves throughput

Optimistic-lock strategy: an update only executes when the submitted version is newer than the record's current version
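A minimal version-number sketch of that strategy (illustrative Python; the record layout is made up):

```python
def optimistic_update(record, new_value, read_version):
    """The update succeeds only if nobody bumped the version since we read
    it; otherwise the caller must re-read and retry."""
    if record["version"] != read_version:
        return False                            # someone else got there first
    record["value"] = new_value
    record["version"] += 1
    return True

row = {"value": 100, "version": 7}
v = row["version"]                              # read the value and its version
row["version"] += 1                             # a concurrent writer sneaks in
assert optimistic_update(row, 80, v) is False   # our stale update is rejected
```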

  • WATCH

The WATCH command works like an optimistic lock: if, by the time the transaction commits, a watched key's value has been changed by another client,
for instance a list that has been pushed/popped elsewhere, the whole queued transaction is discarded

WATCH can monitor several keys before the transaction executes; if any watched key's value changes after the WATCH,
the transaction run by EXEC is abandoned and a null multi-bulk reply is returned to tell the caller the transaction failed

  • The watch and unwatch commands

    See WATCH (monitor one or more keys) and UNWATCH (cancel the WATCH on all keys) in the command reference above

  • Using WATCH

127.0.0.1:6379> WATCH k1	# start watching k1
OK
127.0.0.1:6379> MULTI		# open the transaction
OK
127.0.0.1:6379> DECRBY k1 20	# k1 -= 20
QUEUED
127.0.0.1:6379> INCRBY k2 20	# k2 += 20
QUEUED
127.0.0.1:6379> EXEC	# executed successfully
1) (integer) 80
2) (integer) 20

# ==========================

127.0.0.1:6379> WATCH k1	
OK
## if another client modifies k1 at this moment, e.g.
# 127.0.0.1:6379> set k1 1000
127.0.0.1:6379> MULTI		
OK
127.0.0.1:6379> DECRBY k1 20	
QUEUED
127.0.0.1:6379> INCRBY k2 20	
QUEUED
127.0.0.1:6379> EXEC	
(nil)	# WATCH noticed the key had changed, so the transaction was aborted

  • Caveat

Once EXEC runs, every watch set before it is removed
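The usual pattern built on WATCH is a check-and-set retry loop, sketched here against a toy store (plain Python, not a Redis client; the helper names are made up):

```python
def watch_exec(store, key, watched_version, mutate):
    """Toy WATCH/MULTI/EXEC: the store keeps (value, version) pairs. EXEC
    applies `mutate` only if the key's version is unchanged since WATCH,
    returning None (like Redis' nil reply) otherwise."""
    value, version = store[key]
    if version != watched_version:
        return None
    store[key] = (mutate(value), version + 1)
    return store[key][0]

def decr_with_retry(store, key, amount, max_tries=10):
    """Retry the watched transaction until it is not interrupted."""
    for _ in range(max_tries):
        _, watched_version = store[key]                 # WATCH: remember the version
        result = watch_exec(store, key, watched_version, lambda v: v - amount)
        if result is not None:                          # EXEC succeeded
            return result
    raise RuntimeError("too much contention")

db = {"k1": (100, 0)}
assert decr_with_retry(db, "k1", 20) == 80
```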

Redis publish/subscribe

What Redis pub/sub is

An inter-process messaging pattern: publishers (pub) send messages and subscribers (sub) receive them.
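A toy broker illustrating the channel and pattern dispatch (SUBSCRIBE-style exact names, PSUBSCRIBE-style globs) used in the examples below — a plain Python sketch, not Redis internals:

```python
from fnmatch import fnmatchcase

class TinyBroker:
    """Minimal pub/sub: exact channels like SUBSCRIBE, glob patterns like
    PSUBSCRIBE. publish() returns how many subscribers received the
    message, as Redis' PUBLISH does."""
    def __init__(self):
        self.channels = {}   # channel name -> list of callbacks
        self.patterns = {}   # glob pattern -> list of callbacks

    def subscribe(self, channel, callback):
        self.channels.setdefault(channel, []).append(callback)

    def psubscribe(self, pattern, callback):
        self.patterns.setdefault(pattern, []).append(callback)

    def publish(self, channel, message):
        receivers = list(self.channels.get(channel, []))
        for pattern, callbacks in self.patterns.items():
            if fnmatchcase(channel, pattern):
                receivers.extend(callbacks)
        for callback in receivers:
            callback(channel, message)
        return len(receivers)

broker = TinyBroker()
inbox = []
broker.subscribe("c2", lambda ch, m: inbox.append((ch, m)))
broker.psubscribe("new*", lambda ch, m: inbox.append((ch, m)))
assert broker.publish("c2", "m2") == 1
assert broker.publish("new1", "message1") == 1
assert broker.publish("c9", "m9") == 0     # no subscriber, message is dropped
```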

Pub/sub commands
Examples
  • Example 1: SUBSCRIBE
# subscribe before publishing, or the message is missed
# subscribe to several channels at once
# or subscribe to a pattern instead:
# 127.0.0.1:6379> PSUBSCRIBE new*
127.0.0.1:6379> SUBSCRIBE c1 c2 c3
Reading messages... (press Ctrl-C to quit)
1) "subscribe"
2) "c1"
3) (integer) 1
1) "subscribe"
2) "c2"
3) (integer) 2
1) "subscribe"
2) "c3"
3) (integer) 3

# now another client publishes messages to the channels
# with a pattern subscription, messages on every channel matching the pattern are received
# 127.0.0.1:6379> PUBLISH new1  message1
127.0.0.1:6379> PUBLISH c3 m3
(integer) 1
127.0.0.1:6379> PUBLISH c2 m2
(integer) 1
127.0.0.1:6379> PUBLISH c1 m1
(integer) 1

# the subscriber above receives the messages
# with a pattern subscription they look like:
# 1) "pmessage"
# 2) "new1"
# 3) "message1"
1) "message"
2) "c3"
3) "m3"
1) "message"
2) "c2"
3) "m2"
1) "message"
2) "c1"
3) "m1"


  • Example 2: PSUBSCRIBE
# subscribe to a pattern
127.0.0.1:6379> PSUBSCRIBE new*
Reading messages... (press Ctrl-C to quit)
1) "psubscribe"
2) "new*"
3) (integer) 1


# now another client publishes a message
# messages on every channel matching the pattern are received
127.0.0.1:6379> PUBLISH new1  message1


# the subscriber above receives the message
1) "pmessage"
2) "new1"
3) "message1"

Redis replication

What Redis replication is

Master/slave replication: after the master's data is updated, it is synchronized
to the slaves automatically according to the configured strategy. The master handles writes, the slaves serve reads

What replication is for
  • Read/write splitting
  • Disaster backup
Replication configuration notes
  • Principle: configure the slaves, not the master
  • Slave setup: slaveof <master-ip> <master-port> (must be re-issued every time the slave disconnects from the master, unless written into redis.conf)
Configuring replication in redis.conf

Suppose there are three Redis servers, redis6379, redis6380 and redis6381, all with IP 127.0.0.1

# make one copy of redis.conf per instance

# 1. set daemonize to yes
################################# GENERAL #####################################
daemonize yes

# 2. give each instance its own pid file: /var/run/redis_6379.pid, /var/run/redis_6380.pid and /var/run/redis_6381.pid
pidfile /var/run/redis_6379.pid

# 3. give each instance its own log file: 6379.log, 6380.log and 6381.log
logfile "6379.log"

# 4. set the ports: 6379, 6380 and 6381
################################## NETWORK #####################################
port 6379

# 5. give each instance its own dump file: dump_6379.rdb, dump_6380.rdb, dump_6381.rdb
################################ SNAPSHOTTING  ################################
dbfilename dump_6379.rdb

# 6. to set up master/slave via the config file, set these two parameters
################################# REPLICATION #################################
replicaof <masterip> <masterport>	# master host and port, same as the slaveof command
masterauth <master-password>		# the master's password, if it has one


Replication in action
  • Master/slave replication
# first inspect all three servers
127.0.0.1:6379> INFO replication
# Replication
role:master		# currently a master
connected_slaves:0	# no slaves attached
master_replid:d2e247ba3ffab4fea530cdff13d038f0bb7302d3
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:0
second_repl_offset:-1
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0


## run this on the 6380 and 6381 servers:
# make the server at IP 127.0.0.1, port 6379 the master
# replication is now established; on becoming a slave, the instance copies all of the master's data
127.0.0.1:6380> SLAVEOF 127.0.0.1 6379
OK


## now write something on the 6379 master
127.0.0.1:6379> set k1 v1
OK

## it can be read from both 6380 and 6381
127.0.0.1:6380> get k1
"v1"

# now query the master again
127.0.0.1:6379> INFO replication
# Replication
role:master		# currently a master
connected_slaves:2	# two slaves attached
slave0:ip=127.0.0.1,port=6380,state=online,offset=361,lag=1	# slave 1
slave1:ip=127.0.0.1,port=6381,state=online,offset=375,lag=0	# slave 2
master_replid:d2e247ba3ffab4fea530cdff13d038f0bb7302d3
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:375
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:374

  • Read/write splitting
# writes work on the master
127.0.0.1:6379> set k1 v1
OK
# slaves are read-only
127.0.0.1:6380> set k1 v1
(error) READONLY You can't write against a read only slave.

  • Note: if the master/slave binding was made by command rather than in the config file:
    1. If the master shuts down, the slaves stand by (their master link status shows down); when the master restarts they reconnect automatically and everything resumes
    2. If one slave shuts down, the master and the other slaves are unaffected; but after that slave restarts, the slaveof command must be issued again to rejoin
How replication works

After a slave connects to the master it sends a sync command. The master starts a background save and meanwhile buffers every write command it receives; when the background save finishes, the master ships the whole data file to the slave to complete one full synchronization

  • Full resync: the slave saves the received database file to disk and loads it into memory (first connection)
  • Incremental sync: the master then streams every further write command it collects to the slave (afterwards)

Note, however, that every reconnection to the master automatically triggers a full synchronization (full resync)

Drawbacks
  • Replication lag: every write happens on the master first and is then propagated to the slaves, so there is some delay between master and slave. The delay worsens when the system is busy, and grows with the number of slaves.
Replication topologies
One master, two slaves

One master with both slaves attached directly to it.

Chained slaves

One master; one slave attached to it, and a second slave attached to the first. Data flows down the chain

Promoting a slave

One master, two slaves: when the master goes down, promote one of the slaves with slaveof no one. A slave that was attached to the old master must then be re-attached to the new one with slaveof <new-master-ip> <port>.

★ Sentinel mode

The automated version of slave promotion: sentinels monitor the master in the background and, when it fails, promote one of the slaves to master by vote. If the old master later restarts, it is automatically attached to the new master as its slave.

# the 6379 master has the slaves 6380 and 6381 attached
# parameters:
# host_6379: a name of your choice for the monitored master
# 127.0.0.1: IP of the monitored server
# 6379: port of the monitored server
# 1: the quorum, i.e. how many sentinels must agree the master is down before a failover starts

# create a sentinel.conf file in /opt/redis-5.0.7/ containing:
# sentinel monitor host_6379 127.0.0.1 6379 1
[helin@localhost ~]$ vim /opt/redis-5.0.7/sentinel.conf
# start the sentinel
[helin@localhost ~]$ /opt/redis-5.0.7/src/redis-sentinel /opt/redis-5.0.7/sentinel.conf

Integrating Redis in a Java project

Jedis and JedisPool
  • Direct Jedis connection
  1. Add the dependency
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-redis</artifactId>
    <version>1.3.2.RELEASE</version>
</dependency>

  2. Verify the connection in a test class
@Test
public void test1(){

    Jedis jedis=new Jedis("127.0.0.1", 6379);
    System.out.println(jedis.ping());
}

  • JedisPool
  1. A JedisPoolUtil class exposing a singleton JedisPool
package cn.helin9s.redis;

import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;

/**
 * @author helin9s
 * @project redistest
 * @package cn.helin9s.redis
 * @description @TODO
 * @create 2020-01-31 17:27
 */
public class JedisPoolUtil {

    private static volatile JedisPool jedisPool=null;

    private JedisPoolUtil() {}

    /**
     * Obtain the connection pool as a double-checked-locking singleton
     * @return
     */
    public static JedisPool getJedisPoolInstance(){
        if (jedisPool == null) {
            synchronized (JedisPoolUtil.class){
                if (jedisPool == null) {
                    JedisPoolConfig config=new JedisPoolConfig();
                    config.setMaxWaitMillis(30000); //  maximum time to wait for a connection
                    config.setMaxTotal(32);         //  maximum number of connections
                    config.setMinIdle(6);           //  minimum number of idle connections to keep
                    config.setTestOnBorrow(false);  //  validate connections on borrow; hurts performance, best left off
                    config.setTestOnReturn(false);  //  validate connections on return; hurts performance, best left off
                    config.setTestWhileIdle(true);  //  validate connections idle longer than timeBetweenEvictionRunsMillis; cheap, best left on
                    config.setTimeBetweenEvictionRunsMillis(30000); // the idle threshold that testWhileIdle checks against

                    jedisPool=new JedisPool(config, "127.0.0.1", 6379);
                }
            }
        }
        return jedisPool;
    }
}


  2. Using the JedisPool
@Test
public void test1(){
	// Java 7 try-with-resources: resources declared in the parentheses are closed automatically after use
    try (Jedis jedis = JedisPoolUtil.getJedisPoolInstance().getResource()) {
        System.out.println(jedis.ping());
    }
}

Spring Boot integration
  1. Add the dependency
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
</dependency>

  2. Configure the Redis parameters in application.yml
## server port
server:
  port: 8888
spring:
  redis:
    # Redis server address
    host: 127.0.0.1
    # Redis database index (0 by default)
    database: 0
    # Redis server port
    port: 6379
    # Redis password (empty by default)
    password: ""
    # connection timeout in milliseconds
    timeout: 300
    jedis:
      pool:
        # maximum pool connections (negative = unlimited)
        max-active: 8
        # maximum blocking wait time for a connection (negative = unlimited)
        max-wait: -1
        # maximum idle connections in the pool
        max-idle: 8
        # minimum idle connections in the pool
        min-idle: 0

  3. Write the RedisConfig.java class
package com.dist.config;

import com.fasterxml.jackson.annotation.JsonAutoDetect;
import com.fasterxml.jackson.annotation.PropertyAccessor;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.springframework.cache.annotation.CachingConfigurerSupport;
import org.springframework.cache.annotation.EnableCaching;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.connection.RedisConnectionFactory;
import org.springframework.data.redis.core.*;
import org.springframework.data.redis.serializer.Jackson2JsonRedisSerializer;
import org.springframework.data.redis.serializer.StringRedisSerializer;

/**
 * Redis configuration class
 */
@Configuration
@EnableCaching // enable cache annotations
public class RedisConfig extends CachingConfigurerSupport {

    /**
     * RedisTemplate configuration
     * @param factory
     * @return
     */
    @Bean
    public RedisTemplate<String, Object> redisTemplate(RedisConnectionFactory factory) {

        RedisTemplate<String, Object> template = new RedisTemplate<>();
        // wire in the connection factory
        template.setConnectionFactory(factory);

        // serialize/deserialize redis values with Jackson2JsonRedisSerializer (the default is JDK serialization)
        Jackson2JsonRedisSerializer jacksonSeial = new Jackson2JsonRedisSerializer(Object.class);

        ObjectMapper om = new ObjectMapper();
        // serialize fields, getters and setters at ANY visibility, including private and public
        om.setVisibility(PropertyAccessor.ALL, JsonAutoDetect.Visibility.ANY);
        // record type info when serializing; the class must not be final (final classes such as String or Integer would throw an exception)
        om.enableDefaultTyping(ObjectMapper.DefaultTyping.NON_FINAL);
        jacksonSeial.setObjectMapper(om);

        // serialize values as JSON
        template.setValueSerializer(jacksonSeial);
        // serialize/deserialize redis keys with StringRedisSerializer
        template.setKeySerializer(new StringRedisSerializer());

        // hash key and hash value serializers
        template.setHashKeySerializer(new StringRedisSerializer());
        template.setHashValueSerializer(jacksonSeial);
        template.afterPropertiesSet();

        return template;
    }

    /**
     * Operations on hash values
     *
     * @param redisTemplate
     * @return
     */
    @Bean
    public HashOperations<String, String, Object> hashOperations(RedisTemplate<String, Object> redisTemplate) {
        return redisTemplate.opsForHash();
    }

    /**
     * Operations on string values
     *
     * @param redisTemplate
     * @return
     */
    @Bean
    public ValueOperations<String, Object> valueOperations(RedisTemplate<String, Object> redisTemplate) {
        return redisTemplate.opsForValue();
    }

    /**
     * Operations on list values
     *
     * @param redisTemplate
     * @return
     */
    @Bean
    public ListOperations<String, Object> listOperations(RedisTemplate<String, Object> redisTemplate) {
        return redisTemplate.opsForList();
    }

    /**
     * Operations on (unordered) set values
     *
     * @param redisTemplate
     * @return
     */
    @Bean
    public SetOperations<String, Object> setOperations(RedisTemplate<String, Object> redisTemplate) {
        return redisTemplate.opsForSet();
    }

    /**
     * Operations on sorted-set values
     *
     * @param redisTemplate
     * @return
     */
    @Bean
    public ZSetOperations<String, Object> zSetOperations(RedisTemplate<String, Object> redisTemplate) {
        return redisTemplate.opsForZSet();
    }

}


  4. Write the RedisUtil.java class
package com.dist.util;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.redis.core.RedisTemplate;
import org.springframework.stereotype.Component;
import org.springframework.util.CollectionUtils;

import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.TimeUnit;

/**
 * RedisTemplate wrapper
 */
@Component
public class RedisUtil {

    @Autowired
    private RedisTemplate<String, Object> redisTemplate;

    public RedisUtil(RedisTemplate<String, Object> redisTemplate) {
        this.redisTemplate = redisTemplate;
    }

    /**
     * Set a key's time to live
     * @param key the key
     * @param time TTL in seconds
     * @return
     */
    public boolean expire(String key,long time){
        try {
            if(time>0){
                redisTemplate.expire(key, time, TimeUnit.SECONDS);
            }
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * Get a key's remaining time to live
     * @param key the key, must not be null
     * @return remaining TTL in seconds; -1 means the key never expires
     */
    public long getExpire(String key){
        return redisTemplate.getExpire(key,TimeUnit.SECONDS);
    }

    /**
     * Check whether a key exists
     * @param key the key
     * @return true if it exists, false otherwise
     */
    public boolean hasKey(String key){
        try {
            return redisTemplate.hasKey(key);
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * Delete keys
     * @param key one or more keys
     */
    @SuppressWarnings("unchecked")
    public void del(String ... key){
        if(key!=null&&key.length>0){
            if(key.length==1){
                redisTemplate.delete(key[0]);
            }else{
                redisTemplate.delete(CollectionUtils.arrayToList(key));
            }
        }
    }

    //============================String=============================
    /**
     * Plain get
     * @param key the key
     * @return the value
     */
    public Object get(String key){
        return key==null?null:redisTemplate.opsForValue().get(key);
    }

    /**
     * Plain set
     * @param key the key
     * @param value the value
     * @return true on success, false on failure
     */
    public boolean set(String key,Object value) {
        try {
            redisTemplate.opsForValue().set(key, value);
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * Plain set with a time to live
     * @param key the key
     * @param value the value
     * @param time TTL in seconds; must be > 0, a value <= 0 stores the key without expiry
     * @return true on success, false on failure
     */
    public boolean set(String key,Object value,long time){
        try {
            if(time>0){
                redisTemplate.opsForValue().set(key, value, time, TimeUnit.SECONDS);
            }else{
                set(key, value);
            }
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * Increment a numeric value
     * @param key key
     * @param delta amount to add (must be greater than 0)
     * @return the value after incrementing
     */
    public long incr(String key, long delta){
        if(delta<=0){
            throw new IllegalArgumentException("increment delta must be greater than 0");
        }
        return redisTemplate.opsForValue().increment(key, delta);
    }

    /**
     * Decrement a numeric value
     * @param key key
     * @param delta amount to subtract (must be greater than 0)
     * @return the value after decrementing
     */
    public long decr(String key, long delta){
        if(delta<=0){
            throw new IllegalArgumentException("decrement delta must be greater than 0");
        }
        return redisTemplate.opsForValue().increment(key, -delta);
    }

    //================================Map=================================
    /**
     * HashGet
     * @param key 键 不能为null
     * @param item 项 不能为null
     * @return 值
     */
    public Object hget(String key,String item){
        return redisTemplate.opsForHash().get(key, item);
    }

    /**
     * 获取hashKey对应的所有键值
     * @param key 键
     * @return 对应的多个键值
     */
    public Map<Object,Object> hmget(String key){
        return redisTemplate.opsForHash().entries(key);
    }

    /**
     * HashSet
     * @param key 键
     * @param map 对应多个键值
     * @return true 成功 false 失败
     */
    public boolean hmset(String key, Map<String,Object> map){
        try {
            redisTemplate.opsForHash().putAll(key, map);
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * HashSet 并设置时间
     * @param key 键
     * @param map 对应多个键值
     * @param time 时间(秒)
     * @return true成功 false失败
     */
    public boolean hmset(String key, Map<String,Object> map, long time){
        try {
            redisTemplate.opsForHash().putAll(key, map);
            if(time>0){
                expire(key, time);
            }
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * 向一张hash表中放入数据,如果不存在将创建
     * @param key 键
     * @param item 项
     * @param value 值
     * @return true 成功 false失败
     */
    public boolean hset(String key,String item,Object value) {
        try {
            redisTemplate.opsForHash().put(key, item, value);
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * 向一张hash表中放入数据,如果不存在将创建
     * @param key 键
     * @param item 项
     * @param value 值
     * @param time 时间(秒)  注意:如果已存在的hash表有时间,这里将会替换原有的时间
     * @return true 成功 false失败
     */
    public boolean hset(String key,String item,Object value,long time) {
        try {
            redisTemplate.opsForHash().put(key, item, value);
            if(time>0){
                expire(key, time);
            }
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * 删除hash表中的值
     * @param key 键 不能为null
     * @param item 项 可以使多个 不能为null
     */
    public void hdel(String key, Object... item){
        redisTemplate.opsForHash().delete(key,item);
    }

    /**
     * 判断hash表中是否有该项的值
     * @param key 键 不能为null
     * @param item 项 不能为null
     * @return true 存在 false不存在
     */
    public boolean hHasKey(String key, String item){
        return redisTemplate.opsForHash().hasKey(key, item);
    }

    /**
     * hash递增 如果不存在,就会创建一个 并把新增后的值返回
     * @param key 键
     * @param item 项
     * @param by 要增加几(大于0)
     * @return
     */
    public double hincr(String key, String item,double by){
        return redisTemplate.opsForHash().increment(key, item, by);
    }

    /**
     * Decrement a hash field; the field is created if absent
     * @param key key
     * @param item hash field
     * @param by amount to subtract (must be greater than 0)
     * @return the value after decrementing
     */
    public double hdecr(String key, String item,double by){
        return redisTemplate.opsForHash().increment(key, item,-by);
    }

    //============================set=============================
    /**
     * 根据key获取Set中的所有值
     * @param key 键
     * @return
     */
    public Set<Object> sGet(String key){
        try {
            return redisTemplate.opsForSet().members(key);
        } catch (Exception e) {
            e.printStackTrace();
            return null;
        }
    }

    /**
     * Check whether a value is a member of the set stored at key
     * @param key key
     * @param value value to test
     * @return true if present, false otherwise
     */
    public boolean sHasKey(String key, Object value){
        try {
            // isMember may return null; unbox via equals to avoid a NullPointerException
            return Boolean.TRUE.equals(redisTemplate.opsForSet().isMember(key, value));
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * 将数据放入set缓存
     * @param key 键
     * @param values 值 可以是多个
     * @return 成功个数
     */
    public long sSet(String key, Object...values) {
        try {
            return redisTemplate.opsForSet().add(key, values);
        } catch (Exception e) {
            e.printStackTrace();
            return 0;
        }
    }

    /**
     * 将set数据放入缓存
     * @param key 键
     * @param time 时间(秒)
     * @param values 值 可以是多个
     * @return 成功个数
     */
    public long sSetAndTime(String key,long time,Object...values) {
        try {
            Long count = redisTemplate.opsForSet().add(key, values);
            if(time>0) {
                expire(key, time);
            }
            return count;
        } catch (Exception e) {
            e.printStackTrace();
            return 0;
        }
    }

    /**
     * 获取set缓存的长度
     * @param key 键
     * @return
     */
    public long sGetSetSize(String key){
        try {
            return redisTemplate.opsForSet().size(key);
        } catch (Exception e) {
            e.printStackTrace();
            return 0;
        }
    }

    /**
     * Remove the given values from the set stored at key
     * @param key key
     * @param values one or more values to remove
     * @return number of members actually removed
     */
    public long setRemove(String key, Object ...values) {
        try {
            Long count = redisTemplate.opsForSet().remove(key, values);
            return count;
        } catch (Exception e) {
            e.printStackTrace();
            return 0;
        }
    }
    //===============================list=================================

    /**
     * 获取list缓存的内容
     * @param key 键
     * @param start 开始
     * @param end 结束  0 到 -1代表所有值
     * @return
     */
    public List<Object> lGet(String key, long start, long end){
        try {
            return redisTemplate.opsForList().range(key, start, end);
        } catch (Exception e) {
            e.printStackTrace();
            return null;
        }
    }

    /**
     * 获取list缓存的长度
     * @param key 键
     * @return
     */
    public long lGetListSize(String key){
        try {
            return redisTemplate.opsForList().size(key);
        } catch (Exception e) {
            e.printStackTrace();
            return 0;
        }
    }

    /**
     * 通过索引 获取list中的值
     * @param key 键
     * @param index 索引  index>=0时, 0 表头,1 第二个元素,依次类推;index<0时,-1,表尾,-2倒数第二个元素,依次类推
     * @return
     */
    public Object lGetIndex(String key,long index){
        try {
            return redisTemplate.opsForList().index(key, index);
        } catch (Exception e) {
            e.printStackTrace();
            return null;
        }
    }

    /**
     * 将list放入缓存
     * @param key 键
     * @param value 值
     * @return
     */
    public boolean lSet(String key, Object value) {
        try {
            redisTemplate.opsForList().rightPush(key, value);
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * 将list放入缓存
     * @param key 键
     * @param value 值
     * @param time 时间(秒)
     * @return
     */
    public boolean lSet(String key, Object value, long time) {
        try {
            redisTemplate.opsForList().rightPush(key, value);
            if (time > 0) {
                expire(key, time);
            }
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * 将list放入缓存
     * @param key 键
     * @param value 值
     * @return
     */
    public boolean lSet(String key, List<Object> value) {
        try {
            redisTemplate.opsForList().rightPushAll(key, value);
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * 将list放入缓存
     * @param key 键
     * @param value 值
     * @param time 时间(秒)
     * @return
     */
    public boolean lSet(String key, List<Object> value, long time) {
        try {
            redisTemplate.opsForList().rightPushAll(key, value);
            if (time > 0) {
                expire(key, time);
            }
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * 根据索引修改list中的某条数据
     * @param key 键
     * @param index 索引
     * @param value 值
     * @return
     */
    public boolean lUpdateIndex(String key, long index,Object value) {
        try {
            redisTemplate.opsForList().set(key, index, value);
            return true;
        } catch (Exception e) {
            e.printStackTrace();
            return false;
        }
    }

    /**
     * Remove occurrences of value from the list (Redis LREM semantics)
     * @param key key
     * @param count count > 0: remove from the head; count < 0: remove from the tail; count = 0: remove all occurrences
     * @param value value to remove
     * @return number of elements removed
     */
    public long lRemove(String key,long count,Object value) {
        try {
            Long remove = redisTemplate.opsForList().remove(key, count, value);
            return remove;
        } catch (Exception e) {
            e.printStackTrace();
            return 0;
        }
    }

}
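Several helpers above unbox nullable wrapper results (`Boolean`, `Long`) returned by Spring Data Redis; inside a `try` block the resulting `NullPointerException` is silently swallowed by the broad `catch`, which can mask real failures. The following is a minimal, Redis-free sketch of the defensive unboxing pattern (all names here are illustrative):

```java
public class NullSafeUnboxDemo {
    // Simulates an API that, like RedisTemplate.hasKey, may return null
    static Boolean mayBeNull(boolean returnNull) {
        return returnNull ? null : Boolean.TRUE;
    }

    static boolean unsafe(boolean returnNull) {
        return mayBeNull(returnNull); // unboxing null throws NullPointerException
    }

    static boolean safe(boolean returnNull) {
        return Boolean.TRUE.equals(mayBeNull(returnNull)); // null simply yields false
    }

    public static void main(String[] args) {
        System.out.println(safe(false)); // prints true
        System.out.println(safe(true));  // prints false, no exception
        try {
            unsafe(true);
        } catch (NullPointerException e) {
            System.out.println("NPE on unboxing");
        }
    }
}
```

The `Boolean.TRUE.equals(...)` form treats a missing answer as `false` explicitly, instead of relying on the catch-all exception handler.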


  1. Create the test entity
@Data
public class UserEntity implements Serializable {
    private Long id;
    private String guid;
    private String name;
    private String age;
    private Date createTime;
}
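`UserEntity` implements `Serializable` because the JDK serializer that `RedisTemplate` uses by default requires every cached value to be serializable. A minimal sketch of such a POJO surviving a JDK-serialization round trip (the `User` class here is an illustrative stand-in for `UserEntity`, with Lombok omitted):

```java
import java.io.*;
import java.util.Date;

public class SerializationDemo {
    // Illustrative stand-in for UserEntity; getters/setters omitted for brevity
    static class User implements Serializable {
        private static final long serialVersionUID = 1L;
        Long id; String name; Date createTime;
        User(Long id, String name, Date createTime) {
            this.id = id; this.name = name; this.createTime = createTime;
        }
    }

    static byte[] serialize(Object o) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(o);
        }
        return bos.toByteArray();
    }

    static Object deserialize(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        User u = new User(1L, "zhangsan", new Date());
        User copy = (User) deserialize(serialize(u));
        System.out.println(copy.name.equals(u.name) && copy.id.equals(u.id)); // prints true
    }
}
```

A class missing `Serializable` fails at `oos.writeObject` with a `NotSerializableException`, which is exactly what happens when a non-serializable value is passed to the default `RedisTemplate` serializer.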

  2. Modify the Application class
package com.foreknow;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration;

// This exclusion is required: no datasource is configured, so startup fails without it
@SpringBootApplication(exclude = DataSourceAutoConfiguration.class)
public class ForeknowRedisApplication {

    public static void main(String[] args) {
        SpringApplication.run(ForeknowRedisApplication.class, args);
    }
}

  3. Add a test controller
package com.dist.controller;

import com.dist.entity.UserEntity;
import com.dist.util.RedisUtil;
import lombok.extern.slf4j.Slf4j;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

import javax.annotation.Resource;
import java.util.Date;

@Slf4j
@RequestMapping("/redis")
@RestController
public class RedisController {

    private static final int EXPIRE_TIME = 60;   // TTL of cached entries, in seconds

    @Resource
    private RedisUtil redisUtil;

    @RequestMapping("set")
    public boolean redisSet(String key, String value){
        UserEntity userEntity = new UserEntity();
        userEntity.setId(1L);
        userEntity.setGuid(String.valueOf(1));
        userEntity.setName("zhangsan");
        userEntity.setAge(String.valueOf(20));
        userEntity.setCreateTime(new Date());

        //return redisUtil.set(key, userEntity, EXPIRE_TIME);

        return redisUtil.set(key, value);
    }

    @RequestMapping("get")
    public Object redisGet(String key){
        return redisUtil.get(key);
    }

    @RequestMapping("expire")
    public boolean expire(String key){
        return redisUtil.expire(key, EXPIRE_TIME);
    }
}

