Getting Started with Redis

Redis

Redis official site: https://redis.io/

Redis Chinese site: https://www.redis.net.cn/

Redis Enterprise software docs: https://docs.redis.com/latest/rs/

Note: the demo environment for this article is CentOS-7-Minimal, FinalShell, and Redis 5.0.14.

Redis is a key-value store. It is similar to Memcached, but it supports a richer set of value types, including string, list (linked list), set, zset (sorted set), and hash. These data types support push/pop, add/remove, intersection, union, and difference, among many other operations, and all of these operations are atomic. On top of this, Redis supports sorting in a variety of ways. As with Memcached, data is kept in memory for efficiency. The difference is that Redis periodically writes updated data to disk, or appends modification operations to a log file, and on this foundation implements master-slave (primary/replica) synchronization.
Redis is a high-performance key-value database (single-threaded, using I/O multiplexing). Redis makes up for many of the shortcomings of key/value stores like Memcached, and in some scenarios serves as an excellent complement to a relational database.

1 Installation, Deployment, and Configuration

1.1 Installing Redis

Download Redis

Download Redis 5.0.14, then upload the resulting archive to the server; here it goes under the /disk1 directory.

[root@redis-server-master ~]# cd /
[root@redis-server-master /]# mkdir disk1
[root@redis-server-master /]# cd disk1/
[root@redis-server-master disk1]# ls
redis-5.0.14.tar.gz
# Extract the archive
[root@redis-server-master disk1]# tar -zxvf redis-5.0.14.tar.gz 
# Install build dependencies
[root@redis-server-master disk1]# yum install make
[root@redis-server-master disk1]# yum install gcc-c++
# Switch to the extracted directory
[root@redis-server-master disk1]# cd redis-5.0.14
# Compile the sources
[root@redis-server-master redis-5.0.14]# make
# Install: this places the redis binaries under /usr/local/bin
[root@redis-server-master redis-5.0.14]# make install

At this point Redis is installed.

1.2 Configuring Redis

[root@redis-server-master disk1]# cd redis-5.0.14
[root@redis-server-master redis-5.0.14]# cd utils/
[root@redis-server-master utils]# ls
build-static-symbols.tcl  lru
cluster_fail_time.tcl     redis-copy.rb
corrupt_rdb.c             redis_init_script
create-cluster            redis_init_script.tpl
generate-command-help.rb  redis-sha1.rb
graphs                    releasetools
hashtable                 speed-regression.tcl
hyperloglog               whatisdoing.sh
install_server.sh
# Copy the Redis init script to /etc/init.d/
[root@redis-server-master utils]# cp redis_init_script /etc/init.d/
[root@redis-server-master utils]# cd ../
# Create the Redis working directory
[root@redis-server-master redis-5.0.14]# mkdir /usr/local/redis -p
# Copy the core Redis config file to /usr/local/redis
[root@redis-server-master redis-5.0.14]# cp redis.conf /usr/local/redis
[root@redis-server-master redis-5.0.14]# cd /usr/local/redis/
[root@redis-server-master redis]# ls
redis.conf
# Edit redis.conf
[root@redis-server-master redis]# vim redis.conf

Make the following changes:

# Run as a daemon in the background
daemonize yes
# Redis working directory
dir /usr/local/redis/persistence
# Allow remote access
bind 0.0.0.0
# Disable protected mode
protected-mode no
# Set a password
requirepass 123456
# Port to listen on
port 6379

Create the working directory

[root@redis-server-master redis]# mkdir persistence
[root@redis-server-master redis]# ls
persistence  redis.conf

Edit the init script

[root@redis-server-master redis]# cd /etc/init.d/
[root@redis-server-master init.d]# vim redis_init_script

The complete redis_init_script after the changes:

#!/bin/sh
#
# Simple Redis init.d script conceived to work on Linux systems
# as it does use of the /proc filesystem.
### BEGIN INIT INFO

# Provides:     redis_6379
# Default-Start:        2 3 4 5
# Default-Stop:         0 1 6
# Short-Description:    Redis data structure server
# Description:          Redis data structure server. See https://redis.io
### END INIT INFO

# chkconfig: 2345 10 90
# description: Start and stop redis

REDISPORT=6379
EXEC=/usr/local/bin/redis-server
CLIEXEC=/usr/local/bin/redis-cli

PIDFILE=/var/run/redis_${REDISPORT}.pid
CONF="/usr/local/redis/redis.conf"

case "$1" in
    start)
        if [ -f $PIDFILE ]
        then
                echo "$PIDFILE exists, process is already running or crashed"
        else
                echo "Starting Redis server..."
                $EXEC $CONF
        fi
        ;;
    stop)
        if [ ! -f $PIDFILE ]
        then
                echo "$PIDFILE does not exist, process is not running"
        else
                PID=$(cat $PIDFILE)
                echo "Stopping ..."
                $CLIEXEC -a "123456" -p $REDISPORT shutdown
                while [ -x /proc/${PID} ]
                do
                    echo "Waiting for Redis to shutdown ..."
                    sleep 1
                done
                echo "Redis stopped"
        fi
        ;;
    *)
        echo "Please use start or stop as first argument"
        ;;
esac

Start it and test

#Make the script executable
[root@redis-server-master init.d]# chmod 777 redis_init_script 
#Start redis via the script
[root@redis-server-master init.d]# ./redis_init_script start
Starting Redis server...
34661:C 12 May 2023 00:43:33.270 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
34661:C 12 May 2023 00:43:33.270 # Redis version=5.0.14, bits=64, commit=00000000, modified=0, pid=34661, just started
34661:C 12 May 2023 00:43:33.270 # Configuration loaded
#Check the redis process
[root@redis-server-master init.d]# ps -ef | grep redis
root      34662      1  0 00:43 ?        00:00:00 /usr/local/bin/redis-server 0.0.0.0:6379
root      36112   6846  0 00:44 pts/0    00:00:00 grep --color=auto redis
#Enable start on boot
[root@redis-server-master init.d]# chkconfig redis_init_script on
#Reboot
[root@redis-server-master init.d]# reboot
#Reconnect and check whether redis started automatically

Connection closed
Connecting to host...
Connected to host
Last login: Fri May 12 00:19:47 2023 from 192.168.18.107
[root@redis-server-master ~]# ps -ef | grep redis
root        673      1  0 15:43 ?        00:00:00 /usr/local/bin/redis-server 0.0.0.0:6379
root       8222   8018  0 15:44 pts/0    00:00:00 grep --color=auto redis

2 The Redis Client (redis-cli)

2.1 redis-cli

Official documentation: https://redis.io/docs/ui/cli/

redis-cli is the Redis command line interface. It lets you send commands to Redis directly from the terminal and read the replies the server sends back.
It has two main modes:

  1. Interactive mode, a REPL (read-eval-print loop) in which the user types commands and receives replies;
  2. Command mode, in which the command is passed as arguments to redis-cli, executed, and the result printed on standard output.

2.2 Common Client Commands

Official documentation: https://redis.io/commands/

Chinese documentation: https://www.redis.net.cn/order/

Keys
Keys are binary safe: any binary sequence can be used as a key, whether a string like OneMoreStudy or the contents of an image file; the empty string is also a valid key. A few further rules apply:

  1. Avoid very long keys, such as a 1KB key. It is not only a matter of memory: looking up a key in the dataset may require expensive key comparisons. If you really have a large key, hashing it first (for example with MD5 or SHA1) is a good idea. Avoid overly short keys too: compared with OMS100f, a key like one-more-study:100:fans is far more readable. It may use a bit more memory, but relative to the memory taken by the value, the extra key memory is small. Aim for a balance: neither too long nor too short.
  2. Separate fields with colons, and separate words within a field with hyphens or dots, e.g. one-more-study:100:fans or one.more.study:100:fans.
  3. The maximum allowed key size is 512MB.
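Rule 1's suggestion of hashing oversized keys can be sketched in a few lines. This is an illustration only; the `hashed:` prefix and the 256-byte threshold are arbitrary choices, not anything Redis prescribes:

```python
import hashlib

def normalize_key(key: str, max_len: int = 256) -> str:
    """Return the key unchanged if short enough, else a fixed-size digest."""
    if len(key.encode("utf-8")) <= max_len:
        return key
    # SHA-1 keeps the key a constant 40 hex chars regardless of input size.
    digest = hashlib.sha1(key.encode("utf-8")).hexdigest()
    return "hashed:" + digest

print(normalize_key("one-more-study:100:fans"))  # short key is kept as-is
print(len(normalize_key("x" * 5000)))            # 47 = len("hashed:") + 40
```

The trade-off mentioned above applies: hashing sacrifices readability, so it is only worth doing for keys that would otherwise be large.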

Common key commands include the following:

Command                   Description
KEYS *                    List all keys in the current database
SELECT db                 Switch to another database
FLUSHDB                   Empty the current database
FLUSHALL                  Empty all databases
DEL key [key ...]         Delete one or more keys
DUMP key                  Serialize the given key and return the serialized value
EXISTS key [key ...]      Check whether the given keys exist
EXPIRE key seconds        Set an expiry time on the given key
RENAME key newkey         Rename a key
RENAMENX key newkey       Rename key to newkey only if newkey does not exist
TYPE key                  Return the type of the value stored at key
TTL key                   Return the remaining time to live of the key, in seconds

You can also run help @generic to see more key-related commands:

[root@redis-server-master ~]# redis-cli
127.0.0.1:6379> auth 123456
OK
127.0.0.1:6379> help @generic
 
  DEL key [key ...]       # command syntax
  summary: Delete a key   # command description
  since: 1.0.0            # version the command first appeared in
  
...

3 Common Redis Data Types

3.1 string

Redis's string type is the simplest type of value you can associate with a key. It is the only data type in Memcached, so it is also very natural for newcomers to use in Redis. Since keys are strings, when we use a string as the value too, we are mapping one string to another. The string type is useful for many scenarios, such as caching HTML fragments or pages.

A brief tour of the string commands (run inside redis-cli):

127.0.0.1:6379> set one-more-key OneMoreStudy
OK
127.0.0.1:6379> get one-more-key
"OneMoreStudy"

The SET and GET commands are how you set and retrieve a string value. Note that if the key is already associated with a value, SET replaces the existing value stored at the key. The value can be any kind of binary data, for instance a jpeg image, but a string cannot be larger than 512MB. SET also accepts some useful optional arguments, for example:

127.0.0.1:6379> set one-more-key Java nx   # only set if the key does NOT exist: fails here
(nil)
127.0.0.1:6379> set one-more-key Java xx   # only set if the key DOES exist: succeeds here
OK

Even though strings are the basic values of Redis, there are useful operations you can perform on them. For instance:

127.0.0.1:6379> set one-more-counter 50
OK
127.0.0.1:6379> incr one-more-counter   # increment by 1
(integer) 51
127.0.0.1:6379> incr one-more-counter   # increment by 1
(integer) 52
127.0.0.1:6379> incrby one-more-counter 5   # increment by 5
(integer) 57

The INCR command parses the string value as an integer, increments it by one, and finally sets the obtained value as the new value. There are other similar commands such as INCRBY, DECR, and DECRBY. INCR is an atomic operation: even if multiple clients issue INCR against the same key at the same time, they will never enter a race condition. For example, with one-more-counter set to 50 as above, even if two clients issue INCR simultaneously, the final value is guaranteed to be 52.
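Redis gets this guarantee for free from its single-threaded command execution. A rough simulation of that guarantee, where a lock stands in for Redis's serialized command loop (purely illustrative, not how Redis is implemented):

```python
import threading

store = {"one-more-counter": 50}
command_lock = threading.Lock()  # stands in for Redis's single-threaded executor

def incr(key: str) -> int:
    # The read-modify-write happens as one indivisible step, like Redis's INCR.
    with command_lock:
        store[key] = int(store[key]) + 1
        return store[key]

# Two "clients" issue INCR concurrently.
threads = [threading.Thread(target=incr, args=("one-more-counter",)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(store["one-more-counter"])  # 52, never 51, regardless of scheduling
```

Without the lock, both threads could read 50, both write 51, and one increment would be lost: exactly the race Redis's design rules out.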

The MSET and MGET commands set or retrieve the values of multiple keys in a single command, which is also useful for reducing latency. For example:

127.0.0.1:6379> mset a 1 b 2 c 3
OK
127.0.0.1:6379> mget a b c
1) "1"
2) "2"
3) "3"

With MGET, Redis returns an array of values.

DEL deletes a key and its associated value, returning 1 if the key existed and 0 if it did not. EXISTS tells you whether a given key exists in Redis, again returning 1 if it exists and 0 otherwise. For example:

127.0.0.1:6379> set one-more-key OneMoreStudy
OK
127.0.0.1:6379> exists one-more-key
(integer) 1
127.0.0.1:6379> del one-more-key
(integer) 1
127.0.0.1:6379> exists one-more-key
(integer) 0

The TYPE command returns the data type of the value stored at the given key, for example:

127.0.0.1:6379> set one-more-key OneMoreStudy
OK
127.0.0.1:6379> type one-more-key
string
127.0.0.1:6379> del one-more-key
(integer) 1
127.0.0.1:6379> type one-more-key
none

Before moving on to more complex data structures, we need to cover one more feature that works regardless of the value type: the EXPIRE command. It sets a timeout on a key; once the timeout elapses, the key is automatically destroyed, exactly as if DEL had been called on it. For example:

127.0.0.1:6379> set one-more-key OneMoreStudy
OK
127.0.0.1:6379> expire one-more-key 5
(integer) 1
127.0.0.1:6379> get one-more-key #called immediately
"OneMoreStudy"
127.0.0.1:6379> get one-more-key #called 5 seconds later
(nil)

In the example above, EXPIRE was used to set the timeout; PERSIST can be used to remove the timeout and make the key persist forever. Besides EXPIRE, a timeout can also be set directly with SET, for example:

127.0.0.1:6379> set one-more-key OneMoreStudy ex 10 #expire after 10 seconds
OK
127.0.0.1:6379> ttl one-more-key
(integer) 9

The example above sets the string value OneMoreStudy at the key one-more-key with a 10-second timeout, then calls TTL to check the key's remaining time to live.
Timeouts can be set with second or millisecond precision, but the expiry resolution is always 1 millisecond. In fact, what the Redis server stores is not the length of the timeout, but the absolute time at which the key expires.
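The absolute-expiry design just described can be sketched in miniature. This toy store is an assumption-laden illustration (lazy expiry on read only); real Redis also expires keys actively in the background:

```python
import time

class ExpiringStore:
    def __init__(self):
        self._data = {}    # key -> value
        self._expire = {}  # key -> absolute unix timestamp of expiry

    def set(self, key, value, ex=None):
        self._data[key] = value
        if ex is not None:
            # Store the moment of expiry, not the duration, like Redis.
            self._expire[key] = time.time() + ex

    def get(self, key):
        exp = self._expire.get(key)
        if exp is not None and time.time() >= exp:
            self._data.pop(key, None)    # lazily destroy, as if DEL were called
            self._expire.pop(key, None)
            return None
        return self._data.get(key)

    def ttl(self, key):
        exp = self._expire.get(key)
        return -1 if exp is None else max(0, int(exp - time.time()))

s = ExpiringStore()
s.set("one-more-key", "OneMoreStudy", ex=10)
print(s.get("one-more-key"))  # "OneMoreStudy"
print(s.ttl("one-more-key"))  # 9 or 10 depending on timing
```

Storing the absolute timestamp is what makes TTL cheap to answer: it is one subtraction, with no per-key timer needed.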

You can see the string commands with help @string.

3.2 hash

A Redis hash is exactly what you would expect a "hash" to be: an unordered map that stores many field-value pairs, for example:

127.0.0.1:6379> hmset one-more-fans:100 name Lily age 25
OK
127.0.0.1:6379> hget one-more-fans:100 name
"Lily"
127.0.0.1:6379> hget one-more-fans:100 age
"25"
127.0.0.1:6379> hgetall one-more-fans:100
1) "name"
2) "Lily"
3) "age"
4) "25"

While hashes are handy to represent objects, there is no practical limit to the number of fields you can put inside a hash, so you can use hashes in many different ways. Besides HGET, which fetches the value of a single field, HMGET fetches several fields at once and returns an array of values, for example:

127.0.0.1:6379> hmget one-more-fans:100 name age non-existent-field
1) "Lily"
2) "25"
3) (nil)

HINCRBY increments the value of a specific field, for example:

127.0.0.1:6379> hget one-more-fans:100 age
"25"
127.0.0.1:6379> hincrby one-more-fans:100 age 3
(integer) 28
127.0.0.1:6379> hget one-more-fans:100 age
"28"

Redis's hash implementation uses the same "array + linked list" structure as Java's HashMap: when entries collide on the same array slot, the colliding entries are chained together in a linked list. For performance reasons, however, Redis rehashes differently (incrementally).
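The "array + linked list" (separate chaining) layout described above can be sketched as follows. This is a teaching sketch only; real Redis dicts add incremental rehashing and resizing on top of this:

```python
class ChainedHash:
    """Array of buckets; entries that collide on a bucket are chained in a list."""

    def __init__(self, nbuckets=4):
        self.buckets = [[] for _ in range(nbuckets)]

    def _bucket(self, key):
        # The array index is derived from the key's hash.
        return self.buckets[hash(key) % len(self.buckets)]

    def set(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:                 # field already present: overwrite
                bucket[i] = (key, value)
                return
        bucket.append((key, value))      # collision: append to the chain

    def get(self, key):
        for k, v in self._bucket(key):   # walk the chain looking for the field
            if k == key:
                return v
        return None

h = ChainedHash()
h.set("name", "Lily")
h.set("age", "25")
print(h.get("name"), h.get("age"))  # Lily 25
```

With only 4 buckets, collisions are frequent and chains grow; this is exactly why a real implementation must rehash into a larger array as the hash fills up.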

You can see the hash commands with help @hash.

3.3 list

Redis's list type is implemented as a linked list, which means adding or removing an element at the head or tail takes O(1) time: very fast. Accessing an element by index, however, is O(n), which is much slower. If you need fast access to large amounts of data, consider a sorted set, covered later.
LPUSH adds a new element to the left (head) of the list, while RPUSH adds it to the right (tail). Finally, LRANGE extracts a range of elements from the list. For example:

127.0.0.1:6379> rpush one-more-list A
(integer) 1
127.0.0.1:6379> rpush one-more-list B
(integer) 2
127.0.0.1:6379> lpush one-more-list first
(integer) 3
127.0.0.1:6379> lrange one-more-list 0 -1
1) "first"
2) "A"
3) "B"

LRANGE takes two extra arguments: the index of the first element to return and the index of the last. A negative index makes Redis count from the end: -1 is the last element of the list, -2 the second to last, and so on.
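The index arithmetic just described can be sketched directly (a simplified model; it mirrors how Python's own negative indices behave):

```python
def lrange(lst, start, stop):
    """Emulate LRANGE: stop is inclusive; negative indices count from the end."""
    n = len(lst)
    if start < 0:
        start = max(0, n + start)  # -1 becomes the last index, clamped at 0
    if stop < 0:
        stop = n + stop
    return lst[start:stop + 1] if stop >= start else []

items = ["first", "A", "B"]
print(lrange(items, 0, -1))   # ['first', 'A', 'B']
print(lrange(items, -2, -1))  # ['A', 'B']
```

So `0 -1` in the transcript above means "from the first element through the last", which is why the whole list comes back.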

LPUSH and RPUSH accept multiple arguments, so you can push several elements with a single command, for example:

127.0.0.1:6379> rpush one-more-list 1 2 3 4 5 "last"
(integer) 9
127.0.0.1:6379> lrange one-more-list 0 -1
1) "first"
2) "A"
3) "B"
4) "1"
5) "2"
6) "3"
7) "4"
8) "5"
9) "last"

You can also pop elements from a Redis list, removing and returning them in one step. The counterparts of LPUSH and RPUSH are LPOP and RPOP: LPOP removes and returns the element at the left (head) of the list, and RPOP the element at the right (tail). For example:

127.0.0.1:6379> rpush one-more-list a b c
(integer) 3
127.0.0.1:6379> rpop one-more-list
"c"
127.0.0.1:6379> rpop one-more-list
"b"
127.0.0.1:6379> rpop one-more-list
"a"

We pushed three elements and popped three, so the list is now empty and contains no elements. Calling RPOP again returns a NULL value:

127.0.0.1:6379> rpop one-more-list
(nil)

With RPUSH plus RPOP, or LPUSH plus LPOP, you get a stack; with LPUSH plus RPOP, or RPUSH plus LPOP, you get a queue. You can also build a producer/consumer pattern: multiple producers LPUSH tasks onto the list, and multiple consumers RPOP them off. Sometimes, though, the list is empty and there is nothing to process, so RPOP simply returns NULL. The consumer is then forced to wait a while and retry with another RPOP. This polling has several drawbacks:

  1. It forces the client and server to process useless commands: while the list is empty, every request performs no actual work and only returns NULL.
  2. Because the consumer waits after receiving NULL, it adds latency to task processing. Waiting less between RPOP calls reduces that latency, but only by multiplying the number of useless calls to Redis.

How can we fix this?

Use the BRPOP and BLPOP commands. They work like RPOP and LPOP, with one difference: if the list is empty, the command blocks, returning to the caller only when a new element is pushed onto the list or the given timeout is reached. For example:

127.0.0.1:6379> brpop tasks 5

It means: if the list is empty, wait for an element to arrive, but return after 5 seconds if none has been pushed. A timeout of 0 means wait for elements forever. You can also pass multiple lists; they are checked in the order given, and the tail element of the first non-empty list is returned. Three further points to note:

  1. When the list is empty and multiple clients are waiting, a newly pushed element is delivered to the client that has been waiting longest, and so on.
  2. The return value is different from RPOP's: it is a two-element array containing the key and the element, because BRPOP and BLPOP can block waiting for elements from multiple lists.
  3. When the timeout is exceeded, NULL is returned.
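The blocking behaviour can be approximated with a standard blocking queue; `timeout=5` plays the role of BRPOP's 5-second argument. This in-process sketch ignores networking and the key-name part of BRPOP's reply:

```python
import queue
import threading
import time

def brpop(q: "queue.Queue", timeout: float):
    """Block until an item arrives or the timeout elapses (None plays nil)."""
    try:
        return q.get(timeout=timeout)
    except queue.Empty:
        return None

tasks = queue.Queue()
results = []

# Consumer blocks on the empty queue instead of polling.
worker = threading.Thread(target=lambda: results.append(brpop(tasks, 5)))
worker.start()
time.sleep(0.1)
tasks.put("send-email")  # producer side, like LPUSH
worker.join()
print(results)           # ['send-email']
```

Note how the consumer issues exactly one call and pays no polling latency: the two drawbacks listed above disappear.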
List creation and deletion are handled automatically by Redis: pushing an element to a non-existent key creates an empty list first, and removing the last element deletes the list. This is not specific to lists; it applies to all Redis data types composed of multiple elements, i.e. sets, sorted sets, and hashes, through three rules:

  1. When we add an element to an aggregate data type, if the target key does not exist, an empty aggregate data type is created before adding the element. For example:

    127.0.0.1:6379> del one-more-list
    (integer) 1
    127.0.0.1:6379> lpush one-more-list 1 2 3
    (integer) 3
    

    However, when the key already exists, you cannot operate against the wrong data type, for example:

    127.0.0.1:6379> set one-more-key OneMoreStudy
    OK
    127.0.0.1:6379> lpush one-more-key 1 2 3
    (error) WRONGTYPE Operation against a key holding the wrong kind of value
    127.0.0.1:6379> type one-more-key
    string
    
  2. When we remove elements from an aggregate data type, if the value becomes empty, the key is automatically destroyed. For example:

    127.0.0.1:6379> lpush one-more-list 1 2 3
    (integer) 3
    127.0.0.1:6379> exists one-more-list
    (integer) 1
    127.0.0.1:6379> lpop one-more-list
    "3"
    127.0.0.1:6379> lpop one-more-list
    "2"
    127.0.0.1:6379> lpop one-more-list
    "1"
    127.0.0.1:6379> exists one-more-list
    (integer) 0
    
  3. When the key does not exist, calling a read-only command (such as LLEN, which returns the list length) or a write command (such as LPOP) returns the same result as if the key held an empty aggregate data type. For example:

    127.0.0.1:6379> del one-more-list
    (integer) 0
    127.0.0.1:6379> llen one-more-list
    (integer) 0
    127.0.0.1:6379> lpop one-more-list
    (nil)
    

For performance, the internal implementation of Redis lists is not a simple linked list.

You can see the list commands with help @list.

3.4 set

Redis's set type is an unordered collection of strings. SADD adds new elements to a set. Many other operations are available, such as testing whether a given element exists, or computing the intersection, union, or difference between multiple sets.
For example:

127.0.0.1:6379> sadd one-more-set 1 2 3
(integer) 3
127.0.0.1:6379> smembers one-more-set
1) "1"
2) "3"
3) "2"

The example above adds three elements to the set and asks Redis to return them all. As you can see, the elements come back unsorted; their order may differ on every call.

SISMEMBER tests whether a given element is in the set, for example:

127.0.0.1:6379> sismember one-more-set 3
(integer) 1
127.0.0.1:6379> sismember one-more-set 30
(integer) 0

Above, 3 is in the set so 1 is returned, while 30 is not in the set so 0 is returned.

SINTER computes the intersection of multiple sets, SUNION their union, SPOP removes and returns a random element, and SCARD returns the number of elements in the set. For example:

127.0.0.1:6379> sadd one-more-set1 1 2 3
(integer) 3
127.0.0.1:6379> sadd one-more-set2 2 3 4
(integer) 3
127.0.0.1:6379> sinter one-more-set1 one-more-set2 #intersection
1) "3"
2) "2"
127.0.0.1:6379> sunion one-more-set1 one-more-set2 #union
1) "1"
2) "3"
3) "2"
4) "4"
127.0.0.1:6379> spop one-more-set1 #remove and return a random element
"3"
127.0.0.1:6379> scard one-more-set1 #number of elements
(integer) 2

You can see the set commands with help @set.

3.5 zset(Sorted Set)

Redis's zset (sorted set) type is a data type somewhere between a set and a hash. Like a set, a sorted set is composed of unique, non-repeating string elements, so in a sense a sorted set is a set too. But while elements of a set are unordered, every element of a sorted set is associated with a floating-point value called the score, which is why a sorted set also resembles a hash: every element maps to a value. The ordering rules are:

  1. If A and B are two elements with different scores, then A > B if A.score > B.score.
  2. If A and B have exactly the same score, then A > B if the string A is lexicographically greater than the string B. A and B cannot be equal, since the elements of a sorted set are unique.
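The two rules amount to sorting on the (score, member) pair, which a one-liner can demonstrate (the dict here is just sample data mirroring the example that follows):

```python
def zset_sorted(members: dict) -> list:
    """Ascending order: by score first, ties broken lexicographically by member."""
    return sorted(members, key=lambda m: (members[m], m))

lpl = {"WE": 12, "TT": 12, "JDG": 10, "EDG": 8, "RNG": 8, "TES": 4, "AL": 2}
print(zset_sorted(lpl))
# ['AL', 'TES', 'EDG', 'RNG', 'JDG', 'TT', 'WE']
```

Note the tie-breaks: EDG and RNG share score 8 but "EDG" sorts before "RNG", and likewise TT before WE at score 12, exactly as rule 2 requires.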

As an example, let's add the names and points of some LOL teams to a sorted set, using the team name as the value and the team's points as the score.

127.0.0.1:6379> zadd lpl 12 "WE"
(integer) 1
127.0.0.1:6379> zadd lpl 12 "TT"
(integer) 1
127.0.0.1:6379> zadd lpl 10 "JDG"
(integer) 1
127.0.0.1:6379> zadd lpl 8 "EDG"
(integer) 1
127.0.0.1:6379> zadd lpl 8 "RNG"
(integer) 1
127.0.0.1:6379> zadd lpl 4 "TES"
(integer) 1
127.0.0.1:6379> zadd lpl 2 "AL"
(integer) 1

As shown, ZADD is similar to SADD but takes one extra argument (placed before the element to add): the score. ZADD also accepts multiple arguments: although the example above does not use it, you can specify several score/value pairs at once. With a sorted set, returning the teams ordered by their points is trivial, because in fact they are already sorted.

Note that, to allow fast ordered retrieval, every addition costs O(log(N)): the sorted set is implemented with both a skip list and a dictionary. We will keep the details of how this works for a later article.

ZRANGE returns the values in ascending order, for example:

127.0.0.1:6379> zrange lpl 0 -1
1) "AL"
2) "TES"
3) "EDG"
4) "RNG"
5) "JDG"
6) "TT"
7) "WE"

0 and -1 mean from the first element through the last. ZREVRANGE returns the values in descending order instead, for example:

127.0.0.1:6379> zrevrange lpl 0 -1
1) "WE"
2) "TT"
3) "JDG"
4) "RNG"
5) "EDG"
6) "TES"
7) "AL"

Add the WITHSCORES argument to return the scores as well, for example:

127.0.0.1:6379> zrange lpl 0 -1 withscores
 1) "AL"
 2) "2"
 3) "TES"
 4) "4"
 5) "EDG"
 6) "8"
 7) "RNG"
 8) "8"
 9) "JDG"
10) "10"
11) "TT"
12) "12"
13) "WE"
14) "12"

Sorted sets can do more, such as operating on ranges of scores. Let's get all teams with a score of 10 or less, using ZRANGEBYSCORE:

127.0.0.1:6379> zrangebyscore lpl -inf 10
1) "AL"
2) "TES"
3) "EDG"
4) "RNG"
5) "JDG"

That asks for the values with a score between negative infinity and 10. Similarly, we can ask for the values with a score between 4 and 10:

127.0.0.1:6379> zrangebyscore lpl 4 10
1) "TES"
2) "EDG"
3) "RNG"
4) "JDG"

Two more useful commands: ZRANK returns the rank of a value in ascending order (starting from 0), and ZREVRANK its rank in descending order (also starting from 0), for example:

127.0.0.1:6379> zrank lpl "EDG"
(integer) 2
127.0.0.1:6379> zrevrank lpl "EDG"
(integer) 4

Scores can be updated at any time: calling ZADD on an element already in the sorted set updates its score and position in O(log(N)). Sorted sets are therefore a good fit for workloads with lots of updates. Because of this, a common use case is a leaderboard, making it easy to display the top N users as well as a given user's rank.
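The leaderboard use case can be sketched with a bisect-maintained (score, member) list. This keeps the search for an insert position at O(log N); the Python list insertion itself is O(N), which is precisely the cost Redis's skip list avoids. Command names mirror Redis but the class is a hypothetical stand-in:

```python
import bisect

class Leaderboard:
    def __init__(self):
        self._sorted = []  # ascending list of (score, member) pairs
        self._score = {}   # member -> current score

    def zadd(self, score, member):
        old = self._score.get(member)
        if old is not None:
            # Re-adding an existing member updates its score, as in Redis.
            self._sorted.remove((old, member))
        self._score[member] = score
        bisect.insort(self._sorted, (score, member))

    def zrevrange(self, start, stop):
        items = [m for _, m in reversed(self._sorted)]
        return items[start:] if stop == -1 else items[start:stop + 1]

    def zrevrank(self, member):
        return self.zrevrange(0, -1).index(member)

lb = Leaderboard()
for score, team in [(12, "WE"), (12, "TT"), (10, "JDG"), (8, "EDG")]:
    lb.zadd(score, team)
print(lb.zrevrange(0, 2))  # top 3: ['WE', 'TT', 'JDG']
print(lb.zrevrank("EDG"))  # 3
```

Updating a score and re-reading the ranking needs no extra sort step, which is the property that makes the real ZADD/ZREVRANGE pair so convenient for leaderboards.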

You can see the zset commands with help @sorted_set.

4 Single-Threaded Architecture

4.1 Redis's Threading Model

(Figure: the Redis threading model)

4.2 Extension: the I/O Multiplexer

Redis multiplexing means that Redis uses I/O multiplexing to serve multiple client requests at the same time.
Multiplexing means using a single thread to check the readiness of multiple file descriptors (sockets), for example by calling the select or poll functions with multiple descriptors: the call returns as soon as one descriptor is ready, or blocks until the timeout.
Redis implements multiplexing with a mechanism called an event loop. The event loop is a non-blocking I/O mechanism that can watch many client connections at once. When a new request arrives, Redis places it on the request queue and processes it within the event loop, so many requests can be served without any one of them blocking the others. The event loop is built on multiplexing facilities such as epoll (Linux) or kqueue (FreeBSD, macOS), which efficiently watch many client connections and fire an event when a request arrives. This lets Redis handle very large numbers of concurrent requests efficiently, improving its throughput.
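A toy version of such an event loop, using Python's selectors module (which picks epoll or kqueue automatically). This is a sketch of the general technique, not Redis's actual ae event loop; the "+PONG" reply is a playful stand-in for real protocol handling:

```python
import selectors
import socket

sel = selectors.DefaultSelector()  # epoll on Linux, kqueue on macOS/BSD

def serve_once():
    """One pass of the event loop: handle whichever sockets are ready."""
    for key, _ in sel.select(timeout=1):
        if key.data == "accept":
            conn, _ = key.fileobj.accept()           # new client connection
            sel.register(conn, selectors.EVENT_READ, data="client")
        else:
            data = key.fileobj.recv(1024)
            if data:
                key.fileobj.sendall(b"+PONG\r\n")    # pretend every request is PING
            else:
                sel.unregister(key.fileobj)          # client hung up
                key.fileobj.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen()
listener.setblocking(False)
sel.register(listener, selectors.EVENT_READ, data="accept")

client = socket.create_connection(listener.getsockname())
serve_once()                   # loop pass 1: accepts the connection
client.sendall(b"PING\r\n")
serve_once()                   # loop pass 2: reads the request, replies
reply = client.recv(1024)
print(reply)                   # b'+PONG\r\n'
```

One thread handles the listener and every client socket; adding more clients just adds more registered descriptors, which is the essence of the single-threaded multiplexed design.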

For the theory behind multiplexing models, see this blogger's article: https://blog.csdn.net/weixin_41605937/article/details/98882913

5 Redis Publish (pub) and Subscribe (sub)

Redis provides publish/subscribe functionality that can be used to deliver messages. Redis pub/sub involves two roles: publishers and subscribers. Both are Redis clients, while the channel lives on the Redis server. A publisher sends messages to a specified channel; a subscriber subscribes to one or more channels, and every subscriber of a channel receives each message published to it.

(Figure: Redis publish and subscribe)

Subscribing to messages

Command                                 Description
SUBSCRIBE channel [channel ...]         Subscribe to one or more channels, separated by spaces.
                                        Example: subscribe a b
UNSUBSCRIBE channel [channel ...]       Unsubscribe from one or more channels, separated by spaces.
                                        Example: unsubscribe a b
PSUBSCRIBE pattern [pattern ...]        Subscribe to one or more channels matching the given patterns.
                                        Each pattern uses * as a wildcard.
                                        Example: psubscribe cn* matches every channel starting
                                        with cn, such as cn.java and cn.web.
PUNSUBSCRIBE [pattern [pattern ...]]    Unsubscribe from all channels matching the given patterns.
                                        If no pattern is given, unsubscribe from everything.

Two things to note about the subscribe commands:

  1. After executing a subscribe command, the client enters the subscribed state and can only issue the four commands SUBSCRIBE, PSUBSCRIBE, UNSUBSCRIBE, and PUNSUBSCRIBE. (This applies to clients such as jedis and lettuce; redis-cli cannot leave the subscribed state.)
  2. A freshly subscribed client cannot receive messages published to the channel before it subscribed, because Redis does not persist published messages.
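The channel and pattern dispatch can be sketched in-process with fnmatch-style globbing (Redis's own glob matching is similar in spirit; this hypothetical model ignores networking and the subscribed-state machinery entirely):

```python
from collections import defaultdict
from fnmatch import fnmatchcase

class PubSub:
    def __init__(self):
        self.channels = defaultdict(list)  # channel name -> subscriber callbacks
        self.patterns = defaultdict(list)  # glob pattern -> subscriber callbacks

    def subscribe(self, channel, callback):
        self.channels[channel].append(callback)

    def psubscribe(self, pattern, callback):
        self.patterns[pattern].append(callback)

    def publish(self, channel, message):
        receivers = 0  # PUBLISH returns how many clients received the message
        for cb in self.channels.get(channel, []):
            cb(("message", channel, message))
            receivers += 1
        for pattern, cbs in self.patterns.items():
            if fnmatchcase(channel, pattern):
                for cb in cbs:
                    cb(("pmessage", pattern, channel, message))
                    receivers += 1
        return receivers

bus, inbox = PubSub(), []
bus.subscribe("study-java", inbox.append)   # like SUBSCRIBE study-java
bus.psubscribe("study*", inbox.append)      # like PSUBSCRIBE study*
print(bus.publish("study-java", "java"))    # 2, matching the demo below
print(inbox[0])                             # ('message', 'study-java', 'java')
```

Messages published before a subscription simply never reach the callback list, which is the in-miniature version of note 2 above.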

Publishing messages

Command                                   Description
PUBLISH channel message                   Publish a message to a channel; channel is the channel name, message the payload.
PUBSUB CHANNELS [argument [argument ...]] Inspect the state of the pub/sub system: returns the list of active channels (channels with at least one subscriber, pattern-subscribed clients excluded).

Demo

Subscriber 1 client:

[root@redis-server-master ~]# redis-cli
127.0.0.1:6379> auth 123456
OK
127.0.0.1:6379>  SUBSCRIBE foot study-java
Reading messages... (press Ctrl-C to quit)
1) "subscribe"
2) "foot"
3) (integer) 1
1) "subscribe"
2) "study-java"
3) (integer) 2

Subscriber 2 client:

[root@redis-server-master ~]# redis-cli
127.0.0.1:6379> auth 123456
OK
127.0.0.1:6379> psubscribe study*
Reading messages... (press Ctrl-C to quit)
1) "psubscribe"
2) "study*"
3) (integer) 1

Publisher client:

[root@redis-server-master ~]# redis-cli
127.0.0.1:6379> auth 123456
OK
# Publish messages
127.0.0.1:6379> PUBLISH foot beef
(integer) 1
127.0.0.1:6379> PUBLISH study-java java
(integer) 2
127.0.0.1:6379> 

After the publisher publishes to the channels, let's look again at the subscriber client windows:

Subscriber 1 client:

[root@redis-server-master ~]# redis-cli
127.0.0.1:6379> auth 123456
OK
127.0.0.1:6379>  SUBSCRIBE foot study-java
Reading messages... (press Ctrl-C to quit)
1) "subscribe"
2) "foot"
3) (integer) 1
1) "subscribe"
2) "study-java"
3) (integer) 2
# Change: the messages received by this subscriber follow
1) "message"
2) "foot"
3) "beef"
1) "message"
2) "study-java"
3) "java"

Subscriber 2 client:

[root@redis-server-master ~]# redis-cli
127.0.0.1:6379> auth 123456
OK
127.0.0.1:6379> psubscribe study*
Reading messages... (press Ctrl-C to quit)
1) "psubscribe"
2) "study*"
3) (integer) 1
# Change: the messages received by this subscriber follow
1) "pmessage"
2) "study*"
3) "study-java"
4) "java"

Note: if a message is published before a client subscribes to the channel, that client will not receive it!

6 Redis Persistence

Official documentation: https://redis.io/docs/management/persistence/

Chinese documentation: none available

Persistence refers to writing data to durable storage, such as a solid-state disk (SSD). Redis provides a range of persistence options, including:

  • RDB (Redis Database): RDB persistence performs point-in-time snapshots of the dataset at specified intervals.
  • AOF (Append Only File): AOF persistence logs every write operation received by the server; the operations can be replayed at server startup, reconstructing the original dataset. Commands are logged using the same format as the Redis protocol itself.
  • No persistence: persistence is disabled entirely. This is sometimes used when caching.
  • RDB + AOF: AOF and RDB combined on the same instance.

6.1 RDB

What is RDB

With RDB (Redis Database), Redis periodically writes the in-memory dataset to a temporary file on disk, forming an RDB snapshot.

In RDB mode, if the machine crashes and restarts, the data in memory is lost; when Redis starts again, it restores the data directly from the saved RDB snapshot.

RDB snapshots are not very reliable: if the machine suddenly crashes, loses power, or the process is accidentally killed, the most recent data is lost.

The corresponding redis.conf settings:

################################ SNAPSHOTTING  ################################
#
# Save the DB on disk:
#
#   save <seconds> <changes>
#
#   Will save the DB if both the given number of seconds and the given
#   number of write operations against the DB occurred.
#
#   In the example below the behaviour will be to save:
#   after 900 sec (15 min) if at least 1 key changed
#   after 300 sec (5 min) if at least 10 keys changed
#   after 60 sec if at least 10000 keys changed
#
#   Note: you can disable saving completely by commenting out all "save" lines.
#
#   It is also possible to remove all the previously configured save
#   points by adding a save directive with a single empty string argument
#   like in the following example:
#
#   save ""
## Save points
save 900 1
save 300 10
save 60 10000

# By default Redis will stop accepting writes if RDB snapshots are enabled
# (at least one save point) and the latest background save failed.
# This will make the user aware (in a hard way) that data is not persisting
# on disk properly, otherwise chances are that no one will notice and some
# disaster will happen.
#
# If the background saving process will start working again Redis will
# automatically allow writes again.
#
# However if you have setup your proper monitoring of the Redis server
# and persistence, you may want to disable this feature so that Redis will
# continue to work as usual even if there are problems with disk,
# permissions, and so forth.
## Whether to stop accepting writes when a background save fails (default: yes)
stop-writes-on-bgsave-error yes

# Compress string objects using LZF when dump .rdb databases?
# For default that's set to 'yes' as it's almost always a win.
# If you want to save some CPU in the saving child set it to 'no' but
# the dataset will likely be bigger if you have compressible values or keys.
## Compress RDB files (default: yes)
rdbcompression yes

# Since version 5 of RDB a CRC64 checksum is placed at the end of the file.
# This makes the format more resistant to corruption but there is a performance
# hit to pay (around 10%) when saving and loading RDB files, so you can disable it
# for maximum performances.
#
# RDB files created with checksum disabled have a checksum of zero that will
# tell the loading code to skip the check.
## Checksum RDB files (default: yes)
rdbchecksum yes

# The filename where to dump the DB
## Snapshot filename (default: dump.rdb)
dbfilename dump.rdb

# The working directory.
#
# The DB will be written inside this directory, with the filename specified
# above using the 'dbfilename' configuration directive.
#
# The Append Only File will also be created inside this directory.
#
# Note that you must specify a directory here, not a file name.
## Directory where persistence files are saved
dir /usr/local/redis/persistence
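The three save lines in the config above combine with OR semantics: a snapshot is triggered as soon as any (seconds, changes) pair is satisfied. A small sketch of that decision (the function name and structure are illustrative, not Redis internals):

```python
# Save points from the config above: (seconds elapsed, keys changed).
SAVE_POINTS = [(900, 1), (300, 10), (60, 10000)]

def should_snapshot(elapsed: int, changes: int) -> bool:
    """True if any save point has both of its thresholds met (OR across points)."""
    return any(elapsed >= secs and changes >= n for secs, n in SAVE_POINTS)

print(should_snapshot(910, 1))     # True: 900s passed with at least 1 change
print(should_snapshot(70, 500))    # False: 60s passed but fewer than 10000 changes
print(should_snapshot(70, 20000))  # True: the (60, 10000) point fires
```

Reading the config this way makes the tuning intent clear: frequent snapshots only when the write rate justifies them, and a guaranteed snapshot within 15 minutes of even a single change.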

Advantages of RDB

  • RDB is a very compact single-file point-in-time representation of your Redis data. RDB files are perfect for backups: for instance, you may want to archive an RDB file every hour for the latest 24 hours, and save an RDB snapshot every day for 30 days. This lets you easily restore different versions of the dataset in case of disaster.
  • RDB is very good for disaster recovery, being a single compact file that can be transferred to a distant data center, or to Amazon S3 (possibly encrypted).
  • RDB maximizes Redis performance, since the only work the Redis parent process does for persistence is forking a child that does everything else. The parent process never performs disk I/O or similar operations.
  • Compared with AOF, RDB allows faster restarts with big datasets.
  • On replicas, RDB supports partial resynchronization after restarts and failovers.

Disadvantages of RDB

  • RDB is not good if you need to minimize the chance of data loss in case Redis stops working (for example after a power outage). You can configure different save points at which an RDB is produced (for instance, after at least five minutes and 100 writes against the dataset, and you can have multiple save points). However, you will usually create an RDB snapshot every five minutes or more, so if Redis stops working without a clean shutdown for any reason, you should be prepared to lose the latest minutes of data.
  • RDB needs to fork() frequently in order to persist on disk using a child process. fork() can be time-consuming if the dataset is big, and with a big dataset and a weak CPU it may cause Redis to stop serving clients for some milliseconds or even a second. AOF also needs to fork(), but less frequently, and you can tune how often the log is rewritten without sacrificing durability.

6.2 AOF

What is AOF

With AOF (Append Only File), Redis appends every command other than reads to the AOF file, and no operation other than appending is ever performed on that file.

In AOF mode, if the machine crashes and restarts, the data in memory is lost; when Redis starts again, it replays the commands recorded in the AOF file to rebuild the data.

AOF is disabled by default; enable it by changing appendonly no to appendonly yes in redis.conf. When AOF persistence is enabled, the server prefers the AOF file to restore the database state; only when AOF persistence is disabled does the server use the RDB file to restore the database state.

The corresponding redis.conf settings:

############################## APPEND ONLY MODE ###############################

# By default Redis asynchronously dumps the dataset on disk. This mode is
# good enough in many applications, but an issue with the Redis process or
# a power outage may result into a few minutes of writes lost (depending on
# the configured save points).
#
# The Append Only File is an alternative persistence mode that provides
# much better durability. For instance using the default data fsync policy
# (see later in the config file) Redis can lose just one second of writes in a
# dramatic event like a server power outage, or a single write if something
# wrong with the Redis process itself happens, but the operating system is
# still running correctly.
#
# AOF and RDB persistence can be enabled at the same time without problems.
# If the AOF is enabled on startup Redis will load the AOF, that is the file
# with the better durability guarantees.
#
# Please check http://redis.io/topics/persistence for more information.
## Whether to enable AOF mode (default: no)
appendonly no

# The name of the append only file (default: "appendonly.aof")
## Name of the file AOF appends to (default: appendonly.aof)
appendfilename "appendonly.aof"

# The fsync() call tells the Operating System to actually write data on disk
# instead of waiting for more data in the output buffer. Some OS will really flush
# data on disk, some other OS will just try to do it ASAP.
#
# Redis supports three different modes:
#
# no: don't fsync, just let the OS flush the data when it wants. Faster.
# always: fsync after every write to the append only log. Slow, Safest.
# everysec: fsync only one time every second. Compromise.
#
# The default is "everysec", as that's usually the right compromise between
# speed and data safety. It's up to you to understand if you can relax this to
# "no" that will let the operating system flush the output buffer when
# it wants, for better performances (but if you can live with the idea of
# some data loss consider the default persistence mode that's snapshotting),
# or on the contrary, use "always" that's very slow but a bit safer than
# everysec.
#
# More details please check the following article:
# http://antirez.com/post/redis-persistence-demystified.html
#
# If unsure, use "everysec".
## fsync policy: always, everysec, or no
# appendfsync always
appendfsync everysec
# appendfsync no

# When the AOF fsync policy is set to always or everysec, and a background
# saving process (a background save or AOF log background rewriting) is
# performing a lot of I/O against the disk, in some Linux configurations
# Redis may block too long on the fsync() call. Note that there is no fix for
# this currently, as even performing fsync in a different thread will block
# our synchronous write(2) call.
#
# In order to mitigate this problem it's possible to use the following option
# that will prevent fsync() from being called in the main process while a
# BGSAVE or BGREWRITEAOF is in progress.
#
# This means that while another child is saving, the durability of Redis is
# the same as "appendfsync none". In practical terms, this means that it is
# possible to lose up to 30 seconds of log in the worst scenario (with the
# default Linux settings).
#
# If you have latency problems turn this to "yes". Otherwise leave it as
# "no" that is the safest pick from the point of view of durability.
## Whether to keep fsyncing while a rewrite is in progress
no-appendfsync-on-rewrite no

# Automatic rewrite of the append only file.
# Redis is able to automatically rewrite the log file implicitly calling
# BGREWRITEAOF when the AOF log size grows by the specified percentage.
#
# This is how it works: Redis remembers the size of the AOF file after the
# latest rewrite (if no rewrite has happened since the restart, the size of
# the AOF at startup is used).
#
# This base size is compared to the current size. If the current size is
# bigger than the specified percentage, the rewrite is triggered. Also
# you need to specify a minimal size for the AOF file to be rewritten, this
# is useful to avoid rewriting the AOF file even if the percentage increase
# is reached but it is still pretty small.
#
# Specify a percentage of zero in order to disable the automatic AOF
# rewrite feature.
## Rewrite triggers; both conditions must be satisfied
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb

# An AOF file may be found to be truncated at the end during the Redis
# startup process, when the AOF data gets loaded back into memory.
# This may happen when the system where Redis is running
# crashes, especially when an ext4 filesystem is mounted without the
# data=ordered option (however this can't happen when Redis itself
# crashes or aborts but the operating system still works correctly).
#
# Redis can either exit with an error when this happens, or load as much
# data as possible (the default now) and start if the AOF file is found
# to be truncated at the end. The following option controls this behavior.
#
# If aof-load-truncated is set to yes, a truncated AOF file is loaded and
# the Redis server starts emitting a log to inform the user of the event.
# Otherwise if the option is set to no, the server aborts with an error
# and refuses to start. When the option is set to no, the user requires
# to fix the AOF file using the "redis-check-aof" utility before to restart
# the server.
#
# Note that if the AOF file will be found to be corrupted in the middle
# the server will still exit with an error. This option only applies when
# Redis will try to read more data from the AOF file but not enough bytes
# will be found.
aof-load-truncated yes

# When rewriting the AOF file, Redis is able to use an RDB preamble in the
# AOF file for faster rewrites and recoveries. When this option is turned
# on the rewritten AOF file is composed of two different stanzas:
#
#   [RDB file][AOF tail]
#
# When loading Redis recognizes that the AOF file starts with the "REDIS"
# string and loads the prefixed RDB file, and continues loading the AOF
# tail.
aof-use-rdb-preamble yes
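AOF's core idea (append every write, replay on startup) in miniature. The one-command-per-line text format here is a hypothetical stand-in, not Redis's actual protocol encoding, and only SET/DEL are modeled:

```python
import os
import tempfile

def append_command(path, *parts):
    """Log a write command; the file is only ever appended to, never rewritten."""
    with open(path, "a") as f:
        f.write(" ".join(parts) + "\n")

def replay(path):
    """Rebuild the dataset by re-executing every logged write, in order."""
    data = {}
    with open(path) as f:
        for line in f:
            cmd, key, *rest = line.split()
            if cmd == "SET":
                data[key] = rest[0]
            elif cmd == "DEL":
                data.pop(key, None)
    return data

aof = os.path.join(tempfile.mkdtemp(), "appendonly.log")
append_command(aof, "SET", "one-more-key", "OneMoreStudy")
append_command(aof, "SET", "counter", "51")
append_command(aof, "DEL", "one-more-key")
print(replay(aof))  # {'counter': '51'}  -- state after a simulated restart
```

The log grows with every write even when keys are later deleted, which is exactly why the rewrite mechanism described in the config above exists: a rewritten log would contain only `SET counter 51`.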

Advantages of AOF

  • Using AOF, Redis is much more durable: you can have different fsync policies: no fsync at all, fsync every second, fsync on every query. With the default policy of fsync every second, write performance is still great. fsync is performed by a background thread, and the main thread tries hard to perform writes when no fsync is in progress, so you can only lose one second of writes.
  • The AOF log is append-only, so there are no seeks, nor corruption problems on power loss. Even if the log ends with a half-written command for some reason (disk full or otherwise), the redis-check-aof tool can fix it easily.
  • Redis can automatically rewrite the AOF in the background when it gets too big. The rewrite is completely safe: while Redis continues appending to the old file, a brand new one is produced containing the minimal set of operations needed to create the current dataset, and once this second file is ready, Redis switches the two and starts appending to the new one.
  • AOF contains a log of all operations, one after the other, in a format that is easy to understand and parse. You can even easily export an AOF file. For instance, even if you flushed everything by mistake with the FLUSHALL command, as long as no rewrite of the log happened in the meantime, you can still save your dataset by stopping the server, removing the latest command, and restarting Redis.

Disadvantages of AOF

  • For the same dataset, AOF files are usually bigger than the equivalent RDB files.
  • AOF can be slower than RDB depending on the exact fsync policy. In general, with fsync set to every second, performance is still very high, and with fsync disabled it should be just as fast as RDB even under high load. Even so, RDB is able to provide more guarantees about the maximum latency under a huge write load.

In Redis versions below 7.0:

  • AOF can use a lot of memory if there are writes to the database during a rewrite (these are buffered in memory and written to the new AOF at the end).
  • All write commands that arrive during a rewrite are written to disk twice.
  • Redis could freeze the writing and fsyncing of these write commands to the new AOF file at the end of the rewrite.

7 Master-Replica Mode

(Figure: the Redis master-replica mode)

7.1 How Master-Replica Replication Works

(Figure: the Redis master-replica replication process)

7.2 Setting Up Master-Replica Replication (Read/Write Splitting)

Edit the redis.conf of the replica server

[root@redis-server-slave-1 ~]# cd /usr/local/redis/
[root@redis-server-slave-1 redis]# vim redis.conf

Replication configuration

################################# REPLICATION #################################

# Master-Replica replication. Use replicaof to make a Redis instance a copy of
# another Redis server. A few things to understand ASAP about Redis replication.
#
#   +------------------+      +---------------+
#   |      Master      | ---> |    Replica    |
#   | (receive writes) |      |  (exact copy) |
#   +------------------+      +---------------+
#
# 1) Redis replication is asynchronous, but you can configure a master to
#    stop accepting writes if it appears to be not connected with at least
#    a given number of replicas.
# 2) Redis replicas are able to perform a partial resynchronization with the
#    master if the replication link is lost for a relatively small amount of
#    time. You may want to configure the replication backlog size (see the next
#    sections of this file) with a sensible value depending on your needs.
# 3) Replication is automatic and does not need user intervention. After a
#    network partition replicas automatically try to reconnect to masters
#    and resynchronize with them.
#
# replicaof <masterip> <masterport>
# Address and port of the master server
replicaof 192.168.18.210 6379

# If the master is password protected (using the "requirepass" configuration
# directive below) it is possible to tell the replica to authenticate before
# starting the replication synchronization process, otherwise the master will
# refuse the replica request.
#
# masterauth <master-password>
# Password for connecting to the master server
masterauth 123456

Set read-only mode

# You can configure a replica instance to accept writes or not. Writing against
# a replica instance may be useful to store some ephemeral data (because data
# written on a replica will be easily deleted after resync with the master) but
# may also cause problems if clients are writing to it because of a
# misconfiguration.
#
# Since Redis 2.6 by default replicas are read-only.
#
# Note: read only replicas are not designed to be exposed to untrusted clients
# on the internet. It's just a protection layer against misuse of the instance.
# Still a read only replica exports by default all the administrative commands
# such as CONFIG, DEBUG, and so forth. To a limited extent you can improve
# security of read only replicas using 'rename-command' to shadow all the
# administrative / dangerous commands.
replica-read-only yes

Restart Redis

# Stop the Redis service
[root@redis-server-slave-1 redis]# /etc/init.d/redis_init_script stop
Stopping ...
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
Redis stopped
# Start the Redis service
[root@redis-server-slave-1 redis]# /etc/init.d/redis_init_script start
Starting Redis server...
22927:C 13 May 2023 16:37:21.903 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
22927:C 13 May 2023 16:37:21.903 # Redis version=5.0.14, bits=64, commit=00000000, modified=0, pid=22927, just started
22927:C 13 May 2023 16:37:21.903 # Configuration loaded

Check the replication info on the slave server

# Enter the redis client
[root@redis-server-slave-1 redis]# redis-cli
127.0.0.1:6379> auth 123456
OK
# Check replication info
127.0.0.1:6379> info replication
# Replication
role:slave #current role
master_host:192.168.18.210 #master IP
master_port:6379 #master port
master_link_status:up #link status to the master
master_last_io_seconds_ago:2
master_sync_in_progress:0
slave_repl_offset:336
slave_priority:100
slave_read_only:1
connected_slaves:0
master_replid:ea5d9f9a038ec9a9a2170768fb25025d78100c9a
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:336
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:336

Check the replication info on the master server

[root@redis-server-master ~]# redis-cli
127.0.0.1:6379> auth 123456
OK
127.0.0.1:6379> info replication
# Replication
role:master #current role
connected_slaves:1 #number of connected slaves
slave0:ip=192.168.18.211,port=6379,state=online,offset=630,lag=0 #slave details
master_replid:ea5d9f9a038ec9a9a2170768fb25025d78100c9a
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:630
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:630

At this point the one-master-one-replica setup is complete. A one-master-multi-replica setup is built the same way: just add another slave.

One master, two replicas: replication info on the master server

[root@redis-server-master ~]# redis-cli
127.0.0.1:6379> auth 123456
OK
127.0.0.1:6379> info replication
# Replication
role:master
connected_slaves:2
slave0:ip=192.168.18.211,port=6379,state=online,offset=1302,lag=1
slave1:ip=192.168.18.212,port=6379,state=online,offset=1302,lag=1
master_replid:ea5d9f9a038ec9a9a2170768fb25025d78100c9a
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:1302
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:1302
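The repl_backlog_* fields in the INFO output above drive the partial resynchronization mentioned in the config comments: a reconnecting replica can catch up from the backlog instead of doing a full RDB transfer. A simplified model (not the Redis source) of that decision:

```python
# Simplified model (not Redis source) of how a master chooses between a
# partial and a full resync, using the backlog fields visible in INFO above.

def resync_type(replica_offset: int,
                backlog_first_byte_offset: int,
                backlog_histlen: int) -> str:
    """Return 'PARTIAL' when the bytes the reconnecting replica is missing
    are still inside the replication backlog, else 'FULL' (RDB transfer)."""
    if backlog_histlen == 0:
        return "FULL"  # empty backlog: nothing to replay
    backlog_start = backlog_first_byte_offset
    backlog_end = backlog_first_byte_offset + backlog_histlen  # ~ master_repl_offset
    if backlog_start <= replica_offset <= backlog_end:
        return "PARTIAL"  # replay only the missing byte range
    return "FULL"  # replica fell out of the backlog window

# With the INFO numbers above (offset 336, backlog starting at byte 1):
print(resync_type(336, 1, 336))               # PARTIAL
print(resync_type(10, 1_000_000, 1_048_576))  # FULL: offset 10 left the window
```

This is why the config comments suggest sizing repl-backlog to your expected disconnect windows: a larger backlog keeps more reconnects on the cheap PARTIAL path.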

7.3 Diskless Replication

Redis diskless replication
The relevant redis.conf setting is repl-diskless-sync: set it to yes to enable diskless replication (the default is no).

Diskless replication is still experimental, so it is not recommended in production.

# Replication SYNC strategy: disk or socket.
#
# -------------------------------------------------------
# WARNING: DISKLESS REPLICATION IS EXPERIMENTAL CURRENTLY
# -------------------------------------------------------
#
# New replicas and reconnecting replicas that are not able to continue the replication
# process just receiving differences, need to do what is called a "full
# synchronization". An RDB file is transmitted from the master to the replicas.
# The transmission can happen in two different ways:
#
# 1) Disk-backed: The Redis master creates a new process that writes the RDB
#                 file on disk. Later the file is transferred by the parent
#                 process to the replicas incrementally.
# 2) Diskless: The Redis master creates a new process that directly writes the
#              RDB file to replica sockets, without touching the disk at all.
#
# With disk-backed replication, while the RDB file is generated, more replicas
# can be queued and served with the RDB file as soon as the current child producing
# the RDB file finishes its work. With diskless replication instead once
# the transfer starts, new replicas arriving will be queued and a new transfer
# will start when the current one terminates.
#
# When diskless replication is used, the master waits a configurable amount of
# time (in seconds) before starting the transfer in the hope that multiple replicas
# will arrive and the transfer can be parallelized.
#
# With slow disks and fast (large bandwidth) networks, diskless replication
# works better.
repl-diskless-sync no

8 Cache Expiration and Memory Eviction

Redis provides EXPIRE and EXPIREAT to set an expiration time on a key. Once the expiration time is reached, the key must be removed (strictly speaking, the key first becomes unusable and is then deleted by a dedicated expiration strategy). Redis offers two deletion strategies: periodic deletion and lazy deletion.

Cache expiration mechanisms

  1. Periodic (active) deletion

    Redis periodically samples random keys and deletes any that have expired. By default this check runs 10 times per second; the frequency is controlled by the hz setting in redis.conf:

    # Redis calls an internal function to perform many background tasks, like
    # closing connections of clients in timeout, purging expired keys that are
    # never requested, and so forth.
    #
    # Not all tasks are performed with the same frequency, but Redis checks for
    # tasks to perform according to the specified "hz" value.
    #
    # By default "hz" is set to 10. Raising the value will use more CPU when
    # Redis is idle, but at the same time will make Redis more responsive when
    # there are many keys expiring at the same time, and timeouts may be
    # handled with more precision.
    #
    # The range is between 1 and 500, however a value over 100 is usually not
    # a good idea. Most users should use the default of 10 and raise this up to
    # 100 only in environments where very low latency is required.
    hz 10
    

    Pros: expired data is cleaned up promptly, which is friendly to memory;

    Cons: if many keys are expired, deleting them consumes a lot of CPU, which can hurt performance.

  2. Lazy (passive) deletion

    When a key is accessed, Redis checks whether it has expired and deletes it if so.

    Pros: this strategy minimizes CPU usage. Deletion happens only when a key is read, and only that key is deleted, so the CPU cost is small; by then the deletion is unavoidable anyway (otherwise the client would be handed an already-expired key).

    Cons: very unfriendly to memory. In the extreme case, many expired keys are never accessed again, so they are never removed and keep occupying memory, effectively a memory leak.

Both periodic and lazy deletion are imprecise: some expired keys can still escape deletion. Moreover, both strategies apply only to keys with an expiration set and do nothing for keys without one. Redis therefore also provides memory eviction policies to select keys for removal.
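The interplay of the two deletion strategies can be sketched as follows. This is a simplified model for illustration only, not Redis internals, over a plain dict of key -> (value, expire_at):

```python
import random

# Simplified model (illustration only, not Redis internals) of the two
# expiration strategies, over a dict of key -> (value, expire_at or None).
store = {}

def set_key(key, value, ttl=None, now=0.0):
    """Store a value; ttl (seconds) is optional, like SET ... EX / EXPIRE."""
    store[key] = (value, now + ttl if ttl is not None else None)

def get_key(key, now):
    """Lazy (passive) deletion: an expired key is removed only when accessed."""
    if key not in store:
        return None
    value, expire_at = store[key]
    if expire_at is not None and now >= expire_at:
        del store[key]  # delete on access, and only this key
        return None
    return value

def active_expire_cycle(now, sample_size=20):
    """Periodic (active) deletion: sample random keys that carry a TTL and
    purge the expired ones. Redis runs a cycle like this `hz` times per
    second (default 10)."""
    candidates = [k for k, (_, exp) in store.items() if exp is not None]
    for key in random.sample(candidates, min(sample_size, len(candidates))):
        if now >= store[key][1]:
            del store[key]
```

Keys that are read again are caught lazily by get_key; keys that are never read again are only reclaimed if a periodic cycle happens to sample them, which is exactly why neither strategy alone is precise.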

Memory eviction mechanism

In redis.conf, the maxmemory parameter sets the maximum memory (commonly about three quarters of physical RAM). Once Redis exceeds this limit, it triggers a memory eviction policy. (Expiration removes expired keys during normal operation; eviction is a safeguard that kicks in when memory exceeds the limit.) The eviction policy is configured via maxmemory-policy; Redis currently offers the following (the two LFU policies were added in Redis 4.0):

Policy / Description
volatile-lru: evict from keys with an expiration set, using the LRU algorithm.
allkeys-lru: evict from all keys, using the LRU algorithm.
volatile-lfu: evict from keys with an expiration set, using the LFU algorithm.
allkeys-lfu: evict from all keys, using the LFU algorithm.
volatile-random: evict a random key from those with an expiration set.
allkeys-random: evict a random key from all keys.
volatile-ttl: evict from keys with an expiration set, earliest expiry first.
noeviction: evict nothing; once memory usage exceeds maxmemory, return an error on further write requests.

Apart from the special noeviction and volatile-ttl, the remaining six policies are related. They split into two families by prefix, volatile- and allkeys-, which differ in the dictionary they pick eviction candidates from: volatile- policies select only among keys with an expiration set, while allkeys- policies select among all keys.

For a detailed analysis, see: https://blog.csdn.net/wzngzaixiaomantou/article/details/125533413
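The approximated LRU behind the *-lru policies can be sketched as a sampling procedure. This is a simplification (Redis stores a compact per-key access clock, not exact timestamps); the sample size corresponds to the maxmemory-samples setting shown in the config below:

```python
import random

# Sketch (simplified, not Redis internals) of sampled approximate LRU:
# instead of maintaining a true LRU list, evict the least-recently-used
# key among a small random sample (maxmemory-samples, default 5).

def evict_one(last_access: dict, maxmemory_samples: int = 5) -> str:
    """last_access: dict of key -> last access time. Removes and returns
    the sampled key with the oldest access time."""
    sample = random.sample(list(last_access),
                           min(maxmemory_samples, len(last_access)))
    victim = min(sample, key=lambda k: last_access[k])  # oldest access loses
    del last_access[victim]
    return victim
```

Raising maxmemory-samples approaches true LRU at extra CPU cost, which matches the config comment below: 10 approximates true LRU closely, 3 is faster but less accurate.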

The corresponding redis.conf section:

############################## MEMORY MANAGEMENT ################################

# Set a memory usage limit to the specified amount of bytes.
# When the memory limit is reached Redis will try to remove keys
# according to the eviction policy selected (see maxmemory-policy).
#
# If Redis can't remove keys according to the policy, or if the policy is
# set to 'noeviction', Redis will start to reply with errors to commands
# that would use more memory, like SET, LPUSH, and so on, and will continue
# to reply to read-only commands like GET.
#
# This option is usually useful when using Redis as an LRU or LFU cache, or to
# set a hard memory limit for an instance (using the 'noeviction' policy).
#
# WARNING: If you have replicas attached to an instance with maxmemory on,
# the size of the output buffers needed to feed the replicas are subtracted
# from the used memory count, so that network problems / resyncs will
# not trigger a loop where keys are evicted, and in turn the output
# buffer of replicas is full with DELs of keys evicted triggering the deletion
# of more keys, and so forth until the database is completely emptied.
#
# In short... if you have replicas attached it is suggested that you set a lower
# limit for maxmemory so that there is some free RAM on the system for replica
# output buffers (but this is not needed if the policy is 'noeviction').
#
## Set the memory limit
# maxmemory <bytes>

# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
# is reached. You can select among five behaviors:
#
## Eviction policies (algorithms)
# volatile-lru -> Evict using approximated LRU among the keys with an expire set.
# allkeys-lru -> Evict any key using approximated LRU.
# volatile-lfu -> Evict using approximated LFU among the keys with an expire set.
# allkeys-lfu -> Evict any key using approximated LFU.
# volatile-random -> Remove a random key among the ones with an expire set.
# allkeys-random -> Remove a random key, any key.
# volatile-ttl -> Remove the key with the nearest expire time (minor TTL)
# noeviction -> Don't evict anything, just return an error on write operations.
#
# LRU means Least Recently Used
# LFU means Least Frequently Used
#
# Both LRU, LFU and volatile-ttl are implemented using approximated
# randomized algorithms.
#
# Note: with any of the above policies, Redis will return an error on write
#       operations, when there are no suitable keys for eviction.
#
#       At the date of writing these commands are: set setnx setex append
#       incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
#       sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
#       zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
#       getset mset msetnx exec sort
#
# The default is:
#
## Default eviction policy
# maxmemory-policy noeviction

# LRU, LFU and minimal TTL algorithms are not precise algorithms but approximated
# algorithms (in order to save memory), so you can tune it for speed or
# accuracy. For default Redis will check five keys and pick the one that was
# used less recently, you can change the sample size using the following
# configuration directive.
#
# The default of 5 produces good enough results. 10 Approximates very closely
# true LRU but costs more CPU. 3 is faster but not very accurate.
#
# maxmemory-samples 5

# Starting from Redis 5, by default a replica will ignore its maxmemory setting
# (unless it is promoted to master after a failover or manually). It means
# that the eviction of keys will be just handled by the master, sending the
# DEL commands to the replica as keys evict in the master side.
#
# This behavior ensures that masters and replicas stay consistent, and is usually
# what you want, however if your replica is writable, or you want the replica to have
# a different memory setting, and you are sure all the writes performed to the
# replica are idempotent, then you may change this default (but be sure to understand
# what you are doing).
#
# Note that since the replica by default does not evict, it may end using more
# memory than the one set via maxmemory (there are certain buffers that may
# be larger on the replica, or data structures may sometimes take more memory and so
# forth). So make sure you monitor your replicas and make sure they have enough
# memory to never hit a real out-of-memory condition before the master hits
# the configured maxmemory setting.
#
# replica-ignore-maxmemory yes

9 Redis Sentinel

9.1 The Sentinel Mechanism

Official documentation: https://redis.io/docs/management/sentinel/

Chinese documentation: none available at present

Redis sentinel mechanism
Sentinel is a key component of the Redis master-replica architecture, introduced in Redis 2.8. Its main job is to monitor all Redis instances and to fail over the master node. A sentinel is a special Redis service: it does not serve reads or writes, it only monitors Redis instances.

How it works

In a sentinel architecture, a client's first contact with the Redis service is actually with a sentinel. The sentinel returns the address of the master node of the instances it monitors, and from then on the client talks to the master directly rather than asking the sentinel on every request.

Sentinels continuously monitor the availability of all Redis instances. When the master is detected as failed, a new master is elected from the remaining slave nodes and its address is pushed to clients; the other slaves re-attach to the new master via the slaveof command. When the old master recovers, it rejoins as a slave under the new master.
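The failure-detection logic behind this can be sketched in two steps: one sentinel alone can flag the master subjectively down (S_DOWN), but a failover requires at least quorum sentinels to agree (O_DOWN). A simplified model; the helper names below are hypothetical, not Redis APIs:

```python
# Simplified model (illustration only) of sentinel failure detection.
# The function names are hypothetical helpers, not Redis APIs.

def is_subjectively_down(ms_since_last_pong: int,
                         down_after_milliseconds: int = 30_000) -> bool:
    """S_DOWN: this sentinel alone has had no valid PING reply for too long
    (sentinel.conf: down-after-milliseconds, default 30000)."""
    return ms_since_last_pong > down_after_milliseconds

def is_objectively_down(sdown_votes: int, quorum: int) -> bool:
    """O_DOWN: at least `quorum` sentinels agree the master is down,
    which is what allows a failover to be started."""
    return sdown_votes >= quorum

# Three sentinels, quorum 2 (matching `sentinel monitor ... 2` below):
votes = sum(is_subjectively_down(ms) for ms in (45_000, 31_000, 12_000))
print(is_objectively_down(votes, quorum=2))  # True: 2 of 3 sentinels agree
```

Note that even after O_DOWN, the sentinel that leads the failover must additionally be elected by a majority of all known sentinels, so a minority partition cannot perform a failover.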

9.2 Setting Up Sentinel

Configure sentinel (edit sentinel.conf on the master server)

[root@redis-server-master disk1]# cd redis-5.0.14
[root@redis-server-master redis-5.0.14]# ls
00-RELEASENOTES  COPYING  Makefile   redis.conf       runtest-moduleapi  src
BUGS             deps     MANIFESTO  runtest          runtest-sentinel   tests
CONTRIBUTING     INSTALL  README.md  runtest-cluster  sentinel.conf      utils
[root@redis-server-master redis-5.0.14]# cp sentinel.conf /usr/local/redis/
[root@redis-server-master redis-5.0.14]# cd /usr/local/redis/
[root@redis-server-master redis]# ls
persistence  redis.conf  sentinel.conf
[root@redis-server-master redis]# vim sentinel.conf 

Key settings in sentinel.conf:

# *** IMPORTANT ***
#
# By default Sentinel will not be reachable from interfaces different than
# localhost, either use the 'bind' directive to bind to a list of network
# interfaces, or disable protected mode with "protected-mode no" by
# adding it to this configuration file.
#
# Before doing that MAKE SURE the instance is protected from the outside
# world via firewalling or other means.
#
# For example you may use one of the following:
#
# bind 127.0.0.1 192.168.1.1
#
## Whether to enable protected mode.
## With protected mode off, anyone can connect.
## To enable it, comment out the line below and uncomment the bind directive above with the allowed IPs.
protected-mode no

# port <sentinel-port>
# The port that this sentinel instance will run on
## Port
port 26379

# By default Redis Sentinel does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis-sentinel.pid when
# daemonized.
## Run as a daemon
daemonize yes

# When running daemonized, Redis Sentinel writes a pid file in
# /var/run/redis-sentinel.pid by default. You can specify a custom pid file
# location here.
pidfile /var/run/redis-sentinel.pid

# Specify the log file name. Also the empty string can be used to force
# Sentinel to log on the standard output. Note that if you use standard
# output for logging but daemonize, logs will be sent to /dev/null
## Log file location
logfile /usr/local/redis/sentinel/redis-sentinel.log

# sentinel announce-ip <ip>
# sentinel announce-port <port>
#
# The above two configuration directives are useful in environments where,
# because of NAT, Sentinel is reachable from outside via a non-local address.
#
# When announce-ip is provided, the Sentinel will claim the specified IP address
# in HELLO messages used to gossip its presence, instead of auto-detecting the
# local address as it usually does.
#
# Similarly when announce-port is provided and is valid and non-zero, Sentinel
# will announce the specified TCP port.
#
# The two options don't need to be used together, if only announce-ip is
# provided, the Sentinel will announce the specified IP and the server port
# as specified by the "port" option. If only announce-port is provided, the
# Sentinel will announce the auto-detected local IP and the specified port.
#
# Example:
#
# sentinel announce-ip 1.2.3.4

# dir <working-directory>
# Every long running process should have a well-defined working directory.
# For Redis Sentinel to chdir to /tmp at startup is the simplest thing
# for the process to don't interfere with administrative tasks such as
# unmounting filesystems.
## Working directory
dir /usr/local/redis/sentinel

# sentinel monitor <master-name> <ip> <redis-port> <quorum>
#
# Tells Sentinel to monitor this master, and to consider it in O_DOWN
# (Objectively Down) state only if at least <quorum> sentinels agree.
#
# Note that whatever is the ODOWN quorum, a Sentinel will require to
# be elected by the majority of the known Sentinels in order to
# start a failover, so no failover can be performed in minority.
#
# Replicas are auto-discovered, so you don't need to specify replicas in
# any way. Sentinel itself will rewrite this configuration file adding
# the replicas using additional configuration options.
# Also note that the configuration file is rewritten when a
# replica is promoted to master.
#
# Note: master name should not include special characters or spaces.
# The valid charset is A-z 0-9 and the three characters ".-_".
## Core setting: sentinel monitor <master-name> <master-ip> <master-port> <quorum>
## quorum = number of sentinels that must agree before the master is considered objectively down
sentinel monitor redis-master 192.168.18.210 6379 2

# sentinel auth-pass <master-name> <password>
#
# Set the password to use to authenticate with the master and replicas.
# Useful if there is a password set in the Redis instances to monitor.
#
# Note that the master password is also used for replicas, so it is not
# possible to set a different password in masters and replicas instances
# if you want to be able to monitor these instances with Sentinel.
#
# However you can have Redis instances without the authentication enabled
# mixed with Redis instances requiring the authentication (as long as the
# password set is the same for all the instances requiring the password) as
# the AUTH command will have no effect in Redis instances with authentication
# switched off.
#
# Example:
#
## Master password
sentinel auth-pass redis-master 123456

# sentinel down-after-milliseconds <master-name> <milliseconds>
#
# Number of milliseconds the master (or any attached replica or sentinel) should
# be unreachable (as in, not acceptable reply to PING, continuously, for the
# specified period) in order to consider it in S_DOWN state (Subjectively
# Down).
#
# Default is 30 seconds.
## Milliseconds a node must be unreachable before being flagged subjectively down
sentinel down-after-milliseconds redis-master 30000

# sentinel parallel-syncs <master-name> <numreplicas>
#
# How many replicas we can reconfigure to point to the new replica simultaneously
# during the failover. Use a low number if you use the replicas to serve query
# to avoid that all the replicas will be unreachable at about the same
# time while performing the synchronization with the master.
## Number of replicas resynchronized in parallel during a failover
sentinel parallel-syncs redis-master 1

# sentinel failover-timeout <master-name> <milliseconds>
#
# Specifies the failover timeout in milliseconds. It is used in many ways:
#
# - The time needed to re-start a failover after a previous failover was
#   already tried against the same master by a given Sentinel, is two
#   times the failover timeout.
#
# - The time needed for a replica replicating to a wrong master according
#   to a Sentinel current configuration, to be forced to replicate
#   with the right master, is exactly the failover timeout (counting since
#   the moment a Sentinel detected the misconfiguration).
#
# - The time needed to cancel a failover that is already in progress but
#   did not produced any configuration change (SLAVEOF NO ONE yet not
#   acknowledged by the promoted replica).
#
# - The maximum time a failover in progress waits for all the replicas to be
#   reconfigured as replicas of the new master. However even after this time
#   the replicas will be reconfigured by the Sentinels anyway, but not with
#   the exact parallel-syncs progression as specified.
#
# Default is 3 minutes.
## Failover timeout
sentinel failover-timeout redis-master 180000

Copy sentinel.conf to the slave servers (via scp or manual upload)

[root@redis-server-master redis]# scp sentinel.conf root@192.168.18.211:/usr/local/redis/
The authenticity of host '192.168.18.211 (192.168.18.211)' can't be established.
ECDSA key fingerprint is SHA256:SmTmvoE1CQSfauRIzFKOPqWEUzs+kdp4bmh1hAmtTmA.
ECDSA key fingerprint is MD5:dc:2a:71:31:29:d8:a5:99:07:d1:26:6e:23:16:31:30.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.18.211' (ECDSA) to the list of known hosts.
root@192.168.18.211's password: 
sentinel.conf                                                               100% 9775    12.1MB/s   00:00    
[root@redis-server-master redis]# scp sentinel.conf root@192.168.18.212:/usr/local/redis/
The authenticity of host '192.168.18.212 (192.168.18.212)' can't be established.
ECDSA key fingerprint is SHA256:SmTmvoE1CQSfauRIzFKOPqWEUzs+kdp4bmh1hAmtTmA.
ECDSA key fingerprint is MD5:dc:2a:71:31:29:d8:a5:99:07:d1:26:6e:23:16:31:30.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.18.212' (ECDSA) to the list of known hosts.
root@192.168.18.212's password: 
sentinel.conf                                                               100% 9775    11.3MB/s   00:00    
[root@redis-server-master redis]# 

Start sentinel

# Start sentinel using the config file
[root@redis-server-master redis]# redis-sentinel /usr/local/redis/sentinel.conf 
*** FATAL CONFIG FILE ERROR ***
Reading the configuration file, at line 36
>>> 'logfile /usr/local/redis/sentinel/redis-sentinel.log'
Can't open the log file: No such file or directory
# The error occurs because the configured log directory does not exist; create it manually
[root@redis-server-master redis]# mkdir /usr/local/redis/sentinel -p
# Start sentinel
[root@redis-server-master redis]# redis-sentinel /usr/local/redis/sentinel.conf 
# Check processes; startup succeeded: redis-sentinel *:26379
[root@redis-server-master redis]# ps -ef | grep redis
root        681      1  0 5月13 ?       00:00:30 /usr/local/bin/redis-server 0.0.0.0:6379
root     102784      1  0 02:51 ?        00:00:00 redis-sentinel *:26379 [sentinel]
root     104886  81302  0 02:53 pts/0    00:00:00 grep --color=auto redis

Tail redis-sentinel.log, then start the sentinels on the slave servers one by one (same procedure as above) and watch the log output.

[root@redis-server-master redis]# cd  /usr/local/redis/sentinel
[root@redis-server-master sentinel]# tail -f redis-sentinel.log 
102783:X 14 May 2023 02:51:21.498 # Redis version=5.0.14, bits=64, commit=00000000, modified=0, pid=102783, just started
102783:X 14 May 2023 02:51:21.498 # Configuration loaded
102784:X 14 May 2023 02:51:21.500 * Increased maximum number of open files to 10032 (it was originally set to 1024).
102784:X 14 May 2023 02:51:21.502 * Running mode=sentinel, port=26379.
102784:X 14 May 2023 02:51:21.502 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
102784:X 14 May 2023 02:51:21.504 # Sentinel ID is 9d7d21b5d7f7854a77ba1be4d04c0f68beb9dfb6
102784:X 14 May 2023 02:51:21.504 # +monitor master redis-master 192.168.18.210 6379 quorum 2
102784:X 14 May 2023 02:51:21.505 * +slave slave 192.168.18.211:6379 192.168.18.211 6379 @ redis-master 192.168.18.210 6379
102784:X 14 May 2023 02:51:21.505 * +slave slave 192.168.18.212:6379 192.168.18.212 6379 @ redis-master 192.168.18.210 6379
# Log output after the slave sentinels start
102784:X 14 May 2023 02:55:24.384 * +sentinel sentinel e1f60a7be69ad0cc5f74e7db1c093a29cf61750d 192.168.18.211 26379 @ redis-master 192.168.18.210 6379
102784:X 14 May 2023 02:56:16.244 * +sentinel sentinel cd2cc88653a2fc8a9cdfc877ccbc5e30828ce623 192.168.18.212 26379 @ redis-master 192.168.18.210 6379

Testing

Check the replication details on all three servers (master, slave-1, and slave-2 below).

master:

[root@redis-server-master ~]# redis-cli
127.0.0.1:6379> auth 123456
OK
127.0.0.1:6379> info replication
# Replication
role:master
connected_slaves:2
slave0:ip=192.168.18.211,port=6379,state=online,offset=130537,lag=1
slave1:ip=192.168.18.212,port=6379,state=online,offset=130684,lag=1
master_replid:d50b8930ac2ea02530d2b38ae1e7ceb67332ec0a
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:130684
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:18284
repl_backlog_histlen:112401

slave-1:

[root@redis-server-slave-1 redis]# redis-cli
127.0.0.1:6379> auth 123456
OK
127.0.0.1:6379> info replication
# Replication
role:slave
master_host:192.168.18.210
master_port:6379
master_link_status:up
master_last_io_seconds_ago:1
master_sync_in_progress:0
slave_repl_offset:119736
slave_priority:100
slave_read_only:1
connected_slaves:0
master_replid:d50b8930ac2ea02530d2b38ae1e7ceb67332ec0a
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:119736
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:18284
repl_backlog_histlen:101453

slave-2:

[root@redis-server-slave-2 ~]# redis-cli
127.0.0.1:6379> auth123456
(error) ERR unknown command `auth123456`, with args beginning with: 
127.0.0.1:6379> auth 123456
OK
127.0.0.1:6379> info replication
# Replication
role:slave
master_host:192.168.18.210
master_port:6379
master_link_status:up
master_last_io_seconds_ago:1
master_sync_in_progress:0
slave_repl_offset:157753
slave_priority:100
slave_read_only:1
connected_slaves:0
master_replid:d50b8930ac2ea02530d2b38ae1e7ceb67332ec0a
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:157753
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:18298
repl_backlog_histlen:139456

Stop the Redis service on the master

[root@redis-server-master ~]# /etc//init.d/redis_init_script stop
Stopping ...
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
Redis stopped

New entries appear in redis-sentinel.log:

102784:X 14 May 2023 03:09:56.339 # +sdown master redis-master 192.168.18.210 6379
102784:X 14 May 2023 03:09:56.411 # +new-epoch 1
102784:X 14 May 2023 03:09:56.412 # +vote-for-leader e1f60a7be69ad0cc5f74e7db1c093a29cf61750d 1
102784:X 14 May 2023 03:09:56.431 # +odown master redis-master 192.168.18.210 6379 #quorum 3/2
102784:X 14 May 2023 03:09:56.431 # Next failover delay: I will not start a failover before Sun May 14 03:15:56 2023
102784:X 14 May 2023 03:09:57.602 # +config-update-from sentinel e1f60a7be69ad0cc5f74e7db1c093a29cf61750d 192.168.18.211 26379 @ redis-master 192.168.18.210 6379
102784:X 14 May 2023 03:09:57.602 # +switch-master redis-master 192.168.18.210 6379 192.168.18.211 6379
102784:X 14 May 2023 03:09:57.603 * +slave slave 192.168.18.212:6379 192.168.18.212 6379 @ redis-master 192.168.18.211 6379
102784:X 14 May 2023 03:09:57.603 * +slave slave 192.168.18.210:6379 192.168.18.210 6379 @ redis-master 192.168.18.211 6379
102784:X 14 May 2023 03:10:27.616 # +sdown slave 192.168.18.210:6379 192.168.18.210 6379 @ redis-master 192.168.18.211 6379

Check the replication details on the two slave servers.

slave-1:

127.0.0.1:6379> clear
127.0.0.1:6379> auth 123456
OK
127.0.0.1:6379> info replication
# Replication
role:master
connected_slaves:1
slave0:ip=192.168.18.212,port=6379,state=online,offset=269755,lag=1
master_replid:1d21e8251e6584b8bc247ef7b2effc9cddbedd8e
master_replid2:d50b8930ac2ea02530d2b38ae1e7ceb67332ec0a
master_repl_offset:269902
second_repl_offset:219445
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:18284
repl_backlog_histlen:251619

slave-2:

127.0.0.1:6379> clear
127.0.0.1:6379> auth 123456
OK
127.0.0.1:6379> info replication
# Replication
role:slave
master_host:192.168.18.211
master_port:6379
master_link_status:up
master_last_io_seconds_ago:1
master_sync_in_progress:0
slave_repl_offset:275236
slave_priority:100
slave_read_only:1
connected_slaves:0
master_replid:1d21e8251e6584b8bc247ef7b2effc9cddbedd8e
master_replid2:d50b8930ac2ea02530d2b38ae1e7ceb67332ec0a
master_repl_offset:275236
second_repl_offset:219445
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:18298
repl_backlog_histlen:256939

Restart the Redis service on the master

[root@redis-server-master ~]# /etc//init.d/redis_init_script start
Starting Redis server...
23543:C 14 May 2023 03:19:03.482 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
23543:C 14 May 2023 03:19:03.482 # Redis version=5.0.14, bits=64, commit=00000000, modified=0, pid=23543, just started
23543:C 14 May 2023 03:19:03.482 # Configuration loaded

New entries appear in redis-sentinel.log:

102784:X 14 May 2023 03:19:03.617 # -sdown slave 192.168.18.210:6379 192.168.18.210 6379 @ redis-master 192.168.18.211 6379
# The log shows 192.168.18.210:6379 being converted to a slave of the new master 192.168.18.211:6379
102784:X 14 May 2023 03:19:13.582 * +convert-to-slave slave 192.168.18.210:6379 192.168.18.210 6379 @ redis-master 192.168.18.211 6379

Checking the replication details on the original master now shows its role has indeed changed to slave

[root@redis-server-master ~]# redis-cli
127.0.0.1:6379> auth 123456
OK
127.0.0.1:6379> info replication
# Replication
role:slave
master_host:192.168.18.211
master_port:6379
master_link_status:down
master_last_io_seconds_ago:-1
master_sync_in_progress:0
slave_repl_offset:1
master_link_down_since_seconds:1684005646
slave_priority:100
slave_read_only:1
connected_slaves:0
master_replid:7168a5479b10c74444800045ab3d9102a2202490
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:0
second_repl_offset:-1
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0

9.3 Integrating Redis Sentinel with Spring Boot

YAML configuration

spring:
  redis:
    database: 1
    password: 123456
    sentinel:
      master: redis-master
      nodes: 192.168.18.210:26379,192.168.18.211:26379,192.168.18.212:26379

10 Redis Cluster

Redis cluster

10.1 Building a Redis Cluster

Preparing the cluster environment

Prepare six servers, each with a standalone Redis installed and configured (see sections 1.1/1.2).

Edit redis.conf on every machine as follows:

################################ REDIS CLUSTER  ###############################

# Normal Redis instances can't be part of a Redis Cluster; only nodes that are
# started as cluster nodes can. In order to start a Redis instance as a
# cluster node enable the cluster support uncommenting the following:
#
# cluster-enabled yes
## Enable Redis cluster mode
cluster-enabled yes
# Every cluster node has a cluster configuration file. This file is not
# intended to be edited by hand. It is created and updated by Redis nodes.
# Every Redis Cluster node requires a different cluster configuration file.
# Make sure that instances running in the same system do not have
# overlapping cluster configuration file names.
#
# cluster-config-file nodes-6379.conf
## Cluster config file for this node
cluster-config-file nodes-6379.conf
# Cluster node timeout is the amount of milliseconds a node must be unreachable
# for it to be considered in failure state.
# Most other internal time limits are multiple of the node timeout.
#
# cluster-node-timeout 15000
## Node timeout (ms)
cluster-node-timeout 5000

############################## APPEND ONLY MODE ###############################

# By default Redis asynchronously dumps the dataset on disk. This mode is
# good enough in many applications, but an issue with the Redis process or
# a power outage may result into a few minutes of writes lost (depending on
# the configured save points).
#
# The Append Only File is an alternative persistence mode that provides
# much better durability. For instance using the default data fsync policy
# (see later in the config file) Redis can lose just one second of writes in a
# dramatic event like a server power outage, or a single write if something
# wrong with the Redis process itself happens, but the operating system is
# still running correctly.
#
# AOF and RDB persistence can be enabled at the same time without problems.
# If the AOF is enabled on startup Redis will load the AOF, that is the file
# with the better durability guarantees.
#
# Please check http://redis.io/topics/persistence for more information.
## Enable AOF
appendonly yes

Restart the Redis service and check the process status

[root@redis-server-cluster-1 persistence]# /etc/init.d/redis_init_script stop
Stopping ...
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
Redis stopped
[root@redis-server-cluster-1 persistence]# /etc/init.d/redis_init_script start
Starting Redis server...
30406:C 26 May 2023 23:17:45.253 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
30406:C 26 May 2023 23:17:45.253 # Redis version=5.0.14, bits=64, commit=00000000, modified=0, pid=30406, just started
30406:C 26 May 2023 23:17:45.253 # Configuration loaded
[root@redis-server-cluster-1 persistence]#  ps -ef | grep redis
root      30407      1  0 23:17 ?        00:00:00 /usr/local/bin/redis-server 127.0.0.1:6379 [cluster]
root      30676   1164  0 23:17 pts/0    00:00:00 grep --color=auto redis

Create the cluster

Starting with Redis 5.0, Redis Cluster dropped support for the ruby script redis-trib.rb (the manual command-line way of building a cluster is unchanged); its functionality was folded into redis-cli, so a Ruby environment is no longer required. Use redis-cli's --cluster option instead.

Run the following command on any node:

[root@redis-server-cluster-1 redis]# redis-cli -a 123456 --cluster create 192.168.18.201:6379 192.168.18.202:6379 192.168.18.203:6379 192.168.18.204:6379 192.168.18.205:6379 192.168.18.206:6379 --cluster-replicas 1
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 192.168.18.205:6379 to 192.168.18.201:6379
Adding replica 192.168.18.206:6379 to 192.168.18.202:6379
Adding replica 192.168.18.204:6379 to 192.168.18.203:6379
M: bec7ad89dbc24c421a06456ceb36a9e6066d4b22 192.168.18.201:6379
   slots:[0-5460] (5461 slots) master
M: 613082942c5f9bb7b02980effdb787d66582e532 192.168.18.202:6379
   slots:[5461-10922] (5462 slots) master
M: 5d81bf4473cc70087de89401d62683b7c62c7c0c 192.168.18.203:6379
   slots:[10923-16383] (5461 slots) master
S: 597f1b30149eaa1fffba0750ae10603512782b7a 192.168.18.204:6379
   replicates 5d81bf4473cc70087de89401d62683b7c62c7c0c
S: 006e30805ae5f8c2efa9189fa4fd59fd79f68173 192.168.18.205:6379
   replicates bec7ad89dbc24c421a06456ceb36a9e6066d4b22
S: 8b5e6b4aff2edae41a53bca9d013d441b77a8ccf 192.168.18.206:6379
   replicates 613082942c5f9bb7b02980effdb787d66582e532
Can I set the above configuration? (type 'yes' to accept):

Type yes to continue:

Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
.
>>> Performing Cluster Check (using node 192.168.18.201:6379)
M: bec7ad89dbc24c421a06456ceb36a9e6066d4b22 192.168.18.201:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: 5d81bf4473cc70087de89401d62683b7c62c7c0c 192.168.18.203:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 006e30805ae5f8c2efa9189fa4fd59fd79f68173 192.168.18.205:6379
   slots: (0 slots) slave
   replicates bec7ad89dbc24c421a06456ceb36a9e6066d4b22
S: 597f1b30149eaa1fffba0750ae10603512782b7a 192.168.18.204:6379
   slots: (0 slots) slave
   replicates 5d81bf4473cc70087de89401d62683b7c62c7c0c
M: 613082942c5f9bb7b02980effdb787d66582e532 192.168.18.202:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 8b5e6b4aff2edae41a53bca9d013d441b77a8ccf 192.168.18.206:6379
   slots: (0 slots) slave
   replicates 613082942c5f9bb7b02980effdb787d66582e532
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

Check the Redis Cluster nodes:

[root@redis-server-cluster-1 redis]# redis-cli -a 123456 --cluster check 192.168.18.201:6379
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
192.168.18.201:6379 (bec7ad89...) -> 0 keys | 5461 slots | 1 slaves.
192.168.18.203:6379 (5d81bf44...) -> 0 keys | 5461 slots | 1 slaves.
192.168.18.202:6379 (61308294...) -> 0 keys | 5462 slots | 1 slaves.
[OK] 0 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.18.201:6379)
M: bec7ad89dbc24c421a06456ceb36a9e6066d4b22 192.168.18.201:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: 5d81bf4473cc70087de89401d62683b7c62c7c0c 192.168.18.203:6379
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 006e30805ae5f8c2efa9189fa4fd59fd79f68173 192.168.18.205:6379
   slots: (0 slots) slave
   replicates bec7ad89dbc24c421a06456ceb36a9e6066d4b22
S: 597f1b30149eaa1fffba0750ae10603512782b7a 192.168.18.204:6379
   slots: (0 slots) slave
   replicates 5d81bf4473cc70087de89401d62683b7c62c7c0c
M: 613082942c5f9bb7b02980effdb787d66582e532 192.168.18.202:6379
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 8b5e6b4aff2edae41a53bca9d013d441b77a8ccf 192.168.18.206:6379
   slots: (0 slots) slave
   replicates 613082942c5f9bb7b02980effdb787d66582e532
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

10.2 Hash slots

In Redis, slots are the mechanism behind cluster sharding. Redis Cluster uses hash slots to map keys to instances.

In Redis Cluster, a sharded cluster has 16384 hash slots in total. Each node is responsible for a subset of the slots, and each slot is assigned to exactly one node. Slots can also be moved between nodes through slot migration (resharding), which in open-source Redis Cluster is initiated by an administrator, e.g. with redis-cli --cluster reshard.

Slot assignment follows cluster membership: when a new node joins the cluster it can be given a share of the slots, and when a node is removed from the cluster its slots are migrated to the remaining nodes first. This slot management mechanism gives Redis Cluster good scalability and fault tolerance.

These hash slots act like data partitions: every key-value pair is mapped to a hash slot based on its key.

The mapping takes two steps: first, a 16-bit value is computed from the key using the CRC16 algorithm; then that value is taken modulo 16384, yielding a number in the range 0~16383, each of which identifies a hash slot.
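
The two steps above can be sketched in Python. Redis Cluster uses the CRC16-CCITT (XModem) variant, polynomial 0x1021 with a zero initial value; this is a standalone illustration, not the actual Redis source:

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16 (XModem variant, poly 0x1021, init 0) as used by Redis Cluster."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key: bytes) -> int:
    """Step 2: reduce the 16-bit CRC to one of the 16384 hash slots."""
    return crc16_xmodem(key) % 16384

# "123456789" is the standard CRC16/XMODEM check string (CRC = 0x31C3)
print(key_slot(b"123456789"))  # → 12739
```

You can cross-check the result against a running node with the CLUSTER KEYSLOT command, e.g. `CLUSTER KEYSLOT 123456789`.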

When using a Redis cluster, you can inspect the current slot assignment with the CLUSTER SLOTS command, as shown below:

127.0.0.1:6379> CLUSTER SLOTS
1) 1) (integer) 0
   2) (integer) 5460
   3) 1) "127.0.0.1"
      2) (integer) 6379
      3) "54aae82506a68c52291f0b1d2fa1839e9a2c5826"
2) 1) (integer) 5461
   2) (integer) 10922
   3) 1) "127.0.0.1"
      2) (integer) 6380
      3) "9a9eb33e37d732025f1d2bb1d08f063c853524a8"

The command returns a list of slot-assignment entries. Each entry describes a contiguous range of slots: the start slot number, the end slot number, and the node responsible for that range.

Redis Cluster maps data via hash slots, but how can two different keys be forced into the same slot?

For a deeper discussion of this question, see this post: https://juejin.cn/post/7045122646958161950
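
The mechanism in question is hash tags: if a key contains a `{...}` section, Redis Cluster hashes only the substring inside the first such pair of braces, so keys sharing the same tag always land in the same slot (e.g. `{user1000}.following` and `{user1000}.followers`). A minimal sketch of the tag-extraction rule, as an illustration rather than the actual Redis source:

```python
def hash_tag(key: str) -> str:
    """Return the part of the key that Redis Cluster actually hashes."""
    start = key.find('{')
    if start != -1:
        end = key.find('}', start + 1)
        if end > start + 1:  # tag must be non-empty, e.g. "foo{}" hashes the whole key
            return key[start + 1:end]
    return key

print(hash_tag("{user1000}.following"))  # → user1000
print(hash_tag("{user1000}.followers"))  # → user1000  (same slot as above)
print(hash_tag("foo{}{bar}"))            # → foo{}{bar} (empty tag: whole key is hashed)
```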
