redis
install
Official installation tutorial
https://redis.io/docs/getting-started/installation/install-redis-from-source/
For detailed installation steps see README.md in the redis source directory
make
make install
./util/install_server.sh
Default installation settings
Port : 6379
Config file : /etc/redis/6379.conf
Log file : /var/log/redis_6379.log
Data dir : /var/lib/redis/6379
Executable : /usr/local/bin/redis-server
Cli Executable : /usr/local/bin/redis-cli
-
JVM thread cost
One JVM thread costs roughly 1MB (its stack)
1. Many threads waste CPU on scheduling
2. Memory cost
-
Ordering (single-threaded, single-process)
Commands from a given connection are executed in order
Commands
redis-cli -p [port]
set k1:1 hello // the colon is only a key-naming convention; to write to DB 1 use SELECT 1 (or redis-cli -n 1)
help @generic
A key is an object that carries its type, encoding, and length (see TYPE / OBJECT ENCODING)
String
set key value [expiration EX seconds|PX milliseconds] [NX|XX]
-
Strings
get k1
set k1 "hello"
append k1 " haha"
getrange k1 0 2
getrange k1 0 -1 // supports positive and negative (reverse) indexes
setrange k1 2 "haha"
strlen k1
mget
mset k1 "aa" k2 "bb"
msetnx // atomic: if any key fails to set, none are set
append k1 中 // a UTF-8 character takes 3 bytes; Redis strings are binary-safe and store only the byte stream
redis-cli --raw // decode output (e.g. UTF-8) instead of showing escapes
-
Numbers
incr: flash sales, seckill, detail-page counters; avoids transactional database operations under concurrency, since everything happens in Redis memory
object encoding k1
incr k1
incrby k2 2
DECR k2
DECRBY k2 2
INCRBYFLOAT k2 0.5
-
bitmap
setbit key offset value
Character set: ASCII
Others are generally called extended character sets
Extended character sets do not re-encode the ASCII range
127.0.0.1:6379> setbit k1 1 1
0
127.0.0.1:6379> strlen k1
1
127.0.0.1:6379> get k1
@
127.0.0.1:6379> setbit k1 7 1
0
127.0.0.1:6379> strlen k1
1
127.0.0.1:6379> get k1
A
127.0.0.1:6379> setbit k1 9 1
0
127.0.0.1:6379> strlen k1
2
127.0.0.1:6379> get k1
A@
-
offset
0000 0000
0123 4567
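The byte/offset layout above can be modeled in a few lines of Python (an illustrative sketch, not Redis itself; `setbit` here is a hypothetical helper). It reproduces the SETBIT / STRLEN / GET session shown earlier:

```python
def setbit(buf: bytearray, offset: int, value: int) -> None:
    # Bit `offset` counts from the most-significant bit of byte 0,
    # matching Redis SETBIT; the buffer grows with zero bytes as needed.
    byte, bit = divmod(offset, 8)
    if byte >= len(buf):
        buf.extend(b"\x00" * (byte + 1 - len(buf)))
    mask = 1 << (7 - bit)
    if value:
        buf[byte] |= mask
    else:
        buf[byte] &= ~mask

s = bytearray()
setbit(s, 1, 1)             # byte 0 = 0b01000000
print(len(s), s.decode())   # 1 @
setbit(s, 7, 1)             # byte 0 = 0b01000001
print(len(s), s.decode())   # 1 A
setbit(s, 9, 1)             # byte 1 = 0b01000000
print(len(s), s.decode())   # 2 A@
```

This is why `setbit k1 9 1` grows the string to 2 bytes: offset 9 falls in the second byte.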
-
bitpos key bit [start] [end]
-
bitcount key [start end]
-
bitop operation destkey key [key …] // bitwise logical ops: AND, OR, XOR, NOT
-
Count a user's login days within a time range
One bit represents whether the user logged in on a given day
key = user id; each bit of the value marks one day's login status
bitcount user1
-
Count how many users logged in over a period
key = date; each bit of the value marks one user's login status for that day
bitop or destkey 20220101 20220102
bitcount destkey 0 -1
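Both bitmap patterns can be sketched with plain Python integers (an illustrative model: bit i stands for day i or user i; popcount plays the role of BITCOUNT, and `|` plays the role of BITOP OR):

```python
# Pattern 1: key = user, each bit = one day. Login-day count = popcount (BITCOUNT).
user1 = 0
for day in (0, 1, 5, 30):        # user1 logged in on these days
    user1 |= 1 << day
print(bin(user1).count("1"))     # 4 login days

# Pattern 2: key = date, each bit = one user. OR the days (BITOP OR), then popcount.
d20220101 = 0b1010               # users 1 and 3 logged in
d20220102 = 0b0110               # users 1 and 2 logged in
merged = d20220101 | d20220102   # 0b1110
print(bin(merged).count("1"))    # 3 distinct users over the two days
```

The OR merge is what makes "distinct users over N days" cheap: duplicates collapse into the same bit.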
-
list
-
Stack: lpush + lpop
-
Queue: lpush + rpop
-
Array: lindex, lset
-
lrem key count element
count > 0: remove elements equal to element, moving from head to tail
count < 0: remove elements equal to element, moving from tail to head
count = 0: remove all elements equal to element
-
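The three LREM count modes can be mimicked with a small Python function (an illustrative sketch of the semantics, not Redis code):

```python
def lrem(lst: list, count: int, element) -> int:
    """Mimic Redis LREM: remove up to |count| occurrences of element.
    count > 0: scan head->tail; count < 0: scan tail->head; count == 0: remove all.
    Returns the number of elements removed, like LREM's integer reply."""
    removed = 0
    if count >= 0:
        limit = count if count > 0 else len(lst)
        i = 0
        while i < len(lst) and removed < limit:
            if lst[i] == element:
                del lst[i]
                removed += 1
            else:
                i += 1
    else:
        limit = -count
        i = len(lst) - 1
        while i >= 0 and removed < limit:
            if lst[i] == element:
                del lst[i]
                removed += 1
            i -= 1
    return removed

l = ["a", "b", "a", "c", "a"]
lrem(l, 2, "a")
print(l)   # ['b', 'c', 'a']  (the first two 'a's removed, head->tail)
```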
ltrim key start stop // trim the list down to the given range
-
blpop
127.0.0.1:6379> rpush k1 a b c d e f
(integer) 6
127.0.0.1:6379> lrange k1 0 -1
1) "a"
2) "b"
3) "c"
4) "d"
5) "e"
6) "f"
127.0.0.1:6379> ltrim k1 1 3
OK
127.0.0.1:6379> lrange k1 0 -1
1) "b"
2) "c"
3) "d"
hash
127.0.0.1:6379> hset k1 name "zhangsan"
1
127.0.0.1:6379> hget k1 name
zhangsan
127.0.0.1:6379> hset k1 age 22
1
127.0.0.1:6379> hkeys k1
name
age
127.0.0.1:6379> hvals k1
zhangsan
22
127.0.0.1:6379> hgetall k1
name
zhangsan
age
22
127.0.0.1:6379> hmget k1 name age
zhangsan
22
127.0.0.1:6379> hlen k1
2
127.0.0.1:6379> hset k1 name "lisi"
0
127.0.0.1:6379> hget k1 name
lisi
127.0.0.1:6379> hdel k1 name
1
127.0.0.1:6379> hgetall k1
age
22
set
Unordered, deduplicated
127.0.0.1:6379> sadd k1 "hello"
(integer) 1
127.0.0.1:6379> sadd k1 "world"
(integer) 1
127.0.0.1:6379> sadd k1 "world"
(integer) 0
127.0.0.1:6379> SMEMBERS k1
1) "world"
2) "hello"
127.0.0.1:6379> SCARD k1
(integer) 2
127.0.0.1:6379> srem k1 "hello"
(integer) 1
Set operations (intersection, union, difference)
Random sampling
SRANDMEMBER
count > 0: up to count distinct elements
count < 0: |count| elements, repeats allowed
127.0.0.1:6379> sadd k1 one two three
(integer) 3
127.0.0.1:6379> SRANDMEMBER k1
"one"
127.0.0.1:6379> SRANDMEMBER k1 2
1) "two"
2) "one"
127.0.0.1:6379> SRANDMEMBER k1 5
1) "two"
2) "three"
3) "one"
127.0.0.1:6379> SRANDMEMBER k1 -5
1) "one"
2) "one"
3) "one"
4) "three"
5) "two"
127.0.0.1:6379> SRANDMEMBER k1 -3
1) "three"
2) "three"
3) "one"
127.0.0.1:6379> SRANDMEMBER k1 0
(empty list or set)
SPOP
sorted set
Stored physically in ascending score order (small on the left, large on the right), regardless of which direction a command reads it
BZPOPMAX key [key …] timeout
summary: Remove and return the member with the highest score from one or more sorted sets, or block until one is available
since: 5.0.0
BZPOPMIN key [key …] timeout
summary: Remove and return the member with the lowest score from one or more sorted sets, or block until one is available
since: 5.0.0
ZADD key [NX|XX] [CH] [INCR] score member [score member …]
summary: Add one or more members to a sorted set, or update its score if it already exists
since: 1.2.0
ZCARD key
summary: Get the number of members in a sorted set
since: 1.2.0
ZCOUNT key min max
summary: Count the members in a sorted set with scores within the given values
since: 2.0.0
ZINCRBY key increment member
summary: Increment the score of a member in a sorted set
since: 1.2.0
ZINTERSTORE destination numkeys key [key …] [WEIGHTS weight] [AGGREGATE SUM|MIN|MAX]
summary: Intersect multiple sorted sets and store the resulting sorted set in a new key
since: 2.0.0
ZLEXCOUNT key min max
summary: Count the number of members in a sorted set between a given lexicographical range
since: 2.8.9
ZPOPMAX key [count]
summary: Remove and return members with the highest scores in a sorted set
since: 5.0.0
ZPOPMIN key [count]
summary: Remove and return members with the lowest scores in a sorted set
since: 5.0.0
ZRANGE key start stop [WITHSCORES]
summary: Return a range of members in a sorted set, by index
since: 1.2.0
ZRANGEBYLEX key min max [LIMIT offset count]
summary: Return a range of members in a sorted set, by lexicographical range
since: 2.8.9
ZRANGEBYSCORE key min max [WITHSCORES] [LIMIT offset count]
summary: Return a range of members in a sorted set, by score
since: 1.0.5
ZRANK key member
summary: Determine the index of a member in a sorted set
since: 2.0.0
ZREM key member [member …]
summary: Remove one or more members from a sorted set
since: 1.2.0
ZREMRANGEBYLEX key min max
summary: Remove all members in a sorted set between the given lexicographical range
since: 2.8.9
ZREMRANGEBYRANK key start stop
summary: Remove all members in a sorted set within the given indexes
since: 2.0.0
ZREMRANGEBYSCORE key min max
summary: Remove all members in a sorted set within the given scores
since: 1.2.0
ZREVRANGE key start stop [WITHSCORES]
summary: Return a range of members in a sorted set, by index, with scores ordered from high to low
since: 1.2.0
ZREVRANGEBYLEX key max min [LIMIT offset count]
summary: Return a range of members in a sorted set, by lexicographical range, ordered from higher to lower strings.
since: 2.8.9
ZREVRANGEBYSCORE key max min [WITHSCORES] [LIMIT offset count]
summary: Return a range of members in a sorted set, by score, with scores ordered from high to low
since: 2.2.0
ZREVRANK key member
summary: Determine the index of a member in a sorted set, with scores ordered from high to low
since: 2.0.0
ZSCAN key cursor [MATCH pattern] [COUNT count]
summary: Incrementally iterate sorted sets elements and associated scores
since: 2.8.0
ZSCORE key member
summary: Get the score associated with the given member in a sorted set
since: 1.2.0
ZUNIONSTORE destination numkeys key [key …] [WEIGHTS weight] [AGGREGATE SUM|MIN|MAX]
summary: Add multiple sorted sets and store the resulting sorted set in a new key
since: 2.0.0
127.0.0.1:6379> zadd k1 10 g1
(integer) 1
127.0.0.1:6379> zadd k1 22 g2
(integer) 1
127.0.0.1:6379> zadd k1 9 g3
(integer) 1
127.0.0.1:6379> zrange k1 0 -1 withscores
1) "g3"
2) "9"
3) "g1"
4) "10"
5) "g2"
6) "22"
127.0.0.1:6379> zrevrange k1 0 -1 withscores
1) "g2"
2) "22"
3) "g1"
4) "10"
5) "g3"
6) "9"
127.0.0.1:6379> zrem k1 g1
(integer) 1
127.0.0.1:6379> zrevrange k1 0 -1 withscores
1) "g2"
2) "22"
3) "g3"
4) "9"
127.0.0.1:6379> zrank k1 g2
(integer) 1
127.0.0.1:6379> zrank k1 g3
(integer) 0
127.0.0.1:6379> zrevrank k1 g2
(integer) 0
127.0.0.1:6379> zscore k1 g2
"22"
127.0.0.1:6379> zadd k1 20 g1
(integer) 1
127.0.0.1:6379> zrange k1 0 -1 withscores
1) "g3"
2) "9"
3) "g1"
4) "20"
5) "g2"
6) "22"
127.0.0.1:6379> zcount k1 0 20
(integer) 2
127.0.0.1:6379> zadd k2 0 a 0 b 0 d 0 c
(integer) 4
127.0.0.1:6379> zrangebylex k2 (a (g
1) "b"
2) "c"
3) "d"
127.0.0.1:6379> zrangebylex k2 - (g
1) "a"
2) "b"
3) "c"
4) "d"
127.0.0.1:6379> zrangebylex k2 - [c
1) "a"
2) "b"
3) "c"
# ZUNIONSTORE destination numkeys key [key ...] [WEIGHTS weight] [AGGREGATE SUM|MIN|MAX]
127.0.0.1:6379> zadd zset1 1 "one" 2 "two"
(integer) 2
127.0.0.1:6379> zadd zset2 1 "one" 2 "two" 3 "three"
(integer) 3
127.0.0.1:6379> ZUNIONSTORE out 2 zset1 zset2
(integer) 3
127.0.0.1:6379> zrange out 0 -1 withscores
1) "one"
2) "2"
3) "three"
4) "3"
5) "two"
6) "4"
127.0.0.1:6379> ZUNIONSTORE out2 2 zset1 zset2 weights 2 3
(integer) 3
127.0.0.1:6379> zrange out2 0 -1 withscores
1) "one"
2) "5"
3) "three"
4) "9"
5) "two"
6) "10"
127.0.0.1:6379> ZUNIONSTORE out3 2 zset1 zset2 weights 2 3 aggregate max
(integer) 3
127.0.0.1:6379> zrange out3 0 -1 withscores
1) "one"
2) "3"
3) "two"
4) "6"
5) "three"
6) "9"
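The weighted-union arithmetic in the transcript above can be checked with a small dict-based sketch (illustrative only, not the real ZUNIONSTORE implementation):

```python
def zunionstore(sets, weights=None, aggregate="sum"):
    """Mimic ZUNIONSTORE: sets is a list of {member: score} dicts.
    Weights are applied per input set, then scores are combined with the aggregate."""
    weights = weights or [1] * len(sets)
    agg = {"sum": lambda a, b: a + b, "min": min, "max": max}[aggregate]
    out = {}
    for zset, w in zip(sets, weights):
        for member, score in zset.items():
            ws = score * w
            out[member] = agg(out[member], ws) if member in out else ws
    return out

zset1 = {"one": 1, "two": 2}
zset2 = {"one": 1, "two": 2, "three": 3}
print(zunionstore([zset1, zset2]))                                   # one=2, two=4, three=3
print(zunionstore([zset1, zset2], weights=[2, 3]))                   # one=5, two=10, three=9
print(zunionstore([zset1, zset2], weights=[2, 3], aggregate="max"))  # one=3, two=6, three=9
```

Note that with AGGREGATE MAX the weights are applied first, then the max is taken: one = max(1*2, 1*3) = 3, matching the transcript.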
Implementation: skip list
pipeline
[root@centos7 ~]# (printf "PING\r\nPING\r\nPING\r\n"; sleep 1) | nc localhost 6379
+PONG
+PONG
+PONG
Pub/Sub
help @pubsub
Transactions
MULTI, EXEC, DISCARD and WATCH
127.0.0.1:6379> help @transactions
DISCARD
summary: Discard all commands issued after MULTI
since: 2.0.0
EXEC
summary: Execute all commands issued after MULTI
since: 1.2.0
MULTI
summary: Mark the start of a transaction block
since: 1.2.0
UNWATCH
summary: Forget about all watched keys
since: 2.2.0
WATCH key [key …]
summary: Watch the given keys to determine execution of the MULTI/EXEC block
since: 2.2.0
Whichever EXEC arrives first is executed first (Redis is single-threaded)
# two clients below; prompts suffixed _2 belong to the second client
127.0.0.1:6379> set k1 1
OK
127.0.0.1:6379> multi
OK
127.0.0.1:6379_2> multi
OK
127.0.0.1:6379(TX)> set k1 21
QUEUED
127.0.0.1:6379(TX)_2> incr k1
QUEUED
127.0.0.1:6379(TX)_2> exec
1) (integer) 2
127.0.0.1:6379_2> get k1
"2"
127.0.0.1:6379(TX)> exec
1) OK
127.0.0.1:6379> get k1
"21"
127.0.0.1:6379_2> get k1
"21"
# commands inside a transaction are queued and return QUEUED
127.0.0.1:6379> set k1 11
OK
127.0.0.1:6379> multi
OK
127.0.0.1:6379(TX)> incr k1
QUEUED
127.0.0.1:6379(TX)> get k1
QUEUED
127.0.0.1:6379(TX)> exec
1) (integer) 12
2) "12"
# DISCARD abandons the transaction
127.0.0.1:6379> multi
OK
127.0.0.1:6379(TX)> incr k1
QUEUED
127.0.0.1:6379(TX)> DISCARD
OK
127.0.0.1:6379> get k1
"12"
# apart from the failing command, the others are still executed (no rollback)
127.0.0.1:6379> multi
OK
127.0.0.1:6379(TX)> set k1 one
QUEUED
127.0.0.1:6379(TX)> lpop k1
QUEUED
127.0.0.1:6379(TX)> append k1 haha
QUEUED
127.0.0.1:6379(TX)> exec
1) OK
2) (error) WRONGTYPE Operation against a key holding the wrong kind of value
3) (integer) 7
127.0.0.1:6379> get k1
"onehaha"
# WATCH = optimistic locking (CAS): if a watched key is modified before EXEC, the transaction is abandoned
# EXEC automatically UNWATCHes all keys when it finishes
# you can also UNWATCH manually before starting the transaction
127.0.0.1:6379> watch k1
OK
127.0.0.1:6379> get k1
"22"
127.0.0.1:6379> multi
OK
127.0.0.1:6379(TX)> incr k1
QUEUED
# meanwhile a second client runs: set k1 23
127.0.0.1:6379(TX)> exec
(nil)
127.0.0.1:6379> get k1
"23"
RedisBloom
Maps each key through several different hash functions and records the resulting positions in a bitmap. To test whether a key exists, apply all the hash functions to it; only if every resulting bit is set in the bitmap does it report "exists" (possibly a false positive, never a false negative).
Compared with a single hash mapping, this saves a great deal of space while keeping the error rate low.
Usage options:
- client holds both the algorithm and the data: high memory demands on the client
- client holds only the algorithm; the data lives in redis
- redis loads the bloom module (RedisBloom)
Bloom vs. Cuckoo filters
Bloom filters typically exhibit better performance and scalability when inserting items (so if you’re often adding items to your dataset, then a Bloom filter may be ideal).
Cuckoo filters are quicker on check operations and also allow deletions.
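The k-hash bitmap idea above can be sketched in a few lines (an illustrative model, not the RedisBloom implementation; the k hash functions are derived here by salting SHA-256, and `m_bits`/`k_hashes` are made-up parameters):

```python
import hashlib

class BloomFilter:
    def __init__(self, m_bits: int = 1024, k_hashes: int = 3):
        self.m, self.k = m_bits, k_hashes
        self.bits = 0                     # a Python int serves as the bitmap

    def _positions(self, key: str):
        # k salted hashes -> k bit positions in [0, m)
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, key: str) -> None:
        for p in self._positions(key):
            self.bits |= 1 << p

    def might_contain(self, key: str) -> bool:
        # False means "definitely absent"; True means "possibly present"
        return all((self.bits >> p) & 1 for p in self._positions(key))

bf = BloomFilter()
bf.add("user:1")
print(bf.might_contain("user:1"))   # True
print(bf.might_contain("user:2"))   # almost certainly False (false positives are possible)
```

For the cache-penetration use case later in these notes, a False answer lets you reject the request without touching the database.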
How redis-as-cache differs from redis-as-database
-
Cached data is expendable
-
It is not the full dataset
-
Cached data should change with access patterns
-
It is the hot data
Memory is limited, so how do we keep only the hot data?
maxmemory 100mb
maxmemory-policy allkeys-lru
Key lifetime (TTL)
-
Reads do not affect the TTL
-
Writes (SET) reset it
-
The TTL is not prolonged automatically; EXPIRE replaces it with a fresh one
-
Absolute deadline: expireat k1 timestamp
-
Expiry determination
- passive: checked when the key is accessed
- active: periodic sampling by a background cycle
- sacrifices a little memory (expired keys may linger briefly) but protects performance
Least Recently Used (LRU)
This algorithm keeps the most recently used entries near the top of the cache: whenever an entry is accessed, it moves to the top. When the cache hits its limit, eviction starts from the bottom, i.e. the entries accessed longest ago.
Least Frequently Used (LFU)
This algorithm keeps a counter of how often each entry is accessed and evicts the entry with the lowest count first. It is used less often because an entry with a burst of early accesses can linger in the cache long after it stops being used.
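The LRU policy described above can be sketched with an OrderedDict (a textbook model; Redis's maxmemory-policy actually uses an approximate, sampling-based LRU/LFU rather than an exact one):

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data: OrderedDict = OrderedDict()   # ordered oldest -> newest

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)               # accessed -> most recent
        return self.data[key]

    def put(self, key, value) -> None:
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)        # evict least recently used

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")          # "a" becomes most recent
cache.put("c", 3)       # capacity exceeded -> evicts "b"
print(cache.get("b"))   # None
print(cache.get("a"))   # 1
```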
Persistence
Both mechanisms use copy-on-write
-
The main process forks a child process
-
The child performs the persistence while the main process keeps serving requests
RDB
Snapshotting
Periodically generates a point-in-time snapshot of the dataset
- Pros: fast recovery
- Cons: if redis exits abnormally, the data written between snapshots is lost
- save 60 1000
- dbfilename dump.rdb
- dir /var/lib/redis/6379
AOF
append only file
Logs every write operation; a rewrite is triggered when the file grows too large
- Pros: little data loss; appendfsync everysec is recommended, giving good performance with at most about 1s of writes lost
- Cons: with AOF alone the file grows very large (hence rewriting and the RDB preamble)
- appendonly yes
- appendfilename "appendonly.aof"
- appendfsync everysec
- auto-aof-rewrite-percentage 100
- auto-aof-rewrite-min-size 64mb
echo $$ | more
1218
echo $BASHPID | more
1422
$$ is expanded before the pipeline is set up, so it shows the current shell's PID
$BASHPID is evaluated inside the pipeline's subshell, so it shows the subshell's PID
redis fork()s a child process to persist the data
copy-on-write
After fork, the child does not copy the parent's data; it copies only the parent's page table, i.e. pointers into the memory holding the actual data. When the parent later modifies data, the kernel allocates a new page for the new value and repoints the parent's page-table entry at it, so the child keeps seeing the original snapshot.
fork()
Parent process modifies data
Experiment
Configuration
daemonize no # run in the foreground
#logfile /var/log/redis_6379.log # log path; commented out, logs go to stdout
appendonly yes # enable AOF
aof-use-rdb-preamble no # mixed RDB/AOF rewrite on or off
bgsave # trigger RDB persistence
redis-check-rdb dump.rdb # verify the RDB file
bgrewriteaof # trigger an AOF rewrite
Clusters
Problems with a single machine, single node, single instance:
- single point of failure
- limited capacity
- limited IO (connection count, data reads/writes) and CPU
AKF
X: full copies (mirrors/replicas)
Y: split by business or function
Z: data sharding
CAP
Data synchronization options
Master-replica replication configuration
replica-serve-stale-data yes
repl-diskless-sync yes
Replication backlog size: if a replica reconnects after going down and its offset is still in the backlog, only the increment is synced, with no need for a full resync
repl-backlog-size 1mb
Pushes writes toward synchronous blocking behavior; use with caution
#min-replicas-to-write 3
#min-replicas-max-lag 10
Master-replica sync
replicaof 127.0.0.1 6379
Sentinel configuration
sentinel_1.conf
port 26379
sentinel monitor mymaster 127.0.0.1 6379 2
sentinel down-after-milliseconds mymaster 60000
sentinel failover-timeout mymaster 180000
sentinel parallel-syncs mymaster 1
sentinel monitor resque 127.0.0.1 6380 2
sentinel down-after-milliseconds resque 60000
sentinel failover-timeout resque 180000
sentinel parallel-syncs resque 1
Start sentinel
redis-sentinel /zl/redis/sentinel/sentinel_1.conf
Sharding
-
Split by business logic
-
Algorithm: hash + modulo
The modulus is fixed, so machines cannot be added without rehashing everything
-
random
suited to message-queue style data, where any instance may hold any item
-
Consistent hashing
Hashing algorithm: Ketama
Lay out a hash ring
-
A newly added machine is hashed to a point on the ring; it owns the keys in the arc from the previous machine's point up to its own
-
A request hashes its key to a position on the ring; walking clockwise to the next node gives the machine that stores it
-
Pros
No global reshuffle when nodes change
-
Cons
After a new machine is added, the data for its newly owned range is missing, so those requests fall through to mysql
A possible fix: also query the one or two nearest neighboring nodes (adds complexity; weigh the trade-off yourself)
Actively migrating the data brings more problems:
every key's hash must be recomputed, which is expensive
the data may change while the migration is running
-
For data whose range moved from an old node to the new one, simply let the old copies expire
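The ring described above can be sketched in Python (an illustrative, md5-based model with virtual nodes in the spirit of Ketama, not the actual Ketama implementation; node names are made up):

```python
import bisect
import hashlib

class HashRing:
    """Consistent hash ring with virtual nodes (illustrative sketch)."""
    def __init__(self, nodes, vnodes: int = 100):
        self.vnodes = vnodes       # virtual nodes per physical node smooth the distribution
        self.ring = {}             # ring position -> physical node
        self.points = []           # sorted ring positions
        for node in nodes:
            self.add(node)

    def _hash(self, s: str) -> int:
        return int(hashlib.md5(s.encode()).hexdigest(), 16)

    def add(self, node: str) -> None:
        for i in range(self.vnodes):
            p = self._hash(f"{node}#vn{i}")
            self.ring[p] = node
            bisect.insort(self.points, p)

    def node_for(self, key: str) -> str:
        p = self._hash(key)
        i = bisect.bisect(self.points, p) % len(self.points)  # walk clockwise
        return self.ring[self.points[i]]

ring = HashRing(["redis1", "redis2", "redis3"])
owner_before = {f"k{i}": ring.node_for(f"k{i}") for i in range(1000)}
ring.add("redis4")     # add a node: only keys in the reassigned arcs move
moved = sum(ring.node_for(k) != owner for k, owner in owner_before.items())
print(moved)           # roughly a quarter of the keys, not a global reshuffle
```

This demonstrates the "no global reshuffle" property: adding a fourth node relocates only the keys that now hash into its arcs.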
-
Proxy
Without one, every client connects to every redis in the cluster: far too many connections, far too costly!!
Frameworks
Pre-sharding: pick a modulus larger than the current number of redis instances. E.g. with 2 instances, mod 10: slot values 0-4 go to redis1 and 5-9 to redis2.
After adding redis3, just move slots 3 and 4 from redis1 and slots 8 and 9 from redis2 over to redis3.
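The pre-sharding scheme can be sketched as follows (an illustrative model; node names are made up, and crc32 stands in for whatever stable hash the proxy uses):

```python
import zlib

SLOTS = 10   # pick more slots than you have nodes

def slot(key: str) -> int:
    # stable hash -> slot; unlike plain "hash % node_count", the
    # slot of a key never changes when nodes are added
    return zlib.crc32(key.encode()) % SLOTS

# before: 2 nodes own 5 slots each
slot_map = {s: ("redis1" if s < 5 else "redis2") for s in range(SLOTS)}

# after adding redis3: hand over whole slots, no per-key rehash
for s in (3, 4, 8, 9):
    slot_map[s] = "redis3"

def node_for(key: str) -> str:
    return slot_map[slot(key)]

print(node_for("user:42"))
```

Only the keys in the reassigned slots move; keys in slots 0-2 and 5-7 stay where they were.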
-
twemproxy
# Configure redis instances: vi nutcracker.yml
beta:
  listen: 127.0.0.1:22122
  hash: fnv1a_64
  hash_tag: "{}"
  distribution: ketama
  auto_eject_hosts: false
  timeout: 400
  redis: true
  servers:
   - 127.0.0.1:6379:1
   - 127.0.0.1:6280:1
-
predixy
# Configure sentinel: vi sentinel.conf
SentinelServerPool {
    Databases 16
    Hash crc16
    HashTag "{}"
    Distribution modula
    MasterReadPriority 60
    StaticSlaveReadPriority 50
    DynamicSlaveReadPriority 50
    RefreshInterval 1
    ServerTimeout 1
    ServerFailureLimit 10
    ServerRetryTimeout 1
    KeepAlive 120
    Sentinels {
        + 10.2.2.2:7500
        + 10.2.2.3:7500
        + 10.2.2.4:7500
    }
    Group shard001 {
    }
    Group shard002 {
    }
}
-
cluster
Natively supported by redis; masterless from the client's perspective, and clients need not implement any sharding logic
Every redis node knows the data mapping of the others. A client may ask any node: if that node holds the key it answers directly; otherwise it redirects the client to the node that does, and the client re-issues the request there.
Frequent interview topics
Cache breakdown, penetration, avalanche
-
Breakdown
A key with very high concurrent traffic suddenly expires, so the flood of requests lands on the database
Solution:
Use setnx() as a lock; every client follows this procedure:
- get key: if the cached value exists, return it directly
- otherwise try setnx
- if it returns ok, you hold the lock: query the db, then update the cache
- if it returns false, wait a moment and try again from the top
Problems:
- the lock holder dies → give the lock an expiry time
- the lock holder is alive but the lock expires → start another thread that periodically checks whether the data has been fetched and, if not, extends the lock's expiry
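The mutex procedure above can be sketched as an in-process simulation (dicts stand in for Redis, and `setnx`, `db_load`, `get_with_mutex` are hypothetical names; real code would use `SET key value NX EX ttl`):

```python
import time

cache = {}   # stands in for Redis cached data
locks = {}   # stands in for the SETNX lock keys, value = expiry deadline

def db_load(key: str) -> str:
    # stands in for the slow database query
    return f"value-of-{key}"

def setnx(lock_key: str, ttl: float = 5.0) -> bool:
    """SET-if-Not-eXists with an expiry, so a dead lock holder cannot block forever."""
    now = time.monotonic()
    if lock_key in locks and locks[lock_key] > now:
        return False                      # someone else holds a live lock
    locks[lock_key] = now + ttl
    return True

def get_with_mutex(key: str, retries: int = 50) -> str:
    for _ in range(retries):
        if key in cache:                  # 1. cache hit -> done
            return cache[key]
        if setnx(f"lock:{key}"):          # 2. try to take the rebuild lock
            cache[key] = db_load(key)     # 3. winner queries the DB and refills
            locks.pop(f"lock:{key}", None)  #    then releases the lock
            return cache[key]
        time.sleep(0.01)                  # 4. loser waits, then retries from the top
    raise TimeoutError(key)

print(get_with_mutex("hot_key"))   # value-of-hot_key
```

Only one caller pays the database cost per expiry; the rest spin briefly and then read the refilled cache, which is exactly what protects the database from a hot-key stampede.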
-
Penetration
Queries for data that exists in neither the cache nor the database, so every request hits the database
Solution:
bloom filter (reject keys that cannot possibly exist)
-
Avalanche
A large number of keys expire at the same moment, so a flood of requests lands on the database
Solution:
randomize (jitter) the expiry times
Distributed locks
- setnx
- expiry time
- a watchdog thread to extend the expiry