1 Redis Introduction
1.1 What is Redis
Redis is an open-source (BSD-licensed), in-memory data structure store used as a database, cache, and message broker. Redis provides data structures such as strings, hashes, lists, sets, sorted sets with range queries, bitmaps, HyperLogLogs, geospatial indexes, and streams. Redis has built-in replication, Lua scripting, LRU eviction, transactions, and different levels of on-disk persistence, and it offers high availability via Redis Sentinel and automatic partitioning with Redis Cluster.
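As a quick taste of these data structures, the snippet below is a minimal sketch assuming a standalone redis-server running locally on the default port; the key names are made up for illustration:
redis-cli set user:1:name "alice"  # string
redis-cli hset user:1 age 30 city beijing  # hash with two fields
redis-cli rpush user:1:logins "2021-08-01" "2021-08-02"  # list
redis-cli lrange user:1:logins 0 -1  # read the list back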
1.2 Redis Cluster Data Sharding
Redis Cluster does not use consistent hashing, but a different form of sharding in which every key is conceptually part of what we call a hash slot.
There are 16384 hash slots in a Redis Cluster. To compute the hash slot of a given key, we simply take the CRC16 of the key modulo 16384.
Every node in a Redis Cluster is responsible for a subset of the hash slots. For example, you might have a cluster with 3 nodes, where:
- Node A contains hash slots 0 to 5500.
- Node B contains hash slots 5501 to 11000.
- Node C contains hash slots 11001 to 16383.
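You can ask any node which slot a key maps to with the CLUSTER KEYSLOT command; for example, against the cluster built later in this guide:
redis-cli -c -h 192.168.56.101 -p 7001 cluster keyslot k1
(integer) 12706
This matches the redirection to slot 12706 seen in the test session in section 5.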
2 Environment Preparation
2.1 Host Planning
| Server | Role | Port |
|---|---|---|
| 192.168.56.101 | master | 7001 |
| 192.168.56.102 | master | 7001 |
| 192.168.56.103 | master | 7001 |
| 192.168.56.101 | replica --> 192.168.56.102:7001 | 7002 |
| 192.168.56.102 | replica --> 192.168.56.103:7001 | 7002 |
| 192.168.56.103 | replica --> 192.168.56.101:7001 | 7002 |
A Redis Cluster has 16384 hash slots; each key is hashed with CRC16 and taken modulo 16384 to decide which slot it belongs to, and every node is responsible for a portion of those slots. Consequently, if a master goes down and has no replica to take over, part of the slot space becomes unreachable and the cluster as a whole becomes unavailable. With three servers laid out as in the table above, each master's replica lives on a different host, so the cluster keeps running even if any single server fails.
Note that a master and its replica are never placed on the same host.
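You can verify this once the cluster (built in the following sections) is up by stopping one master and checking the cluster state from another node. A sketch; shutdown nosave discards any unsaved data, which is acceptable for a test:
redis-cli -h 192.168.56.101 -p 7001 shutdown nosave  # stop one master
sleep 10  # failover takes a few seconds, bounded by cluster-node-timeout
redis-cli -h 192.168.56.102 -p 7001 cluster info | grep cluster_state  # expect cluster_state:ok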
2.2 Redis Version
The version installed here is Redis 6.2.5 (download the tarball directly from the official site).
2.3 Install Redis
Perform this step on all 3 hosts.
tar -xvf redis-6.2.5.tar.gz -C /opt/modules
cd /opt/modules/redis-6.2.5 && make && make install
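To confirm the build and installation succeeded, check the version of the installed binary (make install places it under /usr/local/bin by default):
redis-server --version  # should report v=6.2.5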
2.4 Create the Cluster Directories
Perform this on all 3 hosts. Create the cluster directories wherever suits you; here they are created directly under the Redis installation directory.
cd /opt/modules/redis-6.2.5
mkdir -p cluster/7001 cluster/7002
2.5 Modify the Configuration Files
Perform this on all 3 hosts.
Copy the redis-server binary and the redis.conf file into the cluster/7001 and cluster/7002 directories, then edit each copy of redis.conf as follows. Alternatively, apply the shared edits to one copy first and then distribute it to the port directories before making the per-port changes; a consolidated script is sketched after the individual commands.
# Shared edits, applied to every port's redis.conf
sed -i "s/daemonize no/daemonize yes/" redis.conf
HOST=`ip a | grep "global ens37" | awk -F '/' '{print $1}' | awk '{print $2}'` # adjust the interface name (ens37) to match your machine
sed -i "s/bind 127.0.0.1/bind $HOST/" redis.conf
sed -i "s/# cluster-enabled yes/cluster-enabled yes/" redis.conf
sed -i "s/appendonly no/appendonly yes/" redis.conf
sed -i "s/# cluster-node-timeout 15000/cluster-node-timeout 5000/" redis.conf
# Per-port edits, run only against the copy in cluster/7001
sed -i "s/port 6379/port 7001/" redis.conf
sed -i "s/redis_6379.pid/redis_7001.pid/" redis.conf
# Per-port edits, run only against the copy in cluster/7002
sed -i "s/port 6379/port 7002/" redis.conf
sed -i "s/redis_6379.pid/redis_7002.pid/" redis.conf
The resulting configuration files:
# 7001
[hadoop@hadoop101 7001]$ cat redis.conf | grep -v "#" | grep -v "^$"
bind 192.168.56.101 -::1
protected-mode yes
port 7001
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize yes
pidfile /var/run/redis_7001.pid
loglevel notice
logfile ""
databases 16
always-show-logo no
set-proc-title yes
proc-title-template "{title} {listen-addr} {server-mode}"
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
rdb-del-sync-files no
dir ./
replica-serve-stale-data yes
replica-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-diskless-load disabled
repl-disable-tcp-nodelay no
replica-priority 100
acllog-max-len 128
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
replica-lazy-flush no
lazyfree-lazy-user-del no
lazyfree-lazy-user-flush no
oom-score-adj no
oom-score-adj-values 0 200 800
disable-thp yes
appendonly yes
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
aof-use-rdb-preamble yes
lua-time-limit 5000
cluster-enabled yes
cluster-node-timeout 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
stream-node-max-bytes 4096
stream-node-max-entries 100
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
dynamic-hz yes
aof-rewrite-incremental-fsync yes
rdb-save-incremental-fsync yes
jemalloc-bg-thread yes
# 7002
[hadoop@hadoop101 7002]$ cat redis.conf | grep -v "#" | grep -v "^$"
bind 192.168.56.101 -::1
protected-mode yes
port 7002
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize yes
pidfile /var/run/redis_7002.pid
loglevel notice
logfile ""
databases 16
always-show-logo no
set-proc-title yes
proc-title-template "{title} {listen-addr} {server-mode}"
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
rdb-del-sync-files no
dir ./
replica-serve-stale-data yes
replica-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-diskless-load disabled
repl-disable-tcp-nodelay no
replica-priority 100
acllog-max-len 128
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
replica-lazy-flush no
lazyfree-lazy-user-del no
lazyfree-lazy-user-flush no
oom-score-adj no
oom-score-adj-values 0 200 800
disable-thp yes
appendonly yes
appendfilename "appendonly.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
aof-use-rdb-preamble yes
lua-time-limit 5000
cluster-enabled yes
cluster-node-timeout 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
stream-node-max-bytes 4096
stream-node-max-entries 100
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
dynamic-hz yes
aof-rewrite-incremental-fsync yes
rdb-save-incremental-fsync yes
jemalloc-bg-thread yes
3 Start Redis
On every host, start a Redis instance in each port directory:
./redis-server redis.conf # run this inside both cluster/7001 and cluster/7002
After a successful start, the following files are generated in each port directory:
appendonly.aof dump.rdb nodes.conf
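The two instances on a host can also be started and checked in one go (a sketch assuming the directory layout above):
for PORT in 7001 7002; do
  (cd /opt/modules/redis-6.2.5/cluster/$PORT && ./redis-server redis.conf)
done
ps -ef | grep redis-server  # expect one process per port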
4 Create the Cluster
# This step only needs to be run on one of the hosts
redis-cli --cluster create 192.168.56.101:7001 192.168.56.102:7001 192.168.56.103:7001 192.168.56.102:7002 192.168.56.103:7002 192.168.56.101:7002 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
>>> Master[0] -> Slots 0 - 5460
>>> Master[1] -> Slots 5461 - 10922
>>> Master[2] -> Slots 10923 - 16383
>>> Adding replica 192.168.56.102:7002 to 192.168.56.101:7001
>>> Adding replica 192.168.56.103:7002 to 192.168.56.102:7001
>>> Adding replica 192.168.56.101:7002 to 192.168.56.103:7001
>>> M: b854243c449a36cd620a8de843859c83b5979074 192.168.56.101:7001
>>> slots:[0-5460] (5461 slots) master
>>> M: 97411c86f329f2e4a02ce48f56c9e2bcd4566f33 192.168.56.102:7001
>>> slots:[5461-10922] (5462 slots) master
>>> M: d7c00f800985fc504c3eb19456198e9d6b638763 192.168.56.103:7001
>>> slots:[10923-16383] (5461 slots) master
>>> S: d75e649807da5c1f277e32a3772f118fdb7ca834 192.168.56.102:7002
>>> replicates b854243c449a36cd620a8de843859c83b5979074
>>> S: ee28b67f110b2129af347fe1408954316c626af0 192.168.56.103:7002
>>> replicates 97411c86f329f2e4a02ce48f56c9e2bcd4566f33
>>> S: 98520ca0d4ee3b63d6cac1d260ffce9420764d91 192.168.56.101:7002
>>> replicates d7c00f800985fc504c3eb19456198e9d6b638763
>>> Can I set the above configuration? (type 'yes' to accept): yes # 此处输入yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
>>> Waiting for the cluster to join
>>> ..
>>> Performing Cluster Check (using node 192.168.56.101:7001)
>>> M: b854243c449a36cd620a8de843859c83b5979074 192.168.56.101:7001
>>> slots:[0-5460] (5461 slots) master
>>> 1 additional replica(s)
>>> S: d75e649807da5c1f277e32a3772f118fdb7ca834 192.168.56.102:7002
>>> slots: (0 slots) slave
>>> replicates b854243c449a36cd620a8de843859c83b5979074
>>> S: 98520ca0d4ee3b63d6cac1d260ffce9420764d91 192.168.56.101:7002
>>> slots: (0 slots) slave
>>> replicates d7c00f800985fc504c3eb19456198e9d6b638763
>>> M: d7c00f800985fc504c3eb19456198e9d6b638763 192.168.56.103:7001
>>> slots:[10923-16383] (5461 slots) master
>>> 1 additional replica(s)
>>> S: ee28b67f110b2129af347fe1408954316c626af0 192.168.56.103:7002
>>> slots: (0 slots) slave
>>> replicates 97411c86f329f2e4a02ce48f56c9e2bcd4566f33
>>> M: 97411c86f329f2e4a02ce48f56c9e2bcd4566f33 192.168.56.102:7001
>>> slots:[5461-10922] (5462 slots) master
>>> 1 additional replica(s)
>>> [OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
>>> [OK] All 16384 slots covered.
The cluster was created successfully!
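The same health check can be re-run at any time against any node of the cluster:
redis-cli --cluster check 192.168.56.101:7001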
5 Testing the Cluster
[hadoop@hadoop101 ~]$ redis-cli -c -p 7001 -h 192.168.56.101 # connect to the cluster (-c enables automatic redirection)
192.168.56.101:7001> set k1 v1
-> Redirected to slot [12706] located at 192.168.56.103:7001
OK
192.168.56.103:7001> set k2 v2
-> Redirected to slot [449] located at 192.168.56.101:7001
OK
192.168.56.101:7001> set k3 v3
OK
192.168.56.101:7001> set k4 v4
-> Redirected to slot [8455] located at 192.168.56.102:7001
OK
192.168.56.102:7001> set k5 v5
-> Redirected to slot [12582] located at 192.168.56.103:7001
OK
192.168.56.103:7001> set k6 v6
-> Redirected to slot [325] located at 192.168.56.101:7001
OK
192.168.56.101:7001> keys *
1) "k6"
2) "k2"
3) "k3"