Docker 04: Installing and Configuring a Redis Cluster

This article walks through installing and configuring a Redis cluster, covering the hash slot algorithm, master-replica replication, and fault tolerance. The hash slot algorithm spreads data evenly across the distributed store, while master-replica replication improves data safety. The failover section shows how the cluster automatically adjusts and keeps serving requests when a server goes down, and the final part covers scaling the cluster out and in so it can grow and shrink with demand.

04 Installing and Configuring a Redis Cluster

What a Redis cluster is for: caching data sets too large for a single node, master-replica replication for better data safety, and so on.

(1) Redis Data Distribution Algorithms

1. Hash modulo algorithm

Suppose there are N Redis servers, numbered 0 to N-1. Each key is cached on the server numbered Hash(key) % N.
Pros: simple to compute and easy to implement.
Cons: the set of nodes is fixed in advance, so the scheme scales poorly. If one server fails, the keys mapped to it can no longer be cached, and once the modulus changes, almost all existing data has to be redistributed.
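A quick shell sketch of that remapping problem (cksum merely stands in for Hash(), and the key name is arbitrary):

key=user:1001                                # hypothetical key
h=$(echo -n "$key" | cksum | cut -d' ' -f1)  # cksum stands in for Hash(key)
echo $(( h % 3 ))                            # server index with 3 servers
echo $(( h % 4 ))                            # index after adding a 4th server - often different, so the cached copy is missed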

2. Consistent hashing

Construct a ring of length 2^32 and take Hash(key) mod 2^32, so every result lands somewhere on the ring. The Redis nodes are also placed on the ring according to some rule. After computing Hash(key), walk clockwise from that position and cache the data on the first node encountered.
Pros: better fault tolerance and scalability than the hash modulo approach.
Cons: if the nodes are distributed unevenly on the ring, the data can skew so that most keys end up on just a few servers.

3. Hash slot algorithm

The key is hashed with CRC16 and the result is taken mod 2^14, so it falls into one of the 16384 slots (0 to 16383); those slots are then divided among the servers.
Pros: because the slots are assigned evenly, the data skew that consistent hashing can suffer from is avoided; when a server goes down, the cluster reassigns its slots to the remaining servers, which gives high fault tolerance; and when nodes are added or removed, the cluster migrates slots according to the policy the user defines.
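Once the cluster built in the next section is running, you can ask Redis for a key's slot directly; CLUSTER KEYSLOT performs the same CRC16(key) mod 16384 calculation (k1 and k2 are the sample keys used later in this article):

redis-cli -p 6381 cluster keyslot k1   # returns 12706, the slot seen in the redirection output below
redis-cli -p 6381 cluster keyslot k2   # returns 449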

(2) Building a Three-Master, Three-Replica Redis Cluster

Step 1

Start six Redis container instances, on ports 6381 through 6386:

docker run -d --name redis-01 --net host -v /docker/redis-01/data:/data --privileged=true redis:7.0.8 --cluster-enabled yes --appendonly yes --port 6381

docker run -d --name redis-02 --net host -v /docker/redis-02/data:/data --privileged=true redis:7.0.8 --cluster-enabled yes --appendonly yes --port 6382

docker run -d --name redis-03 --net host -v /docker/redis-03/data:/data --privileged=true redis:7.0.8 --cluster-enabled yes --appendonly yes --port 6383

docker run -d --name redis-04 --net host -v /docker/redis-04/data:/data --privileged=true redis:7.0.8 --cluster-enabled yes --appendonly yes --port 6384

docker run -d --name redis-05 --net host -v /docker/redis-05/data:/data --privileged=true redis:7.0.8 --cluster-enabled yes --appendonly yes --port 6385

docker run -d --name redis-06 --net host -v /docker/redis-06/data:/data --privileged=true redis:7.0.8 --cluster-enabled yes --appendonly yes --port 6386
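
The six commands differ only in the container name and port, so they can also be issued as a loop (a minimal equivalent sketch, assuming the same image and host paths as above):

for i in $(seq 1 6); do
  docker run -d --name redis-0$i --net host -v /docker/redis-0$i/data:/data --privileged=true \
    redis:7.0.8 --cluster-enabled yes --appendonly yes --port $((6380 + i))
done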
Step 2

List all of the containers:

[root@iZ2ze0v2pan6jiw4jb3g2cZ ~]# docker ps -a
CONTAINER ID   IMAGE          COMMAND                  CREATED         STATUS       	PORTS		NAMES
449c9a22e52f   redis:7.0.8    "docker-entrypoint.s…"   4 seconds ago   Up 3 seconds                 redis-06
6140e03a99c9   redis:7.0.8    "docker-entrypoint.s…"   6 seconds ago   Up 5 seconds                 redis-05
3b040f3572b6   redis:7.0.8    "docker-entrypoint.s…"   6 seconds ago   Up 5 seconds                 redis-04
193c89c7e070   redis:7.0.8    "docker-entrypoint.s…"   6 seconds ago   Up 5 seconds                 redis-03
f91e421bbec2   redis:7.0.8    "docker-entrypoint.s…"   6 seconds ago   Up 6 seconds                 redis-02
6c46e438b6d4   redis:7.0.8    "docker-entrypoint.s…"   7 seconds ago   Up 6 seconds                 redis-01
Step 3

Enter the redis-01 container and create the cluster:

redis-cli --cluster create 127.0.0.1:6381 127.0.0.1:6382 127.0.0.1:6383 127.0.0.1:6384 127.0.0.1:6385 127.0.0.1:6386 --cluster-replicas 1

[root@iZ2ze0v2pan6jiw4jb3g2cZ ~]# docker exec -it redis-01 /bin/bash
root@iZ2ze0v2pan6jiw4jb3g2cZ:/data# redis-cli --cluster create 127.0.0.1:6381 127.0.0.1:6382 127.0.0.1:6383 127.0.0.1:6384 127.0.0.1:6385 127.0.0.1:6386 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 127.0.0.1:6385 to 127.0.0.1:6381
Adding replica 127.0.0.1:6386 to 127.0.0.1:6382
Adding replica 127.0.0.1:6384 to 127.0.0.1:6383
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: 88fee408e0047fbbe4eafe1602d54534c3dc78a3 127.0.0.1:6381
   slots:[0-5460] (5461 slots) master
M: eebab70db048fe5377fe17eaf9c60bd086348cec 127.0.0.1:6382
   slots:[5461-10922] (5462 slots) master
M: 00a4480f9d7ba0a964633d67060ed025790e4587 127.0.0.1:6383
   slots:[10923-16383] (5461 slots) master
S: ba150b06e1b9696b6c62e1339afc1418855460d0 127.0.0.1:6384
   replicates 88fee408e0047fbbe4eafe1602d54534c3dc78a3
S: 9ddf68adf8a799b5ec1f1512182a497cacde1f11 127.0.0.1:6385
   replicates eebab70db048fe5377fe17eaf9c60bd086348cec
S: bad00ad1ac74a54303a53283fca11af9df749b3d 127.0.0.1:6386
   replicates 00a4480f9d7ba0a964633d67060ed025790e4587
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
.
>>> Performing Cluster Check (using node 127.0.0.1:6381)
M: 88fee408e0047fbbe4eafe1602d54534c3dc78a3 127.0.0.1:6381
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: ba150b06e1b9696b6c62e1339afc1418855460d0 127.0.0.1:6384
   slots: (0 slots) slave
   replicates 88fee408e0047fbbe4eafe1602d54534c3dc78a3
M: eebab70db048fe5377fe17eaf9c60bd086348cec 127.0.0.1:6382
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: bad00ad1ac74a54303a53283fca11af9df749b3d 127.0.0.1:6386
   slots: (0 slots) slave
   replicates 00a4480f9d7ba0a964633d67060ed025790e4587
M: 00a4480f9d7ba0a964633d67060ed025790e4587 127.0.0.1:6383
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 9ddf68adf8a799b5ec1f1512182a497cacde1f11 127.0.0.1:6385
   slots: (0 slots) slave
   replicates eebab70db048fe5377fe17eaf9c60bd086348cec
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

The cluster assigns the hash slots itself and pairs masters with replicas on its own.

# give each master exactly one replica
--cluster-replicas 1

If you use the server's real IP address instead of 127.0.0.1, you must open ports 6381 through 6386 as well as the cluster bus ports 16381 through 16386.
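
On a host protected by firewalld, for example, that could look like the commands below (adjust to whatever firewall or cloud security group you actually use):

firewall-cmd --permanent --add-port=6381-6386/tcp     # client ports
firewall-cmd --permanent --add-port=16381-16386/tcp   # cluster bus ports
firewall-cmd --reload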

Step 4

Connect to node 6381 and check the cluster status:

root@iZ2ze0v2pan6jiw4jb3g2cZ:/data# redis-cli -p 6381
127.0.0.1:6381> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:164
cluster_stats_messages_pong_sent:177
cluster_stats_messages_sent:341
cluster_stats_messages_ping_received:172
cluster_stats_messages_pong_received:164
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:341
total_cluster_links_buffer_limit_exceeded:0

View the master-replica relationships:

127.0.0.1:6381> cluster nodes
ba150b06e1b9696b6c62e1339afc1418855460d0 127.0.0.1:6384@16384 slave 88fee408e0047fbbe4eafe1602d54534c3dc78a3 0 1676300121108 1 connected
eebab70db048fe5377fe17eaf9c60bd086348cec 127.0.0.1:6382@16382 master - 0 1676300125122 2 connected 5461-10922
bad00ad1ac74a54303a53283fca11af9df749b3d 127.0.0.1:6386@16386 slave 00a4480f9d7ba0a964633d67060ed025790e4587 0 1676300123115 3 connected
00a4480f9d7ba0a964633d67060ed025790e4587 127.0.0.1:6383@16383 master - 0 1676300123000 3 connected 10923-16383
88fee408e0047fbbe4eafe1602d54534c3dc78a3 127.0.0.1:6381@16381 myself,master - 0 1676300124000 1 connected 0-5460
9ddf68adf8a799b5ec1f1512182a497cacde1f11 127.0.0.1:6385@16385 slave eebab70db048fe5377fe17eaf9c60bd086348cec 0 1676300124118 2 connected

(3) Master-Replica Fault Tolerance in the Redis Cluster

1. Caching data in the cluster

Log in to node 6381 in cluster mode; each key is routed by the hash slot algorithm and cached on the appropriate Redis node.

root@iZ2ze0v2pan6jiw4jb3g2cZ:/data# redis-cli -p 6381 -c
127.0.0.1:6381> set k1 v1
-> Redirected to slot [12706] located at 127.0.0.1:6383
OK
127.0.0.1:6383> set k2 v2
-> Redirected to slot [449] located at 127.0.0.1:6381
OK
127.0.0.1:6381> set k3 v4
OK
127.0.0.1:6381> set k4 v5
-> Redirected to slot [8455] located at 127.0.0.1:6382
OK
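
Reads are redirected in exactly the same way in cluster mode. Continuing the session above, the expected behaviour (output reconstructed from the slots shown, not captured verbatim) is:

127.0.0.1:6382> get k1
-> Redirected to slot [12706] located at 127.0.0.1:6383
"v1"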
2. Simulating a server failure

Stop the service on node 6381:

docker stop redis-01

Inspecting the cluster from node 6382 (itself still a master), you can see that node 6384, the former replica of 6381, has been promoted to master, while 6381 is marked as failed:

[root@iZ2ze0v2pan6jiw4jb3g2cZ ~]# docker exec -it redis-02 /bin/bash
root@iZ2ze0v2pan6jiw4jb3g2cZ:/data# redis-cli -p 6382 -c
127.0.0.1:6382> cluster nodes
00a4480f9d7ba0a964633d67060ed025790e4587 127.0.0.1:6383@16383 master - 0 1676300839000 3 connected 10923-16383
9ddf68adf8a799b5ec1f1512182a497cacde1f11 127.0.0.1:6385@16385 slave eebab70db048fe5377fe17eaf9c60bd086348cec 0 1676300840031 9 connected
ba150b06e1b9696b6c62e1339afc1418855460d0 127.0.0.1:6384@16384 master - 0 1676300839000 10 connected 0-5460
eebab70db048fe5377fe17eaf9c60bd086348cec 127.0.0.1:6382@16382 myself,master - 0 1676300839000 9 connected 5461-10922
bad00ad1ac74a54303a53283fca11af9df749b3d 127.0.0.1:6386@16386 slave 00a4480f9d7ba0a964633d67060ed025790e4587 0 1676300839027 3 connected
88fee408e0047fbbe4eafe1602d54534c3dc78a3 127.0.0.1:6381@16381 master,fail - 1676300747495 1676300744481 1 disconnected

Restart node 6381:

docker start redis-01

After node 6381 comes back, it joins as a replica of 6384; it does not automatically resume its role as master.

127.0.0.1:6382> cluster nodes
00a4480f9d7ba0a964633d67060ed025790e4587 127.0.0.1:6383@16383 master - 0 1676301027068 3 connected 10923-16383
9ddf68adf8a799b5ec1f1512182a497cacde1f11 127.0.0.1:6385@16385 slave eebab70db048fe5377fe17eaf9c60bd086348cec 0 1676301029074 9 connected
ba150b06e1b9696b6c62e1339afc1418855460d0 127.0.0.1:6384@16384 master - 0 1676301027000 10 connected 0-5460
eebab70db048fe5377fe17eaf9c60bd086348cec 127.0.0.1:6382@16382 myself,master - 0 1676301026000 9 connected 5461-10922
bad00ad1ac74a54303a53283fca11af9df749b3d 127.0.0.1:6386@16386 slave 00a4480f9d7ba0a964633d67060ed025790e4587 0 1676301028071 3 connected
88fee408e0047fbbe4eafe1602d54534c3dc78a3 127.0.0.1:6381@16381 slave ba150b06e1b9696b6c62e1339afc1418855460d0 0 1676301024060 10 connected

Finally, restore the original topology: stop node 6384 so the master role fails back to 6381, then start node 6384 again (it rejoins as 6381's replica).
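
Alternatively, instead of stopping 6384, a manual failover can be triggered from the replica side; CLUSTER FAILOVER, run on the replica you want promoted (6381 here), swaps the roles without taking any node down:

docker exec -it redis-01 redis-cli -p 6381 cluster failover
# afterwards, `cluster nodes` should again show 6381 as master and 6384 as its replica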

(4) Scaling the Cluster Out and In

1. Scaling out

Start two containers for the new nodes (ports 6387 and 6388):

docker run -d --name redis-07 --net host -v /docker/redis-07/data:/data --privileged=true redis:7.0.8 --cluster-enabled yes --appendonly yes --port 6387

docker run -d --name redis-08 --net host -v /docker/redis-08/data:/data --privileged=true redis:7.0.8 --cluster-enabled yes --appendonly yes --port 6388

Add node 6387 to the cluster, using the existing master 6382 as the entry point:

redis-cli --cluster add-node 127.0.0.1:6387 127.0.0.1:6382

Check the cluster status: node 6387 has joined as a master, but no hash slots have been assigned to it yet.

root@iZ2ze0v2pan6jiw4jb3g2cZ:/data# redis-cli --cluster check 127.0.0.1:6382
127.0.0.1:6382 (eebab70d...) -> 1 keys | 5462 slots | 1 slaves.
127.0.0.1:6383 (00a4480f...) -> 1 keys | 5461 slots | 1 slaves.
127.0.0.1:6387 (490b215a...) -> 0 keys | 0 slots | 0 slaves.
127.0.0.1:6381 (88fee408...) -> 2 keys | 5461 slots | 1 slaves.
[OK] 4 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 127.0.0.1:6382)
M: eebab70db048fe5377fe17eaf9c60bd086348cec 127.0.0.1:6382
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
M: 00a4480f9d7ba0a964633d67060ed025790e4587 127.0.0.1:6383
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 9ddf68adf8a799b5ec1f1512182a497cacde1f11 127.0.0.1:6385
   slots: (0 slots) slave
   replicates eebab70db048fe5377fe17eaf9c60bd086348cec
M: 490b215a9a999dd17c1825fc6d4dd3ec2e8e002f 127.0.0.1:6387
   slots: (0 slots) master
S: ba150b06e1b9696b6c62e1339afc1418855460d0 127.0.0.1:6384
   slots: (0 slots) slave
   replicates 88fee408e0047fbbe4eafe1602d54534c3dc78a3
S: bad00ad1ac74a54303a53283fca11af9df749b3d 127.0.0.1:6386
   slots: (0 slots) slave
   replicates 00a4480f9d7ba0a964633d67060ed025790e4587
M: 88fee408e0047fbbe4eafe1602d54534c3dc78a3 127.0.0.1:6381
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

Reshard the hash slots:

root@iZ2ze0v2pan6jiw4jb3g2cZ:/data# redis-cli --cluster reshard 127.0.0.1:6382
>>> Performing Cluster Check (using node 127.0.0.1:6382)
M: eebab70db048fe5377fe17eaf9c60bd086348cec 127.0.0.1:6382
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
M: 00a4480f9d7ba0a964633d67060ed025790e4587 127.0.0.1:6383
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 9ddf68adf8a799b5ec1f1512182a497cacde1f11 127.0.0.1:6385
   slots: (0 slots) slave
   replicates eebab70db048fe5377fe17eaf9c60bd086348cec
M: 490b215a9a999dd17c1825fc6d4dd3ec2e8e002f 127.0.0.1:6387
   slots: (0 slots) master
S: ba150b06e1b9696b6c62e1339afc1418855460d0 127.0.0.1:6384
   slots: (0 slots) slave
   replicates 88fee408e0047fbbe4eafe1602d54534c3dc78a3
S: bad00ad1ac74a54303a53283fca11af9df749b3d 127.0.0.1:6386
   slots: (0 slots) slave
   replicates 00a4480f9d7ba0a964633d67060ed025790e4587
M: 88fee408e0047fbbe4eafe1602d54534c3dc78a3 127.0.0.1:6381
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 4096
What is the receiving node ID? 490b215a9a999dd17c1825fc6d4dd3ec2e8e002f
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1: all
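
The same resharding can be run non-interactively by passing the answers as options; the following is a sketch equivalent to the interactive session above:

redis-cli --cluster reshard 127.0.0.1:6382 \
  --cluster-from all \
  --cluster-to 490b215a9a999dd17c1825fc6d4dd3ec2e8e002f \
  --cluster-slots 4096 \
  --cluster-yes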

Check the cluster again: node 6387 now holds its share of the hash slots.

root@iZ2ze0v2pan6jiw4jb3g2cZ:/data# redis-cli --cluster check 127.0.0.1:6382
127.0.0.1:6382 (eebab70d...) -> 1 keys | 4096 slots | 1 slaves.
127.0.0.1:6383 (00a4480f...) -> 1 keys | 4096 slots | 1 slaves.
127.0.0.1:6387 (490b215a...) -> 1 keys | 4096 slots | 0 slaves.
127.0.0.1:6381 (88fee408...) -> 1 keys | 4096 slots | 1 slaves.
[OK] 4 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 127.0.0.1:6382)
M: eebab70db048fe5377fe17eaf9c60bd086348cec 127.0.0.1:6382
   slots:[6827-10922] (4096 slots) master
   1 additional replica(s)
M: 00a4480f9d7ba0a964633d67060ed025790e4587 127.0.0.1:6383
   slots:[12288-16383] (4096 slots) master
   1 additional replica(s)
S: 9ddf68adf8a799b5ec1f1512182a497cacde1f11 127.0.0.1:6385
   slots: (0 slots) slave
   replicates eebab70db048fe5377fe17eaf9c60bd086348cec
M: 490b215a9a999dd17c1825fc6d4dd3ec2e8e002f 127.0.0.1:6387
   slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
S: ba150b06e1b9696b6c62e1339afc1418855460d0 127.0.0.1:6384
   slots: (0 slots) slave
   replicates 88fee408e0047fbbe4eafe1602d54534c3dc78a3
S: bad00ad1ac74a54303a53283fca11af9df749b3d 127.0.0.1:6386
   slots: (0 slots) slave
   replicates 00a4480f9d7ba0a964633d67060ed025790e4587
M: 88fee408e0047fbbe4eafe1602d54534c3dc78a3 127.0.0.1:6381
   slots:[1365-5460] (4096 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

Attach node 6388 as the replica of node 6387:

redis-cli --cluster add-node 127.0.0.1:6388 127.0.0.1:6387 --cluster-slave --cluster-master-id 490b215a9a999dd17c1825fc6d4dd3ec2e8e002f

Check the cluster status: node 6387 is now paired with replica 6388.

root@iZ2ze0v2pan6jiw4jb3g2cZ:/data# redis-cli --cluster check 127.0.0.1:6382
127.0.0.1:6382 (eebab70d...) -> 1 keys | 4096 slots | 1 slaves.
127.0.0.1:6383 (00a4480f...) -> 1 keys | 4096 slots | 1 slaves.
127.0.0.1:6387 (490b215a...) -> 1 keys | 4096 slots | 1 slaves.
127.0.0.1:6381 (88fee408...) -> 1 keys | 4096 slots | 1 slaves.
[OK] 4 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 127.0.0.1:6382)
M: eebab70db048fe5377fe17eaf9c60bd086348cec 127.0.0.1:6382
   slots:[6827-10922] (4096 slots) master
   1 additional replica(s)
M: 00a4480f9d7ba0a964633d67060ed025790e4587 127.0.0.1:6383
   slots:[12288-16383] (4096 slots) master
   1 additional replica(s)
S: 6560339f06833f705ce1a00a297f1cbd74255658 127.0.0.1:6388
   slots: (0 slots) slave
   replicates 490b215a9a999dd17c1825fc6d4dd3ec2e8e002f
S: 9ddf68adf8a799b5ec1f1512182a497cacde1f11 127.0.0.1:6385
   slots: (0 slots) slave
   replicates eebab70db048fe5377fe17eaf9c60bd086348cec
M: 490b215a9a999dd17c1825fc6d4dd3ec2e8e002f 127.0.0.1:6387
   slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
   1 additional replica(s)
S: ba150b06e1b9696b6c62e1339afc1418855460d0 127.0.0.1:6384
   slots: (0 slots) slave
   replicates 88fee408e0047fbbe4eafe1602d54534c3dc78a3
S: bad00ad1ac74a54303a53283fca11af9df749b3d 127.0.0.1:6386
   slots: (0 slots) slave
   replicates 00a4480f9d7ba0a964633d67060ed025790e4587
M: 88fee408e0047fbbe4eafe1602d54534c3dc78a3 127.0.0.1:6381
   slots:[1365-5460] (4096 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
2. Scaling in

Remove node 6388 from the cluster:

root@iZ2ze0v2pan6jiw4jb3g2cZ:/data# redis-cli --cluster del-node 127.0.0.1:6388 6560339f06833f705ce1a00a297f1cbd74255658
>>> Removing node 6560339f06833f705ce1a00a297f1cbd74255658 from cluster 127.0.0.1:6388
>>> Sending CLUSTER FORGET messages to the cluster...
>>> Sending CLUSTER RESET SOFT to the deleted node.

Empty node 6387's hash slots by moving them to another node (in this example, all of them go to node 6381):

root@iZ2ze0v2pan6jiw4jb3g2cZ:/data# redis-cli --cluster reshard 127.0.0.1:6382
>>> Performing Cluster Check (using node 127.0.0.1:6382)
M: eebab70db048fe5377fe17eaf9c60bd086348cec 127.0.0.1:6382
   slots:[6827-10922] (4096 slots) master
   1 additional replica(s)
M: 00a4480f9d7ba0a964633d67060ed025790e4587 127.0.0.1:6383
   slots:[12288-16383] (4096 slots) master
   1 additional replica(s)
S: 9ddf68adf8a799b5ec1f1512182a497cacde1f11 127.0.0.1:6385
   slots: (0 slots) slave
   replicates eebab70db048fe5377fe17eaf9c60bd086348cec
M: 490b215a9a999dd17c1825fc6d4dd3ec2e8e002f 127.0.0.1:6387
   slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
S: ba150b06e1b9696b6c62e1339afc1418855460d0 127.0.0.1:6384
   slots: (0 slots) slave
   replicates 88fee408e0047fbbe4eafe1602d54534c3dc78a3
S: bad00ad1ac74a54303a53283fca11af9df749b3d 127.0.0.1:6386
   slots: (0 slots) slave
   replicates 00a4480f9d7ba0a964633d67060ed025790e4587
M: 88fee408e0047fbbe4eafe1602d54534c3dc78a3 127.0.0.1:6381
   slots:[1365-5460] (4096 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 4096
What is the receiving node ID? 88fee408e0047fbbe4eafe1602d54534c3dc78a3
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1: 490b215a9a999dd17c1825fc6d4dd3ec2e8e002f
Source node #2: done

Check the cluster status: node 6387's hash slots have been emptied and node 6381 has received them (note that 6387 now shows up as a replica of 6381):

root@iZ2ze0v2pan6jiw4jb3g2cZ:/data# redis-cli --cluster check 127.0.0.1:6382
127.0.0.1:6382 (eebab70d...) -> 1 keys | 4096 slots | 1 slaves.
127.0.0.1:6383 (00a4480f...) -> 1 keys | 4096 slots | 1 slaves.
127.0.0.1:6381 (88fee408...) -> 2 keys | 8192 slots | 2 slaves.
[OK] 4 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 127.0.0.1:6382)
M: eebab70db048fe5377fe17eaf9c60bd086348cec 127.0.0.1:6382
   slots:[6827-10922] (4096 slots) master
   1 additional replica(s)
M: 00a4480f9d7ba0a964633d67060ed025790e4587 127.0.0.1:6383
   slots:[12288-16383] (4096 slots) master
   1 additional replica(s)
S: 9ddf68adf8a799b5ec1f1512182a497cacde1f11 127.0.0.1:6385
   slots: (0 slots) slave
   replicates eebab70db048fe5377fe17eaf9c60bd086348cec
S: 490b215a9a999dd17c1825fc6d4dd3ec2e8e002f 127.0.0.1:6387
   slots: (0 slots) slave
   replicates 88fee408e0047fbbe4eafe1602d54534c3dc78a3
S: ba150b06e1b9696b6c62e1339afc1418855460d0 127.0.0.1:6384
   slots: (0 slots) slave
   replicates 88fee408e0047fbbe4eafe1602d54534c3dc78a3
S: bad00ad1ac74a54303a53283fca11af9df749b3d 127.0.0.1:6386
   slots: (0 slots) slave
   replicates 00a4480f9d7ba0a964633d67060ed025790e4587
M: 88fee408e0047fbbe4eafe1602d54534c3dc78a3 127.0.0.1:6381
   slots:[0-6826],[10923-12287] (8192 slots) master
   2 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

Remove node 6387 from the cluster:

root@iZ2ze0v2pan6jiw4jb3g2cZ:/data# redis-cli --cluster del-node 127.0.0.1:6387 490b215a9a999dd17c1825fc6d4dd3ec2e8e002f
>>> Removing node 490b215a9a999dd17c1825fc6d4dd3ec2e8e002f from cluster 127.0.0.1:6387
>>> Sending CLUSTER FORGET messages to the cluster...
>>> Sending CLUSTER RESET SOFT to the deleted node.
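
To confirm the cluster is back to three masters with one replica each, and to clean up the two containers that are no longer part of it, something like the following should suffice (run the check from inside any remaining container, as before):

redis-cli --cluster check 127.0.0.1:6382        # expect 3 masters, 1 replica each, all 16384 slots covered
docker stop redis-07 redis-08 && docker rm redis-07 redis-08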