Redis 5 cluster scale-down and scale-up

Most of the blog posts I've seen cover this as scale-up first, then scale-down. Here I assume you already have a running 3-master / 3-replica cluster, so we'll shrink it first and then grow it back; the operations are the same either way.

One thing up front: Redis no longer supports managing clusters with the old Ruby tool (redis-trib.rb); everything has moved into redis-cli --cluster. Its built-in help covers all of the operations used below:

[root@localhost src]# ./redis-cli --cluster help

Here are the steps.

  1. First, look at the current cluster node info
127.0.0.1:6379> cluster nodes
# this is the master we are going to remove
5c831c468474b918911965a0d4e7f45f9a490c21 192.168.152.129:6379@16379 myself,master - 0 1590672980000 2 connected 5461-10922
55c211b52bf0830931d7111387b38e2822132dbf 192.168.152.129:6380@16380 master - 0 1590672981000 7 connected 0-5460
187372b2148f33dfa089af1396a2b427d9e3196d 192.168.152.130:6379@16379 master - 0 1590672981068 3 connected 10923-16383
# this is the replica we are going to remove
97816e6462753f1a8a4644517ac865794311e807 192.168.152.130:6380@16380 slave 5c831c468474b918911965a0d4e7f45f9a490c21 0 1590672982075 6 connected
c4d26cbd322b9ebabb81e315667a2dbdb8102cef 192.168.152.131:6380@16380 slave 187372b2148f33dfa089af1396a2b427d9e3196d 0 1590672983083 4 connected
9d27a18c320f588332cda1ad19a1d6e08e08d458 192.168.152.131:6379@16379 slave 55c211b52bf0830931d7111387b38e2822132dbf 0 1590672982000 7 connected
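For reference, each line of this CLUSTER NODES output has a fixed field order: node id, address (client-port@cluster-bus-port), flags, master id (- for masters), last ping sent, last pong received, config epoch, link state, then any slot ranges. A minimal parsing sketch (the dict keys are my own informal labels, not a Redis API):

```python
# Split one line of CLUSTER NODES output into labeled fields.
def parse_cluster_node(line):
    parts = line.split()
    return {
        "id": parts[0],
        "addr": parts[1],          # client-port@cluster-bus-port
        "flags": parts[2].split(","),
        "master_id": parts[3],     # "-" for a master, else the master's id
        "ping_sent": int(parts[4]),
        "pong_recv": int(parts[5]),
        "config_epoch": int(parts[6]),
        "link_state": parts[7],
        "slots": parts[8:],        # e.g. ["5461-10922"]; empty for replicas
    }

line = ("5c831c468474b918911965a0d4e7f45f9a490c21 "
        "192.168.152.129:6379@16379 myself,master - 0 1590672980000 2 "
        "connected 5461-10922")
info = parse_cluster_node(line)
print(info["addr"], info["slots"])   # 192.168.152.129:6379@16379 ['5461-10922']
```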


  2. Remove the replica first
# 192.168.152.131:6380 is the replica's ip and port; c4d26cbd322b9ebabb81e315667a2dbdb8102cef is the replica's node id
[root@localhost src]# ./redis-cli --cluster del-node 192.168.152.131:6380 c4d26cbd322b9ebabb81e315667a2dbdb8102cef
  3. Migrate the master's slots to another master
# 192.168.152.129:6379 is the master's ip and port; reshard prompts interactively, move all of this node's slots to another master
[root@localhost src]# ./redis-cli --cluster reshard 192.168.152.129:6379

  4. Remove the (now empty) master
# 192.168.152.129:6379 is the master's ip and port; 5c831c468474b918911965a0d4e7f45f9a490c21 is the master's node id
[root@localhost src]# ./redis-cli --cluster del-node 192.168.152.129:6379 5c831c468474b918911965a0d4e7f45f9a490c21
>>> Removing node 5c831c468474b918911965a0d4e7f45f9a490c21 from cluster 192.168.152.129:6379
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.

After removal, you also need to delete the node's on-disk Redis data (my data directory is /usr/local/data/redis). If you don't, restarting the node and re-adding it to the cluster fails, because on restart it loads its persistence files and still remembers the old cluster state. The removed replica needs the same cleanup before its restart.

[ERR] Node 192.168.152.129:6379 is not empty. Either the node already knows other nodes (check with CLUSTER NODES) or contains some key in database 0.
[root@localhost redis]# ls
appendonly_6380.aof  dump_6379.rdb  dump_6380.rdb  nodes-6379.conf  nodes-6380.conf
[root@localhost redis]# pwd
/usr/local/data/redis
[root@localhost redis]# rm *6379*
rm: remove regular file ‘dump_6379.rdb’? y
rm: remove regular file ‘nodes-6379.conf’? y
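The cleanup above can be sketched as a small helper; the filenames assume the per-port naming visible in the listing (dump_&lt;port&gt;.rdb, nodes-&lt;port&gt;.conf, appendonly_&lt;port&gt;.aof), so adjust them to your own redis.conf:

```shell
# Hedged sketch: wipe one node's persisted state (RDB/AOF dumps and the
# cluster nodes-*.conf) so it can rejoin a cluster empty.
clean_node_state() {
  dir=$1 port=$2
  rm -f "$dir/dump_${port}.rdb" \
        "$dir/nodes-${port}.conf" \
        "$dir/appendonly_${port}.aof"
}

# Example: wipe the 6379 node's files; run again with the replica's port.
clean_node_state /usr/local/data/redis 6379
```

Run it once per removed node, before restarting the process.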

  5. Check the cluster info after the scale-down
192.168.152.129:6380> cluster nodes
97816e6462753f1a8a4644517ac865794311e807 192.168.152.130:6380@16380 slave 55c211b52bf0830931d7111387b38e2822132dbf 0 1590674239974 7 connected
55c211b52bf0830931d7111387b38e2822132dbf 192.168.152.129:6380@16380 myself,master - 0 1590674239000 7 connected 0-10922
187372b2148f33dfa089af1396a2b427d9e3196d 192.168.152.130:6379@16379 master - 0 1590674238000 3 connected 10923-16383
9d27a18c320f588332cda1ad19a1d6e08e08d458 192.168.152.131:6379@16379 slave 55c211b52bf0830931d7111387b38e2822132dbf 0 1590674238962 7 connected

Comparing this with the output from step 1, the replica 192.168.152.131:6380 and the master 192.168.152.129:6379 are no longer in the cluster, and all of the removed master's slots have been migrated to the 192.168.152.129:6380@16380 node.

That completes the scale-down.

Next, let's add the removed nodes back.

  6. Start the 192.168.152.129:6379 master
[root@localhost src]# /usr/local/redis-5.0.3/src/redis-server /usr/local/redis-5.0.3/conf/6379/redis.conf
  7. Add the master back to the cluster
# syntax: add-node <new-node ip:port> <any existing cluster node ip:port>
[root@localhost src]# ./redis-cli --cluster add-node 192.168.152.129:6379 192.168.152.129:6380
>>> Adding node 192.168.152.129:6379 to cluster 192.168.152.129:6380
>>> Performing Cluster Check (using node 192.168.152.129:6380)
M: 55c211b52bf0830931d7111387b38e2822132dbf 192.168.152.129:6380
   slots:[0-10922] (10923 slots) master
   2 additional replica(s)
S: 97816e6462753f1a8a4644517ac865794311e807 192.168.152.130:6380
   slots: (0 slots) slave
   replicates 55c211b52bf0830931d7111387b38e2822132dbf
M: 187372b2148f33dfa089af1396a2b427d9e3196d 192.168.152.130:6379
   slots:[10923-16383] (5461 slots) master
S: 9d27a18c320f588332cda1ad19a1d6e08e08d458 192.168.152.131:6379
   slots: (0 slots) slave
   replicates 55c211b52bf0830931d7111387b38e2822132dbf
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 192.168.152.129:6379 to make it join the cluster.
[OK] New node added correctly.
[root@localhost src]# 

Check the cluster node info:

192.168.152.129:6380> cluster nodes
97816e6462753f1a8a4644517ac865794311e807 192.168.152.130:6380@16380 slave 55c211b52bf0830931d7111387b38e2822132dbf 0 1590676311000 7 connected
55c211b52bf0830931d7111387b38e2822132dbf 192.168.152.129:6380@16380 myself,master - 0 1590676311000 7 connected 0-10922
187372b2148f33dfa089af1396a2b427d9e3196d 192.168.152.130:6379@16379 master - 0 1590676312000 3 connected 10923-16383
#  192.168.152.129:6379 has joined the cluster, but has no slots assigned yet
de487014ee7d76dbd6f5674067804d19102acdb6 192.168.152.129:6379@16379 master - 0 1590676312000 0 connected
9d27a18c320f588332cda1ad19a1d6e08e08d458 192.168.152.131:6379@16379 slave 55c211b52bf0830931d7111387b38e2822132dbf 0 1590676312618 7 connected

  8. Assign slots to 192.168.152.129:6379
[root@localhost src]# ./redis-cli --cluster reshard 192.168.152.129:6379
>>> Performing Cluster Check (using node 192.168.152.129:6379)
M: de487014ee7d76dbd6f5674067804d19102acdb6 192.168.152.129:6379
   slots: (0 slots) master
S: 9d27a18c320f588332cda1ad19a1d6e08e08d458 192.168.152.131:6379
   slots: (0 slots) slave
   replicates 55c211b52bf0830931d7111387b38e2822132dbf
M: 55c211b52bf0830931d7111387b38e2822132dbf 192.168.152.129:6380
   slots:[0-10922] (10923 slots) master
   2 additional replica(s)
M: 187372b2148f33dfa089af1396a2b427d9e3196d 192.168.152.130:6379
   slots:[10923-16383] (5461 slots) master
S: 97816e6462753f1a8a4644517ac865794311e807 192.168.152.130:6380
   slots: (0 slots) slave
   replicates 55c211b52bf0830931d7111387b38e2822132dbf
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 5462    
# de487014ee7d76dbd6f5674067804d19102acdb6 is the id of 192.168.152.129:6379, the node receiving the slots
What is the receiving node ID? de487014ee7d76dbd6f5674067804d19102acdb6
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1: 55c211b52bf0830931d7111387b38e2822132dbf
Source node #2: done
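Why 5462 slots? 16384 split across three masters is 5461 each with one slot left over, so one master has to take 5462 to keep full coverage. Quick arithmetic check:

```python
# 16384 hash slots shared evenly by 3 masters: one node gets the remainder.
TOTAL_SLOTS = 16384
masters = 3
base, remainder = divmod(TOTAL_SLOTS, masters)   # 5461, remainder 1
shares = [base + 1] * remainder + [base] * (masters - remainder)
print(shares)            # [5462, 5461, 5461]
assert sum(shares) == TOTAL_SLOTS
```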

Check the cluster node info after the reshard:

192.168.152.129:6380> cluster nodes
97816e6462753f1a8a4644517ac865794311e807 192.168.152.130:6380@16380 slave de487014ee7d76dbd6f5674067804d19102acdb6 0 1590676789099 8 connected
55c211b52bf0830931d7111387b38e2822132dbf 192.168.152.129:6380@16380 myself,master - 0 1590676789000 7 connected 5462-10922
187372b2148f33dfa089af1396a2b427d9e3196d 192.168.152.130:6379@16379 master - 0 1590676791114 3 connected 10923-16383
# 192.168.152.129:6379 has now been assigned slots 0-5461
de487014ee7d76dbd6f5674067804d19102acdb6 192.168.152.129:6379@16379 master - 0 1590676790107 8 connected 0-5461
9d27a18c320f588332cda1ad19a1d6e08e08d458 192.168.152.131:6379@16379 slave 55c211b52bf0830931d7111387b38e2822132dbf 0 1590676790000 7 connected
192.168.152.129:6380> 
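Which keys land on the re-added node follows from slot hashing: Redis Cluster maps each key to CRC16(key) mod 16384 (hashing only the {hash tag} portion if the key has one), so any key whose slot falls in 0-5461 is now served by 192.168.152.129:6379. A minimal sketch of the slot computation (CRC16-CCITT, XMODEM variant, the one Redis Cluster uses):

```python
# CRC16-CCITT (XMODEM variant: polynomial 0x1021, initial value 0).
def crc16(data: bytes) -> int:
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    # Only the substring inside the first non-empty {...} is hashed (hash tags).
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384

print(key_slot("user:1"))   # some slot in 0..16383; 0-5461 now routes to 192.168.152.129:6379
```

Hash tags are what let related keys share a slot: {user1000}.following and {user1000}.followers hash identically.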

  9. Add a replica 192.168.152.131:6380 for the master 192.168.152.129:6379
[root@localhost src]# ./redis-cli --cluster add-node 192.168.152.131:6380 192.168.152.129:6379
>>> Adding node 192.168.152.131:6380 to cluster 192.168.152.129:6379
>>> Performing Cluster Check (using node 192.168.152.129:6379)
M: de487014ee7d76dbd6f5674067804d19102acdb6 192.168.152.129:6379
   slots:[0-5461] (5462 slots) master
   1 additional replica(s)
S: 9d27a18c320f588332cda1ad19a1d6e08e08d458 192.168.152.131:6379
   slots: (0 slots) slave
   replicates 55c211b52bf0830931d7111387b38e2822132dbf
M: 55c211b52bf0830931d7111387b38e2822132dbf 192.168.152.129:6380
   slots:[5462-10922] (5461 slots) master
   1 additional replica(s)
M: 187372b2148f33dfa089af1396a2b427d9e3196d 192.168.152.130:6379
   slots:[10923-16383] (5461 slots) master
S: 97816e6462753f1a8a4644517ac865794311e807 192.168.152.130:6380
   slots: (0 slots) slave
   replicates de487014ee7d76dbd6f5674067804d19102acdb6
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 192.168.152.131:6380 to make it join the cluster.
[OK] New node added correctly.

Checking the cluster node info, 192.168.152.131:6380 has joined the cluster, but as a master rather than a replica.

127.0.0.1:6379> cluster nodes
d50c4f7bf3dd3e3d8f9ff53f5ed26f9023fed5b6 192.168.152.131:6380@16380 master - 0 1590677229000 0 connected
187372b2148f33dfa089af1396a2b427d9e3196d 192.168.152.130:6379@16379 myself,master - 0 1590677227000 3 connected 10923-16383
9d27a18c320f588332cda1ad19a1d6e08e08d458 192.168.152.131:6379@16379 slave 55c211b52bf0830931d7111387b38e2822132dbf 0 1590677228000 7 connected
de487014ee7d76dbd6f5674067804d19102acdb6 192.168.152.129:6379@16379 master - 0 1590677229378 8 connected 0-5461
97816e6462753f1a8a4644517ac865794311e807 192.168.152.130:6380@16380 slave de487014ee7d76dbd6f5674067804d19102acdb6 0 1590677227000 8 connected
55c211b52bf0830931d7111387b38e2822132dbf 192.168.152.129:6380@16380 master - 0 1590677227360 7 connected 5462-10922

  10. To make 192.168.152.131:6380 a replica of 192.168.152.129:6379,
    we need to log in to 192.168.152.131:6380 itself (add-node also accepts --cluster-slave --cluster-master-id <id> to add a replica in one step)
[root@localhost src]# ./redis-cli -c -h 192.168.152.131 -p 6380
# 192.168.152.131:6380 makes itself a replica of 192.168.152.129:6379, referenced by node id
192.168.152.131:6380> cluster replicate de487014ee7d76dbd6f5674067804d19102acdb6
OK
192.168.152.131:6380> cluster nodes
# the change took effect; everything is back to the way it started
9d27a18c320f588332cda1ad19a1d6e08e08d458 192.168.152.131:6379@16379 slave 55c211b52bf0830931d7111387b38e2822132dbf 0 1590677457000 7 connected
55c211b52bf0830931d7111387b38e2822132dbf 192.168.152.129:6380@16380 master - 0 1590677459000 7 connected 5462-10922
d50c4f7bf3dd3e3d8f9ff53f5ed26f9023fed5b6 192.168.152.131:6380@16380 myself,slave de487014ee7d76dbd6f5674067804d19102acdb6 0 1590677458000 0 connected
de487014ee7d76dbd6f5674067804d19102acdb6 192.168.152.129:6379@16379 master - 0 1590677457385 8 connected 0-5461
187372b2148f33dfa089af1396a2b427d9e3196d 192.168.152.130:6379@16379 master - 0 1590677457000 3 connected 10923-16383
97816e6462753f1a8a4644517ac865794311e807 192.168.152.130:6380@16380 slave de487014ee7d76dbd6f5674067804d19102acdb6 0 1590677459401 8 connected
192.168.152.131:6380> 
