Adding Nodes to a Redis 6 Cluster

First, make sure the new nodes hold no data; otherwise the add-node command will fail with an error.
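A quick pre-check can be scripted. The ports below (8003/8103) are the ones this walkthrough uses; this is only a sketch that prints the commands to run, it does not contact the nodes.

```shell
# Print the pre-checks for each candidate node: DBSIZE must return 0 and
# cluster_known_nodes must be 1, otherwise add-node aborts with
# "[ERR] Node ... is not empty".
for port in 8003 8103; do
  echo "redis-cli -p ${port} dbsize"
  echo "redis-cli -p ${port} cluster info | grep cluster_known_nodes"
done
```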

The current node layout:

127.0.0.1:8000> cluster slots
1) 1) (integer) 0
   2) (integer) 5461
   3) 1) "127.0.0.1"
      2) (integer) 8000
      3) "25d25af226ac55e9c03723288c20f53520420767"
   4) 1) "127.0.0.1"
      2) (integer) 8100
      3) "e4ec27a4f61c66131faebab76d0c33c38fb5695c"
2) 1) (integer) 5462
   2) (integer) 10922
   3) 1) "127.0.0.1"
      2) (integer) 8001
      3) "68507c82e45915e6a257afbfc2626c2424684879"
   4) 1) "127.0.0.1"
      2) (integer) 8101
      3) "30bb3d720a0c7dad6aed79f17ab33313246a0629"
3) 1) (integer) 10923
   2) (integer) 16383
   3) 1) "127.0.0.1"
      2) (integer) 8002
      3) "3db06c21c6dea8701fadbebfebf1aa92e5b13037"
   4) 1) "127.0.0.1"
      2) (integer) 8102
      3) "e4df1b413eb5731f4de442e3e38a14612dc65700"

We now want to add two nodes: 127.0.0.1:8003 and 127.0.0.1:8103.

Since Redis 5.0, the command for adding a node is:

redis-cli  --cluster add-node {new-node IP:PORT} {IP:PORT of any node already in the cluster}

[root@XXX ~]# redis-cli  --cluster add-node 127.0.0.1:8003 127.0.0.1:8002
>>> Adding node 127.0.0.1:8003 to cluster 127.0.0.1:8002
>>> Performing Cluster Check (using node 127.0.0.1:8002)
M: 3db06c21c6dea8701fadbebfebf1aa92e5b13037 127.0.0.1:8002
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 30bb3d720a0c7dad6aed79f17ab33313246a0629 127.0.0.1:8101
   slots: (0 slots) slave
   replicates 68507c82e45915e6a257afbfc2626c2424684879
M: 68507c82e45915e6a257afbfc2626c2424684879 127.0.0.1:8001
   slots:[5462-10922] (5461 slots) master
   1 additional replica(s)
S: e4df1b413eb5731f4de442e3e38a14612dc65700 127.0.0.1:8102
   slots: (0 slots) slave
   replicates 3db06c21c6dea8701fadbebfebf1aa92e5b13037
S: e4ec27a4f61c66131faebab76d0c33c38fb5695c 127.0.0.1:8100
   slots: (0 slots) slave
   replicates 25d25af226ac55e9c03723288c20f53520420767
M: 25d25af226ac55e9c03723288c20f53520420767 127.0.0.1:8000
   slots:[0-5461] (5462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 127.0.0.1:8003 to make it join the cluster.
[OK] New node added correctly.
[root@XXX ~]# redis-cli  --cluster add-node 127.0.0.1:8103 127.0.0.1:8002 
>>> Adding node 127.0.0.1:8103 to cluster 127.0.0.1:8002
>>> Performing Cluster Check (using node 127.0.0.1:8002)
M: 3db06c21c6dea8701fadbebfebf1aa92e5b13037 127.0.0.1:8002
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 30bb3d720a0c7dad6aed79f17ab33313246a0629 127.0.0.1:8101
   slots: (0 slots) slave
   replicates 68507c82e45915e6a257afbfc2626c2424684879
M: 68507c82e45915e6a257afbfc2626c2424684879 127.0.0.1:8001
   slots:[5462-10922] (5461 slots) master
   1 additional replica(s)
M: e9aac3ea026f8b5b14267861021a282103671a9c 127.0.0.1:8003
   slots: (0 slots) master
S: e4df1b413eb5731f4de442e3e38a14612dc65700 127.0.0.1:8102
   slots: (0 slots) slave
   replicates 3db06c21c6dea8701fadbebfebf1aa92e5b13037
S: e4ec27a4f61c66131faebab76d0c33c38fb5695c 127.0.0.1:8100
   slots: (0 slots) slave
   replicates 25d25af226ac55e9c03723288c20f53520420767
M: 25d25af226ac55e9c03723288c20f53520420767 127.0.0.1:8000
   slots:[0-5461] (5462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 127.0.0.1:8103 to make it join the cluster.
[OK] New node added correctly.

After the command runs, every node in the cluster adds the new node to its own node table (its "address book").

By default, a newly added node joins as a master and owns no hash slots:

127.0.0.1:8000> cluster nodes
e9aac3ea026f8b5b14267861021a282103671a9c 127.0.0.1:8003@18003 master - 0 1626230308792 0 connected
e4ec27a4f61c66131faebab76d0c33c38fb5695c 127.0.0.1:8100@18100 slave 25d25af226ac55e9c03723288c20f53520420767 0 1626230307788 16 connected
68507c82e45915e6a257afbfc2626c2424684879 127.0.0.1:8001@18001 master - 0 1626230306000 14 connected 5462-10922
89609f9d318bbca243c622195dcffb0c4c739c21 127.0.0.1:8103@18103 master - 0 1626230309795 17 connected
25d25af226ac55e9c03723288c20f53520420767 127.0.0.1:8000@18000 myself,master - 0 1626230306000 16 connected 0-5461
e4df1b413eb5731f4de442e3e38a14612dc65700 127.0.0.1:8102@18102 slave 3db06c21c6dea8701fadbebfebf1aa92e5b13037 0 1626230306785 12 connected
30bb3d720a0c7dad6aed79f17ab33313246a0629 127.0.0.1:8101@18101 slave 68507c82e45915e6a257afbfc2626c2424684879 0 1626230307000 14 connected
3db06c21c6dea8701fadbebfebf1aa92e5b13037 127.0.0.1:8002@18002 master - 0 1626230307000 12 connected 10923-16383

Now let's assign slots to the new master.

Command:

redis-cli  --cluster reshard 127.0.0.1:8000  

Here 127.0.0.1:8000 can be any node in the cluster.

[root@xxx ~]# redis-cli  --cluster reshard 127.0.0.1:8000                
>>> Performing Cluster Check (using node 127.0.0.1:8000)
M: 25d25af226ac55e9c03723288c20f53520420767 127.0.0.1:8000
   slots:[0-5461] (5462 slots) master      # 0-5461
   1 additional replica(s)
M: e9aac3ea026f8b5b14267861021a282103671a9c 127.0.0.1:8003
   slots: (0 slots) master                # new node, no slots yet
S: e4ec27a4f61c66131faebab76d0c33c38fb5695c 127.0.0.1:8100
   slots: (0 slots) slave
   replicates 25d25af226ac55e9c03723288c20f53520420767
M: 68507c82e45915e6a257afbfc2626c2424684879 127.0.0.1:8001
   slots:[5462-10922] (5461 slots) master #5462-10922
   1 additional replica(s)
M: 89609f9d318bbca243c622195dcffb0c4c739c21 127.0.0.1:8103
   slots: (0 slots) master                # new node, no slots yet
S: e4df1b413eb5731f4de442e3e38a14612dc65700 127.0.0.1:8102
   slots: (0 slots) slave                
   replicates 3db06c21c6dea8701fadbebfebf1aa92e5b13037
S: 30bb3d720a0c7dad6aed79f17ab33313246a0629 127.0.0.1:8101
   slots: (0 slots) slave
   replicates 68507c82e45915e6a257afbfc2626c2424684879
M: 3db06c21c6dea8701fadbebfebf1aa92e5b13037 127.0.0.1:8002
   slots:[10923-16383] (5461 slots) master #10923-16383
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 

Redis has 16384 slots in total. The cluster went from 3 masters to 4, so the new master should receive 16384/4 = 4096 slots.
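As a quick sanity check on that number:

```shell
# 16384 hash slots divided evenly across 4 masters.
total_slots=16384
masters=4
per_master=$((total_slots / masters))
echo "slots for the new master: ${per_master}"   # 4096
```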

How many slots do you want to move (from 1 to 16384)? 4096
What is the receiving node ID? e9aac3ea026f8b5b14267861021a282103671a9c
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1: all

Ready to move 4096 slots.

What is the receiving node ID?  Enter the node ID of the node that will receive the slots, i.e. the node ID of 127.0.0.1:8003.

Source node #1:  Which masters should the 4096 slots come from? Enter all: the 4096 slots are taken evenly from the other 3 masters, about 1365 slots each (one source contributes one extra slot).
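Note that 4096 slots split across 3 source masters does not divide evenly, which is why one source ends up giving one slot more than the others:

```shell
to_move=4096
sources=3
base=$((to_move / sources))         # slots taken from each source master
extra=$((to_move % sources))        # leftover slots assigned to one source
echo "base=${base} extra=${extra}"  # base=1365 extra=1
```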

Ready to move 4096 slots.
  Source nodes:
    M: 25d25af226ac55e9c03723288c20f53520420767 127.0.0.1:8000
       slots:[0-5461] (5462 slots) master
       1 additional replica(s)
    M: 68507c82e45915e6a257afbfc2626c2424684879 127.0.0.1:8001
       slots:[5462-10922] (5461 slots) master
       1 additional replica(s)
    M: 89609f9d318bbca243c622195dcffb0c4c739c21 127.0.0.1:8103
       slots: (0 slots) master
    M: 3db06c21c6dea8701fadbebfebf1aa92e5b13037 127.0.0.1:8002
       slots:[10923-16383] (5461 slots) master
       1 additional replica(s)
  Destination node:
    M: e9aac3ea026f8b5b14267861021a282103671a9c 127.0.0.1:8003
       slots: (0 slots) master

This lists the slots currently held by each source node.

The destination node is 127.0.0.1:8003.

  Resharding plan:
    Moving slot 0 from 25d25af226ac55e9c03723288c20f53520420767
    Moving slot 1 from 25d25af226ac55e9c03723288c20f53520420767
    Moving slot 2 from 25d25af226ac55e9c03723288c20f53520420767
......
    Moving slot 1363 from 25d25af226ac55e9c03723288c20f53520420767
    Moving slot 1364 from 25d25af226ac55e9c03723288c20f53520420767
    Moving slot 1365 from 25d25af226ac55e9c03723288c20f53520420767
    Moving slot 5462 from 68507c82e45915e6a257afbfc2626c2424684879
    Moving slot 5463 from 68507c82e45915e6a257afbfc2626c2424684879
    Moving slot 5464 from 68507c82e45915e6a257afbfc2626c2424684879
......
    Moving slot 6823 from 68507c82e45915e6a257afbfc2626c2424684879
    Moving slot 6824 from 68507c82e45915e6a257afbfc2626c2424684879
    Moving slot 6825 from 68507c82e45915e6a257afbfc2626c2424684879
    Moving slot 6826 from 68507c82e45915e6a257afbfc2626c2424684879
    Moving slot 10923 from 3db06c21c6dea8701fadbebfebf1aa92e5b13037
    Moving slot 10924 from 3db06c21c6dea8701fadbebfebf1aa92e5b13037
    Moving slot 10925 from 3db06c21c6dea8701fadbebfebf1aa92e5b13037
......
    Moving slot 12285 from 3db06c21c6dea8701fadbebfebf1aa92e5b13037
    Moving slot 12286 from 3db06c21c6dea8701fadbebfebf1aa92e5b13037
    Moving slot 12287 from 3db06c21c6dea8701fadbebfebf1aa92e5b13037
Do you want to proceed with the proposed reshard plan (yes/no)? yes
Moving slot 0 from 127.0.0.1:8000 to 127.0.0.1:8003: 
Moving slot 1 from 127.0.0.1:8000 to 127.0.0.1:8003: 
Moving slot 2 from 127.0.0.1:8000 to 127.0.0.1:8003: 
... ...
Moving slot 12285 from 127.0.0.1:8002 to 127.0.0.1:8003: 
Moving slot 12286 from 127.0.0.1:8002 to 127.0.0.1:8003: 
Moving slot 12287 from 127.0.0.1:8002 to 127.0.0.1:8003: 

As the output shows, roughly 1365 slots are taken from each existing shard (1366 from 127.0.0.1:8000) and moved to the new master.

Checking the slot allocation afterwards shows an even distribution: 4096 slots per master.

Finally, make 127.0.0.1:8103 a replica of 127.0.0.1:8003:

127.0.0.1:8103> cluster replicate e9aac3ea026f8b5b14267861021a282103671a9c
OK
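As a side note, redis-cli can also add a node directly as a replica in a single step via --cluster-slave / --cluster-master-id, avoiding the separate CLUSTER REPLICATE. The sketch below only prints the command (reusing the master ID from this walkthrough); it was not the path taken here.

```shell
# One-step alternative to add-node followed by CLUSTER REPLICATE.
MASTER_ID="e9aac3ea026f8b5b14267861021a282103671a9c"
echo "redis-cli --cluster add-node 127.0.0.1:8103 127.0.0.1:8002 --cluster-slave --cluster-master-id ${MASTER_ID}"
```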

Checking the node status: the cluster is now 4 masters and 4 replicas.

127.0.0.1:8103> cluster nodes
e9aac3ea026f8b5b14267861021a282103671a9c 127.0.0.1:8003@18003 master - 0 1626233452000 18 connected 0-1365 5462-6826 10923-12287
25d25af226ac55e9c03723288c20f53520420767 127.0.0.1:8000@18000 master - 0 1626233451000 16 connected 1366-5461
3db06c21c6dea8701fadbebfebf1aa92e5b13037 127.0.0.1:8002@18002 master - 0 1626233451999 12 connected 12288-16383
68507c82e45915e6a257afbfc2626c2424684879 127.0.0.1:8001@18001 master - 0 1626233453002 14 connected 6827-10922
e4df1b413eb5731f4de442e3e38a14612dc65700 127.0.0.1:8102@18102 slave 3db06c21c6dea8701fadbebfebf1aa92e5b13037 0 1626233452000 12 connected
e4ec27a4f61c66131faebab76d0c33c38fb5695c 127.0.0.1:8100@18100 slave 25d25af226ac55e9c03723288c20f53520420767 0 1626233450000 16 connected
30bb3d720a0c7dad6aed79f17ab33313246a0629 127.0.0.1:8101@18101 slave 68507c82e45915e6a257afbfc2626c2424684879 0 1626233451000 14 connected
89609f9d318bbca243c622195dcffb0c4c739c21 127.0.0.1:8103@18103 myself,slave e9aac3ea026f8b5b14267861021a282103671a9c 0 1626233451000 18 connected
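The per-master totals in that listing can be verified with a little arithmetic over the inclusive slot ranges:

```shell
# Count slots in an inclusive range [start, end].
slots() { echo $(( $2 - $1 + 1 )); }
n8003=$(( $(slots 0 1365) + $(slots 5462 6826) + $(slots 10923 12287) ))
n8000=$(slots 1366 5461)
n8001=$(slots 6827 10922)
n8002=$(slots 12288 16383)
echo "8003=${n8003} 8000=${n8000} 8001=${n8001} 8002=${n8002}"  # 4096 each
echo "total=$(( n8003 + n8000 + n8001 + n8002 ))"               # 16384
```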

Check the slot distribution:

127.0.0.1:8103> cluster slots
1) 1) (integer) 0
   2) (integer) 1365
   3) 1) "127.0.0.1"
      2) (integer) 8003
      3) "e9aac3ea026f8b5b14267861021a282103671a9c"
   4) 1) "127.0.0.1"
      2) (integer) 8103
      3) "89609f9d318bbca243c622195dcffb0c4c739c21"
2) 1) (integer) 1366
   2) (integer) 5461
   3) 1) "127.0.0.1"
      2) (integer) 8000
      3) "25d25af226ac55e9c03723288c20f53520420767"
   4) 1) "127.0.0.1"
      2) (integer) 8100
      3) "e4ec27a4f61c66131faebab76d0c33c38fb5695c"
3) 1) (integer) 5462
   2) (integer) 6826
   3) 1) "127.0.0.1"
      2) (integer) 8003
      3) "e9aac3ea026f8b5b14267861021a282103671a9c"
   4) 1) "127.0.0.1"
      2) (integer) 8103
      3) "89609f9d318bbca243c622195dcffb0c4c739c21"
4) 1) (integer) 6827
   2) (integer) 10922
   3) 1) "127.0.0.1"
      2) (integer) 8001
      3) "68507c82e45915e6a257afbfc2626c2424684879"
   4) 1) "127.0.0.1"
      2) (integer) 8101
      3) "30bb3d720a0c7dad6aed79f17ab33313246a0629"
5) 1) (integer) 10923
   2) (integer) 12287
   3) 1) "127.0.0.1"
      2) (integer) 8003
      3) "e9aac3ea026f8b5b14267861021a282103671a9c"
   4) 1) "127.0.0.1"
      2) (integer) 8103
      3) "89609f9d318bbca243c622195dcffb0c4c739c21"
6) 1) (integer) 12288
   2) (integer) 16383
   3) 1) "127.0.0.1"
      2) (integer) 8002
      3) "3db06c21c6dea8701fadbebfebf1aa92e5b13037"
   4) 1) "127.0.0.1"
      2) (integer) 8102
      3) "e4df1b413eb5731f4de442e3e38a14612dc65700"
