Redis 6 cluster: adding a master, adding a slave, and migrating slots after a failover

I. Environment

CentOS 7.9

redis 6.2.11

A three-server cluster; each server runs one master instance and one slave instance.

Scenario: server 1 (10.10.10.63) was reset, so both of its instances were lost.

Master IP    | Its slave replicates (not itself) | Ports                     | Cluster bus ports | Notes
10.10.10.63  | 10.10.10.65                       | master: 6479, slave: 6579 | 16479, 16579      | master node + slave node
10.10.10.64  | 10.10.10.63                       | master: 6479, slave: 6579 | 16479, 16579      | master node + slave node
10.10.10.65  | 10.10.10.64                       | master: 6479, slave: 6579 | 16479, 16579      | master node + slave node
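Since 10.10.10.63 was wiped, its two Redis instances have to be reinstalled and started in cluster mode (empty data, fresh nodes.conf) before they can be added back. A minimal sketch, assuming the binary and config paths below (they are illustrative assumptions; adjust to your layout):

# on 10.10.10.63 (paths are assumptions for illustration)
/usr/local/bin/redis-server /etc/redis/redis-6479.conf
/usr/local/bin/redis-server /etc/redis/redis-6579.conf

# each config needs at least:
#   port 6479                           (6579 for the second instance)
#   cluster-enabled yes
#   cluster-config-file nodes-6479.conf
#   cluster-node-timeout 15000
#   requirepass myPassword
#   masterauth myPassword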

II. Check the cluster

redis-cli --cluster help

[root@cespprod1 redis-6.2.11]# redis-cli --cluster check 10.10.10.64:6479 -a myPassword
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
10.10.10.64:6479 (05f48182...) -> 28 keys | 5462 slots | 1 slaves.
10.10.10.65:6479 (903a7fc4...) -> 32 keys | 5461 slots | 0 slaves.
10.10.10.64:6579 (da133856...) -> 25 keys | 5461 slots | 0 slaves.
[OK] 85 keys in 3 masters.
0.01 keys per slot on average.
>>> Performing Cluster Check (using node 10.10.10.64:6479)
M: 05f481827fc5d3a0362a3d7a06eb1fbd52c73303 10.10.10.64:6479
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
M: 903a7fc4ee63c8cde8276fa5d7be5d4a258e2005 10.10.10.65:6479
   slots:[10923-16383] (5461 slots) master
M: da133856054e4e160d962782389bc461af21a704 10.10.10.64:6579
   slots:[0-5460] (5461 slots) master
S: 9b11853186c0c4d48b4210ddab042755ba68fd06 10.10.10.65:6579
   slots: (0 slots) slave
   replicates 05f481827fc5d3a0362a3d7a06eb1fbd52c73303
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

The cluster was originally three masters and three slaves; after the failover, one slave was automatically promoted to master (10.10.10.64:6579 now serves slots 0-5460, and only one master still has a replica).
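The same conclusion can be drawn from any surviving node with CLUSTER INFO and CLUSTER NODES (a quick sketch, output omitted):

# cluster_state should be ok and cluster_slots_assigned 16384
redis-cli -h 10.10.10.64 -p 6479 -a myPassword CLUSTER INFO
# CLUSTER NODES lists each node's ID, role (master/slave) and slot ranges
redis-cli -h 10.10.10.64 -p 6479 -a myPassword CLUSTER NODES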

III. Add a new master node and a new slave node

1. Add a master node (command)

redis-cli --cluster add-node 10.10.10.63:6479 10.10.10.64:6479 -a myPassword

The first address is the node to be added; the second is any node already in the cluster.
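Side note: add-node refuses a new node that is not empty (one that already knows other cluster nodes or holds keys in database 0). If that happens and the data on that instance really is disposable, it can be wiped first; a sketch:

redis-cli -h 10.10.10.63 -p 6479 -a myPassword FLUSHALL            # drop any leftover keys
redis-cli -h 10.10.10.63 -p 6479 -a myPassword CLUSTER RESET HARD  # forget old cluster state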

2. Add a slave node (command)

redis-cli --cluster add-node 10.10.10.63:6579 10.10.10.64:6479 --cluster-slave --cluster-master-id be7a6314fa34e2e3957d68368c99cb2c24c22b01 -a myPassword 

As above, the first address is the node to be added; the second is any node already in the cluster.

--cluster-slave: add the node as a slave (replica)

--cluster-master-id: the ID of the master it should replicate
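For reference, --cluster-master-id can be left out, in which case redis-cli attaches the new replica to a master with the fewest replicas. A sketch of that variant (below, the explicit ID is used instead so the replica is pinned to the new 10.10.10.63:6479 master):

redis-cli --cluster add-node 10.10.10.63:6579 10.10.10.64:6479 --cluster-slave -a myPassword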

3. Add the master node

[root@cespprod1 redis-6.2.11]# redis-cli --cluster add-node 10.10.10.63:6479 10.10.10.64:6479 -a myPassword
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
>>> Adding node 10.10.10.63:6479 to cluster 10.10.10.64:6479
>>> Performing Cluster Check (using node 10.10.10.64:6479)
M: 05f481827fc5d3a0362a3d7a06eb1fbd52c73303 10.10.10.64:6479
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
M: 903a7fc4ee63c8cde8276fa5d7be5d4a258e2005 10.10.10.65:6479
   slots:[10923-16383] (5461 slots) master
M: da133856054e4e160d962782389bc461af21a704 10.10.10.64:6579
   slots:[0-5460] (5461 slots) master
S: 9b11853186c0c4d48b4210ddab042755ba68fd06 10.10.10.65:6579
   slots: (0 slots) slave
   replicates 05f481827fc5d3a0362a3d7a06eb1fbd52c73303
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 10.10.10.63:6479 to make it join the cluster.
[OK] New node added correctly.

4. Check the new master node

[root@cespprod1 redis-6.2.11]# redis-cli --cluster check 10.10.10.64:6479 -a myPassword
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
10.10.10.64:6479 (05f48182...) -> 27 keys | 5462 slots | 1 slaves.
10.10.10.63:6479 (be7a6314...) -> 0 keys | 0 slots | 0 slaves.
10.10.10.65:6479 (903a7fc4...) -> 30 keys | 5461 slots | 0 slaves.
10.10.10.64:6579 (da133856...) -> 25 keys | 5461 slots | 0 slaves.
[OK] 82 keys in 4 masters.
0.01 keys per slot on average.
>>> Performing Cluster Check (using node 10.10.10.64:6479)
M: 05f481827fc5d3a0362a3d7a06eb1fbd52c73303 10.10.10.64:6479
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
M: be7a6314fa34e2e3957d68368c99cb2c24c22b01 10.10.10.63:6479
   slots: (0 slots) master
M: 903a7fc4ee63c8cde8276fa5d7be5d4a258e2005 10.10.10.65:6479
   slots:[10923-16383] (5461 slots) master
M: da133856054e4e160d962782389bc461af21a704 10.10.10.64:6579
   slots:[0-5460] (5461 slots) master
S: 9b11853186c0c4d48b4210ddab042755ba68fd06 10.10.10.65:6579
   slots: (0 slots) slave
   replicates 05f481827fc5d3a0362a3d7a06eb1fbd52c73303
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

5. Add the slave node

If the wrong master is specified, you can remove the slave and add it again (see the del-node sketch after step 6), or change the slave's master as shown in step 7.

[root@cespprod1 redis-6.2.11]# redis-cli --cluster add-node 10.10.10.63:6579 10.10.10.64:6479 --cluster-slave --cluster-master-id be7a6314fa34e2e3957d68368c99cb2c24c22b01 -a myPassword 
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
>>> Adding node 10.10.10.63:6579 to cluster 10.10.10.64:6479
>>> Performing Cluster Check (using node 10.10.10.64:6479)
M: 05f481827fc5d3a0362a3d7a06eb1fbd52c73303 10.10.10.64:6479
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
M: be7a6314fa34e2e3957d68368c99cb2c24c22b01 10.10.10.63:6479
   slots: (0 slots) master
M: 903a7fc4ee63c8cde8276fa5d7be5d4a258e2005 10.10.10.65:6479
   slots:[10923-16383] (5461 slots) master
M: da133856054e4e160d962782389bc461af21a704 10.10.10.64:6579
   slots:[0-5460] (5461 slots) master
S: 9b11853186c0c4d48b4210ddab042755ba68fd06 10.10.10.65:6579
   slots: (0 slots) slave
   replicates 05f481827fc5d3a0362a3d7a06eb1fbd52c73303
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 10.10.10.63:6579 to make it join the cluster.
Waiting for the cluster to join
>>> Configure node as replica of 10.10.10.63:6479.
[OK] New node added correctly.

6. Check the new slave node

[root@cespprod1 redis-6.2.11]# redis-cli --cluster check 10.10.10.64:6479 -a myPassword
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
10.10.10.64:6479 (05f48182...) -> 27 keys | 5462 slots | 1 slaves.
10.10.10.63:6479 (be7a6314...) -> 0 keys | 0 slots | 1 slaves.
10.10.10.65:6479 (903a7fc4...) -> 30 keys | 5461 slots | 0 slaves.
10.10.10.64:6579 (da133856...) -> 25 keys | 5461 slots | 0 slaves.
[OK] 82 keys in 4 masters.
0.01 keys per slot on average.
>>> Performing Cluster Check (using node 10.10.10.64:6479)
M: 05f481827fc5d3a0362a3d7a06eb1fbd52c73303 10.10.10.64:6479
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
M: be7a6314fa34e2e3957d68368c99cb2c24c22b01 10.10.10.63:6479
   slots: (0 slots) master
   1 additional replica(s)
M: 903a7fc4ee63c8cde8276fa5d7be5d4a258e2005 10.10.10.65:6479
   slots:[10923-16383] (5461 slots) master
M: da133856054e4e160d962782389bc461af21a704 10.10.10.64:6579
   slots:[0-5460] (5461 slots) master
S: 9b11853186c0c4d48b4210ddab042755ba68fd06 10.10.10.65:6579
   slots: (0 slots) slave
   replicates 05f481827fc5d3a0362a3d7a06eb1fbd52c73303
S: ec2a04d625c62cee5efd7c6de38873ea4b043702 10.10.10.63:6579
   slots: (0 slots) slave
   replicates be7a6314fa34e2e3957d68368c99cb2c24c22b01
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
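As noted in step 5, a replica attached to the wrong master can also be removed entirely and re-added. A minimal sketch using del-node (the second argument is the replica's own node ID, taken from the check output above):

# del-node makes the cluster forget the node and then shuts that instance down,
# so it must be started again before it can be re-added
redis-cli --cluster del-node 10.10.10.64:6479 ec2a04d625c62cee5efd7c6de38873ea4b043702 -a myPassword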

7. Change a slave's master

Method 1: replicaof / slaveof

Note: these commands configure standalone replication; a cluster-enabled node rejects them (REPLICAOF/SLAVEOF is not allowed in cluster mode), so for cluster nodes use method 2.

[root@ ~]# redis-cli -h 127.0.0.1 -p 6579 -a myPassword

127.0.0.1:6579> replicaof 10.10.10.64 6479   (newer versions)

127.0.0.1:6579> slaveof 10.10.10.64 6479   (older versions)

Method 2: CLUSTER REPLICATE

Change the current slave's master:

CLUSTER REPLICATE <master-node-id>

[root@ ~]# redis-cli -h 127.0.0.1 -p 6579 -a myPassword
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
127.0.0.1:6579> CLUSTER REPLICATE 903a7fc4ee63c8cde8276fa5d7be5d4a258e2005
OK
127.0.0.1:6579> CLUSTER SAVECONFIG
OK

8. Verify the changed slave

[root@cespprod1 redis-6.2.11]# redis-cli --cluster check 10.10.10.64:6479 -a myPassword
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
10.10.10.64:6479 (05f48182...) -> 28 keys | 5462 slots | 1 slaves.
10.10.10.63:6479 (be7a6314...) -> 0 keys | 0 slots | 0 slaves.
10.10.10.65:6479 (903a7fc4...) -> 31 keys | 5461 slots | 1 slaves.
10.10.10.64:6579 (da133856...) -> 25 keys | 5461 slots | 0 slaves.
[OK] 84 keys in 4 masters.
0.01 keys per slot on average.
>>> Performing Cluster Check (using node 10.10.10.64:6479)
M: 05f481827fc5d3a0362a3d7a06eb1fbd52c73303 10.10.10.64:6479
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
M: be7a6314fa34e2e3957d68368c99cb2c24c22b01 10.10.10.63:6479
   slots: (0 slots) master
M: 903a7fc4ee63c8cde8276fa5d7be5d4a258e2005 10.10.10.65:6479
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
M: da133856054e4e160d962782389bc461af21a704 10.10.10.64:6579
   slots:[0-5460] (5461 slots) master
S: 9b11853186c0c4d48b4210ddab042755ba68fd06 10.10.10.65:6579
   slots: (0 slots) slave
   replicates 05f481827fc5d3a0362a3d7a06eb1fbd52c73303
S: ec2a04d625c62cee5efd7c6de38873ea4b043702 10.10.10.63:6579
   slots: (0 slots) slave
   replicates 903a7fc4ee63c8cde8276fa5d7be5d4a258e2005
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

IV. Migrate slots

1. Migrate slots

Notes on the reshard options:

- host: use the LAN IP rather than 127.0.0.1 (127.0.0.1 tends to fail).
- --cluster-slots: the number of slots to migrate, which can be read from check; if a migration partially fails and is rerun, this number will have changed.
- --cluster-timeout: timeout in milliseconds; set it explicitly, otherwise the default of 60 s easily leads to failures.
- If the migration fails midway, the master whose slots are being moved away may be demoted from master to slave, and its former slave may be promoted to master; this can be adjusted afterwards.

redis-cli --cluster reshard 10.10.10.64:6479 --cluster-from da133856054e4e160d962782389bc461af21a704 --cluster-to be7a6314fa34e2e3957d68368c99cb2c24c22b01 --cluster-slots 5461 --cluster-yes --cluster-timeout 3600000 -a myPassword
Moving slot 5458 from 10.10.10.64:6579 to 10.10.10.63:6479: 
Moving slot 5459 from 10.10.10.64:6579 to 10.10.10.63:6479: 
Moving slot 5460 from 10.10.10.64:6579 to 10.10.10.63:6479: 
Node 10.10.10.64:6579 replied with error:
ERR Please use SETSLOT only with masters.

This error can be ignored: once all of its slots have been migrated away, the source node is demoted from master to slave, so the final SETSLOT sent to it fails.
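If a reshard is interrupted partway and a later check complains about open slots, the fix subcommand can usually finish or roll back the pending migrations. A sketch, only needed when check actually reports a problem:

redis-cli --cluster fix 10.10.10.64:6479 -a myPassword
redis-cli --cluster check 10.10.10.64:6479 -a myPassword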

2. Check the cluster

[root@cespprod1 redis-6.2.11]# redis-cli --cluster check 10.10.10.64:6479 -a myPassword
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
10.10.10.64:6479 (05f48182...) -> 28 keys | 5462 slots | 1 slaves.
10.10.10.63:6479 (be7a6314...) -> 25 keys | 5461 slots | 1 slaves.
10.10.10.65:6479 (903a7fc4...) -> 30 keys | 5461 slots | 1 slaves.
[OK] 83 keys in 3 masters.
0.01 keys per slot on average.
>>> Performing Cluster Check (using node 10.10.10.64:6479)
M: 05f481827fc5d3a0362a3d7a06eb1fbd52c73303 10.10.10.64:6479
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
M: be7a6314fa34e2e3957d68368c99cb2c24c22b01 10.10.10.63:6479
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: 903a7fc4ee63c8cde8276fa5d7be5d4a258e2005 10.10.10.65:6479
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: da133856054e4e160d962782389bc461af21a704 10.10.10.64:6579
   slots: (0 slots) slave
   replicates be7a6314fa34e2e3957d68368c99cb2c24c22b01
S: 9b11853186c0c4d48b4210ddab042755ba68fd06 10.10.10.65:6579
   slots: (0 slots) slave
   replicates 05f481827fc5d3a0362a3d7a06eb1fbd52c73303
S: ec2a04d625c62cee5efd7c6de38873ea4b043702 10.10.10.63:6579
   slots: (0 slots) slave
   replicates 903a7fc4ee63c8cde8276fa5d7be5d4a258e2005
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
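As a final smoke test (testkey is an arbitrary example name), writing and reading a key with -c makes redis-cli follow MOVED redirections to whichever master owns the key's slot, confirming that routing still works after the migration:

redis-cli -c -h 10.10.10.63 -p 6479 -a myPassword set testkey hello
redis-cli -c -h 10.10.10.63 -p 6479 -a myPassword get testkey
redis-cli -c -h 10.10.10.63 -p 6479 -a myPassword cluster keyslot testkey   # shows which slot the key hashes to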

