Playing it purely from the terminal ------ Redis Cluster mode: cluster configuration, cluster management, and cluster troubleshooting

Seven CentOS 7 systems are configured as virtual machines in VMware® Workstation 12 Pro,

http://mirrors.163.com/centos/7/isos/x86_64/CentOS-7-x86_64-Minimal-1804.iso

Give each node 512 MB to cache data and another 512 MB of headroom for commands such as BGSAVE, so each Redis instance needs roughly 1 GB of physical memory.
Add the memory CentOS 7 needs just to boot, the memory used by the other system services, and the memory taken by the file handles the system allocates for concurrent connections,
and you can roughly work out how much physical RAM you need. Redis really is a memory hog.

If you only have 16 GB of physical memory, consider installing Debian instead, since it is lighter on RAM; give each Redis node 256 MB for caching data, and budget at least about 1 GB of RAM per VM for the operating system itself.

My home desktop is fairly beefy, with 32 GB of RAM, so a cluster like this fits comfortably. For developing a genuinely large system, though, 32 GB is nowhere near enough to play with; 128 GB might just do, and that means spending serious money on a server to keep at home.

hostname    ip address      node(M/S)      slots
s11         192.168.10.11   Master Node    5960≤slot≤10921
s12         192.168.10.12   Master Node    11423≤slot≤16383
s13         192.168.10.13   Master Node    0≤slot≤5959  10922≤slot≤11422
s14         192.168.10.14   Slave Node        -
s15         192.168.10.15   Slave Node        -
s16         192.168.10.16   Slave Node        -
s17         192.168.10.17   future expansion master   TBD

Run the following command on each of the seven CentOS 7 systems above to install Redis

yum install -y redis
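
If the seven VMs are reachable over ssh from one workstation, the install can be scripted as well. This is only a sketch; it assumes root ssh access to the IPs listed in the table above:

#!/bin/bash
# install Redis on every node in one pass
for ip in 192.168.10.{11..17}; do
  ssh root@$ip 'yum install -y redis'
done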


[root@s11 ~]# vi /etc/redis.conf
#======================================#
port 6379 # default port
cluster-enabled yes # enable cluster mode
cluster-config-file nodes-6379.conf # this node config file is created by the service itself at startup
cluster-node-timeout 15000
appendonly yes
bind 192.168.10.11
#======================================#

[root@s12 ~]# vi /etc/redis.conf
#======================================#
port 6379 # default port
cluster-enabled yes # enable cluster mode
cluster-config-file nodes-6379.conf # this node config file is created by the service itself at startup
cluster-node-timeout 15000
appendonly yes
bind 192.168.10.12
#======================================#


[root@s13 ~]# vi /etc/redis.conf
#======================================#
port 6379 # default port
cluster-enabled yes # enable cluster mode
cluster-config-file nodes-6379.conf # this node config file is created by the service itself at startup
cluster-node-timeout 15000
appendonly yes
bind 192.168.10.13
#======================================#


[root@s14 ~]# vi /etc/redis.conf
#======================================#
port 6379
daemonize no
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 15000
appendonly yes
bind 192.168.10.14
#======================================#

[root@s15 ~]# vi /etc/redis.conf
#======================================#
port 6379
daemonize no
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 15000
appendonly yes
bind 192.168.10.15
#======================================#


[root@s16 ~]# vi /etc/redis.conf
#======================================#
port 6379
daemonize no
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 15000
appendonly yes
bind 192.168.10.16
#======================================#


[root@s17 ~]# vi /etc/redis.conf
#======================================#
port 6379
daemonize no
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 15000
appendonly yes
bind 192.168.10.17
#======================================#




Theory you must understand:
How does the hashing scheme decide which Redis master a key-value pair is read from or written to?
First the key is hashed, then the hash is mapped to a slot number; each slot maps many-to-one onto an IP address in the cluster configuration.
That makes it easy to work out which Redis master a key-value pair should be stored on, in other words which server IP holds that data,
so before reading or writing a key, the connect call in whatever programming language you use already knows which master's IP to target.
A single slot can hold many key-value pairs.
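A quick way to see this mapping for yourself, once the cluster built below is up, is CLUSTER KEYSLOT. This is just an illustrative sketch: the key name user:1001 is made up, and the IP is one of the masters configured in this article.

# Which slot does a key hash to? (CRC16(key) mod 16384)
redis-cli -c -h 192.168.10.11 -p 6379 cluster keyslot user:1001
# Which master owns that slot? Find the slot number in the ranges printed here:
redis-cli -c -h 192.168.10.11 -p 6379 cluster slots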

Additional points:
Because the Redis slave nodes own no slots, a slave does not serve reads or writes by default;
as long as its master is healthy a slave does nothing at all, so just think of the slaves as sleeping on the job.

A Cluster-mode Redis cluster is not a read/write-splitting setup; it is a different thing from a master-slave (read/write-splitting) Redis cluster.
If you cannot get these concepts straight, you will stay a rookie forever and never become a top-tier player.


Slot ranges are assigned on the three Redis masters only; slots are never assigned on a Redis slave (the terminal command for assigning slots simply will not take effect on a slave ------ the cluster does not let you play it that way)

[root@s11 ~]# systemctl restart redis
[root@s12 ~]# systemctl restart redis
[root@s13 ~]# systemctl restart redis
[root@s14 ~]# systemctl restart redis
[root@s15 ~]# systemctl restart redis
[root@s16 ~]# systemctl restart redis
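
Before going further it is worth confirming that every instance really came up in cluster mode. A quick loop does it; this is a sketch that assumes bash and the six IPs used above, reachable from the host you run it on:

#!/bin/bash
# print the cluster-enabled setting of every node
for ip in 192.168.10.{11..16}; do
  echo -n "$ip cluster-enabled: "
  redis-cli -h $ip -p 6379 config get cluster-enabled | tail -n 1
done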

[root@s11 ~]# cat /var/lib/redis/nodes-6379.conf
0c1a6802a1e78f1da4c2da5301cffa18cbb91aec :0 myself,master - 0 0 0 connected
vars currentEpoch 0 lastVoteEpoch 0

[root@s12 ~]# cat /var/lib/redis/nodes-6379.conf

421257b37276bfff7a0f6cfab149083cf9472934 :0 myself,master - 0 0 0 connected
vars currentEpoch 0 lastVoteEpoch 0
[root@s13 ~]# cat /var/lib/redis/nodes-6379.conf
e3dfbeb93bc239d87bd1e04e13ee0a158df2b584 :0 myself,master - 0 0 0 connected
vars currentEpoch 0 lastVoteEpoch 0

[root@s14 ~]# cat /var/lib/redis/nodes-6379.conf

7ca9a4a226e0fd6a3e2ce9b51767168b2d3426dc :0 myself,master - 0 0 0 connected
vars currentEpoch 0 lastVoteEpoch 0

[root@s15 ~]# cat /var/lib/redis/nodes-6379.conf

3522dbcff1c5fad72d412cce8a78ce9542df7ece :0 myself,master - 0 0 0 connected
vars currentEpoch 0 lastVoteEpoch 0

[root@s16 ~]# cat /var/lib/redis/nodes-6379.conf

f07b64dbad0911690d755ee09ab55815257b71a8 :0 myself,master - 0 0 0 connected
vars currentEpoch 0 lastVoteEpoch 0
[root@s11 ~]# redis-cli -c -h 192.168.10.11 -p 6379
192.168.10.11:6379> cluster info
cluster_state:fail
cluster_slots_assigned:0
cluster_slots_ok:0
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:1
cluster_size:0
cluster_current_epoch:0
cluster_my_epoch:0
cluster_stats_messages_sent:0
cluster_stats_messages_received:0

192.168.10.11:6379> cluster nodes

0c1a6802a1e78f1da4c2da5301cffa18cbb91aec :6379 myself,master - 0 0 0 connected
192.168.10.11:6379> cluster slots
(empty list or set)


[root@s12 ~]# redis-cli -c -h 192.168.10.12 -p 6379
192.168.10.12:6379> cluster info

cluster_state:fail
cluster_slots_assigned:0
cluster_slots_ok:0
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:1
cluster_size:0
cluster_current_epoch:0
cluster_my_epoch:0
cluster_stats_messages_sent:0
cluster_stats_messages_received:0
192.168.10.12:6379> cluster nodes
421257b37276bfff7a0f6cfab149083cf9472934 :6379 myself,master - 0 0 0 connected
192.168.10.12:6379> cluster slots
(empty list or set)

[root@s13 ~]# redis-cli -c -h 192.168.10.13 -p 6379
192.168.10.13:6379> cluster info

cluster_state:fail
cluster_slots_assigned:0
cluster_slots_ok:0
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:1
cluster_size:0
cluster_current_epoch:0
cluster_my_epoch:0
cluster_stats_messages_sent:0
cluster_stats_messages_received:0
192.168.10.13:6379> cluster nodes
e3dfbeb93bc239d87bd1e04e13ee0a158df2b584 :6379 myself,master - 0 0 0 connected
192.168.10.13:6379> cluster slots
(empty list or set)


=====================================================

[root@s14 ~]# redis-cli -c -h 192.168.10.14 -p 6379
192.168.10.14:6379> cluster info

cluster_state:fail
cluster_slots_assigned:0
cluster_slots_ok:0
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:1
cluster_size:0
cluster_current_epoch:0
cluster_my_epoch:0
cluster_stats_messages_sent:0
cluster_stats_messages_received:0
192.168.10.14:6379> cluster nodes
7ca9a4a226e0fd6a3e2ce9b51767168b2d3426dc :6379 myself,master - 0 0 0 connected
192.168.10.14:6379> cluster slots
(empty list or set)


[root@s15 ~]# redis-cli -c -h 192.168.10.15 -p 6379
192.168.10.15:6379> cluster info

cluster_state:fail
cluster_slots_assigned:0
cluster_slots_ok:0
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:1
cluster_size:0
cluster_current_epoch:0
cluster_my_epoch:0
cluster_stats_messages_sent:0
cluster_stats_messages_received:0
192.168.10.15:6379> cluster nodes
3522dbcff1c5fad72d412cce8a78ce9542df7ece :6379 myself,master - 0 0 0 connected
192.168.10.15:6379> cluster slots
(empty list or set)

[root@s16 ~]# redis-cli -c -h 192.168.10.16 -p 6379
192.168.10.16:6379> cluster info

cluster_state:fail
cluster_slots_assigned:0
cluster_slots_ok:0
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:1
cluster_size:0
cluster_current_epoch:0
cluster_my_epoch:0
cluster_stats_messages_sent:0
cluster_stats_messages_received:0
192.168.10.16:6379> cluster nodes
f07b64dbad0911690d755ee09ab55815257b71a8 :6379 myself,master - 0 0 0 connected
192.168.10.16:6379> cluster slots
(empty list or set)

The difference between command 1 and command 2, explained:

Command 1)
[root@s11 ~]# cat /var/lib/redis/nodes-6379.conf
0c1a6802a1e78f1da4c2da5301cffa18cbb91aec :0 myself,master - 0 0 0 connected
vars currentEpoch 0 lastVoteEpoch 0

Command 2)
192.168.10.11:6379> cluster nodes
0c1a6802a1e78f1da4c2da5301cffa18cbb91aec :6379 myself,master - 0 0 0 connected


Command 1 reads its content from disk, while command 2 reads it from memory.

The moment the two disagree (ignoring the line vars currentEpoch 0 lastVoteEpoch 0) and you carry on assuming they are the same, I can tell you with certainty that your cluster is already sliding into an unhealthy state.
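
A rough way to compare the two views without eyeballing every field is sketched below; it only compares node id, flags and replicated master, because ping/pong timestamps and epochs legitimately differ between the two snapshots:

# in-memory view vs on-disk view, reduced to id / flags / master
redis-cli -c -h 192.168.10.11 -p 6379 cluster nodes | awk '{print $1, $3, $4}' | sort > /tmp/mem.view
grep -v '^vars' /var/lib/redis/nodes-6379.conf | awk '{print $1, $3, $4}' | sort > /tmp/disk.view
diff -u /tmp/disk.view /tmp/mem.view && echo "node ids and roles agree"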


Assign slot ranges to the three Redis masters

[root@s11 ~]# cat > /tmp/addslots.sh
#!/bin/bash
for ((i = 5960;i < 10922;i++))
do
  echo "redis-cli -c -h 192.168.10.11 -p 6379 cluster addslots $i"
  redis-cli -c -h 192.168.10.11 -p 6379 cluster addslots $i
done
[root@s11 ~]# bash /tmp/addslots.sh
[root@s11 ~]# rm -f /tmp/addslots.sh

[root@s12 ~]# cat > /tmp/addslots.sh
#!/bin/bash
for ((i = 11423;i < 16384;i++))
do
  echo "redis-cli -c -h 192.168.10.12 -p 6379 cluster addslots $i"
  redis-cli -c -h 192.168.10.12 -p 6379 cluster addslots $i
done
[root@s12 ~]# bash /tmp/addslots.sh
[root@s12 ~]# rm -f /tmp/addslots.sh


[root@s13 ~]# cat > /tmp/addslots.sh
#!/bin/bash
for ((i = 0;i < 5960;i++))
do
  echo "redis-cli -c -h 192.168.10.13 -p 6379 cluster addslots $i"
  redis-cli -c -h 192.168.10.13 -p 6379 cluster addslots $i
done
for ((j = 10922;j < 11423;j++))
do
  echo "redis-cli -c -h 192.168.10.13 -p 6379 cluster addslots $j"
  redis-cli -c -h 192.168.10.13 -p 6379 cluster addslots $j
done
[root@s13 ~]# bash /tmp/addslots.sh

[root@s13 ~]# rm -f /tmp/addslots.sh
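
The one-slot-at-a-time loops above are easy to follow, but CLUSTER ADDSLOTS accepts a whole list of slots, so each node can also be covered with a single call. A sketch with the same ranges (the argument list is long, which bash handles fine here):

redis-cli -c -h 192.168.10.11 -p 6379 cluster addslots $(seq 5960 10921)
redis-cli -c -h 192.168.10.12 -p 6379 cluster addslots $(seq 11423 16383)
redis-cli -c -h 192.168.10.13 -p 6379 cluster addslots $(seq 0 5959) $(seq 10922 11422)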

                  Check the configuration of the 3 Master Nodes (pay attention to the slot ranges)
[root@s11 ~]# redis-cli -c -h 192.168.10.11 -p 6379 cluster nodes
a387b85af0f849a08dfb4375ec91690d956e52d6 :6379 myself,master - 0 0 0 connected 5960-10921
[root@s11 ~]# redis-cli -c -h 192.168.10.11 -p 6379 cluster slots
1) 1) (integer) 5960
   2) (integer) 10921
   3) 1) ""
      2) (integer) 6379

[root@s12 ~]# redis-cli -c -h 192.168.10.12 -p 6379 cluster nodes
4acc5977a2a569ff8ed701a186e3c5ba9a6029b9 :6379 myself,master - 0 0 0 connected 11423-16383
[root@s12 ~]# redis-cli -c -h 192.168.10.12 -p 6379 cluster slots
1) 1) (integer) 11423
   2) (integer) 16383
   3) 1) ""
      2) (integer) 6379

[root@s13 ~]# redis-cli -c -h 192.168.10.13 -p 6379 cluster nodes
e3dfbeb93bc239d87bd1e04e13ee0a158df2b584 :6379 myself,master - 0 0 0 connected 0-5959 10922-11422
[root@s13 ~]# redis-cli -c -h 192.168.10.13 -p 6379 cluster slots
1) 1) (integer) 0
   2) (integer) 5959
   3) 1) ""
      2) (integer) 6379
2) 1) (integer) 10922
   2) (integer) 11422
   3) 1) ""
      2) (integer) 6379


                    Check the configuration of the 3 Slave Nodes (note that their slot list is always empty)
[root@s14 ~]# redis-cli -c -h 192.168.10.14 -p 6379 cluster nodes

7ca9a4a226e0fd6a3e2ce9b51767168b2d3426dc :6379 myself,master - 0 0 0 connected
[root@s14 ~]# redis-cli -c -h 192.168.10.14 -p 6379 cluster slots
(empty list or set)
[root@s15 ~]# redis-cli -c -h 192.168.10.15 -p 6379 cluster nodes
3522dbcff1c5fad72d412cce8a78ce9542df7ece :6379 myself,master - 0 0 0 connected
[root@s15 ~]# redis-cli -c -h 192.168.10.15 -p 6379 cluster slots
(empty list or set)
[root@s16 ~]# redis-cli -c -h 192.168.10.16 -p 6379 cluster nodes
f07b64dbad0911690d755ee09ab55815257b71a8 :6379 myself,master - 0 0 0 connected
[root@s16 ~]# redis-cli -c -h 192.168.10.16 -p 6379 cluster slots
(empty list or set)


Make 192.168.10.11 the 1st Redis master of the cluster and 192.168.10.14 its Redis slave.
Run the following terminal commands on the slave, 192.168.10.14:

[root@s14 ~]# redis-cli -c -h 192.168.10.14 -p 6379
192.168.10.14:6379> cluster meet 192.168.10.11 6379
192.168.10.14:6379> cluster nodes
192.168.10.14:6379> cluster slots
192.168.10.14:6379> cluster replicate a387b85af0f849a08dfb4375ec91690d956e52d6
192.168.10.14:6379> cluster info

[root@s14 ~]# redis-cli -c -h 192.168.10.14 -p 6379 
192.168.10.14:6379> cluster replicate a387b85af0f849a08dfb4375ec91690d956e52d6
(error) ERR Unknown node a387b85af0f849a08dfb4375ec91690d956e52d6  // cluster meet must be run before cluster replicate
192.168.10.14:6379> cluster meet 192.168.10.11 6379
OK
192.168.10.14:6379> cluster nodes
7ca9a4a226e0fd6a3e2ce9b51767168b2d3426dc 192.168.10.14:6379 myself,master - 0 0 1 connected
a387b85af0f849a08dfb4375ec91690d956e52d6 192.168.10.11:6379 master - 0 1430830481350 0 connected 5960-10921
192.168.10.14:6379> cluster slots
1) 1) (integer) 5960
   2) (integer) 10921
   3) 1) "192.168.10.11"
	  2) (integer) 6379
192.168.10.14:6379> cluster replicate a387b85af0f849a08dfb4375ec91690d956e52d6
OK
192.168.10.14:6379> cluster info
cluster_state:fail
cluster_slots_assigned:4962
cluster_slots_ok:4962
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:2
cluster_size:1
cluster_current_epoch:1
cluster_my_epoch:0
cluster_stats_messages_sent:1006
cluster_stats_messages_received:1006
192.168.10.14:6379> 
192.168.10.14:6379> cluster nodes
7ca9a4a226e0fd6a3e2ce9b51767168b2d3426dc 192.168.10.14:6379 myself,slave a387b85af0f849a08dfb4375ec91690d956e52d6 0 0 1 connected
a387b85af0f849a08dfb4375ec91690d956e52d6 192.168.10.11:6379 master - 0 1430832826714 0 connected 5960-10921
192.168.10.14:6379> cluster slots
1) 1) (integer) 5960
   2) (integer) 10921
   3) 1) "192.168.10.11"
	  2) (integer) 6379
   4) 1) "192.168.10.14"
	  2) (integer) 6379
192.168.10.14:6379> 
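
For reference, the same pairing can also be scripted non-interactively. This is only a sketch: the node id is the master's id exactly as printed by cluster nodes on 192.168.10.11, and the sleep simply gives the gossip a moment to propagate before replicate is issued.

redis-cli -h 192.168.10.14 -p 6379 cluster meet 192.168.10.11 6379
sleep 1
redis-cli -h 192.168.10.14 -p 6379 cluster replicate a387b85af0f849a08dfb4375ec91690d956e52d6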


Make 192.168.10.12 the 2nd Redis master of the cluster and 192.168.10.15 its Redis slave.
Run the following terminal commands on the slave, 192.168.10.15:

[root@s15 ~]# redis-cli -c -h 192.168.10.15 -p 6379
192.168.10.15:6379> cluster meet 192.168.10.12 6379
192.168.10.15:6379> cluster nodes
192.168.10.15:6379> cluster slots
192.168.10.15:6379> cluster replicate 4acc5977a2a569ff8ed701a186e3c5ba9a6029b9
192.168.10.15:6379> cluster info

[root@s15 ~]# redis-cli -c -h 192.168.10.15 -p 6379
192.168.10.15:6379> cluster meet 192.168.10.12 6379
OK
192.168.10.15:6379> cluster nodes
3522dbcff1c5fad72d412cce8a78ce9542df7ece 192.168.10.15:6379 myself,master - 0 0 1 connected
4acc5977a2a569ff8ed701a186e3c5ba9a6029b9 192.168.10.12:6379 master - 0 1430832147649 0 connected 11423-16383
192.168.10.15:6379> cluster slots
1) 1) (integer) 11423
   2) (integer) 16383
   3) 1) "192.168.10.12"
	  2) (integer) 6379
192.168.10.15:6379> cluster replicate 4acc5977a2a569ff8ed701a186e3c5ba9a6029b9
OK
192.168.10.15:6379> cluster info
cluster_state:fail
cluster_slots_assigned:4961
cluster_slots_ok:4961
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:2
cluster_size:1
cluster_current_epoch:1
cluster_my_epoch:0
cluster_stats_messages_sent:212
cluster_stats_messages_received:212
192.168.10.15:6379> 
192.168.10.15:6379> cluster nodes
3522dbcff1c5fad72d412cce8a78ce9542df7ece 192.168.10.15:6379 myself,slave 4acc5977a2a569ff8ed701a186e3c5ba9a6029b9 0 0 1 connected
4acc5977a2a569ff8ed701a186e3c5ba9a6029b9 192.168.10.12:6379 master - 0 1430832978103 0 connected 11423-16383
192.168.10.15:6379> cluster slots
1) 1) (integer) 11423
   2) (integer) 16383
   3) 1) "192.168.10.12"
	  2) (integer) 6379
   4) 1) "192.168.10.15"
	  2) (integer) 6379
192.168.10.15:6379> 

Make 192.168.10.13 the 3rd Redis master of the cluster and 192.168.10.16 its Redis slave.
Run the following terminal commands on the slave, 192.168.10.16:

[root@s16 ~]# redis-cli -c -h 192.168.10.16 -p 6379
192.168.10.16:6379> cluster meet 192.168.10.13 6379
192.168.10.16:6379> cluster nodes
192.168.10.16:6379> cluster slots
192.168.10.16:6379> cluster replicate e3dfbeb93bc239d87bd1e04e13ee0a158df2b584
192.168.10.16:6379> cluster info

[root@s16 ~]# redis-cli -c -h 192.168.10.16 -p 6379
192.168.10.16:6379> cluster meet 192.168.10.13 6379
OK
192.168.10.16:6379> cluster nodes
f07b64dbad0911690d755ee09ab55815257b71a8 192.168.10.16:6379 myself,master - 0 0 0 connected
e3dfbeb93bc239d87bd1e04e13ee0a158df2b584 192.168.10.13:6379 master - 0 1430832542383 1 connected 0-5959 10922-11422
192.168.10.16:6379> cluster slots
1) 1) (integer) 0
   2) (integer) 5959
   3) 1) "192.168.10.13"
	  2) (integer) 6379
2) 1) (integer) 10922
   2) (integer) 11422
   3) 1) "192.168.10.13"
	  2) (integer) 6379
192.168.10.16:6379> cluster replicate e3dfbeb93bc239d87bd1e04e13ee0a158df2b584
OK
192.168.10.16:6379> cluster info
cluster_state:fail
cluster_slots_assigned:6461
cluster_slots_ok:6461
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:2
cluster_size:1
cluster_current_epoch:1
cluster_my_epoch:1
cluster_stats_messages_sent:135
cluster_stats_messages_received:135
192.168.10.16:6379>
192.168.10.16:6379> cluster nodes
f07b64dbad0911690d755ee09ab55815257b71a8 192.168.10.16:6379 myself,slave e3dfbeb93bc239d87bd1e04e13ee0a158df2b584 0 0 0 connected
e3dfbeb93bc239d87bd1e04e13ee0a158df2b584 192.168.10.13:6379 master - 0 1430833100945 1 connected 0-5959 10922-11422
192.168.10.16:6379> cluster slots
1) 1) (integer) 0
   2) (integer) 5959
   3) 1) "192.168.10.13"
	  2) (integer) 6379
   4) 1) "192.168.10.16"
	  2) (integer) 6379
2) 1) (integer) 10922
   2) (integer) 11422
   3) 1) "192.168.10.13"
	  2) (integer) 6379
   4) 1) "192.168.10.16"
	  2) (integer) 6379
192.168.10.16:6379> 


Now run cluster nodes on each Redis master and compare carefully what the master 192.168.10.11 and its slave 192.168.10.14 print for this master/slave pair.

However the output is worded, the two nodes never disagree about who is the master and who is the slave.

[root@s11 ~]# redis-cli -c -h 192.168.10.11 -p 6379
192.168.10.11:6379> cluster nodes
a387b85af0f849a08dfb4375ec91690d956e52d6 192.168.10.11:6379 myself,master - 0 0 0 connected 5960-10921
7ca9a4a226e0fd6a3e2ce9b51767168b2d3426dc 192.168.10.14:6379 slave a387b85af0f849a08dfb4375ec91690d956e52d6 0 1430833511312 1 connected
192.168.10.11:6379> cluster slots
1) 1) (integer) 5960
   2) (integer) 10921
   3) 1) "192.168.10.11"
      2) (integer) 6379
   4) 1) "192.168.10.14"
      2) (integer) 6379
192.168.10.11:6379> cluster info
cluster_state:fail
cluster_slots_assigned:4962
cluster_slots_ok:4962
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:2
cluster_size:1
cluster_current_epoch:1
cluster_my_epoch:0
cluster_stats_messages_sent:5870
cluster_stats_messages_received:5870
192.168.10.11:6379>
[root@s12 ~]# redis-cli -c -h 192.168.10.12 -p 6379
192.168.10.12:6379> cluster nodes
3522dbcff1c5fad72d412cce8a78ce9542df7ece 192.168.10.15:6379 slave 4acc5977a2a569ff8ed701a186e3c5ba9a6029b9 0 1430833538605 1 connected
4acc5977a2a569ff8ed701a186e3c5ba9a6029b9 192.168.10.12:6379 myself,master - 0 0 0 connected 11423-16383
192.168.10.12:6379> cluster slots
1) 1) (integer) 11423
   2) (integer) 16383
   3) 1) "192.168.10.12"
      2) (integer) 6379
   4) 1) "192.168.10.15"
      2) (integer) 6379
192.168.10.12:6379> cluster info
cluster_state:fail
cluster_slots_assigned:4961
cluster_slots_ok:4961
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:2
cluster_size:1
cluster_current_epoch:1
cluster_my_epoch:0
cluster_stats_messages_sent:2746
cluster_stats_messages_received:2746
192.168.10.12:6379> 
[root@s13 ~]# redis-cli -c -h 192.168.10.13 -p 6379
192.168.10.13:6379> cluster nodes
f07b64dbad0911690d755ee09ab55815257b71a8 192.168.10.16:6379 slave e3dfbeb93bc239d87bd1e04e13ee0a158df2b584 0 1430833565875 1 connected
e3dfbeb93bc239d87bd1e04e13ee0a158df2b584 192.168.10.13:6379 myself,master - 0 0 1 connected 0-5959 10922-11422
192.168.10.13:6379> cluster slots
1) 1) (integer) 0
   2) (integer) 5959
   3) 1) "192.168.10.13"
      2) (integer) 6379
   4) 1) "192.168.10.16"
      2) (integer) 6379
2) 1) (integer) 10922
   2) (integer) 11422
   3) 1) "192.168.10.13"
      2) (integer) 6379
   4) 1) "192.168.10.16"
      2) (integer) 6379
192.168.10.13:6379> cluster info
cluster_state:fail
cluster_slots_assigned:6461
cluster_slots_ok:6461
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:2
cluster_size:1
cluster_current_epoch:1
cluster_my_epoch:1
cluster_stats_messages_sent:1979
cluster_stats_messages_received:1979
192.168.10.13:6379> 

Everything run so far has only set up the three master/slave pairs; the cluster itself is still not up.

You can see this straight away in the cluster info output:

cluster_state:fail   cluster_known_nodes:2

At this point each Redis master still does not know that the other two are also masters of this cluster,

so we introduce them to each other with the following commands:

192.168.10.12:6379> cluster meet 192.168.10.11 6379
192.168.10.13:6379> cluster meet 192.168.10.11 6379


Next on the list:
Master 1 (192.168.10.11) now knows that 192.168.10.12 is master 2 ------ but what about that master's slave?
Master 1 (192.168.10.11) now knows that 192.168.10.13 is master 3 ------ but what about that master's slave?
So on master 1 (192.168.10.11) we run the commands below to confirm that the other masters' slave information has reached 192.168.10.11 as well
(node details spread automatically through the cluster gossip once the nodes have met):

192.168.10.11:6379> cluster info
192.168.10.11:6379> cluster nodes
192.168.10.11:6379> cluster slots
192.168.10.11:6379> cluster slaves a387b85af0f849a08dfb4375ec91690d956e52d6
192.168.10.11:6379> cluster slaves 4acc5977a2a569ff8ed701a186e3c5ba9a6029b9
192.168.10.11:6379> cluster slaves e3dfbeb93bc239d87bd1e04e13ee0a158df2b584

192.168.10.11:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:2
cluster_my_epoch:0
cluster_stats_messages_sent:6689
cluster_stats_messages_received:6689
192.168.10.11:6379> cluster nodes
f07b64dbad0911690d755ee09ab55815257b71a8 192.168.10.16:6379 slave e3dfbeb93bc239d87bd1e04e13ee0a158df2b584 0 1430833984629 1 connected
a387b85af0f849a08dfb4375ec91690d956e52d6 192.168.10.11:6379 myself,master - 0 0 0 connected 5960-10921
4acc5977a2a569ff8ed701a186e3c5ba9a6029b9 192.168.10.12:6379 master - 0 1430833982579 2 connected 11423-16383
e3dfbeb93bc239d87bd1e04e13ee0a158df2b584 192.168.10.13:6379 master - 0 1430833981556 1 connected 0-5959 10922-11422
7ca9a4a226e0fd6a3e2ce9b51767168b2d3426dc 192.168.10.14:6379 slave a387b85af0f849a08dfb4375ec91690d956e52d6 0 1430833983604 1 connected
3522dbcff1c5fad72d412cce8a78ce9542df7ece 192.168.10.15:6379 slave 4acc5977a2a569ff8ed701a186e3c5ba9a6029b9 0 1430833980532 2 connected
192.168.10.11:6379> cluster slots
1) 1) (integer) 5960
   2) (integer) 10921
   3) 1) "192.168.10.11"
	  2) (integer) 6379
   4) 1) "192.168.10.14"
	  2) (integer) 6379
2) 1) (integer) 11423
   2) (integer) 16383
   3) 1) "192.168.10.12"
	  2) (integer) 6379
   4) 1) "192.168.10.15"
	  2) (integer) 6379
3) 1) (integer) 0
   2) (integer) 5959
   3) 1) "192.168.10.13"
	  2) (integer) 6379
   4) 1) "192.168.10.16"
	  2) (integer) 6379
4) 1) (integer) 10922
   2) (integer) 11422
   3) 1) "192.168.10.13"
	  2) (integer) 6379
   4) 1) "192.168.10.16"
	  2) (integer) 6379
192.168.10.11:6379>
192.168.10.11:6379> cluster slaves a387b85af0f849a08dfb4375ec91690d956e52d6
1) "7ca9a4a226e0fd6a3e2ce9b51767168b2d3426dc 192.168.10.14:6379 slave a387b85af0f849a08dfb4375ec91690d956e52d6 0 1430837747863 1 connected"
192.168.10.11:6379> cluster slaves 4acc5977a2a569ff8ed701a186e3c5ba9a6029b9
1) "3522dbcff1c5fad72d412cce8a78ce9542df7ece 192.168.10.15:6379 slave 4acc5977a2a569ff8ed701a186e3c5ba9a6029b9 0 1430837758083 2 connected"
192.168.10.11:6379> cluster slaves e3dfbeb93bc239d87bd1e04e13ee0a158df2b584
1) "f07b64dbad0911690d755ee09ab55815257b71a8 192.168.10.16:6379 slave e3dfbeb93bc239d87bd1e04e13ee0a158df2b584 0 1430837766267 1 connected"
192.168.10.11:6379> cluster slaves 7ca9a4a226e0fd6a3e2ce9b51767168b2d3426dc
(error) ERR The specified node is not a master    // reported because this node_id belongs to a slave, not a master
192.168.10.11:6379> cluster slaves 3522dbcff1c5fad72d412cce8a78ce9542df7ece
(error) ERR The specified node is not a master    // reported because this node_id belongs to a slave, not a master
192.168.10.11:6379> cluster slaves f07b64dbad0911690d755ee09ab55815257b71a8
(error) ERR The specified node is not a master    // reported because this node_id belongs to a slave, not a master
192.168.10.11:6379>
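
Rather than typing the three cluster slaves calls by hand on every node, a loop like the following prints the slaves of every master the queried node currently knows about. It is a sketch only, using 192.168.10.11 as the query target:

#!/bin/bash
# ask one node for its list of masters, then list each master's slaves
for id in $(redis-cli -h 192.168.10.11 -p 6379 cluster nodes | awk '$3 ~ /master/ {print $1}'); do
  echo "== slaves of $id =="
  redis-cli -h 192.168.10.11 -p 6379 cluster slaves $id
done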

Next on the list:
Master 2 (192.168.10.12) now knows that 192.168.10.11 is master 1 ------ but what about that master's slave?
Master 2 (192.168.10.12) now knows that 192.168.10.13 is master 3 ------ but what about that master's slave?
So on master 2 (192.168.10.12) we run commands of the following form to confirm that the other masters' slave information has reached 192.168.10.12 as well:
cluster slaves {node_id}   note: node_id here is the node id of one of the masters
 
192.168.10.12:6379> cluster info
192.168.10.12:6379> cluster nodes
192.168.10.12:6379> cluster slots
192.168.10.12:6379> cluster slaves a387b85af0f849a08dfb4375ec91690d956e52d6
192.168.10.12:6379> cluster slaves 4acc5977a2a569ff8ed701a186e3c5ba9a6029b9
192.168.10.12:6379> cluster slaves e3dfbeb93bc239d87bd1e04e13ee0a158df2b584
192.168.10.12:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:2
cluster_my_epoch:2
cluster_stats_messages_sent:6075
cluster_stats_messages_received:6075
192.168.10.12:6379> cluster nodes
3522dbcff1c5fad72d412cce8a78ce9542df7ece 192.168.10.15:6379 slave 4acc5977a2a569ff8ed701a186e3c5ba9a6029b9 0 1430835100450 2 connected
7ca9a4a226e0fd6a3e2ce9b51767168b2d3426dc 192.168.10.14:6379 slave a387b85af0f849a08dfb4375ec91690d956e52d6 0 1430835102090 0 connected
a387b85af0f849a08dfb4375ec91690d956e52d6 192.168.10.11:6379 master - 0 1430835100040 0 connected 5960-10921
f07b64dbad0911690d755ee09ab55815257b71a8 192.168.10.16:6379 slave e3dfbeb93bc239d87bd1e04e13ee0a158df2b584 0 1430835101069 1 connected
e3dfbeb93bc239d87bd1e04e13ee0a158df2b584 192.168.10.13:6379 master - 0 1430835103113 1 connected 0-5959 10922-11422
4acc5977a2a569ff8ed701a186e3c5ba9a6029b9 192.168.10.12:6379 myself,master - 0 0 2 connected 11423-16383
192.168.10.12:6379> cluster slots
1) 1) (integer) 5960
   2) (integer) 10921
   3) 1) "192.168.10.11"
	  2) (integer) 6379
   4) 1) "192.168.10.14"
	  2) (integer) 6379
2) 1) (integer) 0
   2) (integer) 5959
   3) 1) "192.168.10.13"
	  2) (integer) 6379
   4) 1) "192.168.10.16"
	  2) (integer) 6379
3) 1) (integer) 10922
   2) (integer) 11422
   3) 1) "192.168.10.13"
	  2) (integer) 6379
   4) 1) "192.168.10.16"
	  2) (integer) 6379
4) 1) (integer) 11423
   2) (integer) 16383
   3) 1) "192.168.10.12"
	  2) (integer) 6379
   4) 1) "192.168.10.15"
	  2) (integer) 6379
192.168.10.12:6379> 
192.168.10.12:6379> cluster slaves a387b85af0f849a08dfb4375ec91690d956e52d6
1) "7ca9a4a226e0fd6a3e2ce9b51767168b2d3426dc 192.168.10.14:6379 slave a387b85af0f849a08dfb4375ec91690d956e52d6 0 1430837971847 0 connected"
192.168.10.12:6379> cluster slaves 4acc5977a2a569ff8ed701a186e3c5ba9a6029b9
1) "3522dbcff1c5fad72d412cce8a78ce9542df7ece 192.168.10.15:6379 slave 4acc5977a2a569ff8ed701a186e3c5ba9a6029b9 0 1430837981053 2 connected"
192.168.10.12:6379> cluster slaves e3dfbeb93bc239d87bd1e04e13ee0a158df2b584
1) "f07b64dbad0911690d755ee09ab55815257b71a8 192.168.10.16:6379 slave e3dfbeb93bc239d87bd1e04e13ee0a158df2b584 0 1430837996394 1 connected"
192.168.10.12:6379> cluster slaves 7ca9a4a226e0fd6a3e2ce9b51767168b2d3426dc
(error) ERR The specified node is not a master   // reported because this node_id belongs to a slave, not a master
192.168.10.12:6379> cluster slaves 3522dbcff1c5fad72d412cce8a78ce9542df7ece
(error) ERR The specified node is not a master   // reported because this node_id belongs to a slave, not a master
192.168.10.12:6379> cluster slaves f07b64dbad0911690d755ee09ab55815257b71a8
(error) ERR The specified node is not a master   // reported because this node_id belongs to a slave, not a master
192.168.10.12:6379> 

Next on the list:
Master 3 (192.168.10.13) now knows that 192.168.10.11 is master 1 ------ but what about that master's slave?
Master 3 (192.168.10.13) now knows that 192.168.10.12 is master 2 ------ but what about that master's slave?
So on master 3 (192.168.10.13) we run commands of the following form to confirm that the other masters' slave information has reached 192.168.10.13 as well:
cluster slaves {node_id}   note: node_id here is the node id of one of the masters

192.168.10.13:6379> cluster info
192.168.10.13:6379> cluster nodes
192.168.10.13:6379> cluster slots
192.168.10.13:6379> cluster slaves a387b85af0f849a08dfb4375ec91690d956e52d6
192.168.10.13:6379> cluster slaves 4acc5977a2a569ff8ed701a186e3c5ba9a6029b9
192.168.10.13:6379> cluster slaves e3dfbeb93bc239d87bd1e04e13ee0a158df2b584

192.168.10.13:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:2
cluster_my_epoch:1
cluster_stats_messages_sent:5562
cluster_stats_messages_received:5562
192.168.10.13:6379> cluster nodes
7ca9a4a226e0fd6a3e2ce9b51767168b2d3426dc 192.168.10.14:6379 slave a387b85af0f849a08dfb4375ec91690d956e52d6 0 1430835355742 0 connected
a387b85af0f849a08dfb4375ec91690d956e52d6 192.168.10.11:6379 master - 0 1430835353081 0 connected 5960-10921
e3dfbeb93bc239d87bd1e04e13ee0a158df2b584 192.168.10.13:6379 myself,master - 0 0 1 connected 0-5959 10922-11422
f07b64dbad0911690d755ee09ab55815257b71a8 192.168.10.16:6379 slave e3dfbeb93bc239d87bd1e04e13ee0a158df2b584 0 1430835357789 1 connected
3522dbcff1c5fad72d412cce8a78ce9542df7ece 192.168.10.15:6379 slave 4acc5977a2a569ff8ed701a186e3c5ba9a6029b9 0 1430835358812 2 connected
4acc5977a2a569ff8ed701a186e3c5ba9a6029b9 192.168.10.12:6379 master - 0 1430835356765 2 connected 11423-16383
192.168.10.13:6379> cluster slots
1) 1) (integer) 5960
   2) (integer) 10921
   3) 1) "192.168.10.11"
	  2) (integer) 6379
   4) 1) "192.168.10.14"
	  2) (integer) 6379
2) 1) (integer) 0
   2) (integer) 5959
   3) 1) "192.168.10.13"
	  2) (integer) 6379
   4) 1) "192.168.10.16"
	  2) (integer) 6379
3) 1) (integer) 10922
   2) (integer) 11422
   3) 1) "192.168.10.13"
	  2) (integer) 6379
   4) 1) "192.168.10.16"
	  2) (integer) 6379
4) 1) (integer) 11423
   2) (integer) 16383
   3) 1) "192.168.10.12"
	  2) (integer) 6379
   4) 1) "192.168.10.15"
	  2) (integer) 6379
192.168.10.13:6379>
192.168.10.13:6379> cluster slaves a387b85af0f849a08dfb4375ec91690d956e52d6
1) "7ca9a4a226e0fd6a3e2ce9b51767168b2d3426dc 192.168.10.14:6379 slave a387b85af0f849a08dfb4375ec91690d956e52d6 0 1430838280311 0 connected"
192.168.10.13:6379> cluster slaves 4acc5977a2a569ff8ed701a186e3c5ba9a6029b9
1) "3522dbcff1c5fad72d412cce8a78ce9542df7ece 192.168.10.15:6379 slave 4acc5977a2a569ff8ed701a186e3c5ba9a6029b9 0 1430838290539 2 connected"
192.168.10.13:6379> cluster slaves e3dfbeb93bc239d87bd1e04e13ee0a158df2b584
1) "f07b64dbad0911690d755ee09ab55815257b71a8 192.168.10.16:6379 slave e3dfbeb93bc239d87bd1e04e13ee0a158df2b584 0 1430838309972 1 connected"
192.168.10.13:6379> cluster slaves 7ca9a4a226e0fd6a3e2ce9b51767168b2d3426dc
(error) ERR The specified node is not a master   // reported because this node_id belongs to a slave, not a master
192.168.10.13:6379> cluster slaves 3522dbcff1c5fad72d412cce8a78ce9542df7ece
(error) ERR The specified node is not a master   // reported because this node_id belongs to a slave, not a master
192.168.10.13:6379> cluster slaves f07b64dbad0911690d755ee09ab55815257b71a8
(error) ERR The specified node is not a master   // reported because this node_id belongs to a slave, not a master
192.168.10.13:6379> 

=========================================================


Next on the list:
Slave 1 (192.168.10.14) was introduced directly to master 1 (192.168.10.11).
It learned about master 2 (192.168.10.12), master 3 (192.168.10.13), and their slaves only through the cluster gossip.
So on slave 1 (192.168.10.14) we run commands of the following form to confirm that every master's slave information has reached 192.168.10.14 as well:
cluster slaves {node_id}   note: node_id here is the node id of one of the masters

192.168.10.14:6379> cluster info
192.168.10.14:6379> cluster nodes
192.168.10.14:6379> cluster slots
192.168.10.14:6379> cluster slaves a387b85af0f849a08dfb4375ec91690d956e52d6
192.168.10.14:6379> cluster slaves 4acc5977a2a569ff8ed701a186e3c5ba9a6029b9
192.168.10.14:6379> cluster slaves e3dfbeb93bc239d87bd1e04e13ee0a158df2b584

192.168.10.14:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:2
cluster_my_epoch:0
cluster_stats_messages_sent:10023
cluster_stats_messages_received:10023
192.168.10.14:6379> cluster nodes
3522dbcff1c5fad72d412cce8a78ce9542df7ece 192.168.10.15:6379 slave 4acc5977a2a569ff8ed701a186e3c5ba9a6029b9 0 1430835671631 2 connected
7ca9a4a226e0fd6a3e2ce9b51767168b2d3426dc 192.168.10.14:6379 myself,slave a387b85af0f849a08dfb4375ec91690d956e52d6 0 0 1 connected
4acc5977a2a569ff8ed701a186e3c5ba9a6029b9 192.168.10.12:6379 master - 0 1430835670606 2 connected 11423-16383
f07b64dbad0911690d755ee09ab55815257b71a8 192.168.10.16:6379 slave e3dfbeb93bc239d87bd1e04e13ee0a158df2b584 0 1430835668571 1 connected
e3dfbeb93bc239d87bd1e04e13ee0a158df2b584 192.168.10.13:6379 master - 0 1430835669584 1 connected 0-5959 10922-11422
a387b85af0f849a08dfb4375ec91690d956e52d6 192.168.10.11:6379 master - 0 1430835667549 0 connected 5960-10921
192.168.10.14:6379> cluster slots
1) 1) (integer) 11423
   2) (integer) 16383
   3) 1) "192.168.10.12"
	  2) (integer) 6379
   4) 1) "192.168.10.15"
	  2) (integer) 6379
2) 1) (integer) 0
   2) (integer) 5959
   3) 1) "192.168.10.13"
	  2) (integer) 6379
   4) 1) "192.168.10.16"
	  2) (integer) 6379
3) 1) (integer) 10922
   2) (integer) 11422
   3) 1) "192.168.10.13"
	  2) (integer) 6379
   4) 1) "192.168.10.16"
	  2) (integer) 6379
4) 1) (integer) 5960
   2) (integer) 10921
   3) 1) "192.168.10.11"
	  2) (integer) 6379
   4) 1) "192.168.10.14"
	  2) (integer) 6379
192.168.10.14:6379> 
192.168.10.14:6379> cluster slaves a387b85af0f849a08dfb4375ec91690d956e52d6
1) "7ca9a4a226e0fd6a3e2ce9b51767168b2d3426dc 192.168.10.14:6379 myself,slave a387b85af0f849a08dfb4375ec91690d956e52d6 0 0 1 connected"
192.168.10.14:6379> cluster slaves 4acc5977a2a569ff8ed701a186e3c5ba9a6029b9
1) "3522dbcff1c5fad72d412cce8a78ce9542df7ece 192.168.10.15:6379 slave 4acc5977a2a569ff8ed701a186e3c5ba9a6029b9 0 1430836897482 2 connected"
192.168.10.14:6379> cluster slaves e3dfbeb93bc239d87bd1e04e13ee0a158df2b584
1) "f07b64dbad0911690d755ee09ab55815257b71a8 192.168.10.16:6379 slave e3dfbeb93bc239d87bd1e04e13ee0a158df2b584 0 1430836936450 1 connected"
192.168.10.14:6379> cluster slaves 7ca9a4a226e0fd6a3e2ce9b51767168b2d3426dc
(error) ERR The specified node is not a master   // reported because this node_id belongs to a slave, not a master
192.168.10.14:6379> cluster slaves 3522dbcff1c5fad72d412cce8a78ce9542df7ece
(error) ERR The specified node is not a master   // reported because this node_id belongs to a slave, not a master
192.168.10.14:6379> cluster slaves f07b64dbad0911690d755ee09ab55815257b71a8
(error) ERR The specified node is not a master   // reported because this node_id belongs to a slave, not a master
192.168.10.14:6379> 

Next on the list:
Slave 2 (192.168.10.15) was introduced directly to master 2 (192.168.10.12).
It learned about master 1 (192.168.10.11), master 3 (192.168.10.13), and their slaves only through the cluster gossip.
So on slave 2 (192.168.10.15) we run commands of the following form to confirm that every master's slave information has reached 192.168.10.15 as well:
cluster slaves {node_id}   note: node_id here is the node id of one of the masters

192.168.10.15:6379> cluster info
192.168.10.15:6379> cluster nodes
192.168.10.15:6379> cluster slots
192.168.10.15:6379> cluster slaves a387b85af0f849a08dfb4375ec91690d956e52d6
192.168.10.15:6379> cluster slaves 4acc5977a2a569ff8ed701a186e3c5ba9a6029b9
192.168.10.15:6379> cluster slaves e3dfbeb93bc239d87bd1e04e13ee0a158df2b584
192.168.10.15:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:2
cluster_my_epoch:2
cluster_stats_messages_sent:7632
cluster_stats_messages_received:7632
192.168.10.15:6379> cluster nodes
f07b64dbad0911690d755ee09ab55815257b71a8 192.168.10.16:6379 slave e3dfbeb93bc239d87bd1e04e13ee0a158df2b584 0 1430835862526 1 connected
3522dbcff1c5fad72d412cce8a78ce9542df7ece 192.168.10.15:6379 myself,slave 4acc5977a2a569ff8ed701a186e3c5ba9a6029b9 0 0 1 connected
4acc5977a2a569ff8ed701a186e3c5ba9a6029b9 192.168.10.12:6379 master - 0 1430835863141 2 connected 11423-16383
a387b85af0f849a08dfb4375ec91690d956e52d6 192.168.10.11:6379 master - 0 1430835863551 0 connected 5960-10921
7ca9a4a226e0fd6a3e2ce9b51767168b2d3426dc 192.168.10.14:6379 slave a387b85af0f849a08dfb4375ec91690d956e52d6 0 1430835862116 0 connected
e3dfbeb93bc239d87bd1e04e13ee0a158df2b584 192.168.10.13:6379 master - 0 1430835861093 1 connected 0-5959 10922-11422
192.168.10.15:6379> cluster slots
1) 1) (integer) 11423
   2) (integer) 16383
   3) 1) "192.168.10.12"
	  2) (integer) 6379
   4) 1) "192.168.10.15"
	  2) (integer) 6379
2) 1) (integer) 5960
   2) (integer) 10921
   3) 1) "192.168.10.11"
	  2) (integer) 6379
   4) 1) "192.168.10.14"
	  2) (integer) 6379
3) 1) (integer) 0
   2) (integer) 5959
   3) 1) "192.168.10.13"
	  2) (integer) 6379
   4) 1) "192.168.10.16"
	  2) (integer) 6379
4) 1) (integer) 10922
   2) (integer) 11422
   3) 1) "192.168.10.13"
	  2) (integer) 6379
   4) 1) "192.168.10.16"
	  2) (integer) 6379
192.168.10.15:6379> 
192.168.10.15:6379> cluster slaves a387b85af0f849a08dfb4375ec91690d956e52d6
1) "7ca9a4a226e0fd6a3e2ce9b51767168b2d3426dc 192.168.10.14:6379 slave a387b85af0f849a08dfb4375ec91690d956e52d6 0 1430838627659 0 connected"
192.168.10.15:6379> cluster slaves 4acc5977a2a569ff8ed701a186e3c5ba9a6029b9
1) "3522dbcff1c5fad72d412cce8a78ce9542df7ece 192.168.10.15:6379 myself,slave 4acc5977a2a569ff8ed701a186e3c5ba9a6029b9 0 0 1 connected"
192.168.10.15:6379> cluster slaves e3dfbeb93bc239d87bd1e04e13ee0a158df2b584
1) "f07b64dbad0911690d755ee09ab55815257b71a8 192.168.10.16:6379 slave e3dfbeb93bc239d87bd1e04e13ee0a158df2b584 0 1430838663433 1 connected"
192.168.10.15:6379> cluster slaves 7ca9a4a226e0fd6a3e2ce9b51767168b2d3426dc
(error) ERR The specified node is not a master   // reported because this node_id belongs to a slave, not a master
192.168.10.15:6379> cluster slaves 3522dbcff1c5fad72d412cce8a78ce9542df7ece
(error) ERR The specified node is not a master   // reported because this node_id belongs to a slave, not a master
192.168.10.15:6379> cluster slaves f07b64dbad0911690d755ee09ab55815257b71a8
(error) ERR The specified node is not a master   // reported because this node_id belongs to a slave, not a master
192.168.10.15:6379>

Next on the list:
Slave 3 (192.168.10.16) was introduced directly to master 3 (192.168.10.13).
It learned about master 1 (192.168.10.11), master 2 (192.168.10.12), and their slaves only through the cluster gossip.
So on slave 3 (192.168.10.16) we run commands of the following form to confirm that every master's slave information has reached 192.168.10.16 as well:
cluster slaves {node_id}   note: node_id here is the node id of one of the masters

192.168.10.16:6379> cluster info
192.168.10.16:6379> cluster nodes
192.168.10.16:6379> cluster slots
192.168.10.16:6379> cluster slaves a387b85af0f849a08dfb4375ec91690d956e52d6
192.168.10.16:6379> cluster slaves 4acc5977a2a569ff8ed701a186e3c5ba9a6029b9
192.168.10.16:6379> cluster slaves e3dfbeb93bc239d87bd1e04e13ee0a158df2b584

192.168.10.16:6379> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:2
cluster_my_epoch:1
cluster_stats_messages_sent:7342
cluster_stats_messages_received:7342
192.168.10.16:6379> cluster nodes
a387b85af0f849a08dfb4375ec91690d956e52d6 192.168.10.11:6379 master - 0 1430836122465 0 connected 5960-10921
e3dfbeb93bc239d87bd1e04e13ee0a158df2b584 192.168.10.13:6379 master - 0 1430836117347 1 connected 0-5959 10922-11422
7ca9a4a226e0fd6a3e2ce9b51767168b2d3426dc 192.168.10.14:6379 slave a387b85af0f849a08dfb4375ec91690d956e52d6 0 1430836119392 0 connected
3522dbcff1c5fad72d412cce8a78ce9542df7ece 192.168.10.15:6379 slave 4acc5977a2a569ff8ed701a186e3c5ba9a6029b9 0 1430836120418 2 connected
f07b64dbad0911690d755ee09ab55815257b71a8 192.168.10.16:6379 myself,slave e3dfbeb93bc239d87bd1e04e13ee0a158df2b584 0 0 0 connected
4acc5977a2a569ff8ed701a186e3c5ba9a6029b9 192.168.10.12:6379 master - 0 1430836121440 2 connected 11423-16383
192.168.10.16:6379> cluster slots
1) 1) (integer) 5960
   2) (integer) 10921
   3) 1) "192.168.10.11"
	  2) (integer) 6379
   4) 1) "192.168.10.14"
	  2) (integer) 6379
2) 1) (integer) 0
   2) (integer) 5959
   3) 1) "192.168.10.13"
	  2) (integer) 6379
   4) 1) "192.168.10.16"
	  2) (integer) 6379
3) 1) (integer) 10922
   2) (integer) 11422
   3) 1) "192.168.10.13"
	  2) (integer) 6379
   4) 1) "192.168.10.16"
	  2) (integer) 6379
4) 1) (integer) 11423
   2) (integer) 16383
   3) 1) "192.168.10.12"
	  2) (integer) 6379
   4) 1) "192.168.10.15"
	  2) (integer) 6379
192.168.10.16:6379> 
192.168.10.16:6379> cluster slaves a387b85af0f849a08dfb4375ec91690d956e52d6
1) "7ca9a4a226e0fd6a3e2ce9b51767168b2d3426dc 192.168.10.14:6379 slave a387b85af0f849a08dfb4375ec91690d956e52d6 0 1430838820167 0 connected"
192.168.10.16:6379> cluster slaves 4acc5977a2a569ff8ed701a186e3c5ba9a6029b9
1) "3522dbcff1c5fad72d412cce8a78ce9542df7ece 192.168.10.15:6379 slave 4acc5977a2a569ff8ed701a186e3c5ba9a6029b9 0 1430838828335 2 connected"
192.168.10.16:6379> cluster slaves e3dfbeb93bc239d87bd1e04e13ee0a158df2b584
1) "f07b64dbad0911690d755ee09ab55815257b71a8 192.168.10.16:6379 myself,slave e3dfbeb93bc239d87bd1e04e13ee0a158df2b584 0 0 0 connected"
192.168.10.16:6379> cluster slaves 7ca9a4a226e0fd6a3e2ce9b51767168b2d3426dc
(error) ERR The specified node is not a master   // reported because this node_id belongs to a slave, not a master
192.168.10.16:6379> cluster slaves 3522dbcff1c5fad72d412cce8a78ce9542df7ece
(error) ERR The specified node is not a master   // reported because this node_id belongs to a slave, not a master
192.168.10.16:6379> cluster slaves f07b64dbad0911690d755ee09ab55815257b71a8
(error) ERR The specified node is not a master   // reported because this node_id belongs to a slave, not a master
192.168.10.16:6379>


At this point the Cluster-mode Redis cluster is fully configured: 3 masters and 3 slaves. When you discuss Redis with other people, always say whether you are running standalone Redis,
a master-slave replication cluster, or a Cluster-mode cluster.
That way they know which setup your question is about, your questions come across as professional, and you avoid exposing more than you intended! ^-^

[root@s11 ~]# cat /var/lib/redis/nodes-6379.conf > /var/lib/redis/nodes-6379.conf.bak
[root@s12 ~]# cat /var/lib/redis/nodes-6379.conf > /var/lib/redis/nodes-6379.conf.bak
[root@s13 ~]# cat /var/lib/redis/nodes-6379.conf > /var/lib/redis/nodes-6379.conf.bak
[root@s14 ~]# cat /var/lib/redis/nodes-6379.conf > /var/lib/redis/nodes-6379.conf.bak
[root@s15 ~]# cat /var/lib/redis/nodes-6379.conf > /var/lib/redis/nodes-6379.conf.bak
[root@s16 ~]# cat /var/lib/redis/nodes-6379.conf > /var/lib/redis/nodes-6379.conf.bak
The six commands above back up each node's healthy cluster config file; when troubleshooting later, compare the live files against these .bak copies.
Without a known-good reference, how would you even tell what has gone wrong?
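
With those reference copies in place, a one-line diff on any node shows whether the live file has drifted from the known-good snapshot. A sketch only: epochs and slot ranges will legitimately change after planned resharding, so read the diff rather than alerting on it blindly.

diff -u /var/lib/redis/nodes-6379.conf.bak /var/lib/redis/nodes-6379.conf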


=====================================================



Testing data migration: adding a new node to scale out
1. The new node 192.168.10.17 has not yet joined the Cluster-mode Redis cluster and has no slot range assigned.
  The command below simply tells the other 3 masters that 192.168.10.17 is also a master in this cluster:
  192.168.10.17:6379> cluster meet 192.168.10.11 6379

2. Add some test data
  192.168.10.17:6379> set foo14308 bar
  The terminal command cluster keyslot foo14308 tells us that the test key foo14308 maps to slot 7.
  Look back at the slot table: did we assign slot 7 to master 192.168.10.13? Yes, we did!

3. On the Redis master 192.168.10.17, run the command below
  (the node with node id = e3dfbeb93bc239d87bd1e04e13ee0a158df2b584 is the master 192.168.10.13)
  to declare that slot 7 is about to be imported from master 192.168.10.13
  into master 192.168.10.17 (this only prepares the import; nothing has actually moved yet):
  192.168.10.17:6379> cluster setslot 7 importing e3dfbeb93bc239d87bd1e04e13ee0a158df2b584

4. On the Redis master 192.168.10.13, run the command below
  (the node with node id = 1a8df529fdc6ce0a65e209cf402b7f16a87954ff is the master 192.168.10.17)
  to declare that slot 7 is about to be migrated out of master 192.168.10.13
  to master 192.168.10.17 (again, only a declaration; nothing has moved yet):
  192.168.10.13:6379> cluster setslot 7 migrating 1a8df529fdc6ce0a65e209cf402b7f16a87954ff

5. The whole keyspace is divided into 16384 (2^14) slots, and every key belongs to exactly one of them.
  Command format: CLUSTER GETKEYSINSLOT slot count. In the command below, 7 is the slot (just think of a slot as a bucket)
  and 10 is the maximum number of keys to return from slot 7.
  192.168.10.13:6379> cluster getkeysinslot 7 10
  Command format: MIGRATE host port key destination-db timeout
  Cluster nodes can only use keys in database 0. Remember the 16 numbered databases, 0 through 15, from standalone Redis?
  That is why the second-to-last argument below is fixed at 0.
  192.168.10.13:6379> migrate 192.168.10.17 6379 foo14308 0 15000    // at this moment the slot's data really does move

6. Send the terminal command CLUSTER SETSLOT <slot> NODE <target_node_id> to the cluster nodes (at minimum to both nodes involved)
  to announce that slot 7 has been handed over and is now assigned to the target node;
  192.168.10.13:6379> cluster setslot 7 node 1a8df529fdc6ce0a65e209cf402b7f16a87954ff
  192.168.10.17:6379> cluster setslot 7 node 1a8df529fdc6ce0a65e209cf402b7f16a87954ff

  Conclusion: when a slot is migrated from one Redis master to another,
        every key-value pair stored in that slot moves along with it ------ and that is exactly how data is migrated between Redis masters.

  The demo above migrates a single slot. In a real production environment you will normally migrate many slots (usually a contiguous range),
  or even every slot a master owns; in that case simply repeat steps 3 through 6 in a loop, as sketched right after this list.
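
A sketch of that loop for a contiguous range. The range 0-99 is purely an example, and the IPs and node ids are the source master 192.168.10.13 and the new master 192.168.10.17 from this walkthrough:

#!/bin/bash
SRC_IP=192.168.10.13; SRC_ID=e3dfbeb93bc239d87bd1e04e13ee0a158df2b584
DST_IP=192.168.10.17; DST_ID=1a8df529fdc6ce0a65e209cf402b7f16a87954ff
for slot in $(seq 0 99); do
  # steps 3 and 4: declare the import on the target, the export on the source
  redis-cli -h $DST_IP -p 6379 cluster setslot $slot importing $SRC_ID
  redis-cli -h $SRC_IP -p 6379 cluster setslot $slot migrating $DST_ID
  # step 5: move the keys currently living in this slot, 100 at a time
  while true; do
    keys=$(redis-cli -h $SRC_IP -p 6379 cluster getkeysinslot $slot 100)
    [ -z "$keys" ] && break
    for key in $keys; do
      redis-cli -h $SRC_IP -p 6379 migrate $DST_IP 6379 "$key" 0 15000
    done
  done
  # step 6: tell both ends who owns the slot now
  redis-cli -h $SRC_IP -p 6379 cluster setslot $slot node $DST_ID
  redis-cli -h $DST_IP -p 6379 cluster setslot $slot node $DST_ID
done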



Here is the detailed run of the six steps above:

Up to this point we have never started the Redis service configured on the 192.168.10.17 VM; start it now:

[root@s17 ~]# systemctl restart redis
[root@s17 ~]# cat /var/lib/redis/nodes-6379.conf

1a8df529fdc6ce0a65e209cf402b7f16a87954ff 192.168.10.17:6379 myself,master - 0 0 0 connected
vars currentEpoch 0 lastVoteEpoch 0
[root@s17 ~]# redis-cli -c -h 192.168.10.17 -p 6379
192.168.10.17:6379> cluster meet 192.168.10.11 6379      // tell the other masters that 192.168.10.17 is joining the cluster as the 4th master
OK

192.168.10.17:6379> cluster nodes

7ca9a4a226e0fd6a3e2ce9b51767168b2d3426dc 192.168.10.14:6379 slave a387b85af0f849a08dfb4375ec91690d956e52d6 0 1430954514347 0 connected
e3dfbeb93bc239d87bd1e04e13ee0a158df2b584 192.168.10.13:6379 master - 0 1430954517373 6 connected 0-5959 10922-11422
f07b64dbad0911690d755ee09ab55815257b71a8 192.168.10.16:6379 slave e3dfbeb93bc239d87bd1e04e13ee0a158df2b584 0 1430954519391 6 connected
a387b85af0f849a08dfb4375ec91690d956e52d6 192.168.10.11:6379 master - 0 1430954516367 0 connected 5960-10921
4acc5977a2a569ff8ed701a186e3c5ba9a6029b9 192.168.10.12:6379 master - 0 1430954515357 2 connected 11423-16383
3522dbcff1c5fad72d412cce8a78ce9542df7ece 192.168.10.15:6379 slave 4acc5977a2a569ff8ed701a186e3c5ba9a6029b9 0 1430954518382 2 connected
1a8df529fdc6ce0a65e209cf402b7f16a87954ff 192.168.10.17:6379 myself,master - 0 0 7 connected

Now write a test key-value pair on master 192.168.10.17. Notice the line the previous command printed for this node:

1a8df529fdc6ce0a65e209cf402b7f16a87954ff 192.168.10.17:6379 myself,master - 0 0 7 connected

Nothing follows the word connected ------ meaning master 192.168.10.17 has not been assigned any slots yet. So what happens if you try to write a key-value pair through a master that owns no slots? The output below answers that: this master does not own slot 7 (it owns no slots at all), so it cannot store the key itself.

The terminal output

-> Redirected to slot [7] located at 192.168.10.13:6379

tells you that Redis's built-in hashing decided key foo14308 belongs on the master that owns slot 7, and located at 192.168.10.13:6379 adds that slot 7 lives on the host 192.168.10.13.

Because redis-cli was started with -c it follows the redirect for you, so the SET does succeed ------ but on 192.168.10.13 (notice the prompt changes), not on 192.168.10.17, the node you originally connected to.

If your application sees a response like this, it means your connect call targeted the wrong node's IP:

you should not have connected to the Redis master 192.168.10.17,

but to the Redis master 192.168.10.13.

Important: a Redis master that owns no slots behaves just like the slaves in this respect; insist on reading or writing through it

and you will get redirect responses in exactly the format shown in the output below!

192.168.10.17:6379> set foo14308 bar

-> Redirected to slot [7] located at 192.168.10.13:6379
OK
192.168.10.13:6379> cluster nodes
7ca9a4a226e0fd6a3e2ce9b51767168b2d3426dc 192.168.10.14:6379 slave a387b85af0f849a08dfb4375ec91690d956e52d6 0 1430954797448 0 connected
f07b64dbad0911690d755ee09ab55815257b71a8 192.168.10.16:6379 slave e3dfbeb93bc239d87bd1e04e13ee0a158df2b584 0 1430954802499 6 connected
e3dfbeb93bc239d87bd1e04e13ee0a158df2b584 192.168.10.13:6379 myself,master - 0 0 6 connected 0-5959 10922-11422
a387b85af0f849a08dfb4375ec91690d956e52d6 192.168.10.11:6379 master - 0 1430954801490 0 connected 5960-10921
3522dbcff1c5fad72d412cce8a78ce9542df7ece 192.168.10.15:6379 slave 4acc5977a2a569ff8ed701a186e3c5ba9a6029b9 0 1430954799469 2 connected
4acc5977a2a569ff8ed701a186e3c5ba9a6029b9 192.168.10.12:6379 master - 0 1430954798459 2 connected 11423-16383
1a8df529fdc6ce0a65e209cf402b7f16a87954ff 192.168.10.17:6379 master - 0 1430954800479 7 connected
192.168.10.17:6379> cluster setslot 7 importing e3dfbeb93bc239d87bd1e04e13ee0a158df2b584
OK
192.168.10.17:6379> cluster nodes
7ca9a4a226e0fd6a3e2ce9b51767168b2d3426dc 192.168.10.14:6379 slave a387b85af0f849a08dfb4375ec91690d956e52d6 0 1430955747124 0 connected
e3dfbeb93bc239d87bd1e04e13ee0a158df2b584 192.168.10.13:6379 master - 0 1430955750147 6 connected 0-5959 10922-11422
f07b64dbad0911690d755ee09ab55815257b71a8 192.168.10.16:6379 slave e3dfbeb93bc239d87bd1e04e13ee0a158df2b584 0 1430955749139 6 connected
a387b85af0f849a08dfb4375ec91690d956e52d6 192.168.10.11:6379 master - 0 1430955752162 0 connected 5960-10921
4acc5977a2a569ff8ed701a186e3c5ba9a6029b9 192.168.10.12:6379 master - 0 1430955751155 2 connected 11423-16383
3522dbcff1c5fad72d412cce8a78ce9542df7ece 192.168.10.15:6379 slave 4acc5977a2a569ff8ed701a186e3c5ba9a6029b9 0 1430955753170 2 connected
1a8df529fdc6ce0a65e209cf402b7f16a87954ff 192.168.10.17:6379 myself,master - 0 0 7 connected [7-<-e3dfbeb93bc239d87bd1e04e13ee0a158df2b584]
192.168.10.17:6379> cluster getkeysinslot 7 10
1) "foo14308"
192.168.10.17:6379> get foo14308
-> Redirected to slot [7] located at 192.168.10.13:6379
(error) ASK 7 192.168.10.17:6379
192.168.10.13:6379> cluster nodes
7ca9a4a226e0fd6a3e2ce9b51767168b2d3426dc 192.168.10.14:6379 slave a387b85af0f849a08dfb4375ec91690d956e52d6 0 1430955771013 0 connected
f07b64dbad0911690d755ee09ab55815257b71a8 192.168.10.16:6379 slave e3dfbeb93bc239d87bd1e04e13ee0a158df2b584 0 1430955770007 6 connected
e3dfbeb93bc239d87bd1e04e13ee0a158df2b584 192.168.10.13:6379 myself,master - 0 0 6 connected 0-5959 10922-11422
a387b85af0f849a08dfb4375ec91690d956e52d6 192.168.10.11:6379 master - 0 1430955772022 0 connected 5960-10921
3522dbcff1c5fad72d412cce8a78ce9542df7ece 192.168.10.15:6379 slave 4acc5977a2a569ff8ed701a186e3c5ba9a6029b9 0 1430955773031 2 connected
4acc5977a2a569ff8ed701a186e3c5ba9a6029b9 192.168.10.12:6379 master - 0 1430955769503 2 connected 11423-16383
1a8df529fdc6ce0a65e209cf402b7f16a87954ff 192.168.10.17:6379 master - 0 1430955774038 7 connected
192.168.10.13:6379> cluster setslot 7 migrating 1a8df529fdc6ce0a65e209cf402b7f16a87954ff
OK
192.168.10.13:6379> cluster nodes
7ca9a4a226e0fd6a3e2ce9b51767168b2d3426dc 192.168.10.14:6379 slave a387b85af0f849a08dfb4375ec91690d956e52d6 0 1430955936466 0 connected
f07b64dbad0911690d755ee09ab55815257b71a8 192.168.10.16:6379 slave e3dfbeb93bc239d87bd1e04e13ee0a158df2b584 0 1430955933439 6 connected
e3dfbeb93bc239d87bd1e04e13ee0a158df2b584 192.168.10.13:6379 myself,master - 0 0 6 connected 0-5959 10922-11422 [7->-1a8df529fdc6ce0a65e209cf402b7f16a87954ff]
a387b85af0f849a08dfb4375ec91690d956e52d6 192.168.10.11:6379 master - 0 1430955935459 0 connected 5960-10921
3522dbcff1c5fad72d412cce8a78ce9542df7ece 192.168.10.15:6379 slave 4acc5977a2a569ff8ed701a186e3c5ba9a6029b9 0 1430955938483 2 connected
4acc5977a2a569ff8ed701a186e3c5ba9a6029b9 192.168.10.12:6379 master - 0 1430955937475 2 connected 11423-16383
1a8df529fdc6ce0a65e209cf402b7f16a87954ff 192.168.10.17:6379 master - 0 1430955934448 7 connected

192.168.10.13:6379> cluster getkeysinslot 7 10

1) "foo14308"

192.168.10.13:6379> migrate 192.168.10.17 6379 foo14308 0 15000

OK
192.168.10.13:6379> cluster nodes
7ca9a4a226e0fd6a3e2ce9b51767168b2d3426dc 192.168.10.14:6379 slave a387b85af0f849a08dfb4375ec91690d956e52d6 0 1430956933005 0 connected
f07b64dbad0911690d755ee09ab55815257b71a8 192.168.10.16:6379 slave e3dfbeb93bc239d87bd1e04e13ee0a158df2b584 0 1430956937047 6 connected
e3dfbeb93bc239d87bd1e04e13ee0a158df2b584 192.168.10.13:6379 myself,master - 0 0 6 connected 0-5959 10922-11422 [7->-1a8df529fdc6ce0a65e209cf402b7f16a87954ff]
a387b85af0f849a08dfb4375ec91690d956e52d6 192.168.10.11:6379 master - 0 1430956935026 0 connected 5960-10921
3522dbcff1c5fad72d412cce8a78ce9542df7ece 192.168.10.15:6379 slave 4acc5977a2a569ff8ed701a186e3c5ba9a6029b9 0 1430956931997 2 connected
4acc5977a2a569ff8ed701a186e3c5ba9a6029b9 192.168.10.12:6379 master - 0 1430956936035 2 connected 11423-16383
1a8df529fdc6ce0a65e209cf402b7f16a87954ff 192.168.10.17:6379 master - 0 1430956934015 7 connected
192.168.10.13:6379> cluster getkeysinslot 7 10
(empty list or set)
192.168.10.13:6379> cluster setslot 7 node 1a8df529fdc6ce0a65e209cf402b7f16a87954ff
OK
192.168.10.13:6379> cluster nodes
7ca9a4a226e0fd6a3e2ce9b51767168b2d3426dc 192.168.10.14:6379 slave a387b85af0f849a08dfb4375ec91690d956e52d6 0 1430960533750 0 connected
f07b64dbad0911690d755ee09ab55815257b71a8 192.168.10.16:6379 slave e3dfbeb93bc239d87bd1e04e13ee0a158df2b584 0 1430960535766 6 connected
e3dfbeb93bc239d87bd1e04e13ee0a158df2b584 192.168.10.13:6379 myself,master - 0 0 6 connected 0-6 8-5959 10922-11422
a387b85af0f849a08dfb4375ec91690d956e52d6 192.168.10.11:6379 master - 0 1430960535262 0 connected 5960-10921
3522dbcff1c5fad72d412cce8a78ce9542df7ece 192.168.10.15:6379 slave 4acc5977a2a569ff8ed701a186e3c5ba9a6029b9 0 1430960531733 2 connected
4acc5977a2a569ff8ed701a186e3c5ba9a6029b9 192.168.10.12:6379 master - 0 1430960534757 2 connected 11423-16383
1a8df529fdc6ce0a65e209cf402b7f16a87954ff 192.168.10.17:6379 master - 0 1430960532742 7 connected 7

Step 6 calls for the setslot command on both ends; the transcript above only shows it on 192.168.10.13, so run it on 192.168.10.17 as well:
  192.168.10.13:6379> cluster setslot 7 node 1a8df529fdc6ce0a65e209cf402b7f16a87954ff
  192.168.10.17:6379> cluster setslot 7 node 1a8df529fdc6ce0a65e209cf402b7f16a87954ff


====================================================



Steps for removing a node:
- If the node is a master, first migrate its slots away, then remove that master and all of its slaves from the cluster;
- If the node is a slave, simply remove it from the cluster;
  Notes:
      `CLUSTER FORGET` must be issued on every node in the cluster (masters and slaves alike) to "forget" the removed node.
      If even one node still "remembers" the removed node, the gossip messages exchanged between cluster nodes
      will quickly make every node in the Redis cluster "recognise" the not-fully-removed node again;
     a slave cannot "forget" its own master;
     a node cannot "forget" itself.

Case 1:
If the node being removed is a master, every slot on it must first be moved onto other masters.
To demonstrate removing a master that holds data, we will remove the 4th master we just added, 192.168.10.17.

192.168.10.17:6379> cluster nodes
192.168.10.17:6379> cluster keyslot foo14308
7
192.168.10.13:6379> cluster setslot 7 importing 1a8df529fdc6ce0a65e209cf402b7f16a87954ff    // prepare to import the slot
192.168.10.17:6379> cluster setslot 7 migrating e3dfbeb93bc239d87bd1e04e13ee0a158df2b584    // prepare to export the slot
192.168.10.17:6379> cluster getkeysinslot 7 10
1) "foo14308"   // the key currently stored in slot 7
192.168.10.17:6379> migrate 192.168.10.13 6379 foo14308 0 15000   // the data really moves here, together with the slot
192.168.10.17:6379> cluster setslot 7 node e3dfbeb93bc239d87bd1e04e13ee0a158df2b584
192.168.10.13:6379> cluster setslot 7 node e3dfbeb93bc239d87bd1e04e13ee0a158df2b584

Only once node 192.168.10.17:6379 holds no slots at all can the master be removed,
which is why its slots are migrated away first (the slots carry their key-value pairs with them, so this is also what migrates the data).
192.168.10.17:6379> cluster forget a387b85af0f849a08dfb4375ec91690d956e52d6 // <node id> is the node id of master 192.168.10.11
192.168.10.17:6379> cluster forget 4acc5977a2a569ff8ed701a186e3c5ba9a6029b9 // <node id> is the node id of master 192.168.10.12
192.168.10.17:6379> cluster forget e3dfbeb93bc239d87bd1e04e13ee0a158df2b584 // <node id> is the node id of master 192.168.10.13
192.168.10.17:6379> cluster forget 7ca9a4a226e0fd6a3e2ce9b51767168b2d3426dc // <node id> is the node id of slave 192.168.10.14
192.168.10.17:6379> cluster forget 3522dbcff1c5fad72d412cce8a78ce9542df7ece // <node id> is the node id of slave 192.168.10.15
192.168.10.17:6379> cluster forget f07b64dbad0911690d755ee09ab55815257b71a8 // <node id> is the node id of slave 192.168.10.16
Check that master 192.168.10.17 is now detached from the cluster: apart from its own entry it no longer lists any other node:
192.168.10.17:6379> cluster nodes
1a8df529fdc6ce0a65e209cf402b7f16a87954ff 192.168.10.17:6379 myself,master - 0 0 7 connected

The config files /var/lib/redis/nodes-6379.conf on nodes 192.168.10.11 through 192.168.10.16 still contain the removed node's entry,
so to wipe it out completely we also have to run the following series of commands:
192.168.10.11:6379> cluster forget 1a8df529fdc6ce0a65e209cf402b7f16a87954ff // <node id> is the node id of the removed master 192.168.10.17
192.168.10.12:6379> cluster forget 1a8df529fdc6ce0a65e209cf402b7f16a87954ff // <node id> is the node id of the removed master 192.168.10.17
192.168.10.13:6379> cluster forget 1a8df529fdc6ce0a65e209cf402b7f16a87954ff // <node id> is the node id of the removed master 192.168.10.17
192.168.10.14:6379> cluster forget 1a8df529fdc6ce0a65e209cf402b7f16a87954ff // <node id> is the node id of the removed master 192.168.10.17
192.168.10.15:6379> cluster forget 1a8df529fdc6ce0a65e209cf402b7f16a87954ff // <node id> is the node id of the removed master 192.168.10.17
192.168.10.16:6379> cluster forget 1a8df529fdc6ce0a65e209cf402b7f16a87954ff // <node id> is the node id of the removed master 192.168.10.17
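
The same cleanup can be done in one pass; a sketch, assuming bash and the six surviving IPs from this article:

#!/bin/bash
# make every surviving node forget the removed master 192.168.10.17
REMOVED_ID=1a8df529fdc6ce0a65e209cf402b7f16a87954ff
for ip in 192.168.10.{11..16}; do
  redis-cli -h $ip -p 6379 cluster forget $REMOVED_ID
done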


Case 2: the node being removed is a slave. Let's remove slave 192.168.10.14 and walk through the exact steps.

First look at every node's entry before the removal; run the following on any one node:

192.168.10.11:6379> CLUSTER NODES

ca41e2d0fc8a0042c342020d5f098a1fd18e59ab 192.168.10.14:6379 slave e248b989c7f6cec37015152dbf76a1f48e8ef4eb 0 1436193635024 1 connected
5acd1bfba3239096cc445d7053b4311de33d70eb 192.168.10.12:6379 master - 0 1436193632984 3 connected 11423-16383
e248b989c7f6cec37015152dbf76a1f48e8ef4eb 192.168.10.11:6379 myself,master - 0 0 0 connected 5960-10921
78fe0d2a94427f1466b12055e60d214884b80d81 192.168.10.15:6379 slave d016be8b271bada13f8dffaad18250f8d28626a3 0 1436193638087 2 connected
d016be8b271bada13f8dffaad18250f8d28626a3 192.168.10.13:6379 master - 0 1436193636046 2 connected 0-5959 10922-11422
de0275dc2c38abd7c975203f51ae1a65165324ed 192.168.10.16:6379 slave d016be8b271bada13f8dffaad18250f8d28626a3 0 1436193637064 2 connected
[root@s14 ~]# systemctl stop redis   # important: first stop the Redis service on the node being removed

192.168.10.11:6379> CLUSTER NODES

ca41e2d0fc8a0042c342020d5f098a1fd18e59ab 192.168.10.14:6379 slave,fail e248b989c7f6cec37015152dbf76a1f48e8ef4eb 1436197868156 1436197867151 0 disconnected
5acd1bfba3239096cc445d7053b4311de33d70eb 192.168.10.12:6379 master - 0 1436197885347 3 connected 11423-16383
e248b989c7f6cec37015152dbf76a1f48e8ef4eb 192.168.10.11:6379 myself,master - 0 0 0 connected 5960-10921
78fe0d2a94427f1466b12055e60d214884b80d81 192.168.10.15:6379 slave e248b989c7f6cec37015152dbf76a1f48e8ef4eb 0 1436197887367 2 connected
d016be8b271bada13f8dffaad18250f8d28626a3 192.168.10.13:6379 master - 0 1436197884335 2 connected 0-5959 10922-11422
de0275dc2c38abd7c975203f51ae1a65165324ed 192.168.10.16:6379 slave d016be8b271bada13f8dffaad18250f8d28626a3 0 1436197886361 2 connected
192.168.10.11:6379> CLUSTER FORGET ca41e2d0fc8a0042c342020d5f098a1fd18e59ab  // make node 192.168.10.11 forget node 192.168.10.14
192.168.10.12:6379> CLUSTER FORGET ca41e2d0fc8a0042c342020d5f098a1fd18e59ab  // make node 192.168.10.12 forget node 192.168.10.14
192.168.10.13:6379> CLUSTER FORGET ca41e2d0fc8a0042c342020d5f098a1fd18e59ab  // make node 192.168.10.13 forget node 192.168.10.14
192.168.10.15:6379> CLUSTER FORGET ca41e2d0fc8a0042c342020d5f098a1fd18e59ab  // make node 192.168.10.15 forget node 192.168.10.14
192.168.10.16:6379> CLUSTER FORGET ca41e2d0fc8a0042c342020d5f098a1fd18e59ab  // make node 192.168.10.16 forget node 192.168.10.14


192.168.10.11:6379> CLUSTER NODES

5acd1bfba3239096cc445d7053b4311de33d70eb 192.168.10.12:6379 master - 0 1436198099308 3 connected 11423-16383
e248b989c7f6cec37015152dbf76a1f48e8ef4eb 192.168.10.11:6379 myself,master - 0 0 0 connected 5960-10921
78fe0d2a94427f1466b12055e60d214884b80d81 192.168.10.15:6379 slave e248b989c7f6cec37015152dbf76a1f48e8ef4eb 0 1436198097291 2 connected
d016be8b271bada13f8dffaad18250f8d28626a3 192.168.10.13:6379 master - 0 1436198100316 2 connected 0-5959 10922-11422
de0275dc2c38abd7c975203f51ae1a65165324ed 192.168.10.16:6379 slave d016be8b271bada13f8dffaad18250f8d28626a3 0 1436198101322 2 connected
192.168.10.11:6379> 
192.168.10.12:6379> CLUSTER NODES
d016be8b271bada13f8dffaad18250f8d28626a3 192.168.10.13:6379 master - 0 1436198162061 2 connected 0-5959 10922-11422
de0275dc2c38abd7c975203f51ae1a65165324ed 192.168.10.16:6379 slave d016be8b271bada13f8dffaad18250f8d28626a3 0 1436198159033 2 connected
78fe0d2a94427f1466b12055e60d214884b80d81 192.168.10.15:6379 slave e248b989c7f6cec37015152dbf76a1f48e8ef4eb 0 1436198163071 2 connected
5acd1bfba3239096cc445d7053b4311de33d70eb 192.168.10.12:6379 myself,master - 0 0 3 connected 11423-16383
e248b989c7f6cec37015152dbf76a1f48e8ef4eb 192.168.10.11:6379 master - 0 1436198161051 0 connected 5960-10921
192.168.10.12:6379>

192.168.10.13:6379> CLUSTER NODES

de0275dc2c38abd7c975203f51ae1a65165324ed 192.168.10.16:6379 slave d016be8b271bada13f8dffaad18250f8d28626a3 0 1436198172785 2 connected
e248b989c7f6cec37015152dbf76a1f48e8ef4eb 192.168.10.11:6379 master - 0 1436198174806 0 connected 5960-10921
d016be8b271bada13f8dffaad18250f8d28626a3 192.168.10.13:6379 myself,master - 0 0 2 connected 0-5959 10922-11422
5acd1bfba3239096cc445d7053b4311de33d70eb 192.168.10.12:6379 master - 0 1436198173795 3 connected 11423-16383
78fe0d2a94427f1466b12055e60d214884b80d81 192.168.10.15:6379 slave e248b989c7f6cec37015152dbf76a1f48e8ef4eb 0 1436198171775 2 connected
192.168.10.13:6379>

192.168.10.15:6379> CLUSTER NODES

e248b989c7f6cec37015152dbf76a1f48e8ef4eb 192.168.10.11:6379 master - 0 1436198178896 0 connected 5960-10921
78fe0d2a94427f1466b12055e60d214884b80d81 192.168.10.15:6379 myself,slave e248b989c7f6cec37015152dbf76a1f48e8ef4eb 0 0 1 connected
de0275dc2c38abd7c975203f51ae1a65165324ed 192.168.10.16:6379 slave d016be8b271bada13f8dffaad18250f8d28626a3 0 1436198177887 2 connected
5acd1bfba3239096cc445d7053b4311de33d70eb 192.168.10.12:6379 master - 0 1436198180909 3 connected 11423-16383
d016be8b271bada13f8dffaad18250f8d28626a3 192.168.10.13:6379 master - 0 1436198173852 2 connected 0-5959 10922-11422
192.168.10.15:6379> 
192.168.10.16:6379> CLUSTER NODES
78fe0d2a94427f1466b12055e60d214884b80d81 192.168.10.15:6379 slave e248b989c7f6cec37015152dbf76a1f48e8ef4eb 0 1436198181550 2 connected
d016be8b271bada13f8dffaad18250f8d28626a3 192.168.10.13:6379 master - 0 1436198183570 2 connected 0-5959 10922-11422
e248b989c7f6cec37015152dbf76a1f48e8ef4eb 192.168.10.11:6379 master - 0 1436198182560 0 connected 5960-10921
5acd1bfba3239096cc445d7053b4311de33d70eb 192.168.10.12:6379 master - 0 1436198180541 3 connected 11423-16383
de0275dc2c38abd7c975203f51ae1a65165324ed 192.168.10.16:6379 myself,slave d016be8b271bada13f8dffaad18250f8d28626a3 0 0 0 connected
192.168.10.16:6379>
Conclusion: none of the five nodes' output contains any entry for 192.168.10.14 any more ------ the slave 192.168.10.14 is gone!


Production-grade Redis cluster operations cannot really be simulated on these VMs; that knowledge only accumulates on the job. But once you are familiar with the detailed configuration steps and with how the cluster works, you are not far from becoming a top-tier player!


That wraps up this post!
