Building a Redis Cluster and Basic Usage

        We know that besides Cluster, Redis also offers the Sentinel mechanism; both provide master/replica replication and failover, so why do two implementations exist? The official documentation does not spell this out, so here is my own understanding. Consider what happens when the data stored in Redis grows beyond the memory available on a single machine: further writes will start failing with out-of-memory errors (or data will be evicted). A single Sentinel-managed master clearly cannot solve this, and that is where Redis Cluster comes in: it shards the data across multiple masters, which is exactly the limitation Sentinel cannot address. I will not go into the internals of Redis Cluster here; that will be covered in a later article. This post focuses on building and using a cluster.

       First, I have three Linux virtual machines (note that the Windows build of Redis does not support the latest features): 192.168.1.5, 192.168.1.6 and 192.168.1.7. The recommended minimum for a Redis Cluster is three masters and three replicas, which makes it possible to keep a "majority" of masters alive, so to save resources I will run one master and one replica on each VM, on ports 6359 and 6360. Taking 192.168.1.5 as an example, first download Redis from https://redis.io/download with the following commands:

$ wget http://download.redis.io/releases/redis-5.0.5.tar.gz
$ tar xzf redis-5.0.5.tar.gz
$ cd redis-5.0.5
$ make

       After Redis is compiled, the src directory contains new executables such as redis-server and redis-cli. Back in the Redis root directory, we create a conf directory to hold our redis-6359.conf and redis-6360.conf configuration files. A minimal configuration looks like this:

# run in the background as a daemon
daemonize yes
# port
port 6359
# bind address; here we bind 0.0.0.0 so other hosts on the LAN can reach it, but do not expose Redis directly to the public internet
bind 0.0.0.0
# directory where the Redis data files are stored
dir /usr/local/redis/data
# enable AOF persistence
appendonly yes
# enable cluster mode
cluster-enabled yes
# cluster state file, generated automatically in the dir configured above
cluster-config-file nodes-6359.conf
# how long (ms) a node may be unreachable before it is considered failed; for a master this triggers a failover
cluster-node-timeout 5000
# validity factor for replica failover: with the timeout above at 5 seconds, a replica that has been disconnected from its master for more than 5*10 seconds is no longer eligible to be promoted; 0 means a replica will always attempt the failover
cluster-slave-validity-factor 10
# when master A has no replicas left, a replica may migrate from master B only if B still keeps at least this many replicas
cluster-migration-barrier 2
# defaults to yes: if any part of the hash slot space is not covered by a node, the whole cluster stops accepting writes; set to no to keep serving the slots that are covered
cluster-require-full-coverage yes
# pid file, generated automatically
pidfile /var/run/redis_6359.pid

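       Before starting the nodes, make sure the data directory referenced by dir exists, and generate the 6360 configuration from the 6359 one; a minimal sketch (the sed one-liner simply replaces every 6359 with 6360, which also renames the cluster-config-file and the pidfile):

[root@web-server01 redis-5.0.5]$ mkdir -p /usr/local/redis/data
[root@web-server01 redis-5.0.5]$ sed 's/6359/6360/g' conf/redis-6359.conf > conf/redis-6360.conf
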
       The configuration on the other servers only differs in the port numbers and the generated file names; everything else can stay the same for now. Once the configuration is in place, start the corresponding Redis node with the following command:

[root@web-server01 redis-5.0.5]$ ./src/redis-server conf/redis-6359.conf 

       To start the 6360 instance, just point the same command at redis-6360.conf. Running ps now shows the following:

[root@web-server01 home]$ ps -ef|grep redis
root    5361     1  0 Nov01 ?        00:09:17 ./src/redis-server 0.0.0.0:6360 [cluster]
root   15715     1  0 Nov01 ?        00:09:10 ./src/redis-server 0.0.0.0:6359 [cluster]

       Notice the [cluster] suffix at the end of each line: it means that the Redis process is running in cluster mode. Once all nodes on all servers are up, we can check how to link them into a cluster with the following command:

[root@web-server01 redis-5.0.5]$ ./src/redis-cli --cluster help
Cluster Manager Commands:
  # create a new cluster from the given nodes
  create         host1:port1 ... hostN:portN
                 --cluster-replicas <arg>
  # check the relationships between the nodes in the cluster
  check          host:port
                 --cluster-search-multiple-owners
  # show a summary of the master nodes in the cluster
  info           host:port
  # like check, but also tries to fix the problems it finds
  fix            host:port
                 --cluster-search-multiple-owners
  # move hash slots between nodes
  reshard        host:port
                 --cluster-from <arg>
                 --cluster-to <arg>
                 --cluster-slots <arg>
                 --cluster-yes
                 --cluster-timeout <arg>
                 --cluster-pipeline <arg>
                 --cluster-replace
  # rebalance slots across the cluster according to node weights
  rebalance      host:port
                 --cluster-weight <node1=w1...nodeN=wN>
                 --cluster-use-empty-masters
                 --cluster-timeout <arg>
                 --cluster-simulate
                 --cluster-pipeline <arg>
                 --cluster-threshold <arg>
                 --cluster-replace
  # add a node to the cluster
  add-node       new_host:new_port existing_host:existing_port
                 --cluster-slave
                 --cluster-master-id <arg>
  # remove a node from the cluster
  del-node       host:port node_id
  call           host:port command arg arg .. arg
  # set the cluster node timeout
  set-timeout    host:port milliseconds
  # import keys from an external standalone Redis instance into the cluster
  import         host:port
                 --cluster-from <arg>
                 --cluster-copy
                 --cluster-replace
  help           
For check, fix, reshard, del-node, set-timeout you can specify the host and port of any working node in the cluster.

        When linking the nodes into a Redis Cluster for the first time we can use create; once a cluster already exists, new nodes are added with add-node. Following the help output above, let's create the cluster first:


[root@web-server01 redis-5.0.5]$ ./src/redis-cli --cluster create 192.168.1.5:6359 192.168.1.5:6360 192.168.1.6:6359 192.168.1.6:6360 192.168.1.7:6359 192.168.1.7:6360 --cluster-replicas 1

       A quick explanation of the command: after create come the host:port pairs of the servers to join, and --cluster-replicas specifies how many replicas each master should get. When the command runs it prints the proposed hash slot allocation and prompts you to accept it; unless you want to arrange the slots by hand, just answer yes. Once the nodes are linked, the output looks like this:

>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 192.168.1.6:6360 to 192.168.1.5:6359
Adding replica 192.168.1.7:6360 to 192.168.1.6:6359
Adding replica 192.168.1.5:6360 to 192.168.1.7:6359
M: 717ac1ea8c6577c7885ed4ee8e17e344c0f1c529 192.168.1.5:6359
   slots:[0-5460] (5461 slots) master
S: eb3d099c6ef9474b3da4f5241dedc5233fc926fb 192.168.1.5:6360
   replicates db7fc16fbb9a8eeb758418578a53d0ab602b8dba
M: 21acfaaa67e78e04648ed3aa242296c7290dec60 192.168.1.6:6359
   slots:[5461-10922] (5462 slots) master
S: 5bb1866000d39693f9dcd85387babf200cb8f078 192.168.1.6:6360
   replicates 717ac1ea8c6577c7885ed4ee8e17e344c0f1c529
M: db7fc16fbb9a8eeb758418578a53d0ab602b8dba 192.168.1.7:6359
   slots:[10923-16383] (5461 slots) master
S: 66e4c6df9ca97c7133d0def9566258083a52b8eb 192.168.1.7:6360
   replicates 21acfaaa67e78e04648ed3aa242296c7290dec60
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
...
>>> Performing Cluster Check (using node 192.168.1.5:6359)
M: 717ac1ea8c6577c7885ed4ee8e17e344c0f1c529 192.168.1.5:6359
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 5bb1866000d39693f9dcd85387babf200cb8f078 192.168.1.6:6360
   slots: (0 slots) slave
   replicates 717ac1ea8c6577c7885ed4ee8e17e344c0f1c529
M: 21acfaaa67e78e04648ed3aa242296c7290dec60 192.168.1.6:6359
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
M: db7fc16fbb9a8eeb758418578a53d0ab602b8dba 192.168.1.7:6359
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: eb3d099c6ef9474b3da4f5241dedc5233fc926fb 192.168.1.5:6360
   slots: (0 slots) slave
   replicates db7fc16fbb9a8eeb758418578a53d0ab602b8dba
S: 66e4c6df9ca97c7133d0def9566258083a52b8eb 192.168.1.7:6360
   slots: (0 slots) slave
   replicates 21acfaaa67e78e04648ed3aa242296c7290dec60
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
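
       The new cluster can now be verified from any of its nodes with the info subcommand listed in the help output above; it prints a per-master summary of keys, slots and replicas:

[root@web-server01 redis-5.0.5]$ ./src/redis-cli --cluster info 192.168.1.5:6359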

        With the steps above, the Redis Cluster is up and running. Now suppose the master 192.168.1.7:6359 goes down; let's kill the process to force it and see what happens.
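
       For example, on the 192.168.1.7 host the process can be stopped via the pidfile we configured earlier (the hostname shown in the prompt is only illustrative):

[root@web-server03 ~]$ kill $(cat /var/run/redis_6359.pid)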

[root@Cache-server redis-5.0.5]$ ./src/redis-cli --cluster check 192.168.1.5:6359
Could not connect to Redis at 192.168.1.7:6359: Connection refused
192.168.1.5:6359 (717ac1ea...) -> 0 keys | 5461 slots | 1 slaves.
192.168.1.6:6359 (21acfaaa...) -> 0 keys | 5462 slots | 1 slaves.
192.168.1.5:6360 (eb3d099c...) -> 0 keys | 5461 slots | 0 slaves.
[OK] 0 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.1.5:6359)
M: 717ac1ea8c6577c7885ed4ee8e17e344c0f1c529 192.168.1.5:6359
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: 5bb1866000d39693f9dcd85387babf200cb8f078 192.168.1.6:6360
   slots: (0 slots) slave
   replicates 717ac1ea8c6577c7885ed4ee8e17e344c0f1c529
M: 21acfaaa67e78e04648ed3aa242296c7290dec60 192.168.1.6:6359
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
M: eb3d099c6ef9474b3da4f5241dedc5233fc926fb 192.168.1.5:6360
   slots:[10923-16383] (5461 slots) master
S: 66e4c6df9ca97c7133d0def9566258083a52b8eb 192.168.1.7:6360
   slots: (0 slots) slave
   replicates 21acfaaa67e78e04648ed3aa242296c7290dec60
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

       Notice that the replica 192.168.1.5:6360 has been promoted to master: Redis Cluster handles the master/replica switch and failover automatically, without any manual intervention. So what happens when we bring 192.168.1.7:6359 back up?

[root@Cache-server redis-5.0.5]$ ./src/redis-cli --cluster check 192.168.1.5:6359
192.168.1.6:6359 (21acfaaa...) -> 0 keys | 5462 slots | 1 slaves.
192.168.1.5:6360 (eb3d099c...) -> 0 keys | 5461 slots | 1 slaves.
192.168.1.5:6359 (717ac1ea...) -> 0 keys | 5461 slots | 1 slaves.
[OK] 0 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.1.5:6359)
M: 21acfaaa67e78e04648ed3aa242296c7290dec60 192.168.1.6:6359
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 5bb1866000d39693f9dcd85387babf200cb8f078 192.168.1.6:6360
   slots: (0 slots) slave
   replicates 717ac1ea8c6577c7885ed4ee8e17e344c0f1c529
M: eb3d099c6ef9474b3da4f5241dedc5233fc926fb 192.168.1.5:6360
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 66e4c6df9ca97c7133d0def9566258083a52b8eb 192.168.1.7:6360
   slots: (0 slots) slave
   replicates 21acfaaa67e78e04648ed3aa242296c7290dec60
M: 717ac1ea8c6577c7885ed4ee8e17e344c0f1c529 192.168.1.5:6359
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: db7fc16fbb9a8eeb758418578a53d0ab602b8dba 192.168.1.7:6359
   slots: (0 slots) slave
   replicates eb3d099c6ef9474b3da4f5241dedc5233fc926fb
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

       Notice that even though 192.168.1.7:6359 is back, it can now only act as a replica. Redis Cluster uses an epoch mechanism similar to the one in the Raft algorithm: when a replica is promoted to master, the epoch is increased. If the old master later reconnects and sees that the other nodes carry an epoch higher than its own, it updates its own epoch to match and joins as a replica of one of the current masters.
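
       The epochs can be inspected on any node with CLUSTER INFO; the values below are only illustrative, since they depend on how many configuration changes and failovers have taken place:

[root@web-server01 redis-5.0.5]$ ./src/redis-cli -p 6359 cluster info | grep epoch
cluster_current_epoch:7
cluster_my_epoch:1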

        What if we now need to add a new node to the running cluster? The help output above lists an add-node subcommand, and that is exactly how new nodes are added.

[root@web-server01 redis-5.0.5]$ ./src/redis-cli --cluster add-node 192.168.1.5:6361 192.168.1.6:6359

         The command above creates a master node with no hash slots assigned to it:

M: 7cbe92a7b45f4ba38ebdcef60f95df7e765c3fa4 192.168.1.5:6361
   slots: (0 slots) master

         Alternatively, with the command

[root@web-server01 redis-5.0.5]$ ./src/redis-cli --cluster add-node 192.168.1.5:6362 192.168.1.7:6359 --cluster-slave --cluster-master-id eb3d099c6ef9474b3da4f5241dedc5233fc926fb

        we can attach the new node as a replica of an existing master:

S: c60c642692cf64c9810e258d04f8c427c8afd456 192.168.1.5:6362
   slots: (0 slots) slave
   replicates eb3d099c6ef9474b3da4f5241dedc5233fc926fb

      The cluster now looks like this:

[root@web-server01 redis-5.0.5]$ ./src/redis-cli --cluster check 192.168.1.6:6359
192.168.1.6:6359 (21acfaaa...) -> 0 keys | 5462 slots | 1 slaves.
192.168.1.5:6359 (717ac1ea...) -> 0 keys | 5461 slots | 1 slaves.
192.168.1.5:6361 (7cbe92a7...) -> 0 keys | 0 slots | 0 slaves.
192.168.1.5:6362 (c60c6426...) -> 0 keys | 5461 slots | 2 slaves.
[OK] 0 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.1.6:6359)
M: 21acfaaa67e78e04648ed3aa242296c7290dec60 192.168.1.6:6359
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 5bb1866000d39693f9dcd85387babf200cb8f078 192.168.1.6:6360
   slots: (0 slots) slave
   replicates 717ac1ea8c6577c7885ed4ee8e17e344c0f1c529
S: eb3d099c6ef9474b3da4f5241dedc5233fc926fb 192.168.1.5:6360
   slots: (0 slots) slave
   replicates c60c642692cf64c9810e258d04f8c427c8afd456
M: 717ac1ea8c6577c7885ed4ee8e17e344c0f1c529 192.168.1.5:6359
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: 7cbe92a7b45f4ba38ebdcef60f95df7e765c3fa4 192.168.1.5:6361
   slots: (0 slots) master
M: c60c642692cf64c9810e258d04f8c427c8afd456 192.168.1.5:6362
   slots:[10923-16383] (5461 slots) master
   2 additional replica(s)
S: 66e4c6df9ca97c7133d0def9566258083a52b8eb 192.168.1.7:6360
   slots: (0 slots) slave
   replicates 21acfaaa67e78e04648ed3aa242296c7290dec60
S: db7fc16fbb9a8eeb758418578a53d0ab602b8dba 192.168.1.7:6359
   slots: (0 slots) slave
   replicates c60c642692cf64c9810e258d04f8c427c8afd456
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

      Now let's move 1000 hash slots from the master 192.168.1.5:6359 to the master 192.168.1.5:6361:

[root@web-server01 redis-5.0.5]$ ./src/redis-cli --cluster reshard 192.168.1.7:6359 --cluster-from 717ac1ea8c6577c7885ed4ee8e17e344c0f1c529 --cluster-to 7cbe92a7b45f4ba38ebdcef60f95df7e765c3fa4 --cluster-slots 1000 --cluster-yes

      After the reshard completes, the nodes look like this:

192.168.1.6:6359 (21acfaaa...) -> 0 keys | 5462 slots | 1 slaves.
192.168.1.5:6359 (717ac1ea...) -> 0 keys | 4461 slots | 1 slaves.
192.168.1.5:6361 (7cbe92a7...) -> 0 keys | 1000 slots | 1 slaves.
192.168.1.5:6362 (c60c6426...) -> 0 keys | 5461 slots | 1 slaves.
[OK] 0 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.1.6:6359)
M: 21acfaaa67e78e04648ed3aa242296c7290dec60 192.168.1.6:6359
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 5bb1866000d39693f9dcd85387babf200cb8f078 192.168.1.6:6360
   slots: (0 slots) slave
   replicates 717ac1ea8c6577c7885ed4ee8e17e344c0f1c529
S: eb3d099c6ef9474b3da4f5241dedc5233fc926fb 192.168.1.5:6360
   slots: (0 slots) slave
   replicates c60c642692cf64c9810e258d04f8c427c8afd456
M: 717ac1ea8c6577c7885ed4ee8e17e344c0f1c529 192.168.1.5:6359
   slots:[1000-5460] (4461 slots) master
   1 additional replica(s)
M: 7cbe92a7b45f4ba38ebdcef60f95df7e765c3fa4 192.168.1.5:6361
   slots:[0-999] (1000 slots) master
   1 additional replica(s)
M: c60c642692cf64c9810e258d04f8c427c8afd456 192.168.1.5:6362
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: 66e4c6df9ca97c7133d0def9566258083a52b8eb 192.168.1.7:6360
   slots: (0 slots) slave
   replicates 21acfaaa67e78e04648ed3aa242296c7290dec60
S: db7fc16fbb9a8eeb758418578a53d0ab602b8dba 192.168.1.7:6359
   slots: (0 slots) slave
   replicates 7cbe92a7b45f4ba38ebdcef60f95df7e765c3fa4
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

        Notice that the master 192.168.1.5:6361 now not only owns hash slots, it has also been given a replica. This is Redis Cluster's replica migration at work: to keep the cluster available, a spare replica is automatically moved to a master that has no replicas of its own.
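
       This replica migration is governed by the cluster-migration-barrier value we set in the configuration file; its runtime value can be checked on any node with:

[root@web-server01 redis-5.0.5]$ ./src/redis-cli -p 6359 config get cluster-migration-barrier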

       Next, let's look at removing nodes. When the node being removed is a master that still owns hash slots:

[root@web-server01 redis-5.0.5]$ ./src/redis-cli --cluster del-node 192.168.1.6:6359 7cbe92a7b45f4ba38ebdcef60f95df7e765c3fa4
>>> Removing node 7cbe92a7b45f4ba38ebdcef60f95df7e765c3fa4 from cluster 192.168.1.6:6359
[ERR] Node 192.168.1.5:6361 is not empty! Reshard data away and try again.

       It tells us that the hash slots must first be resharded away before the node can be removed. Now let's delete the replica 192.168.1.5:6360 instead:

[root@web-server01 redis-5.0.5]$ ./src/redis-cli --cluster del-node 192.168.1.6:6359  eb3d099c6ef9474b3da4f5241dedc5233fc926fb
>>> Removing node eb3d099c6ef9474b3da4f5241dedc5233fc926fb from cluster 192.168.1.6:6359
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.

       As you can see, it removed the replica from the cluster and also shut the service down. What happens when we bring 192.168.1.5:6360 back up?
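
       Restarting the node simply reuses the start command from earlier, pointing at its own configuration file:

[root@web-server01 redis-5.0.5]$ ./src/redis-server conf/redis-6360.conf

       With the node running again, check reports the following: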

[root@web-server01 redis-5.0.5]$ ./src/redis-cli --cluster check 192.168.1.7:6359
192.168.1.5:6359 (717ac1ea...) -> 0 keys | 4461 slots | 1 slaves.
192.168.1.6:6359 (21acfaaa...) -> 0 keys | 5462 slots | 1 slaves.
192.168.1.5:6361 (7cbe92a7...) -> 0 keys | 1000 slots | 1 slaves.
192.168.1.5:6362 (c60c6426...) -> 0 keys | 5461 slots | 0 slaves.
[OK] 0 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 192.168.1.7:6359)
S: db7fc16fbb9a8eeb758418578a53d0ab602b8dba 192.168.1.7:6359
   slots: (0 slots) slave
   replicates 7cbe92a7b45f4ba38ebdcef60f95df7e765c3fa4
S: 5bb1866000d39693f9dcd85387babf200cb8f078 192.168.1.6:6360
   slots: (0 slots) slave
   replicates 717ac1ea8c6577c7885ed4ee8e17e344c0f1c529
M: 717ac1ea8c6577c7885ed4ee8e17e344c0f1c529 192.168.1.5:6359
   slots:[1000-5460] (4461 slots) master
   1 additional replica(s)
M: 21acfaaa67e78e04648ed3aa242296c7290dec60 192.168.1.6:6359
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 66e4c6df9ca97c7133d0def9566258083a52b8eb 192.168.1.7:6360
   slots: (0 slots) slave
   replicates 21acfaaa67e78e04648ed3aa242296c7290dec60
M: 7cbe92a7b45f4ba38ebdcef60f95df7e765c3fa4 192.168.1.5:6361
   slots:[0-999] (1000 slots) master
   1 additional replica(s)
M: c60c642692cf64c9810e258d04f8c427c8afd456 192.168.1.5:6362
   slots:[10923-16383] (5461 slots) master
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

          Judging by this output, the 192.168.1.5:6360 node has not rejoined the Redis Cluster. However, when we look at it with the following command:

[root@web-server01 redis-5.0.5]$ ./src/redis-cli -p 6360
127.0.0.1:6360> cluster nodes
21acfaaa67e78e04648ed3aa242296c7290dec60 192.168.1.6:6359@16359 master - 0 1573025665645 10 connected 5461-10922
717ac1ea8c6577c7885ed4ee8e17e344c0f1c529 192.168.1.5:6359@16359 master - 0 1573025664643 1 connected 1000-5460
c60c642692cf64c9810e258d04f8c427c8afd456 192.168.1.5:6362@16362 master - 0 1573025664000 13 connected 10923-16383
47818a6cce6f288b8196d00677df6a826bd8a372 :0@0 master,noaddr - 1573025607607 1573025607607 10 disconnected
7cbe92a7b45f4ba38ebdcef60f95df7e765c3fa4 192.168.1.5:6361@16361 master - 0 1573025663000 14 connected 0-999
66e4c6df9ca97c7133d0def9566258083a52b8eb 192.168.1.7:6360@16360 slave 21acfaaa67e78e04648ed3aa242296c7290dec60 0 1573025662638 11 connected
5bb1866000d39693f9dcd85387babf200cb8f078 192.168.1.6:6360@16360 slave 717ac1ea8c6577c7885ed4ee8e17e344c0f1c529 0 1573025661637 8 connected
db7fc16fbb9a8eeb758418578a53d0ab602b8dba 192.168.1.7:6359@16359 slave 7cbe92a7b45f4ba38ebdcef60f95df7e765c3fa4 0 1573025661000 14 connected
eb3d099c6ef9474b3da4f5241dedc5233fc926fb 192.168.1.5:6360@16360 myself,slave c60c642692cf64c9810e258d04f8c427c8afd456 0 1573025607607 12 connected

      Notice that 192.168.1.5:6360 now sees itself as a replica of 192.168.1.5:6362, yet the check command above did not detect it at all, so the view reported by this node is stale: the restarted node still has its old cluster state on disk, while the rest of the cluster has already forgotten it. How do we fix this? With the CLUSTER command again; let's look at its subcommands:

127.0.0.1:6360> cluster help
 1) CLUSTER <subcommand> arg arg ... arg. Subcommands are:
 # assign new hash slots to the current node
 2) ADDSLOTS <slot> [slot ...] -- Assign slots to current node.
 # advance the cluster config epoch
 3) BUMPEPOCH -- Advance the cluster config epoch.
 # return the number of failure reports for the given node
 4) COUNT-failure-reports <node-id> -- Return number of failure reports for <node-id>.
 # count the keys stored in the given hash slot
 5) COUNTKEYSINSLOT <slot> - Return the number of keys in <slot>.
 # remove the given hash slots from the current node
 6) DELSLOTS <slot> [slot ...] -- Delete slots information from current node.
 # manually trigger a failover, promoting this replica to master
 7) FAILOVER [force|takeover] -- Promote current replica node to being a master.
 # remove a node from the cluster
 8) FORGET <node-id> -- Remove a node from the cluster.
 # return the keys stored by the current node in the given hash slot
 9) GETKEYSINSLOT <slot> <count> -- Return key names stored by current node in a slot.
# delete all hash slot information from the current node
10) FLUSHSLOTS -- Delete current node own slots information.
# return information about the cluster
11) INFO - Return information about the cluster.
# return the hash slot for the given key
12) KEYSLOT <key> -- Return the hash slot for <key>.
# introduce a node into the cluster (handshake with the given ip and port)
13) MEET <ip> <port> [bus-port] -- Connect nodes into a working cluster.
# return the node id of the current node
14) MYID -- Return the node id.
# return the cluster configuration as seen by this node
15) NODES -- Return cluster configuration seen by node. Output format:
# field order of each line in the cluster nodes output
16)     <id> <ip:port> <flags> <master> <pings> <pongs> <epoch> <link> <slot> ... <slot>
# make the current node a replica of the given node
17) REPLICATE <node-id> -- Configure current node as replica to <node-id>.
# reset all state of the current node
# both soft and hard turn a replica back into an empty master (the command fails on a master that still holds data), release all hash slots, clear any manual failover state, and remove every other node from the node table;
# hard additionally resets currentEpoch, configEpoch and lastVoteEpoch to 0 and generates a new random node id
18) RESET [hard|soft] -- Reset current node (default: soft).
# set the config epoch of the current node
19) SET-config-epoch <epoch> - Set config epoch of current node.
# set the state of a hash slot (importing, migrating, stable, or assigned to a node)
20) SETSLOT <slot> (importing|migrating|stable|node <node-id>) -- Set slot state.
# return the replicas of the given node
21) REPLICAS <node-id> -- Return <node-id> replicas.
# return the slot range to node mappings
22) SLOTS -- Return information about slots range mappings. Each range is made of:
23)     start, end, master and replicas IP addresses, ports and ids

       From the subcommands above we can see that cluster reset will restore a node to its initial state, after which we can add 192.168.1.5:6360 back into the cluster with the add-node procedure described earlier. Of course, if the node still holds data we first need to clear it with flushall and then run cluster reset.
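
       For example, once the node has been reset it could be attached again as a replica of 192.168.1.5:6362, reusing the add-node form shown earlier (the master id below is the one from our cluster output):

[root@web-server01 redis-5.0.5]$ ./src/redis-cli --cluster add-node 192.168.1.5:6360 192.168.1.6:6359 --cluster-slave --cluster-master-id c60c642692cf64c9810e258d04f8c427c8afd456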

       If we want to tear down and rebuild a cluster that already holds some data, and that data either does not matter or can be re-imported, we can use the command

[root@web-server01 redis-5.0.5]$ ./src/redis-cli -p 6359

       to connect to each node in turn and run the following on each of them:

127.0.0.1:6359> FLUSHALL
127.0.0.1:6359> cluster reset soft|hard

      Once every node has been reset, the cluster can be rebuilt step by step by following the procedure described earlier.

      When we write data on a node and the key's hash slot does not live on that node, the write is rejected:

[root@web-server01 redis-5.0.5]$ ./src/redis-cli -p 6359
127.0.0.1:6359> set map1 value1
(error) MOVED 8740 192.168.1.6:6359
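
       The slot number in the error message can be checked with CLUSTER KEYSLOT, which returns the hash slot a key maps to:

127.0.0.1:6359> cluster keyslot map1
(integer) 8740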

      In that case we can connect with cluster support enabled (-c) so that the client follows the redirection automatically:

[root@web-server01 redis-5.0.5]$ ./src/redis-cli -c -p 6359
127.0.0.1:6359> set map1 value1
-> Redirected to slot [8740] located at 192.168.1.6:6359
OK
192.168.1.6:6359> get map1
"value1"

 
