Redis Basics (8): A Redis Cluster Experiment

Goals of the experiment:

  • Run six redis processes on a single Linux virtual machine and form a Redis cluster from them
  • Read and write data in the cluster
  • Add master/slave nodes to the cluster
  • Remove master/slave nodes from the cluster

I. Preparing the Environment

1.1 Create a folder named clusterconf under redis's bin directory

1.2 Create six redis config files in the bin directory.

(screenshot of the resulting bin directory listing)

1.3 The configuration files have the following contents:

redis-6500.conf:

port 6500
daemonize yes
pidfile "/var/run/redis_6500.pid"
logfile "/usr/local/redis/bin/redis-6500.log"
dir "/usr/local/redis/bin"
dbfilename "dump-6500.rdb"

cluster-enabled yes
cluster-config-file "clusterconf/cluster_6500.conf"
cluster-node-timeout 5000

redis-6501.conf:

port 6501
daemonize yes
pidfile "/var/run/redis_6501.pid"
logfile "/usr/local/redis/bin/redis-6501.log"
dir "/usr/local/redis/bin"
dbfilename "dump-6501.rdb"

cluster-enabled yes
cluster-config-file "clusterconf/cluster_6501.conf"
cluster-node-timeout 5000

redis-6502.conf:

port 6502
daemonize yes
pidfile "/var/run/redis_6502.pid"
logfile "/usr/local/redis/bin/redis-6502.log"
dir "/usr/local/redis/bin"
dbfilename "dump-6502.rdb"

cluster-enabled yes
cluster-config-file "clusterconf/cluster_6502.conf"
cluster-node-timeout 5000

redis-6503.conf:

port 6503
daemonize yes
pidfile "/var/run/redis_6503.pid"
logfile "/usr/local/redis/bin/redis-6503.log"
dir "/usr/local/redis/bin"
dbfilename "dump-6503.rdb"

cluster-enabled yes
cluster-config-file "clusterconf/cluster_6503.conf"
cluster-node-timeout 5000

redis-6504.conf:

port 6504
daemonize yes
pidfile "/var/run/redis_6504.pid"
logfile "/usr/local/redis/bin/redis-6504.log"
dir "/usr/local/redis/bin"
dbfilename "dump-6504.rdb"

cluster-enabled yes
cluster-config-file "clusterconf/cluster_6504.conf"
cluster-node-timeout 5000

redis-6505.conf:

port 6505
daemonize yes
pidfile "/var/run/redis_6505.pid"
logfile "/usr/local/redis/bin/redis-6505.log"
dir "/usr/local/redis/bin"
dbfilename "dump-6505.rdb"

cluster-enabled yes
cluster-config-file "clusterconf/cluster_6505.conf"
cluster-node-timeout 5000

Explanation of the settings:

  1. cluster-enabled yes: runs the server in cluster mode.
  2. cluster-config-file "clusterconf/cluster_6505.conf": the file in which this node persists the cluster state. Redis creates and manages it automatically; you only choose where it lives (each node needs its own file).
  3. cluster-node-timeout 5000: the maximum time, in milliseconds, a node may be unreachable before it is considered to be failing.

II. Creating the Cluster

2.1 Start a redis process from each of the six config files: ./redis-server redis-6500.conf, and likewise for 6501 through 6505.

(screenshot of the six running processes)

Try talking to one of these redis servers:

[root@localhost bin]# ./redis-cli -p 6500
127.0.0.1:6500> set name 123
(error) CLUSTERDOWN Hash slot not served
127.0.0.1:6500> get name
(error) CLUSTERDOWN Hash slot not served

As you can see, a redis server started in cluster mode cannot serve requests yet, because no hash slots have been assigned to it.

2.2 Create a cluster from the six redis processes

Run: ./redis-cli --cluster create 127.0.0.1:6500 127.0.0.1:6501 127.0.0.1:6502 127.0.0.1:6503 127.0.0.1:6504 127.0.0.1:6505 --cluster-replicas 1
Type yes at the prompt that follows.
The whole session looks like this:

[root@localhost bin]# ./redis-cli --cluster create 127.0.0.1:6500 127.0.0.1:6501 127.0.0.1:6502 127.0.0.1:6503 127.0.0.1:6504 127.0.0.1:6505 --cluster-replicas 1
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 127.0.0.1:6504 to 127.0.0.1:6500
Adding replica 127.0.0.1:6505 to 127.0.0.1:6501
Adding replica 127.0.0.1:6503 to 127.0.0.1:6502
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: 6158065dfe4af8d827c76e876cbc3324dc0de568 127.0.0.1:6500
   slots:[0-5460] (5461 slots) master
M: 682672924f6fdfbc91f81b31b54f6ce3a5a384be 127.0.0.1:6501
   slots:[5461-10922] (5462 slots) master
M: b8fc1f69333998c452d6fc0673273dde75f7131a 127.0.0.1:6502
   slots:[10923-16383] (5461 slots) master
S: 9f6ba917c94a321dab8a9808e26b404e2aea20d1 127.0.0.1:6503
   replicates b8fc1f69333998c452d6fc0673273dde75f7131a
S: 69f94f14efaa4e69330b8fec9d5d7658e77354f7 127.0.0.1:6504
   replicates 6158065dfe4af8d827c76e876cbc3324dc0de568
S: c54571e2e6a8e6ef031d29308bcac27dbd7b1abe 127.0.0.1:6505
   replicates 682672924f6fdfbc91f81b31b54f6ce3a5a384be
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
...
>>> Performing Cluster Check (using node 127.0.0.1:6500)
M: 6158065dfe4af8d827c76e876cbc3324dc0de568 127.0.0.1:6500
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: b8fc1f69333998c452d6fc0673273dde75f7131a 127.0.0.1:6502
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: c54571e2e6a8e6ef031d29308bcac27dbd7b1abe 127.0.0.1:6505
   slots: (0 slots) slave
   replicates 682672924f6fdfbc91f81b31b54f6ce3a5a384be
M: 682672924f6fdfbc91f81b31b54f6ce3a5a384be 127.0.0.1:6501
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 9f6ba917c94a321dab8a9808e26b404e2aea20d1 127.0.0.1:6503
   slots: (0 slots) slave
   replicates b8fc1f69333998c452d6fc0673273dde75f7131a
S: 69f94f14efaa4e69330b8fec9d5d7658e77354f7 127.0.0.1:6504
   slots: (0 slots) slave
   replicates 6158065dfe4af8d827c76e876cbc3324dc0de568
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

2.3 Check the cluster state:

2.3.1 ./redis-cli -p 6500 cluster nodes

(screenshot of the cluster nodes output)

The output shows, among other things, which slave belongs to which of the three master nodes and the slot ranges assigned to each master.

2.3.2 ./redis-cli -p 6500 cluster info

(screenshot of the cluster info output)

2.3.3 ./redis-cli --cluster check 127.0.0.1:6500

This command checks the state of every node in the cluster; it is used again in the next section.

III. Reading, Writing, and Checking Cluster Data

Connect to the cluster: ./redis-cli -c -p 6500

Note the -c flag: with it, redis-cli runs in cluster mode and can connect through any node in the cluster (master or slave).

Reading and writing values works just like standalone redis, with one difference: on access, redis-cli is redirected to the node responsible for the key's hash slot. A full session:

[root@localhost bin]# ./redis-cli -c -p 6500
127.0.0.1:6500> keys *
(empty list or set)
127.0.0.1:6500> set name xiaoming
-> Redirected to slot [5798] located at 127.0.0.1:6501
OK
127.0.0.1:6501> get name
"xiaoming"
127.0.0.1:6501> quit
[root@localhost bin]# ./redis-cli -c -p 6505
127.0.0.1:6505> get name
-> Redirected to slot [5798] located at 127.0.0.1:6501
"xiaoming"
127.0.0.1:6501> keys *
1) "name"
127.0.0.1:6501> set age 12
-> Redirected to slot [741] located at 127.0.0.1:6500
OK
127.0.0.1:6500> get age
"12"
127.0.0.1:6500> keys *
1) "age"

Notice the redirects during set and get: after a redirect, the port shown in the prompt changes. Also note that keys does not return the keys of the whole cluster, only those stored on the node currently being queried.
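The slot numbers in those redirects are not arbitrary: Redis Cluster maps every key to one of 16384 hash slots as CRC16(key) mod 16384, using the CRC-16/XMODEM variant. A minimal Python sketch that reproduces the slots seen above (it ignores {hash tag} extraction, which only applies to keys containing braces):

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC-16/XMODEM (poly 0x1021, init 0), the checksum Redis Cluster uses."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Hash slot for a plain key (no {hash tag} handling)."""
    return crc16_xmodem(key.encode()) % 16384

print(key_slot("name"))  # 5798, matching the redirect above
print(key_slot("age"))   # 741
```

The same number can be obtained from any running node with the CLUSTER KEYSLOT command, e.g. CLUSTER KEYSLOT name.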

Check the cluster: ./redis-cli --cluster check 127.0.0.1:6500

[root@localhost bin]# ./redis-cli --cluster check 127.0.0.1:6500
127.0.0.1:6500 (6158065d...) -> 1 keys | 5461 slots | 1 slaves.
127.0.0.1:6502 (b8fc1f69...) -> 0 keys | 5461 slots | 1 slaves.
127.0.0.1:6501 (68267292...) -> 1 keys | 5462 slots | 1 slaves.
[OK] 2 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 127.0.0.1:6500)
M: 6158065dfe4af8d827c76e876cbc3324dc0de568 127.0.0.1:6500
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: b8fc1f69333998c452d6fc0673273dde75f7131a 127.0.0.1:6502
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: c54571e2e6a8e6ef031d29308bcac27dbd7b1abe 127.0.0.1:6505
   slots: (0 slots) slave
   replicates 682672924f6fdfbc91f81b31b54f6ce3a5a384be
M: 682672924f6fdfbc91f81b31b54f6ce3a5a384be 127.0.0.1:6501
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 9f6ba917c94a321dab8a9808e26b404e2aea20d1 127.0.0.1:6503
   slots: (0 slots) slave
   replicates b8fc1f69333998c452d6fc0673273dde75f7131a
S: 69f94f14efaa4e69330b8fec9d5d7658e77354f7 127.0.0.1:6504
   slots: (0 slots) slave
   replicates 6158065dfe4af8d827c76e876cbc3324dc0de568
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

The check shows that the two keys were stored on different master nodes.

IV. Adding Nodes to the Cluster

4.1 Adding a master node

4.1.1 Create a config file named redis-6506.conf in the bin directory:

port 6506
daemonize yes
pidfile "/var/run/redis_6506.pid"
logfile "/usr/local/redis/bin/redis-6506.log"
dir "/usr/local/redis/bin"
dbfilename "dump-6506.rdb"

cluster-enabled yes
cluster-config-file "clusterconf/cluster_6506.conf"
cluster-node-timeout 5000

4.1.2 Start this redis server

./redis-server redis-6506.conf

[root@localhost bin]# ./redis-server redis-6506.conf 
[root@localhost bin]# cat redis-6506.log 
10459:C 24 Aug 2019 18:54:40.054 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
10459:C 24 Aug 2019 18:54:40.054 # Redis version=5.0.5, bits=64, commit=00000000, modified=0, pid=10459, just started
10459:C 24 Aug 2019 18:54:40.055 # Configuration loaded
10460:M 24 Aug 2019 18:54:40.063 * Increased maximum number of open files to 10032 (it was originally set to 1024).
10460:M 24 Aug 2019 18:54:40.066 * No cluster configuration found, I'm 20bba532711d156a14fe6068b4bc0692d5f0e875
10460:M 24 Aug 2019 18:54:40.076 * Running mode=cluster, port=6506.
10460:M 24 Aug 2019 18:54:40.077 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
10460:M 24 Aug 2019 18:54:40.077 # Server initialized
10460:M 24 Aug 2019 18:54:40.078 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
10460:M 24 Aug 2019 18:54:40.080 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
10460:M 24 Aug 2019 18:54:40.080 * Ready to accept connections

4.1.3 Add this redis process to the cluster as a master

./redis-cli --cluster add-node 127.0.0.1:6506 127.0.0.1:6500

Explanation:

  1. The first ip:port is the address of the new node to add.
  2. The second ip:port is the address of any node already in the cluster.
  3. This command adds a master. To add a slave instead, append the --cluster-slave option to the command (used later).

The session looks like this:

[root@localhost bin]# ./redis-cli --cluster add-node 127.0.0.1:6506 127.0.0.1:6500
>>> Adding node 127.0.0.1:6506 to cluster 127.0.0.1:6500
>>> Performing Cluster Check (using node 127.0.0.1:6500)
M: 6158065dfe4af8d827c76e876cbc3324dc0de568 127.0.0.1:6500
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: b8fc1f69333998c452d6fc0673273dde75f7131a 127.0.0.1:6502
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: c54571e2e6a8e6ef031d29308bcac27dbd7b1abe 127.0.0.1:6505
   slots: (0 slots) slave
   replicates 682672924f6fdfbc91f81b31b54f6ce3a5a384be
M: 682672924f6fdfbc91f81b31b54f6ce3a5a384be 127.0.0.1:6501
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 9f6ba917c94a321dab8a9808e26b404e2aea20d1 127.0.0.1:6503
   slots: (0 slots) slave
   replicates b8fc1f69333998c452d6fc0673273dde75f7131a
S: 69f94f14efaa4e69330b8fec9d5d7658e77354f7 127.0.0.1:6504
   slots: (0 slots) slave
   replicates 6158065dfe4af8d827c76e876cbc3324dc0de568
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 127.0.0.1:6506 to make it join the cluster.
[OK] New node added correctly.

4.1.4 Now check the cluster again: ./redis-cli -p 6500 cluster nodes and ./redis-cli --cluster check 127.0.0.1:6500

(screenshots of the two commands' output)

As the output shows, the new redis master has been added but has no slots assigned, so it cannot be used yet.

4.1.5 Assign slots to the new master

Run: ./redis-cli --cluster reshard 127.0.0.1:6500
It responds with:

[root@localhost bin]# ./redis-cli --cluster reshard 127.0.0.1:6500
>>> Performing Cluster Check (using node 127.0.0.1:6500)
M: 6158065dfe4af8d827c76e876cbc3324dc0de568 127.0.0.1:6500
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
M: 20bba532711d156a14fe6068b4bc0692d5f0e875 127.0.0.1:6506
   slots: (0 slots) master
M: b8fc1f69333998c452d6fc0673273dde75f7131a 127.0.0.1:6502
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
S: c54571e2e6a8e6ef031d29308bcac27dbd7b1abe 127.0.0.1:6505
   slots: (0 slots) slave
   replicates 682672924f6fdfbc91f81b31b54f6ce3a5a384be
M: 682672924f6fdfbc91f81b31b54f6ce3a5a384be 127.0.0.1:6501
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
S: 9f6ba917c94a321dab8a9808e26b404e2aea20d1 127.0.0.1:6503
   slots: (0 slots) slave
   replicates b8fc1f69333998c452d6fc0673273dde75f7131a
S: 69f94f14efaa4e69330b8fec9d5d7658e77354f7 127.0.0.1:6504
   slots: (0 slots) slave
   replicates 6158065dfe4af8d827c76e876cbc3324dc0de568
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 

This asks how many slots to move. Enter 5 for now as a toy amount (in practice you would move roughly an even share of the 16384 slots).

How many slots do you want to move (from 1 to 16384)? 5
What is the receiving node ID? 

This asks for the ID of the destination node. The ID of redis-6506 appears in the output above: 20bba532711d156a14fe6068b4bc0692d5f0e875. Enter it:

How many slots do you want to move (from 1 to 16384)? 5
What is the receiving node ID? 20bba532711d156a14fe6068b4bc0692d5f0e875
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1:

This asks which redis nodes to take the slots from; enter all:

How many slots do you want to move (from 1 to 16384)? 5
What is the receiving node ID? 20bba532711d156a14fe6068b4bc0692d5f0e875
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1: all

Ready to move 5 slots.
  Source nodes:
    M: 6158065dfe4af8d827c76e876cbc3324dc0de568 127.0.0.1:6500
       slots:[0-5460] (5461 slots) master
       1 additional replica(s)
    M: b8fc1f69333998c452d6fc0673273dde75f7131a 127.0.0.1:6502
       slots:[10923-16383] (5461 slots) master
       1 additional replica(s)
    M: 682672924f6fdfbc91f81b31b54f6ce3a5a384be 127.0.0.1:6501
       slots:[5461-10922] (5462 slots) master
       1 additional replica(s)
  Destination node:
    M: 20bba532711d156a14fe6068b4bc0692d5f0e875 127.0.0.1:6506
       slots: (0 slots) master
  Resharding plan:
    Moving slot 5461 from 682672924f6fdfbc91f81b31b54f6ce3a5a384be
    Moving slot 5462 from 682672924f6fdfbc91f81b31b54f6ce3a5a384be
    Moving slot 0 from 6158065dfe4af8d827c76e876cbc3324dc0de568
    Moving slot 10923 from b8fc1f69333998c452d6fc0673273dde75f7131a
Do you want to proceed with the proposed reshard plan (yes/no)? 

Type yes:

Do you want to proceed with the proposed reshard plan (yes/no)? yes
Moving slot 5461 from 127.0.0.1:6501 to 127.0.0.1:6506: 
Moving slot 5462 from 127.0.0.1:6501 to 127.0.0.1:6506: 
Moving slot 0 from 127.0.0.1:6500 to 127.0.0.1:6506: 
Moving slot 10923 from 127.0.0.1:6502 to 127.0.0.1:6506: 

Here you can see slots being moved to the newly added redis-6506, although in fact four slots were moved rather than five.
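Moving only 5 slots was just for demonstration. For a real reshard you would give each master an even share of the 16384 slots; a small sketch of that arithmetic (`balanced_slots` is a hypothetical helper, not a redis command):

```python
def balanced_slots(num_masters: int, total_slots: int = 16384) -> list:
    """Slot counts per master for an even split; early masters absorb the remainder."""
    base, remainder = divmod(total_slots, num_masters)
    return [base + 1 if i < remainder else base for i in range(num_masters)]

print(balanced_slots(3))  # [5462, 5461, 5461] -- same counts as the initial 5461/5462/5461 split
print(balanced_slots(4))  # [4096, 4096, 4096, 4096] -- the target once 6506 is a fourth master
```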

4.1.6 Check the cluster state: ./redis-cli --cluster check 127.0.0.1:6500

[root@localhost bin]# ./redis-cli --cluster check 127.0.0.1:6500
127.0.0.1:6500 (6158065d...) -> 1 keys | 5460 slots | 1 slaves.
127.0.0.1:6506 (20bba532...) -> 0 keys | 4 slots | 0 slaves.
127.0.0.1:6502 (b8fc1f69...) -> 0 keys | 5460 slots | 1 slaves.
127.0.0.1:6501 (68267292...) -> 1 keys | 5460 slots | 1 slaves.
[OK] 2 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 127.0.0.1:6500)
M: 6158065dfe4af8d827c76e876cbc3324dc0de568 127.0.0.1:6500
   slots:[1-5460] (5460 slots) master
   1 additional replica(s)
M: 20bba532711d156a14fe6068b4bc0692d5f0e875 127.0.0.1:6506
   slots:[0],[5461-5462],[10923] (4 slots) master
M: b8fc1f69333998c452d6fc0673273dde75f7131a 127.0.0.1:6502
   slots:[10924-16383] (5460 slots) master
   1 additional replica(s)
S: c54571e2e6a8e6ef031d29308bcac27dbd7b1abe 127.0.0.1:6505
   slots: (0 slots) slave
   replicates 682672924f6fdfbc91f81b31b54f6ce3a5a384be
M: 682672924f6fdfbc91f81b31b54f6ce3a5a384be 127.0.0.1:6501
   slots:[5463-10922] (5460 slots) master
   1 additional replica(s)
S: 9f6ba917c94a321dab8a9808e26b404e2aea20d1 127.0.0.1:6503
   slots: (0 slots) slave
   replicates b8fc1f69333998c452d6fc0673273dde75f7131a
S: 69f94f14efaa4e69330b8fec9d5d7658e77354f7 127.0.0.1:6504
   slots: (0 slots) slave
   replicates 6158065dfe4af8d827c76e876cbc3324dc0de568
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

This gives a clear picture of the slot distribution across the whole cluster.

4.1.7 Rebalancing the slot distribution

The new master became usable as soon as slots were assigned to it; this step just evens out the slot distribution.
Run: ./redis-cli --cluster rebalance --cluster-threshold 1 127.0.0.1:6500
The session looks like this:

[root@localhost bin]# ./redis-cli --cluster rebalance --cluster-threshold 1 127.0.0.1:6500
>>> Performing Cluster Check (using node 127.0.0.1:6500)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Rebalancing across 4 nodes. Total weight = 4.00
Moving 1364 slots from 127.0.0.1:6501 to 127.0.0.1:6506
##########...########## (progress output truncated)
Moving 1364 slots from 127.0.0.1:6502 to 127.0.0.1:6506
##########...########## (progress output truncated)
Moving 1364 slots from 127.0.0.1:6500 to 127.0.0.1:6506
##########...########## (progress output truncated)

The rebalance moved slots from each of the other masters to redis-6506.
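The figure of 1364 slots per source follows directly from the even-split target; a quick check in plain Python (the slot counts are the ones reported by the cluster check in 4.1.6):

```python
# Before the rebalance: the three old masters hold 5460 slots each, 6506 holds 4.
slots_before = {"6500": 5460, "6501": 5460, "6502": 5460, "6506": 4}
target = 16384 // len(slots_before)            # 4096 slots per master

# Each master above the target gives up its surplus.
surplus = {port: n - target for port, n in slots_before.items() if n > target}
deficit = target - slots_before["6506"]

print(surplus)   # {'6500': 1364, '6501': 1364, '6502': 1364}
print(deficit)   # 4092 == 3 * 1364, exactly filling 6506 up to 4096
```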

To see where the data ended up, check the cluster again: ./redis-cli --cluster check 127.0.0.1:6500

[root@localhost bin]# ./redis-cli --cluster check 127.0.0.1:6500
127.0.0.1:6500 (6158065d...) -> 0 keys | 4096 slots | 1 slaves.
127.0.0.1:6506 (20bba532...) -> 2 keys | 4096 slots | 0 slaves.
127.0.0.1:6502 (b8fc1f69...) -> 0 keys | 4096 slots | 1 slaves.
127.0.0.1:6501 (68267292...) -> 0 keys | 4096 slots | 1 slaves.
[OK] 2 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 127.0.0.1:6500)
M: 6158065dfe4af8d827c76e876cbc3324dc0de568 127.0.0.1:6500
   slots:[1365-5460] (4096 slots) master
   1 additional replica(s)
M: 20bba532711d156a14fe6068b4bc0692d5f0e875 127.0.0.1:6506
   slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
M: b8fc1f69333998c452d6fc0673273dde75f7131a 127.0.0.1:6502
   slots:[12288-16383] (4096 slots) master
   1 additional replica(s)
S: c54571e2e6a8e6ef031d29308bcac27dbd7b1abe 127.0.0.1:6505
   slots: (0 slots) slave
   replicates 682672924f6fdfbc91f81b31b54f6ce3a5a384be
M: 682672924f6fdfbc91f81b31b54f6ce3a5a384be 127.0.0.1:6501
   slots:[6827-10922] (4096 slots) master
   1 additional replica(s)
S: 9f6ba917c94a321dab8a9808e26b404e2aea20d1 127.0.0.1:6503
   slots: (0 slots) slave
   replicates b8fc1f69333998c452d6fc0673273dde75f7131a
S: 69f94f14efaa4e69330b8fec9d5d7658e77354f7 127.0.0.1:6504
   slots: (0 slots) slave
   replicates 6158065dfe4af8d827c76e876cbc3324dc0de568
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

4.2 Adding a slave node

4.2.1 Create a config file named redis-6507.conf in the bin directory with the following contents:

port 6507
daemonize yes
pidfile "/var/run/redis_6507.pid"
logfile "/usr/local/redis/bin/redis-6507.log"
dir "/usr/local/redis/bin"
dbfilename "dump-6507.rdb"

cluster-enabled yes
cluster-config-file "clusterconf/cluster_6507.conf"
cluster-node-timeout 5000

4.2.2 Start this redis node (6507): ./redis-server redis-6507.conf

[root@localhost bin]# ./redis-server redis-6507.conf
[root@localhost bin]# cat redis-6507.log 
10599:C 24 Aug 2019 19:44:25.786 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
10599:C 24 Aug 2019 19:44:25.787 # Redis version=5.0.5, bits=64, commit=00000000, modified=0, pid=10599, just started
10599:C 24 Aug 2019 19:44:25.787 # Configuration loaded
10600:M 24 Aug 2019 19:44:25.795 * Increased maximum number of open files to 10032 (it was originally set to 1024).
10600:M 24 Aug 2019 19:44:25.797 * No cluster configuration found, I'm 739f4b7fc37e349db608654c95ab95eb6ae3fec7
10600:M 24 Aug 2019 19:44:25.807 * Running mode=cluster, port=6507.
10600:M 24 Aug 2019 19:44:25.809 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
10600:M 24 Aug 2019 19:44:25.809 # Server initialized
10600:M 24 Aug 2019 19:44:25.809 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
10600:M 24 Aug 2019 19:44:25.811 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
10600:M 24 Aug 2019 19:44:25.811 * Ready to accept connections

4.2.3 Add this node (6507) to the cluster: ./redis-cli --cluster add-node 127.0.0.1:6507 127.0.0.1:6500 --cluster-slave

The session looks like this:

[root@localhost bin]# ./redis-cli --cluster add-node 127.0.0.1:6507 127.0.0.1:6500 --cluster-slave
>>> Adding node 127.0.0.1:6507 to cluster 127.0.0.1:6500
>>> Performing Cluster Check (using node 127.0.0.1:6500)
M: 6158065dfe4af8d827c76e876cbc3324dc0de568 127.0.0.1:6500
   slots:[1365-5460] (4096 slots) master
   1 additional replica(s)
M: 20bba532711d156a14fe6068b4bc0692d5f0e875 127.0.0.1:6506
   slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
M: b8fc1f69333998c452d6fc0673273dde75f7131a 127.0.0.1:6502
   slots:[12288-16383] (4096 slots) master
   1 additional replica(s)
S: c54571e2e6a8e6ef031d29308bcac27dbd7b1abe 127.0.0.1:6505
   slots: (0 slots) slave
   replicates 682672924f6fdfbc91f81b31b54f6ce3a5a384be
M: 682672924f6fdfbc91f81b31b54f6ce3a5a384be 127.0.0.1:6501
   slots:[6827-10922] (4096 slots) master
   1 additional replica(s)
S: 9f6ba917c94a321dab8a9808e26b404e2aea20d1 127.0.0.1:6503
   slots: (0 slots) slave
   replicates b8fc1f69333998c452d6fc0673273dde75f7131a
S: 69f94f14efaa4e69330b8fec9d5d7658e77354f7 127.0.0.1:6504
   slots: (0 slots) slave
   replicates 6158065dfe4af8d827c76e876cbc3324dc0de568
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Automatically selected master 127.0.0.1:6506
>>> Send CLUSTER MEET to node 127.0.0.1:6507 to make it join the cluster.
Waiting for the cluster to join

>>> Configure node as replica of 127.0.0.1:6506.
[OK] New node added correctly.

As the output shows, the newly added node 6507 automatically became a slave of 6506 (the one master that had no replica).

Check the cluster once more:

[root@localhost bin]# ./redis-cli --cluster check 127.0.0.1:6500
127.0.0.1:6500 (6158065d...) -> 0 keys | 4096 slots | 1 slaves.
127.0.0.1:6506 (20bba532...) -> 2 keys | 4096 slots | 1 slaves.
127.0.0.1:6502 (b8fc1f69...) -> 0 keys | 4096 slots | 1 slaves.
127.0.0.1:6501 (68267292...) -> 0 keys | 4096 slots | 1 slaves.
[OK] 2 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 127.0.0.1:6500)
M: 6158065dfe4af8d827c76e876cbc3324dc0de568 127.0.0.1:6500
   slots:[1365-5460] (4096 slots) master
   1 additional replica(s)
M: 20bba532711d156a14fe6068b4bc0692d5f0e875 127.0.0.1:6506
   slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
   1 additional replica(s)
M: b8fc1f69333998c452d6fc0673273dde75f7131a 127.0.0.1:6502
   slots:[12288-16383] (4096 slots) master
   1 additional replica(s)
S: c54571e2e6a8e6ef031d29308bcac27dbd7b1abe 127.0.0.1:6505
   slots: (0 slots) slave
   replicates 682672924f6fdfbc91f81b31b54f6ce3a5a384be
M: 682672924f6fdfbc91f81b31b54f6ce3a5a384be 127.0.0.1:6501
   slots:[6827-10922] (4096 slots) master
   1 additional replica(s)
S: 9f6ba917c94a321dab8a9808e26b404e2aea20d1 127.0.0.1:6503
   slots: (0 slots) slave
   replicates b8fc1f69333998c452d6fc0673273dde75f7131a
S: 739f4b7fc37e349db608654c95ab95eb6ae3fec7 127.0.0.1:6507
   slots: (0 slots) slave
   replicates 20bba532711d156a14fe6068b4bc0692d5f0e875
S: 69f94f14efaa4e69330b8fec9d5d7658e77354f7 127.0.0.1:6504
   slots: (0 slots) slave
   replicates 6158065dfe4af8d827c76e876cbc3324dc0de568
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

4.2.4 You can also set a master/slave relationship explicitly.

For example, to make node 6507 a slave of node 6506: first connect to 6507 with redis-cli, then run cluster replicate 20bba532711d156a14fe6068b4bc0692d5f0e875

Note: this command was not tested in this experiment; the ID after it is that of node 6506.

V. Removing Nodes from the Cluster

5.1 Removing a slave node

This one is simple; just run: ./redis-cli --cluster del-node 127.0.0.1:6500 739f4b7fc37e349db608654c95ab95eb6ae3fec7

Note that the ID above is the ID of node redis-6507.

The session looks like this:

[root@localhost bin]# ./redis-cli --cluster del-node 127.0.0.1:6500 739f4b7fc37e349db608654c95ab95eb6ae3fec7
>>> Removing node 739f4b7fc37e349db608654c95ab95eb6ae3fec7 from cluster 127.0.0.1:6500
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.
[root@localhost bin]# ./redis-cli --cluster check 127.0.0.1:6500
127.0.0.1:6500 (6158065d...) -> 0 keys | 4096 slots | 1 slaves.
127.0.0.1:6506 (20bba532...) -> 2 keys | 4096 slots | 0 slaves.
127.0.0.1:6502 (b8fc1f69...) -> 0 keys | 4096 slots | 1 slaves.
127.0.0.1:6501 (68267292...) -> 0 keys | 4096 slots | 1 slaves.
[OK] 2 keys in 4 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 127.0.0.1:6500)
M: 6158065dfe4af8d827c76e876cbc3324dc0de568 127.0.0.1:6500
   slots:[1365-5460] (4096 slots) master
   1 additional replica(s)
M: 20bba532711d156a14fe6068b4bc0692d5f0e875 127.0.0.1:6506
   slots:[0-1364],[5461-6826],[10923-12287] (4096 slots) master
M: b8fc1f69333998c452d6fc0673273dde75f7131a 127.0.0.1:6502
   slots:[12288-16383] (4096 slots) master
   1 additional replica(s)
S: c54571e2e6a8e6ef031d29308bcac27dbd7b1abe 127.0.0.1:6505
   slots: (0 slots) slave
   replicates 682672924f6fdfbc91f81b31b54f6ce3a5a384be
M: 682672924f6fdfbc91f81b31b54f6ce3a5a384be 127.0.0.1:6501
   slots:[6827-10922] (4096 slots) master
   1 additional replica(s)
S: 9f6ba917c94a321dab8a9808e26b404e2aea20d1 127.0.0.1:6503
   slots: (0 slots) slave
   replicates b8fc1f69333998c452d6fc0673273dde75f7131a
S: 69f94f14efaa4e69330b8fec9d5d7658e77354f7 127.0.0.1:6504
   slots: (0 slots) slave
   replicates 6158065dfe4af8d827c76e876cbc3324dc0de568
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
[root@localhost bin]# ps -aux|grep redis
root      10044  0.2  0.3 145888  3136 ?        Ssl  16:09   0:35 ./redis-server *:6500 [cluster]
root      10049  0.2  0.3 145852  3160 ?        Ssl  16:09   0:35 ./redis-server *:6501 [cluster]
root      10054  0.2  0.3 145972  3168 ?        Ssl  16:09   0:35 ./redis-server *:6502 [cluster]
root      10059  0.2  0.2 145688  2904 ?        Ssl  16:09   0:31 ./redis-server *:6503 [cluster]
root      10064  0.2  0.2 145688  2900 ?        Ssl  16:09   0:31 ./redis-server *:6504 [cluster]
root      10069  0.2  0.2 145716  2972 ?        Ssl  16:09   0:31 ./redis-server *:6505 [cluster]
root      10460  0.4  0.3 145724  3004 ?        Ssl  18:54   0:16 ./redis-server *:6506 [cluster]
root      10719  0.0  0.0 112724   992 pts/5    R+   19:59   0:00 grep --color=auto redis
[root@localhost bin]# cat redis-6507.log 
10599:C 24 Aug 2019 19:44:25.786 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
10599:C 24 Aug 2019 19:44:25.787 # Redis version=5.0.5, bits=64, commit=00000000, modified=0, pid=10599, just started
10599:C 24 Aug 2019 19:44:25.787 # Configuration loaded
10600:M 24 Aug 2019 19:44:25.795 * Increased maximum number of open files to 10032 (it was originally set to 1024).
10600:M 24 Aug 2019 19:44:25.797 * No cluster configuration found, I'm 739f4b7fc37e349db608654c95ab95eb6ae3fec7
10600:M 24 Aug 2019 19:44:25.807 * Running mode=cluster, port=6507.
10600:M 24 Aug 2019 19:44:25.809 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
10600:M 24 Aug 2019 19:44:25.809 # Server initialized
10600:M 24 Aug 2019 19:44:25.809 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
10600:M 24 Aug 2019 19:44:25.811 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
10600:M 24 Aug 2019 19:44:25.811 * Ready to accept connections
10600:M 24 Aug 2019 19:46:14.277 # IP address for this node updated to 127.0.0.1
10600:S 24 Aug 2019 19:46:15.166 * Before turning into a replica, using my master parameters to synthesize a cached master: I may be able to synchronize with the new master with just a partial transfer.
10600:S 24 Aug 2019 19:46:15.167 # Cluster state changed: ok
10600:S 24 Aug 2019 19:46:16.113 * Connecting to MASTER 127.0.0.1:6506
10600:S 24 Aug 2019 19:46:16.113 * MASTER <-> REPLICA sync started
10600:S 24 Aug 2019 19:46:16.114 * Non blocking connect for SYNC fired the event.
10600:S 24 Aug 2019 19:46:16.114 * Master replied to PING, replication can continue...
10600:S 24 Aug 2019 19:46:16.115 * Trying a partial resynchronization (request f45f1484b47b0bc2e513c2091a4b613c79c7b9e0:1).
10600:S 24 Aug 2019 19:46:16.117 * Full resync from master: a18ac93871ce637c902ce2592564eee7ba916cff:0
10600:S 24 Aug 2019 19:46:16.117 * Discarding previously cached master state.
10600:S 24 Aug 2019 19:46:16.215 * MASTER <-> REPLICA sync: receiving 202 bytes from master
10600:S 24 Aug 2019 19:46:16.215 * MASTER <-> REPLICA sync: Flushing old data
10600:S 24 Aug 2019 19:46:16.215 * MASTER <-> REPLICA sync: Loading DB in memory
10600:S 24 Aug 2019 19:46:16.216 * MASTER <-> REPLICA sync: Finished with success
10600:S 24 Aug 2019 19:57:58.196 # User requested shutdown...
10600:S 24 Aug 2019 19:57:58.196 * Removing the pid file.
10600:S 24 Aug 2019 19:57:58.196 # Redis is now ready to exit, bye bye...

As the session shows, a single command removes a redis node, and the removed node's process is shut down as well.

5.2 Removing a master node (not demonstrated here; all the steps already appeared above)

Removing a master is similar to removing a slave, except that the master has to be emptied first. The full procedure:

  1. Either delete the master's slaves or move them to other masters (cluster replicate).
  2. Move the slots assigned to the master to other masters (./redis-cli --cluster reshard).
  3. Delete the master with the same del-node command used for slaves.