Redis Cluster

Introduction

Redis Cluster is the official distributed database solution for Redis: a cluster shares data across nodes through sharding and provides replication and failover.
This article deploys Redis from Docker images and uses redis-cli to create a cluster, repair it, verify its slot state, add and remove nodes, and rebalance the slots.

At some point (it is unclear in exactly which version) redis-cli took over all of redis-trib's functionality; version 6.0.5 is known to support cluster operations through redis-cli.
The cluster subcommands are listed below; this article demonstrates the create, check, info, fix, reshard, rebalance, add-node and del-node commands.

root@3d4514dc0f17:/data# redis-cli --cluster help 
Cluster Manager Commands:
  create         host1:port1 ... hostN:portN
                 --cluster-replicas <arg>
  check          host:port
                 --cluster-search-multiple-owners
  info           host:port
  fix            host:port
                 --cluster-search-multiple-owners
                 --cluster-fix-with-unreachable-masters
  reshard        host:port
                 --cluster-from <arg>
                 --cluster-to <arg>
                 --cluster-slots <arg>
                 --cluster-yes
                 --cluster-timeout <arg>
                 --cluster-pipeline <arg>
                 --cluster-replace
  rebalance      host:port
                 --cluster-weight <node1=w1...nodeN=wN>
                 --cluster-use-empty-masters
                 --cluster-timeout <arg>
                 --cluster-simulate
                 --cluster-pipeline <arg>
                 --cluster-threshold <arg>
                 --cluster-replace
  add-node       new_host:new_port existing_host:existing_port
                 --cluster-slave
                 --cluster-master-id <arg>
  del-node       host:port node_id
  call           host:port command arg arg .. arg
  set-timeout    host:port milliseconds
  import         host:port
                 --cluster-from <arg>
                 --cluster-copy
                 --cluster-replace
  backup         host:port backup_directory
  help 

Creating the cluster

This article assumes Docker is already installed and the latest Redis image has been pulled.
The environment is Windows 10.

Step 1: create a dedicated network

Create the network the Redis cluster containers will communicate over.
Demo

C:\Users>docker network create rc-demo
3ff7721f8c06519410de82e36874c073f1167c197476204972233e0aa0156b4b

Step 2: start three Redis containers

A quick rundown of the docker run options:

  • -d runs the container in the background
  • -p maps a container port to a host port
  • --name sets the container's name
  • -m limits the container's memory
  • --network attaches the container to the given network
  • -v mounts a host file into the container
  • redis-server /data/redis.conf is the command the container executes

The redis.conf file mounted into the container is the one provided in the Redis GitHub repository,
with the following changes:

# daemonize yes --commented out so Redis runs in the foreground
cluster-enabled yes --enables cluster mode
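For context, the cluster-related part of a minimal redis.conf can be sketched as below. The cluster-config-file and cluster-node-timeout directives are not used in this article's demo; they are standard Redis options, shown here with their default values:

```conf
# run as the foreground process (daemonize left commented out)
# daemonize yes

# start the instance in cluster mode
cluster-enabled yes

# cluster state file, written and maintained by Redis itself -- do not edit
cluster-config-file nodes.conf

# milliseconds of unreachability before a node is considered failing
cluster-node-timeout 15000
```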

Demo

C:\Users>docker run -d -p8002:6379 --name rc-demo-3 -m50m --network rc-demo -v D:\data\rc-demo\redis.conf:/data/redis.conf redis redis-server /data/redis.conf
e362be85c7c73cdebbee0bd2e37041234d540509c5e10caf057b0ff61e9016fa

C:\Users>docker run -d -p8001:6379 --name rc-demo-2 -m50m --network rc-demo -v D:\data\rc-demo\redis.conf:/data/redis.conf redis redis-server /data/redis.conf
520e715c582e7de2c9bf97981edf2cdb3c607d76a3481f19aab09434f249bc7d

C:\Users>docker run -d -p8000:6379 --name rc-demo-1 -m50m --network rc-demo -v D:\data\rc-demo\redis.conf:/data/redis.conf redis redis-server /data/redis.conf
e2356f57fb7252451460c44a6b657327c329162756107dbacdcd3351276b7c38

Step 3: create the cluster and assign slots

When Redis starts with cluster mode enabled, each node initially forms a single-node cluster of its own; the nodes are joined together with cluster meet ip:port. This article uses the create command provided by redis-cli, which meets all the nodes and evenly assigns the slots in one step: redis-cli --cluster create 172.21.0.2:6379 172.21.0.3:6379 172.21.0.4:6379
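The even split that create proposes can be reproduced with a short sketch. This mimics redis-cli's allocation for equal-weight masters; the exact rounding redis-cli uses internally is an assumption inferred from the output below, not taken from its source:

```python
def allocate_slots(num_masters: int, total_slots: int = 16384):
    """Split the slot space into contiguous ranges, one per master.

    Approximates the even allocation redis-cli --cluster create proposes.
    """
    per_node = total_slots / num_masters
    ranges = []
    for i in range(num_masters):
        first = round(i * per_node)
        last = round((i + 1) * per_node) - 1
        ranges.append((first, last))
    return ranges

# For 3 masters this reproduces the ranges shown in the demo below:
print(allocate_slots(3))  # [(0, 5460), (5461, 10922), (10923, 16383)]
```

Note how the remainder slot ends up in the middle master's range (5462 slots), exactly as in the redis-cli output.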

Demo
Slots are distributed evenly by default; just answer yes.

root@e362be85c7c7:/data# redis-cli --cluster create 172.21.0.2:6379 172.21.0.3:6379 172.21.0.4:6379
>>> Performing hash slots allocation on 3 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
M: eebf083014a1e62354b4ad5f2e962d8b81536808 172.21.0.2:6379
   slots:[0-5460] (5461 slots) master
M: b9a4c11fef3c442868cfab8a91660b3320fc2ecd 172.21.0.3:6379
   slots:[5461-10922] (5462 slots) master
M: d1cafc60288858550706f5c400e205ff076ecf80 172.21.0.4:6379
   slots:[10923-16383] (5461 slots) master
Can I set the above configuration? (type 'yes' to accept): 

At this point the cluster has been fully created:

>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
..
>>> Performing Cluster Check (using node 172.21.0.2:6379)
M: eebf083014a1e62354b4ad5f2e962d8b81536808 172.21.0.2:6379
   slots:[0-5460] (5461 slots) master
M: d1cafc60288858550706f5c400e205ff076ecf80 172.21.0.4:6379
   slots:[10923-16383] (5461 slots) master
M: b9a4c11fef3c442868cfab8a91660b3320fc2ecd 172.21.0.3:6379
   slots:[5461-10922] (5462 slots) master
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

Verification

Run set dj dj on any node.
A MOVED reply means the slot for the key is not owned by the current node; log in to the node indicated in the reply and run the command there. Client libraries in other languages handle this automatically: they first compute the key's slot, then send the command to the node that owns that slot.

root@e2356f57fb72:/data# redis-cli 
127.0.0.1:6379> set dj dj
(error) MOVED 2105 172.21.0.2:6379
127.0.0.1:6379> set dj dj
(error) MOVED 2105 172.21.0.2:6379
127.0.0.1:6379> 

The set succeeds:

root@e362be85c7c7:/data# redis-cli 
127.0.0.1:6379> set cdj cdj
OK
127.0.0.1:6379> get dj
"dj"
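A cluster-aware client automates what we just did by hand: it hashes the key with CRC16 (the XMODEM variant) modulo 16384 to find the slot, and on a MOVED reply re-sends the command to the node named in the reply. A minimal sketch in Python (the helper names are illustrative; real clients such as redis-py ship this logic built in):

```python
def crc16_xmodem(data: bytes) -> int:
    # CRC16/XMODEM: poly 0x1021, init 0, no reflection -- the CRC Redis Cluster uses
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def hash_slot(key: str) -> int:
    # honour hash tags: only the first {...} with non-empty content is hashed
    start = key.find('{')
    if start != -1:
        end = key.find('}', start + 1)
        if end != -1 and end != start + 1:
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

def parse_moved(reply: str):
    # turn "MOVED 2105 172.21.0.2:6379" into (slot, host, port)
    _, slot, addr = reply.split()
    host, port = addr.rsplit(':', 1)
    return int(slot), host, int(port)

print(hash_slot("foo"))  # 12182, the example slot from the Redis Cluster spec
print(parse_moved("MOVED 2105 172.21.0.2:6379"))
```

The hash-tag rule is why keys like {user}.a and {user}.b always land in the same slot, which makes multi-key operations on them possible inside a cluster.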

Verifying cluster state

The check command shows each node's IP and cluster ID: redis-cli --cluster check 127.0.0.1:6379
Demo

root@e362be85c7c7:/data# redis-cli --cluster check 127.0.0.1:6379
127.0.0.1:6379 (eebf0830...) -> 1 keys | 5461 slots | 0 slaves.
172.21.0.4:6379 (d1cafc60...) -> 0 keys | 5461 slots | 0 slaves.
172.21.0.3:6379 (b9a4c11f...) -> 0 keys | 5462 slots | 0 slaves.
[OK] 1 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 127.0.0.1:6379)
M: eebf083014a1e62354b4ad5f2e962d8b81536808 127.0.0.1:6379
   slots:[0-5460] (5461 slots) master
M: d1cafc60288858550706f5c400e205ff076ecf80 172.21.0.4:6379
   slots:[10923-16383] (5461 slots) master
M: b9a4c11fef3c442868cfab8a91660b3320fc2ecd 172.21.0.3:6379
   slots:[5461-10922] (5462 slots) master
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

Viewing slot assignment and key distribution

Log in to any node in the cluster and run: redis-cli --cluster info 127.0.0.1:6379
Demo

root@e362be85c7c7:/data# redis-cli --cluster info 127.0.0.1:6379
127.0.0.1:6379 (eebf0830...) -> 1 keys | 5461 slots | 0 slaves.
172.21.0.4:6379 (d1cafc60...) -> 0 keys | 5461 slots | 0 slaves.
172.21.0.3:6379 (b9a4c11f...) -> 0 keys | 5462 slots | 0 slaves.

Resharding slots

The reshard command provided by redis-cli reassigns slots, but you can only specify how many slots to move, not which specific slot numbers.
Run redis-cli --cluster reshard 127.0.0.1:6379 to start the reshard, then enter how many slots to move and which node to move them from and to.

root@e2356f57fb72:/data# redis-cli --cluster reshard 127.0.0.1:6379
>>> Performing Cluster Check (using node 127.0.0.1:6379)
M: d1cafc60288858550706f5c400e205ff076ecf80 127.0.0.1:6379
   slots:[0-1999],[10923-16383] (7461 slots) master
M: b9a4c11fef3c442868cfab8a91660b3320fc2ecd 172.21.0.3:6379
   slots:[2000-10922] (8923 slots) master
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 3000
What is the receiving node ID? d1cafc60288858550706f5c400e205ff076ecf80
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1: b9a4c11fef3c442868cfab8a91660b3320fc2ecd
Source node #2: done

The console prints each slot to be moved; answer yes:

    Moving slot 4992 from b9a4c11fef3c442868cfab8a91660b3320fc2ecd
    Moving slot 4993 from b9a4c11fef3c442868cfab8a91660b3320fc2ecd
    Moving slot 4994 from b9a4c11fef3c442868cfab8a91660b3320fc2ecd
    Moving slot 4995 from b9a4c11fef3c442868cfab8a91660b3320fc2ecd
    Moving slot 4996 from b9a4c11fef3c442868cfab8a91660b3320fc2ecd
    Moving slot 4997 from b9a4c11fef3c442868cfab8a91660b3320fc2ecd
    Moving slot 4998 from b9a4c11fef3c442868cfab8a91660b3320fc2ecd
    Moving slot 4999 from b9a4c11fef3c442868cfab8a91660b3320fc2ecd
Do you want to proceed with the proposed reshard plan (yes/no)? yes

Removing a node

A node cannot be removed while it still owns slots. Use reshard to move all of the node's slots away first, then delete it.

root@e362be85c7c7:/data# redis-cli --cluster del-node 127.0.0.1:6379 eebf083014a1e62354b4ad5f2e962d8b81536808
>>> Removing node eebf083014a1e62354b4ad5f2e962d8b81536808 from cluster 127.0.0.1:6379
[ERR] Node 127.0.0.1:6379 is not empty! Reshard data away and try again.

Once all of its slots have been moved away, the node can be deleted:

root@e362be85c7c7:/data# redis-cli --cluster check 127.0.0.1:6379
127.0.0.1:6379 (eebf0830...) -> 0 keys | 0 slots | 0 slaves.
172.21.0.4:6379 (d1cafc60...) -> 0 keys | 7461 slots | 0 slaves.
172.21.0.3:6379 (b9a4c11f...) -> 0 keys | 8923 slots | 0 slaves.
[OK] 0 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 127.0.0.1:6379)
M: eebf083014a1e62354b4ad5f2e962d8b81536808 127.0.0.1:6379
   slots: (0 slots) master
M: d1cafc60288858550706f5c400e205ff076ecf80 172.21.0.4:6379
   slots:[0-1999],[10923-16383] (7461 slots) master
M: b9a4c11fef3c442868cfab8a91660b3320fc2ecd 172.21.0.3:6379
   slots:[2000-10922] (8923 slots) master
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
root@e362be85c7c7:/data# redis-cli --cluster del-node 127.0.0.1:6379 eebf083014a1e62354b4ad5f2e962d8b81536808
>>> Removing node eebf083014a1e62354b4ad5f2e962d8b81536808 from cluster 127.0.0.1:6379
>>> Sending CLUSTER FORGET messages to the cluster...
>>> Sending CLUSTER RESET SOFT to the deleted node.
root@e362be85c7c7:/data# 

Rebalancing slots

Warning!
This command must not be interrupted while it is running; an interruption leaves the cluster unusable, which can be repaired with the fix command.

As nodes are added and removed, the slot assignment becomes uneven; the rebalance command redistributes the slots.
Before rebalancing:

root@e2356f57fb72:/data# redis-cli --cluster info 127.0.0.1:6379
127.0.0.1:6379 (d1cafc60...) -> 0 keys | 10461 slots | 0 slaves.
172.21.0.3:6379 (b9a4c11f...) -> 0 keys | 5923 slots | 0 slaves.
[OK] 0 keys in 2 masters.
0.00 keys per slot on average.
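Given the state above, what rebalance computes can be sketched as follows: each master's expected share is total/n (all weights equal to 1 here), and slots are moved from overloaded to underloaded masters. This is a simplified model of redis-cli's behaviour, not its actual source; it ignores weights, the threshold option, and remainder slots when the total does not divide evenly:

```python
def rebalance_plan(slots_by_node: dict, total: int = 16384):
    """Plan (source, destination, count) moves toward an equal share per master.

    Simplified model of redis-cli --cluster rebalance with equal weights.
    """
    expected = total // len(slots_by_node)
    overs = [[n, c - expected] for n, c in slots_by_node.items() if c > expected]
    unders = [[n, expected - c] for n, c in slots_by_node.items() if c < expected]
    moves = []
    for src in overs:
        for dst in unders:
            count = min(src[1], dst[1])
            if count:
                moves.append((src[0], dst[0], count))
                src[1] -= count
                dst[1] -= count
    return moves

# the 2-master state from the info output above: 10461 + 5923 slots
print(rebalance_plan({"127.0.0.1:6379": 10461, "172.21.0.3:6379": 5923}))
# [('127.0.0.1:6379', '172.21.0.3:6379', 2269)]
```

Moving 2269 slots brings both masters to 8192, matching the info output after the rebalance below.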

Demo

root@e2356f57fb72:/data# redis-cli --cluster rebalance 127.0.0.1:6379
>>> Performing Cluster Check (using node 127.0.0.1:6379)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...

Evenly distributed:

root@e2356f57fb72:/data# redis-cli --cluster info 127.0.0.1:6379
127.0.0.1:6379 (d1cafc60...) -> 0 keys | 8192 slots | 0 slaves.
172.21.0.3:6379 (b9a4c11f...) -> 0 keys | 8192 slots | 0 slaves.
[OK] 0 keys in 2 masters.
0.00 keys per slot on average.

Repairing the cluster

If the rebalance command is interrupted, the cluster can be repaired with the fix command.

After interrupting rebalance:

root@e2356f57fb72:/data# redis-cli --cluster check 127.0.0.1:6379
127.0.0.1:6379 (d1cafc60...) -> 0 keys | 7488 slots | 0 slaves.
172.21.0.3:6379 (b9a4c11f...) -> 0 keys | 8895 slots | 0 slaves.
[OK] 0 keys in 2 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 127.0.0.1:6379)
M: d1cafc60288858550706f5c400e205ff076ecf80 127.0.0.1:6379
   slots:[2973-4999],[10923-16383] (7488 slots) master
M: b9a4c11fef3c442868cfab8a91660b3320fc2ecd 172.21.0.3:6379
   slots:[0-2971],[5000-10922] (8895 slots) master
[ERR] Nodes don't agree about configuration!
>>> Check for open slots...
[WARNING] Node 172.21.0.3:6379 has slots in importing state 2972.
[WARNING] The following slots are open: 2972.
>>> Check slots coverage...
[ERR] Not all 16384 slots are covered by nodes.

Repair it with redis-cli --cluster fix 127.0.0.1:6379:

root@e2356f57fb72:/data# redis-cli --cluster fix 127.0.0.1:6379
127.0.0.1:6379 (d1cafc60...) -> 0 keys | 7488 slots | 0 slaves.
172.21.0.3:6379 (b9a4c11f...) -> 0 keys | 8895 slots | 0 slaves.
[OK] 0 keys in 2 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 127.0.0.1:6379)
M: d1cafc60288858550706f5c400e205ff076ecf80 127.0.0.1:6379
   slots:[2973-4999],[10923-16383] (7488 slots) master
M: b9a4c11fef3c442868cfab8a91660b3320fc2ecd 172.21.0.3:6379
   slots:[0-2971],[5000-10922] (8895 slots) master
[ERR] Nodes don't agree about configuration!
>>> Check for open slots...
[WARNING] Node 172.21.0.3:6379 has slots in importing state 2972.
[WARNING] The following slots are open: 2972.
>>> Fixing open slot 2972
Set as importing in: 172.21.0.3:6379
>>> No single clear owner for the slot, selecting an owner by # of keys...
*** Configuring 127.0.0.1:6379 as the slot owner
>>> Case 2: Moving all the 2972 slot keys to its owner 127.0.0.1:6379
Moving slot 2972 from 172.21.0.3:6379 to 127.0.0.1:6379: 
>>> Setting 2972 as STABLE in 172.21.0.3:6379
>>> Check slots coverage...
[OK] All 16384 slots covered.

Adding a node

Let's add the previously removed rc-demo-3 back into the cluster and assign it slots.

Joining the cluster

root@e362be85c7c7:/data# redis-cli --cluster add-node 127.0.0.1:6379 172.21.0.3:6379
>>> Adding node 127.0.0.1:6379 to cluster 172.21.0.3:6379
>>> Performing Cluster Check (using node 172.21.0.3:6379)
M: b9a4c11fef3c442868cfab8a91660b3320fc2ecd 172.21.0.3:6379
   slots:[0-2971],[5000-10922] (8895 slots) master
M: d1cafc60288858550706f5c400e205ff076ecf80 172.21.0.4:6379
   slots:[2972-4999],[10923-16383] (7489 slots) master
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 127.0.0.1:6379 to make it join the cluster.
[OK] New node added correctly.

Redistribute the slots with rebalance, then verify with: redis-cli --cluster check 127.0.0.1:6379
If the command fails at first, wait a moment and retry: the node has only just joined, and Redis Cluster members communicate via the gossip protocol, so there is some propagation delay.
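Because membership spreads via gossip, a script that drives the cluster right after add-node usually retries with a backoff. A generic sketch; the assumption that a healthy check exits with status 0 and prints "[OK]" is based on the output shown in this article, and check_with_retry/backoff_delays are hypothetical helper names:

```python
import subprocess
import time

def backoff_delays(attempts: int = 5, base: float = 0.5):
    # exponential backoff: base, 2*base, 4*base, ...
    return [base * (2 ** i) for i in range(attempts)]

def check_with_retry(addr: str, attempts: int = 5) -> str:
    """Retry `redis-cli --cluster check` until the cluster looks healthy."""
    for delay in backoff_delays(attempts):
        result = subprocess.run(
            ["redis-cli", "--cluster", "check", addr],
            capture_output=True, text=True)
        if result.returncode == 0 and "[OK]" in result.stdout:
            return result.stdout
        time.sleep(delay)  # gossip is eventually consistent; wait and retry
    raise RuntimeError(f"cluster at {addr} did not converge")
```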

root@e362be85c7c7:/data# redis-cli --cluster check 127.0.0.1:6379
127.0.0.1:6379 (eebf0830...) -> 0 keys | 5462 slots | 0 slaves.
172.21.0.4:6379 (d1cafc60...) -> 0 keys | 5461 slots | 0 slaves.
172.21.0.3:6379 (b9a4c11f...) -> 0 keys | 5461 slots | 0 slaves.
[OK] 0 keys in 3 masters.
0.00 keys per slot on average.
>>> Performing Cluster Check (using node 127.0.0.1:6379)
M: eebf083014a1e62354b4ad5f2e962d8b81536808 127.0.0.1:6379
   slots:[0-5461] (5462 slots) master
M: d1cafc60288858550706f5c400e205ff076ecf80 172.21.0.4:6379
   slots:[10923-16383] (5461 slots) master
M: b9a4c11fef3c442868cfab8a91660b3320fc2ecd 172.21.0.3:6379
   slots:[5462-10922] (5461 slots) master
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
