Basic Configuration of a Redis Cluster

This article introduces the basic concepts of a Redis cluster, including what a cluster is for, how it works, and the types of nodes it contains. It then walks through deploying a Redis cluster across multiple servers: configuring the management host, creating the cluster, accessing the cluster, and adding and removing servers. It also shows how to check cluster status and adjust the data distribution, giving a deeper understanding of Redis cluster administration and operations.

Table of Contents

I. Redis Cluster Overview

II. Deploying a Redis Cluster

1. Basic Principles

2. Configuring the Management Host

3. Creating the Cluster

4. Accessing the Cluster

5. Adding Servers

6. Removing Servers


I. Redis Cluster Overview

  • A cluster adds more servers that provide the same service, so that the service as a whole stays stable and efficient.

  • A single Redis instance is not reliable: if that one redis service goes down, no service is available at all.

  • A single Redis instance also has limited read/write capacity.

  • A Redis cluster exists to strengthen Redis's read/write capacity.

  • In a Redis cluster, each Redis instance is called a node.

  • All Redis nodes are interconnected (PING-PONG mechanism) and use a binary protocol internally to optimize transfer speed and bandwidth.

  • A node is only marked as failed (fail) when more than half of the nodes in the cluster detect that it has failed.

  • Clients connect directly to Redis nodes; no intermediate proxy layer is needed. A client does not have to connect to every node in the cluster; connecting to any one reachable node is enough.

  • A Redis cluster contains two types of nodes: master nodes (master) and slave nodes (slave).

  • A Redis cluster is built on top of Redis master-slave replication:

    • A replication group contains multiple Redis nodes.

    • Exactly one of them is the master (Master); there can be any number of slaves (Slave).

    • As long as the network connection is up, the master keeps pushing its data changes to the slaves, so master and slaves stay in sync (a quick way to verify this is shown right after this list).

    • The master (Master) is readable and writable.

    • Slaves (Slave) are read-only.
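
  • Once a master/slave pair (or the full cluster) is running, a minimal way to verify replication on any slave, using the lab addresses from the deployment steps below, is to read its INFO replication output:

# On a slave (e.g. redis5, 192.168.1.15), check its role and the link to its master
[root@redis5 ~]# redis-cli -h 192.168.1.15 INFO replication
# Expect output containing lines such as:
#   role:slave
#   master_host:192.168.1.11
#   master_link_status:up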

II. Deploying a Redis Cluster

1. Basic Principles

  • A cluster should have an odd number of (master) nodes, and therefore at least three, so that a majority can agree on failures; each of these nodes should have at least one backup (replica) node.

  • Architecture diagram

  • Storage structure

  • redis-cluster maps all physical nodes onto the slots [0-16383] (not necessarily evenly); the cluster is responsible for maintaining the node <-> slot <-> value mapping.

  • A Redis cluster pre-allocates 16384 slots ("buckets"). When a key-value pair is to be stored in the cluster, the value of CRC16(key) mod 16384 decides which slot, and therefore which node, the key lands on (see the example below).
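
  • Once the cluster is up, you can ask any node which slot a key hashes to with the CLUSTER KEYSLOT command; for example, the key name used later in this article hashes to slot 5798:

192.168.1.11:6379> CLUSTER KEYSLOT name
(integer) 5798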

2. Configuring the Management Host

[root@manager1 ~]# yum install -y rubygems
[root@manager1 ~]# gem install redis-3.2.1.gem
  • Deploy the cluster management script

[root@manager1 ~]# tar xf redis-4.0.8.tar.gz
[root@manager1 ~]# cp redis-4.0.8/src/redis-trib.rb /usr/local/bin/
[root@manager1 ~]# chmod +x /usr/local/bin/redis-trib.rb
# View the help
[root@manager1 ~]# redis-trib.rb help
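
  • Note: redis-trib.rb is the management tool shipped with Redis 3.x/4.x, which is why Ruby and the redis gem are installed above. On Redis 5 and later the same functionality is built into redis-cli (no Ruby needed); the equivalent help command would be:

[root@manager1 ~]# redis-cli --cluster help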

3. Creating the Cluster

1) Enable cluster mode on redis1

# Stop the service
[root@redis1 ~]# service redis_6379 stop
Stopping ...
Redis stopped

# Edit the configuration file
[root@redis1 ~]# vim /etc/redis/6379.conf
protected-mode no        # Disable protected mode, so the server can accept connections without a password or an explicit bind address
# bind 127.0.0.1         # Comment out, so the server listens on all addresses
# requirepass tedu.cn    # Do not use a password
cluster-enabled yes      # Enable cluster mode
cluster-config-file nodes-6379.conf   # Location of the cluster configuration file
cluster-node-timeout 5000  # Heartbeat timeout (milliseconds)

# Clear any existing data
[root@redis1 ~]# rm -rf /var/lib/redis/6379/*

# Edit the service startup script (since requirepass is disabled, the stop command should no longer pass the old password with -a)
[root@redis1 ~]# vim +43 /etc/init.d/redis_6379
... ...
            $CLIEXEC -p $REDISPORT shutdown
... ...

# Start the service
[root@redis1 ~]# service redis_6379 start
Starting Redis server...

# Check the listening ports; the cluster bus runs on port 16379
[root@redis1 ~]# ss -tlnp | grep redis
LISTEN     0      128          *:16379                    *:*                   users:(("redis-server",pid=18952,fd=10))
LISTEN     0      128          *:6379                     *:*                   users:(("redis-server",pid=18952,fd=7))
LISTEN     0      128       [::]:16379                 [::]:*                   users:(("redis-server",pid=18952,fd=9))
LISTEN     0      128       [::]:6379                  [::]:*                   users:(("redis-server",pid=18952,fd=6))
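
  • An optional sanity check that the node really restarted in cluster mode is to read the Cluster section of INFO:

[root@redis1 ~]# redis-cli -h 192.168.1.11 INFO cluster
# Cluster
cluster_enabled:1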

2) Configure cluster mode on redis2, redis3, redis4, redis5 and redis6

# For convenience, set up passwordless SSH login from redis1 to the other nodes
[root@redis1 ~]# ssh-keygen 
[root@redis1 ~]# for i in {12..16}
> do
> ssh-copy-id 192.168.1.$i
> done

# Copy the compiled Redis installation from redis1 to the other hosts
[root@redis1 ~]# for i in 1{2..6}
> do
> scp -r /usr/local/redis 192.168.1.$i:/usr/local/
> done

# On redis2 through redis6, add the Redis binary directory to the PATH environment variable
# (note: escape $PATH so it expands on each remote host rather than locally)
[root@redis1 ~]# for i in 1{2..6}; do ssh 192.168.1.$i "echo 'export PATH=\$PATH:/usr/local/redis/bin' >> /etc/bashrc"; done

# Copy the Redis source tree from redis1 to the other hosts
[root@redis1 ~]# for i in 1{2..6}; do scp -r redis-4.0.8/ 192.168.1.$i:/root/; done

# On each of redis2 through redis6, run the server initialization script
[root@nodeX ~]# cd redis-4.0.8/
[root@nodeX redis-4.0.8]# utils/install_server.sh

# Stop the redis service on redis2 through redis6
[root@redis1 ~]# for i in 1{2..6}; do ssh 192.168.1.$i service redis_6379 stop; done

# Copy redis1's configuration file to redis2 through redis6
[root@redis1 ~]# for i in 1{2..6}; do scp /etc/redis/6379.conf 192.168.1.$i:/etc/redis/; done

# Clear existing data on each host
[root@redis1 ~]# for i in 1{2..6}; do ssh 192.168.1.$i rm -rf /var/lib/redis/6379/*; done

# Start the redis service on redis2 through redis6
[root@redis1 ~]# for i in 1{2..6}; do ssh 192.168.1.$i service redis_6379 start; done
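
  • Optionally, confirm from redis1 that each of the other nodes is now listening on the cluster bus port 16379 (same hosts and tools as above):

[root@redis1 ~]# for i in 1{2..6}; do ssh 192.168.1.$i 'ss -tlnp | grep 16379'; done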

3) Create the cluster from the management host manager1

[root@manager1 ~]# redis-trib.rb create --replicas 1 \
> 192.168.1.11:6379 192.168.1.12:6379 192.168.1.13:6379 \
> 192.168.1.14:6379 192.168.1.15:6379 192.168.1.16:6379
​
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
192.168.1.11:6379
192.168.1.12:6379
192.168.1.13:6379
Adding replica 192.168.1.15:6379 to 192.168.1.11:6379
Adding replica 192.168.1.16:6379 to 192.168.1.12:6379
Adding replica 192.168.1.14:6379 to 192.168.1.13:6379
M: b1b8e906ea2c7b71e567485f02d0007ad902a5c5 192.168.1.11:6379
   slots:0-5460 (5461 slots) master
M: 7cd1b4dc246c4c11780be1186f118ce79794d182 192.168.1.12:6379
   slots:5461-10922 (5462 slots) master
M: 3dd1687f4c9d37a4bc095705a2ab0a35b27b59cc 192.168.1.13:6379
   slots:10923-16383 (5461 slots) master
S: c85a2a1474dea3fbf4ac4073c437b8eada04b806 192.168.1.14:6379
   replicates 3dd1687f4c9d37a4bc095705a2ab0a35b27b59cc
S: a49746ee742866f0db4cebbc2de17733c0ace5b8 192.168.1.15:6379
   replicates b1b8e906ea2c7b71e567485f02d0007ad902a5c5
S: 3c0b9fe153392132003c9707d5132647b59031d4 192.168.1.16:6379
   replicates 7cd1b4dc246c4c11780be1186f118ce79794d182
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join...
>>> Performing Cluster Check (using node 192.168.1.11:6379)
M: b1b8e906ea2c7b71e567485f02d0007ad902a5c5 192.168.1.11:6379
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: 3dd1687f4c9d37a4bc095705a2ab0a35b27b59cc 192.168.1.13:6379
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
M: 7cd1b4dc246c4c11780be1186f118ce79794d182 192.168.1.12:6379
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: a49746ee742866f0db4cebbc2de17733c0ace5b8 192.168.1.15:6379
   slots: (0 slots) slave
   replicates b1b8e906ea2c7b71e567485f02d0007ad902a5c5
S: c85a2a1474dea3fbf4ac4073c437b8eada04b806 192.168.1.14:6379
   slots: (0 slots) slave
   replicates 3dd1687f4c9d37a4bc095705a2ab0a35b27b59cc
S: 3c0b9fe153392132003c9707d5132647b59031d4 192.168.1.16:6379
   slots: (0 slots) slave
   replicates 7cd1b4dc246c4c11780be1186f118ce79794d182
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

4) View cluster information from the management host

[root@manager1 ~]# redis-trib.rb info 192.168.1.11:6379
192.168.1.11:6379 (b1b8e906...) -> 0 keys | 5461 slots | 1 slaves.
192.168.1.13:6379 (3dd1687f...) -> 0 keys | 5461 slots | 1 slaves.
192.168.1.12:6379 (7cd1b4dc...) -> 0 keys | 5462 slots | 1 slaves.
[OK] 0 keys in 3 masters.
0.00 keys per slot on average.
​
[root@manager1 ~]# redis-trib.rb check 192.168.1.11:6379
>>> Performing Cluster Check (using node 192.168.1.11:6379)
M: b1b8e906ea2c7b71e567485f02d0007ad902a5c5 192.168.1.11:6379
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: 3dd1687f4c9d37a4bc095705a2ab0a35b27b59cc 192.168.1.13:6379
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
M: 7cd1b4dc246c4c11780be1186f118ce79794d182 192.168.1.12:6379
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: a49746ee742866f0db4cebbc2de17733c0ace5b8 192.168.1.15:6379
   slots: (0 slots) slave
   replicates b1b8e906ea2c7b71e567485f02d0007ad902a5c5
S: c85a2a1474dea3fbf4ac4073c437b8eada04b806 192.168.1.14:6379
   slots: (0 slots) slave
   replicates 3dd1687f4c9d37a4bc095705a2ab0a35b27b59cc
S: 3c0b9fe153392132003c9707d5132647b59031d4 192.168.1.16:6379
   slots: (0 slots) slave
   replicates 7cd1b4dc246c4c11780be1186f118ce79794d182
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

5) View cluster information on a cluster node

# Simply copy redis-cli from one of the servers to the client host
[root@redis1 ~]# scp /usr/local/redis/bin/redis-cli 192.168.1.10:/usr/local/bin
​
# Log in to a server from the client and check the cluster state
[root@node10 ~]# redis-cli -h 192.168.1.11
192.168.1.11:6379> PING
PONG
​
192.168.1.11:6379> CLUSTER INFO
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_ping_sent:954
cluster_stats_messages_pong_sent:895
cluster_stats_messages_sent:1849
cluster_stats_messages_ping_received:890
cluster_stats_messages_pong_received:954
cluster_stats_messages_meet_received:5
cluster_stats_messages_received:1849
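
  • For a per-node view, CLUSTER NODES prints one line per node with its ID, address, role (master/slave), the master it replicates (for slaves), and its slot ranges; it is a useful complement to CLUSTER INFO:

192.168.1.11:6379> CLUSTER NODES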

4. Accessing the Cluster

  • A client can connect to any server in the cluster

  • When data is stored, the client is automatically redirected, based on the key's slot, to the server that owns it (use redis-cli with the -c option so redirects are followed)

[root@node10 ~]# redis-cli -c -h 192.168.1.11
192.168.1.11:6379> SET name tom
-> Redirected to slot [5798] located at 192.168.1.12:6379
OK

192.168.1.12:6379> SET gender male
-> Redirected to slot [15355] located at 192.168.1.13:6379
OK

192.168.1.13:6379> SET email tom@tedu.cn
-> Redirected to slot [10780] located at 192.168.1.12:6379
OK

192.168.1.12:6379> SET phone 15011223344
OK

192.168.1.12:6379> SET address beijing
-> Redirected to slot [3680] located at 192.168.1.11:6379
OK
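
  • Reads are redirected in the same way. An illustrative example reusing the name key written above (the exact output may differ slightly):

192.168.1.11:6379> GET name
-> Redirected to slot [5798] located at 192.168.1.12:6379
"tom"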

5. Adding Servers

1) Add a master server, redis7 (192.168.1.17)

  • Prepare a Redis server that has been initialized the same way as the other nodes (cluster mode enabled, data directory cleared)

  • On the management host manager1 (192.168.1.20), add the new host to the cluster in the master role

[root@manager1 ~]# redis-trib.rb add-node 192.168.1.17:6379 192.168.1.11:6379
>>> Adding node 192.168.1.17:6379 to cluster 192.168.1.11:6379
>>> Performing Cluster Check (using node 192.168.1.11:6379)
M: b1b8e906ea2c7b71e567485f02d0007ad902a5c5 192.168.1.11:6379
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: 3dd1687f4c9d37a4bc095705a2ab0a35b27b59cc 192.168.1.13:6379
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
M: 7cd1b4dc246c4c11780be1186f118ce79794d182 192.168.1.12:6379
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: a49746ee742866f0db4cebbc2de17733c0ace5b8 192.168.1.15:6379
   slots: (0 slots) slave
   replicates b1b8e906ea2c7b71e567485f02d0007ad902a5c5
S: c85a2a1474dea3fbf4ac4073c437b8eada04b806 192.168.1.14:6379
   slots: (0 slots) slave
   replicates 3dd1687f4c9d37a4bc095705a2ab0a35b27b59cc
S: 3c0b9fe153392132003c9707d5132647b59031d4 192.168.1.16:6379
   slots: (0 slots) slave
   replicates 7cd1b4dc246c4c11780be1186f118ce79794d182
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 192.168.1.17:6379 to make it join the cluster.
[OK] New node added correctly.
  • View cluster information from the management host

[root@manager1 ~]# redis-trib.rb info 192.168.1.11:6379
192.168.1.11:6379 (b1b8e906...) -> 1 keys | 5461 slots | 1 slaves.
192.168.1.13:6379 (3dd1687f...) -> 1 keys | 5461 slots | 1 slaves.
192.168.1.12:6379 (7cd1b4dc...) -> 3 keys | 5462 slots | 1 slaves.
192.168.1.17:6379 (e4138e59...) -> 0 keys | 0 slots | 0 slaves.
[OK] 5 keys in 4 masters.
0.00 keys per slot on average.
  • Check the cluster from the management host

[root@manager1 ~]# redis-trib.rb check 192.168.1.11:6379
>>> Performing Cluster Check (using node 192.168.1.11:6379)
M: b1b8e906ea2c7b71e567485f02d0007ad902a5c5 192.168.1.11:6379
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
M: 3dd1687f4c9d37a4bc095705a2ab0a35b27b59cc 192.168.1.13:6379
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
M: 7cd1b4dc246c4c11780be1186f118ce79794d182 192.168.1.12:6379
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
M: e4138e59c01d0ed762162f7dddb61d7cdff1b85d 192.168.1.17:6379
   slots: (0 slots) master
   0 additional replica(s)
S: a49746ee742866f0db4cebbc2de17733c0ace5b8 192.168.1.15:6379
   slots: (0 slots) slave
   replicates b1b8e906ea2c7b71e567485f02d0007ad902a5c5
S: c85a2a1474dea3fbf4ac4073c437b8eada04b806 192.168.1.14:6379
   slots: (0 slots) slave
   replicates 3dd1687f4c9d37a4bc095705a2ab0a35b27b59cc
S: 3c0b9fe153392132003c9707d5132647b59031d4 192.168.1.16:6379
   slots: (0 slots) slave
   replicates 7cd1b4dc246c4c11780be1186f118ce79794d182
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
  • On the management host, allocate hash slots to the new master. Moving 16384 / 4 = 4096 slots gives each of the four masters an equal share; answering all takes the slots evenly from all existing masters.

[root@manager1 ~]# redis-trib.rb reshard 192.168.1.11:6379
How many slots do you want to move (from 1 to 16384)?4096
What is the receiving node ID?e4138e59c01d0ed762162f7dddb61d7cdff1b85d
Source node #1:all
Do you want to proceed with the proposed reshard plan (yes/no)?yes
  • View cluster information from the management host

[root@manager1 ~]# redis-trib.rb info 192.168.1.11:6379
192.168.1.11:6379 (b1b8e906...) -> 1 keys | 4096 slots | 1 slaves.
192.168.1.13:6379 (3dd1687f...) -> 1 keys | 4096 slots | 1 slaves.
192.168.1.12:6379 (7cd1b4dc...) -> 2 keys | 4096 slots | 1 slaves.
192.168.1.17:6379 (e4138e59...) -> 1 keys | 4096 slots | 0 slaves.
[OK] 5 keys in 4 masters.
0.00 keys per slot on average.

2) Add a slave server, redis8 (192.168.1.18)

  • Prepare another Redis server initialized the same way

  • On the management host manager1 (192.168.1.20), add the new host to the cluster in the slave role. If no master node ID is specified, the new node becomes a slave of the master that currently has the fewest slaves (see the note after the output below for pinning it to a specific master).

[root@manager1 ~]# redis-trib.rb add-node --slave 192.168.1.18:6379 192.168.1.11:6379 
>>> Adding node 192.168.1.18:6379 to cluster 192.168.1.11:6379
>>> Performing Cluster Check (using node 192.168.1.11:6379)
M: b1b8e906ea2c7b71e567485f02d0007ad902a5c5 192.168.1.11:6379
   slots:1365-5460 (4096 slots) master
   1 additional replica(s)
M: 3dd1687f4c9d37a4bc095705a2ab0a35b27b59cc 192.168.1.13:6379
   slots:12288-16383 (4096 slots) master
   1 additional replica(s)
M: 7cd1b4dc246c4c11780be1186f118ce79794d182 192.168.1.12:6379
   slots:6827-10922 (4096 slots) master
   1 additional replica(s)
M: e4138e59c01d0ed762162f7dddb61d7cdff1b85d 192.168.1.17:6379
   slots:0-1364,5461-6826,10923-12287 (4096 slots) master
   0 additional replica(s)
S: a49746ee742866f0db4cebbc2de17733c0ace5b8 192.168.1.15:6379
   slots: (0 slots) slave
   replicates b1b8e906ea2c7b71e567485f02d0007ad902a5c5
S: c85a2a1474dea3fbf4ac4073c437b8eada04b806 192.168.1.14:6379
   slots: (0 slots) slave
   replicates 3dd1687f4c9d37a4bc095705a2ab0a35b27b59cc
S: 3c0b9fe153392132003c9707d5132647b59031d4 192.168.1.16:6379
   slots: (0 slots) slave
   replicates 7cd1b4dc246c4c11780be1186f118ce79794d182
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
Automatically selected master 192.168.1.17:6379
>>> Send CLUSTER MEET to node 192.168.1.18:6379 to make it join the cluster.
Waiting for the cluster to join.
>>> Configure node as replica of 192.168.1.17:6379.
[OK] New node added correctly.
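  • To attach the replica to a specific master instead of letting redis-trib.rb pick one, its add-node command also accepts a --master-id option; shown here with redis7's node ID from this lab as an alternative invocation (not a step that was actually run above):

[root@manager1 ~]# redis-trib.rb add-node --slave --master-id e4138e59c01d0ed762162f7dddb61d7cdff1b85d \
> 192.168.1.18:6379 192.168.1.11:6379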
  • View the updated cluster information from the management host

[root@manager1 ~]# redis-trib.rb info 192.168.1.11:6379
192.168.1.11:6379 (b1b8e906...) -> 1 keys | 4096 slots | 1 slaves.
192.168.1.13:6379 (3dd1687f...) -> 1 keys | 4096 slots | 1 slaves.
192.168.1.12:6379 (7cd1b4dc...) -> 2 keys | 4096 slots | 1 slaves.
192.168.1.17:6379 (e4138e59...) -> 1 keys | 4096 slots | 1 slaves.
[OK] 5 keys in 4 masters.
0.00 keys per slot on average.
  • Check the cluster from the management host

[root@manager1 ~]# redis-trib.rb check 192.168.1.11:6379
>>> Performing Cluster Check (using node 192.168.1.11:6379)
M: b1b8e906ea2c7b71e567485f02d0007ad902a5c5 192.168.1.11:6379
   slots:1365-5460 (4096 slots) master
   1 additional replica(s)
S: 6c2079f43fd3e85c535dea1f275940740ab93765 192.168.1.18:6379
   slots: (0 slots) slave
   replicates e4138e59c01d0ed762162f7dddb61d7cdff1b85d
M: 3dd1687f4c9d37a4bc095705a2ab0a35b27b59cc 192.168.1.13:6379
   slots:12288-16383 (4096 slots) master
   1 additional replica(s)
M: 7cd1b4dc246c4c11780be1186f118ce79794d182 192.168.1.12:6379
   slots:6827-10922 (4096 slots) master
   1 additional replica(s)
M: e4138e59c01d0ed762162f7dddb61d7cdff1b85d 192.168.1.17:6379
   slots:0-1364,5461-6826,10923-12287 (4096 slots) master
   1 additional replica(s)
S: a49746ee742866f0db4cebbc2de17733c0ace5b8 192.168.1.15:6379
   slots: (0 slots) slave
   replicates b1b8e906ea2c7b71e567485f02d0007ad902a5c5
S: c85a2a1474dea3fbf4ac4073c437b8eada04b806 192.168.1.14:6379
   slots: (0 slots) slave
   replicates 3dd1687f4c9d37a4bc095705a2ab0a35b27b59cc
S: 3c0b9fe153392132003c9707d5132647b59031d4 192.168.1.16:6379
   slots: (0 slots) slave
   replicates 7cd1b4dc246c4c11780be1186f118ce79794d182
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

6. Removing Servers

1) Remove a slave server

  • On the management host, remove the slave server redis8 (192.168.1.18) directly (a slave holds no slots, so nothing needs to be resharded first)

[root@manager1 ~]# redis-trib.rb del-node 192.168.1.18:6379 6c2079f43fd3e85c535dea1f275940740ab93765
>>> Removing node 6c2079f43fd3e85c535dea1f275940740ab93765 from cluster 192.168.1.18:6379
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.
  • View the cluster
[root@manager1 ~]# redis-trib.rb check 192.168.1.11:6379
>>> Performing Cluster Check (using node 192.168.1.11:6379)
M: b1b8e906ea2c7b71e567485f02d0007ad902a5c5 192.168.1.11:6379
   slots:1365-5460 (4096 slots) master
   1 additional replica(s)
M: 3dd1687f4c9d37a4bc095705a2ab0a35b27b59cc 192.168.1.13:6379
   slots:12288-16383 (4096 slots) master
   1 additional replica(s)
M: 7cd1b4dc246c4c11780be1186f118ce79794d182 192.168.1.12:6379
   slots:6827-10922 (4096 slots) master
   1 additional replica(s)
M: e4138e59c01d0ed762162f7dddb61d7cdff1b85d 192.168.1.17:6379
   slots:0-1364,5461-6826,10923-12287 (4096 slots) master
   0 additional replica(s)
S: a49746ee742866f0db4cebbc2de17733c0ace5b8 192.168.1.15:6379
   slots: (0 slots) slave
   replicates b1b8e906ea2c7b71e567485f02d0007ad902a5c5
S: c85a2a1474dea3fbf4ac4073c437b8eada04b806 192.168.1.14:6379
   slots: (0 slots) slave
   replicates 3dd1687f4c9d37a4bc095705a2ab0a35b27b59cc
S: 3c0b9fe153392132003c9707d5132647b59031d4 192.168.1.16:6379
   slots: (0 slots) slave
   replicates 7cd1b4dc246c4c11780be1186f118ce79794d182
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

[root@manager1 ~]# redis-trib.rb info 192.168.1.11:6379
192.168.1.11:6379 (b1b8e906...) -> 1 keys | 4096 slots | 1 slaves.
192.168.1.13:6379 (3dd1687f...) -> 1 keys | 4096 slots | 1 slaves.
192.168.1.12:6379 (7cd1b4dc...) -> 2 keys | 4096 slots | 1 slaves.
192.168.1.17:6379 (e4138e59...) -> 1 keys | 4096 slots | 0 slaves.
[OK] 5 keys in 4 masters.
0.00 keys per slot on average.
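
  • Note: del-node shuts the removed instance down. To reuse that server in a cluster later, clear its old data and cluster state first (the same cleanup performed during the initial deployment) and then start it again, for example:

[root@redis8 ~]# rm -rf /var/lib/redis/6379/*
[root@redis8 ~]# service redis_6379 start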

2) Remove a master server

  • On the management host, move the hash slots held by the master off of it (reshard them to another master); a master cannot be removed while it still owns slots

[root@manager1 ~]# redis-trib.rb reshard 192.168.1.11:6379
>>> Performing Cluster Check (using node 192.168.1.11:6379)
M: b1b8e906ea2c7b71e567485f02d0007ad902a5c5 192.168.1.11:6379
   slots:1365-5460 (4096 slots) master
   1 additional replica(s)
M: 3dd1687f4c9d37a4bc095705a2ab0a35b27b59cc 192.168.1.13:6379
   slots:12288-16383 (4096 slots) master
   1 additional replica(s)
M: 7cd1b4dc246c4c11780be1186f118ce79794d182 192.168.1.12:6379
   slots:6827-10922 (4096 slots) master
   1 additional replica(s)
M: e4138e59c01d0ed762162f7dddb61d7cdff1b85d 192.168.1.17:6379
   slots:0-1364,5461-6826,10923-12287 (4096 slots) master
   0 additional replica(s)
S: a49746ee742866f0db4cebbc2de17733c0ace5b8 192.168.1.15:6379
   slots: (0 slots) slave
   replicates b1b8e906ea2c7b71e567485f02d0007ad902a5c5
S: c85a2a1474dea3fbf4ac4073c437b8eada04b806 192.168.1.14:6379
   slots: (0 slots) slave
   replicates 3dd1687f4c9d37a4bc095705a2ab0a35b27b59cc
S: 3c0b9fe153392132003c9707d5132647b59031d4 192.168.1.16:6379
   slots: (0 slots) slave
   replicates 7cd1b4dc246c4c11780be1186f118ce79794d182
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 4096           
What is the receiving node ID? b1b8e906ea2c7b71e567485f02d0007ad902a5c5
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
Source node #1:e4138e59c01d0ed762162f7dddb61d7cdff1b85d
Source node #2:done
... ...
Do you want to proceed with the proposed reshard plan (yes/no)? yes
  • View cluster information

[root@manager1 ~]# redis-trib.rb info 192.168.1.11:6379
192.168.1.11:6379 (b1b8e906...) -> 2 keys | 8192 slots | 1 slaves.
192.168.1.13:6379 (3dd1687f...) -> 1 keys | 4096 slots | 1 slaves.
192.168.1.12:6379 (7cd1b4dc...) -> 2 keys | 4096 slots | 1 slaves.
192.168.1.17:6379 (e4138e59...) -> 0 keys | 0 slots | 0 slaves.
[OK] 5 keys in 4 masters.
0.00 keys per slot on average.
  • Remove the master

[root@manager1 ~]# redis-trib.rb del-node 192.168.1.11:6379 e4138e59c01d0ed762162f7dddb61d7cdff1b85d
>>> Removing node e4138e59c01d0ed762162f7dddb61d7cdff1b85d from cluster 192.168.1.11:6379
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.
  • View cluster information

[root@manager1 ~]# redis-trib.rb info 192.168.1.11:6379
192.168.1.11:6379 (b1b8e906...) -> 2 keys | 8192 slots | 1 slaves.
192.168.1.13:6379 (3dd1687f...) -> 1 keys | 4096 slots | 1 slaves.
192.168.1.12:6379 (7cd1b4dc...) -> 2 keys | 4096 slots | 1 slaves.
[OK] 5 keys in 3 masters.
0.00 keys per slot on average.
  • Rebalance the hash slots evenly across all remaining master nodes

[root@manager1 ~]# redis-trib.rb rebalance 192.168.1.11:6379
[root@manager1 ~]# redis-trib.rb info 192.168.1.11:6379
192.168.1.11:6379 (b1b8e906...) -> 2 keys | 5461 slots | 1 slaves.
192.168.1.13:6379 (3dd1687f...) -> 1 keys | 5462 slots | 1 slaves.
192.168.1.12:6379 (7cd1b4dc...) -> 2 keys | 5461 slots | 1 slaves.
[OK] 5 keys in 3 masters.
0.00 keys per slot on average.
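
  • By default, rebalance only evens out slots among masters that already hold slots. If you instead want it to push slots onto a brand-new empty master (an alternative to the manual reshard used in section 5), redis-trib.rb offers a --use-empty-masters flag:

[root@manager1 ~]# redis-trib.rb rebalance --use-empty-masters 192.168.1.11:6379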