Redis Distributed Cluster in Practice

Environment

1. Ubuntu 16.04
2. Docker containers
3. Redis 3.2

Cluster Setup

A 3-master, 3-slave cluster:

172.20.0.11:7011
172.20.0.12:7012
172.20.0.13:7013
172.20.0.21:7021
172.20.0.22:7022
172.20.0.23:7023

Step 1:
Create a docker-compose.yml file.
We deploy the six Redis nodes as Docker containers, using a custom bridge network with fixed container IP addresses:

version: '2'
services:
   redis7011:
      image: docker.io/redis:3.2
      container_name: redis7011
      restart: always
      networks:
        extnetwork:
            ipv4_address: 172.20.0.11
      ports:
         - 7011:7011
      volumes:
         - '/app/redis/7011/redis.conf:/etc/redis/redis.conf'
      command: redis-server /etc/redis/redis.conf
      
   redis7012:
      image: docker.io/redis:3.2
      container_name: redis7012
      restart: always
      networks:
        extnetwork:
            ipv4_address: 172.20.0.12
      ports:
         - 7012:7012
      volumes:
         - '/app/redis/7012/redis.conf:/etc/redis/redis.conf'
      command: redis-server /etc/redis/redis.conf
   
   redis7013:
      image: docker.io/redis:3.2
      container_name: redis7013
      restart: always
      networks:
        extnetwork:
            ipv4_address: 172.20.0.13
      ports:
         - 7013:7013
      volumes:
         - '/app/redis/7013/redis.conf:/etc/redis/redis.conf'
      command: redis-server /etc/redis/redis.conf
   redis7021:
      image: docker.io/redis:3.2
      container_name: redis7021
      restart: always
      networks:
        extnetwork:
            ipv4_address: 172.20.0.21
      ports:
         - 7021:7021
      volumes:
         - '/app/redis/7021/redis.conf:/etc/redis/redis.conf'
      command: redis-server /etc/redis/redis.conf

   redis7022:
      image: docker.io/redis:3.2
      container_name: redis7022
      restart: always
      networks:
        extnetwork:
            ipv4_address: 172.20.0.22
      ports:
         - 7022:7022
      volumes:
         - '/app/redis/7022/redis.conf:/etc/redis/redis.conf'
      command: redis-server /etc/redis/redis.conf

   redis7023:
      image: docker.io/redis:3.2
      container_name: redis7023
      restart: always
      networks:
        extnetwork:
            ipv4_address: 172.20.0.23
      ports:
         - 7023:7023
      volumes:
         - '/app/redis/7023/redis.conf:/etc/redis/redis.conf'
      command: redis-server /etc/redis/redis.conf

networks:
   extnetwork:
      ipam:
         config:
         - subnet: 172.20.0.0/16
           gateway: 172.20.0.1

Step 2:
Next, provide the six Redis configuration files:

File 1: /app/redis/7011/redis.conf

port 7011
cluster-enabled yes
cluster-config-file nodes-7011.conf
cluster-node-timeout 5000
appendonly yes

File 2: /app/redis/7012/redis.conf

port 7012
cluster-enabled yes
cluster-config-file nodes-7012.conf
cluster-node-timeout 5000
appendonly yes

File 3: /app/redis/7013/redis.conf

port 7013
cluster-enabled yes
cluster-config-file nodes-7013.conf
cluster-node-timeout 5000
appendonly yes

File 4: /app/redis/7021/redis.conf

port 7021
cluster-enabled yes
cluster-config-file nodes-7021.conf
cluster-node-timeout 5000
appendonly yes

File 5: /app/redis/7022/redis.conf

port 7022
cluster-enabled yes
cluster-config-file nodes-7022.conf
cluster-node-timeout 5000
appendonly yes

File 6: /app/redis/7023/redis.conf

port 7023
cluster-enabled yes
cluster-config-file nodes-7023.conf
cluster-node-timeout 5000
appendonly yes
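The six configuration files differ only in their port number, so they can be generated with a short script rather than written by hand. A minimal sketch (the default base directory matches the volume paths in the compose file above):

```python
import os

# Ports used by the six cluster nodes above.
PORTS = [7011, 7012, 7013, 7021, 7022, 7023]

TEMPLATE = """port {port}
cluster-enabled yes
cluster-config-file nodes-{port}.conf
cluster-node-timeout 5000
appendonly yes
"""

def write_configs(base_dir="/app/redis"):
    """Write one redis.conf per node under <base_dir>/<port>/redis.conf."""
    for port in PORTS:
        conf_dir = os.path.join(base_dir, str(port))
        os.makedirs(conf_dir, exist_ok=True)
        with open(os.path.join(conf_dir, "redis.conf"), "w") as f:
            f.write(TEMPLATE.format(port=port))
```

Calling write_configs() recreates the six files shown above under /app/redis.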

Step 3:
Start the containers: docker-compose up -d

CONTAINER ID        IMAGE                                                   COMMAND                  CREATED             STATUS              PORTS                              NAMES
02d4ad65ebb5        registry.cn-hangzhou.aliyuncs.com/xylink/redis:3.2_v1   "docker-entrypoint..."   About an hour ago   Up About an hour    6379/tcp, 0.0.0.0:7012->7012/tcp   redis7012
8ad5120524f5        registry.cn-hangzhou.aliyuncs.com/xylink/redis:3.2_v1   "docker-entrypoint..."   About an hour ago   Up About an hour    6379/tcp, 0.0.0.0:7022->7022/tcp   redis7022
c7cd2ff9a2b2        registry.cn-hangzhou.aliyuncs.com/xylink/redis:3.2_v1   "docker-entrypoint..."   About an hour ago   Up 20 minutes       6379/tcp, 0.0.0.0:7013->7013/tcp   redis7013
38297fc850cd        registry.cn-hangzhou.aliyuncs.com/xylink/redis:3.2_v1   "docker-entrypoint..."   About an hour ago   Up About an hour    6379/tcp, 0.0.0.0:7021->7021/tcp   redis7021
d88275251606        registry.cn-hangzhou.aliyuncs.com/xylink/redis:3.2_v1   "docker-entrypoint..."   About an hour ago   Up 55 minutes       6379/tcp, 0.0.0.0:7023->7023/tcp   redis7023
6ca20fa97632        registry.cn-hangzhou.aliyuncs.com/xylink/redis:3.2_v1   "docker-entrypoint..."   About an hour ago   Up About an hour    6379/tcp, 0.0.0.0:7011->7011/tcp   redis7011

Create the cluster:
redis-trib.rb is the cluster tool shipped with Redis, located in the src directory. It depends on a Ruby environment, which can be set up as follows:

# Update the package sources
apt-get update
# Install the Ruby runtime
apt-get install ruby
# Install wget
apt install wget
# Download the redis gem package
wget http://rubygems.org/downloads/redis-3.3.0.gem
# Install the Ruby redis client from the local gem file
gem install -l redis-3.3.0.gem 
gem install redis

./redis-trib.rb create --replicas 1 172.20.0.11:7011 172.20.0.12:7012 172.20.0.13:7013 172.20.0.21:7021 172.20.0.22:7022 172.20.0.23:7023

Note: the 1 after --replicas means one slave node is assigned to each master node.
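redis-trib divides the 16384 hash slots into one contiguous range per master, as seen in the transcript below (0-5460, 5461-10922, 10923-16383). The resulting ranges can be reproduced with a small sketch that mimics redis-trib's proportional rounding; this is an illustration of the outcome, not redis-trib's actual Ruby code:

```python
def slot_ranges(n_masters, total=16384):
    """Contiguous (first, last) slot ranges for n masters, rounding as redis-trib does."""
    per = total / n_masters
    return [(round(i * per), round((i + 1) * per) - 1) for i in range(n_masters)]

print(slot_ranges(3))  # [(0, 5460), (5461, 10922), (10923, 16383)]
```

Note the middle master ends up with 5462 slots and the other two with 5461, matching the allocation printed by redis-trib.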

[root@iZj6c3zjf2blpqz40tntifZ redis]# ./redis-trib.rb  create --replicas 1 172.20.0.11:7011 172.20.0.12:7012 172.20.0.13:7013 172.20.0.21:7021 172.20.0.22:7022 172.20.0.23:7023
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
172.20.0.11:7011
172.20.0.12:7012
172.20.0.13:7013
Adding replica 172.20.0.21:7021 to 172.20.0.11:7011
Adding replica 172.20.0.22:7022 to 172.20.0.12:7012
Adding replica 172.20.0.23:7023 to 172.20.0.13:7013
M: 6c09bb314f68b70472f3147e24462a3150c8a658 172.20.0.11:7011
   slots:0-5460 (5461 slots) master
M: da2edc77b8cac87468d95b60e2a12041e9e1f474 172.20.0.12:7012
   slots:5461-10922 (5462 slots) master
M: fa2296ba9405cb3078ae5b8130eb7bd74771e518 172.20.0.13:7013
   slots:10923-16383 (5461 slots) master
S: 385d7a4545ddcaee371358ec3b61a397c0694641 172.20.0.21:7021
   replicates 6c09bb314f68b70472f3147e24462a3150c8a658
S: 57b5e8a67b644ac6a78ee473c555ff5f42557d72 172.20.0.22:7022
   replicates da2edc77b8cac87468d95b60e2a12041e9e1f474
S: 93b4fbafdc1feeb5608cef667a4e78a9d4f1c1d7 172.20.0.23:7023
   replicates fa2296ba9405cb3078ae5b8130eb7bd74771e518
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join...
>>> Performing Cluster Check (using node 172.20.0.11:7011)
M: 6c09bb314f68b70472f3147e24462a3150c8a658 172.20.0.11:7011
   slots:0-5460 (5461 slots) master
M: da2edc77b8cac87468d95b60e2a12041e9e1f474 172.20.0.12:7012
   slots:5461-10922 (5462 slots) master
M: fa2296ba9405cb3078ae5b8130eb7bd74771e518 172.20.0.13:7013
   slots:10923-16383 (5461 slots) master
M: 385d7a4545ddcaee371358ec3b61a397c0694641 172.20.0.21:7021
   slots: (0 slots) master
   replicates 6c09bb314f68b70472f3147e24462a3150c8a658
M: 57b5e8a67b644ac6a78ee473c555ff5f42557d72 172.20.0.22:7022
   slots: (0 slots) master
   replicates da2edc77b8cac87468d95b60e2a12041e9e1f474
M: 93b4fbafdc1feeb5608cef667a4e78a9d4f1c1d7 172.20.0.23:7023
   slots: (0 slots) master
   replicates fa2296ba9405cb3078ae5b8130eb7bd74771e518
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

After typing yes above, wait for the command to finish.
The distributed cluster is now up. Next we walk through common cluster operations.

Cluster Operations

1. Log into any one of the Redis nodes:
./redis-cli -c -h 172.20.0.11 -p 7011
Note: -c enables cluster mode.
Now test writing and reading data:

[root@iZj6c3zjf2blpqz40tntifZ redis]# ./redis-cli -c -h 172.20.0.11 -p 7011
172.20.0.11:7011> set a 1
-> Redirected to slot [15495] located at 172.20.0.13:7013
OK
172.20.0.13:7013> get a
"1"
172.20.0.13:7013> set b 2
-> Redirected to slot [3300] located at 172.20.0.11:7011
OK
172.20.0.11:7011> get b
"2"
172.20.0.11:7011>
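The redirects in the session above are driven by key hashing: the slot for a key is CRC16(key) mod 16384, and the request is served by whichever master owns that slot. A minimal sketch of the slot computation (CRC16-CCITT/XModem, the variant Redis Cluster uses; hash-tag handling for keys containing {} is omitted for brevity):

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XModem), as used by Redis Cluster for key hashing."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else crc << 1
            crc &= 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Hash slot for a key (hash-tag handling omitted)."""
    return crc16(key.encode()) % 16384

print(key_slot("a"))  # 15495 -> slot owned by 172.20.0.13:7013, as in the session above
print(key_slot("b"))  # 3300  -> slot owned by 172.20.0.11:7011
```

This is why `set a 1` was redirected to slot 15495 on 172.20.0.13:7013 while `set b 2` landed on 172.20.0.11:7011.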

2. Cluster details:
cluster info

172.20.0.11:7011> cluster info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:6
cluster_my_epoch:1
cluster_stats_messages_sent:8187
cluster_stats_messages_received:8187

3. List the cluster nodes:
cluster nodes

172.20.0.11:7011> cluster nodes
da2edc77b8cac87468d95b60e2a12041e9e1f474 172.20.0.12:7012 master - 0 1541495131722 2 connected 5461-10922
6c09bb314f68b70472f3147e24462a3150c8a658 172.20.0.11:7011 myself,master - 0 0 1 connected 0-5460
385d7a4545ddcaee371358ec3b61a397c0694641 172.20.0.21:7021 slave 6c09bb314f68b70472f3147e24462a3150c8a658 0 1541495130720 4 connected
57b5e8a67b644ac6a78ee473c555ff5f42557d72 172.20.0.22:7022 slave da2edc77b8cac87468d95b60e2a12041e9e1f474 0 1541495130720 5 connected
93b4fbafdc1feeb5608cef667a4e78a9d4f1c1d7 172.20.0.23:7023 slave fa2296ba9405cb3078ae5b8130eb7bd74771e518 0 1541495132224 6 connected
fa2296ba9405cb3078ae5b8130eb7bd74771e518 172.20.0.13:7013 master - 0 1541495131221 3 connected 10923-16383
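The cluster nodes output is line-oriented and easy to parse programmatically, for example to monitor which node owns which slots. A hypothetical helper (field layout: id, address, flags, master-id, ping-sent, pong-recv, config-epoch, link-state, then slot ranges; migrating-slot entries like [slot->-id] are not handled here):

```python
def parse_cluster_node(line: str) -> dict:
    """Parse one line of CLUSTER NODES output into a dict."""
    parts = line.split()
    node = {
        "id": parts[0],
        "addr": parts[1],
        "flags": parts[2].split(","),
        "master_id": None if parts[3] == "-" else parts[3],
        "slots": [],
    }
    # Slot ranges (if any) start at field 9: either "10923-16383" or a single "5460".
    for item in parts[8:]:
        lo, _, hi = item.partition("-")
        node["slots"].append((int(lo), int(hi or lo)))
    return node

line = ("fa2296ba9405cb3078ae5b8130eb7bd74771e518 172.20.0.13:7013 "
        "master - 0 1541495131221 3 connected 10923-16383")
print(parse_cluster_node(line)["slots"])  # [(10923, 16383)]
```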

4. Removing cluster nodes
Here we remove the master node 172.20.0.13:7013 and the slave node 172.20.0.23:7023.
A cluster node cannot be deleted outright: the master's hash slots must first be migrated to other nodes.
./redis-trib.rb reshard 172.20.0.13:7013

[root@iZj6c3zjf2blpqz40tntifZ redis]# ./redis-trib.rb reshard 172.20.0.13:7013
>>> Performing Cluster Check (using node 172.20.0.13:7013)
M: fa2296ba9405cb3078ae5b8130eb7bd74771e518 172.20.0.13:7013
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
M: 6c09bb314f68b70472f3147e24462a3150c8a658 172.20.0.11:7011
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
S: 57b5e8a67b644ac6a78ee473c555ff5f42557d72 172.20.0.22:7022
   slots: (0 slots) slave
   replicates da2edc77b8cac87468d95b60e2a12041e9e1f474
S: 385d7a4545ddcaee371358ec3b61a397c0694641 172.20.0.21:7021
   slots: (0 slots) slave
   replicates 6c09bb314f68b70472f3147e24462a3150c8a658
S: 93b4fbafdc1feeb5608cef667a4e78a9d4f1c1d7 172.20.0.23:7023
   slots: (0 slots) slave
   replicates fa2296ba9405cb3078ae5b8130eb7bd74771e518
M: da2edc77b8cac87468d95b60e2a12041e9e1f474 172.20.0.12:7012
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 16384
What is the receiving node ID? 
### Note: the node below is the one that will receive the slots; here we migrate them to
6c09bb314f68b70472f3147e24462a3150c8a658
172.20.0.11:7011
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
  ### The ID below belongs to the node to be removed, 172.20.0.13:7013
Source node #1:fa2296ba9405cb3078ae5b8130eb7bd74771e518
 ### 'done' ends the source list, i.e. we migrate from this single node only.
Source node #2:done

Ready to move 16384 slots.
  Source nodes:
    M: fa2296ba9405cb3078ae5b8130eb7bd74771e518 172.20.0.13:7013
   slots:10923-16383 (5461 slots) master
   1 additional replica(s)
  Destination node:
    M: 6c09bb314f68b70472f3147e24462a3150c8a658 172.20.0.11:7011
   slots:0-5460 (5461 slots) master
   1 additional replica(s)
  Resharding plan:

After the migration above completes,
look at the node information again:

172.20.0.11:7011> cluster nodes
da2edc77b8cac87468d95b60e2a12041e9e1f474 172.20.0.12:7012 master - 0 1541495991251 2 connected 5461-10922
6c09bb314f68b70472f3147e24462a3150c8a658 172.20.0.11:7011 myself,master - 0 0 7 connected 0-5460 10923-16383
385d7a4545ddcaee371358ec3b61a397c0694641 172.20.0.21:7021 slave 6c09bb314f68b70472f3147e24462a3150c8a658 0 1541495989749 7 connected
57b5e8a67b644ac6a78ee473c555ff5f42557d72 172.20.0.22:7022 slave da2edc77b8cac87468d95b60e2a12041e9e1f474 0 1541495990251 5 connected
93b4fbafdc1feeb5608cef667a4e78a9d4f1c1d7 172.20.0.23:7023 slave 6c09bb314f68b70472f3147e24462a3150c8a658 0 1541495991752 7 connected
fa2296ba9405cb3078ae5b8130eb7bd74771e518 172.20.0.13:7013 master - 0 1541495991251 3 connected

Node 172.20.0.13:7013 now holds no slots.

Perform the deletion:
./redis-trib.rb del-node 172.20.0.13:7013 fa2296ba9405cb3078ae5b8130eb7bd74771e518

[root@iZj6c3zjf2blpqz40tntifZ redis]# ./redis-trib.rb del-node 172.20.0.13:7013 fa2296ba9405cb3078ae5b8130eb7bd74771e518
>>> Removing node fa2296ba9405cb3078ae5b8130eb7bd74771e518 from cluster 172.20.0.13:7013
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.

Check the node list again:

172.20.0.11:7011> cluster nodes
da2edc77b8cac87468d95b60e2a12041e9e1f474 172.20.0.12:7012 master - 0 1541496191878 2 connected 5461-10922
6c09bb314f68b70472f3147e24462a3150c8a658 172.20.0.11:7011 myself,master - 0 0 7 connected 0-5460 10923-16383
385d7a4545ddcaee371358ec3b61a397c0694641 172.20.0.21:7021 slave 6c09bb314f68b70472f3147e24462a3150c8a658 0 1541496192378 7 connected
57b5e8a67b644ac6a78ee473c555ff5f42557d72 172.20.0.22:7022 slave da2edc77b8cac87468d95b60e2a12041e9e1f474 0 1541496190876 5 connected
93b4fbafdc1feeb5608cef667a4e78a9d4f1c1d7 172.20.0.23:7023 slave 6c09bb314f68b70472f3147e24462a3150c8a658 0 1541496192378 7 connected

Notice that 172.20.0.23:7023, formerly the slave of master 172.20.0.13:7013, is now a slave of a different master.
This demonstrates replica migration: when a master loses all of its slots and is removed, its slave is reassigned to another master.
Slave nodes, by contrast, can be deleted at any time without migrating slots:

./redis-trib.rb del-node 172.20.0.23:7023 93b4fbafdc1feeb5608cef667a4e78a9d4f1c1d7
>>> Removing node 93b4fbafdc1feeb5608cef667a4e78a9d4f1c1d7 from cluster 172.20.0.23:7023
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.

172.20.0.12:7012> cluster nodes
da2edc77b8cac87468d95b60e2a12041e9e1f474 172.20.0.12:7012 myself,master - 0 0 2 connected 5461-10922
57b5e8a67b644ac6a78ee473c555ff5f42557d72 172.20.0.22:7022 slave da2edc77b8cac87468d95b60e2a12041e9e1f474 0 1541496592093 5 connected
385d7a4545ddcaee371358ec3b61a397c0694641 172.20.0.21:7021 slave 6c09bb314f68b70472f3147e24462a3150c8a658 0 1541496591592 7 connected
6c09bb314f68b70472f3147e24462a3150c8a658 172.20.0.11:7011 master - 0 1541496593095 7 connected 0-5460 10923-16383

5. Adding cluster nodes
Syntax:

./redis-trib.rb add-node new_host:new_port existing_host:existing_port

./redis-trib.rb add-node 172.20.0.13:7013 172.20.0.11:7011
Notes:
172.20.0.13:7013 is the node being added.
172.20.0.11:7011 is an existing node in the cluster.

[root@iZj6c3zjf2blpqz40tntifZ redis]# ./redis-trib.rb add-node 172.20.0.13:7013 172.20.0.11:7011
>>> Adding node 172.20.0.13:7013 to cluster 172.20.0.11:7011
>>> Performing Cluster Check (using node 172.20.0.11:7011)
M: 6c09bb314f68b70472f3147e24462a3150c8a658 172.20.0.11:7011
   slots:0-5460,10923-16383 (10922 slots) master
   1 additional replica(s)
M: da2edc77b8cac87468d95b60e2a12041e9e1f474 172.20.0.12:7012
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: 385d7a4545ddcaee371358ec3b61a397c0694641 172.20.0.21:7021
   slots: (0 slots) slave
   replicates 6c09bb314f68b70472f3147e24462a3150c8a658
S: 57b5e8a67b644ac6a78ee473c555ff5f42557d72 172.20.0.22:7022
   slots: (0 slots) slave
   replicates da2edc77b8cac87468d95b60e2a12041e9e1f474
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 172.20.0.13:7013 to make it join the cluster.
[OK] New node added correctly.

The cluster now contains one more node, but it holds no slots yet:

172.20.0.12:7012> cluster nodes
da2edc77b8cac87468d95b60e2a12041e9e1f474 172.20.0.12:7012 myself,master - 0 0 2 connected 5461-10922
57b5e8a67b644ac6a78ee473c555ff5f42557d72 172.20.0.22:7022 slave da2edc77b8cac87468d95b60e2a12041e9e1f474 0 1541501589550 5 connected
385d7a4545ddcaee371358ec3b61a397c0694641 172.20.0.21:7021 slave 6c09bb314f68b70472f3147e24462a3150c8a658 0 1541501589049 7 connected
6c09bb314f68b70472f3147e24462a3150c8a658 172.20.0.11:7011 master - 0 1541501590552 7 connected 0-5460 10923-16383
0bb6754f53761d8497d31abe34fd1dff458abdf6 172.20.0.13:7013 master - 0 1541501589550 0 connected

Next, assign slots to the new node:

[root@iZj6c3zjf2blpqz40tntifZ redis]# ./redis-trib.rb reshard 172.20.0.13:7013
>>> Performing Cluster Check (using node 172.20.0.13:7013)
M: 0bb6754f53761d8497d31abe34fd1dff458abdf6 172.20.0.13:7013
   slots: (0 slots) master
   0 additional replica(s)
M: 6c09bb314f68b70472f3147e24462a3150c8a658 172.20.0.11:7011
   slots:0-5460,10923-16383 (10922 slots) master
   1 additional replica(s)
S: 385d7a4545ddcaee371358ec3b61a397c0694641 172.20.0.21:7021
   slots: (0 slots) slave
   replicates 6c09bb314f68b70472f3147e24462a3150c8a658
S: 57b5e8a67b644ac6a78ee473c555ff5f42557d72 172.20.0.22:7022
   slots: (0 slots) slave
   replicates da2edc77b8cac87468d95b60e2a12041e9e1f474
M: da2edc77b8cac87468d95b60e2a12041e9e1f474 172.20.0.12:7012
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
How many slots do you want to move (from 1 to 16384)? 5460
What is the receiving node ID? 
### This is the new node's ID
0bb6754f53761d8497d31abe34fd1dff458abdf6
Please enter all the source node IDs.
  Type 'all' to use all the nodes as source nodes for the hash slots.
  Type 'done' once you entered all the source nodes IDs.
### This is the ID of an existing node that currently holds slots
Source node #1:6c09bb314f68b70472f3147e24462a3150c8a658

Type done, then yes, and wait for the allocation to finish.

Once it completes, check the cluster nodes again:

172.20.0.12:7012> cluster nodes
da2edc77b8cac87468d95b60e2a12041e9e1f474 172.20.0.12:7012 myself,master - 0 0 2 connected 5461-10922
57b5e8a67b644ac6a78ee473c555ff5f42557d72 172.20.0.22:7022 slave da2edc77b8cac87468d95b60e2a12041e9e1f474 0 1541501972469 5 connected
385d7a4545ddcaee371358ec3b61a397c0694641 172.20.0.21:7021 slave 6c09bb314f68b70472f3147e24462a3150c8a658 0 1541501971968 7 connected
6c09bb314f68b70472f3147e24462a3150c8a658 172.20.0.11:7011 master - 0 1541501970966 7 connected 5460 10923-16383
0bb6754f53761d8497d31abe34fd1dff458abdf6 172.20.0.13:7013 master - 0 1541501971467 8 connected 0-5459

The new node now holds slots, so it was added successfully.
A newly added node is a master by default.

6. Adding a slave node

We added a master node above; now we attach a slave to it.
./redis-trib.rb add-node 172.20.0.23:7023 172.20.0.11:7011
The node being added is 172.20.0.23:7023.

[root@iZj6c3zjf2blpqz40tntifZ redis]# ./redis-trib.rb add-node 172.20.0.23:7023 172.20.0.11:7011
>>> Adding node 172.20.0.23:7023 to cluster 172.20.0.11:7011
>>> Performing Cluster Check (using node 172.20.0.11:7011)
M: 6c09bb314f68b70472f3147e24462a3150c8a658 172.20.0.11:7011
   slots:5460,10923-16383 (5462 slots) master
   1 additional replica(s)
M: da2edc77b8cac87468d95b60e2a12041e9e1f474 172.20.0.12:7012
   slots:5461-10922 (5462 slots) master
   1 additional replica(s)
S: 385d7a4545ddcaee371358ec3b61a397c0694641 172.20.0.21:7021
   slots: (0 slots) slave
   replicates 6c09bb314f68b70472f3147e24462a3150c8a658
S: 57b5e8a67b644ac6a78ee473c555ff5f42557d72 172.20.0.22:7022
   slots: (0 slots) slave
   replicates da2edc77b8cac87468d95b60e2a12041e9e1f474
M: 0bb6754f53761d8497d31abe34fd1dff458abdf6 172.20.0.13:7013
   slots:0-5459 (5460 slots) master
   0 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
>>> Send CLUSTER MEET to node 172.20.0.23:7023 to make it join the cluster.
[OK] New node added correctly.

Log into the newly added node:
./redis-cli -h 172.20.0.23 -p 7023

[root@iZj6c3zjf2blpqz40tntifZ redis]# ./redis-cli -h 172.20.0.23 -p 7023
172.20.0.23:7023> cluster replicate 0bb6754f53761d8497d31abe34fd1dff458abdf6
OK

cluster replicate takes the node ID of the intended master; here we make 172.20.0.23:7023 a slave of 172.20.0.13:7013 (node ID 0bb6754f53761d8497d31abe34fd1dff458abdf6).

Check the result:

172.20.0.11:7011> cluster nodes
da2edc77b8cac87468d95b60e2a12041e9e1f474 172.20.0.12:7012 master - 0 1541522277388 2 connected 5461-10922
6c09bb314f68b70472f3147e24462a3150c8a658 172.20.0.11:7011 myself,master - 0 0 7 connected 5460 10923-16383
56043b9b4dfb7c1466cab22735efa26f40cc8c56 172.20.0.23:7023 slave 0bb6754f53761d8497d31abe34fd1dff458abdf6 0 1541522277388 8 connected
385d7a4545ddcaee371358ec3b61a397c0694641 172.20.0.21:7021 slave 6c09bb314f68b70472f3147e24462a3150c8a658 0 1541522276886 7 connected
57b5e8a67b644ac6a78ee473c555ff5f42557d72 172.20.0.22:7022 slave da2edc77b8cac87468d95b60e2a12041e9e1f474 0 1541522276387 5 connected
0bb6754f53761d8497d31abe34fd1dff458abdf6 172.20.0.13:7013 master - 0 1541522277888 8 connected 0-5459

7. Verifying master failover

docker stop redis7013
We take 172.20.0.13:7013 down and check the cluster state:

172.20.0.11:7011> cluster nodes
da2edc77b8cac87468d95b60e2a12041e9e1f474 172.20.0.12:7012 master - 0 1541522384096 2 connected 5461-10922
6c09bb314f68b70472f3147e24462a3150c8a658 172.20.0.11:7011 myself,master - 0 0 7 connected 5460 10923-16383
56043b9b4dfb7c1466cab22735efa26f40cc8c56 172.20.0.23:7023 master - 0 1541522385099 9 connected 0-5459
385d7a4545ddcaee371358ec3b61a397c0694641 172.20.0.21:7021 slave 6c09bb314f68b70472f3147e24462a3150c8a658 0 1541522384597 7 connected
57b5e8a67b644ac6a78ee473c555ff5f42557d72 172.20.0.22:7022 slave da2edc77b8cac87468d95b60e2a12041e9e1f474 0 1541522385601 5 connected
0bb6754f53761d8497d31abe34fd1dff458abdf6 172.20.0.13:7013 master,fail - 1541522379177 1541522377573 8 connected

The slave 172.20.0.23:7023 has been promoted to master, which gives the cluster high availability.
Now restart the former master 172.20.0.13:7013 and check the cluster again:

172.20.0.11:7011> cluster nodes
da2edc77b8cac87468d95b60e2a12041e9e1f474 172.20.0.12:7012 master - 0 1541522424702 2 connected 5461-10922
6c09bb314f68b70472f3147e24462a3150c8a658 172.20.0.11:7011 myself,master - 0 0 7 connected 5460 10923-16383
56043b9b4dfb7c1466cab22735efa26f40cc8c56 172.20.0.23:7023 master - 0 1541522423703 9 connected 0-5459
385d7a4545ddcaee371358ec3b61a397c0694641 172.20.0.21:7021 slave 6c09bb314f68b70472f3147e24462a3150c8a658 0 1541522423703 7 connected
57b5e8a67b644ac6a78ee473c555ff5f42557d72 172.20.0.22:7022 slave da2edc77b8cac87468d95b60e2a12041e9e1f474 0 1541522424202 5 connected
0bb6754f53761d8497d31abe34fd1dff458abdf6 172.20.0.13:7013 slave 56043b9b4dfb7c1466cab22735efa26f40cc8c56 0 1541522422702 9 connected

The former master has rejoined as a slave.

Done!
