Reference: https://www.runoob.com/docker/docker-redis-cluster.html
A distributed database maps the full dataset onto multiple nodes according to a partitioning rule, with each node responsible for a portion of the data.
Redis Cluster uses virtual-slot partitioning (a consistent-hashing scheme improved with virtual slots): every key is mapped by a hash function into one of the integer slots 0 to 16383, using the formula slot = CRC16(key) & 16383. The slot determines which node serves the key. Slots are the basic unit of data management and migration inside the cluster.
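The slot formula can be checked outside Redis. A minimal sketch in Python, assuming the CRC16-XMODEM polynomial that Redis Cluster uses (hash tags such as {user} are ignored here for brevity):

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16-CCITT (XMODEM), the polynomial Redis Cluster uses."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            # shift left; on overflow of the top bit, XOR with the polynomial 0x1021
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """slot = CRC16(key) & 16383"""
    return crc16_xmodem(key.encode()) & 16383
```

`redis-cli cluster keyslot <key>` should agree with `key_slot`; for example, the standard CRC16-XMODEM check value for "123456789" is 0x31C3 (12739), which is also its slot since it is below 16384.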
The cluster is built with 6 nodes: 3 masters and 3 replicas.
Cluster setup
1. Download redis.conf
Download from: https://github.com/antirez/redis/blob/unstable/redis.conf
2. Modify redis.conf
Enable cluster mode:
cluster-enabled yes
Set the node port:
port 6391
Node timeout, in milliseconds:
cluster-node-timeout 15000
Internal cluster state file (written by Redis itself; because all six containers share the same ./config volume, each node needs a unique name that does not collide with its main configuration file):
cluster-config-file "cluster-6391.conf"
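Each of the six nodes needs its own copy of this configuration, differing only in the port. A sketch that generates minimal per-node files (the nodes-639X.conf names match the startup script in step 4; the per-node cluster-config-file name is an assumption chosen to avoid collisions in the shared config/ directory, and in practice you may prefer to start from the full downloaded redis.conf):

```python
import os

# minimal per-node configuration; a real deployment would start from redis.conf
TEMPLATE = """port {port}
cluster-enabled yes
cluster-config-file "cluster-{port}.conf"
cluster-node-timeout 15000
"""

os.makedirs("config", exist_ok=True)
for port in range(6391, 6397):  # ports 6391..6396
    with open(f"config/nodes-{port}.conf", "w") as f:
        f.write(TEMPLATE.format(port=port))
```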
3. Write docker-compose.yml
version: "3.6"
services:
  redis-master1:
    image: redis:5.0 # base image
    container_name: redis-master1 # container name
    working_dir: /config # working directory
    environment: # environment variables
      - PORT=6391 # must match the port in config/nodes-6391.conf
    ports: # published ports
      - "6391:6391" # redis service port
      - "16391:16391" # redis cluster bus port
    stdin_open: true # keep stdin open
    networks: # docker network settings
      redis-master:
        ipv4_address: 172.50.0.2
    tty: true
    privileged: true # allow privileged operations inside the container
    volumes: ["./config:/config"] # mount the configuration directory
    entrypoint: # default startup program for the service
      - /bin/bash
      - redis.sh
  redis-master2:
    image: redis:5.0
    working_dir: /config
    container_name: redis-master2
    environment:
      - PORT=6392
    networks:
      redis-master:
        ipv4_address: 172.50.0.3
    ports:
      - "6392:6392"
      - "16392:16392"
    stdin_open: true
    tty: true
    privileged: true
    volumes: ["./config:/config"]
    entrypoint:
      - /bin/bash
      - redis.sh
  redis-master3:
    image: redis:5.0
    container_name: redis-master3
    working_dir: /config
    environment:
      - PORT=6393
    networks:
      redis-master:
        ipv4_address: 172.50.0.4
    ports:
      - "6393:6393"
      - "16393:16393"
    stdin_open: true
    tty: true
    privileged: true
    volumes: ["./config:/config"]
    entrypoint:
      - /bin/bash
      - redis.sh
  redis-slave1:
    image: redis:5.0
    container_name: redis-slave1
    working_dir: /config
    environment:
      - PORT=6394
    networks:
      redis-slave:
        ipv4_address: 172.30.0.2
    ports:
      - "6394:6394"
      - "16394:16394"
    stdin_open: true
    tty: true
    privileged: true
    volumes: ["./config:/config"]
    entrypoint:
      - /bin/bash
      - redis.sh
  redis-slave2:
    image: redis:5.0
    working_dir: /config
    container_name: redis-slave2
    environment:
      - PORT=6395
    ports:
      - "6395:6395"
      - "16395:16395"
    stdin_open: true
    networks:
      redis-slave:
        ipv4_address: 172.30.0.3
    tty: true
    privileged: true
    volumes: ["./config:/config"]
    entrypoint:
      - /bin/bash
      - redis.sh
  redis-slave3:
    image: redis:5.0
    container_name: redis-slave3
    working_dir: /config
    environment:
      - PORT=6396
    ports:
      - "6396:6396"
      - "16396:16396"
    stdin_open: true
    networks:
      redis-slave:
        ipv4_address: 172.30.0.4
    tty: true
    privileged: true
    volumes: ["./config:/config"]
    entrypoint:
      - /bin/bash
      - redis.sh
networks:
  redis-master:
    driver: bridge # create a bridge network for the masters
    ipam:
      driver: default
      config:
        - subnet: 172.50.0.0/16
  redis-slave:
    driver: bridge
    ipam:
      driver: default
      config:
        - subnet: 172.30.0.0/16
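The six service blocks above differ only in name, port, network, and IP address, so their structure can be described mechanically. A sketch that rebuilds the same definitions as Python dicts (useful for sanity-checking the repetition, or for dumping to YAML with a library of your choice):

```python
def service(name: str, port: int, network: str, ip: str) -> dict:
    """Build one redis service entry mirroring the hand-written compose file."""
    return {
        "image": "redis:5.0",
        "container_name": name,
        "working_dir": "/config",
        "environment": [f"PORT={port}"],
        "ports": [f"{port}:{port}", f"1{port}:1{port}"],  # service + cluster bus port
        "stdin_open": True,
        "tty": True,
        "privileged": True,
        "volumes": ["./config:/config"],
        "networks": {network: {"ipv4_address": ip}},
        "entrypoint": ["/bin/bash", "redis.sh"],
    }

services = {}
for i in range(3):
    services[f"redis-master{i + 1}"] = service(
        f"redis-master{i + 1}", 6391 + i, "redis-master", f"172.50.0.{i + 2}")
    services[f"redis-slave{i + 1}"] = service(
        f"redis-slave{i + 1}", 6394 + i, "redis-slave", f"172.30.0.{i + 2}")
```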
4. Write the default Redis startup script
Create the file config/redis.sh with the following content:
redis-server /config/nodes-${PORT}.conf
Under config/, place the per-node Redis configuration files nodes-6391.conf through nodes-6396.conf from step 2.
5. Start the cluster
Start the services with:
$ docker-compose up -d
6. Initialize the cluster
1. Connect to any node (192.168.1.130 is the Docker host's IP):
redis-cli -h 192.168.1.130 -p 6391
# run `info cluster` to inspect the cluster; the output looks like:
# Cluster
cluster_enabled:1
# output of `cluster info`:
cluster_state:fail
cluster_slots_assigned:0
cluster_slots_ok:0
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:1
cluster_size:0
cluster_current_epoch:0
cluster_my_epoch:0
cluster_stats_messages_sent:0
cluster_stats_messages_received:0
# output of `cluster nodes`:
f85f4ffb9bf728c8214846b9b220bb4ccd7cdc6b :6391@16391 myself,master - 0 0 0 connected
Only one node is listed because, at this point, each node knows only about itself; the nodes have not yet shaken hands with each other.
2. Node handshake
A node handshake is the process by which nodes running in cluster mode discover each other over the Gossip protocol. Running the cluster meet command on any node adds a new node to the cluster; the handshake state then propagates through cluster messages, so the other nodes automatically discover the new node and initiate handshakes with it.
# repeat analogously for ports 6393 through 6396
192.168.1.130:6391> cluster meet 192.168.1.130 6392
# after all meet commands, run cluster nodes again:
46936ecf75500129e16897c1d0b228e18eec112d 172.50.0.1:6396@16396 master - 0 1588238630000 5 connected
101c966e9c13ca57bc215d8992871dc6e183252b 172.50.0.1:6395@16395 master - 0 1588238627000 4 connected
e34be842b4a93d328026d8a003592c494082cb14 172.50.0.1:6393@16393 master - 0 1588238629606 2 connected
bdb2b1d66ec77e0a701e70d4b987ee94634e979c 172.50.0.1:6394@16394 master - 0 1588238628599 3 connected
7864ce2be97b15e7998377ae934f855e67079682 172.50.0.1:6392@16392 master - 0 1588238630616 1 connected
f85f4ffb9bf728c8214846b9b220bb4ccd7cdc6b 172.50.0.2:6391@16391 myself,master - 0 1588238627000 0 connected
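The five meet commands differ only in the target port, so they can be generated rather than typed by hand. A sketch that only builds the command strings (192.168.1.130 is the host IP used throughout this walkthrough):

```python
HOST = "192.168.1.130"  # Docker host IP used in this walkthrough

def meet_commands(host: str, base_port: int = 6391, count: int = 6) -> list:
    """Build `cluster meet` invocations introducing every other node to the first."""
    return [
        f"redis-cli -h {host} -p {base_port} cluster meet {host} {port}"
        for port in range(base_port + 1, base_port + count)
    ]

for cmd in meet_commands(HOST):
    print(cmd)
```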
After the handshake the cluster still cannot serve requests: it is offline (cluster_state:fail) because no slots have been assigned yet, so the cluster cannot map slots to nodes.
192.168.1.130:6391> set hello redis
(error) CLUSTERDOWN Hash slot not served
3. Assign slots
Redis Cluster maps all data into 16384 slots. Each key maps to a fixed slot, and a node can only serve commands for keys in the slots assigned to it. Assign the slots to the three masters with the cluster addslots command.
# run the following three commands to distribute all 16384 slots
redis-cli -h 192.168.1.130 -p 6391 cluster addslots {0..5641}
redis-cli -h 192.168.1.130 -p 6392 cluster addslots {5642..10922}
redis-cli -h 192.168.1.130 -p 6393 cluster addslots {10923..16383}
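The three ranges above (0-5641, 5642-10922, 10923-16383) jointly cover all 16384 slots. For a different number of masters, a near-even split can be computed; a small sketch:

```python
def slot_ranges(n_masters: int, total_slots: int = 16384) -> list:
    """Split slots 0..total_slots-1 into contiguous, near-equal ranges."""
    ranges = []
    start = 0
    for i in range(n_masters):
        # distribute the remainder over the first few masters
        size = total_slots // n_masters + (1 if i < total_slots % n_masters else 0)
        ranges.append((start, start + size - 1))
        start += size
    return ranges
```

For three masters this yields (0, 5461), (5462, 10922), (10923, 16383); the walkthrough's own split differs slightly but also covers every slot.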
The cluster state is now ok and the cluster is online: all slots have been assigned, and cluster nodes shows the slot-to-node mapping.
# run cluster info again to check the cluster state
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:5
cluster_my_epoch:0
cluster_stats_messages_ping_sent:225
cluster_stats_messages_pong_sent:225
cluster_stats_messages_meet_sent:5
cluster_stats_messages_sent:455
cluster_stats_messages_ping_received:225
cluster_stats_messages_pong_received:230
cluster_stats_messages_received:455
# cluster nodes now shows each node's slot range
46936ecf75500129e16897c1d0b228e18eec112d 172.50.0.1:6396@16396 master - 0 1588238899000 5 connected
101c966e9c13ca57bc215d8992871dc6e183252b 172.50.0.1:6395@16395 master - 0 1588238899000 4 connected
e34be842b4a93d328026d8a003592c494082cb14 172.50.0.1:6393@16393 master - 0 1588238899026 2 connected 10923-16383
bdb2b1d66ec77e0a701e70d4b987ee94634e979c 172.50.0.1:6394@16394 master - 0 1588238900036 3 connected
7864ce2be97b15e7998377ae934f855e67079682 172.50.0.1:6392@16392 master - 0 1588238899000 1 connected 5642-10922
f85f4ffb9bf728c8214846b9b220bb4ccd7cdc6b 172.50.0.2:6391@16391 myself,master - 0 1588238896000 0 connected 0-5641
Three nodes are still unused. In a complete cluster, every master that serves slots should have a replica, so that automatic failover can take place when a master fails. Use the cluster replicate command to turn a node into a replica.
# make node 4 (port 6394) a replica of node 1 (port 6391)
192.168.1.130:6394> cluster replicate f85f4ffb9bf728c8214846b9b220bb4ccd7cdc6b
# node 5 -> node 2
192.168.1.130:6395> cluster replicate 7864ce2be97b15e7998377ae934f855e67079682
# node 6 -> node 3
192.168.1.130:6396> cluster replicate e34be842b4a93d328026d8a003592c494082cb14
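The replica-to-master pairing is just data, so the three commands can also be generated; a sketch using the master node IDs from the cluster nodes output above:

```python
HOST = "192.168.1.130"

# master node IDs copied from the `cluster nodes` output above
PAIRINGS = {
    6394: "f85f4ffb9bf728c8214846b9b220bb4ccd7cdc6b",  # replica of the master on 6391
    6395: "7864ce2be97b15e7998377ae934f855e67079682",  # replica of the master on 6392
    6396: "e34be842b4a93d328026d8a003592c494082cb14",  # replica of the master on 6393
}

def replicate_commands(host: str, pairings: dict) -> list:
    """Build one `cluster replicate` invocation per replica."""
    return [
        f"redis-cli -h {host} -p {port} cluster replicate {master_id}"
        for port, master_id in sorted(pairings.items())
    ]

for cmd in replicate_commands(HOST, PAIRINGS):
    print(cmd)
```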
We have now manually built a 6-node cluster: 3 masters handle the slots and their data, and 3 replicas stand by for automatic failover.