19. Redis Series: High Availability with Sentinel and Cluster Modes

1. Sentinel Mode

Redis sentinel mode automatically promotes a replica to master when the master goes down, keeping the service highly available.

We will set up one master (6379), two replicas (6380, 6381), and three sentinel nodes (26379, 26380, 26381). When the master fails, the sentinels elect a new master from the replicas; running three sentinels prevents a single sentinel failure from breaking the monitoring itself.
Let's walk through the setup in practice first, then dig into how it works.

1.1 Deploying sentinel mode with docker compose

1.1.1 docker-compose.yml configuration

version: '3'
services:
  redis6379:
    container_name: redis6379
    image: redis:7.0.5-alpine3.16
    ports:
      - 6379:6379
    command: redis-server /usr/local/redis/conf/redis.conf
    volumes:
      - ./conf/redis6379.conf:/usr/local/redis/conf/redis.conf
    sysctls: # raise the socket listen() backlog limit (kernel default 128): the backlog queues connections that have been received but not yet accepted; once it fills up, new connections are rejected
      - net.core.somaxconn=1024
    # with privileged, root inside the container has real root privileges; otherwise it maps to an ordinary unprivileged user on the host
    privileged: true
    environment:
      - TZ=Asia/Shanghai
      - LANG=en_US.UTF-8
  redis6380:
    container_name: redis6380
    image: redis:7.0.5-alpine3.16
    ports:
      - 6380:6379
    command: redis-server /usr/local/redis/conf/redis.conf
    volumes:
      - ./conf/redis6380.conf:/usr/local/redis/conf/redis.conf
    sysctls:
      - net.core.somaxconn=1024
    privileged: true
    environment:
      - TZ=Asia/Shanghai
      - LANG=en_US.UTF-8
  redis6381:
    container_name: redis6381
    image: redis:7.0.5-alpine3.16
    ports:
      - 6381:6379
    command: redis-server /usr/local/redis/conf/redis.conf
    volumes:
      - ./conf/redis6381.conf:/usr/local/redis/conf/redis.conf
    sysctls:
      - net.core.somaxconn=1024
    privileged: true
    environment:
      - TZ=Asia/Shanghai
      - LANG=en_US.UTF-8
  sentinel26379:
    container_name: sentinel26379
    image: redis:7.0.5-alpine3.16
    ports:
      - 26379:26379
    command: redis-sentinel /usr/local/redis/conf/sentinel.conf
    volumes:
      - ./conf/sentinel26379.conf:/usr/local/redis/conf/sentinel.conf
    environment:
      - TZ=Asia/Shanghai
      - LANG=en_US.UTF-8
  sentinel26380:
    container_name: sentinel26380
    image: redis:7.0.5-alpine3.16
    ports:
      - 26380:26379
    command: redis-sentinel /usr/local/redis/conf/sentinel.conf
    volumes:
      - ./conf/sentinel26380.conf:/usr/local/redis/conf/sentinel.conf
    environment:
      - TZ=Asia/Shanghai
      - LANG=en_US.UTF-8
  sentinel26381:
    container_name: sentinel26381
    image: redis:7.0.5-alpine3.16
    ports:
      - 26381:26379
    command: redis-sentinel /usr/local/redis/conf/sentinel.conf
    volumes:
      - ./conf/sentinel26381.conf:/usr/local/redis/conf/sentinel.conf
    environment:
      - TZ=Asia/Shanghai
      - LANG=en_US.UTF-8

1.1.2 redis.conf for the three Redis nodes

redis6379.conf:

# Password clients must supply to connect
requirepass 123456
# Password this node uses when replicating data from the master
masterauth 123456
# Master-election priority: lower wins, so 98 makes this the top candidate
# of the three nodes; 0 means never eligible for election
replica-priority 98
# Refuse writes on the master unless at least 1 replica is connected and
# has acknowledged within the last 10 seconds
min-replicas-to-write 1
min-replicas-max-lag 10
# In Docker, announce the host IP explicitly; unnecessary on a plain machine
replica-announce-ip 192.168.1.9
replica-announce-port 6379
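The write-gating effect of min-replicas-to-write / min-replicas-max-lag can be sketched as a small pure function (an illustrative model only, not Redis source): the master counts replicas whose last acknowledgement arrived within max-lag seconds and refuses writes when that count falls below the threshold.

```python
def master_accepts_writes(replica_lags, min_to_write=1, max_lag=10):
    """Illustrative model of min-replicas-to-write / min-replicas-max-lag.

    replica_lags: seconds since each connected replica's last ACK.
    A replica is "good" if its lag is <= max_lag; the master accepts
    writes only while at least min_to_write replicas are good.
    """
    good = sum(1 for lag in replica_lags if lag <= max_lag)
    return good >= min_to_write

print(master_accepts_writes([0, 1]))    # both replicas fresh -> True
print(master_accepts_writes([12, 15]))  # all replicas stale  -> False
print(master_accepts_writes([]))        # no replicas at all  -> False
```

With our settings, losing both replicas (or having them lag more than 10 s) makes the master reject writes, trading availability for bounded data loss.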

redis6380.conf:

requirepass 123456
masterauth 123456
# master-election priority: second of the three nodes
replica-priority 99
min-replicas-to-write 1
min-replicas-max-lag 10
replica-announce-ip 192.168.1.9
replica-announce-port 6380

redis6381.conf:

requirepass 123456
masterauth 123456
# master-election priority: lowest of the three nodes
replica-priority 100
min-replicas-to-write 1
min-replicas-max-lag 10
replica-announce-ip 192.168.1.9
replica-announce-port 6381


1.1.3 sentinel.conf for the three sentinel nodes

sentinel announce-ip 192.168.1.9
# the other two sentinels announce 26380 and 26381 respectively
sentinel announce-port 26379
# master to monitor, plus the quorum: how many sentinels must agree the
# master is unreachable before it is marked objectively down
sentinel monitor mymaster 192.168.1.9 6379 2
# password for connecting to the master
sentinel auth-pass mymaster 123456

1.1.4 Failover in practice

# Start all services
docker-compose up -d

Connect to the master on 6379 and run info replication:

127.0.0.1:6379> info replication
# Replication
role:master # this node is the master
connected_slaves:2 # two replicas attached
min_slaves_good_slaves:2
slave0:ip=192.168.1.9,port=6381,state=online,offset=9569,lag=0
slave1:ip=192.168.1.9,port=6380,state=online,offset=9569,lag=1
master_failover_state:no-failover
master_replid:cf305c9024f910c05d362ff44bf14156fbd28a06
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:9706
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:9706

Now shut the master down, connect to node 6380, and run info replication again; 6380 has been elected master and everything still works:

127.0.0.1:6379> info replication
# Replication
role:master # promoted to master
connected_slaves:1 # only one replica remains
min_slaves_good_slaves:1
slave0:ip=192.168.1.9,port=6381,state=online,offset=46286,lag=1
master_failover_state:no-failover
master_replid:7c86a7dfde5866b315cb2bb1f511362957e2e6c3
master_replid2:cf305c9024f910c05d362ff44bf14156fbd28a06
master_repl_offset:46423
second_repl_offset:39372
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:860
repl_backlog_histlen:45564
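Why was 6380 promoted rather than 6381? When the leader sentinel picks a replacement, it prefers the lowest replica-priority (0 excluded), breaks ties with the highest replication offset, and finally the smallest run ID; with the priorities 98/99/100 configured above, 6380 (99) outranks 6381 (100). A minimal sketch of that ordering, using hypothetical replica records:

```python
def pick_new_master(replicas):
    """Rank failover candidates the way Sentinel does (sketch only):
    lowest replica-priority first (0 = never eligible), then the
    largest replication offset, then the smallest run_id."""
    eligible = [r for r in replicas if r["priority"] != 0]
    return min(eligible, key=lambda r: (r["priority"], -r["offset"], r["run_id"]))

replicas = [
    {"name": "6380", "priority": 99,  "offset": 46423, "run_id": "b"},
    {"name": "6381", "priority": 100, "offset": 46423, "run_id": "c"},
]
print(pick_new_master(replicas)["name"])  # 6380: priority 99 beats 100
```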

When we restart node 6379, it comes back as a replica of 6380:

127.0.0.1:6379> info replication
# Replication
role:master
connected_slaves:2
min_slaves_good_slaves:2
slave0:ip=192.168.1.9,port=6381,state=online,offset=60084,lag=1
slave1:ip=192.168.1.9,port=6379,state=online,offset=60244,lag=0
master_failover_state:no-failover
master_replid:267c5deb8c1ac716c81e8edac189a19dbb9b0c23
master_replid2:abfc618a736ea90d843a69cd3d8f7144178fbbd9
master_repl_offset:60381
second_repl_offset:49717
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:997
repl_backlog_histlen:59385

1.2 Summary of sentinel capabilities

  • Monitoring: sentinels continuously check that the master and replicas are healthy
  • Notification: sentinels can alert administrators or other programs when a node misbehaves
  • Automatic failover: when the master goes down, a replica is promoted automatically
  • Configuration provider: clients ask a sentinel for the current master's address

1.3 Sentinel election flow

  1. Each sentinel independently detects the master as subjectively down via a +sdown [Subjectively Down] event
  2. +sdown escalates to +odown [Objectively Down] once enough sentinels agree the master is unreachable
  3. The sentinels elect one leader sentinel to perform the failover
  4. The failover completes
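Steps 1-2 boil down to a quorum check: with the quorum of 2 configured in sentinel monitor, the master becomes objectively down once two of the three sentinels report it subjectively down. A minimal sketch:

```python
def is_odown(sdown_votes, quorum=2):
    """A master escalates from +sdown to +odown once at least
    `quorum` sentinels report it subjectively down."""
    return sum(sdown_votes) >= quorum

# Three sentinels, quorum 2:
print(is_odown([True, True, False]))   # True  -> objectively down
print(is_odown([True, False, False]))  # False -> only subjectively down
```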


2. Cluster Mode

Redis Cluster scales horizontally by sharding data across a cluster of nodes.

2.1 Redis Cluster configuration

# Allow this instance to join a cluster
cluster-enabled yes
# Cluster state file, created and maintained by the node itself
cluster-config-file nodes-6379.conf
# Node timeout: 15 s
cluster-node-timeout 15000
# Port for internal cluster communication (the cluster bus port)
cluster-port 16379
# 0 maximizes availability: a replica will always attempt failover;
# otherwise it waits (node-timeout * cluster-replica-validity-factor)
# + repl-ping-replica-period before trying
cluster-replica-validity-factor 0
# A master must keep at least 1 replica before one of its replicas may
# migrate away to cover another master
cluster-migration-barrier 1
# Allow automatic replica migration
cluster-allow-replica-migration yes
# Keep the cluster serving even when some hash slots are uncovered
cluster-require-full-coverage no
# Replicas may fail over automatically
cluster-replica-no-failover no
# Disallow reads while the cluster is down
cluster-allow-reads-when-down no

# Docker-specific announce settings
cluster-announce-ip 192.168.1.5
cluster-announce-port 6379
cluster-announce-bus-port 16379

The configuration above is redis6379.conf; for redis6380.conf through redis6384.conf, just change the announced ports accordingly, for example:

cluster-announce-ip 192.168.1.5
cluster-announce-port 6380
cluster-announce-bus-port 16380


2.2 docker-compose configuration and cluster startup

version: '3'
services:
  redis6379:
    container_name: redis6379
    image: redis:7.0.5-alpine3.16
    ports:
      - 6379:6379
      - 16379:16379
    command: redis-server /usr/local/redis/conf/redis.conf
    volumes:
      - ./conf/redis6379.conf:/usr/local/redis/conf/redis.conf
    sysctls:
      - net.core.somaxconn=1024
    privileged: true
    environment:
      - TZ=Asia/Shanghai
      - LANG=en_US.UTF-8
  redis6380:
    container_name: redis6380
    image: redis:7.0.5-alpine3.16
    ports:
      - 6380:6379
      - 16380:16379
    command: redis-server /usr/local/redis/conf/redis.conf
    volumes:
      - ./conf/redis6380.conf:/usr/local/redis/conf/redis.conf
    sysctls:
      - net.core.somaxconn=1024
    privileged: true
    environment:
      - TZ=Asia/Shanghai
      - LANG=en_US.UTF-8
  redis6381:
    container_name: redis6381
    image: redis:7.0.5-alpine3.16
    ports:
      - 6381:6379
      - 16381:16379
    command: redis-server /usr/local/redis/conf/redis.conf
    volumes:
      - ./conf/redis6381.conf:/usr/local/redis/conf/redis.conf
    sysctls:
      - net.core.somaxconn=1024
    privileged: true
    environment:
      - TZ=Asia/Shanghai
      - LANG=en_US.UTF-8
  redis6382:
    container_name: redis6382
    image: redis:7.0.5-alpine3.16
    ports:
      - 6382:6379
      - 16382:16379
    command: redis-server /usr/local/redis/conf/redis.conf
    volumes:
      - ./conf/redis6382.conf:/usr/local/redis/conf/redis.conf
    sysctls:
      - net.core.somaxconn=1024
    privileged: true
    environment:
      - TZ=Asia/Shanghai
      - LANG=en_US.UTF-8
  redis6383:
    container_name: redis6383
    image: redis:7.0.5-alpine3.16
    ports:
      - 6383:6379
      - 16383:16379
    command: redis-server /usr/local/redis/conf/redis.conf
    volumes:
      - ./conf/redis6383.conf:/usr/local/redis/conf/redis.conf
    sysctls:
      - net.core.somaxconn=1024
    privileged: true
    environment:
      - TZ=Asia/Shanghai
      - LANG=en_US.UTF-8
  redis6384:
    container_name: redis6384
    image: redis:7.0.5-alpine3.16
    ports:
      - 6384:6379
      - 16384:16379
    command: redis-server /usr/local/redis/conf/redis.conf
    volumes:
      - ./conf/redis6384.conf:/usr/local/redis/conf/redis.conf
    sysctls:
      - net.core.somaxconn=1024
    privileged: true
    environment:
      - TZ=Asia/Shanghai
      - LANG=en_US.UTF-8

Run docker-compose up -d to start the Redis containers:

shenjian\redis\cluster>docker-compose up -d
WARNING: Found orphan containers (es3, kibana, es1) for this project. If you removed or renamed this service in your compose file, you can run this command with th
e --remove-orphans flag to clean it up.
Creating redis6382 ... done
Creating redis6379 ... done
Creating redis6384 ... done
Creating redis6380 ... done
Creating redis6383 ... done
Creating redis6381 ... done

Connect to node 6379 and run redis-cli --askpass -p 6379 cluster nodes to view the cluster membership:

/data # redis-cli --askpass -p 6379 cluster nodes
Please input password: ******
e8392606e948e6e33183119a93b7b3011f5b9407 192.168.1.5:6379@16379 myself,master - 0 0 0 connected

Then create the cluster:

/data #  redis-cli --cluster create --cluster-replicas 1 192.168.1.5:6379 192.168.1.5:6380 192.168.1.5:6381 192.168.1.5:6382 192.168.1.5:6383 192.168.1.5:6384 --askpass
Please input password: ******
>>> Performing hash slots allocation on 6 nodes...
Master[0] -> Slots 0 - 5460
Master[1] -> Slots 5461 - 10922
Master[2] -> Slots 10923 - 16383
Adding replica 192.168.1.5:6383 to 192.168.1.5:6379
Adding replica 192.168.1.5:6384 to 192.168.1.5:6380
Adding replica 192.168.1.5:6382 to 192.168.1.5:6381
>>> Trying to optimize slaves allocation for anti-affinity
[WARNING] Some slaves are in the same host as their master
M: e8392606e948e6e33183119a93b7b3011f5b9407 192.168.1.5:6379
   slots:[0-5460] (5461 slots) master
M: 3e55473bdff571c26761998a499da50dd743082b 192.168.1.5:6380
   slots:[5461-10922] (5462 slots) master
M: 76d7a639bc4849c60fbb0986389c8cf0a335e064 192.168.1.5:6381
   slots:[10923-16383] (5461 slots) master
S: 896b2f4d563abb071758506544b0db41365ffb3e 192.168.1.5:6382
   replicates e8392606e948e6e33183119a93b7b3011f5b9407
S: e89bbe523c5854a056356400960034c1715c11ee 192.168.1.5:6383
   replicates 3e55473bdff571c26761998a499da50dd743082b
S: 42a6b0e0f2df8a95fafa3d52d01a41e1b9310581 192.168.1.5:6384
   replicates 76d7a639bc4849c60fbb0986389c8cf0a335e064
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join
..
>>> Performing Cluster Check (using node 192.168.1.5:6379)
M: e8392606e948e6e33183119a93b7b3011f5b9407 192.168.1.5:6379
   slots:[0-5460] (5461 slots) master
   1 additional replica(s)
S: e89bbe523c5854a056356400960034c1715c11ee 192.168.1.5:6383
   slots: (0 slots) slave
   replicates 3e55473bdff571c26761998a499da50dd743082b
S: 42a6b0e0f2df8a95fafa3d52d01a41e1b9310581 192.168.1.5:6384
   slots: (0 slots) slave
   replicates 76d7a639bc4849c60fbb0986389c8cf0a335e064
S: 896b2f4d563abb071758506544b0db41365ffb3e 192.168.1.5:6382
   slots: (0 slots) slave
   replicates e8392606e948e6e33183119a93b7b3011f5b9407
M: 76d7a639bc4849c60fbb0986389c8cf0a335e064 192.168.1.5:6381
   slots:[10923-16383] (5461 slots) master
   1 additional replica(s)
M: 3e55473bdff571c26761998a499da50dd743082b 192.168.1.5:6380
   slots:[5461-10922] (5462 slots) master
   1 additional replica(s)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
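The three slot ranges printed above come from dividing all 16384 slots as evenly as possible among the 3 masters. The arithmetic can be reproduced in a few lines (an illustrative reconstruction that matches redis-cli's output here):

```python
def split_slots(total=16384, masters=3):
    """Split slots 0..total-1 into `masters` contiguous, near-equal ranges
    by rounding fractional boundaries, as redis-cli's allocation does here."""
    per_node = total / masters
    return [(round(i * per_node), round((i + 1) * per_node) - 1)
            for i in range(masters)]

print(split_slots())  # [(0, 5460), (5461, 10922), (10923, 16383)]
```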

Inspect the cluster again:

/data # redis-cli --askpass -p 6379 cluster nodes
Please input password: ******
e8392606e948e6e33183119a93b7b3011f5b9407 192.168.1.5:6379@16379 myself,master - 0 1669455837000 1 connected 0-5460
e89bbe523c5854a056356400960034c1715c11ee 192.168.1.5:6383@16383 slave 3e55473bdff571c26761998a499da50dd743082b 0 1669455833031 2 connected
42a6b0e0f2df8a95fafa3d52d01a41e1b9310581 192.168.1.5:6384@16384 slave 76d7a639bc4849c60fbb0986389c8cf0a335e064 0 1669455838063 3 connected
896b2f4d563abb071758506544b0db41365ffb3e 192.168.1.5:6382@16382 slave e8392606e948e6e33183119a93b7b3011f5b9407 0 1669455837056 1 connected
76d7a639bc4849c60fbb0986389c8cf0a335e064 192.168.1.5:6381@16381 master - 0 1669455835000 3 connected 10923-16383
3e55473bdff571c26761998a499da50dd743082b 192.168.1.5:6380@16380 master - 0 1669455836051 2 connected 5461-10922

The 3-master/3-replica cluster is up and all 16384 slots (0-16383) have been assigned:

  • slots 0-5460 are served by master 192.168.1.5:6379@16379
  • slots 5461-10922 are served by master 192.168.1.5:6380@16380
  • slots 10923-16383 are served by master 192.168.1.5:6381@16381

2.3 Writing data to slots

# Connect in cluster mode
/data # redis-cli -c -p 6379 --askpass
Please input password: ******
# Set a key: it hashes to slot 14315, served by 192.168.1.5:6381
127.0.0.1:6379> set username shenjian
-> Redirected to slot [14315] located at 192.168.1.5:6381
OK
192.168.1.5:6381> set age 26
-> Redirected to slot [741] located at 192.168.1.5:6379
OK
# mset fails when the keys hash to different slots, because CRC16(key) % 16384 differs per key
192.168.1.5:6380> mset username shenjian age 20
(error) CROSSSLOT Keys in request don't hash to the same slot
# With a hash tag {user}, only "user" is hashed, so both keys land in one slot and the command succeeds
192.168.1.5:6380> mset username:{user} shenjian age:{user} 20
OK
# Read the values back
192.168.1.5:6380> mget username:{user} age:{user}
1) "shenjian"
2) "20"
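The redirections above can be reproduced offline. Redis Cluster hashes every key with CRC16 (the CCITT/XMODEM variant) modulo 16384, and when the key contains a hash tag {…}, only the text inside the first brace pair is hashed, which is why the {user} keys share a slot. A self-contained sketch:

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XMODEM), the variant Redis Cluster uses for key hashing."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else crc << 1
            crc &= 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Hash slot for a key: hash only the {...} hash tag if one is present."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:  # ignore empty tags like {}
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384

print(key_slot("username"))  # 14315, matching the redirection above
print(key_slot("age"))       # 741
print(key_slot("username:{user}") == key_slot("age:{user}"))  # True
```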

2.4 Automatic failover

Shut down node 6379 and inspect the cluster from another node: its replica 192.168.1.5:6382@16382 has been promoted to master:

/data # redis-cli --askpass -p 6379 cluster nodes
Please input password: ******
e89bbe523c5854a056356400960034c1715c11ee 192.168.1.5:6383@16383 slave 3e55473bdff571c26761998a499da50dd743082b 0 1669457824960 2 connected
3e55473bdff571c26761998a499da50dd743082b 192.168.1.5:6380@16380 myself,master - 0 1669457820000 2 connected 5461-10922
76d7a639bc4849c60fbb0986389c8cf0a335e064 192.168.1.5:6381@16381 master - 0 1669457823954 3 connected 10923-16383
42a6b0e0f2df8a95fafa3d52d01a41e1b9310581 192.168.1.5:6384@16384 slave 76d7a639bc4849c60fbb0986389c8cf0a335e064 0 1669457823000 3 connected
e8392606e948e6e33183119a93b7b3011f5b9407 192.168.1.5:6379@16379 master,fail - 1669457796163 1669457795760 1 connected
896b2f4d563abb071758506544b0db41365ffb3e 192.168.1.5:6382@16382 master - 0 1669457822948 7 connected 0-5460

Because this is a pseudo-cluster simulated on a single machine, no data directories were mounted; in a real cluster, don't forget to mount and back up the data directories.
