Redis Replication (master/replica)

Purpose

  • read/write splitting

  • disaster recovery

  • data backup

  • horizontal scaling to support high concurrency

Configuration

In redis.conf on each replica:

replicaof <master-ip> <master-port>
masterauth <master-password>   # the master's auth password (the one used for redis-cli login)
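For example, a replica's config for the compose setup below might look like this (the hostname, password, and file path are placeholders, not from the original):

```conf
# /storage/redis6380/conf/redis6380.conf - replica of redis6379
port 6380
replicaof redis6379 6379      # within the compose network the service name resolves
masterauth mypassword         # placeholder: must match the master's requirepass
dir /data                     # matches the data volume mounted in docker-compose
```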

Single-host cluster

Docker Compose

Use docker-compose to deploy the services quickly:

version: '3'
services:
  redis6379:
    image: redis
    container_name: redis6379
    ports:
      - 6379:6379
    restart: always
    volumes:
      - /storage/redis6379/conf/redis6379.conf:/usr/local/etc/redis/redis.conf
      - /storage/redis6379/data:/data
    command: ["redis-server", "/usr/local/etc/redis/redis.conf"]
  redis6380:
    image: redis
    container_name: redis6380
    ports:
      - 6380:6380
    restart: always
    volumes:
      - /storage/redis6380/conf/redis6380.conf:/usr/local/etc/redis/redis.conf
      - /storage/redis6380/data:/data
    command: ["redis-server", "/usr/local/etc/redis/redis.conf"]
    depends_on:
      - redis6379
  redis6381:
    image: redis
    container_name: redis6381
    ports:
      - 6381:6381
    restart: always
    volumes:
      - /storage/redis6381/conf/redis6381.conf:/usr/local/etc/redis/redis.conf
      - /storage/redis6381/data:/data
    command: ["redis-server", "/usr/local/etc/redis/redis.conf"]
    depends_on:
      - redis6379

Topologies

One master, multiple replicas:

master
├── slave1
├── slave2
└── ...

Chained replication (a replica can itself have replicas); an intermediate node still cannot accept writes:

master
└── slave1
    └── slave2
        └── ...

slave commands

# run on a replica node to attach to a master dynamically
slaveof <master-ip> <master-port>
# this binding is transient: the relationship is lost after a restart
# (make it permanent with replicaof in redis.conf)

# detach the node so it becomes an independent master again
slaveof no one

Basics

  • Replica nodes cannot execute writes; a write attempt fails with (error) READONLY You can't write against a read only replica.

  • On startup a replica performs a full copy of the master's data, then keeps following with ongoing replication.

  • If the master shuts down, replicas stand by: reads keep working normally while they wait for the master to come back.

  • After the master restarts, the master/replica relationship is still in place.

  • Intermediate nodes in a replication chain still cannot accept writes.

Details

  • To connect to a replica, specify its port: docker exec -it redis6380 redis-cli -p 6380

  • Once inside redis-cli, run info replication to inspect the replication state.
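A typical info replication reply on the master looks roughly like this (the values are illustrative; the replication ID is copied from the log later in this article):

```
# Replication
role:master
connected_slaves:2
slave0:ip=172.18.0.3,port=6380,state=online,offset=196,lag=0
slave1:ip=172.18.0.4,port=6381,state=online,offset=196,lag=0
master_replid:8c3b0bcfd52dafbd69d639c42512d41acaac4379
master_repl_offset:196
```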

Replication internals

  • Replica starts up, requests a sync

    After startup the replica keeps retrying its sync request to the master.

    On a replica's first, fresh connection to a master, a one-time complete synchronization (full copy) runs automatically; any data the replica previously held is overwritten and cleared by the master's data.

  • First connection: full copy

    On receiving the sync request, the master starts saving a snapshot in the background (RDB; replication triggers RDB persistence) while caching every incoming write command. Once the RDB save finishes, the master sends the RDB snapshot file and all cached commands to the replica, completing the full copy.

# ${url} stands for the server's IP
1:M 02 Dec 2023 21:00:30.482 * Ready to accept connections tcp
# accept the replica's sync request
1:M 02 Dec 2023 21:00:30.594 * Replica ${url} asks for synchronization
1:M 02 Dec 2023 21:00:30.594 * Full resync requested by replica ${url}
1:M 02 Dec 2023 21:00:30.594 * Replication backlog created, my new replication IDs are '8c3b0bcfd52dafbd69d639c42512d41acaac4379' and '0000000000000000000000000000000000000000'
1:M 02 Dec 2023 21:00:30.594 * Delay next BGSAVE for diskless SYNC
1:M 02 Dec 2023 21:00:30.892 * Replica ${url} asks for synchronization
1:M 02 Dec 2023 21:00:30.892 * Full resync requested by replica ${url}
1:M 02 Dec 2023 21:00:30.892 * Delay next BGSAVE for diskless SYNC
# run bgsave
1:M 02 Dec 2023 21:00:35.395 * Starting BGSAVE for SYNC with target: replicas sockets
1:M 02 Dec 2023 21:00:35.395 * Background RDB transfer started by pid 20
20:C 02 Dec 2023 21:00:35.396 * Fork CoW for RDB: current 0 MB, peak 0 MB, average 0 MB
1:M 02 Dec 2023 21:00:35.396 * Diskless rdb transfer, done reading from pipe, 2 replicas still up.
1:M 02 Dec 2023 21:00:35.401 * Background RDB transfer terminated with success
1:M 02 Dec 2023 21:00:35.401 * Streamed RDB transfer with replica ${url}:6380 succeeded (socket). Waiting for REPLCONF ACK from replica to enable streaming
1:M 02 Dec 2023 21:00:35.401 * Synchronization with replica ${url}:6380 succeeded
1:M 02 Dec 2023 21:00:35.401 * Streamed RDB transfer with replica ${url}:6381 succeeded (socket). Waiting for REPLCONF ACK from replica to enable streaming
1:M 02 Dec 2023 21:00:35.401 * Synchronization with replica ${url}:6381 succeeded
# RDB transfer succeeded
1:M 04 Dec 2023 13:07:43.081 * 1 changes in 3600 seconds. Saving...
1:M 04 Dec 2023 13:07:43.081 * Background saving started by pid 46
46:C 04 Dec 2023 13:07:43.090 * DB saved on disk
  • Heartbeats keep the link alive

    repl-ping-replica-period 10: the default heartbeat interval is 10 s.

  • Steady state: incremental replication

    The master keeps passing each newly collected write command on to its replicas, keeping them in sync.

  • Replica goes offline, replication resumes on reconnect

    Master and replica each keep a replication offset together with the master's ID. When a replica reconnects, the master checks that offset against its backlog and replicates only the data after the offset; if the offset has fallen out of the backlog, it falls back to a full copy.
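The offset/backlog decision above can be sketched as follows (a minimal illustration, not Redis source; the ReplicationBacklog class and the tiny sizes are made up):

```python
# Toy model of partial resync: the master keeps a fixed-size backlog of
# its recent command stream; a reconnecting replica offers its last
# offset, and the master either replays the missing suffix (partial
# resync) or falls back to a full copy.

class ReplicationBacklog:
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.buf = bytearray()
        self.master_offset = 0  # offset just past the last byte written

    def feed(self, data: bytes) -> None:
        self.buf.extend(data)
        self.master_offset += len(data)
        if len(self.buf) > self.capacity:           # keep only the newest bytes
            del self.buf[: len(self.buf) - self.capacity]

    def psync(self, replica_offset: int):
        start = self.master_offset - len(self.buf)  # oldest offset still held
        if start <= replica_offset <= self.master_offset:
            return ("CONTINUE", bytes(self.buf[replica_offset - start:]))
        return ("FULLRESYNC", None)                 # too far behind: full copy

backlog = ReplicationBacklog(capacity=8)
backlog.feed(b"SET a 1;")              # replica was online for this
replica_offset = backlog.master_offset
backlog.feed(b"SET b 2;")              # replica missed this while offline
kind, delta = backlog.psync(replica_offset)
print(kind, delta)                     # only the missed suffix is replayed
```

A replica that has been gone too long (offset older than the backlog) gets FULLRESYNC instead, which is exactly the full-copy path described earlier.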

Expired keys

The problem: key expiry depends on an instance's ability to count time, yet replicas must stay consistent with the master. As the Redis docs put it, Redis expires allow keys to have a limited time to live (TTL); nevertheless, Redis replicas correctly replicate keys with expires, even when such keys are altered using Lua scripts.

This is achieved as follows:

  1. Replicas don't expire keys; instead they wait for the master to expire them. When a master expires a key (or evicts it because of LRU), it synthesizes a DEL command which is transmitted to all the replicas.
  2. However, because of master-driven expiry, replicas may still hold keys in memory that are already logically expired, since the master was not able to provide the DEL command in time. To deal with that, the replica uses its logical clock to report that a key does not exist, but only for read operations that don't violate the consistency of the data set (new commands from the master will arrive anyway). In this way replicas avoid reporting logically expired keys that still exist in memory. In practical terms, an HTML-fragment cache that uses replicas to scale will avoid returning items older than the desired time to live.
  3. During Lua script execution no key expiries are performed. While a Lua script runs, conceptually the time on the master is frozen, so a given key either exists or not for the whole run of the script. This prevents keys from expiring mid-script and is needed so the same script can be sent to the replica with guaranteed identical effects on the data set.
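The first two rules can be sketched as a toy simulation (illustrative Python, not Redis internals; the class names and the tiny TTLs are made up):

```python
import time

class Master:
    """Toy master: stores (value, expire_at) and ships commands to replicas."""
    def __init__(self):
        self.data = {}
        self.repl_stream = []

    def set(self, key, value, ttl=None):
        expire_at = time.monotonic() + ttl if ttl is not None else None
        self.data[key] = (value, expire_at)
        self.repl_stream.append(("SET", key, value, expire_at))

    def expire_cycle(self):
        # on expiry (or eviction) the master synthesizes a DEL for replicas
        now = time.monotonic()
        for key, (_, exp) in list(self.data.items()):
            if exp is not None and exp <= now:
                del self.data[key]
                self.repl_stream.append(("DEL", key))

class Replica:
    """Toy replica: never expires keys itself, only applies the master's stream."""
    def __init__(self):
        self.data = {}

    def apply(self, cmd):
        if cmd[0] == "SET":
            _, key, value, expire_at = cmd
            self.data[key] = (value, expire_at)
        elif cmd[0] == "DEL":
            self.data.pop(cmd[1], None)

    def get(self, key):
        item = self.data.get(key)
        if item is None:
            return None
        value, exp = item
        if exp is not None and exp <= time.monotonic():
            return None          # logically expired: hidden from reads
        return value

m, r = Master(), Replica()
m.set("page", "<html>", ttl=0.01)
r.apply(m.repl_stream[0])

time.sleep(0.05)                       # the key is now logically expired
hidden_read = r.get("page")            # None: reads hide the stale key
still_in_memory = "page" in r.data     # True: waiting for the master's DEL

m.expire_cycle()                       # master notices, synthesizes DEL
for cmd in m.repl_stream[1:]:
    r.apply(cmd)
removed = "page" not in r.data         # True: deleted only on the master's DEL
```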

Remaining problems

Under a master/replica structure, high concurrency still funnels all write pressure through the single master, and every additional replica costs performance; worse, if the master node fails, the replicas just sit waiting. This structure alone doesn't meet the requirements, which is what motivates the sentinel mechanism.
