Redis master-slave replication
1. What it is
2. What it can do
3. How to use it
3.1 Configure the slave (replica), not the master
3.2 Slave configuration: slaveof <master IP> <master port>
A slaveof issued at runtime is not remembered: after the slave is restarted it has to be issued again, unless the setting is written into redis.conf.
info replication
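As a minimal sketch (using the IP and ports this demo uses later), a slave can be attached at runtime from redis-cli, or permanently by putting the same directive into its redis.conf:

    # attach at runtime; forgotten once the slave process is restarted
    127.0.0.1:6380> SLAVEOF 127.0.0.1 6379
    OK
    # or make it permanent in the slave's redis.conf
    slaveof 127.0.0.1 6379
    # either way, info replication on 6380 should now report role:slave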
3.3 Details of editing the configuration files
Copy redis.conf into one file per instance
Enable daemonize yes
Set the pid file name
Specify the port
Set the log file name
Set the dump.rdb file name
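For instance, a slave's copy might differ from the base redis.conf only in lines like these (a sketch; the file and value names below are just this demo's assumptions, not fixed requirements):

    # redis6380.conf, copied from redis.conf
    daemonize yes
    pidfile /var/run/redis6380.pid
    port 6380
    logfile "6380.log"
    dbfilename dump6380.rdb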
root@kmubt:/softfile/myredis# ps -ef|grep redis
root 1246 1 0 08:41 ? 00:00:00 /usr/local/redis/redis-server *:6379
root 1249 1080 0 08:41 pts/0 00:00:00 /usr/local/redis/redis-cli -p 6379
root 1257 1 0 08:41 ? 00:00:00 /usr/local/redis/redis-server *:6380
root 1272 1136 0 08:41 pts/1 00:00:00 /usr/local/redis/redis-cli -p 6380
root 1286 1 0 08:44 ? 00:00:00 /usr/local/redis/redis-server *:6381
root 1295 1156 0 08:44 pts/2 00:00:00 /usr/local/redis/redis-cli -p 6381
3.4 Three common setups
3.4.1 One master, two slaves
Init
One master with two slaves
Checking the logs
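A sketch of the Init step, assuming the three configuration files prepared in 3.3 (the file names are illustrative): start the three instances, then point 6380 and 6381 at 6379.

    redis-server /softfile/myredis/redis6379.conf
    redis-server /softfile/myredis/redis6380.conf
    redis-server /softfile/myredis/redis6381.conf
    redis-cli -p 6380 SLAVEOF 127.0.0.1 6379
    redis-cli -p 6381 SLAVEOF 127.0.0.1 6379
    redis-cli -p 6379 info replication   # connected_slaves should now show 2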
-rw-r--r-- 1 root root 7097 2月 2 09:02 6379.log
-rw-r--r-- 1 root root 82550 2月 2 09:01 6380.log
-rw-r--r-- 1 root root 119169 2月 2 09:02 6381.log
Contents of 6379.log
1335:M 02 Feb 08:59:48.268 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
1335:M 02 Feb 08:59:48.268 # Server started, Redis version 3.0.4
1335:M 02 Feb 08:59:48.268 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
1335:M 02 Feb 08:59:48.268 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
1335:M 02 Feb 08:59:48.268 * DB loaded from disk: 0.000 seconds
1335:M 02 Feb 08:59:48.268 * The server is now ready to accept connections on port 6379
1335:M 02 Feb 09:01:58.798 * Slave 127.0.0.1:6380 asks for synchronization
1335:M 02 Feb 09:01:58.798 * Full resync requested by slave 127.0.0.1:6380
1335:M 02 Feb 09:01:58.798 * Starting BGSAVE for SYNC with target: disk
1335:M 02 Feb 09:01:58.799 * Background saving started by pid 1351
1351:C 02 Feb 09:01:58.825 * DB saved on disk
1351:C 02 Feb 09:01:58.826 * RDB: 6 MB of memory used by copy-on-write
1335:M 02 Feb 09:01:58.898 * Background saving terminated with success
1335:M 02 Feb 09:01:58.899 * Synchronization with slave 127.0.0.1:6380 succeeded
1335:M 02 Feb 09:02:08.897 * Slave 127.0.0.1:6381 asks for synchronization
1335:M 02 Feb 09:02:08.897 * Full resync requested by slave 127.0.0.1:6381
1335:M 02 Feb 09:02:08.897 * Starting BGSAVE for SYNC with target: disk
1335:M 02 Feb 09:02:08.898 * Background saving started by pid 1352
1352:C 02 Feb 09:02:08.945 * DB saved on disk
1352:C 02 Feb 09:02:08.945 * RDB: 8 MB of memory used by copy-on-write
1335:M 02 Feb 09:02:08.995 * Background saving terminated with success
1335:M 02 Feb 09:02:08.997 * Synchronization with slave 127.0.0.1:6381 succeeded
Demo of common master/slave questions
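One of the classic questions here is whether a slave can accept writes. By default slave-read-only is yes, so a write attempted on a slave is rejected; roughly, the session looks like this (a sketch, not a captured transcript):

    127.0.0.1:6381> set k1 v1
    (error) READONLY You can't write against a read only slave.
    127.0.0.1:6381> get k1   # reads still work and return whatever the master has written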
3.4.2 Chained replication (passing the torch)
- The previous slave can act as the master of the next slave; a slave can likewise accept connections and sync requests from other slaves, so it becomes the master of the next node in the chain, which effectively relieves pressure on the real master
- Changing direction mid-way: the node clears its previous data and rebuilds a copy of the latest data set
slaveof <new master IP> <new master port>
- Demo (see the sketch below)
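For this demo's three instances, the chain could be built like this (a sketch): 6380 follows 6379, and 6381 follows 6380 instead of 6379.

    redis-cli -p 6380 SLAVEOF 127.0.0.1 6379
    redis-cli -p 6381 SLAVEOF 127.0.0.1 6380
    redis-cli -p 6380 info replication   # role:slave, yet it also has connected_slaves:1 (6381)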
3.4.3 Promoting a slave to master
SLAVEOF no one
Makes the current instance stop replicating from any other instance and turns it into a master
Demo
6380 server
127.0.0.1:6380> INFO replication
# Replication
role:slave
master_host:127.0.0.1
master_port:6379
master_link_status:down
master_last_io_seconds_ago:-1
master_sync_in_progress:0
slave_repl_offset:360
master_link_down_since_seconds:11
slave_priority:100
slave_read_only:1
connected_slaves:1
slave0:ip=127.0.0.1,port=6381,state=online,offset=346,lag=1
master_repl_offset:346
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:345
127.0.0.1:6380> SLAVEOF no one
OK
127.0.0.1:6380> INFO replication
# Replication
role:master
connected_slaves:1
slave0:ip=127.0.0.1,port=6381,state=online,offset=360,lag=0
master_repl_offset:360
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:359
127.0.0.1:6380> set name fasdfa
OK
127.0.0.1:6380> get name
"fasdfa"
6381 server
127.0.0.1:6381> get name
"fasdfa"
127.0.0.1:6381>
4. How replication works
1335:M 02 Feb 09:01:58.798 * Slave 127.0.0.1:6380 asks for synchronization
1335:M 02 Feb 09:01:58.798 * Full resync requested by slave 127.0.0.1:6380
1335:M 02 Feb 09:01:58.798 * Starting BGSAVE for SYNC with target: disk
1335:M 02 Feb 09:01:58.799 * Background saving started by pid 1351
1351:C 02 Feb 09:01:58.825 * DB saved on disk
1351:C 02 Feb 09:01:58.826 * RDB: 6 MB of memory used by copy-on-write
1335:M 02 Feb 09:01:58.898 * Background saving terminated with success
1335:M 02 Feb 09:01:58.899 * Synchronization with slave 127.0.0.1:6380 succeeded
- When a slave starts up and successfully connects to the master, it sends a sync command
- On receiving the command, the master starts a background save process and at the same time buffers every write command it receives; once the background save finishes, the master transfers the whole data file to the slave to complete one full synchronization
- Full copy: after receiving the database file, the slave saves it to disk and loads it into memory
- Incremental copy: the master then passes every newly collected write command on to the slave in order, keeping it in sync
- However, whenever a slave reconnects to the master, one full synchronization (full copy) is performed automatically
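To watch this happen, one can break and re-establish the link on a slave while tailing the master's log; a full-resync sequence like the one quoted above should show up again (a sketch, assuming the log lives where this demo keeps it):

    tail -f /softfile/myredis/6379.log &
    redis-cli -p 6380 SLAVEOF NO ONE
    redis-cli -p 6380 SLAVEOF 127.0.0.1 6379   # the log should show "Full resync requested by slave ..."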
5. Sentinel mode (sentinel)
5.1 What it is
The automated version of "promoting a slave": Sentinel monitors in the background whether the master has failed and, if it has, automatically promotes a slave to master according to the vote
5.2 How to use it
Adjust the topology: 6379 is the master, 6380 and 6381 are slaves
Create a sentinel.conf file under the custom /software/myredis directory
Configure the sentinel by filling in:
sentinel monitor <name you give the monitored master> 127.0.0.1 6379 1
The trailing 1 is the quorum: the number of Sentinels that must agree the master is unreachable before it is marked objectively down and a failover is started
touch sentinel.conf
vim sentinel.conf
# add the following content:
sentinel monitor redis6379 127.0.0.1 6379 1
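Besides the monitor line, sentinel.conf can also tune failure detection and failover timing; a sketch with the commonly used directives (the values shown are just the usual defaults, not something this demo changes):

    sentinel monitor redis6379 127.0.0.1 6379 1
    sentinel down-after-milliseconds redis6379 30000
    sentinel parallel-syncs redis6379 1
    sentinel failover-timeout redis6379 180000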
Start the sentinel
redis-sentinel /software/myredis/sentinel.conf
Adjust the path above to match your own setup
After a successful start:
root@kmubt:/usr/local/redis# ./redis-sentinel /softfile/myredis/sentinel.conf
1382:X 02 Feb 13:58:40.740 * Increased maximum number of open files to 10032 (it was originally set to 1024).
(Redis 3.0.4 startup banner: running in sentinel mode, port 26379, PID 1382, http://redis.io)
1382:X 02 Feb 13:58:40.759 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
1382:X 02 Feb 13:58:40.759 # Sentinel runid is 8cac51a9b955a33f8d0408de3dfb32d30f29a523
1382:X 02 Feb 13:58:40.760 # +monitor master redis6379 127.0.0.1 6379 quorum 1
1382:X 02 Feb 13:58:41.744 * +slave slave 127.0.0.1:6380 127.0.0.1 6380 @ redis6379 127.0.0.1 6379
1382:X 02 Feb 13:58:41.749 * +slave slave 127.0.0.1:6381 127.0.0.1 6381 @ redis6379 127.0.0.1 6379
Normal master/slave demo
The original master goes down
127.0.0.1:6379> SHUTDOWN
not connected> exit
Vote and election
1382:X 02 Feb 14:26:02.378 # +sdown master redis6379 127.0.0.1 6379
1382:X 02 Feb 14:26:02.379 # +odown master redis6379 127.0.0.1 6379 #quorum 1/1
1382:X 02 Feb 14:26:02.380 # +new-epoch 1
1382:X 02 Feb 14:26:02.383 # +try-failover master redis6379 127.0.0.1 6379
1382:X 02 Feb 14:26:02.459 # +vote-for-leader 8cac51a9b955a33f8d0408de3dfb32d30f29a523 1
1382:X 02 Feb 14:26:02.461 # +elected-leader master redis6379 127.0.0.1 6379
1382:X 02 Feb 14:26:02.464 # +failover-state-select-slave master redis6379 127.0.0.1 6379
1382:X 02 Feb 14:26:02.538 # +selected-slave slave 127.0.0.1:6381 127.0.0.1 6381 @ redis6379 127.0.0.1 6379
1382:X 02 Feb 14:26:02.538 * +failover-state-send-slaveof-noone slave 127.0.0.1:6381 127.0.0.1 6381 @ redis6379 127.0.0.1 6379
1382:X 02 Feb 14:26:02.621 * +failover-state-wait-promotion slave 127.0.0.1:6381 127.0.0.1 6381 @ redis6379 127.0.0.1 6379
1382:X 02 Feb 14:26:03.553 # +promoted-slave slave 127.0.0.1:6381 127.0.0.1 6381 @ redis6379 127.0.0.1 6379
1382:X 02 Feb 14:26:03.554 # +failover-state-reconf-slaves master redis6379 127.0.0.1 6379
1382:X 02 Feb 14:26:03.610 * +slave-reconf-sent slave 127.0.0.1:6380 127.0.0.1 6380 @ redis6379 127.0.0.1 6379
1382:X 02 Feb 14:26:04.632 * +slave-reconf-inprog slave 127.0.0.1:6380 127.0.0.1 6380 @ redis6379 127.0.0.1 6379
1382:X 02 Feb 14:26:04.633 * +slave-reconf-done slave 127.0.0.1:6380 127.0.0.1 6380 @ redis6379 127.0.0.1 6379
1382:X 02 Feb 14:26:04.732 # +failover-end master redis6379 127.0.0.1 6379
1382:X 02 Feb 14:26:04.734 # +switch-master redis6379 127.0.0.1 6379 127.0.0.1 6381
1382:X 02 Feb 14:26:04.739 * +slave slave 127.0.0.1:6380 127.0.0.1 6380 @ redis6379 127.0.0.1 6381
1382:X 02 Feb 14:26:04.741 * +slave slave 127.0.0.1:6379 127.0.0.1 6379 @ redis6379 127.0.0.1 6381
1382:X 02 Feb 14:26:34.748 # +sdown slave 127.0.0.1:6379 127.0.0.1 6379 @ redis6379 127.0.0.1 6381
The new master/slave setup carries on working; check it with info replication
Question: if the old master is restarted and comes back, will there be a conflict between two masters?
No conflict: after it restarts, it is automatically demoted to a slave and follows the current master
6379 server
root@kmubt:/usr/local/redis# ./redis-server /softfile/myredis/redis6379.conf
root@kmubt:/usr/local/redis# ./redis-cli -p 6379
6381 server (the current master)
127.0.0.1:6381> info replication
# Replication
role:master
connected_slaves:2
slave0:ip=127.0.0.1,port=6380,state=online,offset=14411,lag=1
slave1:ip=127.0.0.1,port=6379,state=online,offset=14411,lag=1
master_repl_offset:14411
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:14410
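To double-check from the 6379 side as well, its own info replication should now report the slave role (a sketch; exact offsets and grep filtering are illustrative):

    redis-cli -p 6379 info replication | grep -E 'role|master_host|master_port'
    # expected lines include: role:slave, master_host:127.0.0.1, master_port:6381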
5.3 One group of sentinels can monitor multiple masters at the same time
6. Drawbacks of replication
Replication lag
Because all writes are performed on the master first and only then propagated to the slaves, there is always some delay between the master and the slaves. The delay gets worse when the system is busy, and adding more slave machines makes the problem worse as well.
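A rough way to put a number on the lag is to compare the replication offsets already shown in the info replication output above: the master's master_repl_offset against a slave's slave_repl_offset (a sketch):

    redis-cli -p 6379 info replication | grep master_repl_offset
    redis-cli -p 6380 info replication | grep slave_repl_offset
    # the gap between the two offsets is how many bytes of writes the slave is behind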