mysqlfailover is a tool written in Python by the MySQL team and shipped as part of the MySQL Utilities toolkit. Its main purpose is to provide high availability for MySQL: it polls node status at a fixed interval, and when the master becomes unavailable it automatically fails over to a slave, after which the remaining slaves are repointed at the newly promoted node. How data consistency is guaranteed is explained in the analysis below.
Prerequisites for using mysqlfailover:
1. GTID mode must be enabled. With GTID-based replication the lag is already minimal; under a load-testing tool it was roughly 3 seconds, depending on how many SQL (applier) threads are configured. At around 10,000 inserts per second, 16 threads is a reasonable setting.
2. The configuration file must contain:
report-host=
report-port=
master-info-repository=TABLE
relay-log-info-repository=TABLE
These settings allow the slave nodes to be discovered.
3. Privileges:
The monitoring user must have its privileges granted WITH GRANT OPTION.
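For reference, a minimal sketch of creating such a user; the '%' host scope and the blanket ALL privilege are assumptions for a lab setup, so tighten them for production (the user name and password match the ones used in the commands below):
mysql -uroot -p -e "CREATE USER 'failover'@'%' IDENTIFIED BY '123456'; GRANT ALL PRIVILEGES ON *.* TO 'failover'@'%' WITH GRANT OPTION;"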
Installation is also very simple.
Download the MySQL Utilities toolkit from https://downloads.mysql.com/archives/utilities/
unzip mysql-utilities-1.6.5.zip
cd mysql-utilities-1.6.5
python ./setup.py build
python ./setup.py install
At this point the installation is complete.
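Note that MySQL Utilities depends on MySQL Connector/Python, so install that first if the tools fail to import it. A quick sanity check that the tools are on the PATH:
mysqlfailover --version
mysqldiff --version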
Usage:
mysqlfailover --master=failover:123456@'192.168.0.106':3306 --discover-slaves-login=failover:123456 --daemon=start --log=/data/failover.log
Master-slave replication must already be set up (the details are omitted here).
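For completeness, a minimal sketch of pointing one of the slaves at the master with GTID auto-positioning; the master host/port are the ones used in the command above, and the 'backup' replication user matches the one that appears in the failover log further down:
mysql -uroot -p -e "CHANGE MASTER TO MASTER_HOST='192.168.0.106', MASTER_PORT=3306, MASTER_USER='backup', MASTER_PASSWORD='123456', MASTER_AUTO_POSITION=1; START SLAVE;"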
Verifying that transactions are transferred completely:
Here the sysbench tool is used to perform bulk inserts.
sysbench --test=oltp --mysql-db=test --mysql-user=root --mysql-password=123456 --oltp-table-size=1000000000 --oltp-num-tables=15 prepare
sysbench 0.4.12.10: multi-threaded system evaluation benchmark
No DB drivers specified, using mysql
Creating table 'sbtest1'...
Creating table 'sbtest5'...
Creating table 'sbtest4'...
Creating table 'sbtest8'...
Creating table 'sbtest9'...
Creating table 'sbtest6'...
Creating table 'sbtest2'...
Creating table 'sbtest'...
Creating table 'sbtest3'...
Creating table 'sbtest14'...
Creating table 'sbtest10'...
Creating table 'sbtest12'...
Creating table 'sbtest11'...
Creating table 'sbtest7'...
Creating table 'sbtest13'...
Creating 1000000000 records in table 'sbtest11'...
Creating 1000000000 records in table 'sbtest6'...
Creating 1000000000 records in table 'sbtest4'...
Creating 1000000000 records in table 'sbtest5'...
Creating 1000000000 records in table 'sbtest8'...
Creating 1000000000 records in table 'sbtest14'...
Creating 1000000000 records in table 'sbtest3'...
Creating 1000000000 records in table 'sbtest13'...
Creating 1000000000 records in table 'sbtest9'...
Creating 1000000000 records in table 'sbtest10'...
Creating 1000000000 records in table 'sbtest1'...
Creating 1000000000 records in table 'sbtest12'...
Creating 1000000000 records in table 'sbtest'...
Creating 1000000000 records in table 'sbtest7'...
Creating 1000000000 records in table 'sbtest2'...
After waiting a few minutes, kill the MySQL processes on the master:
kill -9 17448
kill -9 18350
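(The two PIDs above are presumably the master's mysqld_safe and mysqld processes; killing both prevents an automatic restart. Something like the following, run on the master, lists them:)
ps -ef | grep -v grep | grep mysqld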
After that, the tool performs the failover automatically; from its output you can see that the master role has been transferred to a slave:
Q-quit R-refresh H-health G-GTID Lists U-UUIDs
Failed to reconnect to the master after 3 attemps.
Failover starting in 'auto' mode...
# Checking eligibility of slave 192.168.0.109:3306 for candidate.
# GTID_MODE=ON ... Ok
# Replication user exists ... Ok
# Candidate slave 192.168.0.109:3306 will become the new master.
# Checking slaves status (before failover).
# Preparing candidate for failover.
WARNING: IP lookup by name failed for 44,reason: Unknown host
WARNING: IP lookup by address failed for 192.168.0.109,reason: Unknown host
WARNING: IP lookup by address failed for 192.168.0.112,reason: Unknown host
# Missing transactions found on 192.168.0.112:3306. SELECT gtid_subset() = 0
# LOCK STRING: FLUSH TABLES WITH READ LOCK
# Read only is ON for 192.168.0.112:3306.
# Connecting candidate to 192.168.0.112:3306 as a temporary slave to retrieve unprocessed GTIDs.
# Change master command for 192.168.0.109:3306
# CHANGE MASTER TO MASTER_HOST = '192.168.0.112', MASTER_USER = 'backup', MASTER_PASSWORD = '123456', MASTER_PORT = 3306, MASTER_AUTO_POSITION=1
# Read only is OFF for 192.168.0.112:3306.
# UNLOCK STRING: UNLOCK TABLES
# Waiting for candidate to catch up to slave 192.168.0.112:3306.
# Slave 192.168.0.109:3306:
# QUERY = SELECT WAIT_UNTIL_SQL_THREAD_AFTER_GTIDS('c142ca67-b898-11e8-86e8-000c29367e64:1', 300)
# Return Code = 3
# Slave 192.168.0.109:3306:
# QUERY = SELECT WAIT_UNTIL_SQL_THREAD_AFTER_GTIDS('c777e02f-b898-11e8-86a0-000c29c6f346:1-4', 300)
# Return Code = 0
# Creating replication user if it does not exist.
# Stopping slaves.
# Performing STOP on all slaves.
WARNING: IP lookup by name failed for 44,reason: Unknown host
WARNING: IP lookup by address failed for 192.168.0.109,reason: Unknown host
WARNING: IP lookup by address failed for 192.168.0.112,reason: Unknown host
# Executing stop on slave 192.168.0.109:3306 WARN - slave is not configured with this master
# Executing stop on slave 192.168.0.109:3306 Ok
WARNING: IP lookup by address failed for 192.168.0.106,reason: Unknown host
# Executing stop on slave 192.168.0.112:3306 WARN - slave is not configured with this master
# Executing stop on slave 192.168.0.112:3306 Ok
WARNING: IP lookup by name failed for 44,reason: Unknown host
WARNING: IP lookup by address failed for 192.168.0.109,reason: Unknown host
# Switching slaves to new master.
# Change master command for 192.168.0.112:3306
# CHANGE MASTER TO MASTER_HOST = '192.168.0.109', MASTER_USER = 'backup', MASTER_PASSWORD = '123456', MASTER_PORT = 3306, MASTER_AUTO_POSITION=1
# Disconnecting new master as slave.
# Execute on 192.168.0.109:3306: RESET SLAVE ALL
# Starting slaves.
# Performing START on all slaves.
# Executing start on slave 192.168.0.112:3306 Ok
# Checking slaves for errors.
# 192.168.0.112:3306 status: Ok
# Failover complete.
# Discovering slaves for master at 192.168.0.109:3306
Failover console will restart in 5 seconds.
# Attempting to contact 192.168.0.109 ... Success
# Attempting to contact 192.168.0.112 ... Success
MySQL Replication Failover Utility
Failover Mode = auto Next Interval = Sat Sep 15 14:15:30 2018
Master Information
------------------
Binary Log File Position Binlog_Do_DB Binlog_Ignore_DB
mysql-bin.000001 657
GTID Executed Set
b5c5054c-b898-11e8-8670-000c299e1daf:1 [...]
# Attempting to contact 192.168.0.109 ... Success
# Attempting to contact 192.168.0.112 ... Success
Replication Health Status
+----------------+-------+---------+--------+------------+---------+-------------+-------------------+-----------------+------------+-------------+--------------+------------------+---------------+-----------+----------------+------------+---------------+
| host | port | role | state | gtid_mode | health | version | master_log_file | master_log_pos | IO_Thread | SQL_Thread | Secs_Behind | Remaining_Delay | IO_Error_Num | IO_Error | SQL_Error_Num | SQL_Error | Trans_Behind |
+----------------+-------+---------+--------+------------+---------+-------------+-------------------+-----------------+------------+-------------+--------------+------------------+---------------+-----------+----------------+------------+---------------+
| 192.168.0.109 | 3306 | MASTER | UP | ON | OK | 5.7.22-log | mysql-bin.000001 | 657 | | | | | | | | | |
| 192.168.0.112 | 3306 | SLAVE | UP | ON | OK | 5.7.22-log | mysql-bin.000001 | 657 | Yes | Yes | 0 | No | 0 | | 0 | | 0 |
+----------------+-------+---------+--------+------------+---------+-------------+-------------------+-----------------+------------+-------------+--------------+------------------+---------------+-----------+----------------+------------+---------------+
Analysis:
When the program detects that the master has stopped:
1. It checks that the designated candidate server is healthy and that GTID mode is enabled.
2. It locks the tables to prevent in-flight commits from causing data inconsistency.
3. If read_only mode is enabled it turns it off automatically, and it first runs CHANGE MASTER TO to point the candidate at the other slave so that the two end up with consistent data.
4. It unlocks the tables; the candidate and the other slave now hold a consistent set of transactions.
5. It checks the candidate's transaction (GTID) position, then stops replication on all slaves with STOP SLAVE.
6. It switches to the new master, i.e. the candidate, pointing all slaves at it; the candidate's connection to the old master is dropped by executing RESET SLAVE ALL.
7. START SLAVE is run on the slaves to resume replication, now against the new master. The failover is complete.
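Stripped of the tool's bookkeeping, the promotion boils down to roughly the following statements (a simplified sketch reconstructed from the log above, not the tool's literal code path; the lock/unlock pair of steps 2 and 4 is held open inside the tool's own session, so it is omitted here):
# on the candidate 192.168.0.109 and on every remaining slave:
mysql -e "STOP SLAVE;"
# on the candidate only, detach it from the dead master:
mysql -e "RESET SLAVE ALL;"
# on each remaining slave (here 192.168.0.112), repoint replication at the new master and restart it:
mysql -e "CHANGE MASTER TO MASTER_HOST='192.168.0.109', MASTER_USER='backup', MASTER_PASSWORD='123456', MASTER_PORT=3306, MASTER_AUTO_POSITION=1; START SLAVE;"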
Now dump the binary log on the old master to see which transaction the last insert belonged to:
mysqlbinlog --base64-output=decode-rows -v mysql-bin.000005 > ~/bin.log
vim ~/bin.log
The last part of the output:
### INSERT INTO `test`.`sbtest8`
### SET
### @1=289999
### @2=0
### @3=''
### @4='qqqqqqqqqqwwwwwwwwwweeeeeeeeeerrrrrrrrrrtttttttttt'
### INSERT INTO `test`.`sbtest8`
### SET
### @1=290000
### @2=0
### @3=''
### @4='qqqqqqqqqqwwwwwwwwwweeeeeeeeeerrrrrrrrrrtttttttttt'
# at 265373582
#180901 15:41:10 server id 1 end_log_pos 265373613 CRC32 0xa53bca62 Xid = 7014
COMMIT/*!*/;
SET @@SESSION.GTID_NEXT= 'AUTOMATIC' /* added by mysqlbinlog */ /*!*/;
DELIMITER ;
# End of log file
/*!50003 SET COMPLETION_TYPE=@OLD_COMPLETION_TYPE*/;
/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=0*/;
You can see that the last insert on the master went into the sbtest8 table in the test database, and the value of the first column (the id column) is 290000.
Now switch to the slave and check the sbtest8 table to see whether this transaction has been replicated to it:
mysql> use test
Database changed
mysql> select * from sbtest8 where id = '290000';
+--------+---+---+----------------------------------------------------+
| id | k | c | pad |
+--------+---+---+----------------------------------------------------+
| 290000 | 0 | | qqqqqqqqqqwwwwwwwwwweeeeeeeeeerrrrrrrrrrtttttttttt |
+--------+---+---+----------------------------------------------------+
1 row in set (0.00 sec)
The row is there. Now check whether it really is the last one, and whether the slave rolled back any uncommitted transaction:
mysql> select * from sbtest8 where id = '290001';
Empty set (0.00 sec)
mysql> select * from sbtest8 order by id desc limit 1;
+--------+---+---+----------------------------------------------------+
| id | k | c | pad |
+--------+---+---+----------------------------------------------------+
| 290000 | 0 | | qqqqqqqqqqwwwwwwwwwweeeeeeeeeerrrrrrrrrrtttttttttt |
+--------+---+---+----------------------------------------------------+
1 row in set (0.00 sec)
mysql> \q
Bye
You can see that id=290000 is indeed the last transaction; any uncommitted transaction would have been rolled back, which shows that transactions committed on the master and replicated to the slave are not lost.
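A GTID-level cross-check is also possible, using the same GTID_SUBSET() function the tool itself relies on (visible in the log above); the hosts and credentials below simply reuse the ones from this test. If the second query returns 1, every transaction executed on the remaining slave is also present on the new master:
mysql -h192.168.0.112 -ufailover -p123456 -e "SELECT @@GLOBAL.gtid_executed;"
# paste the returned set into the first argument:
mysql -h192.168.0.109 -ufailover -p123456 -e "SELECT GTID_SUBSET('<gtid set returned by the slave>', @@GLOBAL.gtid_executed);"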
Finally, the mysqldiff tool can be used to check for inconsistencies between master and slave:
[root@node2 data]# mysqldiff --server1=failover:123456@192.168.0.109:3306 --server2=failover:123456@192.168.0.112:3306 --difftype=sql test:test
# WARNING: Using a password on the command line interface can be insecure.
# server1 on 192.168.0.109: ... connected.
# server2 on 192.168.0.112: ... connected.
# Comparing `test` to `test` [PASS]
# Comparing `test`.`sbtest` to `test`.`sbtest` [PASS]
# Comparing `test`.`sbtest1` to `test`.`sbtest1` [PASS]
# Comparing `test`.`sbtest10` to `test`.`sbtest10` [PASS]
# Comparing `test`.`sbtest11` to `test`.`sbtest11` [PASS]
# Comparing `test`.`sbtest12` to `test`.`sbtest12` [PASS]
# Comparing `test`.`sbtest13` to `test`.`sbtest13` [PASS]
# Comparing `test`.`sbtest14` to `test`.`sbtest14` [PASS]
# Comparing `test`.`sbtest2` to `test`.`sbtest2` [PASS]
# Comparing `test`.`sbtest3` to `test`.`sbtest3` [PASS]
# Comparing `test`.`sbtest4` to `test`.`sbtest4` [PASS]
# Comparing `test`.`sbtest5` to `test`.`sbtest5` [PASS]
# Comparing `test`.`sbtest6` to `test`.`sbtest6` [PASS]
# Comparing `test`.`sbtest7` to `test`.`sbtest7` [PASS]
# Comparing `test`.`sbtest8` to `test`.`sbtest8` [PASS]
# Comparing `test`.`sbtest9` to `test`.`sbtest9` [PASS]
# Success. All objects are the same.
This shows that even with replication lag, no transactions were lost.
Notes:
mysqlfailover is only suitable for replication topologies with a single, pure write point.
It is not suitable when a slave doubles as a test database, an audit server, or any other service; you must strictly guarantee that no writes ever happen on the slaves.
When using mysqlfailover, it is best to enable the read_only parameter on all slaves to guarantee data consistency.
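For example, something along these lines on each slave (MySQL 5.7, as used in this test; whether to also enable the stricter super_read_only is a judgment call, so verify it does not interfere with your failover flow), and add read_only=ON to my.cnf so it survives a restart:
mysql -e "SET GLOBAL read_only = ON;"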
In a multi-slave topology, if the master goes down and you later want to bring it back into the original topology, and furthermore make the old master the primary again (server1 is the old master, server2 is the master promoted by the failover):
1. Stop the mysqlfailover tool and start the old master instance (server1).
2. Make the old master a slave of the current master, so that transaction and binary log integrity can be verified:
mysqlreplicate --master=failover:123456@192.168.88.196:3307 --slave=failover:123456@192.168.88.194:3307 --rpl-user=backup:123456
3. Use the mysqlrpladmin tool to switch the old master back to being the master of the whole topology:
mysqlrpladmin --master=failover:123456@192.168.88.196:3307 --new-master=failover:123456@192.168.88.194:3307 --discover-slaves-login=failover:123456 --demote-master switchover
4. Start mysqlfailover again; this time it must be started with the --force option, for example as shown below.
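A sketch of what that restart might look like, simply reusing the options from the earlier invocation with the master now back at server1 (adjust host, port, and log path to your environment):
mysqlfailover --master=failover:123456@'192.168.88.194':3307 --discover-slaves-login=failover:123456 --daemon=start --log=/data/failover.log --force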
Please do not repost without permission.