1. Install the Percona repository
yum install https://downloads.percona.com/downloads/percona-release/percona-release-1.0-9/redhat/percona-release-1.0-9.noarch.rpm
2. Install XtraBackup 8
yum -y install percona-xtrabackup-80
yum -y install qpress
3. Usage
Back up
rm -rf /myweb/mysql_backup/*
xtrabackup --defaults-file=/etc/my.cnf --backup --user=root --password=123456 --target-dir=/myweb/mysql_backup/ --socket=/tmp/mysql.sock
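Before tarring the backup up, it is worth checking that it actually completed. A small sketch of my own (`backup_ok` is not part of xtrabackup): xtrabackup writes an `xtrabackup_checkpoints` file into the target directory, and a completed full backup records `backup_type = full-backuped` there.

```shell
# Hedged helper sketch: succeed only if the given backup directory
# contains a checkpoints file recording a completed full backup.
backup_ok() {
  grep -q '^backup_type = full-backuped' "$1/xtrabackup_checkpoints" 2>/dev/null
}

# usage: point it at the --target-dir used above
backup_ok /myweb/mysql_backup && echo "backup looks complete" || echo "backup incomplete or missing"
```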
Compress
tar czvf mysql_backup.tar.gz mysql_backup
Transfer
scp mysql_backup.tar.gz root@192.168.0.52:/myweb/
scp mysql_backup.tar.gz root@192.168.0.53:/myweb/
On the target machine, stop MySQL and extract (the tarball was copied to /myweb)
service mysql stop
rm -rf /myweb/mysql/data/*
rm -rf /myweb/mysql_backup/*
cd /myweb && tar zxvf mysql_backup.tar.gz
Restore
xtrabackup --prepare --target-dir=/myweb/mysql_backup/
xtrabackup --copy-back --target-dir=/myweb/mysql_backup/
chown -R mysql:mysql /myweb/mysql/data/
service mysql start
1. Check the GTID set on the primary node: show master status;
'binlog.000004', '330379', '', '', '297e4c56-87c7-11eb-b94b-e2a3f0bcfe56:1-124,
aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:1-1158:1000006'
2. Then check the replica's status: show master status;
'binlog.000002', '2512', '', '', '297e4c56-87c7-11eb-b94b-e2a3f0bcfe56:1-2,
aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:1-25:1000006'
Sure enough, they are different. Of course they should be; otherwise the cluster would not be in a failed state.
3. Start the repair
First, stop group replication on the failed replica:
stop group_replication;
Clear its current gtid_executed set:
reset master;
Set gtid_purged to the same value as the primary's executed set:
set global gtid_purged='297e4c56-87c7-11eb-b94b-e2a3f0bcfe56:1-124,aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:1-1158:1000006';
start group_replication;
The node is now back online.
Check the repaired replica's status again: it is back online.
Check the corresponding log: OK, it is in sync with the primary (if you are familiar with how GTIDs work, you know why).
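The GTID reasoning above can be sketched outside MySQL: after `reset master` plus `set global gtid_purged`, the replica advertises exactly the same executed GTID set as the primary, so the group sees no missing or extra transactions and lets it rejoin. A minimal illustration of my own, using the GTID sets from this article (the order of entries in a GTID set does not matter, so the sets are normalized before comparing):

```shell
# Normalize a GTID set so two sets can be compared regardless of the
# order of their comma-separated uuid:interval entries.
normalize() {
  echo "$1" | tr -d '[:space:]' | tr ',' '\n' | sort | paste -sd, -
}

primary_gtids='297e4c56-87c7-11eb-b94b-e2a3f0bcfe56:1-124,aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:1-1158:1000006'
# what the replica reports after reset master + set global gtid_purged
replica_gtids='aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa:1-1158:1000006,297e4c56-87c7-11eb-b94b-e2a3f0bcfe56:1-124'

if [ "$(normalize "$primary_gtids")" = "$(normalize "$replica_gtids")" ]; then
  echo "GTID sets match: the replica can rejoin the group"
else
  echo "GTID sets differ: the replica would still fail to join"
fi
```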
Repair the second node the same way.
Repair complete; the cluster is back in operation.
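To confirm the whole group is healthy after both nodes are repaired, the standard MySQL performance_schema table for group replication can be queried on any member; every member should report ONLINE:

```sql
SELECT MEMBER_HOST, MEMBER_STATE
FROM performance_schema.replication_group_members;
```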