MHA Introduction
Software overview
MHA (Master High Availability) is currently a relatively mature solution for MySQL high availability. It was developed by youshimaton of Japan's DeNA (now at Facebook) and is an excellent piece of software for failover and master promotion in MySQL high-availability environments. During a failover, MHA can complete the switch automatically within 10-30 seconds, and it preserves data consistency to the greatest extent possible, delivering high availability in the true sense.
- MHA also provides online master switching: it can safely move the running master to a new one (by promoting a slave), typically completing within 0.5-2 seconds.
- The software consists of two parts: MHA Manager (the management node) and MHA Node (the data node). MHA Manager can be deployed on a separate machine to manage several master-slave clusters, or it can run on one of the slaves. MHA Node runs on every MySQL server. The manager periodically probes the master in the cluster; when the master fails, it automatically promotes the slave holding the most recent data to be the new master and repoints all remaining slaves to it. The whole failover is completely transparent to the application.
- MHA can be combined with semi-synchronous replication. As long as at least one slave has received the latest binary log events, MHA can apply them to all other slaves, keeping the data on every node consistent.
- At present MHA mainly supports a one-master, multi-slave architecture. Building an MHA cluster requires at least three database servers: one master, one standby master and one slave. Because of this three-server minimum, Taobao adapted MHA for cost reasons; its TMHA variant now supports a one-master, one-slave setup.
Summary of MHA's advantages
- Automatic failover is fast.
- A master crash does not cause data consistency problems.
- No major changes to the existing MySQL environment are required.
- No extra servers are needed (a single manager can handle hundreds of replication clusters).
- Performance is excellent: it works with both semi-synchronous and asynchronous replication, and monitoring only sends a ping to the master every N seconds (3 by default), so it has no measurable performance impact. You can think of MHA's performance as identical to a plain master-slave replication setup.
- Any storage engine supported by replication works with MHA; it is not limited to InnoDB.
Lab environment
server1 172.25.9.1 (MySQL master)
server2 172.25.9.2 (MHA manager)
server3 172.25.9.3 (MySQL slave, candidate master)
server4 172.25.9.4 (MySQL slave, never promoted)
Setting up MHA
Restoring master-slave replication
- vim /etc/my.cnf
[root@server3 mysql]# cat /etc/my.cnf
[mysqld]
basedir=/usr/local/mysql
datadir=/data/mysql
socket=/data/mysql/mysql.sock
character-set-server=utf8mb4
server-id=2
#log-bin=master-bin
gtid_mode=ON
enforce-gtid-consistency=ON
default_authentication_plugin=mysql_native_password
!includedir /etc/my.cnf.d
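The same my.cnf is used on every MySQL host; only server-id has to differ between them. Only server3's file is shown above, so the assignment below is an assumption for illustration (server2 runs only the MHA manager and needs no MySQL instance):
##server1: server-id=1
##server3: server-id=2
##server4: server-id=3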
cd /data/mysql
rm -fr *
##wipe the old data files before re-initializing
- Initialize the data directory and start MySQL
[root@server1 mysql]# mysqld --initialize --user=mysql
2020-08-20T01:51:41.057368Z 0 [System] [MY-013169] [Server] /usr/local/mysql/bin/mysqld (mysqld 8.0.21) initializing of server in progress as process 8973
2020-08-20T01:51:41.077433Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
2020-08-20T01:51:41.725462Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
2020-08-20T01:51:43.397093Z 6 [Note] [MY-010454] [Server] A temporary password is generated for root@localhost: !k?:f9wA<#s?
[root@server1 mysql]# /etc/init.d/mysqld start
Starting MySQL.Logging to '/data/mysql/server1.err'.
SUCCESS!
- Repeat the steps above on all three MySQL hosts (server1, server3, server4).
- Log in to MySQL with the temporary password generated during initialization:
[root@server1 mysql]# mysql -p
Enter password:
- Run the following at the MySQL prompt.
- On server1:
mysql> set sql_log_bin=0;
Query OK, 0 rows affected (0.01 sec)
mysql> alter user root@localhost identified by 'westos';
Query OK, 0 rows affected (0.00 sec)
mysql> set sql_log_bin=1;
Query OK, 0 rows affected (0.00 sec)
mysql> create user repl@'%' identified by 'westos';
Query OK, 0 rows affected (0.01 sec)
mysql> grant replication slave on *.* to repl@'%';
Query OK, 0 rows affected (0.01 sec)
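A quick way to confirm on server1 that the replication account exists and GTID mode is active (these checks are not part of the original transcript):
mysql> select user,host from mysql.user where user='repl';
mysql> show variables like 'gtid_mode';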
- On server3 and server4:
mysql> set sql_log_bin=0;
Query OK, 0 rows affected (0.00 sec)
mysql> alter user root@localhost identified by 'westos';
Query OK, 0 rows affected (0.01 sec)
mysql> set sql_log_bin=1;
Query OK, 0 rows affected (0.00 sec)
mysql> change master to master_host='172.25.9.1',master_user='repl',master_password='westos',master_auto_position=1;
Query OK, 0 rows affected, 2 warnings (0.02 sec)
mysql> start slave;
Query OK, 0 rows affected (0.00 sec)
- Check the slave status:
show slave status\G
If both Slave_IO_Running: Yes and Slave_SQL_Running: Yes appear, replication is working.
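As a final sanity check (not shown in the original transcript), anything written on server1 should appear on both slaves, for example:
mysql> create database testdb;   ##on server1
mysql> show databases;           ##on server3 and server4; testdb should be listed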
Installing MHA
- On server2 (the manager node), install all of the downloaded packages:
yum install -y *.rpm
- Then copy mha4mysql-node-0.58-0.el7.centos.noarch.rpm to server1, server3 and server4:
[root@server2 MHA-7]# scp mha4mysql-node-0.58-0.el7.centos.noarch.rpm root@172.25.9.1:/mnt/
mha4mysql-node-0.58-0.el7.centos.noarch.rpm 100% 35KB 22.9MB/s 00:00
[root@server2 MHA-7]# scp mha4mysql-node-0.58-0.el7.centos.noarch.rpm root@172.25.9.3:/mnt/
The authenticity of host '172.25.9.3 (172.25.9.3)' can't be established.
ECDSA key fingerprint is SHA256:BRPa9K53qZwHKMjqk5qTAAZAFc16GNjA+QN+UxKh20g.
ECDSA key fingerprint is MD5:03:4a:fe:a4:0a:03:ab:09:66:eb:58:cd:31:d9:30:0c.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '172.25.9.3' (ECDSA) to the list of known hosts.
root@172.25.9.3's password:
mha4mysql-node-0.58-0.el7.centos.noarch.rpm 100% 35KB 28.7MB/s 00:00
[root@server2 MHA-7]# scp mha4mysql-node-0.58-0.el7.centos.noarch.rpm root@172.25.9.4:/mnt/
The authenticity of host '172.25.9.4 (172.25.9.4)' can't be established.
ECDSA key fingerprint is SHA256:JBRo5bN7Jzl3WdNRHasc2ITRQ5SPN5c0or6w52WlN4E.
ECDSA key fingerprint is MD5:c4:48:3f:c2:5f:64:67:7e:fd:3a:f2:0f:a8:60:6d:c8.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '172.25.9.4' (ECDSA) to the list of known hosts.
root@172.25.9.4's password:
mha4mysql-node-0.58-0.el7.centos.noarch.rpm 100% 35KB 23.3MB/s 00:00
- Install the node package on server1, server3 and server4:
[root@server1 mnt]# rpm -ivh mha4mysql-node-0.58-0.el7.centos.noarch.rpm
error: Failed dependencies:
perl(DBD::mysql) is needed by mha4mysql-node-0.58-0.el7.centos.noarch
perl(DBI) is needed by mha4mysql-node-0.58-0.el7.centos.noarch
- Installing with yum instead resolves the missing Perl dependencies:
yum install -y mha4mysql-node-0.58-0.el7.centos.noarch.rpm
- Verify that the package is installed:
[root@server1 mnt]# rpm -ivh mha4mysql-node-0.58-0.el7.centos.noarch.rpm
Preparing... ################################# [100%]
package mha4mysql-node-0.58-0.el7.centos.noarch is already installed
[root@server1 mnt]# rpm -ql mha4mysql-node-0.58-0.el7.centos.noarch
/usr/bin/apply_diff_relay_logs
/usr/bin/filter_mysqlbinlog
/usr/bin/purge_relay_logs
/usr/bin/save_binary_logs
/usr/share/man/man1/apply_diff_relay_logs.1.gz
/usr/share/man/man1/filter_mysqlbinlog.1.gz
/usr/share/man/man1/purge_relay_logs.1.gz
/usr/share/man/man1/save_binary_logs.1.gz
/usr/share/perl5/vendor_perl/MHA/BinlogHeaderParser.pm
/usr/share/perl5/vendor_perl/MHA/BinlogManager.pm
/usr/share/perl5/vendor_perl/MHA/BinlogPosFindManager.pm
/usr/share/perl5/vendor_perl/MHA/BinlogPosFinder.pm
/usr/share/perl5/vendor_perl/MHA/BinlogPosFinderElp.pm
/usr/share/perl5/vendor_perl/MHA/BinlogPosFinderXid.pm
/usr/share/perl5/vendor_perl/MHA/NodeConst.pm
/usr/share/perl5/vendor_perl/MHA/NodeUtil.pm
/usr/share/perl5/vendor_perl/MHA/SlaveUtil.pm
[root@server1 mnt]# rpm -qa|grep node
mha4mysql-node-0.58-0.el7.centos.noarch
libvirt-daemon-driver-nodedev-4.5.0-10.el7.x86_64
- Back on server2, unpack the manager source tarball to get the sample configuration files:
[root@server2 MHA-7]# ls
mha4mysql-manager-0.58-0.el7.centos.noarch.rpm perl-Mail-Sender-0.8.23-1.el7.noarch.rpm
mha4mysql-manager-0.58.tar.gz perl-Mail-Sendmail-0.79-21.el7.noarch.rpm
mha4mysql-node-0.58-0.el7.centos.noarch.rpm perl-MIME-Lite-3.030-1.el7.noarch.rpm
perl-Config-Tiny-2.14-7.el7.noarch.rpm perl-MIME-Types-1.38-2.el7.noarch.rpm
perl-Email-Date-Format-1.002-15.el7.noarch.rpm perl-Net-Telnet-3.03-19.el7.noarch.rpm
perl-Log-Dispatch-2.41-1.el7.1.noarch.rpm perl-Parallel-ForkManager-1.18-2.el7.noarch.rpm
[root@server2 MHA-7]# tar -zxvf mha4mysql-manager-0.58.tar.gz
[root@server2 MHA-7]# cd mha4mysql-manager-0.58/
[root@server2 mha4mysql-manager-0.58]# ls
AUTHORS bin COPYING debian lib Makefile.PL MANIFEST MANIFEST.SKIP README rpm samples t tests
[root@server2 mha4mysql-manager-0.58]# cd samples/
[root@server2 samples]# ls
conf scripts
[root@server2 samples]# cd conf
[root@server2 conf]# ls
app1.cnf masterha_default.cnf
[root@server2 conf]# mkdir /etc/mha
[root@server2 conf]# cat masterha_default.cnf app1.cnf > /etc/mha/app.cnf
- On server1, enter MySQL.
- Create the MySQL account that MHA will use to manage the databases (here root@'%', matching the user= entry in the manager configuration below). Since master-slave replication is already established, creating the account and granting its privileges on the master is enough; the change replicates, so there is no need to repeat it on the slaves.
mysql> create user root@'%' identified by 'westos';
Query OK, 0 rows affected (0.00 sec)
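The masterha_check_repl step later in this walkthrough fails unless this account also has privileges on all databases, so the grant shown in the troubleshooting note further down can just as well be issued now:
mysql> grant all on *.* to root@'%';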
- Edit the MHA configuration file on server2:
vim /etc/mha/app.cnf
[root@server2 mha]# cat app.cnf
[server default]
user=root
password=westos
ssh_user=root
master_binlog_dir= /data/mysql
remote_workdir=/tmp
secondary_check_script= masterha_secondary_check -s 172.25.9.3 -s 172.25.9.4
ping_interval=3 # interval, in seconds, between health checks of the master
#master_ip_failover_script= /script/masterha/master_ip_failover
#shutdown_script= /script/masterha/power_manager
#report_script= /script/masterha/send_report
#master_ip_online_change_script= /script/masterha/master_ip_online_change
manager_workdir=/etc/mha/app1
manager_log=/etc/mha/app1/manager.log
[server1]
hostname=172.25.9.1
candidate_master=1
[server2]
hostname=172.25.9.3
candidate_master=1 # this host takes over as master when the current master fails
check_repl_delay=0
[server3]
hostname=172.25.9.4
no_master=1 # never promote this host to master
- Test. On server2, first confirm that the manager can reach MySQL on the master:
[root@server2 conf]# mysql -h 172.25.9.1 -uroot -pwestos
masterha_check_ssh --conf=/etc/mha/app.cnf
#check that passwordless SSH between the nodes is configured correctly
masterha_check_repl --conf=/etc/mha/app.cnf
#check that the replication configuration of the MySQL cluster is OK
Make sure both checks succeed.
- If the SSH check reports errors, the hosts cannot yet log in to each other without a password.
- Set up passwordless SSH between all hosts. The manager's ~/.ssh directory is simply copied to every node below, so all four machines end up sharing one key pair.
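A minimal sketch of how that shared key pair would have been prepared on server2 beforehand (key generation is not shown in the original transcript, so this is an assumption):
ssh-keygen
##accept the defaults and leave the passphrase empty
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
##the resulting .ssh directory is then distributed to every node: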
[root@server2 ~]# scp -r .ssh/ 172.25.9.1:
root@172.25.9.1's password:
known_hosts 100% 857 1.7MB/s 00:00
authorized_keys 100% 788 1.4MB/s 00:00
id_rsa 100% 1679 3.3MB/s 00:00
id_rsa.pub 100% 394 880.1KB/s 00:00
[root@server2 ~]# scp -r .ssh/ 172.25.9.3:
root@172.25.9.3's password:
known_hosts 100% 857 840.6KB/s 00:00
authorized_keys 100% 788 1.1MB/s 00:00
id_rsa 100% 1679 2.3MB/s 00:00
id_rsa.pub 100% 394 679.8KB/s 00:00
[root@server2 ~]# scp -r .ssh/ 172.25.9.4:
root@172.25.9.4's password:
known_hosts 100% 857 977.7KB/s 00:00
authorized_keys 100% 788 942.4KB/s 00:00
id_rsa 100% 1679 2.5MB/s 00:00
id_rsa.pub 100% 394 552.1KB/s 00:00
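Each of the following should now return the remote hostname without asking for a password (a quick check that is not part of the original output):
ssh 172.25.9.1 hostname
ssh 172.25.9.3 hostname
ssh 172.25.9.4 hostname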
- Re-run the check; this time it succeeds:
[root@server2 mha]# masterha_check_repl --conf=/etc/mha/app.cnf
Thu Aug 20 13:21:01 2020 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Thu Aug 20 13:21:01 2020 - [info] Reading application default configuration from /etc/mha/app.cnf..
Thu Aug 20 13:21:01 2020 - [info] Reading server configuration from /etc/mha/app.cnf..
Thu Aug 20 13:21:01 2020 - [info] MHA::MasterMonitor version 0.58.
Thu Aug 20 13:21:02 2020 - [info] GTID failover mode = 1
Thu Aug 20 13:21:02 2020 - [info] Dead Servers:
Thu Aug 20 13:21:02 2020 - [info] Alive Servers:
Thu Aug 20 13:21:02 2020 - [info] 172.25.9.1(172.25.9.1:3306)
Thu Aug 20 13:21:02 2020 - [info] 172.25.9.3(172.25.9.3:3306)
Thu Aug 20 13:21:02 2020 - [info] 172.25.9.4(172.25.9.4:3306)
Thu Aug 20 13:21:02 2020 - [info] Alive Slaves:
Thu Aug 20 13:21:02 2020 - [info] 172.25.9.3(172.25.9.3:3306) Version=8.0.21 (oldest major version between slaves) log-bin:enabled
Thu Aug 20 13:21:02 2020 - [info] GTID ON
Thu Aug 20 13:21:02 2020 - [info] Replicating from 172.25.9.1(172.25.9.1:3306)
Thu Aug 20 13:21:02 2020 - [info] Primary candidate for the new Master (candidate_master is set)
Thu Aug 20 13:21:02 2020 - [info] 172.25.9.4(172.25.9.4:3306) Version=8.0.21 (oldest major version between slaves) log-bin:enabled
Thu Aug 20 13:21:02 2020 - [info] GTID ON
Thu Aug 20 13:21:02 2020 - [info] Replicating from 172.25.9.1(172.25.9.1:3306)
Thu Aug 20 13:21:02 2020 - [info] Not candidate for the new Master (no_master is set)
Thu Aug 20 13:21:02 2020 - [info] Current Alive Master: 172.25.9.1(172.25.9.1:3306)
Thu Aug 20 13:21:02 2020 - [info] Checking slave configurations..
Thu Aug 20 13:21:02 2020 - [info] read_only=1 is not set on slave 172.25.9.3(172.25.9.3:3306).
Thu Aug 20 13:21:02 2020 - [info] read_only=1 is not set on slave 172.25.9.4(172.25.9.4:3306).
Thu Aug 20 13:21:02 2020 - [info] Checking replication filtering settings..
Thu Aug 20 13:21:02 2020 - [info] binlog_do_db= , binlog_ignore_db=
Thu Aug 20 13:21:02 2020 - [info] Replication filtering check ok.
Thu Aug 20 13:21:02 2020 - [info] GTID (with auto-pos) is supported. Skipping all SSH and Node package checking.
Thu Aug 20 13:21:02 2020 - [info] Checking SSH publickey authentication settings on the current master..
Thu Aug 20 13:21:02 2020 - [info] HealthCheck: SSH to 172.25.9.1 is reachable.
Thu Aug 20 13:21:02 2020 - [info]
172.25.9.1(172.25.9.1:3306) (current master)
+--172.25.9.3(172.25.9.3:3306)
+--172.25.9.4(172.25.9.4:3306)
Thu Aug 20 13:21:02 2020 - [info] Checking replication health on 172.25.9.3..
Thu Aug 20 13:21:02 2020 - [info] ok.
Thu Aug 20 13:21:02 2020 - [info] Checking replication health on 172.25.9.4..
Thu Aug 20 13:21:02 2020 - [info] ok.
Thu Aug 20 13:21:02 2020 - [warning] master_ip_failover_script is not defined.
Thu Aug 20 13:21:02 2020 - [warning] shutdown_script is not defined.
Thu Aug 20 13:21:02 2020 - [info] Got exit code 0 (Not master dead).
MySQL Replication Health is OK.
- This step may still report "not ok" if the management account on server1 has not been granted privileges. In MySQL on server1, run:
grant all on *.* to root@'%';
- On server2
#start the MHA manager
[root@server2 mha]# nohup masterha_manager --conf=/etc/mha/app.cnf &
[1] 9716
[root@server2 mha]# nohup: ignoring input and appending output to ‘nohup.out’
[root@server2 mha]# cd /etc/mha/app1
[root@server2 app1]# ls
app.master_status.health manager.log
[root@server2 app1]# cat app.master_status.health
9716 0:PING_OK master:172.25.9.1
##the manager is running and reports the master as healthy
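The same information can be obtained with the masterha_check_status helper shipped with the manager package:
masterha_check_status --conf=/etc/mha/app.cnf
##prints the manager PID, the PING_OK state and the current master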
Testing failover
Manual test
- On server1, stop MySQL to simulate a master failure:
/etc/init.d/mysqld stop
At this point MHA promotes server3 and it takes over as master.
- Bring MySQL back up on server1 and re-attach it as a slave of the new master:
mysql> change master to master_host='172.25.9.3',master_user='repl',master_password='westos';
Query OK, 0 rows affected, 2 warnings (0.03 sec)
mysql> show slave status\G;
*************************** 1. row ***************************
Slave_IO_State:
Master_Host: 172.25.9.3
Master_User: repl
Master_Port: 3306
The slave status now shows server1 pointing at the new master 172.25.9.3.
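Two details the transcript leaves out: server3 and server4 were pointed at their master with master_auto_position=1 earlier, so the same option could be added to the change master statement here, and the replication threads still have to be started before server1 actually begins following the new master:
mysql> start slave;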
Configuring the failover scripts and the floating VIP
- Before restarting the manager, remove the app.failover.complete file that was written under /etc/mha/app1 during the previous failover (MHA uses it to block back-to-back failovers). On server2:
cd /etc/mha/app1
rm -fr app.failover.complete
nohup masterha_manager --conf=/etc/mha/app.cnf &
nohup: ignoring input and appending output to ‘nohup.out’
- Edit the master_ip_failover and master_ip_online_change sample scripts to manage the VIP:
cd /root/MHA-7/mha4mysql-manager-0.58/samples/scripts/
mv master_ip_online_change master_ip_failover /etc/mha/app1
cd /etc/mha/app1
vim master_ip_failover
[root@server2 scripts]# cat master_ip_failover
#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';
use Getopt::Long;
my (
$command, $ssh_user, $orig_master_host, $orig_master_ip,
$orig_master_port, $new_master_host, $new_master_ip, $new_master_port
);
my $vip = '172.25.9.100/24';
my $ssh_start_vip = "/sbin/ip addr add $vip dev eth0";
my $ssh_stop_vip = "/sbin/ip addr del $vip dev eth0";
GetOptions(
'command=s' => \$command,
'ssh_user=s' => \$ssh_user,
'orig_master_host=s' => \$orig_master_host,
'orig_master_ip=s' => \$orig_master_ip,
'orig_master_port=i' => \$orig_master_port,
'new_master_host=s' => \$new_master_host,
'new_master_ip=s' => \$new_master_ip,
'new_master_port=i' => \$new_master_port,
);
exit &main();
sub main {
print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";
if ( $command eq "stop" || $command eq "stopssh" ) {
my $exit_code = 1;
eval {
print "Disabling the VIP on old master: $orig_master_host \n";
&stop_vip();
$exit_code = 0;
};
if ($@) {
warn "Got Error: $@\n";
exit $exit_code;
}
exit $exit_code;
}
elsif ( $command eq "start" ) {
my $exit_code = 10;
eval {
print "Enabling the VIP - $vip on the new master - $new_master_host \n";
&start_vip();
$exit_code = 0;
};
if ($@) {
warn $@;
exit $exit_code;
}
exit $exit_code;
}
elsif ( $command eq "status" ) {
print "Checking the Status of the script.. OK \n";
exit 0;
}
else {
&usage();
exit 1;
}
}
sub start_vip() {
`ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}
sub stop_vip() {
return 0 unless ($ssh_user);
`ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}
sub usage {
print
"Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}
vim master_ip_online_change
#!/usr/bin/env perl
use strict;
use warnings FATAL =>'all';
use Getopt::Long;
my $vip = '172.25.9.100/24';
my $ssh_start_vip = "/sbin/ip addr add $vip dev eth0";
my $ssh_stop_vip = "/sbin/ip addr del $vip dev eth0";
my $exit_code = 0;
my (
$command, $orig_master_is_new_slave, $orig_master_host,
$orig_master_ip, $orig_master_port, $orig_master_user,
$orig_master_password, $orig_master_ssh_user, $new_master_host,
$new_master_ip, $new_master_port, $new_master_user,
$new_master_password, $new_master_ssh_user,
);
GetOptions(
'command=s' => \$command,
'orig_master_is_new_slave' => \$orig_master_is_new_slave,
'orig_master_host=s' => \$orig_master_host,
'orig_master_ip=s' => \$orig_master_ip,
'orig_master_port=i' => \$orig_master_port,
'orig_master_user=s' => \$orig_master_user,
'orig_master_password=s' => \$orig_master_password,
'orig_master_ssh_user=s' => \$orig_master_ssh_user,
'new_master_host=s' => \$new_master_host,
'new_master_ip=s' => \$new_master_ip,
'new_master_port=i' => \$new_master_port,
'new_master_user=s' => \$new_master_user,
'new_master_password=s' => \$new_master_password,
'new_master_ssh_user=s' => \$new_master_ssh_user,
);
exit &main();
sub main {
#print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";
if ( $command eq "stop" || $command eq "stopssh" ) {
# $orig_master_host, $orig_master_ip, $orig_master_port are passed.
# If you manage master ip address at global catalog database,
# invalidate orig_master_ip here.
my $exit_code = 1;
eval {
print "\n\n\n***************************************************************\n";
print "Disabling the VIP - $vip on old master: $orig_master_host\n";
print "***************************************************************\n\n\n\n";
&stop_vip();
$exit_code = 0;
};
if ($@) {
warn "Got Error: $@\n";
exit $exit_code;
}
exit $exit_code;
}
elsif ( $command eq "start" ) {
# all arguments are passed.
# If you manage master ip address at global catalog database,
# activate new_master_ip here.
# You can also grant write access (create user, set read_only=0, etc) here.
my $exit_code = 10;
eval {
print "\n\n\n***************************************************************\n";
print "Enabling the VIP - $vip on new master: $new_master_host \n";
print "***************************************************************\n\n\n\n";
&start_vip();
$exit_code = 0;
};
if ($@) {
warn $@;
exit $exit_code;
}
exit $exit_code;
}
elsif ( $command eq "status" ) {
print "Checking the Status of the script.. OK \n";
`ssh $orig_master_ssh_user\@$orig_master_host \" $ssh_start_vip \"`;
exit 0;
}
else {
&usage();
exit 1;
}
}
# A simple system call that enable the VIP on the new master
sub start_vip() {
`ssh $new_master_ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}
# A simple system call that disable the VIP on the old_master
sub stop_vip() {
`ssh $orig_master_ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}
sub usage {
print
"Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}
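For the VIP to actually be moved by MHA, the manager configuration still has to reference these scripts and they have to be executable. That wiring is not shown in the original transcript, so the following is a sketch under that assumption:
chmod +x /etc/mha/app1/master_ip_failover /etc/mha/app1/master_ip_online_change
vim /etc/mha/app.cnf
##uncomment the script parameters and point them at the copies under /etc/mha/app1:
##master_ip_failover_script= /etc/mha/app1/master_ip_failover
##master_ip_online_change_script= /etc/mha/app1/master_ip_online_change
Then restart masterha_manager so the new settings take effect.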
- Test. server3 is currently the master, so add the VIP to server3 first:
ip addr add 172.25.9.100 dev eth0
/etc/init.d/mysqld stop
##stop MySQL on server3 to simulate a master failure
cat /etc/mha/app1/manager.log
##follow the manager log to watch the failover
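The log should show MHA electing a new master and invoking master_ip_failover. Whether the VIP actually floated can then be confirmed on the surviving candidate master (server1 in this layout, assuming it was re-attached as a slave earlier):
ip addr show eth0 | grep 172.25.9.100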