MHA Deployment and Switchover

Introduction:

Master High Availability (MHA) is a solution for ensuring service continuity and high availability, used mainly with database systems such as MySQL. It monitors the state of the database master node; when the master fails, it automatically switches over to a backup node and announces the new master to the other nodes, so the system as a whole keeps working even when the master goes down.

The main features of MHA include:

Automatic monitoring: MHA continuously monitors the state of the master node and reacts immediately when a failure occurs.

Fast failover: when the master fails, MHA automatically moves the workload to a backup node, achieving a quick failover.

Data synchronization: MHA keeps the backup nodes' data consistent with the master, so that once the failed master has recovered it can be brought back into the topology seamlessly.

Switchback: after the original master recovers, the workload can be switched back to it, keeping the whole system stable and continuous.

MHA is a high-availability solution built on clustering, load balancing, failover, and backup/recovery techniques. It helps organizations achieve service continuity and high availability and keeps critical business running smoothly.

I. Deployment environment

MySQL version: 5.7.27
yum repository: EPEL
OS: CentOS 7
Replication topology: one master, two slaves
192.168.2.31  master
192.168.2.32  slave
192.168.2.33  slave
VIP: 192.168.2.34
CPU: 1 core
Memory: 1 GB
keepalived version: 1.4.5
MySQL and keepalived are already installed; this article focuses on deploying MHA (see my earlier posts for installing MySQL and keepalived).
Link: https://pan.baidu.com/s/1EQhvVSimRoC8Ak9Zwcos1Q
Extraction code: hw14
The RPM packages and scripts used in this article are available at the link above.

II. Installation and deployment

1. Install dependencies

Install the dependencies on all three machines:

yum -y install perl-DBD-MySQL perl-ExtUtils-CBuilder  perl-ExtUtils-MakeMaker  perl-devel perl-CPAN  
yum -y install perl-DBD-MySQL perl-Config-Tiny perl-Log-Dispatch perl-Parallel-ForkManager
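If you want to double-check that the Perl dependencies landed on each node, you can query rpm (an optional sanity check, not part of the original steps):

rpm -q perl-DBD-MySQL perl-Config-Tiny perl-Log-Dispatch perl-Parallel-ForkManager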


2. Install the RPM packages

Upload the required packages: the node package goes to all three machines; I uploaded the manager package to the slave 192.168.2.33.
Manager node: 192.168.2.33
Node-only hosts: 192.168.2.31, 192.168.2.32

2.1 Install the mha4mysql-node package first

Run this step on all three machines:

rpm -ivh mha4mysql-node-0.58-0.el7.centos.noarch.rpm


Check where the files were installed:
[root@mha1 ~]# rpm -ql mha4mysql-node-0.58-0.el7.centos.noarch
/usr/bin/apply_diff_relay_logs
/usr/bin/filter_mysqlbinlog
/usr/bin/purge_relay_logs
/usr/bin/save_binary_logs
/usr/share/man/man1/apply_diff_relay_logs.1.gz
/usr/share/man/man1/filter_mysqlbinlog.1.gz
/usr/share/man/man1/purge_relay_logs.1.gz
/usr/share/man/man1/save_binary_logs.1.gz
/usr/share/perl5/vendor_perl/MHA/BinlogHeaderParser.pm
/usr/share/perl5/vendor_perl/MHA/BinlogManager.pm
/usr/share/perl5/vendor_perl/MHA/BinlogPosFindManager.pm
/usr/share/perl5/vendor_perl/MHA/BinlogPosFinder.pm
/usr/share/perl5/vendor_perl/MHA/BinlogPosFinderElp.pm
/usr/share/perl5/vendor_perl/MHA/BinlogPosFinderXid.pm
/usr/share/perl5/vendor_perl/MHA/NodeConst.pm
/usr/share/perl5/vendor_perl/MHA/NodeUtil.pm
/usr/share/perl5/vendor_perl/MHA/SlaveUtil.pm

2.2 Then install the mha4mysql-manager package

Run this step only on the manager node, 192.168.2.33:

rpm -ivh mha4mysql-manager-0.58-0.el7.noarch.rpm


[root@mha3 ~]# rpm -ql mha4mysql-manager-0.58-0.el7.noarch
/usr/bin/masterha_check_repl
/usr/bin/masterha_check_ssh
/usr/bin/masterha_check_status
/usr/bin/masterha_conf_host
/usr/bin/masterha_manager
/usr/bin/masterha_master_monitor
/usr/bin/masterha_master_switch
/usr/bin/masterha_secondary_check
/usr/bin/masterha_stop
/usr/share/man/man1/masterha_check_repl.1.gz
/usr/share/man/man1/masterha_check_ssh.1.gz
/usr/share/man/man1/masterha_check_status.1.gz
/usr/share/man/man1/masterha_conf_host.1.gz
/usr/share/man/man1/masterha_manager.1.gz
/usr/share/man/man1/masterha_master_monitor.1.gz
/usr/share/man/man1/masterha_master_switch.1.gz
/usr/share/man/man1/masterha_secondary_check.1.gz
/usr/share/man/man1/masterha_stop.1.gz
/usr/share/perl5/vendor_perl/MHA/Config.pm
/usr/share/perl5/vendor_perl/MHA/DBHelper.pm
/usr/share/perl5/vendor_perl/MHA/FileStatus.pm
/usr/share/perl5/vendor_perl/MHA/HealthCheck.pm
/usr/share/perl5/vendor_perl/MHA/ManagerAdmin.pm
/usr/share/perl5/vendor_perl/MHA/ManagerAdminWrapper.pm
/usr/share/perl5/vendor_perl/MHA/ManagerConst.pm
/usr/share/perl5/vendor_perl/MHA/ManagerUtil.pm
/usr/share/perl5/vendor_perl/MHA/MasterFailover.pm
/usr/share/perl5/vendor_perl/MHA/MasterMonitor.pm
/usr/share/perl5/vendor_perl/MHA/MasterRotate.pm
/usr/share/perl5/vendor_perl/MHA/SSHCheck.pm
/usr/share/perl5/vendor_perl/MHA/Server.pm
/usr/share/perl5/vendor_perl/MHA/ServerManager.pm

3. Create the directories used by MHA and symlinks for the MySQL binaries

Run on all nodes.

3.1 Create the directories

mkdir -p /usr/local/masterha/app1   /usr/local/masterha/etc   /usr/local/masterha/bin

3.2 Create symlinks for mysqlbinlog and mysql so the masterha_check_repl script does not fail

ln -s /usr/local/mysql/bin/mysqlbinlog /usr/bin/mysqlbinlog
ln -s /usr/local/mysql/bin/mysql /usr/bin/mysql
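To confirm the symlinks point at real binaries (an optional check; adjust the source paths if your MySQL lives somewhere other than /usr/local/mysql):

ls -l /usr/bin/mysql /usr/bin/mysqlbinlog
/usr/bin/mysql --version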

4. Create the MHA management account

mysql> grant all privileges on *.* to mha@'192.168.2.%' identified by 'mha';
Query OK, 0 rows affected, 1 warning (0.01 sec)

Check that the account was created. Creating it on the master replicates it to the slaves automatically, so verify on the slaves as well.
Also make sure the replication user exists on every node; if it is missing on any one of them, the later checks will fail. A quick way to verify both accounts is shown below.
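Here is a minimal sketch of such a check, run from any of the three machines; it assumes the mha account created above (password mha) and the replication user repl_slave referenced in app1.cnf:

# log in to each instance as the mha user and list both accounts
for ip in 192.168.2.31 192.168.2.32 192.168.2.33; do
  echo "== $ip =="
  mysql -umha -pmha -h$ip -P38141 -e "select user,host from mysql.user where user in ('mha','repl_slave');"
done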

5. Edit the configuration file

Edit it on the manager node:

vim /usr/local/masterha/etc/app1.cnf
[server default]
manager_log=/usr/local/masterha/app1/manager.log                                 # MHA manager log file
manager_workdir=/usr/local/masterha/app1                                         # MHA manager working directory
master_ip_failover_script=/usr/local/masterha/bin/master_ip_failover             # script that moves the VIP during a failover
master_ip_online_change_script=/usr/local/masterha/bin/master_ip_online_change   # script that moves the VIP during an online switch
remote_workdir=/usr/local/masterha/app1/                                         # working directory on the remote servers
# replication account
repl_user=repl_slave                         ## remember: the replication account must exist on every node
repl_password=repl_slave

# user for passwordless SSH checks
ssh_user=root

# MySQL user MHA uses to connect
user=mha
password=mha

ping_interval=1                              # heartbeat interval (seconds)
secondary_check_script= /usr/bin/masterha_secondary_check -s 192.168.2.32 -s 192.168.2.33    # secondary check script, re-checks the master from the slaves
maxcon=2000                                  # maximum connection count
maxfail=60                                   # maximum failure count
ping_type=SELECT                             # heartbeat check type

[server1]                                    # MySQL instance 1
master_binlog_dir="/opt/mysqldata"           # directory holding the master's binlogs
candidate_master=1                           # this instance may be promoted to master
check_repl_delay=0                           # ignore replication delay when choosing the new master
hostname=192.168.2.31                        # hostname or IP of this instance
port=38141                                   # MySQL port of this instance
ssh_port=22                                  # SSH port of this instance

[server2]
master_binlog_dir="/opt/mysqldata"
candidate_master=1
check_repl_delay=0
hostname=192.168.2.32
port=38141
ssh_port=22

[server3]
master_binlog_dir="/opt/mysqldata"
no_master=1                                  # this instance will never be promoted to master
check_repl_delay=0
hostname=192.168.2.33
port=38141
ssh_port=22

6. Set up passwordless SSH login between all nodes

Configure this on every node.

# generate a key pair
[root@mha1 ~]# ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa >/dev/null 2>&1
# distribute the public key to every node, including this one
[root@mha1 ~]#  ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.2.31
[root@mha1 ~]#  ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.2.32
[root@mha1 ~]#  ssh-copy-id -i /root/.ssh/id_rsa.pub root@192.168.2.33
# make sure sshd allows public key authentication
[root@mha1 ~]# vim /etc/ssh/sshd_config
44 PubkeyAuthentication yes
45 RSAAuthentication yes
49 AuthorizedKeysFile      .ssh/authorized_keys
[root@mha1 ~]#  systemctl restart sshd
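Before running the MHA SSH check, you can confirm by hand that passwordless login works from the current node to every node (repeat on each machine; BatchMode makes ssh fail instead of prompting if the key is not accepted):

for ip in 192.168.2.31 192.168.2.32 192.168.2.33; do
  ssh -o BatchMode=yes root@$ip hostname
done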

7. Test the SSH connections; on success you will see All SSH connection tests passed successfully

/usr/bin/masterha_check_ssh --conf=/usr/local/masterha/etc/app1.cnf
[root@mha3 etc]# /usr/bin/masterha_check_ssh --conf=/usr/local/masterha/etc/app1.cnf
Fri May 12 22:43:26 2023 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Fri May 12 22:43:26 2023 - [info] Reading application default configuration from /usr/local/masterha/etc/app1.cnf..
Fri May 12 22:43:26 2023 - [info] Reading server configuration from /usr/local/masterha/etc/app1.cnf..
Fri May 12 22:43:26 2023 - [info] Starting SSH connection tests..
Fri May 12 22:43:27 2023 - [debug] 
Fri May 12 22:43:26 2023 - [debug]  Connecting via SSH from root@192.168.2.31(192.168.2.31:22) to root@192.168.2.32(192.168.2.32:22)..
Fri May 12 22:43:26 2023 - [debug]   ok.
Fri May 12 22:43:26 2023 - [debug]  Connecting via SSH from root@192.168.2.31(192.168.2.31:22) to root@192.168.2.33(192.168.2.33:22)..
Fri May 12 22:43:27 2023 - [debug]   ok.
Fri May 12 22:43:27 2023 - [debug] 
Fri May 12 22:43:27 2023 - [debug]  Connecting via SSH from root@192.168.2.32(192.168.2.32:22) to root@192.168.2.31(192.168.2.31:22)..
Fri May 12 22:43:27 2023 - [debug]   ok.
Fri May 12 22:43:27 2023 - [debug]  Connecting via SSH from root@192.168.2.32(192.168.2.32:22) to root@192.168.2.33(192.168.2.33:22)..
Fri May 12 22:43:27 2023 - [debug]   ok.
Fri May 12 22:43:28 2023 - [debug] 
Fri May 12 22:43:27 2023 - [debug]  Connecting via SSH from root@192.168.2.33(192.168.2.33:22) to root@192.168.2.31(192.168.2.31:22)..
Fri May 12 22:43:27 2023 - [debug]   ok.
Fri May 12 22:43:27 2023 - [debug]  Connecting via SSH from root@192.168.2.33(192.168.2.33:22) to root@192.168.2.32(192.168.2.32:22)..
Fri May 12 22:43:28 2023 - [debug]   ok.
Fri May 12 22:43:28 2023 - [info] All SSH connection tests passed successfully.

8. Check the replication status; MySQL Replication Health is OK means replication is configured correctly

Before running this check, upload the two scripts into /usr/local/masterha/bin (the paths referenced in app1.cnf); they can be downloaded from the network-drive link at the top of this article. Give them 744 permissions:

chmod 744 master_ip_failover master_ip_online_change
vim master_ip_online_change
vim master_ip_failover							# make the following change in both scripts
my $vip = '192.168.2.34/32'; 
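After editing, it is worth confirming that both scripts contain the VIP and are executable (this assumes you placed them in /usr/local/masterha/bin as above):

grep -n '192.168.2.34' /usr/local/masterha/bin/master_ip_failover /usr/local/masterha/bin/master_ip_online_change
ls -l /usr/local/masterha/bin/master_ip_failover /usr/local/masterha/bin/master_ip_online_change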


/usr/bin/masterha_check_repl --conf=/usr/local/masterha/etc/app1.cnf
[root@mha3 bin]# /usr/bin/masterha_check_repl --conf=/usr/local/masterha/etc/app1.cnf
Fri May 12 23:17:28 2023 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Fri May 12 23:17:28 2023 - [info] Reading application default configuration from /usr/local/masterha/etc/app1.cnf..
Fri May 12 23:17:28 2023 - [info] Reading server configuration from /usr/local/masterha/etc/app1.cnf..
Fri May 12 23:17:28 2023 - [info] MHA::MasterMonitor version 0.58.
Fri May 12 23:17:29 2023 - [info] GTID failover mode = 0
Fri May 12 23:17:29 2023 - [info] Dead Servers:
Fri May 12 23:17:29 2023 - [info] Alive Servers:
Fri May 12 23:17:29 2023 - [info]   192.168.2.31(192.168.2.31:38141)
Fri May 12 23:17:29 2023 - [info]   192.168.2.32(192.168.2.32:38141)
Fri May 12 23:17:29 2023 - [info]   192.168.2.33(192.168.2.33:38141)
Fri May 12 23:17:29 2023 - [info] Alive Slaves:
Fri May 12 23:17:29 2023 - [info]   192.168.2.32(192.168.2.32:38141)  Version=5.7.27-log (oldest major version between slaves) log-bin:enabled
Fri May 12 23:17:29 2023 - [info]     Replicating from 192.168.2.31(192.168.2.31:38141)
Fri May 12 23:17:29 2023 - [info]     Primary candidate for the new Master (candidate_master is set)
Fri May 12 23:17:29 2023 - [info]   192.168.2.33(192.168.2.33:38141)  Version=5.7.27-log (oldest major version between slaves) log-bin:enabled
Fri May 12 23:17:29 2023 - [info]     Replicating from 192.168.2.31(192.168.2.31:38141)
Fri May 12 23:17:29 2023 - [info]     Not candidate for the new Master (no_master is set)
Fri May 12 23:17:29 2023 - [info] Current Alive Master: 192.168.2.31(192.168.2.31:38141)
Fri May 12 23:17:29 2023 - [info] Checking slave configurations..
Fri May 12 23:17:29 2023 - [info]  read_only=1 is not set on slave 192.168.2.32(192.168.2.32:38141).
Fri May 12 23:17:29 2023 - [warning]  relay_log_purge=0 is not set on slave 192.168.2.32(192.168.2.32:38141).
Fri May 12 23:17:29 2023 - [info]  read_only=1 is not set on slave 192.168.2.33(192.168.2.33:38141).
Fri May 12 23:17:29 2023 - [warning]  relay_log_purge=0 is not set on slave 192.168.2.33(192.168.2.33:38141).
Fri May 12 23:17:29 2023 - [info] Checking replication filtering settings..
Fri May 12 23:17:29 2023 - [info]  binlog_do_db= , binlog_ignore_db= 
Fri May 12 23:17:29 2023 - [info]  Replication filtering check ok.
Fri May 12 23:17:29 2023 - [info] GTID (with auto-pos) is not supported
Fri May 12 23:17:29 2023 - [info] Starting SSH connection tests..
Fri May 12 23:17:30 2023 - [info] All SSH connection tests passed successfully.
Fri May 12 23:17:30 2023 - [info] Checking MHA Node version..
Fri May 12 23:17:31 2023 - [info]  Version check ok.
Fri May 12 23:17:31 2023 - [info] Checking SSH publickey authentication settings on the current master..
Fri May 12 23:17:31 2023 - [info] HealthCheck: SSH to 192.168.2.31 is reachable.
Fri May 12 23:17:31 2023 - [info] Master MHA Node version is 0.58.
Fri May 12 23:17:31 2023 - [info] Checking recovery script configurations on 192.168.2.31(192.168.2.31:38141)..
Fri May 12 23:17:31 2023 - [info]   Executing command: save_binary_logs --command=test --start_pos=4 --binlog_dir=/opt/mysqldata --output_file=/usr/local/masterha/app1//save_binary_logs_test --manager_version=0.58 --start_file=mysql-bin.000004 
Fri May 12 23:17:31 2023 - [info]   Connecting to root@192.168.2.31(192.168.2.31:22).. 
  Creating /usr/local/masterha/app1 if not exists..    ok.
  Checking output directory is accessible or not..
   ok.
  Binlog found at /opt/mysqldata, up to mysql-bin.000004
Fri May 12 23:17:31 2023 - [info] Binlog setting check done.
Fri May 12 23:17:31 2023 - [info] Checking SSH publickey authentication and checking recovery script configurations on all alive slave servers..
Fri May 12 23:17:31 2023 - [info]   Executing command : apply_diff_relay_logs --command=test --slave_user='mha' --slave_host=192.168.2.32 --slave_ip=192.168.2.32 --slave_port=38141 --workdir=/usr/local/masterha/app1/ --target_version=5.7.27-log --manager_version=0.58 --relay_log_info=/opt/mysqldata/relay-log.info  --relay_dir=/opt/mysqldata/  --slave_pass=xxx
Fri May 12 23:17:31 2023 - [info]   Connecting to root@192.168.2.32(192.168.2.32:22).. 
  Checking slave recovery environment settings..
    Opening /opt/mysqldata/relay-log.info ... ok.
    Relay log found at /opt/mysqldata, up to mha2-relay-bin.000010
    Temporary relay log file is /opt/mysqldata/mha2-relay-bin.000010
    Checking if super_read_only is defined and turned on.. not present or turned off, ignoring.
    Testing mysql connection and privileges..
mysql: [Warning] Using a password on the command line interface can be insecure.
 done.
    Testing mysqlbinlog output.. done.
    Cleaning up test file(s).. done.
Fri May 12 23:17:31 2023 - [info]   Executing command : apply_diff_relay_logs --command=test --slave_user='mha' --slave_host=192.168.2.33 --slave_ip=192.168.2.33 --slave_port=38141 --workdir=/usr/local/masterha/app1/ --target_version=5.7.27-log --manager_version=0.58 --relay_log_info=/opt/mysqldata/relay-log.info  --relay_dir=/opt/mysqldata/  --slave_pass=xxx
Fri May 12 23:17:31 2023 - [info]   Connecting to root@192.168.2.33(192.168.2.33:22).. 
  Checking slave recovery environment settings..
    Opening /opt/mysqldata/relay-log.info ... ok.
    Relay log found at /opt/mysqldata, up to mha3-relay-bin.000010
    Temporary relay log file is /opt/mysqldata/mha3-relay-bin.000010
    Checking if super_read_only is defined and turned on.. not present or turned off, ignoring.
    Testing mysql connection and privileges..
mysql: [Warning] Using a password on the command line interface can be insecure.
 done.
    Testing mysqlbinlog output.. done.
    Cleaning up test file(s).. done.
Fri May 12 23:17:31 2023 - [info] Slaves settings check done.
Fri May 12 23:17:31 2023 - [info] 
192.168.2.31(192.168.2.31:38141) (current master)
 +--192.168.2.32(192.168.2.32:38141)
 +--192.168.2.33(192.168.2.33:38141)

Fri May 12 23:17:31 2023 - [info] Checking replication health on 192.168.2.32..
Fri May 12 23:17:31 2023 - [info]  ok.
Fri May 12 23:17:31 2023 - [info] Checking replication health on 192.168.2.33..
Fri May 12 23:17:31 2023 - [info]  ok.
Fri May 12 23:17:31 2023 - [info] Checking master_ip_failover_script status:
Fri May 12 23:17:31 2023 - [info]   /usr/local/masterha/bin/master_ip_failover --command=status --ssh_user=root --orig_master_host=192.168.2.31 --orig_master_ip=192.168.2.31 --orig_master_port=38141 


IN SCRIPT TEST====/etc/init.d/keepalived stop==/etc/init.d/keepalived start===

Checking the Status of the script.. OK 
Fri May 12 23:17:31 2023 - [info]  OK.
Fri May 12 23:17:31 2023 - [warning] shutdown_script is not defined.
Fri May 12 23:17:31 2023 - [info] Got exit code 0 (Not master dead).

MySQL Replication Health is OK.

9. Modify the MHA source code

sed -i '244s/super_read_only/read_only/g' /usr/share/perl5/vendor_perl/MHA/SlaveUtil.pm
sed -i '262s/$dbh/#$dbh/g' /usr/share/perl5/vendor_perl/MHA/SlaveUtil.pm
sed -i '275s/super_read_only/read_only/g' /usr/share/perl5/vendor_perl/MHA/SlaveUtil.pm
vim +294 /usr/share/perl5/vendor_perl/MHA/HealthCheck.pm
and replace line 294 with:
$log->info("maxcon:$self->{maxcon},maxfail:$self->{maxfail},currentcon:$href->{Value}") if ($log and $href->{Value}>=$self->{maxcon});
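To verify that the edits took effect, you can print the modified lines (paths and line numbers as installed by the 0.58 RPMs):

sed -n '244p;262p;275p' /usr/share/perl5/vendor_perl/MHA/SlaveUtil.pm
sed -n '294p' /usr/share/perl5/vendor_perl/MHA/HealthCheck.pm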

III. Start the MHA manager

1. Start it with scripts

Write small scripts to start and stop the service. The two commands in the scripts could also simply be run in a background screen session; here I wrapped them in scripts.

cd /usr/local/masterha/etc/
[root@mha3 etc]# vim start_mha.sh 
#!/bin/bash
nohup masterha_manager --conf=/usr/local/masterha/etc/app1.cnf  --ignore_last_failover >/tmp/mha_manager.log < /dev/null 2>&1 &
[root@mha3 etc]# vim stop_mha.sh 
#!/bin/bash
masterha_stop --conf=/usr/local/masterha/etc/app1.cnf
[root@mha3 etc]# chmod 744 start_mha.sh stop_mha.sh							# give both scripts 744 permissions

Start MHA:

[root@mha3 etc]# ./start_mha.sh 

2. Check the MHA running status

/usr/bin/masterha_check_status --conf=/usr/local/masterha/etc/app1.cnf

If masterha_check_status reports an error at this point, comment out the line we added above in HealthCheck.pm and re-enable the default one:

vim +294 /usr/share/perl5/vendor_perl/MHA/HealthCheck.pm
Comment out:
$log->info("maxcon:$self->{maxcon},maxfail:$self->{maxfail},currentcon:$href->{Value}") if ($log and $href->{Value}>=$self->{maxcon});
and re-enable:
$log->info("maxcon:$self->{maxcon},maxfail:$self->{maxfail},currentcon:$href->{Value}") if ($log);
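While running the switchover tests in the next section, it can be handy to keep polling the manager status in a loop (optional):

watch -n 5 '/usr/bin/masterha_check_status --conf=/usr/local/masterha/etc/app1.cnf'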


IV. Switchover testing

The configuration file above sets the maximum connection count to 2000; once that condition has persisted for 60 s, a switchover is triggered. So there are two scenarios to test: first, the database going down; second, the connection count exceeding 2000. (A quick way to watch the connection count is shown after the excerpt below.)

maxcon=2000						 # maximum connection count
maxfail=60						 # maximum failure count
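To watch the current connection count during the tests, you can query the master through the VIP (this assumes the VIP 192.168.2.34 is sitting on whichever node is currently the master, which is how keepalived is set up here):

mysql -umha -pmha -h192.168.2.34 -P38141 -e "show status like 'Threads_connected';"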

1. Simulate a master crash

On the master, 192.168.2.31:
[root@mha1 ~]# /etc/init.d/mysql stop
Shutting down MySQL............ SUCCESS!
On the manager node, 192.168.2.33:
tailf /usr/local/masterha/app1/manager.log

Once the master has been down for 60 s, MHA performs the failover and the VIP moves to 192.168.2.32.
The master is now 192.168.2.32. Run the following on 192.168.2.33:

mysql> show slave status\G;
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.2.32
                  Master_User: repl_slave
                  Master_Port: 38141
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000007
          Read_Master_Log_Pos: 443
               Relay_Log_File: mha3-relay-bin.000002
                Relay_Log_Pos: 320
        Relay_Master_Log_File: mysql-bin.000007
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes

Now start MySQL on 192.168.2.31 again. At this point 192.168.2.31 has no replication configuration:

mysql> show slave status\G;
Empty set (0.00 sec)

ERROR: 
No query specified

We now need to point it at the new master. The CHANGE MASTER details can be found in /usr/local/masterha/app1/manager.log on the manager node, 192.168.2.33:

On 192.168.2.33:
[root@mha3 app1]# tailf /usr/local/masterha/app1/manager.log
Sat May 13 00:16:33 2023 - [info]  All other slaves should start replication from here. Statement should be: CHANGE MASTER TO MASTER_HOST='192.168.2.32', MASTER_PORT=38141, MASTER_LOG_FILE='mysql-bin.000007', MASTER_LOG_POS=443, MASTER_USER='repl_slave', MASTER_PASSWORD='xxx';

Run the following on 192.168.2.31 to reconfigure replication:

mysql> CHANGE MASTER TO MASTER_HOST='192.168.2.32',MASTER_USER='repl_slave',MASTER_PASSWORD='repl_slave',MASTER_PORT=38141,MASTER_LOG_FILE='mysql-bin.000007',MASTER_LOG_POS=443;
Query OK, 0 rows affected, 2 warnings (0.00 sec)

mysql> start slave;
Query OK, 0 rows affected (0.00 sec)

mysql> show slave status\G;
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.2.32
                  Master_User: repl_slave
                  Master_Port: 38141
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000007
          Read_Master_Log_Pos: 443
               Relay_Log_File: mha1-relay-bin.000002
                Relay_Log_Pos: 320
        Relay_Master_Log_File: mysql-bin.000007
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes

Note that once the failover has completed, the MHA manager stops automatically, so we have to start it again by hand.
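For example, on the manager node 192.168.2.33, reusing the scripts from section III:

cd /usr/local/masterha/etc/
./start_mha.sh
/usr/bin/masterha_check_status --conf=/usr/local/masterha/etc/app1.cnf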

2. Use a script to simulate a switchover triggered by exceeding the connection limit

The current replication topology is:
Master: 192.168.2.32
Slave: 192.168.2.31
Slave: 192.168.2.33
So we write the following script on the current master, 192.168.2.32:

[root@mha2 mysqldata]# mkdir /shell
[root@mha2 mysqldata]# cd /shell/
[root@mha2 shell]# vim conn_mysql.sh
#!/bin/bash
for((i=1;i<2100;i=i+1))
do
nohup /usr/local/mysql/bin/mysql -p*************** mon_test -e "select sleep(200)" >/tmp/mysql.log < /dev/null 2>&1 &															
done
[root@mha2 shell]# chmod 744 conn_mysql.sh

We need to create a database on the master, 192.168.2.32:

mysql> create database mon_test;
Query OK, 1 row affected (0.00 sec)
[root@mha2 shell]# bash conn_mysql.sh

On the manager node 192.168.2.33 you can now see the connection count exceed 2000; after 60 s the switchover takes place.
After the switchover completes, confirm the result: the VIP has moved back to 192.168.2.31, and checking on 192.168.2.33 shows that the master is now 192.168.2.31 as well.

On 192.168.2.31:
[root@mha1 ~]# hostname -I
192.168.2.31 192.168.2.34
On 192.168.2.33:
mysql> show slave status\G;
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.2.31
                  Master_User: repl_slave
                  Master_Port: 38141
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000005
          Read_Master_Log_Pos: 154
               Relay_Log_File: mha3-relay-bin.000002
                Relay_Log_Pos: 320
        Relay_Master_Log_File: mysql-bin.000005
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes

Next, reconfigure replication on 192.168.2.32. The difference from the previous test is that only the connection limit was exceeded; the database itself kept running and never went down, so we can simply run CHANGE MASTER directly. As before, the CHANGE MASTER details come from /usr/local/masterha/app1/manager.log on the manager node 192.168.2.33.

On 192.168.2.33:
[root@mha3 app1]# tailf /usr/local/masterha/app1/manager.log
Sat May 13 00:41:03 2023 - [info]  All other slaves should start replication from here. Statement should be: CHANGE MASTER TO MASTER_HOST='192.168.2.31', MASTER_PORT=38141, MASTER_LOG_FILE='mysql-bin.000005', MASTER_LOG_POS=154, MASTER_USER='repl_slave', MASTER_PASSWORD='xxx';

Reconfigure replication on 192.168.2.32:

mysql> CHANGE MASTER TO MASTER_HOST='192.168.2.31',MASTER_USER='repl_slave',MASTER_PASSWORD='repl_slave',MASTER_PORT=38141,MASTER_LOG_FILE='mysql-bin.000005',MASTER_LOG_POS=154;
Query OK, 0 rows affected, 2 warnings (0.00 sec)

mysql> start slave;
Query OK, 0 rows affected (0.00 sec)

mysql> show slave status\G;
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.2.31
                  Master_User: repl_slave
                  Master_Port: 38141
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000005
          Read_Master_Log_Pos: 154
               Relay_Log_File: mha2-relay-bin.000002
                Relay_Log_Pos: 320
        Relay_Master_Log_File: mysql-bin.000005
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
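At this point you can also clean up any leftover test connections created by conn_mysql.sh on 192.168.2.32 (a rough cleanup; each background client would exit on its own once its sleep(200) finishes, this just speeds things up):

pkill -f 'select sleep(200)'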

3. Manual online switchover with MHA

/usr/bin/masterha_master_switch --master_state=alive --interactive=0 --orig_master_is_new_slave --conf=/usr/local/masterha/etc/app1.cnf
--master_state=alive: the current master is alive and available.
--interactive=0: perform the switch without interactive confirmation.
--orig_master_is_new_slave: turn the original master into a slave of the new master.
--conf=/usr/local/masterha/etc/app1.cnf: path to the MHA configuration file.

To switch over manually while everything is online, the running MHA manager must be stopped first. (The transcript below was captured on a different lab setup, with nodes 192.168.2.26/27/28 and VIP 192.168.2.30; the procedure is the same.)
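Putting it together, a typical sequence on the manager node looks roughly like this (a sketch reusing the stop/start scripts from section III; the switch command itself is the one explained above):

cd /usr/local/masterha/etc/
./stop_mha.sh                 # stop the running manager first
/usr/bin/masterha_master_switch --master_state=alive --interactive=0 --orig_master_is_new_slave --conf=/usr/local/masterha/etc/app1.cnf
./start_mha.sh                # restart monitoring once the switch has finished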

[root@localhost etc]# /usr/bin/masterha_master_switch --master_state=alive --interactive=0 --orig_master_is_new_slave --conf=/usr/local/masterha/etc/app1.cnf
Fri Jan  5 19:17:37 2024 - [info] MHA::MasterRotate version 0.58.
Fri Jan  5 19:17:37 2024 - [info] Starting online master switch..
Fri Jan  5 19:17:37 2024 - [info] 
Fri Jan  5 19:17:37 2024 - [info] * Phase 1: Configuration Check Phase..
Fri Jan  5 19:17:37 2024 - [info] 
Fri Jan  5 19:17:37 2024 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Fri Jan  5 19:17:37 2024 - [info] Reading application default configuration from /usr/local/masterha/etc/app1.cnf..
Fri Jan  5 19:17:37 2024 - [info] Reading server configuration from /usr/local/masterha/etc/app1.cnf..
Fri Jan  5 19:17:38 2024 - [info] GTID failover mode = 0
Fri Jan  5 19:17:38 2024 - [info] Current Alive Master: 192.168.2.26(192.168.2.26:38141)
Fri Jan  5 19:17:38 2024 - [info] Alive Slaves:
Fri Jan  5 19:17:38 2024 - [info]   192.168.2.27(192.168.2.27:38141)  Version=5.7.27-log (oldest major version between slaves) log-bin:enabled
Fri Jan  5 19:17:38 2024 - [info]     Replicating from 192.168.2.26(192.168.2.26:38141)
Fri Jan  5 19:17:38 2024 - [info]     Primary candidate for the new Master (candidate_master is set)
Fri Jan  5 19:17:38 2024 - [info]   192.168.2.28(192.168.2.28:38141)  Version=5.7.27-log (oldest major version between slaves) log-bin:enabled
Fri Jan  5 19:17:38 2024 - [info]     Replicating from 192.168.2.26(192.168.2.26:38141)
Fri Jan  5 19:17:38 2024 - [info]     Not candidate for the new Master (no_master is set)
Fri Jan  5 19:17:38 2024 - [info] Executing FLUSH NO_WRITE_TO_BINLOG TABLES. This may take long time..
Fri Jan  5 19:17:38 2024 - [info]  ok.
Fri Jan  5 19:17:38 2024 - [info] Checking MHA is not monitoring or doing failover..
Fri Jan  5 19:17:38 2024 - [info] Checking replication health on 192.168.2.27..
Fri Jan  5 19:17:38 2024 - [info]  ok.
Fri Jan  5 19:17:38 2024 - [info] Checking replication health on 192.168.2.28..
Fri Jan  5 19:17:38 2024 - [info]  ok.
Fri Jan  5 19:17:38 2024 - [info] Searching new master from slaves..
Fri Jan  5 19:17:38 2024 - [info]  Candidate masters from the configuration file:
Fri Jan  5 19:17:38 2024 - [info]   192.168.2.26(192.168.2.26:38141)  Version=5.7.27-log log-bin:enabled
Fri Jan  5 19:17:38 2024 - [info]   192.168.2.27(192.168.2.27:38141)  Version=5.7.27-log (oldest major version between slaves) log-bin:enabled
Fri Jan  5 19:17:38 2024 - [info]     Replicating from 192.168.2.26(192.168.2.26:38141)
Fri Jan  5 19:17:38 2024 - [info]     Primary candidate for the new Master (candidate_master is set)
Fri Jan  5 19:17:38 2024 - [info]  Non-candidate masters:
Fri Jan  5 19:17:38 2024 - [info]   192.168.2.28(192.168.2.28:38141)  Version=5.7.27-log (oldest major version between slaves) log-bin:enabled
Fri Jan  5 19:17:38 2024 - [info]     Replicating from 192.168.2.26(192.168.2.26:38141)
Fri Jan  5 19:17:38 2024 - [info]     Not candidate for the new Master (no_master is set)
Fri Jan  5 19:17:38 2024 - [info]  Searching from candidate_master slaves which have received the latest relay log events..
Fri Jan  5 19:17:38 2024 - [info] 
From:
192.168.2.26(192.168.2.26:38141) (current master)
 +--192.168.2.27(192.168.2.27:38141)
 +--192.168.2.28(192.168.2.28:38141)

To:
192.168.2.27(192.168.2.27:38141) (new master)
 +--192.168.2.28(192.168.2.28:38141)
 +--192.168.2.26(192.168.2.26:38141)
Fri Jan  5 19:17:38 2024 - [info] Checking whether 192.168.2.27(192.168.2.27:38141) is ok for the new master..
Fri Jan  5 19:17:38 2024 - [info]  ok.
Fri Jan  5 19:17:38 2024 - [info] 192.168.2.26(192.168.2.26:38141): SHOW SLAVE STATUS returned empty result. To check replication filtering rules, temporarily executing CHANGE MASTER to a dummy host.
Fri Jan  5 19:17:38 2024 - [info] 192.168.2.26(192.168.2.26:38141): Resetting slave pointing to the dummy host.
Fri Jan  5 19:17:38 2024 - [info] ** Phase 1: Configuration Check Phase completed.
Fri Jan  5 19:17:38 2024 - [info] 
Fri Jan  5 19:17:38 2024 - [info] * Phase 2: Rejecting updates Phase..
Fri Jan  5 19:17:38 2024 - [info] 
Fri Jan  5 19:17:38 2024 - [info] Executing master ip online change script to disable write on the current master:
Fri Jan  5 19:17:38 2024 - [info]   /usr/local/masterha/bin/master_ip_online_change --command=stop --orig_master_host=192.168.2.26 --orig_master_ip=192.168.2.26 --orig_master_port=38141 --orig_master_user='mha' --new_master_host=192.168.2.27 --new_master_ip=192.168.2.27 --new_master_port=38141 --new_master_user='mha' --orig_master_ssh_user=root --new_master_ssh_user=root   --orig_master_is_new_slave --orig_master_password=xxx --new_master_password=xxx



***************************************************************
Disabling the VIP - 192.168.2.30/32 on old master: 192.168.2.26
***************************************************************



Fri Jan  5 19:17:50 2024 - [info]  ok.
Fri Jan  5 19:17:50 2024 - [info] Locking all tables on the orig master to reject updates from everybody (including root):
Fri Jan  5 19:17:50 2024 - [info] Executing FLUSH TABLES WITH READ LOCK..
Fri Jan  5 19:17:50 2024 - [info]  ok.
Fri Jan  5 19:17:50 2024 - [info] Orig master binlog:pos is mysql-bin.000002:154.
Fri Jan  5 19:17:50 2024 - [info]  Waiting to execute all relay logs on 192.168.2.27(192.168.2.27:38141)..
Fri Jan  5 19:17:50 2024 - [info]  master_pos_wait(mysql-bin.000002:154) completed on 192.168.2.27(192.168.2.27:38141). Executed 0 events.
Fri Jan  5 19:17:50 2024 - [info]   done.
Fri Jan  5 19:17:50 2024 - [info] Getting new master's binlog name and position..
Fri Jan  5 19:17:50 2024 - [info]  mysql-bin.000003:154
Fri Jan  5 19:17:50 2024 - [info]  All other slaves should start replication from here. Statement should be: CHANGE MASTER TO MASTER_HOST='192.168.2.27', MASTER_PORT=38141, MASTER_LOG_FILE='mysql-bin.000003', MASTER_LOG_POS=154, MASTER_USER='repl_slave', MASTER_PASSWORD='xxx';
Fri Jan  5 19:17:50 2024 - [info] Executing master ip online change script to allow write on the new master:
Fri Jan  5 19:17:50 2024 - [info]   /usr/local/masterha/bin/master_ip_online_change --command=start --orig_master_host=192.168.2.26 --orig_master_ip=192.168.2.26 --orig_master_port=38141 --orig_master_user='mha' --new_master_host=192.168.2.27 --new_master_ip=192.168.2.27 --new_master_port=38141 --new_master_user='mha' --orig_master_ssh_user=root --new_master_ssh_user=root   --orig_master_is_new_slave --orig_master_password=xxx --new_master_password=xxx



***************************************************************
Enabling the VIP - 192.168.2.30/32 on new master: 192.168.2.27 
***************************************************************



Fri Jan  5 19:18:00 2024 - [info]  ok.
Fri Jan  5 19:18:00 2024 - [info] 
Fri Jan  5 19:18:00 2024 - [info] * Switching slaves in parallel..
Fri Jan  5 19:18:00 2024 - [info] 
Fri Jan  5 19:18:00 2024 - [info] -- Slave switch on host 192.168.2.28(192.168.2.28:38141) started, pid: 8581
Fri Jan  5 19:18:00 2024 - [info] 
Fri Jan  5 19:18:01 2024 - [info] Log messages from 192.168.2.28 ...
Fri Jan  5 19:18:01 2024 - [info] 
Fri Jan  5 19:18:00 2024 - [info]  Waiting to execute all relay logs on 192.168.2.28(192.168.2.28:38141)..
Fri Jan  5 19:18:00 2024 - [info]  master_pos_wait(mysql-bin.000002:154) completed on 192.168.2.28(192.168.2.28:38141). Executed 0 events.
Fri Jan  5 19:18:00 2024 - [info]   done.
Fri Jan  5 19:18:00 2024 - [info]  Resetting slave 192.168.2.28(192.168.2.28:38141) and starting replication from the new master 192.168.2.27(192.168.2.27:38141)..
Fri Jan  5 19:18:00 2024 - [info]  Executed CHANGE MASTER.
Fri Jan  5 19:18:00 2024 - [info]  Slave started.
Fri Jan  5 19:18:01 2024 - [info] End of log messages from 192.168.2.28 ...
Fri Jan  5 19:18:01 2024 - [info] 
Fri Jan  5 19:18:01 2024 - [info] -- Slave switch on host 192.168.2.28(192.168.2.28:38141) succeeded.
Fri Jan  5 19:18:01 2024 - [info] Unlocking all tables on the orig master:
Fri Jan  5 19:18:01 2024 - [info] Executing UNLOCK TABLES..
Fri Jan  5 19:18:01 2024 - [info]  ok.
Fri Jan  5 19:18:01 2024 - [info] Starting orig master as a new slave..
Fri Jan  5 19:18:01 2024 - [info]  Resetting slave 192.168.2.26(192.168.2.26:38141) and starting replication from the new master 192.168.2.27(192.168.2.27:38141)..
Fri Jan  5 19:18:01 2024 - [info]  Executed CHANGE MASTER.
Fri Jan  5 19:18:01 2024 - [info]  Slave started.
Fri Jan  5 19:18:01 2024 - [info] All new slave servers switched successfully.
Fri Jan  5 19:18:01 2024 - [info] 
Fri Jan  5 19:18:01 2024 - [info] * Phase 5: New master cleanup phase..
Fri Jan  5 19:18:01 2024 - [info] 
Fri Jan  5 19:18:01 2024 - [info]  192.168.2.27: Resetting slave info succeeded.
Fri Jan  5 19:18:01 2024 - [info] Switching master to 192.168.2.27(192.168.2.27:38141) completed successfully.

That wraps up this walkthrough of MHA deployment and switchover.
