Enterprise Operations: Building a MySQL High-Availability Cluster with MHA and Implementing VIP Failover

1. Introduction to the MySQL High-Availability Architecture

MHA (Master High Availability) is currently a relatively mature solution for MySQL high availability. It was developed by youshimaton of the Japanese company DeNA (now at Facebook) and is an excellent piece of software for failover and master promotion in MySQL high-availability environments. During a MySQL failover, MHA can complete the database failover automatically within 0 to 30 seconds, and while doing so it preserves data consistency to the greatest extent possible, achieving high availability in the true sense.

The software consists of two components:

  • MHA Manager (the management node)
  • MHA Node (the data node)

MHA Manager can be deployed on a dedicated machine to manage multiple master-slave clusters, or it can run on one of the slave nodes.

MHA Node runs on every MySQL server. MHA Manager regularly probes the master node of the cluster; when the master fails, it automatically promotes the slave holding the most recent data to be the new master and then re-points all of the other slaves at the new master. The entire failover process is completely transparent to the application.

2. Advantages of MHA

During an automatic failover, MHA tries to save the binary logs from the crashed master so that as little data as possible is lost, but this is not always feasible. For example, if the master suffers a hardware failure or cannot be reached over SSH, MHA cannot save the binary logs and the failover proceeds without the most recent data. Using the semi-synchronous replication introduced in MySQL 5.5 greatly reduces the risk of data loss, and MHA can be combined with it: as long as at least one slave has received the latest binary log events, MHA can apply them to all of the other slaves, thereby keeping the data on all nodes consistent.

3. MHA Failover Workflow

  • Save the binary log events (binlog events) from the crashed master;
  • Identify the slave with the most recent data;
  • Apply the differential relay logs to the other slaves;
  • Apply the binary log events saved from the master;
  • Promote one slave to be the new master;
  • Point the other slaves at the new master and resume replication.

An overview of the tools provided by the MHA Manager and Node packages:

  • The Manager package mainly includes the following tools:

    masterha_check_ssh        check the MHA SSH configuration
    masterha_check_repl       check the MySQL replication status
    masterha_manager          start MHA
    masterha_check_status     check the current running state of MHA
    masterha_master_monitor   detect whether the master is down
    masterha_master_switch    control failover (automatic or manual)
    masterha_conf_host        add or remove a configured server entry

  • The Node package (these tools are normally triggered by MHA Manager scripts and need no manual operation) mainly includes the following tools:

    save_binary_logs          save and copy the master's binary logs
    apply_diff_relay_logs     identify differential relay log events and apply them to the other slaves
    purge_relay_logs          purge relay logs (without blocking the SQL thread)
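
Of the Node tools, purge_relay_logs is the one an administrator typically schedules directly rather than leaving to the Manager. A minimal sketch of a crontab entry for a data node (the credentials are placeholders, and the tool is assumed to be installed under /usr/bin by the RPM):

# placeholder credentials; trim relay logs daily at 04:00 on each data node
0 4 * * * /usr/bin/purge_relay_logs --user=root --password='xxxx' --disable_relay_log_purge --workdir=/tmp >> /var/log/purge_relay_logs.log 2>&1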

4. Building the MySQL High-Availability Cluster with MHA

Test environment

Four RHEL 7.5 virtual machines:

    Hostname (IP)             Role
    server1 (172.25.66.1)     master
    server2 (172.25.66.2)     slave (standby master)
    server3 (172.25.66.3)     slave
    server4 (172.25.66.4)     MHA management node

Configure GTID-based master-slave replication on server1, server2 and server3.

For details you can refer to an earlier post: https://mp.csdn.net/mdeditor/97511230#

  • 1. Reset the databases by wiping the data directory.
    The steps are the same on server1, server2 and server3.
[root@server1 mysql]# systemctl stop mysqld.service 
[root@server1 mysql]# pwd
/var/lib/mysql
[root@server1 mysql]# ls
auto.cnf       binlog.index  client-cert.pem  ibdata1      mysql             performance_schema  server-cert.pem  test
binlog.000001  ca-key.pem    client-key.pem   ib_logfile0  mysql-bin.000001  private_key.pem     server-key.pem
binlog.000002  ca.pem        ib_buffer_pool   ib_logfile1  mysql-bin.index   public_key.pem      sys
[root@server1 mysql]# rm -fr *
  • 2. Edit the MySQL configuration file on server1, server2 and server3 (server_id must be different on each host), then restart the MySQL service.
log-bin=binlog
server_id=1
gtid_mode=ON
enforce_gtid_consistency=ON
log_slave_updates=ON
systemctl restart mysqld
  • 3. Look up the initial database password, log in, and change the password (server1, 2 and 3 must use the same password). The temporary password can be found as shown after the login example below.
[root@server1 mysql]# mysql -uroot -p
Enter password: 
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 5
Server version: 5.7.24-log

Copyright (c) 2000, 2018, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> alter user root@localhost identified by 'Wsp+123ld';
Query OK, 0 rows affected (0.02 sec)
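
The initial temporary password mentioned above can be read from the MySQL error log before logging in (assuming the default log location used by the RPM install):

[root@server1 mysql]# grep 'temporary password' /var/log/mysqld.log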
  • 4. Configure master-slave replication (based on semi-synchronous replication).
    Because with MHA every host may end up as either a master or a slave, both the master and the slave semi-sync plugins must be installed on every node; a quick verification step follows the three servers' setup below.

server1 (master):

mysql> grant replication slave on *.* to repl@'172.25.66.%' identified by 'Wsp+123ld';
Query OK, 0 rows affected, 1 warning (0.01 sec)

mysql> install plugin rpl_semi_sync_master soname 'semisync_master.so';
Query OK, 0 rows affected (0.02 sec)

mysql> install plugin rpl_semi_sync_slave soname 'semisync_slave.so';
Query OK, 0 rows affected (0.02 sec)

mysql> set global rpl_semi_sync_master_enabled=1;
Query OK, 0 rows affected (0.00 sec)

mysql> set global rpl_semi_sync_master_timeout=10000000000000;    # set to a practically infinite timeout: the master must wait for a semi-sync ack, which guarantees that at least one slave stays in sync with the master and is the basis of high availability
Query OK, 0 rows affected (0.00 sec)

mysql> show variables like '%rpl%';
+-------------------------------------------+----------------+
| Variable_name                             | Value          |
+-------------------------------------------+----------------+
| rpl_semi_sync_master_enabled              | ON             |
| rpl_semi_sync_master_timeout              | 10000000000000 |
| rpl_semi_sync_master_trace_level          | 32             |
| rpl_semi_sync_master_wait_for_slave_count | 1              |
| rpl_semi_sync_master_wait_no_slave        | ON             |
| rpl_semi_sync_master_wait_point           | AFTER_SYNC     |
| rpl_semi_sync_slave_enabled               | OFF            |
| rpl_semi_sync_slave_trace_level           | 32             |
| rpl_stop_slave_timeout                    | 31536000       |
+-------------------------------------------+----------------+
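
Note that the SET GLOBAL values above are lost when MySQL restarts. To make the semi-sync settings persistent across restarts, a my.cnf sketch along these lines can be added on each node (standard semi-sync option names; plugin-load is only needed if the plugins were not already registered with INSTALL PLUGIN):

plugin-load="rpl_semi_sync_master=semisync_master.so;rpl_semi_sync_slave=semisync_slave.so"
rpl_semi_sync_master_enabled=1
rpl_semi_sync_slave_enabled=1
rpl_semi_sync_master_timeout=10000000000000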

server2 (slave):

mysql>  change master to master_host='172.25.66.1', master_user='repl',master_password='Wsp+123ld' ,master_auto_position=1;

mysql> install plugin rpl_semi_sync_master soname 'semisync_master.so';

mysql> install plugin rpl_semi_sync_slave soname 'semisync_slave.so';

mysql> set global rpl_semi_sync_slave_enabled=1;

mysql> stop slave io_thread;

mysql> start slave io_thread;

mysql> start slave;

mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 172.25.66.1
                  Master_User: repl
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: binlog.000002
          Read_Master_Log_Pos: 691
               Relay_Log_File: server2-relay-bin.000002
                Relay_Log_Pos: 898
        Relay_Master_Log_File: binlog.000002
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes

server3 (slave):

mysql>  change master to master_host='172.25.66.1', master_user='repl',master_password='Wsp+123ld' ,master_auto_position=1;

mysql> install plugin rpl_semi_sync_master soname 'semisync_master.so';

mysql> install plugin rpl_semi_sync_slave soname 'semisync_slave.so';

mysql> set global rpl_semi_sync_slave_enabled=1;

mysql> stop slave io_thread;

mysql> start slave io_thread;

mysql> start slave;

mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 172.25.66.1
                  Master_User: repl
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: binlog.000002
          Read_Master_Log_Pos: 691
               Relay_Log_File: server3-relay-bin.000002
                Relay_Log_Pos: 898
        Relay_Master_Log_File: binlog.000002
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB: 
          Replicate_Ignore_DB: 
           Replicate_Do_Table: 

Configure MHA on server4 (the node that manages the MySQL cluster)

  • 1. Download the required packages from the official site.


  • 2. Install them.
[root@server4 ~]# cd MHA-7/
[root@server4 MHA-7]# ls
mha4mysql-manager-0.58-0.el7.centos.noarch.rpm
mha4mysql-manager-0.58.tar.gz
mha4mysql-node-0.58-0.el7.centos.noarch.rpm
perl-Config-Tiny-2.14-7.el7.noarch.rpm
perl-Email-Date-Format-1.002-15.el7.noarch.rpm
perl-Log-Dispatch-2.41-1.el7.1.noarch.rpm
perl-Mail-Sender-0.8.23-1.el7.noarch.rpm
perl-Mail-Sendmail-0.79-21.el7.noarch.rpm
perl-MIME-Lite-3.030-1.el7.noarch.rpm
perl-MIME-Types-1.38-2.el7.noarch.rpm
perl-Parallel-ForkManager-1.18-2.el7.noarch.rpm
[root@server4 MHA-7]# yum install -y  *.rpm
  • 3. Because MHA uses SSH between the master and the slaves to save binary logs, server1, server2, server3 and server4 must all be able to log in to one another directly, without a password.
  • 4. Generate an SSH key pair on server4 and copy the public key to server1, 2 and 3.
[root@server4 MHA-7]# ssh-keygen 
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:1f1ZYaYNQTvmM0DnQFWqb9NIg9H/XniPEhb+++RIPk8 root@server4
The key's randomart image is:
+---[RSA 2048]----+
|          .+o*o= |
|          ..=.O .|
|          .o.O...|
|         .  O o.o|
|        S  + B o.|
|            * *..|
|           . Bo.E|
|            oo+Oo|
|             .==*|
+----[SHA256]-----+


[root@server4 MHA-7]# ssh-copy-id server1
[root@server4 MHA-7]# ssh-copy-id server2
[root@server4 MHA-7]# ssh-copy-id server3

Local name resolution for these hostnames (/etc/hosts) must be in place, otherwise this step will fail.

The steps above only let server4 connect directly to server1, 2 and 3; server1, 2 and 3 still cannot connect directly to one another. So copy the whole key pair (private and public keys) to server1, 2 and 3, after which server1, 2, 3 and 4 can all reach each other directly; a quick connectivity check follows the transfers below.

[root@server4 MHA-7]# scp -r /root/.ssh/known_hosts server1:
known_hosts                                         100%  543   510.3KB/s   00:00    
[root@server4 MHA-7]# scp -r /root/.ssh server1:
id_rsa                                              100% 1679     1.8MB/s   00:00    
id_rsa.pub                                          100%  394   516.2KB/s   00:00    
known_hosts                                         100%  543   763.1KB/s   00:00    
[root@server4 MHA-7]# scp -r /root/.ssh server2:
id_rsa                                              100% 1679   308.7KB/s   00:00    
id_rsa.pub                                          100%  394    42.7KB/s   00:00    
known_hosts                                         100%  543   469.6KB/s   00:00    
[root@server4 MHA-7]# scp -r /root/.ssh server3:
id_rsa                                              100% 1679   495.1KB/s   00:00    
id_rsa.pub                                          100%  394    39.5KB/s   00:00    
known_hosts                                         100%  543   354.6KB/s   00:00 
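
Before moving on, it is worth confirming that every host really can reach every other host without a password prompt. A quick check, run on each of the four machines (hostnames as resolved via /etc/hosts):

for h in server1 server2 server3 server4; do ssh -o BatchMode=yes root@$h hostname; done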
  • 5. Copy the MHA node package from server4 to server1, 2 and 3, and install it on each of them.
[root@server4 MHA-7]# scp mha4mysql-node-0.58-0.el7.centos.noarch.rpm server1:
mha4mysql-node-0.58-0.el7.centos.noarch.rpm         100%   35KB   6.1MB/s   00:00    
[root@server4 MHA-7]# scp mha4mysql-node-0.58-0.el7.centos.noarch.rpm server2:
mha4mysql-node-0.58-0.el7.centos.noarch.rpm         100%   35KB  17.2MB/s   00:00    
[root@server4 MHA-7]# scp mha4mysql-node-0.58-0.el7.centos.noarch.rpm server3:
mha4mysql-node-0.58-0.el7.centos.noarch.rpm         100%   35KB  19.3MB/s   00:00 
yum install -y mha4mysql-node-0.58-0.el7.centos.noarch.rpm
  • 6. Write the MHA configuration file on server4.

(1) Create the directory for the configuration file

[root@server4 ~]# mkdir -p  /etc/masterha

(2) Edit the MHA configuration file

[root@server4 ~]# vim /etc/masterha/app1.cnf
The contents of the file, with explanations, are as follows:
[server default]
manager_workdir=/etc/masterha          // working directory of the manager
manager_log=/var/log/masterha.log      // manager log file
master_binlog_dir=/etc/masterha        // where the master keeps its binlogs, so that MHA can find them
password=Wsp+123ld                     // password of the MySQL root monitoring user created earlier
user=root                              // monitoring user
ping_interval=1                        // send a ping probe to the master every 1 second (default 3); after three missed replies MHA starts an automatic failover
remote_workdir=/tmp                    // where binlogs are saved on the remote MySQL hosts during a switch
repl_password=Wsp+123ld                // password of the replication user
repl_user=repl                         // replication user name
ssh_user=root                          // SSH login user

[server1]
hostname=172.25.66.1
port=3306

[server2]
hostname=172.25.66.2
port=3306
candidate_master=1                     // mark this host as a candidate master: after a failover this slave is promoted even if it is not the most up-to-date slave in the cluster
check_repl_delay=0
// By default MHA will not pick a slave as the new master if it is more than 100 MB of relay logs behind, because recovering it would take a long time. check_repl_delay=0 makes MHA ignore replication delay when choosing the new master; this is very useful together with candidate_master=1, since that host must become the new master during a switch.

[server3]
hostname=172.25.66.3
port=3306
no_master=1                            // server3 can never become the master

  • 7. On server4, check whether the SSH configuration works.
[root@server4 MHA-7]# masterha_check_ssh --conf=/etc/masterha/app1.cnf 

Mon Jul 29 18:25:05 2019 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Mon Jul 29 18:25:05 2019 - [info] Reading application default configuration from /etc/masterha/app1.cnf..
Mon Jul 29 18:25:05 2019 - [info] Reading server configuration from /etc/masterha/app1.cnf..
Mon Jul 29 18:25:05 2019 - [info] Starting SSH connection tests..
Mon Jul 29 18:25:06 2019 - [debug] 
Mon Jul 29 18:25:05 2019 - [debug]  Connecting via SSH from root@172.25.66.1(172.25.66.1:22) to root@172.25.66.2(172.25.66.2:22)..
Mon Jul 29 18:25:06 2019 - [debug]   ok.
Mon Jul 29 18:25:06 2019 - [debug]  Connecting via SSH from root@172.25.66.1(172.25.66.1:22) to root@172.25.66.3(172.25.66.3:22)..
Mon Jul 29 18:25:06 2019 - [debug]   ok.
Mon Jul 29 18:25:07 2019 - [debug] 
Mon Jul 29 18:25:06 2019 - [debug]  Connecting via SSH from root@172.25.66.2(172.25.66.2:22) to root@172.25.66.1(172.25.66.1:22)..
Mon Jul 29 18:25:06 2019 - [debug]   ok.
Mon Jul 29 18:25:06 2019 - [debug]  Connecting via SSH from root@172.25.66.2(172.25.66.2:22) to root@172.25.66.3(172.25.66.3:22)..
Mon Jul 29 18:25:06 2019 - [debug]   ok.
Mon Jul 29 18:25:08 2019 - [debug] 
Mon Jul 29 18:25:06 2019 - [debug]  Connecting via SSH from root@172.25.66.3(172.25.66.3:22) to root@172.25.66.1(172.25.66.1:22)..
Mon Jul 29 18:25:07 2019 - [debug]   ok.
Mon Jul 29 18:25:07 2019 - [debug]  Connecting via SSH from root@172.25.66.3(172.25.66.3:22) to root@172.25.66.2(172.25.66.2:22)..
Mon Jul 29 18:25:07 2019 - [debug]   ok.
Mon Jul 29 18:25:08 2019 - [info] All SSH connection tests passed successfully.
  • 8. Check the MySQL replication status.
[root@server4 MHA-7]# masterha_check_repl --conf=/etc/masterha/app1.cnf 

Mon Jul 29 18:26:20 2019 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Mon Jul 29 18:26:20 2019 - [info] Reading application default configuration from /etc/masterha/app1.cnf..
Mon Jul 29 18:26:20 2019 - [info] Reading server configuration from /etc/masterha/app1.cnf..
Mon Jul 29 18:26:20 2019 - [info] MHA::MasterMonitor version 0.58.
Mon Jul 29 18:26:20 2019 - [error][/usr/share/perl5/vendor_perl/MHA/Server.pm, ln180] Got MySQL error when connecting 172.25.66.2(172.25.66.2:3306) :1045:Access denied for user 'root'@'server4' (using password: YES), but this is not a MySQL crash. Check MySQL server settings.
Mon Jul 29 18:26:20 2019 - [error][/usr/share/perl5/vendor_perl/MHA/ServerManager.pm, ln301]  at /usr/share/perl5/vendor_perl/MHA/ServerManager.pm line 297.
Mon Jul 29 18:26:20 2019 - [error][/usr/share/perl5/vendor_perl/MHA/Server.pm, ln180] Got MySQL error when connecting 172.25.66.1(172.25.66.1:3306) :1045:Access denied for user 'root'@'server4' (using password: YES), but this is not a MySQL crash. Check MySQL server settings.
Mon Jul 29 18:26:20 2019 - [error][/usr/share/perl5/vendor_perl/MHA/ServerManager.pm, ln301]  at /usr/share/perl5/vendor_perl/MHA/ServerManager.pm line 297.
Mon Jul 29 18:26:20 2019 - [error][/usr/share/perl5/vendor_perl/MHA/Server.pm, ln180] Got MySQL error when connecting 172.25.66.3(172.25.66.3:3306) :1045:Access denied for user 'root'@'server4' (using password: YES), but this is not a MySQL crash. Check MySQL server settings.
Mon Jul 29 18:26:20 2019 - [error][/usr/share/perl5/vendor_perl/MHA/ServerManager.pm, ln301]  at /usr/share/perl5/vendor_perl/MHA/ServerManager.pm line 297.
Mon Jul 29 18:26:21 2019 - [error][/usr/share/perl5/vendor_perl/MHA/ServerManager.pm, ln309] Got fatal error, stopping operations
Mon Jul 29 18:26:21 2019 - [error][/usr/share/perl5/vendor_perl/MHA/MasterMonitor.pm, ln427] Error happened on checking configurations.  at /usr/share/perl5/vendor_perl/MHA/MasterMonitor.pm line 329.
Mon Jul 29 18:26:21 2019 - [error][/usr/share/perl5/vendor_perl/MHA/MasterMonitor.pm, ln525] Error happened on monitoring servers.
Mon Jul 29 18:26:21 2019 - [info] Got exit code 1 (Not master dead).

MySQL Replication Health is NOT OK!
  • 9. The replication check on server4 fails because the MySQL root user on the master has not been granted remote login privileges. Grant root remote access on server1:
mysql> grant all on *.* to root@'%' identified by 'Wsp+123ld';
  • 10. Run the replication check on server4 again; this time it succeeds (see the note about read_only after the output).
[root@server4 MHA-7]# masterha_check_repl --conf=/etc/masterha/app1.cnf 

Mon Jul 29 18:28:32 2019 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Mon Jul 29 18:28:32 2019 - [info] Reading application default configuration from /etc/masterha/app1.cnf..
Mon Jul 29 18:28:32 2019 - [info] Reading server configuration from /etc/masterha/app1.cnf..
Mon Jul 29 18:28:32 2019 - [info] MHA::MasterMonitor version 0.58.
Mon Jul 29 18:28:33 2019 - [info] GTID failover mode = 1
Mon Jul 29 18:28:33 2019 - [info] Dead Servers:
Mon Jul 29 18:28:33 2019 - [info] Alive Servers:
Mon Jul 29 18:28:33 2019 - [info]   172.25.66.1(172.25.66.1:3306)
Mon Jul 29 18:28:33 2019 - [info]   172.25.66.2(172.25.66.2:3306)
Mon Jul 29 18:28:33 2019 - [info]   172.25.66.3(172.25.66.3:3306)
Mon Jul 29 18:28:33 2019 - [info] Alive Slaves:
Mon Jul 29 18:28:33 2019 - [info]   172.25.66.2(172.25.66.2:3306)  Version=5.7.24-log (oldest major version between slaves) log-bin:enabled
Mon Jul 29 18:28:33 2019 - [info]     GTID ON
Mon Jul 29 18:28:33 2019 - [info]     Replicating from 172.25.66.1(172.25.66.1:3306)
Mon Jul 29 18:28:33 2019 - [info]     Primary candidate for the new Master (candidate_master is set)
Mon Jul 29 18:28:33 2019 - [info]   172.25.66.3(172.25.66.3:3306)  Version=5.7.24-log (oldest major version between slaves) log-bin:enabled
Mon Jul 29 18:28:33 2019 - [info]     GTID ON
Mon Jul 29 18:28:33 2019 - [info]     Replicating from 172.25.66.1(172.25.66.1:3306)
Mon Jul 29 18:28:33 2019 - [info]     Not candidate for the new Master (no_master is set)
Mon Jul 29 18:28:33 2019 - [info] Current Alive Master: 172.25.66.1(172.25.66.1:3306)
Mon Jul 29 18:28:33 2019 - [info] Checking slave configurations..
Mon Jul 29 18:28:33 2019 - [info]  read_only=1 is not set on slave 172.25.66.2(172.25.66.2:3306).
Mon Jul 29 18:28:33 2019 - [info]  read_only=1 is not set on slave 172.25.66.3(172.25.66.3:3306).
Mon Jul 29 18:28:33 2019 - [info] Checking replication filtering settings..
Mon Jul 29 18:28:33 2019 - [info]  binlog_do_db= , binlog_ignore_db= 
Mon Jul 29 18:28:33 2019 - [info]  Replication filtering check ok.
Mon Jul 29 18:28:33 2019 - [info] GTID (with auto-pos) is supported. Skipping all SSH and Node package checking.
Mon Jul 29 18:28:33 2019 - [info] Checking SSH publickey authentication settings on the current master..
Mon Jul 29 18:28:33 2019 - [info] HealthCheck: SSH to 172.25.66.1 is reachable.
Mon Jul 29 18:28:33 2019 - [info] 
172.25.66.1(172.25.66.1:3306) (current master)
 +--172.25.66.2(172.25.66.2:3306)
 +--172.25.66.3(172.25.66.3:3306)

Mon Jul 29 18:28:33 2019 - [info] Checking replication health on 172.25.66.2..
Mon Jul 29 18:28:33 2019 - [info]  ok.
Mon Jul 29 18:28:33 2019 - [info] Checking replication health on 172.25.66.3..
Mon Jul 29 18:28:33 2019 - [info]  ok.
Mon Jul 29 18:28:33 2019 - [warning] master_ip_failover_script is not defined.
Mon Jul 29 18:28:33 2019 - [warning] shutdown_script is not defined.
Mon Jul 29 18:28:33 2019 - [info] Got exit code 0 (Not master dead).

MySQL Replication Health is OK.
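
The output above also notes that read_only=1 is not set on the slaves. MHA does not require it, but it is common practice to set it dynamically on the slave nodes so that applications cannot write to them by mistake, for example:

mysql> set global read_only=1;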

Testing MySQL High Availability (MHA)

1. Manual switchover

  • 1. Shut down server1 (the master), then manually switch the master role to server2 from server4 (the MHA node).
    Note: MySQL on server1 must be stopped before running the manual switch on server4, otherwise the switch will fail.
[root@server1 .ssh]# systemctl stop mysqld.service 
  • 2. Perform the manual switch on the MHA node (server4).
[root@server4 masterha]#  masterha_master_switch --master_state=dead --conf=/etc/masterha/app1.cnf --dead_master_ip=172.25.66.1 --dead_master_host=172.25.66.1 --dead_master_port=3306 --new_master_ip=172.25.66.2 --new_master_port=3306
  • 3. On server3, check whether server2 has become the new master.
mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 172.25.66.2
                  Master_User: repl
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: binlog.000002
          Read_Master_Log_Pos: 1215
               Relay_Log_File: server3-relay-bin.000002
                Relay_Log_Pos: 649
        Relay_Master_Log_File: binlog.000002
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
  • 4. Start mysqld on server1 again. After the restart server1 is, by default, just another slave, so it needs a master: point it at server2.
mysql> change master to master_host='172.25.66.2',master_port=3306,master_user='repl',master_password='Wsp+123ld',master_auto_position=1;
Query OK, 0 rows affected, 2 warnings (0.18 sec)

mysql> start slave;
Query OK, 0 rows affected (0.01 sec)

mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 172.25.66.2
                  Master_User: repl
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: binlog.000002
          Read_Master_Log_Pos: 1215
               Relay_Log_File: server1-relay-bin.000002
                Relay_Log_Pos: 649
        Relay_Master_Log_File: binlog.000002
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
  • 5. Manually switch the master back (online hot switchover).
First delete /etc/masterha/app1.failover.complete, otherwise the next switch will not succeed.

[root@server4 masterha]# rm -fr app1.failover.complete

Then perform the online master switchover manually:

[root@server4 masterha]# masterha_master_switch --conf=/etc/masterha/app1.cnf --master_state=alive --new_master_host=172.25.66.1 --new_master_port=3306 --orig_master_is_new_slave --running_updates_limit=10000
  • 6. On server2, check whether the switch succeeded.
mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 172.25.66.1
                  Master_User: repl
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: binlog.000004
          Read_Master_Log_Pos: 438
               Relay_Log_File: server2-relay-bin.000002
                Relay_Log_Pos: 405
        Relay_Master_Log_File: binlog.000004
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes

2. Automatic failover with masterha_manager

The core purpose of MHA in a production environment is automatic failover when the primary database fails.

  • 1. Startup options of masterha_manager:

--remove_dead_master_conf
After a master failover, the entry of the failed master is removed from the configuration file.
--ignore_last_failover
By default, if MHA detects two outages less than 8 hours apart it refuses to fail over again; this restriction exists to avoid a ping-pong effect. After each failover MHA writes an app1.failover.complete file into the working directory (/etc/masterha in this setup), and while that file exists a new failover is blocked unless the file is deleted first. The --ignore_last_failover option tells MHA to ignore that file, which is convenient here.

  • 2. Start masterha_manager silently in the background (you can check on and later stop the monitor as shown after the process listing below).
[root@server4 masterha]# nohup masterha_manager --conf=/etc/masterha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/masterha.log 2>&1 &
[1] 1907
[root@server4 masterha]# ps aux | grep masterha_manager
root      1907  1.2  2.1 299504 21952 pts/0    S    19:31   0:00 perl /usr/bin/masterha_manager --conf=/etc/masterha/app1.cnf --remove_dead_master_conf --ignore_last_failover
root      1931  0.0  0.0 112704   984 pts/0    R+   19:32   0:00 grep --color=auto masterha_manager
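
While the monitor is running you can confirm its state, and later stop it, with the Manager tools listed earlier:

[root@server4 masterha]# masterha_check_status --conf=/etc/masterha/app1.cnf
[root@server4 masterha]# masterha_stop --conf=/etc/masterha/app1.cnf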
  • 3. Test it.
Manually stop MySQL on server1 (the master):
[root@server1 .ssh]# systemctl stop mysqld.service

Check on server3 whether the failover succeeded:
mysql> show slave status\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 172.25.66.2
                  Master_User: repl
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: binlog.000002
          Read_Master_Log_Pos: 1215
               Relay_Log_File: server3-relay-bin.000002
                Relay_Log_Pos: 405
        Relay_Master_Log_File: binlog.000002
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
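
Because the monitor was started with --remove_dead_master_conf, server1's section has now been removed from /etc/masterha/app1.cnf. To bring server1 back, restart its MySQL, point it at server2 as a slave (exactly as in the manual test above), and restore its section in the configuration file, either by hand or with masterha_conf_host, roughly as in this sketch (check the tool's help for the exact options):

[root@server4 masterha]# masterha_conf_host --command=add --conf=/etc/masterha/app1.cnf --hostname=172.25.66.1 --block=server1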

Configuring MHA to Implement VIP Failover

1. MHA configuration on server4

  • 1. Download the MHA Manager source tarball from the official site and extract it; it contains the two scripts master_ip_failover and master_ip_online_change.


  • 2. Edit the two scripts; in the commonly used versions of these scripts the edit amounts to filling in the VIP for the cluster (a sketch of the underlying commands follows the two edits below):
[root@server4 ~]# vim master_ip_failover


[root@server4 ~]# vim master_ip_online_change 

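
At the shell level, what these scripts do during a switch is roughly the following sketch (the VIP and interface come from this setup; the host variables are placeholders that MHA fills in at switch time):

NEW_MASTER=172.25.66.2   # placeholder: MHA supplies the real new master address
OLD_MASTER=172.25.66.1   # placeholder: MHA supplies the real old master address
ssh root@$NEW_MASTER "ip addr add 172.25.66.100/24 dev eth0"   # bring the VIP up on the new master
ssh root@$OLD_MASTER "ip addr del 172.25.66.100/24 dev eth0"   # drop the VIP on the old master while it is still reachable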

  • 3. Make both scripts executable and move them to /usr/local/bin, since MHA needs to execute them directly.
[root@server4 ~]# chmod +x master_ip_*
[root@server4 ~]# ll
total 12
-rwxr-xr-x 1 root root 2172 Jul 29 19:46 master_ip_failover
-rwxr-xr-x 1 root root 3847 Jul 29 19:48 master_ip_online_change
  • 4. Edit the MHA configuration file so that it references the two scripts (a sketch of the added lines follows below).
vim /etc/masterha/app1.cnf

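
Assuming the two scripts were moved to /usr/local/bin as in the previous step, the added lines in the [server default] section take this form (master_ip_failover_script and master_ip_online_change_script are the MHA parameters for these hooks):

master_ip_failover_script=/usr/local/bin/master_ip_failover
master_ip_online_change_script=/usr/local/bin/master_ip_online_change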

  • 5. Add the VIP on the current master (server2).
[root@server2 .ssh]# ip addr add 172.25.66.100/24 dev eth0
[root@server2 .ssh]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:0c:57:b5 brd ff:ff:ff:ff:ff:ff
    inet 172.25.66.2/24 brd 172.25.66.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 172.25.66.100/24 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fe0c:57b5/64 scope link 
       valid_lft forever preferred_lft forever

2. Testing

  • Manual switchover:
[root@server4 bin]# masterha_master_switch  --conf=/etc/masterha/app1.cnf --master_state=alive --new_master_host=172.25.66.1 --new_master_port=3306 --orig_master_is_new_slave --running_updates_limit=10000

The VIP has floated over to server1.

[root@server1 .ssh]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:d8:5f:32 brd ff:ff:ff:ff:ff:ff
    inet 172.25.66.1/24 brd 172.25.66.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 172.25.66.100/24 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fed8:5f32/64 scope link 
       valid_lft forever preferred_lft forever
  • Automatic failover:
  • 1. Start MHA's automatic failover monitor.
[root@server4 masterha]# nohup masterha_manager --conf=/etc/masterha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/masterha.log 2>&1 &
  • 2. Stop MySQL on server1 (the master); the VIP floats over to server2 (the new master).
[root@server1 .ssh]# systemctl stop mysqld

[root@server2 .ssh]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 52:54:00:0c:57:b5 brd ff:ff:ff:ff:ff:ff
    inet 172.25.66.2/24 brd 172.25.66.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet 172.25.66.100/24 scope global secondary eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5054:ff:fe0c:57b5/64 scope link 
       valid_lft forever preferred_lft forever
