MySQL Cluster

Table of Contents

1. Deploying MySQL on Linux

1.1.1 Install dependencies

1.1.2 Download and unpack the source tarball

1.1.3 Build and install MySQL from source

1.1.4 Deploy MySQL

Test

2. MySQL master-slave replication

2.1 Configure the master

How the binlog works:

2.2 Configure the slave

2.3 Adding slave2 when the master already has data

2.4 Delayed replication

2.5 Slow query log

2.6 Parallel replication

2.7 How replication works

2.8 Limitations of the architecture

3. Semi-synchronous replication

3.1 How semi-sync works

3.2 GTID mode

3.3 Enabling semi-sync

3.4 Test

3.5 Test summary

Interview question:

4. High availability with group replication (MGR)

4.3 Setting up MySQL group replication

5. mysql-router (MySQL Router)

5.1 Deploying MySQL Router

6. High availability with MHA

6.1 MHA overview

6.2 Deploying MHA

6.2.1 Configure the MHA management environment

6.2.3 MHA failover

6.2.4 Adding a VIP to MHA

Most enterprise databases still run version 5.7.

Background:
Until now we have only worked with standalone MySQL, but a single instance is never enough in production, so we need a standby for the database and a high-availability setup.

Download the 5.7 source from the MySQL website, choosing the package that bundles the Boost C++ libraries.

1. Deploying MySQL on Linux

Do this on both node1 and node2.

Lab setup:
Clone rhel7: mysql-node1 172.25.254.10 mysql-node1.jingwen.org
Clone rhel7: mysql-node2 172.25.254.20 mysql-node2.jingwen.org

1.1.1 Install dependencies

# We build from source, so cmake is required.
[root@mysql-node1 ~]# yum install cmake gcc-c++ openssl-devel ncurses-devel.x86_64 libtirpc-devel-1.3.3-8.el9_4.x86_64.rpm rpcgen.x86_64

# Download libtirpc-devel-0.2.4-0.16.el7.x86_64.rpm separately and install it
[root@mysql-node1 ~]# yum install libtirpc-devel-0.2.4-0.16.el7.x86_64.rpm

1.1.2 Download and unpack the source tarball

[root@mysql-node1 ~]# tar zxf mysql-boost-5.7.44.tar.gz
[root@mysql-node1 ~]# cd /root/mysql-5.7.44

1.1.3 Build and install MySQL from source

# To see where these options are documented: cmake --help-full
[root@mysql-node1 mysql-5.7.44]# cmake \
-DCMAKE_INSTALL_PREFIX=/usr/local/mysql \  # installation prefix
-DMYSQL_DATADIR=/data/mysql \              # data directory
-DMYSQL_UNIX_ADDR=/data/mysql/mysql.sock \ # socket file
-DWITH_INNOBASE_STORAGE_ENGINE=1 \         # enable the InnoDB storage engine (MyISAM is the default otherwise)
-DWITH_EXTRA_CHARSETS=all \                # build all extra character sets
-DDEFAULT_CHARSET=utf8mb4 \                # default character set
-DDEFAULT_COLLATION=utf8mb4_unicode_ci \   # default collation
-DWITH_BOOST=/root/mysql-5.7.44/boost/boost_1_59_0/ # path to the bundled Boost C++ library

# The same command on one line, for copy-paste:
[root@mysql-node1 mysql-5.7.44]# cmake -DCMAKE_INSTALL_PREFIX=/usr/local/mysql -DMYSQL_DATADIR=/data/mysql -DMYSQL_UNIX_ADDR=/data/mysql/mysql.sock -DWITH_INNOBASE_STORAGE_ENGINE=1 -DWITH_EXTRA_CHARSETS=all -DDEFAULT_CHARSET=utf8mb4 -DDEFAULT_COLLATION=utf8mb4_unicode_ci -DWITH_BOOST=/root/mysql-5.7.44/boost/boost_1_59_0/

# In practice the missing dependencies surface through cmake itself: run it, read the error to see what is missing, install that package, and re-run cmake until it completes cleanly.

[root@mysql-node1 mysql-5.7.44]# make -j2   # -j2 runs two parallel jobs; set it to your CPU core count
[root@mysql-node1 mysql-5.7.44]# make install

Notes on the dependencies found this way. First missing dependency: MySQL needs C++ support, so gcc and gcc-c++ must be installed.

Run cmake again...

Second missing dependency: openssl; install its development package openssl-devel.

Run cmake again...

Third missing dependency: ncurses-devel.

If cmake fails and you want a clean re-check, delete CMakeCache.txt inside mysql-5.7.44 and run it again.

1.1.4 Deploy MySQL

# Set up the startup script
[root@mysql-node1 ~]# cd /usr/local/mysql/
[root@mysql-node1 mysql]# useradd -s /sbin/nologin -M mysql
[root@mysql-node1 mysql]# mkdir /data/mysql -p
[root@mysql-node1 mysql]# chown mysql.mysql -R /data/mysql
[root@mysql-node1 mysql]# cd support-files/
[root@mysql-node1 support-files]# vim mysql.server 
[root@mysql-node1 ~]# cat /etc/my.cnf
[mysqld]
datadir=/data/mysql           # data directory
socket=/data/mysql/mysql.sock # socket file
symbolic-links=0              # data may live only in the data directory; symlinks into it are forbidden

# Add the mysql binaries to PATH
[root@mysql-node1 support-files]# vi ~/.bash_profile 
PATH=$PATH:$HOME/bin:/usr/local/mysql/bin
[root@mysql-node1 support-files]# source ~/.bash_profile 

[root@mysql-node1 support-files]# cp mysql.server /etc/init.d/mysqld
[root@mysql-node1 support-files]# cd
# Initialize the database (creates the base mysql data),
# running as the mysql user
[root@mysql-node1 ~]# mysqld --user=mysql --initialize

# Save the generated initial password (e.g. in a passwd file) so it is not lost.
[root@mysql-node1 ~]# vim passwd
[root@mysql-node1 ~]# /etc/init.d/mysqld start
Starting MySQL.Logging to '/data/mysql/mysql-node1.jingwen.org.err'.
 SUCCESS! 
[root@mysql-node1 ~]# chkconfig --list
netconsole     	0:off	1:off	2:off	3:off	4:off	5:off	6:off
network        	0:off	1:off	2:on	3:on	4:on	5:on	6:off
rhnsd          	0:off	1:off	2:on	3:on	4:on	5:on	6:off
[root@mysql-node1 ~]# chkconfig mysqld on
[root@mysql-node1 ~]# chkconfig --list
mysqld         	0:off	1:off	2:on	3:on	4:on	5:on	6:off
netconsole     	0:off	1:off	2:off	3:off	4:off	5:off	6:off
network        	0:off	1:off	2:on	3:on	4:on	5:on	6:off
rhnsd          	0:off	1:off	2:on	3:on	4:on	5:on	6:off
# Secure the installation
[root@node1 ~]# mysql_secure_installation
Securing the MySQL server deployment.
Enter password for user root: # enter the current initial password
The existing password for the user account root has expired. Please set a new
password.
New password: # enter a new password
Re-enter new password: # repeat it
VALIDATE PASSWORD PLUGIN can be used to test passwords
and improve security. It checks the strength of password
and allows the users to set only those passwords which are
secure enough. Would you like to setup VALIDATE PASSWORD plugin?
Press y|Y for Yes, any other key for No: no # enable the password-strength plugin?
Using existing password for root.
Change the password for root ? ((Press y|Y for Yes, any other key for No) : no
# reset the root password?
... skipping.
By default, a MySQL installation has an anonymous user,
allowing anyone to log into MySQL without having to have
a user account created for them. This is intended only for
testing, and to make the installation go a bit smoother.
You should remove them before moving into a production
environment.
Remove anonymous users? (Press y|Y for Yes, any other key for No) : y
Success.
Normally, root should only be allowed to connect from
'localhost'. This ensures that someone cannot guess at
the root password from the network.
Disallow root login remotely? (Press y|Y for Yes, any other key for No) : y
Success.
By default, MySQL comes with a database named 'test' that
anyone can access. This is also intended only for testing,
and should be removed before moving into a production
environment.
Remove test database and access to it? (Press y|Y for Yes, any other key for No)
: y
- Dropping test database...
Success.
- Removing privileges on test database...
Success.
Reloading the privilege tables will ensure that all changes
made so far will take effect immediately.
Reload privilege tables now? (Press y|Y for Yes, any other key for No) : y
Success.

Test

[root@node1 ~]# mysql -uroot -plee
mysql: [Warning] Using a password on the command line interface can be insecure.
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 8
Server version: 5.7.44 Source distribution
Copyright (c) 2000, 2023, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> SHOW DATABASES;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| sys |
+--------------------+
4 rows in set (0.00 sec)

2. MySQL master-slave replication

[Setting up replication while the databases are still empty]

2.1 Configure the master

# Configure node1 and node2: node1 is the master, node2 the slave.
[root@mysql-node1 ~]# vim /etc/my.cnf
[mysqld]
datadir=/data/mysql
socket=/data/mysql/mysql.sock
symbolic-links=0
server-id=10
log-bin=mysql-bin  # every write operation is recorded in this binary log

# After the restart, the log file mysql-bin.000001 is created.
[root@mysql-node1 ~]# /etc/init.d/mysqld restart

# Log in and set up the replication user and its privileges
[root@mysql-node1 ~]# mysql -plee 
# @'%' lets the user connect from any host; replace % with a specific hostname or IP to restrict it.
mysql> CREATE USER 'repl'@'%' IDENTIFIED BY 'lee'; ## dedicated replication user; the slave authenticates with it
mysql> GRANT REPLICATION SLAVE ON *.* TO repl@'%'; ## grant it replication privileges
mysql> SHOW MASTER STATUS; ## check the master's status
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000001 |      595 |              |                  |                   |
+------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)
[root@mysql-node1 ~]# cd /data/mysql/
[root@mysql-node1 mysql]# mysqlbinlog mysql-bin.000001 -vv ## inspect the binary log

How the binlog works:

Data backup: every action is recorded in mysql-bin.000001, so copying this log to another host and replaying the actions there reproduces the data; that is the backup.

2.2 Configure the slave

# On node2
[root@mysql-node2 ~]# vim /etc/my.cnf
[mysqld]
datadir=/data/mysql
socket=/data/mysql/mysql.sock
symbolic-links=0
server-id=20
[root@mysql-node2 ~]# /etc/init.d/mysqld restart
[root@mysql-node2 ~]# mysql -plee
mysql> CHANGE MASTER TO
MASTER_HOST='172.25.254.10',MASTER_USER='repl',MASTER_PASSWORD='lee',MASTER_LOG_FILE='mysql-bin.000001',MASTER_LOG_POS=595; # 595 is the Position reported by SHOW MASTER STATUS on node1
Query OK, 0 rows affected, 2 warnings (0.01 sec)
mysql> start slave;
Query OK, 0 rows affected (0.01 sec)
mysql> SHOW SLAVE STATUS\G
************************* output omitted ******************

Test

[root@mysql-node1 ~]# mysql -plee
mysql> CREATE DATABASE lee;
Query OK, 1 row affected (0.00 sec)
mysql> CREATE TABLE lee.userlist (
-> username varchar(20) not null,
-> password varchar(50) not null
-> );
Query OK, 0 rows affected (0.02 sec)
mysql> INSERT INTO lee.userlist VALUE ('lee','123');
Query OK, 1 row affected (0.02 sec)
mysql> SELECT * FROM lee.userlist;
+----------+----------+
| username | password |
+----------+----------+
| lee | 123 |
+----------+----------+
1 row in set (0.00 sec)
Check on the slave whether the data has been synchronized:
[root@mysql-node2 ~]# mysql -plee
mysql> SELECT * FROM lee.userlist;
+----------+----------+
| username | password |
+----------+----------+
| lee | 123 |
+----------+----------+
1 row in set (0.00 sec)
# Data added on node2 cannot be seen on node1 (replication is one-way)

2.3 Adding slave2 when the master already has data

Lab setup:
Clone rhel7: 172.25.254.30 mysql-node3.jingwen.org

[root@mysql-node1 ~]# rsync -al /usr/local/mysql root@172.25.254.30:/usr/local
# Complete the base configuration
[root@mysql-node3 ~]# vim /etc/my.cnf
[mysqld]
datadir=/data/mysql
socket=/data/mysql/mysql.sock
symbolic-links=0
server-id=3
[root@mysql-node3 ~]# /etc/init.d/mysqld restart
# Back up the data from the master node
[root@mysql-node1 ~]# mysqldump -uroot -plee lee > lee.sql

In production, lock the tables during the backup so the data stays consistent: run mysql> FLUSH TABLES WITH READ LOCK; before the dump and mysql> UNLOCK TABLES; afterwards. Files produced by mysqldump issue DROP TABLE before restoring each table; delete that statement when you need to merge into existing data.

--
-- Table structure for table `userlist`
--
DROP TABLE IF EXISTS `userlist`; # delete this statement when merging data
/*!40101 SET @saved_cs_client = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */;
# Use lee.sql from the master to bring slave2 level with the master
[root@mysql-node1 ~]# scp lee.sql root@172.25.254.30:/mnt/
[root@mysql-node3 ~]# cd /usr/local/mysql/
[root@mysql-node3 mysql]# cd /mnt/
[root@mysql-node3 mnt]# mysql -uroot -plee -e "CREATE DATABASE lee;"
[root@mysql-node3 mnt]# mysql -uroot -plee lee < lee.sql
[root@mysql-node3 mnt]# mysql -uroot -plee -e "select * from lee.userlist;"
mysql: [Warning] Using a password on the command line interface can be insecure.
+----------+----------+
| username | password |
+----------+----------+
| lee      | 123      |
+----------+----------+
# Configure the slave role on slave2 [172.25.254.30]
# Query the current log position on the master
[root@mysql-node1 ~]# mysql -uroot -plee -e "show master status;"
mysql: [Warning] Using a password on the command line interface can be insecure.
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.000001 |     1238 |              |                  |                   |
+------------------+----------+--------------+------------------+-------------------+
[root@mysql-node3 ~]# mysql -uroot -plee
mysql> CHANGE MASTER TO MASTER_HOST='172.25.254.10', MASTER_USER='repl',
MASTER_PASSWORD='lee', MASTER_LOG_FILE='mysql-bin.000001', MASTER_LOG_POS=1238;
mysql> start slave;
mysql> SHOW SLAVE STATUS\G
******************* output omitted ***************************
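The File/Position pair from SHOW MASTER STATUS is exactly what the CHANGE MASTER TO statement above needs. As a small sketch, the pair can be extracted in a script; this assumes the tab-separated, header-less output of `mysql -Nse "SHOW MASTER STATUS"`, and uses a hard-coded sample line instead of querying a live server:

```shell
# What `mysql -Nse "SHOW MASTER STATUS"` prints: one tab-separated line with
# File, Position, Binlog_Do_DB, Binlog_Ignore_DB, Executed_Gtid_Set.
# Hard-coded here so the sketch runs without a server.
status_line=$(printf 'mysql-bin.000001\t1238\t\t\t')

master_log_file=$(printf '%s\n' "$status_line" | awk -F'\t' '{print $1}')
master_log_pos=$(printf '%s\n' "$status_line" | awk -F'\t' '{print $2}')

# The extracted pair plugs straight into the replication setup statement:
echo "CHANGE MASTER TO MASTER_LOG_FILE='$master_log_file', MASTER_LOG_POS=$master_log_pos;"
```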

Test

[root@mysql-node1 ~]# mysql -uroot -plee -e "INSERT INTO lee.userlist VALUES('user3','123');"
[root@mysql-node2 ~]# mysql -uroot -plee -e "select * from lee.userlist;"
mysql: [Warning] Using a password on the command line interface can be insecure.
+----------+----------+
| username | password |
+----------+----------+
| lee      | 123      |
| user3    | 123      |
+----------+----------+
[root@mysql-node3 ~]# mysql -uroot -plee -e 'select * from lee.userlist;'
mysql: [Warning] Using a password on the command line interface can be insecure.
+----------+----------+
| username | password |
+----------+----------+
| lee      | 123      |
| user3    | 123      |
+----------+----------+

2.4 Delayed replication

Delayed replication throttles only the SQL thread; the I/O thread is unaffected. It is not that the I/O thread copies later: the I/O thread works normally and the log is already saved on the slave; the delay only controls how long the SQL thread waits before replaying it.

What it solves: delete user3 on node1 and immediately query node3; for up to 60s the row still exists there. So if you delete the wrong data on node1, you have a 60s window to recover it from node3.
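The rescue window is plain arithmetic: an event committed on the master at time T is replayed on the delayed slave at T+MASTER_DELAY, and that difference is the time you have to react. A toy calculation with a made-up commit timestamp:

```shell
master_delay=60                            # seconds, as set via CHANGE MASTER TO
commit_ts=1722272647                       # made-up epoch time of the bad DELETE
replay_ts=$(( commit_ts + master_delay ))  # when the slave SQL thread applies it
rescue_window=$(( replay_ts - commit_ts )) # time left to STOP SLAVE SQL_THREAD
echo "seconds available to stop the SQL thread: $rescue_window"
```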

# On the slave [172.25.254.20]
mysql> STOP SLAVE SQL_THREAD;
mysql> CHANGE MASTER TO MASTER_DELAY=60;
mysql> START SLAVE SQL_THREAD;
mysql> SHOW SLAVE STATUS\G
Master_Server_Id: 1
Master_UUID: db2d8c92-4dc2-11ef-b6b0-000c299355ea
Master_Info_File: /data/mysql/master.info
SQL_Delay: 60 ## the configured delay
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State: Slave has read all relay log; waiting for more
updates
Master_Retry_Count: 86400

Test: data written on the master can only be queried on the slave after the delay has elapsed.

# Delete a user on node1
mysql> delete from lee.userlist where username='user3';
Query OK, 1 row affected (0.00 sec)

# 60s later the user can no longer be queried on node3 either
mysql> select * from lee.userlist;
+----------+----------+
| username | password |
+----------+----------+
| lee      | 123      |
+----------+----------+
1 row in set (0.00 sec)

2.5 Slow query log

A slow query is, as the name says, a query that executes slowly: any SQL statement whose execution time exceeds the long_query_time threshold (10s by default) counts as slow, and that statement is a candidate for optimization.

Slow queries are recorded in the slow query log, which is disabled by default. Enable it when you need to optimize SQL statements; it makes it easy to see which ones need work.

mysql> SHOW variables like "slow%";
+---------------------+----------------------------------+
| Variable_name | Value |
+---------------------+----------------------------------+
| slow_launch_time | 2 |
| slow_query_log | OFF |
| slow_query_log_file | /data/mysql/mysql-node1-slow.log |
+---------------------+----------------------------------+
3 rows in set (0.00 sec)

Enable the slow query log

mysql> SET GLOBAL slow_query_log=ON;
Query OK, 0 rows affected (0.00 sec)
mysql> SET long_query_time=4;
Query OK, 0 rows affected (0.00 sec)
mysql> SHOW VARIABLES like "long%";
+-----------------+----------+
| Variable_name | Value |
+-----------------+----------+
| long_query_time | 4.000000 |
+-----------------+----------+
1 row in set (0.00 sec)
mysql> SHOW VARIABLES like "slow%";
+---------------------+----------------------------------+
| Variable_name | Value |
+---------------------+----------------------------------+
| slow_launch_time | 2 |
| slow_query_log | ON | ## the slow query log is now on
| slow_query_log_file | /data/mysql/mysql-node1-slow.log |
+---------------------+----------------------------------+
3 rows in set (0.01 sec)
[root@mysql-node1 ~]# cat /data/mysql/mysql-node1-slow.log # the slow query log
/usr/local/mysql/bin/mysqld, Version: 5.7.44-log (Source distribution). started
with:
Tcp port: 3306 Unix socket: /data/mysql/mysql.sock
Time Id Command Argument

Test a slow query

mysql> select sleep (10);
[root@mysql-node1 ~]# cat /data/mysql/mysql-node1-slow.log
/usr/local/mysql/bin/mysqld, Version: 5.7.44-log (Source distribution). started
with:
Tcp port: 3306 Unix socket: /data/mysql/mysql.sock
Time Id Command Argument
# Time: 2024-07-29T17:04:07.612704Z
# User@Host: root[root] @ localhost [] Id: 8
# Query_time: 10.000773 Lock_time: 0.000000 Rows_sent: 1 Rows_examined: 0
SET timestamp=1722272647;
select sleep (10);
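Once entries like the one above accumulate, the `# Query_time:` header lines can be filtered to list the statements above the threshold. A sketch that parses a hard-coded excerpt in the format shown above, rather than reading the real /data/mysql/mysql-node1-slow.log:

```shell
# Toy excerpt in the slow-log format shown above.
slowlog=$(cat <<'EOF'
# Time: 2024-07-29T17:04:07.612704Z
# User@Host: root[root] @ localhost []  Id: 8
# Query_time: 10.000773  Lock_time: 0.000000 Rows_sent: 1  Rows_examined: 0
SET timestamp=1722272647;
select sleep (10);
EOF
)

# Count entries whose Query_time exceeds long_query_time (4s, as set above).
slow_count=$(printf '%s\n' "$slowlog" |
  awk -v limit=4 '/^# Query_time:/ && $3 > limit {n++} END {print n+0}')
echo "queries over the 4s limit: $slow_count"
```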

2.6 Parallel replication

Check the thread information on the slave.

By default the slave replays the log with a single SQL thread, while the master serves many concurrent readers and writers; single-threaded replay therefore causes serious replication lag. Enabling MySQL's multi-threaded replay solves this.

# Configure on the slaves
[root@mysql-node2 ~]# vim /etc/my.cnf
[mysqld]
datadir=/data/mysql
socket=/data/mysql/mysql.sock
server-id=2
gtid_mode=ON
enforce-gtid-consistency=ON
slave-parallel-type=LOGICAL_CLOCK # based on group commit
slave-parallel-workers=16 # number of worker threads
master_info_repository=TABLE # record master info in a table instead of the default /data/mysql/master.info
relay_log_info_repository=TABLE # record relay-log replay info in a table instead of the default /data/mysql/relay-log.info
relay_log_recovery=ON # enable relay-log recovery
[root@mysql-node2 ~]# /etc/init.d/mysqld restart
Starting MySQL. SUCCESS!

[root@mysql-node2 ~]# mysql -uroot -plee 
mysql> SHOW PROCESSLIST;

The SQL thread now acts as a coordinator, and the 16 workers execute the requests the coordinator hands them.

MySQL group commit is a performance optimization that writes the log records of several transactions together in a single log-sync operation. This reduces the number of disk I/Os and improves overall database performance.
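The saving is easy to put numbers on: without group commit each transaction costs its own log flush; with it, one flush covers a whole batch. A back-of-the-envelope with made-up figures:

```shell
txns=1000      # transactions to commit (made-up workload)
group_size=8   # transactions whose log records share one sync

# One flush per transaction without group commit; ceil(txns/group_size) with it.
flushes_without=$txns
flushes_with=$(( (txns + group_size - 1) / group_size ))
echo "log syncs: $flushes_without without group commit, $flushes_with with it"
```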

2.7 How replication works

The three threads

Master-slave synchronization is based on the binlog. Replication involves three threads: one on the master and two on the slave.
The binlog dump thread is a master thread. When a slave thread connects, the master can send it the binary log; while reading an event, the master locks the binlog and releases the lock once the read completes.
The slave I/O thread connects to the master and requests binlog updates. It reads the updates the master's binlog dump thread sends and copies them into the local relay log.
The slave SQL thread reads the slave's relay log and executes the events in it, keeping the slave's data in sync with the master.

Replication in three steps

Step 1: the master records write operations in its binary log (binlog).
Step 2: the slave copies the master's binlog events into its relay log.
Step 3: the slave replays the relay-log events, applying the changes to its own database. MySQL replication is asynchronous and serialized, and after a restart it resumes from the recorded position.

Step by step

1. The slave is configured with the master's IP, user, log file, and log position, and uses these to authenticate to the master and obtain its information.
2. Once the binlog is enabled, the master starts a binlog dump thread.
3. The master's binlog dump thread sends the binary-log updates to the slave.
4. The slave runs two threads, an I/O thread and a SQL thread:
the I/O thread receives the master's binary log, opens the local relay log, and saves it to local disk; the SQL thread reads the local relay log and replays it.
5. When do we need multiple slaves?
When reads far outnumber writes, use a one-master-many-slaves architecture,
put a load-balancing layer in front of the databases, and pair it with a high-availability mechanism.

2.8 Limitations of the architecture

Master-slave replication is asynchronous:

After an update the master simply sends the binary log to the slaves; it never verifies that the slaves actually stored the data.
The master just saves the binary log to its own disk.
If the network between master and slave fails, or the master crashes outright, the binary log may never reach the slave at all.
If a slave then takes over for the failed master, the missing data is lost.

With these problems the design cannot achieve strong consistency or zero data loss.

Semi-synchronous replication was introduced to solve this.

3. Semi-synchronous replication

3.1 How semi-sync works

1. After a client write completes, the master's dump thread pushes the log to the slave.
2. The slave's I/O thread receives it and saves it to the relay log.
3. Once it is saved, the slave returns an ACK to the master.
4. Until the ACK arrives the master does not commit; it keeps waiting, and commits to the storage engine only after receiving the ACK.
5. MySQL 5.6 used the after_commit mode, which commits first and only waits for the ACK before returning OK to the client.
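The wait in step 4 is not unbounded: rpl_semi_sync_master_timeout (10000 ms by default) caps it, and when it expires the master falls back to asynchronous replication. The coreutils `timeout` utility models the same wait-or-give-up pattern, with a 1-second budget standing in for the 10s default:

```shell
rc=0
timeout 1 sleep 3 || rc=$?   # the "ACK" needs 3s but we only wait 1s
# timeout exits with 124 when the deadline passes, i.e. no ACK in time
if [ "$rc" -eq 124 ]; then
  mode=async                 # give up waiting and fall back to async
else
  mode=semi-sync
fi
echo "replication mode after the wait: $mode"
```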

3.2 GTID mode

Problems to consider before GTID is enabled:

Writes on the master come from many concurrent users, while the slave replays the log with a single thread, so the slave always lags behind the master.
The lag differs from slave to slave. When the master dies, a slave takes over, and the one whose log is closest to the master's is usually chosen as the new master.
The slaves that did not take over keep their slave role and must repoint to the new master, becoming its slaves.
With the position-based configuration used so far they would need the new master's pos id, but there is no reliable way to know how far each slave is behind the new master.

With GTID enabled: when the master fails, slave2 (whose data is closest to the master's) becomes the new master; slave1 points at the new master without checking its pos id, it simply continues reading from its own gtid_next.

[root@mysql-node1 ~]# mysqlbinlog -vv /data/mysql/mysql-bin.000003
@@ output omitted @@
SET @@SESSION.GTID_NEXT= 'AUTOMATIC' /* added by mysqlbinlog */ /*!*/;
DELIMITER ;
# End of log file
/*!50003 SET COMPLETION_TYPE=@OLD_COMPLETION_TYPE*/;
/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=0*/;
[root@mysql-node2 ~]# mysql -plee
mysql> select * from mysql.gtid_executed;
+--------------------------------------+----------------+--------------+
| source_uuid | interval_start | interval_end |
+--------------------------------------+----------------+--------------+
| 768c6b91-4c01-11ef-a514-000c299355ea | 1 | 1 |
+--------------------------------------+----------------+--------------+
1 row in set (0.00 sec)
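The two interval columns above describe a range of executed transactions per source UUID. A toy illustration of how such a GTID set string decomposes (the UUID is copied from the row above; the interval is widened to 1-5 just for illustration):

```shell
# "uuid:1-5" means transactions 1 through 5 from that source were executed.
gtid='768c6b91-4c01-11ef-a514-000c299355ea:1-5'
interval=${gtid##*:}             # "1-5"
interval_start=${interval%-*}    # "1"
interval_end=${interval#*-}      # "5"
executed=$(( interval_end - interval_start + 1 ))
echo "transactions executed from this source: $executed"
```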

Set up GTID

# Enable GTID mode on the master and on the slaves
[root@mysql-node1 ~]# vim /etc/my.cnf
[mysqld]
datadir=/data/mysql
socket=/data/mysql/mysql.sock
server-id=1
log-bin=mysql-bin
gtid_mode=ON
enforce-gtid-consistency=ON
symbolic-links=0
[root@mysql-node1 ~]# /etc/init.d/mysqld restart
[root@mysql-node2 ~]# vim /etc/my.cnf
[mysqld]
datadir=/data/mysql
socket=/data/mysql/mysql.sock
server-id=2
log-bin=mysql-bin
gtid_mode=ON
enforce-gtid-consistency=ON
symbolic-links=0
[root@mysql-node2 ~]# /etc/init.d/mysqld restart
[root@mysql-node3 ~]# vim /etc/my.cnf
[mysqld]
datadir=/data/mysql
socket=/data/mysql/mysql.sock
server-id=3
log-bin=mysql-bin
gtid_mode=ON
enforce-gtid-consistency=ON
symbolic-links=0
[root@mysql-node3 ~]# /etc/init.d/mysqld restart
# Stop the slaves
[root@mysql-node2 ~]# mysql -p
mysql> stop slave;
Query OK, 0 rows affected (0.00 sec)
[root@mysql-node3 ~]# mysql -p
mysql> stop slave;
Query OK, 0 rows affected (0.01 sec)
# Enable GTID-based replication on the slaves
mysql> CHANGE MASTER TO MASTER_HOST='172.25.254.10', MASTER_USER='repl',
MASTER_PASSWORD='lee', MASTER_AUTO_POSITION=1;
mysql> start slave;
mysql> show slave status\G
**********************************************
Slave_IO_State: Waiting for master to send event
Master_Host: 172.25.254.10
Master_User: repl
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000003
Read_Master_Log_Pos: 154
Relay_Log_File: mysql-node2-relay-bin.000002
Relay_Log_Pos: 367
Relay_Master_Log_File: mysql-bin.000003
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 154
Relay_Log_Space: 580
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 1
Master_UUID: 768c6b91-4c01-11ef-a514-000c299355ea
Master_Info_File: /data/mysql/master.info
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State: Slave has read all relay log; waiting for more
updates
Master_Retry_Count: 86400
Master_Bind:
Last_IO_Error_Timestamp:
Last_SQL_Error_Timestamp:
Master_SSL_Crl:
Master_SSL_Crlpath:
Retrieved_Gtid_Set:
Executed_Gtid_Set:
Auto_Position: 1 # the feature is enabled
Replicate_Rewrite_DB:
Channel_Name:
Master_TLS_Version:
1 row in set (0.00 sec)

3.3 Enabling semi-sync

Enable semi-sync on the master:

[root@mysql-node1 ~]# vim /etc/my.cnf
[mysqld]
datadir=/data/mysql
socket=/data/mysql/mysql.sock
server-id=1
log-bin=mysql-bin
gtid_mode=ON
enforce-gtid-consistency=ON
rpl_semi_sync_master_enabled=1 # enable semi-synchronous replication
symbolic-links=0
[root@mysql-node1 ~]# mysql -plee
# Install the semi-sync plugin
mysql> INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
# Check the plugin status
mysql> SELECT PLUGIN_NAME, PLUGIN_STATUS
-> FROM INFORMATION_SCHEMA.PLUGINS
-> WHERE PLUGIN_NAME LIKE '%semi%';
+----------------------+---------------+
| PLUGIN_NAME | PLUGIN_STATUS |
+----------------------+---------------+
| rpl_semi_sync_master | ACTIVE |
+----------------------+---------------+
1 row in set (0.01 sec)
# Turn on the semi-sync feature
mysql> SET GLOBAL rpl_semi_sync_master_enabled = 1;
# Check the semi-sync status
mysql> SHOW VARIABLES LIKE 'rpl_semi_sync%';
+-------------------------------------------+------------+
| Variable_name | Value |
+-------------------------------------------+------------+
| rpl_semi_sync_master_enabled | ON |
| rpl_semi_sync_master_timeout | 10000 |
| rpl_semi_sync_master_trace_level | 32 |
| rpl_semi_sync_master_wait_for_slave_count | 1 |
| rpl_semi_sync_master_wait_no_slave | ON |
| rpl_semi_sync_master_wait_point | AFTER_SYNC |
+-------------------------------------------+------------+
mysql> SHOW STATUS LIKE 'Rpl_semi_sync%';
+--------------------------------------------+-------+
| Variable_name | Value |
+--------------------------------------------+-------+
| Rpl_semi_sync_master_clients | 0 |
| Rpl_semi_sync_master_net_avg_wait_time | 0 |
| Rpl_semi_sync_master_net_wait_time | 0 |
| Rpl_semi_sync_master_net_waits | 0 |
| Rpl_semi_sync_master_no_times | 0 |
| Rpl_semi_sync_master_no_tx | 0 |
| Rpl_semi_sync_master_status | ON |
| Rpl_semi_sync_master_timefunc_failures | 0 |
| Rpl_semi_sync_master_tx_avg_wait_time | 0 |
| Rpl_semi_sync_master_tx_wait_time | 0 |
| Rpl_semi_sync_master_tx_waits | 0 |
| Rpl_semi_sync_master_wait_pos_backtraverse | 0 |
| Rpl_semi_sync_master_wait_sessions | 0 |
| Rpl_semi_sync_master_yes_tx | 0 |
+--------------------------------------------+-------+
14 rows in set (0.00 sec)
mysql> show plugins;

Enable semi-sync on the slave:

[root@mysql-node2 ~]# vim /etc/my.cnf
[mysqld]
datadir=/data/mysql
socket=/data/mysql/mysql.sock
server-id=2
log-bin=mysql-bin
gtid_mode=ON
enforce-gtid-consistency=ON
rpl_semi_sync_slave_enabled=1 # enable semi-sync on the slave side
symbolic-links=0
[root@mysql-node2 ~]# mysql -plee
mysql> INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
Query OK, 0 rows affected (0.01 sec)
mysql> SET GLOBAL rpl_semi_sync_slave_enabled =1;
Query OK, 0 rows affected (0.00 sec)
mysql> STOP SLAVE IO_THREAD; # restart the I/O thread so semi-sync takes effect
Query OK, 0 rows affected (0.00 sec)
mysql> START SLAVE IO_THREAD;
Query OK, 0 rows affected (0.00 sec)
Query OK, 0 rows affected (0.00 sec)
mysql> SHOW VARIABLES LIKE 'rpl_semi_sync%';
+---------------------------------+-------+
| Variable_name | Value |
+---------------------------------+-------+
| rpl_semi_sync_slave_enabled | ON |
| rpl_semi_sync_slave_trace_level | 32 |
+---------------------------------+-------+
2 rows in set (0.01 sec)
mysql> SHOW STATUS LIKE 'Rpl_semi_sync%';
+----------------------------+-------+
| Variable_name | Value |
+----------------------------+-------+
| Rpl_semi_sync_slave_status | ON |
+----------------------------+-------+
1 row in set (0.00 sec)

3.4 Test

Write data on the master:

[root@mysql-node1 ~]# mysql -plee
mysql> insert into lee.userlist values ('user4','123');
Query OK, 1 row affected (0.01 sec)
mysql> SHOW STATUS LIKE 'Rpl_semi_sync%';
+--------------------------------------------+-------+
| Variable_name | Value |
+--------------------------------------------+-------+
| Rpl_semi_sync_master_clients | 2 |
| Rpl_semi_sync_master_net_avg_wait_time | 0 |
| Rpl_semi_sync_master_net_wait_time | 0 |
| Rpl_semi_sync_master_net_waits | 2 |
| Rpl_semi_sync_master_no_times | 0 |
| Rpl_semi_sync_master_no_tx | 0 | # 0 transactions unacknowledged
| Rpl_semi_sync_master_status | ON |
| Rpl_semi_sync_master_timefunc_failures | 0 |
| Rpl_semi_sync_master_tx_avg_wait_time | 981 |
| Rpl_semi_sync_master_tx_wait_time | 981 |
| Rpl_semi_sync_master_tx_waits | 1 |
| Rpl_semi_sync_master_wait_pos_backtraverse | 0 |
| Rpl_semi_sync_master_wait_sessions | 0 |
| Rpl_semi_sync_master_yes_tx | 1 | # 1 transaction acknowledged
+--------------------------------------------+-------+
14 rows in set (0.00 sec)

Query node2 to confirm the row arrived.

Simulate a failure:

# On the slaves
[root@mysql-node2 ~]# mysql -plee
mysql> STOP SLAVE IO_THREAD;
Query OK, 0 rows affected (0.00 sec)
[root@mysql-node3 ~]# mysql -plee
mysql> STOP SLAVE IO_THREAD;
Query OK, 0 rows affected (0.00 sec)
# Insert data on the master
mysql> insert into lee.userlist values ('user5','555');
Query OK, 1 row affected (10.00 sec) # blocked for the 10-second timeout
mysql> SHOW STATUS LIKE 'Rpl_semi%';
+--------------------------------------------+-------+
| Variable_name | Value |
+--------------------------------------------+-------+
| Rpl_semi_sync_master_clients | 0 |
| Rpl_semi_sync_master_net_avg_wait_time | 0 |
| Rpl_semi_sync_master_net_wait_time | 0 |
| Rpl_semi_sync_master_net_waits | 2 |
| Rpl_semi_sync_master_no_times | 1 |
| Rpl_semi_sync_master_no_tx | 1 | # one transaction went unacknowledged
| Rpl_semi_sync_master_status | OFF | # automatically fell back to async mode;
| Rpl_semi_sync_master_timefunc_failures | 0 | # it recovers once a slave returns
| Rpl_semi_sync_master_tx_avg_wait_time | 981 |
| Rpl_semi_sync_master_tx_wait_time | 981 |
| Rpl_semi_sync_master_tx_waits | 1 |
| Rpl_semi_sync_master_wait_pos_backtraverse | 0 |
| Rpl_semi_sync_master_wait_sessions | 0 |
| Rpl_semi_sync_master_yes_tx | 1 |
+--------------------------------------------+-------+
14 rows in set (0.00 sec)

3.5 Test summary

# With semi-sync stopped on node2 and node3, node1 automatically switches from semi-sync to async mode
mysql> STOP SLAVE IO_THREAD;
Query OK, 0 rows affected (0.00 sec)
# Now re-enable semi-sync on node2
mysql> START SLAVE IO_THREAD;
Query OK, 0 rows affected (0.01 sec)
# node1 then automatically switches back to semi-sync mode

Semi-sync mode keeps your data consistent when the network misbehaves.

Interview question:

With semi-sync enabled, can data still be written after all the slave hosts have failed?

Semi-sync itself cannot accept the write, because no slave can acknowledge it.
Writes nevertheless succeed, because the master automatically falls back to asynchronous mode.
Once the slave problem is fixed and the slaves re-establish communication with the master, semi-sync resumes and writes continue under it.

4. High availability with group replication (MGR)

MySQL Group Replication (MGR) is an official high-availability and high-scalability solution released by MySQL in December 2016.
Group replication first appeared in MySQL 5.7.17 and provides a highly available, scalable, and reliable MySQL cluster service.
MGR offers a single-primary mode and a multi-primary mode; traditional MySQL replication only solves data synchronization.
MGR coordinates the servers belonging to the same group automatically: before a transaction commits, the group members must agree on its position in the global transaction sequence.
Each server commits or rolls back on its own, but all servers must reach the same decision.
If a network partition leaves the members unable to satisfy the configured quorum, the system stops making progress until the issue is resolved: a built-in split-brain protection mechanism.
MGR is supported by a Group Communication System (GCS) protocol,
which provides failure detection, group-membership services, and safe, ordered message delivery.

log_slave_updates=ON [enables relaying: events applied by the SQL thread are written to the server's own binlog]

How it chains: previously the master shipped its log to both slave hosts directly; now it only needs to ship to B, and B ships it on to C.

4.3 Setting up MySQL group replication

To avoid problems, regenerate the database data on all nodes.

Edit the main configuration file:

# On mysql-node10/20/30
[root@mysql-node10 ~]# /etc/init.d/mysqld stop 
[root@mysql-node10 ~]# ps aux | grep mysqld
[root@mysql-node10 ~]# kill id
[root@mysql-node10 ~]# rm -fr /data/mysql/*
[root@mysql-node10 ~]# vim /etc/my.cnf
[mysqld]
datadir=/data/mysql
socket=/data/mysql/mysql.sock
symbolic-links=0
server-id=10    # unique server ID
disabled_storage_engines="MyISAM,BLACKHOLE,FEDERATED,ARCHIVE,MEMORY"   # disable the listed storage engines
gtid_mode=ON    # enable global transaction identifiers
enforce_gtid_consistency=ON # enforce GTID consistency
master_info_repository=TABLE # record replication metadata in a table instead of a file in the data directory
relay_log_info_repository=TABLE
binlog_checksum=NONE     # disable binary-log checksums
log_slave_updates=ON     # enable relaying:
# when the slave SQL thread applies events, they are also written to its own binlog
log_bin=binlog           # rename the binlog
binlog_format=ROW        # row-based log format
plugin_load_add='group_replication.so' # load the group replication plugin
transaction_write_set_extraction=XXHASH64 # encode each write set as a hash
group_replication_group_name="aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa" 
# name of the group to join
# (or to create);
# must be in UUID format
group_replication_start_on_boot=off # do not start group replication automatically when the server boots
group_replication_local_address="172.25.254.10:33061" # port on which the plugin accepts connections from other members
group_replication_group_seeds="172.25.254.10:33061,172.25.254.20:33061,172.25.254.30:33061"                                  # seed list of group members
group_replication_ip_whitelist="172.25.254.0/24,127.0.0.1/8"     # host whitelist

# Not started with the system; enabled manually, and only on the bootstrap member.
# Needed in two situations: 1) when bootstrapping a new group, 2) when restarting the whole group after a shutdown
group_replication_bootstrap_group=off                   

group_replication_single_primary_mode=OFF                       # use multi-primary mode
group_replication_enforce_update_everywhere_checks=ON # check updates coming from any member
group_replication_allow_local_disjoint_gtids_join=1 # discard local GTIDs in favor of the group's
[root@mysql-node10 ~]# mysqld --user=mysql --initialize
[root@mysql-node10 ~]# /etc/init.d/mysqld start
[root@mysql-node10 ~]# mysql -uroot -p'<the generated initial password>' -e "alter user root@localhost identified by 'lee';"
# Configure SQL

[root@mysql-node10 ~]# mysql -plee
# Turn binary logging off for this session so the following statements are not replicated.
mysql> SET SQL_LOG_BIN=0;
Query OK, 0 rows affected (0.00 sec)
mysql> CREATE USER rpl_user@'%' IDENTIFIED BY 'lee';
Query OK, 0 rows affected (0.00 sec)
mysql> GRANT REPLICATION SLAVE ON *.* TO rpl_user@'%';
Query OK, 0 rows affected (0.00 sec)
mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)
mysql> SET SQL_LOG_BIN=1;
# Turn binary logging back on; group replication can now proceed.
Query OK, 0 rows affected (0.00 sec)
mysql> CHANGE MASTER TO MASTER_USER='rpl_user', MASTER_PASSWORD='lee' FOR CHANNEL 'group_replication_recovery';
Query OK, 0 rows affected, 2 warnings (0.00 sec)
mysql> SET GLOBAL group_replication_bootstrap_group=ON;  # designate the bootstrap member; run this only on the first host
mysql> START GROUP_REPLICATION;
Query OK, 0 rows affected, 1 warning (2.19 sec)
mysql> SET GLOBAL group_replication_bootstrap_group=OFF;
mysql> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST             | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------------------+-------------+--------------+
| group_replication_applier | 2b49c5de-61e5-11ef-97ce-000c2911194c | mysql-node1.jingwen.org |        3306 | ONLINE       |
+---------------------------+--------------------------------------+-------------------------+-------------+--------------+
1 row in set (0.01 sec)

# Add name resolution on every host
[root@mysql-node1 ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.25.254.10	mysql-node1.jingwen.org
172.25.254.20	mysql-node2.jingwen.org
172.25.254.30	mysql-node3.jingwen.org
Note:
1. If initialization fails, delete the temporary files and initialize again:
rm -rf /data/mysql/*

Then copy the configuration file to mysql-node20 and mysql-node30:

[root@mysql-node10 ~]# scp /etc/my.cnf root@172.25.254.20:/etc/my.cnf
[root@mysql-node10 ~]# scp /etc/my.cnf root@172.25.254.30:/etc/my.cnf
# Pay attention to these two lines:
server-id=2 # write 3 on node30
group_replication_local_address="172.25.254.20:33061" # use .30 on node30
# Adjust the configuration on mysql-node20 and mysql-node30
[root@mysql-node20 & 30 ~]# rm -fr /data/mysql/*
[root@mysql-node20 & 30 ~]# mysqld --user=mysql --initialize
[root@mysql-node20 & 30 ~]# /etc/init.d/mysqld start
[root@mysql-node20 & 30 ~]# mysql -uroot -p'<the generated initial password>' -e "alter user root@localhost identified by 'lee';"
# Configure SQL
[root@mysql-node20 & 30 ~]# mysql -plee
mysql> SET SQL_LOG_BIN=0;
Query OK, 0 rows affected (0.00 sec)
mysql> CREATE USER rpl_user@'%' IDENTIFIED BY 'lee';
Query OK, 0 rows affected (0.00 sec)
mysql> GRANT REPLICATION SLAVE ON *.* TO rpl_user@'%';
Query OK, 0 rows affected (0.00 sec)
mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)
mysql> SET SQL_LOG_BIN=1;
Query OK, 0 rows affected (0.00 sec)
mysql> CHANGE MASTER TO MASTER_USER='rpl_user', MASTER_PASSWORD='lee' FOR CHANNEL 'group_replication_recovery';
Query OK, 0 rows affected, 2 warnings (0.00 sec)
mysql> START GROUP_REPLICATION;
Query OK, 0 rows affected, 1 warning (2.19 sec)
mysql> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST             | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------------------+-------------+--------------+
| group_replication_applier | 286740f5-61e7-11ef-92a3-000c298a4727 | mysql-node2.jingwen.org |        3306 | RECOVERING   |
| group_replication_applier | 2b49c5de-61e5-11ef-97ce-000c2911194c | mysql-node1.jingwen.org |        3306 | ONLINE       |
+---------------------------+--------------------------------------+-------------------------+-------------+--------------+
2 rows in set (0.00 sec)

#node3
mysql> SELECT * FROM performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST             | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------------------+-------------+--------------+
| group_replication_applier | 286740f5-61e7-11ef-92a3-000c298a4727 | mysql-node2.jingwen.org |        3306 | ONLINE       |
| group_replication_applier | 2b49c5de-61e5-11ef-97ce-000c2911194c | mysql-node1.jingwen.org |        3306 | ONLINE       |
| group_replication_applier | ed645a11-61e7-11ef-ab5b-000c298cae16 | mysql-node3.jingwen.org |        3306 | ONLINE       |
+---------------------------+--------------------------------------+-------------------------+-------------+--------------+
3 rows in set (0.00 sec)

Test: every node can handle both reads and writes

#On mysql-node10
[root@mysql-node10 ~]# mysql -p
mysql> CREATE DATABASE lee;
Query OK, 1 row affected (0.00 sec)
mysql> CREATE TABLE lee.userlist(
-> username VARCHAR(10) PRIMARY KEY NOT NULL,
-> password VARCHAR(50) NOT NULL
-> );
Query OK, 0 rows affected (0.01 sec)
mysql> INSERT INTO lee.userlist VALUES ('user1','111');
Query OK, 1 row affected (0.00 sec)
mysql> SELECT * FROM lee.userlist;
+----------+----------+
| username | password |
+----------+----------+
| user1    | 111      |
+----------+----------+
1 row in set (0.00 sec)
#On mysql-node20
[root@mysql-node20 ~]# mysql -p
mysql> INSERT INTO lee.userlist values ('user2','222');
Query OK, 1 row affected (0.00 sec)
mysql> select * from lee.userlist;
+----------+----------+
| username | password |
+----------+----------+
| user1    | 111      |
| user2    | 222      |
+----------+----------+
2 rows in set (0.00 sec)
#On mysql-node30
[root@mysql-node30 ~]# mysql -p
mysql> INSERT INTO lee.userlist values ('user3','333');
Query OK, 1 row affected (0.00 sec)
mysql> select * from lee.userlist;
+----------+----------+
| username | password |
+----------+----------+
| user1    | 111      |
| user2    | 222      |
| user3    | 333      |
+----------+----------+
3 rows in set (0.00 sec)

In multi-primary mode, every node can handle writes as well as queries.
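In multi-primary mode, concurrent writes to the same rows on different members are resolved by group replication's certification step: every transaction is broadcast with a write set, and the first transaction ordered by the group wins, while a concurrent transaction touching the same rows is rolled back. A minimal Python sketch of this first-committer-wins idea (the data structures are hypothetical simplifications, not MGR's real implementation):

```python
# First-committer-wins certification sketch: a transaction passes
# certification only if no already-certified concurrent transaction
# touched the same rows (overlapping write set).

def certify(tx_write_set, tx_snapshot, certified):
    """tx_write_set: set of row ids the transaction writes;
    tx_snapshot: group sequence number the transaction was based on;
    certified: list of (seqno, write_set) already ordered by the group."""
    for seqno, ws in certified:
        # A transaction is concurrent with tx if it was ordered after
        # tx took its snapshot; overlapping rows then mean conflict.
        if seqno > tx_snapshot and ws & tx_write_set:
            return False          # conflict: roll back
    return True                   # no conflict: commit

certified = [(1, {"userlist:user1"})]   # node1 inserted user1 at seq 1

# node2 inserts user2 based on snapshot 1 -> no overlap, commits
print(certify({"userlist:user2"}, 1, certified))   # True

# node3 concurrently updates user1 based on snapshot 0 -> conflict
print(certify({"userlist:user1"}, 0, certified))   # False
```

Real MGR builds the write set from hashed primary keys and runs this check on every member against the same totally ordered delivery, so all members reach the same verdict independently.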

5. mysql-router (MySQL Router)

MySQL Router is a connection-routing service for InnoDB Cluster that is transparent to applications. It provides load balancing, connection failover, and client routing: applications connect to the router, and the router applies the configured routing strategy to forward each connection to the appropriate MySQL server.
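The routing strategies this relies on can be pictured as pure functions over the destination list. This is an illustrative Python sketch of round-robin and first-available selection under that simplification, not MySQL Router's actual code:

```python
from itertools import cycle

# round-robin: hand out destinations in turn, one per new connection
def round_robin(destinations):
    backends = cycle(destinations)
    return lambda: next(backends)

# first-available: always pick the first destination that is up,
# only falling through when it is marked unreachable
def first_available(destinations, is_up):
    def pick():
        for d in destinations:
            if is_up(d):
                return d
        raise RuntimeError("no backend available")
    return pick

ro = round_robin(["172.25.254.20:3306", "172.25.254.30:3306"])
print(ro(), ro(), ro())   # 20, 30, then 20 again

up = {"172.25.254.10:3306": False, "172.25.254.20:3306": True}
rw = first_available(["172.25.254.10:3306", "172.25.254.20:3306"], up.__getitem__)
print(rw())               # 172.25.254.20:3306, because 10 is down
```

Each new client connection triggers one such pick; once established, a connection stays pinned to its backend.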

5.1 Deploying MySQL Router

#Install mysql-router
[root@mysql-node1 ~]# rpm -ivh mysql-router-community-8.4.0-1.el7.x86_64.rpm
#Do this on node1; first stop node1's local MySQL instance.
[root@mysql-node1 ~]# ps aux | grep mysqld
root      79307  0.0  0.0 113412  1616 ?        S    11:11   0:00 /bin/sh /usr/local/mysql/bin/mysqld_safe --datadir=/data/mysql --pid-file=/data/mysql/mysql-node1.jingwen.org.pid
mysql     79484  0.0  9.8 2155412 183052 ?      Sl   11:11   0:01 /usr/local/mysql/bin/mysqld --basedir=/usr/local/mysql --datadir=/data/mysql --plugin-dir=/usr/local/mysql/lib/plugin --user=mysql --log-error=mysql-node1.jingwen.org.err --pid-file=/data/mysql/mysql-node1.jingwen.org.pid --socket=/data/mysql/mysql.sock
root      80592  0.0  0.0 112808   964 pts/1    S+   11:47   0:00 grep --color=auto mysqld
[root@mysql-node1 ~]# /etc/init.d/mysqld stop
Shutting down MySQL........... SUCCESS! 
[root@mysql-node1 ~]# ps aux | grep mysqld
root      80642  0.0  0.0 112808   964 pts/1    S+   11:48   0:00 grep --color=auto mysqld

#Configure mysql-router
[root@mysql-node1 ~]# vim /etc/mysqlrouter/mysqlrouter.conf
[routing:ro]
bind_address = 0.0.0.0
bind_port = 7001
destinations = 172.25.254.20:3306,172.25.254.30:3306
routing_strategy = round-robin                #round-robin (the most commonly used strategy)
[routing:rw]
bind_address = 0.0.0.0
bind_port = 7002
destinations = 172.25.254.10:3306,172.25.254.20:3306
routing_strategy = first-available			 #use the first destination that responds
[root@mysql-router ~]# systemctl start mysqlrouter.service

Test:

#Create a test user on node2 and node3
mysql> CREATE USER lee@'%' IDENTIFIED BY 'lee';
mysql> GRANT ALL ON lee.* TO lee@'%';
#Watch the scheduling effect
[root@mysql-node10 & 20 & 30 ~]# watch -n1 lsof -i :3306
COMMAND  PID  USER  FD   TYPE DEVICE SIZE/OFF NODE NAME
mysqld   9879 mysql 22u  IPv6  56697      0t0  TCP *:mysql (LISTEN)

#Round-robin (port 7001)
[root@mysql-router ~]# mysql -ulee -plee -h 172.25.254.10 -P 7001
mysql> select @@server_id;
+-------------+
| @@server_id |
+-------------+
|          20 |
+-------------+
1 row in set (0.00 sec)
#Exit and reconnect; the query is now served by 172.25.254.30, showing the load balancing
mysql> exit
Bye
mysql> select @@server_id;
+-------------+
| @@server_id |
+-------------+
|          30 |
+-------------+
1 row in set (0.00 sec)

#first-available (connect to port 7002): MySQL on 172.25.254.10 is stopped, so 172.25.254.20 responds first.
mysql> select @@server_id;
+-------------+
| @@server_id |
+-------------+
|          20 |
+-------------+
1 row in set (0.00 sec)

MySQL Router cannot itself tell reads from writes; it only splits connections across the configured ports.
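Because the router only forwards connections, the read/write split has to be made by the application: writes go to the rw port and reads to the ro port. A hypothetical Python helper illustrating that client-side decision (ports 7001/7002 match the configuration above; the statement classification is deliberately simplistic):

```python
RW_PORT = 7002   # [routing:rw] first-available
RO_PORT = 7001   # [routing:ro] round-robin

def port_for(sql: str) -> int:
    """Very rough classification: route SELECT/SHOW statements to the
    read port, everything else (DML/DDL) to the write port."""
    first = sql.lstrip().split(None, 1)[0].upper()
    return RO_PORT if first in ("SELECT", "SHOW") else RW_PORT

print(port_for("SELECT * FROM lee.userlist"))       # 7001
print(port_for("INSERT INTO lee.userlist ..."))     # 7002
```

A real application would keep two connection pools, one per port, rather than classifying each statement on the fly.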

6. MySQL High Availability with MHA

6.1 MHA Overview

Why use MHA? To eliminate the single point of failure on the master.

What is MHA? MHA (Master High Availability) is a mature failover and master-slave replication suite for MySQL high-availability environments, built to solve MySQL's single point of failure. During a failover MHA completes the switch automatically within 0-30 seconds, and it preserves data consistency to the greatest extent possible, achieving true high availability.

MHA components: MHA consists of two parts, the MHA Manager (management node) and the MHA Node (database node). The Manager can be deployed on a dedicated machine to manage several master-slave clusters, or run on one of the slaves. It periodically probes the master node in the cluster; when the master fails, it automatically promotes the slave with the most recent data to be the new master and repoints all the other slaves at it.

MHA characteristics: during automatic failover MHA saves the binary log from the crashed master, minimizing data loss. Using semi-synchronous replication reduces the risk of loss much further: as long as at least one slave has received the latest binary log, MHA can apply it to all the other slaves, keeping every node consistent. MHA currently supports one-master-multiple-slaves topologies and needs at least three servers, i.e. one master and two slaves.

How the failover candidate master is chosen: 1. Slaves are normally ranked by replication position (file/position or GTID); when their data differs, the slave closest to the master becomes the candidate. 2. If the data is identical, the candidate is picked in configuration-file order. 3. With a weight set (candidate_master=1), the weighted host is forced to be the candidate, except: (1) by default, the weight is ignored if the slave lags the master by more than 100M of relay logs; (2) with check_repl_delay=0, the host is chosen even if it lags far behind.
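The three selection rules above can be sketched as a small Python function. The field names are hypothetical, and real MHA also inspects relay logs and GTID sets; this only mirrors the rules: forced candidate first (subject to the 100M relay-log limit unless check_repl_delay=0), otherwise the most advanced position, with configuration-file order as the tie-break:

```python
def pick_candidate(slaves):
    """slaves: list of dicts in configuration-file order, each with
    'host', 'position' (executed binlog position or GTID seqno),
    'candidate_master', 'check_repl_delay' and 'lag_bytes'."""
    MAX_LAG = 100 * 1024 * 1024   # 100M relay-log lag limit

    # Rule 3: a weighted host is forced, unless it lags more than
    # 100M of relay logs and check_repl_delay is left at its default.
    for s in slaves:
        if s.get("candidate_master") and (
            s.get("check_repl_delay") == 0 or s["lag_bytes"] <= MAX_LAG
        ):
            return s["host"]

    # Rules 1+2: the most advanced position wins; max() keeps the
    # first (configuration-order) entry on ties.
    return max(slaves, key=lambda s: s["position"])["host"]

slaves = [
    {"host": "172.25.254.20", "position": 1519, "candidate_master": 1,
     "check_repl_delay": 0, "lag_bytes": 0},
    {"host": "172.25.254.30", "position": 1519, "candidate_master": 0,
     "check_repl_delay": None, "lag_bytes": 0},
]
print(pick_candidate(slaves))   # 172.25.254.20
```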

How MHA works: MHA currently targets one-master-multiple-slaves topologies; a replication cluster needs at least three database servers, one master and two slaves, with one slave acting as a standby master. MHA Node runs on every MySQL server, and MHA Manager periodically probes the master node. When the master fails, the slave with the newest data is promoted to the new master, all the other slaves are repointed at it, and the VIP floats over to the new master. The whole failover is completely transparent to applications.

6.2 Deploying MHA

Lab preparation:
Clone rhel7: 172.25.254.50 mha.jingwen.org
[root@mha ~]# ssh-keygen
[root@mha ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.25.254.50   mha.jingwen.org
172.25.254.20   mysql-node2.jingwen.org
172.25.254.30   mysql-node3.jingwen.org
172.25.254.10   mysql-node1.jingwen.org

[root@mha ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@172.25.254.10
[root@mha ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@172.25.254.20
[root@mha ~]# ssh-copy-id -i /root/.ssh/id_rsa.pub root@172.25.254.30

6.2.1 Building a One-Master-Two-Slaves Topology

10 is the master; 20 and 30 are slaves.

[Group replication and the one-master-two-slaves topology built here for MHA serve the same goal: eliminating the single point of failure.]

#On node 10
[root@mysql-node10 ~]# /etc/init.d/mysqld stop
[root@mysql-node10 ~]# rm -fr /data/mysql/*
[root@mysql-node10 ~]# vim /etc/my.cnf
[mysqld]
datadir=/data/mysql
socket=/data/mysql/mysql.sock
server-id=1
log-bin=mysql-bin
gtid_mode=ON
log_slave_updates=ON
enforce-gtid-consistency=ON
symbolic-links=0
[root@mysql-node10 ~]# mysqld --user mysql --initialize
[root@mysql-node10 ~]# /etc/init.d/mysqld start
[root@mysql-node10 ~]# mysql_secure_installation
[root@mysql-node10 ~]# mysql -p
mysql> ALTER USER root@localhost IDENTIFIED BY 'lee';
mysql> CREATE USER 'repl'@'%' IDENTIFIED BY 'lee';
Query OK, 0 rows affected (0.00 sec)
mysql> GRANT REPLICATION SLAVE ON *.* TO repl@'%';
Query OK, 0 rows affected (0.00 sec)
#Semi-synchronous replication
mysql> INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
Query OK, 0 rows affected (0.02 sec)
mysql> SET GLOBAL rpl_semi_sync_master_enabled = 1;
Query OK, 0 rows affected (0.00 sec) 
mysql> SHOW GLOBAL VARIABLES LIKE '%semi%';
mysql> SHOW GLOBAL STATUS LIKE '%semi%';

#On slave1 and slave2
[root@mysql-node20 & 30 ~]# /etc/init.d/mysqld stop
[root@mysql-node20 & 30 ~]# rm -fr /data/mysql/*
[root@mysql-node20 & 30 ~]# vim /etc/my.cnf
[mysqld]
datadir=/data/mysql
socket=/data/mysql/mysql.sock
server-id=2    #use 3 on node30; each server needs a unique id
log-bin=mysql-bin
gtid_mode=ON
log_slave_updates=ON
enforce-gtid-consistency=ON
symbolic-links=0
[root@mysql-node20 & 30 ~]# mysqld --user mysql --initialize
[root@mysql-node20 & 30 ~]# /etc/init.d/mysqld start
[root@mysql-node20 & 30 ~]# mysql_secure_installation
[root@mysql-node20 & 30 ~]# mysql -p
mysql> ALTER USER root@localhost IDENTIFIED BY 'lee';
mysql> CHANGE MASTER TO MASTER_HOST='172.25.254.10', MASTER_USER='repl',
MASTER_PASSWORD='lee', MASTER_AUTO_POSITION=1;
Query OK, 0 rows affected, 2 warnings (0.00 sec)
mysql> start slave;
Query OK, 0 rows affected (0.00 sec)
mysql> INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so'; #install the plugin
Query OK, 0 rows affected (0.01 sec)
mysql> SET GLOBAL rpl_semi_sync_slave_enabled =1;
Query OK, 0 rows affected (0.00 sec)
mysql> STOP SLAVE IO_THREAD;
Query OK, 0 rows affected (0.00 sec)
mysql> START SLAVE IO_THREAD;
Query OK, 0 rows affected (0.00 sec)
mysql> SHOW STATUS LIKE 'Rpl_semi_sync%';
+----------------------------+-------+
| Variable_name              | Value |
+----------------------------+-------+
| Rpl_semi_sync_slave_status | ON    |
+----------------------------+-------+
1 row in set (0.01 sec)
mysql> SHOW SLAVE STATUS\G   #verify that replication is connected

6.2.2 Installing the MHA Software

#On the MHA host
[root@mysql-mha ~]# unzip MHA-7.zip
[root@mysql-mha MHA-7]# ls
mha4mysql-manager-0.58-0.el7.centos.noarch.rpm  perl-Mail-Sender-0.8.23-1.el7.noarch.rpm
mha4mysql-manager-0.58.tar.gz                   perl-Mail-Sendmail-0.79-21.el7.noarch.rpm
mha4mysql-node-0.58-0.el7.centos.noarch.rpm     perl-MIME-Lite-3.030-1.el7.noarch.rpm
perl-Config-Tiny-2.14-7.el7.noarch.rpm          perl-MIME-Types-1.38-2.el7.noarch.rpm
perl-Email-Date-Format-1.002-15.el7.noarch.rpm  perl-Net-Telnet-3.03-19.el7.noarch.rpm
perl-Log-Dispatch-2.41-1.el7.1.noarch.rpm       perl-Parallel-ForkManager-1.18-2.el7.noarch.rpm
[root@mysql-mha MHA-7]# yum install *.rpm -y
[root@mysql-mha MHA-7]# scp mha4mysql-node-0.58-0.el7.centos.noarch.rpm root@172.25.254.10:/mnt
[root@mysql-mha MHA-7]# scp mha4mysql-node-0.58-0.el7.centos.noarch.rpm root@172.25.254.20:/mnt
[root@mysql-mha MHA-7]# scp mha4mysql-node-0.58-0.el7.centos.noarch.rpm root@172.25.254.30:/mnt
#On the SQL nodes
[root@mysql-node10 ~]# yum install /mnt/mha4mysql-node-0.58-0.el7.centos.noarch.rpm -y
[root@mysql-node20 ~]# yum install /mnt/mha4mysql-node-0.58-0.el7.centos.noarch.rpm -y
[root@mysql-node30 ~]# yum install /mnt/mha4mysql-node-0.58-0.el7.centos.noarch.rpm -y
Tools included in the packages:
1. The Manager package provides the following tools:
masterha_check_ssh          #check MHA's SSH configuration
masterha_check_repl         #check MySQL replication status
masterha_manager            #start MHA
masterha_check_status       #check the current MHA running state
masterha_master_monitor     #detect whether the master is down
masterha_master_switch      #control failover (automatic or manual)
masterha_conf_host          #add or remove configured server entries

2. The Node package (normally invoked directly by the MHA manager host, no manual use required):
save_binary_logs            #save and copy the master's binary logs
apply_diff_relay_logs       #identify differential relay log events and apply them to the other slaves
filter_mysqlbinlog          #strip unnecessary ROLLBACK events (no longer used by MHA)
purge_relay_logs            #purge relay logs (without blocking the SQL thread)

6.2.1 Configuring the MHA Management Environment

1. Create the configuration directory and file

[root@mysql-mha ~]# masterha_manager --help
Usage:
masterha_manager --global_conf=/etc/masterha_default.cnf #global config file, shared defaults
--conf=/usr/local/masterha/conf/app1.cnf #per-application config file, individual settings
See online reference
(http://code.google.com/p/mysql-master-ha/wiki/masterha_manager) for
details.

Because we currently have only one master-slave cluster, a single configuration file is enough. The RPM packages do not ship a configuration template, but one can be found under samples/conf after unpacking the source tarball.

#Generate the configuration file
[root@mysql-mha ~]# mkdir /etc/masterha
[root@mysql-mha MHA-7]# tar zxf mha4mysql-manager-0.58.tar.gz 
[root@mysql-mha MHA-7]# cd mha4mysql-manager-0.58/samples/conf/
[root@mysql-mha conf]# cat masterha_default.cnf app1.cnf > /etc/masterha/app1.cnf

#First grant the privileges on node 10; since master-slave replication is already running, there is no need to repeat this on 20 and 30.
mysql> CREATE USER root@'%' IDENTIFIED BY 'lee';
Query OK, 0 rows affected (0.00 sec)
mysql> GRANT ALL ON *.* TO root@'%';
Query OK, 0 rows affected (0.04 sec)

#Edit the configuration file
[root@mysql-mha ~]# vim /etc/masterha/app1.cnf
[server default]
user=root           #MySQL administrative user, needed for automated configuration
password=lee        #MySQL password
ssh_user=root       #user for SSH remote login
repl_user=repl      #user that authenticates MySQL replication
repl_password=lee   #password of the replication user
master_binlog_dir= /data/mysql #binary log directory
remote_workdir=/tmp #remote working directory
#Redundant reachability check: guards against the MHA host's own network problems making the database nodes look unreachable; the check hosts should be outside the cluster
secondary_check_script= masterha_secondary_check -s 172.25.254.10 -s 172.25.254.11
ping_interval=3 #probe every 3 seconds
#script invoked after a failure, used to migrate the VIP
# master_ip_failover_script= /script/masterha/master_ip_failover
#power management script
# shutdown_script= /script/masterha/power_manager
#script used to send mail or alerts after a failure
# report_script= /script/masterha/send_report
#VIP migration script invoked during a manual online switchover
# master_ip_online_change_script= /script/masterha/master_ip_online_change
manager_workdir=/etc/masterha #MHA working directory
manager_log=/etc/masterha/manager.log #MHA log
[server1]
hostname=172.25.254.10
candidate_master=1 #may become the master
check_repl_delay=0 #By default, a slave lagging the master by more than 100M of relay logs
#will not be chosen by MHA as the new master,
#because recovering that slave would take a long time.
#With check_repl_delay=0, MHA ignores replication delay
#when picking the new master during a switchover.
#This parameter is very useful together with candidate_master=1,
#since the candidate must become the new master during failover.
[server2]
hostname=172.25.254.20
candidate_master=1 #may become the master
check_repl_delay=0
[server3]
hostname=172.25.254.30
no_master=1 #will never become the master

2. Validate the configuration: a) check the network and passwordless SSH

#So far only host 50 has passwordless SSH to 10, 20 and 30, but all of these hosts need passwordless access to each other.
#Passwordless login works because of the private key, so copying the private key to the other three hosts gives them mutual passwordless access.
[root@mha .ssh]# scp id_rsa root@172.25.254.10:/root/.ssh/
id_rsa                                     100% 1679   890.8KB/s   00:00   
[root@mha .ssh]# scp id_rsa root@172.25.254.20:/root/.ssh/
id_rsa                                     100% 1679     1.2MB/s   00:00   
[root@mha .ssh]# scp id_rsa root@172.25.254.30:/root/.ssh/
id_rsa                                     100% 1679     1.2MB/s   00:00   
[root@mysql-mha ~]# masterha_check_ssh --conf=/etc/masterha/app1.cnf
Fri Aug 2 16:57:41 2024 - [warning] Global configuration file
/etc/masterha_default.cnf not found. Skipping.
Fri Aug 2 16:57:41 2024 - [info] Reading application default configuration from
/etc/masterha/app1.cnf..
Fri Aug 2 16:57:41 2024 - [info] Reading server configuration from
/etc/masterha/app1.cnf..
Fri Aug 2 16:57:41 2024 - [info] Starting SSH connection tests..
Fri Aug 2 16:57:42 2024 - [debug]
Fri Aug 2 16:57:41 2024 - [debug] Connecting via SSH from
root@172.25.254.10(172.25.254.10:22) to root@172.25.254.20(172.25.254.20:22)..
Fri Aug 2 16:57:41 2024 - [debug] ok.
Fri Aug 2 16:57:41 2024 - [debug] Connecting via SSH from
root@172.25.254.10(172.25.254.10:22) to root@172.25.254.30(172.25.254.30:22)..
Fri Aug 2 16:57:41 2024 - [debug] ok.
Fri Aug 2 16:57:42 2024 - [debug]
Fri Aug 2 16:57:41 2024 - [debug] Connecting via SSH from
root@172.25.254.20(172.25.254.20:22) to root@172.25.254.10(172.25.254.10:22)..
Warning: Permanently added '172.25.254.10' (ECDSA) to the list of known hosts.
Fri Aug 2 16:57:42 2024 - [debug] ok.
Fri Aug 2 16:57:42 2024 - [debug] Connecting via SSH from
root@172.25.254.20(172.25.254.20:22) to root@172.25.254.30(172.25.254.30:22)..
Warning: Permanently added '172.25.254.30' (ECDSA) to the list of known hosts.
Fri Aug 2 16:57:42 2024 - [debug] ok.
Fri Aug 2 16:57:43 2024 - [debug]
Fri Aug 2 16:57:42 2024 - [debug] Connecting via SSH from
root@172.25.254.30(172.25.254.30:22) to root@172.25.254.10(172.25.254.10:22)..
Warning: Permanently added '172.25.254.10' (ECDSA) to the list of known hosts.
Fri Aug 2 16:57:42 2024 - [debug] ok.
Fri Aug 2 16:57:42 2024 - [debug] Connecting via SSH from
root@172.25.254.30(172.25.254.30:22) to root@172.25.254.20(172.25.254.20:22)..
Warning: Permanently added '172.25.254.20' (ECDSA) to the list of known hosts.
Fri Aug 2 16:57:42 2024 - [debug] ok.
Fri Aug 2 16:57:43 2024 - [info] All SSH connection tests passed successfully.

b) Check the health of master-slave replication

#On the master data node
mysql> GRANT ALL ON *.* TO root@'%' IDENTIFIED BY 'lee'; #allow root to log in remotely
#Run the check
[root@mysql-mha ~]# masterha_check_repl --conf=/etc/masterha/app1.cnf
Fri Aug 2 17:04:20 2024 - [warning] Global configuration file
/etc/masterha_default.cnf not found. Skipping.
Fri Aug 2 17:04:20 2024 - [info] Reading application default configuration from
/etc/masterha/app1.cnf..
Fri Aug 2 17:04:20 2024 - [info] Reading server configuration from
/etc/masterha/app1.cnf..
Fri Aug 2 17:04:20 2024 - [info] MHA::MasterMonitor version 0.58.
Fri Aug 2 17:04:21 2024 - [info] GTID failover mode = 1
Fri Aug 2 17:04:21 2024 - [info] Dead Servers:
Fri Aug 2 17:04:21 2024 - [info] Alive Servers:
Fri Aug 2 17:04:21 2024 - [info] 172.25.254.10(172.25.254.10:3306)
Fri Aug 2 17:04:21 2024 - [info] 172.25.254.20(172.25.254.20:3306)
Fri Aug 2 17:04:21 2024 - [info] 172.25.254.30(172.25.254.30:3306)
Fri Aug 2 17:04:21 2024 - [info] Alive Slaves:
Fri Aug 2 17:04:21 2024 - [info] 172.25.254.20(172.25.254.20:3306)
Version=5.7.44-log (oldest major version between slaves) log-bin:enabled
Fri Aug 2 17:04:21 2024 - [info] GTID ON
Fri Aug 2 17:04:21 2024 - [info] Replicating from
172.25.254.10(172.25.254.10:3306)
Fri Aug 2 17:04:21 2024 - [info] Primary candidate for the new Master
(candidate_master is set)
Fri Aug 2 17:04:21 2024 - [info] 172.25.254.30(172.25.254.30:3306)
Version=5.7.44-log (oldest major version between slaves) log-bin:enabled
Fri Aug 2 17:04:21 2024 - [info] GTID ON
Fri Aug 2 17:04:21 2024 - [info] Replicating from
172.25.254.10(172.25.254.10:3306)
Fri Aug 2 17:04:21 2024 - [info] Not candidate for the new Master (no_master
is set)
Fri Aug 2 17:04:21 2024 - [info] Current Alive Master:
172.25.254.10(172.25.254.10:3306)
Fri Aug 2 17:04:21 2024 - [info] Checking slave configurations..
Fri Aug 2 17:04:21 2024 - [info] read_only=1 is not set on slave
172.25.254.20(172.25.254.20:3306).
Fri Aug 2 17:04:21 2024 - [info] read_only=1 is not set on slave
172.25.254.30(172.25.254.30:3306).
Fri Aug 2 17:04:21 2024 - [info] Checking replication filtering settings..
Fri Aug 2 17:04:21 2024 - [info] binlog_do_db= , binlog_ignore_db=
Fri Aug 2 17:04:21 2024 - [info] Replication filtering check ok.
Fri Aug 2 17:04:21 2024 - [info] GTID (with auto-pos) is supported. Skipping all
SSH and Node package checking.
Fri Aug 2 17:04:21 2024 - [info] Checking SSH publickey authentication settings
on the current master..
Fri Aug 2 17:04:21 2024 - [info] HealthCheck: SSH to 172.25.254.10 is reachable.
Fri Aug 2 17:04:21 2024 - [info]
172.25.254.10(172.25.254.10:3306) (current master)
+--172.25.254.20(172.25.254.20:3306)
+--172.25.254.30(172.25.254.30:3306)
Fri Aug 2 17:04:21 2024 - [info] Checking replication health on 172.25.254.20..
Fri Aug 2 17:04:21 2024 - [info] ok.
Fri Aug 2 17:04:21 2024 - [info] Checking replication health on 172.25.254.30..
Fri Aug 2 17:04:21 2024 - [info] ok.
Fri Aug 2 17:04:21 2024 - [warning] master_ip_failover_script is not defined.
Fri Aug 2 17:04:21 2024 - [warning] shutdown_script is not defined.
Fri Aug 2 17:04:21 2024 - [info] Got exit code 0 (Not master dead).
MySQL Replication Health is OK.

6.2.3 MHA Failover

The MHA failover procedure consists of the following steps: 1. Configuration check phase: the whole cluster configuration is validated. 2. Dead master handling: the VIP is removed and the host may be powered off. 3. The relay-log gap between the dead master and the most up-to-date slave is copied and saved under the MHA Manager's working directory. 4. The slave containing the latest updates is identified. 5. The binary log events saved from the master are applied. 6. One slave is promoted to be the new master. 7. The remaining slaves are repointed to replicate from the new master.
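Steps 3-5 hinge on comparing what each slave has already received. With GTIDs this reduces to set arithmetic: the slave whose retrieved GTID set is most advanced is the "latest" slave, and what the others are missing is the set difference. A simplified Python illustration (a single source UUID and one contiguous range; real GTID sets are lists of intervals per UUID):

```python
def gtid_range(gtid_set):
    """'uuid:1-7' -> ('uuid', 7); handles a single contiguous range only."""
    uuid, rng = gtid_set.split(":")
    hi = int(rng.split("-")[-1])
    return uuid, hi

def latest_slave(slaves):
    """slaves: {host: retrieved_gtid_set}; pick the most advanced one."""
    return max(slaves, key=lambda h: gtid_range(slaves[h])[1])

def missing(latest, other):
    """Transactions 'other' still needs to catch up with 'latest'."""
    uuid, hi = gtid_range(latest)
    _, lo = gtid_range(other)
    return [f"{uuid}:{n}" for n in range(lo + 1, hi + 1)]

slaves = {"172.25.254.10": "1a02fc44:1-7", "172.25.254.30": "1a02fc44:1-5"}
best = latest_slave(slaves)
print(best)                                            # 172.25.254.10
print(missing(slaves[best], slaves["172.25.254.30"]))  # ['1a02fc44:6', '1a02fc44:7']
```

With MASTER_AUTO_POSITION=1, the catch-up itself is automatic: each slave requests exactly its missing transactions, which is why MHA can skip relay-log diffing in GTID mode.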

Switchover type: manual switchover while the master is still healthy

#While the master data node is still working normally,
#we want 20 to become the new master.
[root@mysql-mha ~]# masterha_master_switch \
--conf=/etc/masterha/app1.cnf \      #configuration file
--master_state=alive \               #state of the current master
--new_master_host=172.25.254.20 \    #the new master
--new_master_port=3306 \             #port of the new master
--orig_master_is_new_slave \         #the original master becomes a new slave
--running_updates_limit=10000        #switchover timeout
#The switchover proceeds as follows:
[root@mysql-mha masterha]# masterha_master_switch --conf=/etc/masterha/app1.cnf --master_state=alive --new_master_host=172.25.254.20 --new_master_port=3306 --orig_master_is_new_slave --running_updates_limit=10000
Fri Aug 2 18:30:38 2024 - [info] MHA::MasterRotate version 0.58.
Fri Aug 2 18:30:38 2024 - [info] Starting online master switch..
Fri Aug 2 18:30:38 2024 - [info]
Fri Aug 2 18:30:38 2024 - [info] * Phase 1: Configuration Check Phase..
Fri Aug 2 18:30:38 2024 - [info]
Fri Aug 2 18:30:38 2024 - [warning] Global configuration file
/etc/masterha_default.cnf not found. Skipping.
Fri Aug 2 18:30:38 2024 - [info] Reading application default configuration from
/etc/masterha/app1.cnf..
Fri Aug 2 18:30:38 2024 - [info] Reading server configuration from
/etc/masterha/app1.cnf..
Fri Aug 2 18:30:39 2024 - [info] GTID failover mode = 1
Fri Aug 2 18:30:39 2024 - [info] Current Alive Master:
172.25.254.10(172.25.254.10:3306)
Fri Aug 2 18:30:39 2024 - [info] Alive Slaves:
Fri Aug 2 18:30:39 2024 - [info] 172.25.254.20(172.25.254.20:3306)
Version=5.7.44-log (oldest major version between slaves) log-bin:enabled
Fri Aug 2 18:30:39 2024 - [info] GTID ON
Fri Aug 2 18:30:39 2024 - [info] Replicating from
172.25.254.10(172.25.254.10:3306)
Fri Aug 2 18:30:39 2024 - [info] Primary candidate for the new Master
(candidate_master is set)
Fri Aug 2 18:30:39 2024 - [info] 172.25.254.30(172.25.254.30:3306)
Version=5.7.44-log (oldest major version between slaves) log-bin:enabled
Fri Aug 2 18:30:39 2024 - [info] GTID ON
Fri Aug 2 18:30:39 2024 - [info] Replicating from
172.25.254.10(172.25.254.10:3306)
Fri Aug 2 18:30:39 2024 - [info] Not candidate for the new Master (no_master
is set)
It is better to execute FLUSH NO_WRITE_TO_BINLOG TABLES on the master before
switching. Is it ok to execute on 172.25.254.10(172.25.254.10:3306)? (YES/no):
yes
Fri Aug 2 18:30:40 2024 - [info] Executing FLUSH NO_WRITE_TO_BINLOG TABLES. This
may take long time..
Fri Aug 2 18:30:40 2024 - [info] ok.
Fri Aug 2 18:30:40 2024 - [info] Checking MHA is not monitoring or doing
failover..
Fri Aug 2 18:30:40 2024 - [info] Checking replication health on 172.25.254.20..
Fri Aug 2 18:30:40 2024 - [info] ok.
Fri Aug 2 18:30:40 2024 - [info] Checking replication health on 172.25.254.30..
Fri Aug 2 18:30:40 2024 - [info] ok.
Fri Aug 2 18:30:40 2024 - [info] 172.25.254.20 can be new master.
Fri Aug 2 18:30:40 2024 - [info]
From:
172.25.254.10(172.25.254.10:3306) (current master)
+--172.25.254.20(172.25.254.20:3306)
+--172.25.254.30(172.25.254.30:3306)
To:
172.25.254.20(172.25.254.20:3306) (new master)
+--172.25.254.30(172.25.254.30:3306)
+--172.25.254.10(172.25.254.10:3306)
Starting master switch from 172.25.254.10(172.25.254.10:3306) to
172.25.254.20(172.25.254.20:3306)? (yes/NO): yes
Fri Aug 2 18:30:42 2024 - [info] Checking whether
172.25.254.20(172.25.254.20:3306) is ok for the new master..
Fri Aug 2 18:30:42 2024 - [info] ok.
Fri Aug 2 18:30:42 2024 - [info] 172.25.254.10(172.25.254.10:3306): SHOW SLAVE
STATUS returned empty result. To check replication filtering rules, temporarily
executing CHANGE MASTER to a dummy host.
Fri Aug 2 18:30:42 2024 - [info] 172.25.254.10(172.25.254.10:3306): Resetting
slave pointing to the dummy host.
Fri Aug 2 18:30:42 2024 - [info] ** Phase 1: Configuration Check Phase
completed.
Fri Aug 2 18:30:42 2024 - [info]
Fri Aug 2 18:30:42 2024 - [info] * Phase 2: Rejecting updates Phase..
Fri Aug 2 18:30:42 2024 - [info]
master_ip_online_change_script is not defined. If you do not disable writes on
the current master manually, applications keep writing on the current master. Is
it ok to proceed? (yes/NO): yes
Fri Aug 2 18:30:43 2024 - [info] Locking all tables on the orig master to reject
updates from everybody (including root):
Fri Aug 2 18:30:43 2024 - [info] Executing FLUSH TABLES WITH READ LOCK..
Fri Aug 2 18:30:43 2024 - [info] ok.
Fri Aug 2 18:30:43 2024 - [info] Orig master binlog:pos is mysql-bin.000002:1275.
Fri Aug 2 18:30:43 2024 - [info] Waiting to execute all relay logs on
172.25.254.20(172.25.254.20:3306)..
Fri Aug 2 18:30:43 2024 - [info] master_pos_wait(mysql-bin.000002:1275)
completed on 172.25.254.20(172.25.254.20:3306). Executed 0 events.
Fri Aug 2 18:30:43 2024 - [info] done.
Fri Aug 2 18:30:43 2024 - [info] Getting new master's binlog name and position..
Fri Aug 2 18:30:43 2024 - [info] mysql-bin.000002:1519
Fri Aug 2 18:30:43 2024 - [info] All other slaves should start replication from
here. Statement should be: CHANGE MASTER TO MASTER_HOST='172.25.254.20',
MASTER_PORT=3306, MASTER_AUTO_POSITION=1, MASTER_USER='repl',
MASTER_PASSWORD='xxx';
Fri Aug 2 18:30:43 2024 - [info]
Fri Aug 2 18:30:43 2024 - [info] * Switching slaves in parallel..
Fri Aug 2 18:30:43 2024 - [info]
Fri Aug 2 18:30:43 2024 - [info] -- Slave switch on host
172.25.254.30(172.25.254.30:3306) started, pid: 41941
Fri Aug 2 18:30:43 2024 - [info]
Fri Aug 2 18:30:45 2024 - [info] Log messages from 172.25.254.30 ...
Fri Aug 2 18:30:45 2024 - [info]
Fri Aug 2 18:30:43 2024 - [info] Waiting to execute all relay logs on
172.25.254.30(172.25.254.30:3306)..
Fri Aug 2 18:30:43 2024 - [info] master_pos_wait(mysql-bin.000002:1275)
completed on 172.25.254.30(172.25.254.30:3306). Executed 0 events.
Fri Aug 2 18:30:43 2024 - [info] done.
Fri Aug 2 18:30:43 2024 - [info] Resetting slave
172.25.254.30(172.25.254.30:3306) and starting replication from the new master
172.25.254.20(172.25.254.20:3306)..
Fri Aug 2 18:30:43 2024 - [info] Executed CHANGE MASTER.
Fri Aug 2 18:30:44 2024 - [info] Slave started.
Fri Aug 2 18:30:45 2024 - [info] End of log messages from 172.25.254.30 ...
Fri Aug 2 18:30:45 2024 - [info]
Fri Aug 2 18:30:45 2024 - [info] -- Slave switch on host
172.25.254.30(172.25.254.30:3306) succeeded.
Fri Aug 2 18:30:45 2024 - [info] Unlocking all tables on the orig master:
Fri Aug 2 18:30:45 2024 - [info] Executing UNLOCK TABLES..
Fri Aug 2 18:30:45 2024 - [info] ok.
Fri Aug 2 18:30:45 2024 - [info] Starting orig master as a new slave..
Fri Aug 2 18:30:45 2024 - [info] Resetting slave
172.25.254.10(172.25.254.10:3306) and starting replication from the new master
172.25.254.20(172.25.254.20:3306)..
Fri Aug 2 18:30:45 2024 - [info] Executed CHANGE MASTER.
Fri Aug 2 18:30:46 2024 - [info] Slave started.
Fri Aug 2 18:30:46 2024 - [info] All new slave servers switched successfully.
Fri Aug 2 18:30:46 2024 - [info]
Fri Aug 2 18:30:46 2024 - [info] * Phase 5: New master cleanup phase..
Fri Aug 2 18:30:46 2024 - [info]
Fri Aug 2 18:30:46 2024 - [info] 172.25.254.20: Resetting slave info succeeded.
Fri Aug 2 18:30:46 2024 - [info] Switching master to
172.25.254.20(172.25.254.20:3306) completed successfully.

Verify:

[root@mysql-mha masterha]# masterha_check_repl --conf=/etc/masterha/app1.cnf
Fri Aug 2 18:33:46 2024 - [warning] Global
configuration file /etc/masterha_default.cnf not found. Skipping.
Fri Aug 2 18:33:46 2024 - [info] Reading application default configuration from
/etc/masterha/app1.cnf..
Fri Aug 2 18:33:46 2024 - [info] Reading server configuration from
/etc/masterha/app1.cnf..
Fri Aug 2 18:33:46 2024 - [info] MHA::MasterMonitor version 0.58.
Fri Aug 2 18:33:47 2024 - [info] GTID failover mode = 1
Fri Aug 2 18:33:47 2024 - [info] Dead Servers:
Fri Aug 2 18:33:47 2024 - [info] Alive Servers:
Fri Aug 2 18:33:47 2024 - [info] 172.25.254.10(172.25.254.10:3306)
Fri Aug 2 18:33:47 2024 - [info] 172.25.254.20(172.25.254.20:3306)
Fri Aug 2 18:33:47 2024 - [info] 172.25.254.30(172.25.254.30:3306)
Fri Aug 2 18:33:47 2024 - [info] Alive Slaves:
Fri Aug 2 18:33:47 2024 - [info] 172.25.254.10(172.25.254.10:3306)
Version=5.7.44-log (oldest major version between slaves) log-bin:enabled
Fri Aug 2 18:33:47 2024 - [info] GTID ON
Fri Aug 2 18:33:47 2024 - [info] Replicating from
172.25.254.20(172.25.254.20:3306)
Fri Aug 2 18:33:47 2024 - [info] Primary candidate for the new Master
(candidate_master is set)
Fri Aug 2 18:33:47 2024 - [info] 172.25.254.30(172.25.254.30:3306)
Version=5.7.44-log (oldest major version between slaves) log-bin:enabled
Fri Aug 2 18:33:47 2024 - [info] GTID ON
Fri Aug 2 18:33:47 2024 - [info] Replicating from
172.25.254.20(172.25.254.20:3306)
Fri Aug 2 18:33:47 2024 - [info] Not candidate for the new Master (no_master
is set)
Fri Aug 2 18:33:47 2024 - [info] Current Alive Master:
172.25.254.20(172.25.254.20:3306)
Fri Aug 2 18:33:47 2024 - [info] Checking slave configurations..
Fri Aug 2 18:33:47 2024 - [info] read_only=1 is not set on slave
172.25.254.30(172.25.254.30:3306).
Fri Aug 2 18:33:47 2024 - [info] Checking replication filtering settings..
Fri Aug 2 18:33:47 2024 - [info] binlog_do_db= , binlog_ignore_db=
Fri Aug 2 18:33:47 2024 - [info] Replication filtering check ok.
Fri Aug 2 18:33:47 2024 - [info] GTID (with auto-pos) is supported. Skipping all
SSH and Node package checking.
Fri Aug 2 18:33:47 2024 - [info] Checking SSH publickey authentication settings
on the current master..
Fri Aug 2 18:33:47 2024 - [info] HealthCheck: SSH to 172.25.254.20 is reachable.
Fri Aug 2 18:33:47 2024 - [info]
172.25.254.20(172.25.254.20:3306) (current master)
+--172.25.254.10(172.25.254.10:3306)
+--172.25.254.30(172.25.254.30:3306)
Fri Aug 2 18:33:47 2024 - [info] Checking replication health on 172.25.254.10..
Fri Aug 2 18:33:47 2024 - [info] ok.
Fri Aug 2 18:33:47 2024 - [info] Checking replication health on 172.25.254.30..
Fri Aug 2 18:33:47 2024 - [info] ok.
Fri Aug 2 18:33:47 2024 - [warning] master_ip_failover_script is not defined.
Fri Aug 2 18:33:47 2024 - [warning] shutdown_script is not defined.
Fri Aug 2 18:33:47 2024 - [info] Got exit code 0 (Not master dead).
MySQL Replication Health is OK.

Manual switchover after the master has failed

#Simulate a master failure
[root@mysql-node20 mysql]# /etc/init.d/mysqld stop
#Perform the failover on the MHA manager
[root@mysql-mha masterha]# masterha_master_switch --master_state=dead \
--conf=/etc/masterha/app1.cnf --dead_master_host=172.25.254.20 \
--dead_master_port=3306 --new_master_host=172.25.254.10 \
--new_master_port=3306 --ignore_last_failover
#--ignore_last_failover ignores the lock file left under /etc/masterha/ by a previous switchover

--dead_master_ip=<dead_master_ip> is not set. Using 172.25.254.20.
Fri Aug 2 19:38:35 2024 - [warning] Global configuration file
/etc/masterha_default.cnf not found. Skipping.
Fri Aug 2 19:38:35 2024 - [info] Reading application default configuration from
/etc/masterha/app1.cnf..
Fri Aug 2 19:38:35 2024 - [info] Reading server configuration from
/etc/masterha/app1.cnf..
Fri Aug 2 19:38:35 2024 - [info] MHA::MasterFailover version 0.58.
Fri Aug 2 19:38:35 2024 - [info] Starting master failover.
Fri Aug 2 19:38:35 2024 - [info]
Fri Aug 2 19:38:35 2024 - [info] * Phase 1: Configuration Check Phase..
Fri Aug 2 19:38:35 2024 - [info]
Fri Aug 2 19:38:36 2024 - [info] GTID failover mode = 1
Fri Aug 2 19:38:36 2024 - [info] Dead Servers:
Fri Aug 2 19:38:36 2024 - [info] 172.25.254.20(172.25.254.20:3306)
Fri Aug 2 19:38:36 2024 - [info] Checking master reachability via MySQL(double
check)...
Fri Aug 2 19:38:36 2024 - [info] ok.
Fri Aug 2 19:38:36 2024 - [info] Alive Servers:
Fri Aug 2 19:38:36 2024 - [info] 172.25.254.10(172.25.254.10:3306)
Fri Aug 2 19:38:36 2024 - [info] 172.25.254.30(172.25.254.30:3306)
Fri Aug 2 19:38:36 2024 - [info] Alive Slaves:
Fri Aug 2 19:38:36 2024 - [info] 172.25.254.10(172.25.254.10:3306)
Version=5.7.44-log (oldest major version between slaves) log-bin:enabled
Fri Aug 2 19:38:36 2024 - [info] GTID ON
Fri Aug 2 19:38:36 2024 - [info] Replicating from
172.25.254.20(172.25.254.20:3306)
Fri Aug 2 19:38:36 2024 - [info] Primary candidate for the new Master
(candidate_master is set)
Fri Aug 2 19:38:36 2024 - [info] 172.25.254.30(172.25.254.30:3306)
Version=5.7.44-log (oldest major version between slaves) log-bin:enabled
Fri Aug 2 19:38:36 2024 - [info] GTID ON
Fri Aug 2 19:38:36 2024 - [info] Replicating from
172.25.254.20(172.25.254.20:3306)
Fri Aug 2 19:38:36 2024 - [info] Not candidate for the new Master (no_master
is set)
Master 172.25.254.20(172.25.254.20:3306) is dead. Proceed? (yes/NO): yes
Fri Aug 2 19:38:39 2024 - [info] Starting GTID based failover.
Fri Aug 2 19:38:39 2024 - [info]
Fri Aug 2 19:38:39 2024 - [info] ** Phase 1: Configuration Check Phase
completed.
Fri Aug 2 19:38:39 2024 - [info]
Fri Aug 2 19:38:39 2024 - [info] * Phase 2: Dead Master Shutdown Phase..
Fri Aug 2 19:38:39 2024 - [info]
Fri Aug 2 19:38:39 2024 - [info] HealthCheck: SSH to 172.25.254.20 is reachable.
Fri Aug 2 19:38:39 2024 - [info] Forcing shutdown so that applications never
connect to the current master..
Fri Aug 2 19:38:39 2024 - [warning] master_ip_failover_script is not set.
Skipping invalidating dead master IP address.
Fri Aug 2 19:38:39 2024 - [warning] shutdown_script is not set. Skipping
explicit shutting down of the dead master.
Fri Aug 2 19:38:39 2024 - [info] * Phase 2: Dead Master Shutdown Phase
completed.
Fri Aug 2 19:38:39 2024 - [info]
Fri Aug 2 19:38:39 2024 - [info] * Phase 3: Master Recovery Phase..
Fri Aug 2 19:38:39 2024 - [info]
Fri Aug 2 19:38:39 2024 - [info] * Phase 3.1: Getting Latest Slaves Phase..
Fri Aug 2 19:38:39 2024 - [info]
Fri Aug 2 19:38:39 2024 - [info] The latest binary log file/position on all
slaves is mysql-bin.000002:1519
Fri Aug 2 19:38:39 2024 - [info] Retrieved Gtid Set: 1a02fc44-4d68-11ef-8dd9-
000c29d8cf7e:1
Fri Aug 2 19:38:39 2024 - [info] Latest slaves (Slaves that received relay log
files to the latest):
Fri Aug 2 19:38:39 2024 - [info] 172.25.254.10(172.25.254.10:3306)
Version=5.7.44-log (oldest major version between slaves) log-bin:enabled
Fri Aug 2 19:38:39 2024 - [info] GTID ON
Fri Aug 2 19:38:39 2024 - [info] Replicating from
172.25.254.20(172.25.254.20:3306)
Fri Aug 2 19:38:39 2024 - [info] Primary candidate for the new Master
(candidate_master is set)
Fri Aug 2 19:38:39 2024 - [info] 172.25.254.30(172.25.254.30:3306)
Version=5.7.44-log (oldest major version between slaves) log-bin:enabled
Fri Aug 2 19:38:39 2024 - [info] GTID ON
Fri Aug 2 19:38:39 2024 - [info] Replicating from
172.25.254.20(172.25.254.20:3306)
Fri Aug 2 19:38:39 2024 - [info] Not candidate for the new Master (no_master
is set)
Fri Aug 2 19:38:39 2024 - [info] The oldest binary log file/position on all
slaves is mysql-bin.000002:1519
Fri Aug 2 19:38:39 2024 - [info] Retrieved Gtid Set: 1a02fc44-4d68-11ef-8dd9-
000c29d8cf7e:1
Fri Aug 2 19:38:39 2024 - [info] Oldest slaves:
Fri Aug 2 19:38:39 2024 - [info] 172.25.254.10(172.25.254.10:3306)
Version=5.7.44-log (oldest major version between slaves) log-bin:enabled
Fri Aug 2 19:38:39 2024 - [info] GTID ON
Fri Aug 2 19:38:39 2024 - [info] Replicating from
172.25.254.20(172.25.254.20:3306)
Fri Aug 2 19:38:39 2024 - [info] Primary candidate for the new Master
(candidate_master is set)
Fri Aug 2 19:38:39 2024 - [info] 172.25.254.30(172.25.254.30:3306)
Version=5.7.44-log (oldest major version between slaves) log-bin:enabled
Fri Aug 2 19:38:39 2024 - [info] GTID ON
Fri Aug 2 19:38:39 2024 - [info] Replicating from
172.25.254.20(172.25.254.20:3306)
Fri Aug 2 19:38:39 2024 - [info] Not candidate for the new Master (no_master
is set)
Fri Aug 2 19:38:39 2024 - [info]
Fri Aug 2 19:38:39 2024 - [info] * Phase 3.3: Determining New Master Phase..
Fri Aug 2 19:38:39 2024 - [info]
Fri Aug 2 19:38:39 2024 - [info] 172.25.254.10 can be new master.
Fri Aug 2 19:38:39 2024 - [info] New master is 172.25.254.10(172.25.254.10:3306)
Fri Aug 2 19:38:39 2024 - [info] Starting master failover..
Fri Aug 2 19:38:39 2024 - [info]
From:
172.25.254.20(172.25.254.20:3306) (current master)
+--172.25.254.10(172.25.254.10:3306)
+--172.25.254.30(172.25.254.30:3306)
To:
172.25.254.10(172.25.254.10:3306) (new master)
+--172.25.254.30(172.25.254.30:3306)
Starting master switch from 172.25.254.20(172.25.254.20:3306) to
172.25.254.10(172.25.254.10:3306)? (yes/NO): yes
Fri Aug 2 19:38:41 2024 - [info] New master decided manually is
172.25.254.10(172.25.254.10:3306)
Fri Aug 2 19:38:41 2024 - [info]
Fri Aug 2 19:38:41 2024 - [info] * Phase 3.3: New Master Recovery Phase..
Fri Aug 2 19:38:41 2024 - [info]
Fri Aug 2 19:38:41 2024 - [info] Waiting all logs to be applied..
Fri Aug 2 19:38:41 2024 - [info] done.
Fri Aug 2 19:38:41 2024 - [info] Getting new master's binlog name and position..
Fri Aug 2 19:38:41 2024 - [info] mysql-bin.000002:1519
Fri Aug 2 19:38:41 2024 - [info] All other slaves should start replication from
here. Statement should be: CHANGE MASTER TO MASTER_HOST='172.25.254.10',
MASTER_PORT=3306, MASTER_AUTO_POSITION=1, MASTER_USER='repl',
MASTER_PASSWORD='xxx';
Fri Aug 2 19:38:41 2024 - [info] Master Recovery succeeded.
File:Pos:Exec_Gtid_Set: mysql-bin.000002, 1519, 1a02fc44-4d68-11ef-8dd9-
000c29d8cf7e:1,
68f3a901-4deb-11ef-8055-000c29cb63ce:1-5
Fri Aug 2 19:38:41 2024 - [warning] master_ip_failover_script is not set.
Skipping taking over new master IP address.
Fri Aug 2 19:38:41 2024 - [info] Setting read_only=0 on
172.25.254.10(172.25.254.10:3306)..
Fri Aug 2 19:38:41 2024 - [info] ok.
Fri Aug 2 19:38:41 2024 - [info] ** Finished master recovery successfully.
Fri Aug 2 19:38:41 2024 - [info] * Phase 3: Master Recovery Phase completed.
Fri Aug 2 19:38:41 2024 - [info]
Fri Aug 2 19:38:41 2024 - [info] * Phase 4: Slaves Recovery Phase..
Fri Aug 2 19:38:41 2024 - [info]
Fri Aug 2 19:38:41 2024 - [info]
Fri Aug 2 19:38:41 2024 - [info] * Phase 4.1: Starting Slaves in parallel..
Fri Aug 2 19:38:41 2024 - [info]
Fri Aug 2 19:38:41 2024 - [info] -- Slave recovery on host
172.25.254.30(172.25.254.30:3306) started, pid: 42023. Check tmp log
/etc/masterha/172.25.254.30_3306_20240802193835.log if it takes time..
Fri Aug 2 19:38:43 2024 - [info]
Fri Aug 2 19:38:43 2024 - [info] Log messages from 172.25.254.30 ...
Fri Aug 2 19:38:43 2024 - [info]
Fri Aug 2 19:38:41 2024 - [info] Resetting slave
172.25.254.30(172.25.254.30:3306) and starting replication from the new master
172.25.254.10(172.25.254.10:3306)..
Fri Aug 2 19:38:41 2024 - [info] Executed CHANGE MASTER.
Fri Aug 2 19:38:42 2024 - [info] Slave started.
Fri Aug 2 19:38:42 2024 - [info] gtid_wait(1a02fc44-4d68-11ef-8dd9-
000c29d8cf7e:1,
68f3a901-4deb-11ef-8055-000c29cb63ce:1-5) completed on
172.25.254.30(172.25.254.30:3306). Executed 0 events.
Fri Aug 2 19:38:43 2024 - [info] End of log messages from 172.25.254.30.
Fri Aug 2 19:38:43 2024 - [info] -- Slave on host
172.25.254.30(172.25.254.30:3306) started.
Fri Aug 2 19:38:43 2024 - [info] All new slave servers recovered successfully.
Fri Aug 2 19:38:43 2024 - [info]
Fri Aug 2 19:38:43 2024 - [info] * Phase 5: New master cleanup phase..
Fri Aug 2 19:38:43 2024 - [info]
Fri Aug 2 19:38:43 2024 - [info] Resetting slave info on the new master..
Fri Aug 2 19:38:43 2024 - [info] 172.25.254.10: Resetting slave info succeeded.
Fri Aug 2 19:38:43 2024 - [info] Master failover to
172.25.254.10(172.25.254.10:3306) completed successfully.
Fri Aug 2 19:38:43 2024 - [info]
----- Failover Report -----
app1: MySQL Master failover 172.25.254.20(172.25.254.20:3306) to
172.25.254.10(172.25.254.10:3306) succeeded
Master 172.25.254.20(172.25.254.20:3306) is down!
Check MHA Manager logs at mysql-mha.timinglee.org for details.
Started manual(interactive) failover.
Selected 172.25.254.10(172.25.254.10:3306) as a new master.
172.25.254.10(172.25.254.10:3306): OK: Applying all logs succeeded.
172.25.254.30(172.25.254.30:3306): OK: Slave started, replicating from
172.25.254.10(172.25.254.10:3306)
172.25.254.10(172.25.254.10:3306): Resetting slave info succeeded.
Master failover to 172.25.254.10(172.25.254.10:3306) completed successfully.

恢复故障MySQL节点

[root@mysql-node20 tmp]# /etc/init.d/mysqld start
Starting MySQL. SUCCESS!
[root@mysql-node20 tmp]# mysql -p
mysql> CHANGE MASTER TO MASTER_HOST='172.25.254.10', MASTER_USER='repl',
MASTER_PASSWORD='lee', MASTER_AUTO_POSITION=1;
mysql> stop slave;
Query OK, 0 rows affected, 1 warning (0.00 sec)

mysql> start slave;
Query OK, 0 rows affected (0.00 sec)

mysql> show slave status\G
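上面恢复故障节点的步骤(启动 mysqld、CHANGE MASTER、启动复制)可以整理成如下示意片段。这只是一个假设性草稿:IP、账号和密码沿用本文实验环境,这里只生成并打印 SQL,实际执行的命令以注释给出:

```shell
# 假设性示意:生成把故障节点重新加入复制所需的 SQL
# 172.25.254.10 / repl / lee 为本文实验环境中的示例值
RECOVER_SQL="CHANGE MASTER TO MASTER_HOST='172.25.254.10', MASTER_USER='repl', MASTER_PASSWORD='lee', MASTER_AUTO_POSITION=1; START SLAVE;"

# 先打印核对
echo "$RECOVER_SQL"

# 实际恢复时在故障节点上执行:
# /etc/init.d/mysqld start
# mysql -uroot -p -e "$RECOVER_SQL"
```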
#测试一主两从是否正常
[root@mysql-mha masterha]# masterha_check_repl --conf=/etc/masterha/app1.cnf
Fri Aug 2 20:15:29 2024 - [info] Checking replication health on 172.25.254.20..
Fri Aug 2 20:15:29 2024 - [info] ok.
Fri Aug 2 20:15:29 2024 - [info] Checking replication health on 172.25.254.30..
Fri Aug 2 20:15:29 2024 - [info] ok.
Fri Aug 2 20:15:29 2024 - [warning] master_ip_failover_script is not defined.
Fri Aug 2 20:15:29 2024 - [warning] shutdown_script is not defined.
Fri Aug 2 20:15:29 2024 - [info] Got exit code 0 (Not master dead).
MySQL Replication Health is OK.

自动切换

[root@mysql-mha masterha]# rm -fr app1.failover.complete #删掉切换锁文件
#监控程序通过指定配置文件监控master状态,当master出问题后自动切换并退出避免重复做故障切换
[root@mysql-mha masterha]# masterha_manager --conf=/etc/masterha/app1.cnf
[root@mysql-mha masterha]# cat /etc/masterha/manager.log
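masterha_manager 默认在前台阻塞运行,终端关闭后监控也随之停止;生产环境通常用 nohup 把它放入后台并重定向日志。下面是一个假设性的示意(这里只拼出并打印命令,日志路径为示例,确认后再手动执行):

```shell
# 假设性示意:后台运行 masterha_manager 并把输出重定向到日志
# 配置文件与日志路径沿用本文示例
NOHUP_CMD="nohup masterha_manager --conf=/etc/masterha/app1.cnf > /etc/masterha/manager.log 2>&1 &"

# 先打印核对,确认后再复制执行
echo "$NOHUP_CMD"

# 之后可用 MHA 自带工具查看监控进程状态:
# masterha_check_status --conf=/etc/masterha/app1.cnf
```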

恢复故障节点

[root@mysql-node20 mysql]# /etc/init.d/mysqld start
mysql> CHANGE MASTER TO MASTER_HOST='172.25.254.10', MASTER_USER='repl', MASTER_PASSWORD='lee', MASTER_AUTO_POSITION=1;
mysql> start slave;

清除锁文件

[root@mysql-mha masterha]# rm -rf app1.failover.complete manager.log

6.2.4 为MHA添加VIP功能

#上传在群中发给大家的脚本
[root@mysql-mha ~]# ls
master_ip_failover master_ip_online_change MHA-7 MHA-7.zip
[root@mysql-mha ~]# cp master_ip_failover master_ip_online_change
/usr/local/bin/
[root@mysql-mha ~]# chmod +x /usr/local/bin/master_ip_*
#修改脚本在脚本中只需要修改下vip即可
[root@mysql-mha ~]# vim /usr/local/bin/master_ip_failover
my $vip = '172.25.254.100/24';
my $ssh_start_vip = "/sbin/ip addr add $vip dev eth0";
my $ssh_stop_vip = "/sbin/ip addr del $vip dev eth0";
[root@mysql-mha ~]# vim /usr/local/bin/master_ip_online_change
my $vip = '172.25.254.100/24';
my $ssh_start_vip = "/sbin/ip addr add $vip dev eth0";
my $ssh_stop_vip = "/sbin/ip addr del $vip dev eth0";
my $exit_code = 0;
[root@mysql-mha masterha]# masterha_manager --conf=/etc/masterha/app1.cnf  #启动监控程序
[root@mysql-node10 tmp]# ip a a 172.25.254.100/24 dev eth0  #在master节点添加VIP
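上面 perl 脚本里 VIP 起停的核心逻辑,可以用下面的 shell 片段示意。这是一个假设性草稿:VIP 与网卡名沿用本文示例(172.25.254.100/24、eth0),DRY_RUN=1 时只打印命令而不真正修改网卡:

```shell
# 假设性示意:复现 master_ip_failover 脚本中 VIP 起停的核心逻辑
VIP=172.25.254.100/24
DEV=eth0
DRY_RUN=1   # 1=只打印命令,0=实际执行(需 root 权限)

run() { if [ "$DRY_RUN" = 1 ]; then echo "$*"; else "$@"; fi; }

start_vip() { run /sbin/ip addr add "$VIP" dev "$DEV"; }   # 故障切换时在新 master 上执行
stop_vip()  { run /sbin/ip addr del "$VIP" dev "$DEV"; }   # 在旧 master 上执行

start_vip
stop_vip
```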

模拟故障

[root@mysql-node10 ~]# /etc/init.d/mysqld stop #关闭主节点服务
[root@mysql-mha masterha]# cat manager.log

恢复故障主机

[root@mysql-node20 mysql]# /etc/init.d/mysqld start
mysql> CHANGE MASTER TO MASTER_HOST='172.25.254.10', MASTER_USER='repl', MASTER_PASSWORD='lee', MASTER_AUTO_POSITION=1;
mysql> start slave;
[root@mysql-mha masterha]# rm -rf app1.failover.complete manager.log

手动切换后查看vip变化

[root@mysql-mha masterha]# masterha_master_switch --conf=/etc/masterha/app1.cnf --master_state=alive --new_master_host=172.25.254.10 --new_master_port=3306 --orig_master_is_new_slave --running_updates_limit=10000
[root@mysql-node10 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP
group default qlen 1000
link/ether 00:0c:29:cb:63:ce brd ff:ff:ff:ff:ff:ff
inet 172.25.254.10/24 brd 172.25.254.255 scope global noprefixroute eth0
valid_lft forever preferred_lft forever
inet 172.25.254.100/24 scope global secondary eth0
valid_lft forever preferred_lft forever