MHA Configuration Based on MySQL 5.7

MHA here uses a one-master, two-slave topology, i.e. A -> (B, C).

MASTER 192.168.1.101

SLAVE1/MHA-MANAGER 192.168.1.102

SLAVE2 192.168.1.103

VIP 192.168.1.109

Here the slave1 node serves as the MHA manager node. The MHA manager node must never be placed on the master itself.
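MHA requires passwordless SSH trust between all three nodes (this is what masterha_check_ssh verifies later). A minimal sketch, assuming root logins and the default key path; run it on each of node1, node2 and node3:

[root@node1 ~]# ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
[root@node1 ~]# for h in node1 node2 node3; do ssh-copy-id root@$h; done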
Install the Perl dependencies via yum:

yum install perl-DBD-MySQL

yum install perl-Config-Tiny

yum install perl-Log-Dispatch

yum install perl-Parallel-ForkManager

For convenience, I simply installed everything at once:
yum install -y perl*

Then install epel-release-6-8.noarch:
[root@node1 ~]# rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

Install the RPM files from the MHA package:
[root@node1 package]# ls
masterha mha4mysql-manager-0.56.tar.gz mha4mysql-node-0.56.tar.gz
mha4mysql-manager-0.56-0.el6.noarch.rpm mha4mysql-node-0.56-0.el6.noarch.rpm mha_masterha.zip

[root@node1 package]# yum localinstall -y *.rpm
RPM installation is used here because it is convenient.

Then copy the masterha folder to the /etc directory:
[root@node1 package]# cp -r masterha /etc/

[root@node1 package]# cd /etc/masterha
[root@node1 masterha]# ls
app1.conf drop_vip.sh init_vip.sh masterha_default.conf master_ip_failover master_ip_online_change
app1.conf is the configuration file:

[root@node1 masterha]# cat app1.conf
[server default]


# MHA manager working directory
manager_workdir = /var/log/masterha/app1
manager_log = /var/log/masterha/app1/app1.log
remote_workdir = /var/log/masterha/app1

[server1]
hostname=node1
master_binlog_dir = /data/mysql/mysql3306/logs
candidate_master = 1
check_repl_delay = 0     # prevents the failover from stalling on a slave that has replication delay at the moment the master fails

[server2]
hostname=node2
master_binlog_dir=/data/mysql/mysql3306/logs
candidate_master=1
check_repl_delay=0
[server3]
hostname=node3
master_binlog_dir=/data/mysql/mysql3306/logs
candidate_master=1
check_repl_delay=0

init_vip.sh
[root@node1 masterha]# cat init_vip.sh
vip="192.168.1.109/32"
/sbin/ip addr add $vip dev eth0
Then initialize the VIP:
[root@node1 masterha]# sh init_vip.sh
[root@node1 masterha]# ip addr show
1: lo: ... (output truncated; after init_vip.sh the VIP 192.168.1.109 is attached to eth0)
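drop_vip.sh, listed alongside it, removes the VIP from a node when needed. Presumably its body is the mirror image of init_vip.sh; a minimal sketch:

[root@node1 masterha]# cat drop_vip.sh
vip="192.168.1.109/32"
/sbin/ip addr del $vip dev eth0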

The remaining global settings (these go in masterha_default.conf, the global configuration file passed as --global_conf in the check commands below) are as follows.

MySQL user and password:

user=admin
password=123456

System SSH user:

ssh_user=root

Replication user:

repl_user=repl
repl_password=repl4slave

Monitoring:

ping_interval=1

shutdown_script=""

Scripts invoked during a switch:

master_ip_failover_script=/etc/masterha/master_ip_failover
master_ip_online_change_script=/etc/masterha/master_ip_online_change

master_ip_failover is called on failover, and master_ip_online_change is called on a planned online master switch.
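For the checks to pass, the admin monitoring account and the repl replication account above must actually exist on every MySQL node. A minimal sketch, assuming clients connect from the 192.168.1.% subnet (a hypothetical host pattern; adjust to your network). The first grant is the broad-privilege monitoring account the manager logs in with, the second is the replication account:

mysql> grant all privileges on *.* to 'admin'@'192.168.1.%' identified by '123456';
mysql> grant replication slave on *.* to 'repl'@'192.168.1.%' identified by 'repl4slave';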

At this point the one-master, two-slave replication architecture can serve traffic. The problem left to solve is having a slave take over as the new master when the master dies.
Next, check whether the configuration is correct:

[root@node1 masterha]# masterha_check_ssh --global_conf=/etc/masterha/masterha_default.conf --conf=/etc/masterha/app1.conf
Sat Oct 10 03:03:22 2015 - [info] Reading default configuration from /etc/masterha/masterha_default.conf..
Sat Oct 10 03:03:22 2015 - [info] Reading application default configuration from /etc/masterha/app1.conf..
Sat Oct 10 03:03:22 2015 - [info] Reading server configuration from /etc/masterha/app1.conf..
Sat Oct 10 03:03:22 2015 - [info] Starting SSH connection tests..
Sat Oct 10 03:03:23 2015 - [debug]
Sat Oct 10 03:03:22 2015 - [debug]  Connecting via SSH from root@node1(192.168.1.101:22) to root@node2(192.168.1.102:22)..
Sat Oct 10 03:03:22 2015 - [debug]   ok.
Sat Oct 10 03:03:22 2015 - [debug]  Connecting via SSH from root@node1(192.168.1.101:22) to root@node3(192.168.1.103:22)..
Sat Oct 10 03:03:22 2015 - [debug]   ok.
Sat Oct 10 03:03:23 2015 - [debug]
Sat Oct 10 03:03:23 2015 - [debug]  Connecting via SSH from root@node2(192.168.1.102:22) to root@node1(192.168.1.101:22)..
Sat Oct 10 03:03:23 2015 - [debug]   ok.
Sat Oct 10 03:03:23 2015 - [debug]  Connecting via SSH from root@node2(192.168.1.102:22) to root@node3(192.168.1.103:22)..
Sat Oct 10 03:03:23 2015 - [debug]   ok.
Sat Oct 10 03:03:24 2015 - [debug]
Sat Oct 10 03:03:23 2015 - [debug]  Connecting via SSH from root@node3(192.168.1.103:22) to root@node1(192.168.1.101:22)..
Sat Oct 10 03:03:23 2015 - [debug]   ok.
Sat Oct 10 03:03:23 2015 - [debug]  Connecting via SSH from root@node3(192.168.1.103:22) to root@node2(192.168.1.102:22)..
Sat Oct 10 03:03:23 2015 - [debug]   ok.
Sat Oct 10 03:03:24 2015 - [info] All SSH connection tests passed successfully.

[root@node1 masterha]# masterha_check_repl --global_conf=/etc/masterha/masterha_default.conf --conf=/etc/masterha/app1.conf
Sat Oct 10 03:08:43 2015 - [info] Reading default configuration from /etc/masterha/masterha_default.conf..
Sat Oct 10 03:08:43 2015 - [info] Reading application default configuration from /etc/masterha/app1.conf..
Sat Oct 10 03:08:43 2015 - [info] Reading server configuration from /etc/masterha/app1.conf..
Sat Oct 10 03:08:43 2015 - [info] MHA::MasterMonitor version 0.56.
Sat Oct 10 03:08:44 2015 - [warning] SQL Thread is stopped(no error) on node1(192.168.1.101:3306)
Sat Oct 10 03:08:44 2015 - [error][/usr/share/perl5/vendor_perl/MHA/ServerManager.pm, ln781] Multi-master configuration is detected, but two or more masters are either writable (read-only is not set) or dead! Check configurations for details. Master configurations are as below:
Master node2(192.168.1.102:3306), replicating from 192.168.1.101(192.168.1.101:3306)
Master node1(192.168.1.101:3306), replicating from 192.168.1.102(192.168.1.102:3306)

Sat Oct 10 03:08:44 2015 - [error][/usr/share/perl5/vendor_perl/MHA/MasterMonitor.pm, ln424] Error happened on checking configurations.  at /usr/share/perl5/vendor_perl/MHA/MasterMonitor.pm line 326
Sat Oct 10 03:08:44 2015 - [error][/usr/share/perl5/vendor_perl/MHA/MasterMonitor.pm, ln523] Error happened on monitoring servers.
Sat Oct 10 03:08:44 2015 - [info] Got exit code 1 (Not master dead).

This error occurs because MHA mistook node1 for a slave: node1 still carried stale slave status (pointing at node2), so MHA detected what looked like a multi-master setup. Since the real master must have no slave status, run the following in MySQL on node1:
mysql> reset slave all;
Query OK, 0 rows affected (0.05 sec)

mysql> show slave status \G
Empty set (0.00 sec)
Then rerun the check:
[root@node1 masterha]# masterha_check_repl --global_conf=/etc/masterha/masterha_default.conf --conf=/etc/masterha/app1.conf
Sat Oct 10 03:14:16 2015 - [info] Reading default configuration from /etc/masterha/masterha_default.conf..
Sat Oct 10 03:14:16 2015 - [info] Reading application default configuration from /etc/masterha/app1.conf..
Sat Oct 10 03:14:16 2015 - [info] Reading server configuration from /etc/masterha/app1.conf..
Sat Oct 10 03:14:16 2015 - [info] MHA::MasterMonitor version 0.56.
Sat Oct 10 03:14:17 2015 - [info] GTID failover mode = 0
Sat Oct 10 03:14:17 2015 - [info] Dead Servers:
Sat Oct 10 03:14:17 2015 - [info] Alive Servers:
Sat Oct 10 03:14:17 2015 - [info]   node1(192.168.1.101:3306)
Sat Oct 10 03:14:17 2015 - [info]   node2(192.168.1.102:3306)
Sat Oct 10 03:14:17 2015 - [info]   node3(192.168.1.103:3306)
Sat Oct 10 03:14:17 2015 - [info] Alive Slaves:
Sat Oct 10 03:14:17 2015 - [info]   node2(192.168.1.102:3306)  Version=5.6.26-log (oldest major version between slaves) log-bin:enabled
Sat Oct 10 03:14:17 2015 - [info]     Replicating from 192.168.1.101(192.168.1.101:3306)
Sat Oct 10 03:14:17 2015 - [info]     Primary candidate for the new Master (candidate_master is set)
Sat Oct 10 03:14:17 2015 - [info]   node3(192.168.1.103:3306)  Version=5.6.26-log (oldest major version between slaves) log-bin:enabled
Sat Oct 10 03:14:17 2015 - [info]     Replicating from 192.168.1.101(192.168.1.101:3306)
Sat Oct 10 03:14:17 2015 - [info]     Primary candidate for the new Master (candidate_master is set)
Sat Oct 10 03:14:17 2015 - [info] Current Alive Master: node1(192.168.1.101:3306)
Sat Oct 10 03:14:17 2015 - [info] Checking slave configurations..
Sat Oct 10 03:14:17 2015 - [info]  read_only=1 is not set on slave node2(192.168.1.102:3306).
Sat Oct 10 03:14:17 2015 - [warning]  relay_log_purge=0 is not set on slave node2(192.168.1.102:3306).
Sat Oct 10 03:14:17 2015 - [info]  read_only=1 is not set on slave node3(192.168.1.103:3306).
Sat Oct 10 03:14:17 2015 - [warning]  relay_log_purge=0 is not set on slave node3(192.168.1.103:3306).

The relay_log_purge warnings (and the read_only notes above) have to be addressed by hand on the slaves; see the sketch below.
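A minimal sketch, run on node2 and node3 and persisted in my.cnf as well (the MySQL password is elided as xxx). With relay_log_purge=0, relay logs should instead be purged out of band with the purge_relay_logs tool shipped in the mha4mysql-node package:

mysql> set global relay_log_purge=0;
mysql> set global read_only=1;

# then purge relay logs periodically, e.g. from cron on each slave:
purge_relay_logs --user=admin --password=xxx --disable_relay_log_purge --workdir=/tmp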
Sat Oct 10 03:14:17 2015 - [info] Checking replication filtering settings..
Sat Oct 10 03:14:17 2015 - [info]  binlog_do_db= , binlog_ignore_db=
Sat Oct 10 03:14:17 2015 - [info]  Replication filtering check ok.
Sat Oct 10 03:14:17 2015 - [info] GTID (with auto-pos) is not supported
Sat Oct 10 03:14:17 2015 - [info] Starting SSH connection tests..
Sat Oct 10 03:14:18 2015 - [info] All SSH connection tests passed successfully.
Sat Oct 10 03:14:18 2015 - [info] Checking MHA Node version..
Sat Oct 10 03:14:19 2015 - [info]  Version check ok.
Sat Oct 10 03:14:19 2015 - [info] Checking SSH publickey authentication settings on the current master..
Sat Oct 10 03:14:19 2015 - [info] HealthCheck: SSH to node1 is reachable.
Sat Oct 10 03:14:19 2015 - [info] Master MHA Node version is 0.56.
Sat Oct 10 03:14:19 2015 - [info] Checking recovery script configurations on node1(192.168.1.101:3306)..
Sat Oct 10 03:14:19 2015 - [info]   Executing command: save_binary_logs --command=test --start_pos=4 --binlog_dir=/data/mysql/mysql3306/logs --output_file=/var/log/masterha/app1/save_binary_logs_test --manager_version=0.56 --start_file=mysql-bin.000037
Sat Oct 10 03:14:19 2015 - [info]   Connecting to root@192.168.1.101(node1:22)..
  Creating /var/log/masterha/app1 if not exists..    ok.

Simply creating this directory by hand beforehand is enough.
  Checking output directory is accessible or not..
   ok.
  Binlog found at /data/mysql/mysql3306/logs, up to mysql-bin.000037
Sat Oct 10 03:14:19 2015 - [info] Binlog setting check done.
Sat Oct 10 03:14:19 2015 - [info] Checking SSH publickey authentication and checking recovery script configurations on all alive slave servers..
Sat Oct 10 03:14:19 2015 - [info]   Executing command : apply_diff_relay_logs --command=test --slave_user='admin' --slave_host=node2 --slave_ip=192.168.1.102 --slave_port=3306 --workdir=/var/log/masterha/app1 --target_version=5.6.26-log --manager_version=0.56 --relay_log_info=/data/mysql/mysql3306/data/relay-log.info  --relay_dir=/data/mysql/mysql3306/data/  --slave_pass=xxx
Sat Oct 10 03:14:19 2015 - [info]   Connecting to root@192.168.1.102(node2:22)..
Can't exec "mysqlbinlog": No such file or directory at /usr/share/perl5/vendor_perl/MHA/BinlogManager.pm line 106.
The fix: mysqlbinlog is not on the PATH of node2 and node3, so create a symlink for it by running the following command on both nodes:

 ln -s /usr/local/mysql/bin/mysqlbinlog /usr/bin/mysqlbinlog

The rest of the failing check output looked like this:

mysqlbinlog version command failed with rc 1:0, please verify PATH, LD_LIBRARY_PATH, and client options
 at /usr/bin/apply_diff_relay_logs line 493
Sat Oct 10 03:14:20 2015 - [error][/usr/share/perl5/vendor_perl/MHA/MasterMonitor.pm, ln205] Slaves settings check failed!
Sat Oct 10 03:14:20 2015 - [error][/usr/share/perl5/vendor_perl/MHA/MasterMonitor.pm, ln413] Slave configuration failed.
Sat Oct 10 03:14:20 2015 - [error][/usr/share/perl5/vendor_perl/MHA/MasterMonitor.pm, ln424] Error happened on checking configurations.  at /usr/bin/masterha_check_repl line 48
Sat Oct 10 03:14:20 2015 - [error][/usr/share/perl5/vendor_perl/MHA/MasterMonitor.pm, ln523] Error happened on monitoring servers.
Sat Oct 10 03:14:20 2015 - [info] Got exit code 1 (Not master dead).

MySQL Replication Health is NOT OK!

Rerunning the check command then revealed:
    Testing mysql connection and privileges..sh: mysql: command not found
This means the mysql client also needs a symlink on node2 and node3:
 ln -s /usr/local/mysql/bin/mysql /usr/bin/mysql
Running the check once more:
[root@node1 app1]# masterha_check_repl --global_conf=/etc/masterha/masterha_default.conf --conf=/etc/masterha/app1.conf
...
MySQL Replication Health is OK.

Now copy all the files under /etc/masterha to the other two machines, as sketched below, and then run the same check commands on node2 and node3.
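A minimal sketch of the copy, reusing the SSH trust set up earlier:

[root@node1 ~]# for h in node2 node3; do scp -r /etc/masterha root@$h:/etc/; done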

Start the manager process

Start the MHA manager on node2:
[root@node2 masterha]# masterha_manager --global_conf=/etc/masterha/masterha_default.conf --conf=/etc/masterha/app1.conf >/tmp/mha_manager.log 2>&1 &
Run it inside a screen session so it survives terminal disconnects; getting comfortable with screen is up to you!
Check the log:
[root@node2 app1]# cat /var/log/masterha/app1/app1.log
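You can also verify that the manager is actually monitoring, and stop it cleanly when needed, with the masterha_check_status and masterha_stop tools that ship with the manager package; a sketch:

# check whether the manager is up and monitoring app1
[root@node2 ~]# masterha_check_status --conf=/etc/masterha/app1.conf
# stop the manager cleanly
[root@node2 ~]# masterha_stop --conf=/etc/masterha/app1.conf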

With that, MHA has been set up successfully.

To summarize:
1. Set up SSH trust between all the machines.
2. Build a one-master, multi-slave replication topology using classic (binlog file/position based) replication. For now MHA is not very friendly toward GTID-based replication, and it does not support a one-master, one-slave topology either.
3. Configure the masterha settings and the manager settings. Note: if no dedicated server is available as the manager node, it is best to install both the node and manager packages on every host, so the manager role can move freely.
4. Check the MHA configuration. A few things to watch for:
4.1 Reset the stale slave status on the master.
4.2 Create symlinks for mysql and mysqlbinlog:
ln -s /usr/local/mysql/bin/mysql /usr/bin/mysql
ln -s /usr/local/mysql/bin/mysqlbinlog /usr/bin/mysqlbinlog
5. Start the MHA manager.
