MySQL MHA 0.58 with MySQL 5.6.39

After setting up MHA, the replication health check fails:

root@manager[/root]#/usr/local/bin/masterha_check_repl --conf=/u01/mha/etc/app.cnf
Sat Oct  6 22:30:36 2018 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Sat Oct  6 22:30:36 2018 - [info] Reading application default configuration from /u01/mha/etc/app.cnf..
Sat Oct  6 22:30:36 2018 - [info] Reading server configuration from /u01/mha/etc/app.cnf..
Sat Oct  6 22:30:36 2018 - [info] MHA::MasterMonitor version 0.58.
Sat Oct  6 22:30:38 2018 - [info] GTID failover mode = 0
Sat Oct  6 22:30:38 2018 - [info] Dead Servers:
Sat Oct  6 22:30:38 2018 - [info] Alive Servers:
Sat Oct  6 22:30:38 2018 - [info]   master(192.168.2.118:3306)
Sat Oct  6 22:30:38 2018 - [info]   slave(192.168.2.119:3306)
Sat Oct  6 22:30:38 2018 - [info]   manager(192.168.2.120:3306)
Sat Oct  6 22:30:38 2018 - [info] Alive Slaves:
Sat Oct  6 22:30:38 2018 - [info]   slave(192.168.2.119:3306)  Version=5.6.39-log (oldest major version between slaves) log-bin:enabled
Sat Oct  6 22:30:38 2018 - [info]     Replicating from 192.168.2.118(192.168.2.118:3306)
Sat Oct  6 22:30:38 2018 - [info]     Primary candidate for the new Master (candidate_master is set)
Sat Oct  6 22:30:38 2018 - [info]   manager(192.168.2.120:3306)  Version=5.6.39-log (oldest major version between slaves) log-bin:enabled
Sat Oct  6 22:30:38 2018 - [info]     Replicating from 192.168.2.118(192.168.2.118:3306)
Sat Oct  6 22:30:38 2018 - [info]     Not candidate for the new Master (no_master is set)
Sat Oct  6 22:30:38 2018 - [info] Current Alive Master: master(192.168.2.118:3306)
Sat Oct  6 22:30:38 2018 - [info] Checking slave configurations..
Sat Oct  6 22:30:38 2018 - [info]  read_only=1 is not set on slave slave(192.168.2.119:3306).
Sat Oct  6 22:30:38 2018 - [warning]  relay_log_purge=0 is not set on slave slave(192.168.2.119:3306).
Sat Oct  6 22:30:38 2018 - [info]  read_only=1 is not set on slave manager(192.168.2.120:3306).
Sat Oct  6 22:30:38 2018 - [warning]  relay_log_purge=0 is not set on slave manager(192.168.2.120:3306).
Sat Oct  6 22:30:38 2018 - [info] Checking replication filtering settings..
Sat Oct  6 22:30:38 2018 - [info]  binlog_do_db= , binlog_ignore_db= 
Sat Oct  6 22:30:38 2018 - [info]  Replication filtering check ok.
Sat Oct  6 22:30:38 2018 - [info] GTID (with auto-pos) is not supported
Sat Oct  6 22:30:38 2018 - [info] Starting SSH connection tests..
Sat Oct  6 22:30:40 2018 - [info] All SSH connection tests passed successfully.
Sat Oct  6 22:30:40 2018 - [info] Checking MHA Node version..
Sat Oct  6 22:30:40 2018 - [info]  Version check ok.
Sat Oct  6 22:30:40 2018 - [info] Checking SSH publickey authentication settings on the current master..
Sat Oct  6 22:30:41 2018 - [info] HealthCheck: SSH to master is reachable.
Sat Oct  6 22:30:41 2018 - [info] Master MHA Node version is 0.58.
Sat Oct  6 22:30:41 2018 - [info] Checking recovery script configurations on master(192.168.2.118:3306)..
Sat Oct  6 22:30:41 2018 - [info]   Executing command: save_binary_logs --command=test --start_pos=4 --binlog_dir=/u01/my3306/log/binlog --output_file=/u01/mha/etc/app/save_binary_logs_test --manager_version=0.58 --start_file=binlog.000010 
Sat Oct  6 22:30:41 2018 - [info]   Connecting to root@192.168.2.118(master:22).. 
  Creating /u01/mha/etc/app if not exists..    ok.
  Checking output directory is accessible or not..
   ok.
  Binlog found at /u01/my3306/log/binlog, up to binlog.000010
Sat Oct  6 22:30:41 2018 - [info] Binlog setting check done.
Sat Oct  6 22:30:41 2018 - [info] Checking SSH publickey authentication and checking recovery script configurations on all alive slave servers..
Sat Oct  6 22:30:41 2018 - [info]   Executing command : apply_diff_relay_logs --command=test --slave_user='root' --slave_host=slave --slave_ip=192.168.2.119 --slave_port=3306 --workdir=/u01/mha/etc/app --target_version=5.6.39-log --manager_version=0.58 --relay_log_info=/u01/my3306/log/relay-log.info  --relay_dir=/u01/my3306/data/  --slave_pass=xxx
Sat Oct  6 22:30:41 2018 - [info]   Connecting to root@192.168.2.119(slave:22).. 
  Checking slave recovery environment settings..
    Opening /u01/my3306/log/relay-log.info ... ok.
    Relay log found at /u01/my3306/log, up to relaylog.000005
    Temporary relay log file is /u01/my3306/log/relaylog.000005
    Checking if super_read_only is defined and turned on..DBD::mysql::st execute failed: Unknown system variable 'super_read_only' at /usr/local/share/perl5/MHA/SlaveUtil.pm line 245.
Sat Oct  6 22:30:41 2018 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln208] Slaves settings check failed!
Sat Oct  6 22:30:41 2018 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln416] Slave configuration failed.
Sat Oct  6 22:30:41 2018 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln427] Error happened on checking configurations.  at /usr/local/bin/masterha_check_repl line 48.
Sat Oct  6 22:30:41 2018 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln525] Error happened on monitoring servers.
Sat Oct  6 22:30:41 2018 - [info] Got exit code 1 (Not master dead).

MySQL Replication Health is NOT OK!
Running the check a second time (read_only=1 has since been set on the slaves, so those info lines disappear) fails in exactly the same way:
root@manager[/root]#/usr/local/bin/masterha_check_repl --conf=/u01/mha/etc/app.cnf
Sat Oct  6 23:10:15 2018 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Sat Oct  6 23:10:15 2018 - [info] Reading application default configuration from /u01/mha/etc/app.cnf..
Sat Oct  6 23:10:15 2018 - [info] Reading server configuration from /u01/mha/etc/app.cnf..
Sat Oct  6 23:10:15 2018 - [info] MHA::MasterMonitor version 0.58.
Sat Oct  6 23:10:16 2018 - [info] GTID failover mode = 0
Sat Oct  6 23:10:16 2018 - [info] Dead Servers:
Sat Oct  6 23:10:16 2018 - [info] Alive Servers:
Sat Oct  6 23:10:16 2018 - [info]   master(192.168.2.118:3306)
Sat Oct  6 23:10:16 2018 - [info]   slave(192.168.2.119:3306)
Sat Oct  6 23:10:16 2018 - [info]   manager(192.168.2.120:3306)
Sat Oct  6 23:10:16 2018 - [info] Alive Slaves:
Sat Oct  6 23:10:16 2018 - [info]   slave(192.168.2.119:3306)  Version=5.6.39-log (oldest major version between slaves) log-bin:enabled
Sat Oct  6 23:10:16 2018 - [info]     Replicating from 192.168.2.118(192.168.2.118:3306)
Sat Oct  6 23:10:16 2018 - [info]     Primary candidate for the new Master (candidate_master is set)
Sat Oct  6 23:10:16 2018 - [info]   manager(192.168.2.120:3306)  Version=5.6.39-log (oldest major version between slaves) log-bin:enabled
Sat Oct  6 23:10:16 2018 - [info]     Replicating from 192.168.2.118(192.168.2.118:3306)
Sat Oct  6 23:10:16 2018 - [info]     Not candidate for the new Master (no_master is set)
Sat Oct  6 23:10:16 2018 - [info] Current Alive Master: master(192.168.2.118:3306)
Sat Oct  6 23:10:16 2018 - [info] Checking slave configurations..
Sat Oct  6 23:10:16 2018 - [warning]  relay_log_purge=0 is not set on slave slave(192.168.2.119:3306).
Sat Oct  6 23:10:16 2018 - [warning]  relay_log_purge=0 is not set on slave manager(192.168.2.120:3306).
Sat Oct  6 23:10:16 2018 - [info] Checking replication filtering settings..
Sat Oct  6 23:10:16 2018 - [info]  binlog_do_db= , binlog_ignore_db= 
Sat Oct  6 23:10:16 2018 - [info]  Replication filtering check ok.
Sat Oct  6 23:10:16 2018 - [info] GTID (with auto-pos) is not supported
Sat Oct  6 23:10:16 2018 - [info] Starting SSH connection tests..
Sat Oct  6 23:10:18 2018 - [info] All SSH connection tests passed successfully.
Sat Oct  6 23:10:18 2018 - [info] Checking MHA Node version..
Sat Oct  6 23:10:18 2018 - [info]  Version check ok.
Sat Oct  6 23:10:18 2018 - [info] Checking SSH publickey authentication settings on the current master..
Sat Oct  6 23:10:18 2018 - [info] HealthCheck: SSH to master is reachable.
Sat Oct  6 23:10:18 2018 - [info] Master MHA Node version is 0.58.
Sat Oct  6 23:10:18 2018 - [info] Checking recovery script configurations on master(192.168.2.118:3306)..
Sat Oct  6 23:10:18 2018 - [info]   Executing command: save_binary_logs --command=test --start_pos=4 --binlog_dir=/u01/my3306/log/binlog --output_file=/u01/mha/etc/app/save_binary_logs_test --manager_version=0.58 --start_file=binlog.000010 
Sat Oct  6 23:10:18 2018 - [info]   Connecting to root@192.168.2.118(master:22).. 
  Creating /u01/mha/etc/app if not exists..    ok.
  Checking output directory is accessible or not..
   ok.
  Binlog found at /u01/my3306/log/binlog, up to binlog.000010
Sat Oct  6 23:10:18 2018 - [info] Binlog setting check done.
Sat Oct  6 23:10:18 2018 - [info] Checking SSH publickey authentication and checking recovery script configurations on all alive slave servers..
Sat Oct  6 23:10:18 2018 - [info]   Executing command : apply_diff_relay_logs --command=test --slave_user='root' --slave_host=slave --slave_ip=192.168.2.119 --slave_port=3306 --workdir=/u01/mha/etc/app --target_version=5.6.39-log --manager_version=0.58 --relay_log_info=/u01/my3306/log/relay-log.info  --relay_dir=/u01/my3306/data/  --slave_pass=xxx
Sat Oct  6 23:10:18 2018 - [info]   Connecting to root@192.168.2.119(slave:22).. 
  Checking slave recovery environment settings..
    Opening /u01/my3306/log/relay-log.info ... ok.
    Relay log found at /u01/my3306/log, up to relaylog.000005
    Temporary relay log file is /u01/my3306/log/relaylog.000005
    Checking if super_read_only is defined and turned on..DBD::mysql::st execute failed: Unknown system variable 'super_read_only' at /usr/local/share/perl5/MHA/SlaveUtil.pm line 245.
Sat Oct  6 23:10:19 2018 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln208] Slaves settings check failed!
Sat Oct  6 23:10:19 2018 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln416] Slave configuration failed.
Sat Oct  6 23:10:19 2018 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln427] Error happened on checking configurations.  at /usr/local/bin/masterha_check_repl line 48.
Sat Oct  6 23:10:19 2018 - [error][/usr/local/share/perl5/MHA/MasterMonitor.pm, ln525] Error happened on monitoring servers.
Sat Oct  6 23:10:19 2018 - [info] Got exit code 1 (Not master dead).

MySQL Replication Health is NOT OK!

The check that fails is:

Checking if super_read_only is defined and turned on

MHA checks whether the super_read_only variable is defined on the slave and whether it is turned on. The variable does not exist on this server, so the query errors out with "Unknown system variable 'super_read_only'".
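To illustrate why the check blows up rather than simply reporting "off", here is a minimal sketch in Python (a hypothetical stand-in for MHA's SlaveUtil.pm logic, with server variables simulated as dictionaries rather than queried over a live connection):

```python
# Simulated global variables per server version (illustrative values only).
# On MySQL 5.6 the super_read_only key is simply absent.
SERVER_VARIABLES = {
    "5.6.39": {"read_only": "ON", "relay_log_purge": "ON"},
    "5.7.25": {"read_only": "ON", "relay_log_purge": "ON", "super_read_only": "OFF"},
}

def check_super_read_only(version: str) -> str:
    """Mimic MHA's check: querying an undefined variable raises an error,
    which MHA treats as a failed slave-settings check."""
    variables = SERVER_VARIABLES[version]
    if "super_read_only" not in variables:
        raise RuntimeError("Unknown system variable 'super_read_only'")
    return variables["super_read_only"]

print(check_super_read_only("5.7.25"))   # the variable exists on 5.7 -> prints OFF
try:
    check_super_read_only("5.6.39")      # raises, like the MHA log above
except RuntimeError as e:
    print(e)
```

The key point is that on 5.6 the failure is an error at the SQL layer, not a "variable is off" result, so masterha_check_repl aborts the whole configuration check.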

Checking the MySQL version confirms it: the servers run 5.6. The super_read_only variable does not exist in 5.6; it was introduced in MySQL 5.7.

The fix is to use MHA Manager 0.56 instead of the 0.58 used here, since 0.56 does not perform this check against MySQL 5.6.
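The version pairing can be summarized as a small helper. This is a hypothetical illustration of the compatibility rule described above, not part of MHA itself:

```python
def compatible_mha_manager(mysql_version: str) -> str:
    """Pick an MHA Manager version that works with the given MySQL server.

    MHA Manager 0.58 queries super_read_only, which only exists in
    MySQL >= 5.7, so older servers need Manager 0.56.
    """
    major, minor = (int(x) for x in mysql_version.split(".")[:2])
    if (major, minor) < (5, 7):
        return "0.56"
    return "0.58"

print(compatible_mha_manager("5.6.39"))  # -> 0.56
print(compatible_mha_manager("5.7.25"))  # -> 0.58
```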

