MySQL master-slave replication and common commands

MySQL 5.7 multi-source replication

(In short: one slave replicating from multiple masters, supported since MySQL 5.7.)

Environment:

Four machines are available:

IP                OS                 Alias
172.171.17.144    Fedora Linux 24    slave-144
172.171.17.145    Fedora Linux 24    master-145
172.171.17.146    Fedora Linux 24    master-146
172.171.17.147    Fedora Linux 24    master-147

Goal:

Install MySQL 5.7 on all four machines and configure multi-source replication on slave-144, so that databases and tables created on master-145, master-146 and master-147 are replicated to slave-144 in real time.

Setup steps:

  1. Install MySQL 5.7

This is my own test environment, so I install from the MySQL yum repository to get the latest version; whenever an update is released, a simple update pulls it in. For production, installing the latest stable binary package is recommended.

// Install the MySQL repository

rpm -ivh http://dev.mysql.com/get/mysql57-community-release-fc24-8.noarch.rpm

// Install the MySQL community client and server

dnf install mysql-community-client mysql-community-server

// Enable MySQL at boot

systemctl enable mysqld.service

// Start MySQL

systemctl start mysqld.service
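
As an optional sanity check, confirm that the service is running and which version was installed:

// Verify the service and the installed version
systemctl status mysqld.service
mysql --version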

  2. Configure multi-source replication

This mainly follows the official documentation; I experimented with the configuration file and skip the trial and error here, showing only the final configuration and notes. (References:

http://dev.mysql.com/doc/refman/5.7/en/replication-multi-source-tutorials.html

MySQL :: MySQL 5.7 Reference Manual :: 16.1.3.4 Setting Up Replication Using GTIDs

MySQL :: MySQL 5.7 Reference Manual :: 16.1.2.1 Setting the Replication Source Configuration)

1. Edit the configuration file:

slave-144(/etc/my.cnf):

[mysqld]

  # Enable the binary log, the foundation of multi-source replication

  log-bin = mysql-bin

  server-id = 144

  # The next two lines limit binlog size (MySQL caps max_binlog_size at 1GB) and expire
  # old binlogs after 90 days so they do not fill up the disk

  max-binlog-size=1G
  expire_logs_days=90

  # Enable GTID-based replication, the mode recommended for MySQL 5.7

  gtid-mode = on

  enforce-gtid-consistency = 1

On master-145, master-146 and master-147, add the same settings to the [mysqld] section of /etc/my.cnf. Note that server-id must be different on every host; here the last octet of each IP is used (server-id: 144-147).
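
One extra requirement on the slave: MySQL 5.7 multi-source replication only works with table-based replication repositories, so slave-144's [mysqld] section also needs something like the following two lines:

  # store per-channel replication state in tables (required for multiple channels)
  master_info_repository = TABLE
  relay_log_info_repository = TABLE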

// Restart MySQL

systemctl restart mysqld.service

2. Create the replication account

// Find the temporary random password generated during installation

grep 'temporary password' /var/log/mysqld.log

// Log in to MySQL

mysql -uroot -p'<random password>'

// Change the local root password

ALTER USER 'root'@'localhost' IDENTIFIED BY 'mypassword=new';

// Create the replication account

CREATE USER 'crawler'@'%' IDENTIFIED BY 'mypassword=new';

// Grant the account all privileges so it can be used remotely

GRANT ALL PRIVILEGES ON *.* TO 'crawler'@'%';

// Reload privileges

flush privileges;
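
Granting all privileges is convenient in a test environment, but replication itself only needs the REPLICATION SLAVE privilege; a least-privilege alternative would be:

// Minimal grant for a replication-only account
GRANT REPLICATION SLAVE ON *.* TO 'crawler'@'%';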

3. Configure multi-source replication on slave-144

Log in to MySQL on slave-144 with the mysql client using the replication account, then run:

// Configure the replication channels for master-145, master-146 and master-147

change master to master_host='172.171.17.145', master_user='crawler', master_port=3306, master_password='mypassword=new', MASTER_AUTO_POSITION = 1 for CHANNEL 'master-145';

change master to master_host='172.171.17.146', master_user='crawler', master_port=3306, master_password='mypassword=new', MASTER_AUTO_POSITION = 1 for CHANNEL 'master-146';

change master to master_host='172.171.17.147', master_user='crawler', master_port=3306, master_password='mypassword=new', MASTER_AUTO_POSITION = 1 for CHANNEL 'master-147';

// Start the replication channel for each of the three sources

start slave for channel 'master-145';

start slave for channel 'master-146';

start slave for channel 'master-147';

// Check whether the configuration succeeded

show slave status;
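
SHOW SLAVE STATUS returns one row per channel in a multi-source setup; to inspect a single channel, and to check that both Slave_IO_Running and Slave_SQL_Running are Yes, something like this can be used:

// Per-channel status for one source
show slave status for channel 'master-145'\G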

At this point replication is basically working; create databases and tables on master-145, master-146 and master-147 and verify that they are replicated to the slave in real time:
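
A quick end-to-end check (with a hypothetical database and table name) might look like this:

// On master-145:
create database repl_test;
create table repl_test.t1 (id int primary key, name varchar(32));
insert into repl_test.t1 values (1, 'hello');

// On slave-144 shortly afterwards, the data should already be there:
select * from repl_test.t1;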

  

// Commands to stop and reset replication

stop slave for channel 'master-145';

stop slave for channel 'master-146';

stop slave for channel 'master-147';

reset slave for channel 'master-145';

reset slave for channel 'master-146';

reset slave for channel 'master-147';
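
Note that reset slave for channel ... clears the channel's relay logs and position but keeps its connection parameters; to remove a channel completely, the ALL form can be used (shown here for one channel):

// Drop the channel definition entirely (stop it first)
reset slave all for channel 'master-145';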

Common MySQL master-slave replication commands (note: the named-connection syntax below, e.g. change master 'crawler1' to ..., show all slaves status, start/stop slave 'name', is MariaDB's multi-source replication syntax; MySQL 5.7 uses the FOR CHANNEL form shown earlier):

change master 'crawler1' to master_host='192.168.10.108',master_port=3306,master_user='crawler',master_password='crawler';
change master 'crawler2' to master_host='192.168.10.109',master_port=3306,master_user='crawler',master_password='crawler';
change master 'crawler3' to master_host='192.168.10.110',master_port=3306,master_user='crawler',master_password='crawler';
change master 'crawler4' to master_host='192.168.10.111',master_port=3306,master_user='crawler',master_password='crawler';
show all slaves status;
 

show slave 'crawler1' status;
show slave 'crawler2' status;
show slave 'crawler3' status;
show slave 'crawler4' status;
show slave 'crawler5' status;
show slave 'crawler6' status;
show slave 'crawler7_8' status;
show slave 'crawler9_10' status;

set global crawler1.replicate_do_db = "crawler1";
set global crawler1.replicate_do_table = "crawler1.controller_config,crawler1.controller_task_input,crawler1.controller_task_output,crawler1.controller_task_status,crawler1.controller_server_status";

set global crawler2.replicate_do_db = "crawler2";
set global crawler2.replicate_do_table = "crawler2.controller_config,crawler2.controller_task_input,crawler2.controller_task_output,crawler2.controller_task_status,crawler2.controller_server_status";

set global crawler3.replicate_do_db = "crawler3";
set global crawler3.replicate_do_table = "crawler3.controller_config,crawler3.controller_task_input,crawler3.controller_task_output,crawler3.controller_task_status,crawler3.controller_server_status";

set global crawler4.replicate_do_db = "crawler4";
set global crawler4.replicate_do_table = "crawler4.controller_config,crawler4.controller_task_input,crawler4.controller_task_output,crawler4.controller_task_status,crawler4.controller_server_status";

set global crawler5.replicate_do_db = "crawler5";
set global crawler5.replicate_do_table = "crawler5.controller_config,crawler5.controller_task_input,crawler5.controller_task_output,crawler5.controller_task_status,crawler5.controller_server_status";

set global crawler6.replicate_do_db = "crawler6";
set global crawler6.replicate_do_table = "crawler6.controller_config,crawler6.controller_task_input,crawler6.controller_task_output,crawler6.controller_task_status,crawler6.controller_server_status";

set global crawler7_8.replicate_do_db = "crawler7,crawler8";
set global crawler7_8.replicate_do_table = "crawler7.controller_config,crawler7.controller_task_input,crawler7.controller_task_output,crawler7.controller_task_status,crawler7.controller_server_status,crawler8.controller_config,crawler8.controller_task_input,crawler8.controller_task_output,crawler8.controller_task_status,crawler8.controller_server_status";

set global crawler9_10.replicate_do_db = "crawler9,crawler10";
set global crawler9_10.replicate_do_table = "crawler9.controller_config,crawler9.controller_task_input,crawler9.controller_task_output,crawler9.controller_task_status,crawler9.controller_server_status,crawler10.controller_config,crawler10.controller_task_input,crawler10.controller_task_output,crawler10.controller_task_status,crawler10.controller_server_status";
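
These per-connection variables are MariaDB replication filters: each connection only applies changes for the listed databases and tables. They are normally changed while that connection's replication is stopped, for example:

// Change the filter for one connection while it is stopped
stop slave 'crawler1';
set global crawler1.replicate_do_db = "crawler1";
start slave 'crawler1';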
start slave 'crawler1';
start slave 'crawler2';
start slave 'crawler3';
start slave 'crawler4';
start slave 'crawler5';
start slave 'crawler6';
start slave 'crawler7_8';
start slave 'crawler9_10';
stop slave 'crawler1';
stop slave 'crawler2';
stop slave 'crawler3';
stop slave 'crawler4';
stop slave 'crawler5';
stop slave 'crawler6';
stop slave 'crawler7_8';
stop slave 'crawler9_10';

reset slave 'crawler1' all;
reset slave 'crawler2' all;
reset slave 'crawler3' all;
reset slave 'crawler4' all;

start all slaves;
stop all slaves;

set @@default_master_connection = 'crawler1';
set global sql_slave_skip_counter = 1;
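
In MariaDB, sql_slave_skip_counter (like the other per-connection settings) applies to the connection named by @@default_master_connection, so the usual way to skip a single broken event on one source is:

// Skip one event on connection 'crawler1' only
stop slave 'crawler1';
set @@default_master_connection = 'crawler1';
set global sql_slave_skip_counter = 1;
start slave 'crawler1';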

reset slave all;
reset master;

show master status;


change master 'crawler1' to master_host='192.168.72.52',master_port=3306,master_user='crawler',master_password='crawler', master_log_file='crawler1-bin.000082', master_log_pos=107720375;
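
This last form sets up position-based (non-GTID) replication for the connection; the master_log_file and master_log_pos values are taken from the master's current binlog coordinates:

// On the master, record the File and Position columns of the output
show master status;
// Then use those values as master_log_file / master_log_pos in the CHANGE MASTER statement above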
 
