MGR Single-Primary Mode Deployment

1. Create the replication account

Create the replication account on every MySQL node.

set sql_log_bin=0;      #With this set to 0, statements executed in this session are not written to the binlog

GRANT REPLICATION SLAVE ON *.* TO 'repl'@'192.168.136.*' IDENTIFIED BY 'repl';

CHANGE MASTER TO MASTER_USER='repl',MASTER_PASSWORD='repl' FOR CHANNEL 'group_replication_recovery';

set sql_log_bin=1;
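
As an optional sanity check (not part of the original steps), you can confirm on each node that the account and its grant were created as expected:

#Optional: verify that the replication account exists and has the expected grant
select user, host from mysql.user where user = 'repl';
show grants for 'repl'@'192.168.136.*';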
2. Modify configuration parameters

Edit my.cnf on every MySQL node, add the following parameters, and restart the instance.

[mysqld]

master-info-repository=TABLE                     #Store replication metadata in system tables instead of files
relay-log-info-repository=TABLE                  #Store relay log metadata in system tables instead of files
gtid_mode=on                                     #Enable global transaction IDs (GTID)
enforce_gtid_consistency=on                      #Enforce GTID consistency
slave_preserve_commit_order=on                   #Preserve the commit order when the replica applies transactions in parallel
binlog_checksum=NONE                             #Disable binlog event checksums
transaction_write_set_extraction=XXHASH64        #Have the server collect write sets and record them in the binlog; a write set is based on each row's primary key, uniquely identifies the row change, and is used for conflict detection
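
These settings come in addition to the usual replication prerequisites (log_bin, server_id, binlog_format=ROW, log_slave_updates) that group replication also requires and that are assumed to be configured already. As an optional check after the restart, the new values can be confirmed from a client, for example:

#Optional: confirm the parameters after the restart (the list mirrors the settings above)
show global variables where Variable_name in
('gtid_mode','enforce_gtid_consistency','binlog_checksum',
 'transaction_write_set_extraction','slave_preserve_commit_order');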

3. Configure the primary node

Choose one MySQL node as the primary, install the group replication plugin, configure the group replication parameters, and write the same parameters into my.cnf.

#Open port 23306
firewall-cmd --zone=public --add-port=23306/tcp --permanent
firewall-cmd --reload
#Install the group replication plugin
INSTALL PLUGIN group_replication SONAME 'group_replication.so';

#Generate a UUID with SELECT UUID().
mysql> select uuid();
+--------------------------------------+
| uuid()                               |
+--------------------------------------+
| 3a90016b-afe0-11ea-ab97-00505636b68c |
+--------------------------------------+
1 row in set (0.00 sec)

set global group_replication_group_name='3a90016b-afe0-11ea-ab97-00505636b68c';		#The group name can be anything as long as it is a valid UUID, but do not reuse a server's own UUID. All nodes must use the same group name. This UUID is used internally when GTIDs are set for group replication events in the binlog. Generate one with SELECT UUID().
set global group_replication_local_address='192.168.136.136:23306';				#Each node sets its own IP address and port here; use a port different from MySQL's own 3306
set global group_replication_group_seeds='192.168.136.136:23306,192.168.136.135:23306,192.168.136.137:23306';			#The nodes (IP:Port) from which a newly joining member can fetch the data it needs for recovery
set global group_replication_start_on_boot=off;					#Whether group replication starts automatically when the instance starts
set global group_replication_bootstrap_group=on;				#Only used when bootstrapping the cluster; execute on the primary node only, not on the other nodes
set global group_replication_ip_whitelist='192.168.136.0/24';				#IP whitelist that controls which hosts may connect to the group communication system

Note: when these parameters are written to my.cnf, the group_replication parameters should be prefixed with loose-, for example loose-group_replication_group_name, so the server can still start even if the plugin has not been installed yet.

loose-group_replication_group_name='3a90016b-afe0-11ea-ab97-00505636b68c'
loose-group_replication_local_address='192.168.136.136:23306'
loose-group_replication_group_seeds='192.168.136.136:23306,192.168.136.135:23306,192.168.136.137:23306'
loose-group_replication_start_on_boot=off
#loose-group_replication_bootstrap_group=on
loose-group_replication_ip_whitelist='192.168.136.0/24'
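
Before starting group replication on the primary, it can be worth double-checking (optional) that the variables currently in effect hold the intended values:

#Optional: confirm the group replication settings currently in effect
show global variables like 'group_replication%';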
4. Start group replication on the primary node and check the log
start group_replication;

If the log shows: Plugin group_replication reported: 'This server is working as primary member.', group replication started successfully.

2020-06-16T14:49:09.898607Z 17 [Note] Plugin group_replication reported: 'Group communication SSL configuration: group_replication_ssl_mode: "DISABLED"'
2020-06-16T14:49:09.899050Z 17 [Warning] Plugin group_replication reported: '[GCS] Automatically adding IPv4 localhost address to the whitelist. It is mandatory that it is added.'
2020-06-16T14:49:09.899464Z 17 [Note] Plugin group_replication reported: '[GCS] SSL was not enabled'
2020-06-16T14:49:09.899496Z 17 [Note] Plugin group_replication reported: 'Initialized group communication with configuration: group_replication_group_name: "3a90016b-afe0-11ea-ab97-00505636b68c"; group_replication_local_address: "192.168.136.136:23306"; group_replication_group_seeds: "192.168.136.136:23306,192.168.136.135:23306,192.168.136.137:23306"; group_replication_bootstrap_group: true; group_replication_poll_spin_loops: 0; group_replication_compression_threshold: 1000000; group_replication_ip_whitelist: "192.168.136.0/24"'
2020-06-16T14:49:09.899566Z 17 [Note] Plugin group_replication reported: '[GCS] Configured number of attempts to join: 0'
2020-06-16T14:49:09.899580Z 17 [Note] Plugin group_replication reported: '[GCS] Configured time between attempts to join: 5 seconds'
2020-06-16T14:49:09.899632Z 17 [Note] Plugin group_replication reported: 'Member configuration: member_id: 1363306; member_uuid: "839d3796-a8a1-11ea-b1db-00505636b68c"; single-primary mode: "true"; group_replication_auto_increment_increment: 7; '
2020-06-16T14:49:09.899961Z 20 [Note] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_applier' executed'. Previous state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 373, master_bind=''. New state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''.
2020-06-16T14:49:09.914952Z 17 [Note] Plugin group_replication reported: 'Group Replication applier module successfully initialized!'
2020-06-16T14:49:09.915006Z 17 [Note] Plugin group_replication reported: 'auto_increment_increment is set to 7'
2020-06-16T14:49:09.915033Z 17 [Note] Plugin group_replication reported: 'auto_increment_offset is set to 1363306'
2020-06-16T14:49:09.915005Z 23 [Note] Slave SQL thread for channel 'group_replication_applier' initialized, starting replication in log 'FIRST' at position 0, relay log './ip136-relay-bin-group_replication_applier.000002' position: 4
2020-06-16T14:49:09.915432Z 0 [Note] Plugin group_replication reported: 'XCom protocol version: 3'
2020-06-16T14:49:09.915447Z 0 [Note] Plugin group_replication reported: 'XCom initialized and ready to accept incoming connections on port 23306'
2020-06-16T14:49:10.921514Z 0 [Note] Plugin group_replication reported: 'Group membership changed to ip136:3306 on view 15923189509206964:1.'
2020-06-16T14:49:10.921627Z 30 [Note] Plugin group_replication reported: 'Only one server alive. Declaring this server as online within the replication group'
2020-06-16T14:49:11.231107Z 0 [Note] Plugin group_replication reported: 'This server was declared online within the replication group'
2020-06-16T14:49:11.231322Z 0 [Note] Plugin group_replication reported: 'A new primary with address ip136:3306 was elected, enabling conflict detection until the new primary applies all relay logs.'
2020-06-16T14:49:11.231425Z 32 [Note] Plugin group_replication reported: 'This server is working as primary member.'
5. Check the current cluster state on the primary node via the performance_schema views
mysql> select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 839d3796-a8a1-11ea-b1db-00505636b68c | ip136       |        3306 | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
1 row in set (0.00 sec)

mysql> select VARIABLE_VALUE from performance_schema.global_status where VARIABLE_NAME='group_replication_primary_member';
+--------------------------------------+
| VARIABLE_VALUE                       |
+--------------------------------------+
| 839d3796-a8a1-11ea-b1db-00505636b68c |
+--------------------------------------+
1 row in set (0.00 sec)
6. Turn off group_replication_bootstrap_group on the primary node
set global group_replication_bootstrap_group=off;
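
The flag must be switched back to off once the group exists: if it stayed on, a later restart of this node could bootstrap a brand-new group and split the cluster. An optional check that it is really off:

#Optional: confirm the bootstrap flag is off again
select @@global.group_replication_bootstrap_group;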
7. Configure the secondary nodes

On each of the two nodes that will join the cluster, install the group replication plugin, configure the group replication parameters, and write them into my.cnf as well.

#Open port 23306
firewall-cmd --zone=public --add-port=23306/tcp --permanent
firewall-cmd --reload
INSTALL PLUGIN group_replication SONAME 'group_replication.so';

#Node 1
set global group_replication_group_name='3a90016b-afe0-11ea-ab97-00505636b68c';
set global group_replication_local_address='192.168.136.135:23306';
set global group_replication_group_seeds='192.168.136.136:23306,192.168.136.135:23306,192.168.136.137:23306';
set global group_replication_start_on_boot=off;
set global group_replication_ip_whitelist='192.168.136.0/24';


loose-group_replication_group_name='3a90016b-afe0-11ea-ab97-00505636b68c'
loose-group_replication_local_address='192.168.136.135:23306'
loose-group_replication_group_seeds='192.168.136.136:23306,192.168.136.135:23306,192.168.136.137:23306'
loose-group_replication_start_on_boot=off
loose-group_replication_ip_whitelist='192.168.136.0/24'

#Node 2
set global group_replication_group_name='3a90016b-afe0-11ea-ab97-00505636b68c';
set global group_replication_local_address='192.168.136.137:23306';
set global group_replication_group_seeds='192.168.136.136:23306,192.168.136.135:23306,192.168.136.137:23306';
set global group_replication_start_on_boot=off;
set global group_replication_ip_whitelist='192.168.136.0/24';

loose-group_replication_group_name='3a90016b-afe0-11ea-ab97-00505636b68c'
loose-group_replication_local_address='192.168.136.137:23306'
loose-group_replication_group_seeds='192.168.136.136:23306,192.168.136.135:23306,192.168.136.137:23306'
loose-group_replication_start_on_boot=off
loose-group_replication_ip_whitelist='192.168.136.0/24'
8. Start group replication on both nodes

Start group replication on the two nodes one after another so they join the cluster. You can confirm a successful join by checking the log or the views.

#Start group replication
start group_replication;

#Check the view
mysql> select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 839d3796-a8a1-11ea-b1db-00505636b68c | ip136       |        3306 | ONLINE       |
| group_replication_applier | e741120b-a7e9-11ea-9af7-0050563bbd20 | ip135       |        3306 | ONLINE       |
| group_replication_applier | f44ecd83-a8d1-11ea-ad97-0050562512fc | ip137       |        3306 | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
3 rows in set (0.00 sec)


#If the log prints the following, the startup succeeded
2020-06-16T14:56:17.265752Z 2 [Note] Plugin group_replication reported: 'This server is working as secondary member with primary member address ip136:3306.'
...
2020-06-16T14:56:17.344919Z 0 [Note] Plugin group_replication reported: 'This server was declared online within the replication group'

Detailed startup log:

2020-06-16T14:56:12.522704Z 2 [Note] Plugin group_replication reported: 'Group communication SSL configuration: group_replication_ssl_mode: "DISABLED"'
2020-06-16T14:56:12.523859Z 2 [Warning] Plugin group_replication reported: '[GCS] Automatically adding IPv4 localhost address to the whitelist. It is mandatory that it is added.'
2020-06-16T14:56:12.524045Z 2 [Note] Plugin group_replication reported: '[GCS] SSL was not enabled'
2020-06-16T14:56:12.524102Z 2 [Note] Plugin group_replication reported: 'Initialized group communication with configuration: group_replication_group_name: "3a90016b-afe0-11ea-ab97-00505636b68c"; group_replication_local_address: "192.168.136.135:23306"; group_replication_group_seeds: "192.168.136.136:23306,192.168.136.135:23306,192.168.136.137:23306"; group_replication_bootstrap_group: false; group_replication_poll_spin_loops: 0; group_replication_compression_threshold: 1000000; group_replication_ip_whitelist: "192.168.136.0/24"'
2020-06-16T14:56:12.524147Z 2 [Note] Plugin group_replication reported: '[GCS] Configured number of attempts to join: 0'
2020-06-16T14:56:12.524156Z 2 [Note] Plugin group_replication reported: '[GCS] Configured time between attempts to join: 5 seconds'
2020-06-16T14:56:12.524185Z 2 [Note] Plugin group_replication reported: 'Member configuration: member_id: 1353306; member_uuid: "e741120b-a7e9-11ea-9af7-0050563bbd20"; single-primary mode: "true"; group_replication_auto_increment_increment: 7; '
2020-06-16T14:56:12.530898Z 4 [Note] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_applier' executed'. Previous state master_host='', master_port= 3306, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''.
2020-06-16T14:56:12.557787Z 7 [Note] Slave SQL thread for channel 'group_replication_applier' initialized, starting replication in log 'FIRST' at position 0, relay log './ip135-relay-bin-group_replication_applier.000001' position: 4
2020-06-16T14:56:12.557788Z 2 [Note] Plugin group_replication reported: 'Group Replication applier module successfully initialized!'
2020-06-16T14:56:12.557874Z 2 [Note] Plugin group_replication reported: 'auto_increment_increment is set to 7'
2020-06-16T14:56:12.557881Z 2 [Note] Plugin group_replication reported: 'auto_increment_offset is set to 1353306'
2020-06-16T14:56:12.591272Z 0 [Note] Plugin group_replication reported: 'XCom protocol version: 3'
2020-06-16T14:56:12.591345Z 0 [Note] Plugin group_replication reported: 'XCom initialized and ready to accept incoming connections on port 23306'
2020-06-16T14:56:17.265752Z 2 [Note] Plugin group_replication reported: 'This server is working as secondary member with primary member address ip136:3306.'
2020-06-16T14:56:17.267305Z 14 [Note] Plugin group_replication reported: 'Establishing group recovery connection with a possible donor. Attempt 1/10'
2020-06-16T14:56:17.267876Z 0 [Note] Plugin group_replication reported: 'Group membership changed to ip136:3306, ip135:3306 on view 15923189509206964:2.'
2020-06-16T14:56:17.283582Z 14 [Note] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_recovery' executed'. Previous state master_host='', master_port= 3306, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='ip136', master_port= 3306, master_log_file='', master_log_pos= 4, master_bind=''.
2020-06-16T14:56:17.292194Z 14 [Note] Plugin group_replication reported: 'Establishing connection to a group replication recovery donor 839d3796-a8a1-11ea-b1db-00505636b68c at ip136 port: 3306.'
2020-06-16T14:56:17.292856Z 16 [Warning]
2020-06-16T14:56:17.300970Z 16 [Note] Slave I/O thread for channel 'group_replication_recovery': connected to master 'repl@ip136:3306',replication started in log 'FIRST' at position 4
2020-06-16T14:56:17.309664Z 17 [Note] Slave SQL thread for channel 'group_replication_recovery' initialized, starting replication in log 'FIRST' at position 0, relay log './ip135-relay-bin-group_replication_recovery.000001' position: 4
2020-06-16T14:56:17.322972Z 14 [Note] Plugin group_replication reported: 'Terminating existing group replication donor connection and purging the corresponding logs.'
2020-06-16T14:56:17.323386Z 17 [Warning] Slave SQL for channel 'group_replication_recovery': Coordinator thread of multi-threaded slave is being stopped in the middle of assigning a group of events; deferring to exit until the group completion ... , Error_code: 0
2020-06-16T14:56:17.324925Z 17 [Note] Slave SQL thread for channel 'group_replication_recovery' exiting, replication stopped in log 'mysql-binlog.000010' at position 1442
2020-06-16T14:56:17.325910Z 16 [Note] Slave I/O thread killed while reading event for channel 'group_replication_recovery'
2020-06-16T14:56:17.325935Z 16 [Note] Slave I/O thread exiting for channel 'group_replication_recovery', read up to log 'mysql-binlog.000010', position 1442
2020-06-16T14:56:17.337845Z 14 [Note] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_recovery' executed'. Previous state master_host='ip136', master_port= 3306, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''.
2020-06-16T14:56:17.344919Z 0 [Note] Plugin group_replication reported: 'This server was declared online within the replication group'


9. How to find the Primary member

After the cluster is up, you can find the Primary member by querying the views, or with show variables like '%read_only%' (an example follows after the queries below): because MGR runs in single-primary mode by default, the secondary nodes automatically enable read-only mode.

mysql> select VARIABLE_VALUE from performance_schema.global_status where VARIABLE_NAME='group_replication_primary_member';
+--------------------------------------+
| VARIABLE_VALUE                       |
+--------------------------------------+
| 839d3796-a8a1-11ea-b1db-00505636b68c |
+--------------------------------------+
1 row in set (0.00 sec)
#Find the Primary member
mysql> select * from performance_schema.replication_group_members where member_id = (select variable_value from performance_schema.global_status where VARIABLE_NAME = 'group_replication_primary_member');
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST | MEMBER_PORT | MEMBER_STATE |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
| group_replication_applier | 839d3796-a8a1-11ea-b1db-00505636b68c | ip136       |        3306 | ONLINE       |
+---------------------------+--------------------------------------+-------------+-------------+--------------+
1 row in set (0.00 sec)
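
The read-only check mentioned above looks like this; in single-primary mode only the primary accepts writes, so on a secondary both read_only and super_read_only show ON, while on the primary they show OFF:

#Run on any node: on a secondary, read_only and super_read_only are ON; on the primary they are OFF
show variables like '%read_only%';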


10. Common errors

1) Issue 1

mysql> INSTALL PLUGIN group_replication SONAME 'group_replication.so';
ERROR 1126 (HY000):

Cause: the *.so file you are trying to install cannot be found under the plugin_dir specified in the MySQL configuration file.

#Check the plugin_dir path
mysql> show global variables like 'plugin_dir';
+---------------+-------------------------------+
| Variable_name | Value                         |
+---------------+-------------------------------+
| plugin_dir    | /home/mysql/mysql/lib/plugin/ |
+---------------+-------------------------------+
1 row in set (0.00 sec)


#Copy the *.so file into this path
cp /home/mysql/mysql/mysql-5.7.30/lib/plugin/group_replication.so /home/mysql/mysql/lib/plugin/
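
After copying the file, re-run the INSTALL PLUGIN statement. Whether the plugin is now loaded can be confirmed with an optional query such as:

#Optional: confirm the plugin is installed and active
select plugin_name, plugin_status from information_schema.plugins where plugin_name = 'group_replication';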

2) Issue 2

Plugin group_replication reported: '[GCS] Error on opening a connection to 192.168.136.136:23306 on local port: 23306.'
Plugin group_replication reported: '[GCS] Error on opening a connection to 192.168.136.137:23306 on local port: 23306.'
Cause: port 23306 is not open.
Common firewalld commands on CentOS 7

Option 1: turn off the firewall
# Start
systemctl start firewalld
# Stop
systemctl stop firewalld
# Check status
systemctl status firewalld
# Disable at boot
systemctl disable firewalld
# Enable at boot
systemctl enable firewalld
# List all open ports
firewall-cmd --zone=public --list-ports

Option 2: open port 23306
firewall-cmd --zone=public --add-port=23306/tcp --permanent
firewall-cmd --reload


Node join error
Plugin group_replication reported: '[GCS] The member was unable to join the group. Local port: 23306'
Plugin group_replication reported: 'Timeout on wait for view after joining group'

#Open port 23306
firewall-cmd --zone=public --add-port=23306/tcp --permanent
firewall-cmd --reload



Common iptables commands on CentOS 6
# Check firewall status
service iptables status

# Stop the firewall
service iptables stop

# Start the firewall
service iptables start

# Restart the firewall
service iptables restart

# Permanently disable the firewall (at boot)
chkconfig iptables off

# Permanently enable the firewall (at boot)
chkconfig iptables on

#Open port 23306
vim /etc/sysconfig/iptables
-A INPUT -m state --state NEW -m tcp -p tcp --dport 23306 -j ACCEPT
service iptables restart
11. Cluster failures
#After a primary or secondary node failure, rejoin group replication
stop group_replication;
start group_replication;

#Check the progress of the recovery phase
show slave status for channel 'group_replication_recovery'\G
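
While a rejoining node is still catching up it appears as RECOVERING in the members view; a simple optional way to watch it return to ONLINE:

#Optional: watch the member state until it changes from RECOVERING to ONLINE
select member_host, member_state from performance_schema.replication_group_members;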