Maintaining and Managing a MySQL 8.0 MGR Cluster

Suppose we have a three-node MGR cluster:

mysql> select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+----------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST    | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+----------------+-------------+--------------+-------------+----------------+
| group_replication_applier | b4a92e9f-4416-11ef-ab6d-5254009ccf5d | VM-20-8-centos |        3381 | ONLINE       | PRIMARY     | 8.0.25         |
| group_replication_applier | ba8d6325-4416-11ef-94a6-5254009ccf5d | VM-20-8-centos |        3382 | ONLINE       | SECONDARY   | 8.0.25         |
| group_replication_applier | c0bbb3f3-4416-11ef-b304-5254009ccf5d | VM-20-8-centos |        3383 | ONLINE       | SECONDARY   | 8.0.25         |
+---------------------------+--------------------------------------+----------------+-------------+--------------+-------------+----------------+
3 rows in set (0.00 sec)

Switching the Primary Node

When the Primary node needs maintenance, or during a rolling upgrade, you can switch the Primary role over to another node.

From the command line, the switch can be done with the group_replication_set_as_primary() UDF, passing the server_uuid of the node that should become the new Primary. For example:

mysql> select group_replication_set_as_primary('ba8d6325-4416-11ef-94a6-5254009ccf5d');
+--------------------------------------------------------------------------+
| group_replication_set_as_primary('ba8d6325-4416-11ef-94a6-5254009ccf5d') |
+--------------------------------------------------------------------------+
| Primary server switched to: ba8d6325-4416-11ef-94a6-5254009ccf5d         |
+--------------------------------------------------------------------------+
1 row in set (1.01 sec)

mysql> select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+----------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST    | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+----------------+-------------+--------------+-------------+----------------+
| group_replication_applier | b4a92e9f-4416-11ef-ab6d-5254009ccf5d | VM-20-8-centos |        3381 | ONLINE       | SECONDARY   | 8.0.25         |
| group_replication_applier | ba8d6325-4416-11ef-94a6-5254009ccf5d | VM-20-8-centos |        3382 | ONLINE       | PRIMARY     | 8.0.25         |
| group_replication_applier | c0bbb3f3-4416-11ef-b304-5254009ccf5d | VM-20-8-centos |        3383 | ONLINE       | SECONDARY   | 8.0.25         |
+---------------------------+--------------------------------------+----------------+-------------+--------------+-------------+----------------+
3 rows in set (0.00 sec)

The switch can also be done with MySQL Shell:

 MySQL  159.75.158.18:3381 ssl  JS > var cluster = dba.getCluster()
 MySQL  159.75.158.18:3381 ssl  JS > cluster.setPrimaryInstance('admin@159.75.158.18:3383')
Setting instance '159.75.158.18:3383' as the primary instance of cluster 'MGR1'...

Instance 'VM-20-8-centos:3381' remains SECONDARY.
Instance 'VM-20-8-centos:3382' was switched from PRIMARY to SECONDARY.
Instance 'VM-20-8-centos:3383' was switched from SECONDARY to PRIMARY.

WARNING: The cluster internal session is not the primary member anymore. For cluster management operations please obtain a fresh cluster handle using dba.getCluster().

The instance '159.75.158.18:3383' was successfully elected as primary.

Switching Between Single-Primary and Multi-Primary Mode

From the command line, call group_replication_switch_to_single_primary_mode() or group_replication_switch_to_multi_primary_mode() to switch between single-primary and multi-primary mode.

mysql>  select group_replication_switch_to_multi_primary_mode();
+--------------------------------------------------+
| group_replication_switch_to_multi_primary_mode() |
+--------------------------------------------------+
| Mode switched to multi-primary successfully.     |
+--------------------------------------------------+
1 row in set (1.01 sec)


mysql> select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+----------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST    | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+----------------+-------------+--------------+-------------+----------------+
| group_replication_applier | b4a92e9f-4416-11ef-ab6d-5254009ccf5d | VM-20-8-centos |        3381 | ONLINE       | PRIMARY     | 8.0.25         |
| group_replication_applier | ba8d6325-4416-11ef-94a6-5254009ccf5d | VM-20-8-centos |        3382 | ONLINE       | PRIMARY     | 8.0.25         |
| group_replication_applier | c0bbb3f3-4416-11ef-b304-5254009ccf5d | VM-20-8-centos |        3383 | ONLINE       | PRIMARY     | 8.0.25         |
+---------------------------+--------------------------------------+----------------+-------------+--------------+-------------+----------------+
3 rows in set (0.00 sec)

# When switching to single-primary mode you can pass a node's server_uuid; if it is omitted, a new Primary is chosen automatically according to the election rules

mysql> select group_replication_switch_to_single_primary_mode('b4a92e9f-4416-11ef-ab6d-5254009ccf5d');
+-----------------------------------------------------------------------------------------+
| group_replication_switch_to_single_primary_mode('b4a92e9f-4416-11ef-ab6d-5254009ccf5d') |
+-----------------------------------------------------------------------------------------+
| Mode switched to single-primary successfully.                                           |
+-----------------------------------------------------------------------------------------+
1 row in set (1.01 sec)

mysql> select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+----------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST    | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+----------------+-------------+--------------+-------------+----------------+
| group_replication_applier | b4a92e9f-4416-11ef-ab6d-5254009ccf5d | VM-20-8-centos |        3381 | ONLINE       | PRIMARY     | 8.0.25         |
| group_replication_applier | ba8d6325-4416-11ef-94a6-5254009ccf5d | VM-20-8-centos |        3382 | ONLINE       | SECONDARY   | 8.0.25         |
| group_replication_applier | c0bbb3f3-4416-11ef-b304-5254009ccf5d | VM-20-8-centos |        3383 | ONLINE       | SECONDARY   | 8.0.25         |
+---------------------------+--------------------------------------+----------------+-------------+--------------+-------------+----------------+
3 rows in set (0.00 sec)

Likewise, MySQL Shell provides switchToSinglePrimaryMode() and switchToMultiPrimaryMode() for the same switches, and switchToSinglePrimaryMode() also accepts a specific node to become the new Primary.

 MySQL  159.75.158.18:3381 ssl  JS > cluster.switchToMultiPrimaryMode()
Switching cluster 'MGR1' to Multi-Primary mode...

Instance 'VM-20-8-centos:3381' remains PRIMARY.
Instance 'VM-20-8-centos:3382' was switched from SECONDARY to PRIMARY.
Instance 'VM-20-8-centos:3383' was switched from SECONDARY to PRIMARY.

The cluster successfully switched to Multi-Primary mode.
 MySQL  159.75.158.18:3381 ssl  JS > cluster.switchToSinglePrimaryMode('159.75.158.18:3381')
Switching cluster 'MGR1' to Single-Primary mode...

Instance 'VM-20-8-centos:3381' remains PRIMARY.
Instance 'VM-20-8-centos:3382' was switched from PRIMARY to SECONDARY.
Instance 'VM-20-8-centos:3383' was switched from PRIMARY to SECONDARY.

WARNING: The cluster internal session is not the primary member anymore. For cluster management operations please obtain a fresh cluster handle using dba.getCluster().

WARNING: Existing connections that expected a R/W connection must be disconnected, i.e. instances that became SECONDARY.

The cluster successfully switched to Single-Primary mode.

Adding a New Node


# On the joining node, set the clone donor
# To reduce the impact on the Primary, prefer one of the Secondary nodes as the donor
mysql> set global clone_valid_donor_list='159.75.158.18:3382';

# Stop group replication (if it is running), disable super_read_only, then clone the data
# Note: the port to use here is 3382 (the normal MySQL service port), not 33821 (the port dedicated to MGR group communication)
mysql> stop group_replication; set global super_read_only=0; clone INSTANCE FROM admin@159.75.158.18:3382 IDENTIFIED BY '****';
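
After the clone finishes, the recipient instance restarts with the donor's data. A minimal sketch of the remaining steps, assuming the MGR options (group_replication_group_name, group_replication_local_address, group_replication_group_seeds, etc.) are already configured in the joining node's my.cnf, and that 'admin' is the cluster's recovery account:

# Configure the credentials for the recovery channel, then start group replication
mysql> change master to master_user='admin', master_password='****' for channel 'group_replication_recovery';
mysql> start group_replication;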

Adding the node with MySQL Shell is easier:

cluster.addInstance('admin@159.75.158.18:3383')
The safest and most convenient way to provision a new instance is through automatic clone provisioning, which will completely overwrite the state of 'VM-20-8-centos:3383' with a physical snapshot from an existing cluster member. To use this method by default, set the 'recoveryMethod' option to 'clone'.

The incremental state recovery may be safely used if you are sure all updates ever executed in the cluster were done with GTIDs enabled, there are no purged transactions and the new instance contains the same GTID set as the cluster or a subset of it. To use this method by default, set the 'recoveryMethod' option to 'incremental'.

Incremental state recovery was selected because it seems to be safely usable.

Validating instance configuration at 159.75.158.18:3383...

This instance reports its own address as VM-20-8-centos:3383

Instance configuration is suitable.
NOTE: Group Replication will communicate with other members using 'VM-20-8-centos:33831'. Use the localAddress option to override.

A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.

Adding instance to the cluster...

Monitoring recovery process of the new cluster member. Press ^C to stop monitoring and let it continue in background.
State recovery already finished for 'VM-20-8-centos:3383'

The instance 'VM-20-8-centos:3383' was successfully added to the cluster.
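
If you prefer to force clone-based provisioning rather than letting the Shell choose, the recoveryMethod option mentioned in the output above can be passed explicitly (a sketch):

 MySQL  159.75.158.18:3381 ssl  JS > cluster.addInstance('admin@159.75.158.18:3383', {recoveryMethod: 'clone'})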

Removing a Node

From the command line, a node can leave the MGR cluster simply by running stop group_replication. If the node is only leaving temporarily and should rejoin later, running start group_replication is enough for it to come back automatically. If the node is leaving for good, stop group replication first and then run reset master; reset slave all; to clear all replication metadata, including MGR's.

Log in to the node on port 3383:
mysql> stop group_replication;
Query OK, 0 rows affected (4.55 sec)


Check the cluster again:
mysql>  select * from performance_schema.replication_group_members;
+---------------------------+--------------------------------------+----------------+-------------+--------------+-------------+----------------+
| CHANNEL_NAME              | MEMBER_ID                            | MEMBER_HOST    | MEMBER_PORT | MEMBER_STATE | MEMBER_ROLE | MEMBER_VERSION |
+---------------------------+--------------------------------------+----------------+-------------+--------------+-------------+----------------+
| group_replication_applier | b4a92e9f-4416-11ef-ab6d-5254009ccf5d | VM-20-8-centos |        3381 | ONLINE       | PRIMARY     | 8.0.25         |
| group_replication_applier | ba8d6325-4416-11ef-94a6-5254009ccf5d | VM-20-8-centos |        3382 | ONLINE       | SECONDARY   | 8.0.25         |
+---------------------------+--------------------------------------+----------------+-------------+--------------+-------------+----------------+
2 rows in set (0.00 sec)
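
The steps above only stop group replication on the 3383 node. To remove it from the cluster for good, as described earlier, also clear its replication metadata on that node (a minimal sketch, run after stop group_replication has completed):

# reset master clears the binary logs and GTID history; reset slave all drops every replication channel, including MGR's
mysql> reset master;
mysql> reset slave all;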

In MySQL Shell, a node can be removed simply by calling removeInstance(), for example:

 c.removeInstance('159.75.158.18:3383')
The instance will be removed from the InnoDB cluster. Depending on the instance
being the Seed or not, the Metadata session might become invalid. If so, please
start a new session to the Metadata Storage R/W instance.

Instance '159.75.158.18:3383' is attempting to leave the cluster...

The instance '159.75.158.18:3383' was successfully removed from the cluster.

Rejoining a Node After an Abnormal Exit

When a node is cut off from the MGR cluster by a network outage, an instance crash, or another failure, its state becomes UNREACHABLE. Once group_replication_member_expel_timeout plus 5 seconds have passed, the group expels the node. After the node is brought back up and runs start group_replication, it should normally rejoin the cluster automatically.
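
For example, once the expelled instance is back up, from the command line (a minimal sketch):

# On the node that was expelled
mysql> start group_replication;
# Verify that it is ONLINE again
mysql> select member_host, member_port, member_state, member_role from performance_schema.replication_group_members;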

In MySQL Shell, call rejoinInstance() to bring the failed node back into the cluster:

 cluster.rejoinInstance('159.75.158.18:3383')
Rejoining instance 'VM-20-8-centos:3383' to cluster 'MGR1'...
The instance 'VM-20-8-centos:3383' was successfully rejoined to the cluster.

Restarting the MGR Cluster

Normally, when the Primary node of an MGR cluster goes away, the remaining nodes automatically elect a new Primary. When the last node also leaves, the whole MGR cluster is effectively shut down. At that point, starting group replication on any single node will not make it the Primary: before starting group replication, you must first set group_replication_bootstrap_group=ON so that the node bootstraps the group; only after that does it become the Primary (see the article under References), and only then can the nodes started afterwards join the cluster normally.

P.S. Once the first node is up, remember to set group_replication_bootstrap_group=OFF again, to avoid splitting the MGR cluster during later operations.
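
Putting the bootstrap procedure together, a minimal sketch:

# On the node chosen to bootstrap the group (it becomes the Primary)
mysql> set global group_replication_bootstrap_group=ON;
mysql> start group_replication;
mysql> set global group_replication_bootstrap_group=OFF;

# On the remaining nodes, simply start group replication; they will join the bootstrapped group
mysql> start group_replication;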

To restart the MGR cluster with MySQL Shell, simply call dba.rebootClusterFromCompleteOutage(). It checks the state of every node, picks one of them as the Primary, then brings group replication up on each node, completing the cluster restart.
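
For example, connected to one of the members in MySQL Shell (a sketch; the interactive output is omitted, and the cluster name 'MGR1' matches the earlier examples):

 MySQL  159.75.158.18:3381 ssl  JS > var cluster = dba.rebootClusterFromCompleteOutage('MGR1')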

References
https://gitee.com/GreatSQL/GreatSQL-Doc/blob/master/deep-dive-mgr/deep-dive-mgr-05.md
