MySQL InnoDB Cluster Deployment

This article walks through installing and configuring MySQL on Linux, creating an InnoDB Cluster, setting up the administration and Router accounts, and running a failover test. Every step is shown, from downloading the Yum repository to starting the Router, including retrieving the temporary password, changing it, creating the cluster, adding instances, and monitoring cluster status. Finally, a node is stopped and brought back to demonstrate the failover process and the cluster's high availability and fault tolerance.

Installation

Download the Yum repository package

wget https://dev.mysql.com/get/mysql80-community-release-el7-6.noarch.rpm

Install the release package

yum install -y mysql80-community-release-el7-6.noarch.rpm

Import the GPG key

rpm --import https://repo.mysql.com/RPM-GPG-KEY-mysql-2022

Install MySQL

sudo yum install -y mysql-community-server

Start MySQL

systemctl start mysqld

Retrieve the temporary password

grep 'temporary password' /var/log/mysqld.log

Log in and change the password

mysql -uroot -p
ALTER USER 'root'@'localhost' IDENTIFIED BY '12345678';
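
If the ALTER USER above is rejected by the password policy (the validate_password component ships enabled in the MySQL 8.0 community packages, and its default MEDIUM policy refuses an all-numeric password such as '12345678'), one sketch for a lab environment is to set a compliant password first and then relax the policy; 'Temp_Pass#2022' is only a placeholder:

ALTER USER 'root'@'localhost' IDENTIFIED BY 'Temp_Pass#2022';  -- the expired temporary-password state only allows password changes
SET GLOBAL validate_password.policy = LOW;                     -- lab use only
SET GLOBAL validate_password.length = 8;
ALTER USER 'root'@'localhost' IDENTIFIED BY '12345678';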

Install MySQL Shell

sudo yum install -y mysql-shell
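
To confirm the Shell is installed and can reach the server, a quick check (the URI matches the addresses used throughout this walkthrough):

mysqlsh --version
mysqlsh --uri root@192.168.2.201:3306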

InnoDB Cluster Account Configuration

Server configuration account

mysql> create user root@'%' identified by '12345678';
mysql> grant all privileges on *.* to 'root'@'%';
mysql> flush privileges;

Grant cluster administration privileges

GRANT CLONE_ADMIN, CONNECTION_ADMIN, CREATE USER, EXECUTE, FILE, GROUP_REPLICATION_ADMIN, PERSIST_RO_VARIABLES_ADMIN, PROCESS, RELOAD, REPLICATION CLIENT, REPLICATION SLAVE, REPLICATION_APPLIER, REPLICATION_SLAVE_ADMIN, ROLE_ADMIN, SELECT, SHUTDOWN, SYSTEM_VARIABLES_ADMIN ON *.* TO 'root'@'%' WITH GRANT OPTION;
GRANT DELETE, INSERT, UPDATE ON mysql.* TO 'root'@'%' WITH GRANT OPTION;
GRANT ALTER, ALTER ROUTINE, CREATE, CREATE ROUTINE, CREATE TEMPORARY TABLES, CREATE VIEW, DELETE, DROP, EVENT, EXECUTE, INDEX, INSERT, LOCK TABLES, REFERENCES, SHOW VIEW, TRIGGER, UPDATE ON mysql_innodb_cluster_metadata.* TO 'root'@'%' WITH GRANT OPTION;
GRANT ALTER, ALTER ROUTINE, CREATE, CREATE ROUTINE, CREATE TEMPORARY TABLES, CREATE VIEW, DELETE, DROP, EVENT, EXECUTE, INDEX, INSERT, LOCK TABLES, REFERENCES, SHOW VIEW, TRIGGER, UPDATE ON mysql_innodb_cluster_metadata_bkp.* TO 'root'@'%' WITH GRANT OPTION;
GRANT ALTER, ALTER ROUTINE, CREATE, CREATE ROUTINE, CREATE TEMPORARY TABLES, CREATE VIEW, DELETE, DROP, EVENT, EXECUTE, INDEX, INSERT, LOCK TABLES, REFERENCES, SHOW VIEW, TRIGGER, UPDATE ON mysql_innodb_cluster_metadata_previous.* TO 'root'@'%' WITH GRANT OPTION;
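
Alternatively, instead of granting these privileges by hand, MySQL Shell can create an equivalent administration account itself via dba.configureInstance(); a sketch, where 'icadmin' and the password are placeholders:

dba.configureInstance('root@192.168.2.201:3306', {clusterAdmin: "'icadmin'@'%'", clusterAdminPassword: "StrongPass#2022"})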

Cluster administrator account

After the cluster has been created, create a cluster administrator account with the Cluster.setupAdminAccount() function:

 MySQL  192.168.2.201:3306 ssl  mysql  JS > cluster.setupAdminAccount('clusterAdmin');

Missing the password for new account clusterAdmin@%. Please provide one.
Password for new account: ********
Confirm password: ********

Creating user clusterAdmin@%.
Account clusterAdmin@% was successfully created.
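
The password can also be passed non-interactively through the options dictionary (the value below is only a placeholder):

cluster.setupAdminAccount('clusterAdmin', {password: 'StrongPass#2022'})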

Router account

 MySQL  192.168.2.201:3306 ssl  mysql  JS > cluster.setupRouterAccount('routerUser1')

Missing the password for new account routerUser1@%. Please provide one.
Password for new account: ********
Confirm password: ********

Creating user routerUser1@%.
Account routerUser1@% was successfully created.

Creating the InnoDB Cluster

Check the instance

dba.checkInstanceConfiguration('root@192.168.2.201:3306')
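
Every prospective member has to pass this check, so it can be convenient to loop over all of them in the Shell's JS mode (the node3 address 192.168.2.203 is assumed here, following the same numbering as node1 and node2):

['192.168.2.201', '192.168.2.202', '192.168.2.203'].forEach(function (host) {
    dba.checkInstanceConfiguration('root@' + host + ':3306');
});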

Configure the instance

 MySQL  localhost:33060+ ssl  JS > dba.configureInstance('root@192.168.2.201:3306');
Configuring local MySQL instance listening at port 3306 for use in an InnoDB cluster...

This instance reports its own address as node1:3306
Clients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.

applierWorkerThreads will be set to the default value of 4.

The instance 'node1:3306' is valid to be used in an InnoDB cluster.
The instance 'node1:3306' is already ready to be used in an InnoDB cluster.

Successfully enabled parallel appliers.
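
The same configuration step should be repeated on the other servers before they are added to the cluster, for example (the node3 address is again an assumption):

dba.configureInstance('root@192.168.2.202:3306');
dba.configureInstance('root@192.168.2.203:3306');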

Create the cluster

 MySQL  192.168.2.201:3306 ssl  mysql  JS > var cluster = dba.createCluster('testCluster')
A new InnoDB Cluster will be created on instance 'node1:3306'.

Validating instance configuration at 192.168.2.201:3306...

This instance reports its own address as node1:3306

Instance configuration is suitable.
NOTE: Group Replication will communicate with other members using 'node1:3306'. Use the localAddress option to override.

Creating InnoDB Cluster 'testCluster' on 'node1:3306'...

Adding Seed Instance...
Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
At least 3 instances are needed for the cluster to be able to withstand up to
one server failure.

Check cluster status


 MySQL  192.168.2.201:3306 ssl  mysql  JS > cluster.status()
{
    "clusterName": "testCluster", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "node1:3306", 
        "ssl": "REQUIRED", 
        "status": "OK_NO_TOLERANCE", 
        "statusText": "Cluster is NOT tolerant to any failures.", 
        "topology": {
            "node1:3306": {
                "address": "node1:3306", 
                "memberRole": "PRIMARY", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "replicationLag": "applier_queue_applied", 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.30"
            }
        }, 
        "topologyMode": "Single-Primary"
    }, 
    "groupInformationSourceMember": "node1:3306"
}

Add instances

 MySQL  192.168.2.201:3306 ssl  mysql  JS > cluster.addInstance('root@192.168.2.202:3306')

NOTE: The target instance 'node2:3306' has not been pre-provisioned (GTID set is empty). The Shell is unable to decide whether incremental state recovery can correctly provision it.
The safest and most convenient way to provision a new instance is through automatic clone provisioning, which will completely overwrite the state of 'node2:3306' with a physical snapshot from an existing cluster member. To use this method by default, set the 'recoveryMethod' option to 'clone'.

The incremental state recovery may be safely used if you are sure all updates ever executed in the cluster were done with GTIDs enabled, there are no purged transactions and the new instance contains the same GTID set as the cluster or a subset of it. To use this method by default, set the 'recoveryMethod' option to 'incremental'.


Please select a recovery method [C]lone/[I]ncremental recovery/[A]bort (default Clone): C
Validating instance configuration at 192.168.2.202:3306...

This instance reports its own address as node2:3306

Instance configuration is suitable.
NOTE: Group Replication will communicate with other members using 'node2:3306'. Use the localAddress option to override.

A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.

Adding instance to the cluster...

Monitoring recovery process of the new cluster member. Press ^C to stop monitoring and let it continue in background.
Clone based state recovery is now in progress.

NOTE: A server restart is expected to happen as part of the clone process. If the
server does not support the RESTART command or does not come back after a
while, you may need to manually start it back.

* Waiting for clone to finish...
NOTE: node2:3306 is being cloned from node1:3306
** Stage DROP DATA: Completed
** Clone Transfer  
    FILE COPY  ############################################################  100%  Completed
    PAGE COPY  ############################################################  100%  Completed
    REDO COPY  ############################################################  100%  Completed

NOTE: node2:3306 is shutting down...

* Waiting for server restart... ready 
* node2:3306 has restarted, waiting for clone to finish...
** Stage RESTART: Completed
* Clone process has finished: 72.61 MB transferred in 3 sec (24.20 MB/s)

State recovery already finished for 'node2:3306'

The instance 'node2:3306' was successfully added to the cluster.
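
The third member is added the same way; passing recoveryMethod explicitly skips the interactive prompt (node3's address is assumed to be 192.168.2.203):

cluster.addInstance('root@192.168.2.203:3306', {recoveryMethod: 'clone'})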

Configure the Router

[root@node1 mysql8.0]# mysqlrouter --bootstrap root@192.168.2.201:3306 --account=mysqlrouter --user=root
Please enter MySQL password for root: 
# Bootstrapping system MySQL Router instance...

Please enter MySQL password for mysqlrouter: 
- Creating account(s) (only those that are needed, if any)
- Verifying account (using it to run SQL queries that would be run by Router)
- Storing account in keyring
- Adjusting permissions of generated files
- Creating configuration /etc/mysqlrouter/mysqlrouter.conf

Existing configuration backed up to '/etc/mysqlrouter/mysqlrouter.conf.bak'

# MySQL Router configured for the InnoDB Cluster 'testCluster'

After this MySQL Router has been started with the generated configuration

    $ /etc/init.d/mysqlrouter restart
or
    $ systemctl start mysqlrouter
or
    $ mysqlrouter -c /etc/mysqlrouter/mysqlrouter.conf

InnoDB Cluster 'testCluster' can be reached by connecting to:

## MySQL Classic protocol

- Read/Write Connections: localhost:6446
- Read/Only Connections:  localhost:6447

## MySQL X protocol

- Read/Write Connections: localhost:6448
- Read/Only Connections:  localhost:6449

Start the Router

mysqlrouter -c /etc/mysqlrouter/mysqlrouter.conf
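
To verify the routing, assuming the root@'%' account created earlier, connect to the two classic-protocol ports and check which server answers and whether it is writable:

# 6446 should reach the primary (super_read_only = 0), 6447 a secondary (super_read_only = 1)
mysql -h 127.0.0.1 -P 6446 -u root -p -e "SELECT @@hostname, @@super_read_only;"
mysql -h 127.0.0.1 -P 6447 -u root -p -e "SELECT @@hostname, @@super_read_only;"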

Failover Test

Stop the node1 instance

systemctl stop mysqld

Check the node2 log

2022-09-03T05:36:49.419627Z 0 [Warning] [MY-011499] [Repl] Plugin group_replication reported: 'Members removed from the group: node1:3306'
2022-09-03T05:36:49.419648Z 0 [System] [MY-011500] [Repl] Plugin group_replication reported: 'Primary server with address node1:3306 left the group. Electing new Primary.'
2022-09-03T05:36:49.420017Z 0 [System] [MY-011507] [Repl] Plugin group_replication reported: 'A new primary with address node2:3306 was elected. The new primary will execute all previous group transactions before allowing writes.'
2022-09-03T05:36:49.420363Z 0 [System] [MY-011503] [Repl] Plugin group_replication reported: 'Group membership changed to node2:3306, node3:3306 on view 16620968151373714:14.'
2022-09-03T05:36:49.422096Z 19 [System] [MY-013731] [Repl] Plugin group_replication reported: 'The member action "mysql_disable_super_read_only_if_primary" for event "AFTER_PRIMARY_ELECTION" with priority "1" will be run.'
2022-09-03T05:36:49.422301Z 19 [System] [MY-011566] [Repl] Plugin group_replication reported: 'Setting super_read_only=OFF.'
2022-09-03T05:36:49.422341Z 19 [System] [MY-013731] [Repl] Plugin group_replication reported: 'The member action "mysql_start_failover_channels_if_primary" for event "AFTER_PRIMARY_ELECTION" with priority "10" will be run.'
2022-09-03T05:36:49.423681Z 7439 [System] [MY-011510] [Repl] Plugin group_replication reported: 'This server is working as primary member.'
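
Because the Shell session used so far pointed at node1, the cluster handle has to be re-acquired from a surviving member before its status can be queried; a sketch:

\connect root@192.168.2.202:3306
var cluster = dba.getCluster()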

Check cluster status

 MySQL  node2:33060+ ssl  JS > cluster.status()
{
    "clusterName": "testCluster", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "node2:3306", 
        "ssl": "REQUIRED", 
        "status": "OK_NO_TOLERANCE_PARTIAL", 
        "statusText": "Cluster is NOT tolerant to any failures. 1 member is not active.", 
        "topology": {
            "node1:3306": {
                "address": "node1:3306", 
                "memberRole": "SECONDARY", 
                "mode": "n/a", 
                "readReplicas": {}, 
                "role": "HA", 
                "shellConnectError": "MySQL Error 2003: Could not open connection to 'node1:3306': Can't connect to MySQL server on 'node1:3306' (111)", 
                "status": "(MISSING)"
            }, 
            "node2:3306": {
                "address": "node2:3306", 
                "memberRole": "PRIMARY", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "replicationLag": "applier_queue_applied", 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.30"
            }, 
            "node3:3306": {
                "address": "node3:3306", 
                "memberRole": "SECONDARY", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "replicationLag": "applier_queue_applied", 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.30"
            }
        }, 
        "topologyMode": "Single-Primary"
    }, 
    "groupInformationSourceMember": "node2:3306"
}

At this point node1's state cannot be retrieved, and the overall cluster status is OK_NO_TOLERANCE_PARTIAL, meaning the cluster can no longer tolerate any further failure.
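
From a client's perspective the failover is transparent: the Router's read/write port now forwards to the new primary, so repeating the earlier check should report node2, for example:

mysql -h 127.0.0.1 -P 6446 -u root -p -e "SELECT @@hostname;"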

Restart the node1 instance

systemctl start mysqld

Check cluster status

 MySQL  node2:33060+ ssl  JS > cluster.status()
{
    "clusterName": "testCluster", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "node2:3306", 
        "ssl": "REQUIRED", 
        "status": "OK", 
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.", 
        "topology": {
            "node1:3306": {
                "address": "node1:3306", 
                "memberRole": "SECONDARY", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "replicationLag": "applier_queue_applied", 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.30"
            }, 
            "node2:3306": {
                "address": "node2:3306", 
                "memberRole": "PRIMARY", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "replicationLag": "applier_queue_applied", 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.30"
            }, 
            "node3:3306": {
                "address": "node3:3306", 
                "memberRole": "SECONDARY", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "replicationLag": "applier_queue_applied", 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.30"
            }
        }, 
        "topologyMode": "Single-Primary"
    }, 
    "groupInformationSourceMember": "node2:3306"
}

The cluster status is back to OK, but node1 has rejoined the cluster as a read-only (secondary) instance.
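
If node1 should become the primary again, the role can be switched back explicitly with the Shell's safe switchover:

cluster.setPrimaryInstance('node1:3306')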
