MySQL InnoDB Cluster 8.0.28 Installation and Configuration

MySQL InnoDB Cluster

1、InnoDB Cluster

In the default single-primary mode, an InnoDB Cluster has a single read-write server instance, the primary. The other server instances are secondaries, replicas of the primary. If the primary fails, a secondary is automatically promoted to the primary role; MySQL Router detects this and routes client applications to the new primary. Advanced users can also configure the cluster with multiple primaries.
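Once the group is up and running (it is built step by step in the sections below), the current primary can be checked from any member with a query of this kind (run here with the root account created later in this guide):

#Show each member's host, state and role; in single-primary mode exactly one member reports PRIMARY
mysql -uroot -p -e "SELECT MEMBER_HOST, MEMBER_PORT, MEMBER_STATE, MEMBER_ROLE FROM performance_schema.replication_group_members;"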

2、InnoDB Cluster Overview

[Figure: InnoDB Cluster architecture overview]

3、Environment

1)、Packages:

Download page: https://downloads.mysql.com/archives/
Download the MySQL Community Server, MySQL Router and MySQL Shell packages for the chosen database version (all three must be the same version).
Note: MySQL Router and MySQL Shell can be deployed on the same server as MySQL Server.

2)、Ports:

Database ports: 3306, 33061 (used by MySQL Shell for checks), 33081 (group replication, can be changed); MySQL Router: 6446 (read/write) and 6447 (read-only).

3)、Servers:

Three hosts; size them according to your actual requirements.
4)、Disable the firewall and SELinux on all nodes, and configure host mappings in /etc/hosts on every node

#Disable the firewall

systemctl stop firewalld.service      # or: systemctl stop firewalld
systemctl disable firewalld.service   # or: systemctl disable firewalld
#Check the firewall status
systemctl status firewalld
#If the firewall stays enabled, open a port instead (example: 3306)
firewall-cmd --permanent --add-port=3306/tcp
#Close port 3306 again
firewall-cmd --zone=public --remove-port=3306/tcp --permanent
#Reload the firewall so the changes take effect
firewall-cmd --reload
#List the open ports
firewall-cmd --zone=public --list-ports
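If the firewall is kept enabled instead of disabled, every port listed in the environment section has to be opened on each node. A minimal sketch using the port numbers chosen in this guide (adjust them if you changed any):

#Open the database, MySQL Shell check, group-replication and Router ports
for p in 3306 33061 33081 6446 6447; do
  firewall-cmd --permanent --add-port=${p}/tcp
done
firewall-cmd --reload
firewall-cmd --zone=public --list-ports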

#Disable SELinux

getenforce
setenforce 0
vim /etc/selinux/config
SELINUX=disabled

#Add the node IPs to /etc/hosts (important)

vim /etc/hosts
IP1 node01
IP2 node02
IP3 node03

5)、Apply the server tuning parameters as required;

6)、Installation order:

(1) MySQL Server 8.0.28 -> (2) start MGR group replication -> (3) MySQL Shell 8.0.28 -> (4) create the InnoDB Cluster -> (5) MySQL Router 8.0.28
Here we first get MGR group replication working, then install the MySQL Shell 8.0.28 component and deploy MySQL Router 8.0.28; in other words, the MySQL InnoDB Cluster is deployed "by adopting an existing group replication setup": the replication group exists first and the cluster is created from it. When creating the cluster later, pass the {adoptFromGR: true} option to dba.createCluster(); the resulting InnoDB Cluster follows whether the replication group runs as single-primary or multi-primary.
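For reference, adopting an already-running replication group from MySQL Shell boils down to a single call; a sketch, assuming a group member is reachable as root@node01:3306 (the interactive steps later in this guide achieve the same result):

#Adopt the existing group replication setup into an InnoDB Cluster named myCluster
mysqlsh root@node01:3306 --js -e "dba.createCluster('myCluster', {adoptFromGR: true})"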

4、Database configuration and installation (these are the parameters that must be added to my.cnf during installation; you may also need to add parameters required by any companion products, see the product's database installation notes;)

Configure the mysql user's environment variables on all three hosts
vim ~/.bash_profile
MYSQL_HOME=/usr/local/mysql
PATH=$PATH:$HOME/.local/bin:$HOME/bin:$MYSQL_HOME/bin
source ~/.bash_profile
which mysql

1)、Parameter changes:

1.1、centos-01

[mysqld]
#Enable GTID (required)
gtid_mode=on
#Enforce GTID consistency
enforce-gtid-consistency=on
#Binlog format; MGR requires ROW (ROW is recommended even without MGR)
binlog_format=row
#server-id must be unique
server-id = 201
#MGR uses optimistic locking; the manual recommends READ COMMITTED to reduce lock granularity
transaction_isolation = READ-COMMITTED
#During recovery the members cross-check each other's binlogs, so every server must log the
#transactions it applies from other members; GTIDs are used to tell whether a transaction was already executed
log-slave-updates=on
#Binlog checksum; since 5.6 the default is CRC32 (NONE on older versions), but MGR traditionally requires NONE
binlog_checksum=NONE
#For safety, MGR requires replication metadata to be stored in tables rather than files, otherwise it reports an error
master_info_repository=TABLE
#Goes together with the setting above
relay_log_info_repository=TABLE
#Algorithm used to hash transaction write sets; the manual recommends XXHASH64
transaction_write_set_extraction = XXHASH64
#Name of this group, a UUID. It must not collide with the UUID of any GTID already used inside the group;
#a new one can be generated with uuidgen (see the example after this block). It identifies this group on the
#network and is also the UUID part of the GTIDs generated by the group.
loose-group_replication_group_name = 'b5a9bdff-3c79-4cb4-97a1-e38c45fad878'
#Whether group replication starts automatically with the server; OFF is recommended, so that a recovering
#server cannot disturb the data before it has been checked
loose-group_replication_start_on_boot = OFF
#Use MGR single-primary mode
loose-group_replication_single_primary_mode = on 
#Local MGR address, host:port; this is the MGR port, not the database port
loose-group_replication_local_address = 'IP:33081'
#Seed members of this group (IP addresses and MGR ports, not database ports)
loose-group_replication_group_seeds = 'IP1:33081,IP2:33081,IP3:33081'
#Bootstrap flag used to seed the group when MGR is built (or rebuilt) for the first time;
#it is turned on temporarily on only one member of the group
loose-group_replication_bootstrap_group = OFF
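The group name above is just the example value used throughout this guide; a fresh UUID can be generated once and pasted into the group_replication_group_name line on all three nodes:

#Generate a new UUID to use as group_replication_group_name
uuidgen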

1.2、Differences on centos-02 (change these values relative to the centos-01 configuration):
#server-id must be unique

server-id = 202
loose-group_replication_local_address = '{IP}:33081'

1.3、Differences on centos-03 (change these values relative to the centos-01 configuration):
#server-id must be unique

server-id = 203
loose-group_replication_local_address = '{IP}:33081'

2)、Installation

2.1、Upload and unpack the packages
root@centos-01 ~# cd /opt
root@centos-01 /opt# tar -xvf mysql-8.0.28-linux-glibc2.12-x86_64.tar.xz
root@centos-01 /opt# mkdir -p /usr/local/mysql       #(the mysql user/group is assumed to exist; create it with groupadd/useradd if needed)
root@centos-01 /opt# cp -Rf ./mysql-8.0.28-linux-glibc2.12-x86_64/* /usr/local/mysql/
root@centos-01 /opt# chown -R mysql:mysql /usr/local/mysql/
root@centos-01 /opt# chmod -R 755 /usr/local/mysql/
2.2、Create the data directories MySQL needs
root@centos-01 ~# mkdir -p /data/mysql/{data,binlog,relay}
root@centos-01 ~# chown -R mysql:mysql /data/mysql/{data,binlog,relay}
root@centos-01 ~# chmod -R 750 /data/mysql/
2.3、Initialize the database
root@centos-01 ~# /usr/local/mysql/bin/mysqld --defaults-file=/etc/my.cnf --basedir=/usr/local/mysql --datadir=/data/mysql/data --user=mysql --initialize-insecure
2.4、Start the database
root@centos-01 ~# /usr/local/mysql/support-files/mysql.server start
2.5、Reset the root password and turn off read-only mode
root@centos-01 ~# /usr/local/mysql/bin/mysql -uroot -S /data/mysql/data/mysql.sock -e "SET SQL_LOG_BIN=0;
SET global super_read_only=OFF; 
SET global read_only=OFF; 
alter user 'root'@'localhost' password expire never; set password for 'root'@'localhost'='123456';
flush privileges; 
SET SQL_LOG_BIN=1;"
root@centos-01 ~# /usr/local/mysql/bin/mysql -uroot -p -S /data/mysql/data/mysql.sock
2.6、Install the Group Replication plugin (on all three nodes)
mysql> INSTALL PLUGIN group_replication SONAME 'group_replication.so';
mysql> SELECT PLUGIN_NAME, PLUGIN_STATUS FROM INFORMATION_SCHEMA.PLUGINS  WHERE PLUGIN_NAME LIKE '%group_replication%';
2.7、Configure the three-node MGR group on each of the three servers
#centos-01 
#Turn on group bootstrapping; note that only this node bootstraps the group, skip this step on the other two
mysql> SET GLOBAL group_replication_bootstrap_group=ON;
#Create and grant the replication user; every server in the group needs this
mysql> create user 'repl'@'%' identified by '123456';
mysql> grant REPLICATION SLAVE on *.* to 'repl'@'%' with grant option;
#Clear any old GTID information to avoid conflicts
mysql> reset master;
#Configure the recovery channel credentials with the user just granted; note the syntax differs from an ordinary replication setup
mysql> CHANGE MASTER TO MASTER_USER='repl', MASTER_PASSWORD='123456' FOR CHANNEL 'group_replication_recovery';
#Start MGR
mysql> start group_replication;
#Check whether it started; ONLINE means success
mysql> SELECT * FROM performance_schema.replication_group_members;
#Bootstrapping can now be turned off
mysql> SET GLOBAL group_replication_bootstrap_group=OFF;

#Other nodes
#centos-02
#Create and grant the replication user; every server in the group needs this
mysql> create user 'repl'@'%' identified by '123456';
mysql> grant REPLICATION SLAVE on *.* to 'repl'@'%' with grant option;
#Clear any old GTID information to avoid conflicts
mysql> reset master;
#Configure the recovery channel credentials with the user just granted
mysql> CHANGE MASTER TO MASTER_USER='repl', MASTER_PASSWORD='123456' FOR CHANNEL 'group_replication_recovery';
#Start MGR
mysql> start group_replication;
#Check whether it started; ONLINE means success
mysql> SELECT * FROM performance_schema.replication_group_members;

#centos-03 is configured the same way as centos-02
#Check whether it started; ONLINE means success
mysql> SELECT * FROM performance_schema.replication_group_members;
#After all three nodes are configured, check the membership again
mysql> SELECT * FROM performance_schema.replication_group_members;
#Grant a remote root user
CREATE USER 'root'@'%' IDENTIFIED BY '123456';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION;
FLUSH PRIVILEGES;
Optional:
#To manage the cluster you need a user with the required privileges; the recommended approach is a dedicated administrative account. If no administrative user was created while configuring the instances, it can be created with the SQL statements below. For example, the following creates a user named icadmin that can manage the InnoDB Cluster assigned to the variable cluster. This step is optional: you can also create the cluster later with the root account.
mysql> SET SQL_LOG_BIN=0;
mysql> CREATE USER IF NOT EXISTS `icadmin`@`192.168.1.%` IDENTIFIED BY '123456' PASSWORD EXPIRE NEVER;
mysql> GRANT SELECT, RELOAD, SHUTDOWN, PROCESS, FILE, SUPER, EXECUTE, REPLICATION SLAVE, REPLICATION CLIENT, CREATE USER ON *.* TO `icadmin`@`192.168.1.%` WITH GRANT OPTION;
mysql> GRANT BACKUP_ADMIN,CLONE_ADMIN,PERSIST_RO_VARIABLES_ADMIN,REPLICATION_APPLIER,SYSTEM_VARIABLES_ADMIN ON *.* TO `icadmin`@`192.168.1.%` WITH GRANT OPTION;
mysql> GRANT INSERT, UPDATE, DELETE ON `mysql`.* TO `icadmin`@`192.168.1.%` WITH GRANT OPTION;
mysql> GRANT INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, INDEX, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, EVENT, TRIGGER ON `mysql_innodb_cluster_metadata`.* TO `icadmin`@`192.168.1.%` WITH GRANT OPTION;
mysql> GRANT INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, INDEX, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, EVENT, TRIGGER ON `mysql_innodb_cluster_metadata_bkp`.* TO `icadmin`@`192.168.1.%` WITH GRANT OPTION;      
mysql> GRANT INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, INDEX, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, CREATE VIEW, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, EVENT, TRIGGER ON `mysql_innodb_cluster_metadata_previous`.* TO `icadmin`@`192.168.1.%` WITH GRANT OPTION; 
mysql> FLUSH PRIVILEGES;
mysql> SET SQL_LOG_BIN=1;

5、MySQL Shell installation (used for interactive code execution, batch execution, X Protocol support, etc.)

1)、Upload and unpack; the deployment steps are the same on all three nodes

root@centos-01 ~# cd /opt/soft
root@centos-01 /opt/soft# tar -zxvf mysql-shell-8.0.28-linux-glibc2.12-x86-64bit.tar.gz 
root@centos-01 /opt/soft# mkdir -p /usr/local/mysqlshell
root@centos-01 /opt/soft# cp -Rf ./mysql-shell-8.0.28-linux-glibc2.12-x86-64bit/* /usr/local/mysqlshell/
root@centos-01 /opt/soft# chown -R mysql:mysql /usr/local/mysqlshell
root@centos-01 /opt/soft# chmod -R 755 /usr/local/mysqlshell

2)、Set the environment variable

root@centos-01 /opt/soft# echo "export PATH=\$PATH:/usr/local/mysqlshell/bin" >> /etc/profile
root@centos-01 /opt/soft# source /etc/profile 
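A quick check that the Shell is now on the PATH and is the expected version:

#Verify the MySQL Shell installation
which mysqlsh
mysqlsh --version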

3)、Create the InnoDB Cluster

Create the cluster on db-node01: connect to db-node01's MySQL through the Shell running on db-node01.
Run: mysqlsh
#The configuration command below also asks for the password
#It then asks for the path of the MySQL configuration file
#Finally it asks you to create an account that other hosts can use; choose option 1 to grant the root user

[root@db-node01 ~]# mysqlsh
...................
mysql-js> shell.connect('root@localhost:3306');
Please provide the password for 'root@localhost:3306':               #enter the password 123456
Creating a Session to 'root@localhost:3306'
Classic Session successfully established. No default schema selected.
#Check whether the instance meets the cluster requirements (if parameters are reported that need adjusting, fix them and run the check again);
#(optional) for more verbose logging, start mysqlsh from the OS shell with: mysqlsh --log-level=DEBUG3
mysql-js> dba.checkInstanceConfiguration('root@localhost:3306')

mysql-js> dba.configureLocalInstance();
Please provide the password for 'root@localhost:3306':               #enter the password 123456
 
Detecting the configuration file...
Found configuration file at standard location: /etc/my.cnf
Do you want to modify this file? [Y|n]:                                           #press Enter to use the default /etc/my.cnf configuration file
MySQL user 'root' cannot be verified to have access to other hosts in the network.
 
1) Create root@% with necessary grants
2) Create account with different name
3) Continue without creating account
4) Cancel
Please select an option [1]: 1                         #choose option 1 to grant the root user, i.e. create an account that other hosts can use
Password for new account:                             #enter the password for this remote root account; 123456 again here
Confirm password:
Validating instance...
 
Dba.configureLocalInstance: Your password does not satisfy the current policy requirements (MySQL Error 1819)
#To fix the error above, open another terminal, log in to MySQL on node01 and run:

[root@db-node01 ~]# mysql -p123456
.................
mysql> set global validate_password_policy=0;
Query OK, 0 rows affected (0.00 sec)
 
mysql> set global validate_password_length=1;
Query OK, 0 rows affected (0.00 sec)
#(On MySQL 8.0 with the validate_password component, the variables are named validate_password.policy and validate_password.length.)
Then continue in the mysql-shell window on db-node01:

mysql-js> shell.connect('root@localhost:3306');
Please provide the password for 'root@localhost:3306':            #enter the password 123456
Creating a Session to 'root@localhost:3306'
Classic Session successfully established. No default schema selected.
 
mysql-js> dba.configureLocalInstance();
Please provide the password for 'root@localhost:3306':           #enter the password 123456
 
Detecting the configuration file...
Found configuration file at standard location: /etc/my.cnf
Do you want to modify this file? [Y|n]:                                       #press Enter to use the default /etc/my.cnf configuration file
MySQL user 'root' cannot be verified to have access to other hosts in the network.
 
1) Create root@% with necessary grants
2) Create account with different name
3) Continue without creating account
4) Cancel
Please select an option [1]: 1                     #choose option 1 to grant the root user, otherwise other machines cannot connect to this MySQL as root
Password for new account:                         #enter the password, 123456 again. After this grant, log in to MySQL on db-node01 and run "select host, user from mysql.user" to see the new account
Confirm password:
Validating instance...
 
The instance 'localhost:3306' is valid for Cluster usage
You can now use it in an InnoDB Cluster.
 
{
    "status": "ok"
}
mysql-js>
As the output shows, status "ok" means the configuration is fine and the instance can be used to create a cluster.
3.1、Create the cluster on node01
Connect from the mysql-shell on db-route01 to node01 and create the cluster
[root@db-route01 ~]# mysqlsh
................
#connect to db-node01
mysql-js> shell.connect('root@db-node01:3306');                  
Please provide the password for 'root@db-node01:3306':         #enter the password 123456
Creating a Session to 'root@db-node01:3306'
Classic Session successfully established. No default schema selected.
 
#create a cluster named 'myCluster'
mysql-js> var cluster = dba.createCluster('myCluster');
A new InnoDB cluster will be created on instance 'root@db-node01:3306'.
 
Creating InnoDB cluster 'myCluster' on 'root@db-node01:3306'...
Adding Seed Instance...
 
Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
At least 3 instances are needed for the cluster to be able to withstand up to
one server failure.
#As shown above, successful creation prints a line like "Cluster successfully created."

#After creation, check the cluster status
mysql-js> cluster.status();
{
    "clusterName": "myCluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "db-node01:3306",
        "status": "OK_NO_TOLERANCE",
        "statusText": "Cluster is NOT tolerant to any failures.",
        "topology": {
            "db-node01:3306": {
                "address": "db-node01:3306",
                "mode": "R/W",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            }
        }
    }
}
 
mysql-js> dba.getCluster();
<Cluster:myCluster>
Note: do not close this db-route01 mysql-shell window; keep it connected and in the cluster session, because it is used below to add the other nodes to the cluster. (See the common commands later; the sketch right after this note shows how a cluster session can also be reacquired in a new window.)
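If the window does get closed, the cluster session is not lost for good: any new mysqlsh session connected to a member can reacquire the cluster object with dba.getCluster(). A sketch, run from the OS shell (host name as used in this guide):

#Reacquire the cluster in a fresh session and print its status
mysqlsh root@db-node01:3306 --js -e "print(dba.getCluster('myCluster').status())"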
3.2、Add node02 to the cluster

Add node db-node02 to the "myCluster" cluster created above.
Configure MySQL through the mysql-shell on db-node02 itself.

[root@db-node02 ~]# mysqlsh
................
 MySQL  JS > shell.connect('root@localhost:3306');
Creating a session to 'root@localhost:3306'
Please provide the password for 'root@localhost:3306': ******        #enter the password 123456
Save password for 'root@localhost:3306'? [Y]es/[N]o/Ne[v]er (default No): Y
Fetching schema names for autocompletion... Press ^C to stop.
Your MySQL connection id is 251
Server version: log MySQL Community Server (GPL)
No default schema selected; type \use <schema> to set one.
<ClassicSession:root@localhost:3306>
  
  MySQL  localhost:3306 ssl  JS > dba.configureLocalInstance();
Configuring local MySQL instance listening at port 3306 for use in an InnoDB cluster...
 
This instance reports its own address as XXXXXXXXX
 
Please select an option [1]: 1
Please provide a source address filter for the account (e.g: 192.168.% or % etc) or leave empty and press Enter to cancel.
Account Host: %                         #enter %, meaning any host may connect to this MySQL as root
 
The instance 'localhost:3306' is valid for InnoDB cluster usage.
 
Your password does not satisfy the current policy requirements (MySQL Error 1819)
#Fix: log in to MySQL on db-node02

[root@db-node02 ~]# mysql -p123456
...............
 
mysql> set global validate_password_policy=0;
Query OK, 0 rows affected (0.00 sec)
 
mysql> set global validate_password_length=1;
Query OK, 0 rows affected (0.00 sec)
#Then go back to the mysql-shell on db-node02 and continue the configuration

[root@db-node02 ~]# mysqlsh
.............
MySQL  JS > shell.connect('root@localhost:3306');
Creating a session to 'root@localhost:3306'
Fetching schema names for autocompletion... Press ^C to stop.
Your MySQL connection id is 180
Server version: log MySQL Community Server (GPL)
No default schema selected; type \use <schema> to set one.
<ClassicSession:root@localhost:3306>
 
 MySQL  localhost:3306 ssl  JS > dba.configureLocalInstance();
Configuring local MySQL instance listening at port 3306 for use in an InnoDB cluster...
 
This instance reports its own address as db-node02
Clients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.
  
Please select an option [1]: 1
Please provide a source address filter for the account (e.g: 192.168.% or % etc) or leave empty and press Enter to cancel.
Account Host: %
 
The instance 'localhost:3306' is valid for InnoDB cluster usage.
 
Cluster admin user 'root'@'%' created.
 
 MySQL  localhost:3306 ssl  JS >
#Then log in to MySQL on db-node02 and confirm that the remote root grant created above now exists

[root@db-node02 ~]# mysql -p123456
 ..............
mysql> select host,user from mysql.user;
+-----------+---------------+
| host      | user          |
+-----------+---------------+
| %         | root          |
| localhost | mysql.session |
| localhost | mysql.sys     |
| localhost | root          |
+-----------+---------------+
4 rows in set (0.00 sec)
#Next edit my.cnf and add:

[root@db-node02 ~]# vim /etc/my.cnf
............
loose-group_replication_allow_local_disjoint_gtids_join=on
#(Note: this variable was removed in MySQL 8.0; on 8.0.28 the line is ignored because of the loose- prefix, so this step can be skipped.)
#Restart the MySQL service

[root@db-node02 ~]# systemctl restart mysqld
#Then, from the db-route01 mysql-shell window left open earlier, add node02 to the "myCluster" cluster

mysql-js> cluster.addInstance('root@db-node02:3306');
A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.
 
Please provide the password for 'root@db-node02:3306':
Adding instance to the cluster ...
 
The instance 'root@db-node02:3306' was successfully added to the cluster.
#The message above means db-node02 was added to "myCluster" successfully. Check the cluster status:

mysql-js> cluster.status();
{
    "clusterName": "myCluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "db-node01:3306",
        "status": "OK_NO_TOLERANCE",
        "statusText": "Cluster is NOT tolerant to any failures.",
        "topology": {
            "db-node01:3306": {
                "address": "db-node01:3306",
                "mode": "R/W",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "db-node02:3306": {
                "address": "db-node02:3306",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            }
        }
    }
}
#As before, keep this db-route01 mysql-shell window open and stay in the cluster session; it is needed below to add db-node03. (See the common commands later.)
3.3、Add node03 to the cluster

Add node db-node03 to the "myCluster" cluster created above.
First log in to MySQL on db-node03.

[root@db-node03 ~]# mysql -p123456
............
mysql> set global validate_password_policy=0;
Query OK, 0 rows affected (0.00 sec)
 
mysql> set global validate_password_length=1;
Query OK, 0 rows affected (0.00 sec)
#Log in to the mysql-shell on db-node03 and configure the instance

[root@db-node03 ~]# mysqlsh
.............
 MySQL  JS > shell.connect('root@localhost:3306');
Creating a session to 'root@localhost:3306'
Please provide the password for 'root@localhost:3306': ******
Save password for 'root@localhost:3306'? [Y]es/[N]o/Ne[v]er (default No): Y
Fetching schema names for autocompletion... Press ^C to stop.
Your MySQL connection id is 393
Server version: log MySQL Community Server (GPL)
No default schema selected; type \use <schema> to set one.
<ClassicSession:root@localhost:3306>
 
 MySQL  localhost:3306 ssl  JS > dba.configureLocalInstance();
Configuring local MySQL instance listening at port 3306 for use in an InnoDB cluster...
 
This instance reports its own address as db-node03
Clients and other cluster members will communicate with it through this address by default. If this is not correct, the report_host MySQL system variable should be changed.
  
Please select an option [1]: 1
Please provide a source address filter for the account (e.g: 192.168.% or % etc) or leave empty and press Enter to cancel.
Account Host: %
 
The instance 'localhost:3306' is valid for InnoDB cluster usage.
 
Cluster admin user 'root'@'%' created.
#Next edit my.cnf and add:

[root@db-node03 ~]# vim /etc/my.cnf
............
loose-group_replication_allow_local_disjoint_gtids_join=on
#(Ignored on MySQL 8.0; see the note for node02.)
#Restart the MySQL service

[root@db-node03 ~]# systemctl restart mysqld
#Then, from the db-route01 mysql-shell window still open from before, add node03 to the "myCluster" cluster

mysql-js> cluster.addInstance('root@db-node03:3306');
A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.

Please provide the password for 'root@db-node03:3306':
Adding instance to the cluster ...

The instance 'root@db-node03:3306' was successfully added to the cluster.
#The message above means db-node03 was added to "myCluster" successfully. Check the cluster status:

mysql-js> cluster.status();
{
    "clusterName": "myCluster",
    "defaultReplicaSet": {
        "name": "default",
        "primary": "db-node01:3306",
        "status": "OK",
        "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
        "topology": {
            "db-node01:3306": {
                "address": "db-node01:3306",
                "mode": "R/W",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "db-node02:3306": {
                "address": "db-node02:3306",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            },
            "db-node03:3306": {
                "address": "db-node03:3306",
                "mode": "R/O",
                "readReplicas": {},
                "role": "HA",
                "status": "ONLINE"
            }
        }
    }
}
#The cluster status above shows that db-node01 is the primary with R/W (read-write) access, while the other two nodes are secondaries with R/O (read-only) access. The check after this note can be run against each node to confirm its role.
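A quick per-node check of the same thing (in single-primary mode the secondaries are automatically put into super_read_only); a sketch, assuming the remote root account created earlier:

#Run against each node; the primary reports super_read_only = 0, the secondaries report 1
mysql -uroot -p -h db-node02 -e "SELECT @@hostname, @@read_only, @@super_read_only;"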

4)、Frequently used commands (mysqlsh JS syntax)

dba.checkInstanceConfiguration("root@hostname:3306")     #check an instance's configuration before adding it to the cluster
dba.rebootClusterFromCompleteOutage('myCluster');        #reboot the cluster after a complete outage
dba.dropMetadataSchema();                                #drop the cluster metadata schema
var cluster = dba.getCluster('myCluster')                #get the current cluster
cluster.checkInstanceState("root@hostname:3306")         #check the state of a node in the cluster
cluster.rejoinInstance("root@hostname:3306")             #rejoin a node (in local testing rejoin sometimes had no effect; the workaround was to remove the instance and add it back)
cluster.addInstance("root@hostname:3306")                #add a node
cluster.removeInstance("root@hostname:3306")             #remove a node
cluster.removeInstance('root@host:3306',{force:true})    #force-remove a node
cluster.dissolve({force:true})                           #dissolve the cluster
cluster.describe();                                      #describe the cluster

6、MySQL Router systemctl startup configuration (middleware; install it on its own server or on the application server)

tar zxvf /opt/mysql-router-8.0.28-linux-glibc2.12-x86-64bit.tar.gz -C /opt
mv /opt/mysql-router-8.0.28-linux-glibc2.12-x86-64bit /opt/mysqlrouter
chmod -R 755 /opt/mysqlrouter
#On the db-route01 management node, go to the mysql-router installation directory, then configure and start the Router

1)、Bootstrap MySQL Router

[root@db-node01 ~]# /data/tools/mysqlrouter/bin/mysqlrouter --bootstrap root@192.168.101.181:3306 --user=root --directory /data/tools/mysqlrouter/mysqlrouter6446 --conf-base-port 6446 --conf-use-sockets --force 
#(Note: --conf-base-port 6446 sets the mysqlrouter base port; the default is 6446)
Please enter MySQL password for root:

Bootstrapping MySQL Router instance at /data/tools/mysqlrouter/mysqlrouter6446...
MySQL Router  has now been configured for the InnoDB cluster 'myCluster'.
 
The following connection information can be used to connect to the cluster.
 
Classic MySQL protocol connections to cluster 'myCluster':
- Read/Write Connections: localhost:6446             #read/write port
- Read/Only Connections: localhost:6447              #read-only port
 
X protocol connections to cluster 'myCluster':
- Read/Write Connections: localhost:64460
- Read/Only Connections: localhost:64470
#Bootstrapping creates the Router directory and generates its configuration file; adjust the configuration as needed, then configure it to start as a service:
vim /etc/systemd/system/mysqlrouter@3306.service 
[Unit]
Description=MySQL Router
Documentation=man:mysqlrouter(8)
After=network.target
After=syslog.target
[Install]
WantedBy=multi-user.target
[Service]
User=root
Group=root
Type=forking
CPUQuota=100%
TimeoutSec=30
PermissionsStartOnly=true
PIDFile=/data/mysqlrouter/mysqlrouter6446/mysqlrouter.pid
ExecStart=/usr/bin/bash -c "/data/mysqlrouter/mysqlrouter6446/start.sh &"
ExecStop=/usr/bin/bash -c "/data/mysqlrouter/mysqlrouter6446/stop.sh &"
LimitNOFILE = 102400
Restart=no
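With the unit file above in place, the Router can be managed through systemd rather than by calling the start/stop scripts by hand (a sketch; the instance name matches the file created above):

systemctl daemon-reload
systemctl enable --now mysqlrouter@3306
systemctl status mysqlrouter@3306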

2)、Start MySQL Router

[root@db-node01 ~]# cd /data/tools/mysqlrouter/mysqlrouter6446
[root@db-node01 ~]#ls
data  log  mysqlrouter.conf  mysqlrouter.key  run  start.sh  stop.sh
#Then start mysqlrouter
[root@db-node01 ~]#./start.sh
PID 16484 written to /root/myrouter/mysqlrouter.pid             
#Check the log:
[root@db-node01 ~]# cat log/mysqlrouter.log
#Test whether MySQL is reachable through the Router:
mysql -uroot -p -h192.168.101.181 -P 6446    #6446 is the mysqlrouter read/write port
mysql -uroot -p -h192.168.101.181 -P 6447    #6447 is the mysqlrouter read-only port

[root@db-node01 ~]# ps -ef | grep mysql
#Log in to the two secondaries and enable read-only mode (run this only on the secondaries, not on the primary).
mysql> set global read_only=on;
mysql> set global super_read_only=on;
mysql> show master status;
#A MySQL client can now connect through the Router. Verify the connection:
a) Connect with the mysql client from the management node:
[root@db-node03 ~]# mysql -u root -h 192.168.101.183 -P 6446 -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1054
Server version: log MySQL Community Server (GPL)
 Copyright (c) 2000, 2019, Oracle and/or its affiliates. All rights reserved.
 Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
 Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
 
mysql> show databases;
+-------------------------------+
| Database                      |
+-------------------------------+
| information_schema            |
| mysql                         |
| mysql_innodb_cluster_metadata |
| performance_schema            |
| sys                           |
+-------------------------------+
5 rows in set (0.00 sec)

3)、Create a test database

mysql> CREATE DATABASE kevin CHARACTER SET utf8 COLLATE utf8_general_ci; 
Query OK, 1 row affected (0.06 sec)
mysql> select host,user from mysql.user;
mysql> use kevin;
Database changed
mysql> create table if not exists haha (id int(10) PRIMARY KEY AUTO_INCREMENT,name varchar(50) NOT NULL);
Query OK, 0 rows affected (0.22 sec)
 
mysql> insert into kevin.haha values(1,"wangshibo"),(2,"guohuihui"),(3,"yangyang"),(4,"shikui");
Query OK, 4 rows affected (0.13 sec)
Records: 4  Duplicates: 0  Warnings: 0
 
mysql> select * from kevin.haha;
+----+-----------+
| id | name      |
+----+-----------+
|  1 | wangshibo |
|  2 | guohuihui |
|  3 | yangyang  |
|  4 | shikui    |
+----+-----------+
4 rows in set (0.00 sec)
#Log in to MySQL on each of the three cluster nodes and confirm that the test database kevin has been replicated: writes land on db-node01 first and are then replicated to the read-only nodes db-node02 and db-node03.

#Note: connections made through port 6446 can read and write, while connections through port 6447 are read-only. After logging in, run "select @@hostname" to see which node you landed on (see the one-liners after the next listing).

[root@db-node03 ~]# mysql -u root -h 192.168.101.182 -P 6447 -p
.............
mysql> select * from kevin.haha;
+----+-----------+
| id | name      |
+----+-----------+
|  1 | wangshibo |
|  2 | guohuihui |
|  3 | yangyang  |
|  4 | shikui    |
+----+-----------+
4 rows in set (0.00 sec)
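A quick way to see the Router's read/write split from the command line (point -h at the host where the Router was bootstrapped; run the second command a few times, since the default strategy of the read-only port rotates across the read-only members):

mysql -uroot -p -h 192.168.101.181 -P 6446 -e "SELECT @@hostname;"    #always lands on the primary
mysql -uroot -p -h 192.168.101.181 -P 6447 -e "SELECT @@hostname;"    #lands on one of the read-only members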

7、Useful statements and troubleshooting

#Reboot the cluster after a complete outage:
 dba.rebootClusterFromCompleteOutage('myCluster');
#Get the cluster:
var cluster=dba.getCluster('myCluster')
#Add addresses to the allowlist:
set global group_replication_ip_allowlist="192.168.101.181,192.168.101.182";
#If the cluster is broken, it can be cleaned up and re-created. Log in to the damaged node and clear the cluster metadata:
dba.dropMetadataSchema()
#After clearing, re-create it: var cluster = dba.createCluster('myCluster');
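As a sketch of the outage-recovery case: connect MySQL Shell to the member with the most recent data and reboot the cluster from there (host name as used in this guide):

#Rebuild the cluster session after all members were down
mysqlsh root@node01:3306 --js -e "dba.rebootClusterFromCompleteOutage('myCluster')"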

8、If the first machine was installed and then cloned to create the other machines:

Edit /etc/hosts and add each node's IP and hostname (this must be adjusted on every node).
For example:
192.168.101.181 node01
192.168.101.182 node02
...
Change the hostname:
sudo hostname node03

Change the static IP:
/etc/sysconfig/network-scripts/ifcfg-ens160
Regenerate the MAC address:
VM settings -> Network Adapter -> Advanced -> Generate
#Update the corresponding IP and server-id in my.cnf
#Generate a new UUID for MySQL and find the datadir:
mysql> select uuid();
mysql> show variables like 'datadir';
#Change the UUID in auto.cnf (it lives in the datadir; see the alternative after the restart command):
vim /data/mysql/data/auto.cnf
#Restart after the changes:  
service mysqld restart
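Instead of pasting a new UUID into auto.cnf by hand, the file can simply be removed while MySQL is stopped; the server generates a fresh server_uuid on the next start. A sketch using the paths from this guide:

#Regenerate the server UUID on the cloned machine
/usr/local/mysql/support-files/mysql.server stop
rm -f /data/mysql/data/auto.cnf
/usr/local/mysql/support-files/mysql.server start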
#Re-run the instance configuration (once it succeeds, node02 can be added to the cluster from node01):
 MySQL  JS > shell.connect('root@localhost:3306');
Creating a session to 'root@localhost:3306'
Fetching schema names for autocompletion... Press ^C to stop.
Your MySQL connection id is 15
Server version: 8.0.28 MySQL Community Server - GPL
No default schema selected; type \use <schema> to set one.
<ClassicSession:root@localhost:3306>
 MySQL  localhost:3306 ssl  JS > dba.configureLocalInstance();
Configuring local MySQL instance listening at port 3306 for use in an InnoDB cluster...

This instance reports its own address as 192.168.101.182:3306

applierWorkerThreads will be set to the default value of 4.

The instance '192.168.101.182:3306' is valid to be used in an InnoDB cluster.
The instance '192.168.101.182:3306' is already ready to be used in an InnoDB cluster.

Successfully enabled parallel appliers.
#On node01, add node02 to the cluster:
 MySQL  localhost:3306 ssl  JS > cluster.addInstance('root@node02:3306');

WARNING: A GTID set check of the MySQL instance at '192.168.101.182:3306' determined that it contains transactions that do not originate from th
e cluster, which must be discarded before it can join the cluster.
192.168.101.182:3306 has the following errant GTIDs that do not exist in the cluster:
3fa4ca93-53ce-11ef-93f6-000c29731638:2-4,
d5364e36-d19f-11ec-900e-000c29e2ab08:1

WARNING: Discarding these extra GTID events can either be done manually or by completely overwriting the state of 192.168.101.182:3306 with a ph
ysical snapshot from an existing cluster member. To use this method by default, set the 'recoveryMethod' option to 'clone'.
Having extra GTID events is not expected, and it is recommended to investigate this further and ensure that the data can be removed prior to cho
osing the clone recovery method.
Please select a recovery method [C]lone/[A]bort (default Abort): c
Validating instance configuration at node02:3306...

This instance reports its own address as 192.168.101.182:3306

Instance configuration is suitable.
NOTE: Group Replication will communicate with other members using '192.168.101.182:33061'. Use the localAddress option to override.

A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.

Adding instance to the cluster...

Monitoring recovery process of the new cluster member. Press ^C to stop monitoring and let it continue in background.
Clone based state recovery is now in progress.

NOTE: A server restart is expected to happen as part of the clone process. If the
server does not support the RESTART command or does not come back after a
while, you may need to manually start it back.

* Waiting for clone to finish...
NOTE: 192.168.101.182:3306 is being cloned from node01:3306
** Stage DROP DATA: Completed 
** Clone Transfer  
    FILE COPY  ############################################################  100%  Completed
    PAGE COPY  ############################################################  100%  Completed
    REDO COPY  ############################################################  100%  Completed

NOTE: 192.168.101.182:3306 is shutting down...

* Waiting for server restart... ready 
* 192.168.101.182:3306 has restarted, waiting for clone to finish...
** Stage RESTART: Completed
* Clone process has finished: 72.61 MB transferred in 2 sec (36.30 MB/s)

State recovery already finished for '192.168.101.182:3306'

The instance '192.168.101.182:3306' was successfully added to the cluster.
 MySQL  localhost:3306 ssl  JS > cluster.status();
{
    "clusterName": "myCluster", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "node01:3306", 
        "ssl": "REQUIRED", 
        "status": "OK_NO_TOLERANCE", 
        "statusText": "Cluster is NOT tolerant to any failures.", 
        "topology": {
            "192.168.101.182:3306": {
                "address": "192.168.101.182:3306", 
                "memberRole": "SECONDARY", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "replicationLag": null, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.28"
            }, 
            "node01:3306": {
                "address": "node01:3306", 
                "memberRole": "PRIMARY", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "replicationLag": null, 
                "role": "HA", 
                "status": "ONLINE", 
                "version": "8.0.28"
            }
        }, 
        "topologyMode": "Single-Primary"
    }, 
    "groupInformationSourceMember": "node01:3306"
}

9、Restart procedure:

1) Stop the firewall, or open port 3306, so that all the other nodes can connect:

systemctl stop firewalld.service

2) Start the database on the primary node:

service mysqld start

3) On the primary node: right after a restart it is not yet known which member will be the primary, so one node must bootstrap the group:

mysql> set global group_replication_bootstrap_group=on;
mysql> start group_replication;
mysql> set global group_replication_bootstrap_group=off;

4) Once the primary is up, start the database on each secondary. Because this guide sets loose-group_replication_start_on_boot=OFF, each secondary must also run "start group_replication" to rejoin the group (with start_on_boot=ON it rejoins automatically); see the sketch after this list.

5) Start MySQL Router (this middleware can run on its own server or on the application server).
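A minimal sketch of step 4 on a secondary, using the credentials from this guide:

#On each secondary, once its MySQL service is running again
mysql -uroot -p123456 -e "START GROUP_REPLICATION;"
#Confirm the member shows up as ONLINE
mysql -uroot -p123456 -e "SELECT MEMBER_HOST, MEMBER_STATE FROM performance_schema.replication_group_members;"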

Known issue:

2024-08-21T09:50:14.883276+08:00 75 [System] [MY-013587] [Repl] Plugin group_replication reported: 'Plugin 'group_replication' is starting.'
2024-08-21T09:50:14.915108+08:00 78 [System] [MY-010597] [Repl] 'CHANGE MASTER TO FOR CHANNEL 'group_replication_applier' executed'. Previous state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''. New state master_host='<NULL>', master_port= 0, master_log_file='', master_log_pos= 4, master_bind=''.
2024-08-21T09:50:21.018477+08:00 0 [ERROR] [MY-011526] [Repl] Plugin group_replication reported: 'This member has more executed transactions than those present in the group. Local transactions: 3fa4ca93-53ce-11ef-93f6-000c29731638:1-3, b5a9bdff-3c79-4cb4-97a1-e38c45fad878:1-64, bcd903e2-5bb1-11ef-bc63-000c29731638:1-1414, bcd909a0-5bb1-11ef-bc63-000c29731638:1-9, d5364e36-d19f-11ec-900e-000c29e2ab08:1 > Group transactions: 3fa4ca93-53ce-11ef-93f6-000c29731638:1-3, b5a9bdff-3c79-4cb4-97a1-e38c45fad878:1-64, bcd903e2-5bb1-11ef-bc63-000c29731638:1-1414, bcd909a0-5bb1-11ef-bc63-000c29731638:1-9'
2024-08-21T09:50:21.018595+08:00 0 [ERROR] [MY-011522] [Repl] Plugin group_replication reported: 'The member contains transactions not present in the group. The member will now exit the group.'
2024-08-21T09:50:21.018640+08:00 0 [System] [MY-011503] [Repl] Plugin group_replication reported: 'Group membership changed to 192.168.101.181:3306, 192.168.101.182:3306 on view 17242028526588715:6.'
2024-08-21T09:50:24.076443+08:00 0 [System] [MY-011504] [Repl] Plugin group_replication reported: 'Group membership changed: This member has left the group.'
#Things worth trying first:
show variables like '%func%';
set global log_bin_trust_function_creators=1;
start group_replication;
select * from performance_schema.replication_group_members;
#Main fix:
#Run on both the primary and the secondaries (this wipes the local binlog/GTID history; compare the local and group GTID sets first, see the check after these commands):
mysql> reset master;
#Restart the primary and bootstrap the group again, then start group replication on the secondaries:
mysql> start group_replication;
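Before wiping the binlog and GTID history with reset master, it is worth comparing what the errant member has executed with what the group has (the error log above prints both sets as well); a quick check on each node:

mysql -uroot -p123456 -e "SELECT @@GLOBAL.gtid_executed;"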

Verification:

1、While the primary keeps writing and a secondary is down, the secondary automatically rejoins group replication after it restarts and applies the transactions it missed (works as expected).
2、If primary A fails, does failover to secondary B happen automatically? At first it did not, which looked like a problem. After B became the primary and new data was written, would a restarted A automatically rejoin the group and apply the data? This automatic behaviour requires three or more database servers (resolved).