1. MySQL 5.7 installation
One-click MySQL install script — 5.2 onkey_install_mysql
2. CRUD SQL statements (insert, delete, update, select)
insert into tablename(field1,field2,field...) values (xx,xx,xx...);
delete from tablename where field1='xx';
update tablename set field2=xx where field1=xx;
select * from tablename;
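The four statement shapes above can be exercised end to end. A minimal sketch using an in-memory SQLite database as a stand-in for MySQL (the table and column names `tablename`, `field1`, `field2` are the placeholders from the text):

```python
import sqlite3

# In-memory stand-in for a MySQL table with two columns.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE tablename (field1 TEXT, field2 INTEGER)")

# insert into tablename(field1, field2) values (xx, xx);
cur.execute("INSERT INTO tablename (field1, field2) VALUES (?, ?)", ("xx", 1))
cur.execute("INSERT INTO tablename (field1, field2) VALUES (?, ?)", ("yy", 2))

# update tablename set field2=xx where field1=xx;
cur.execute("UPDATE tablename SET field2 = ? WHERE field1 = ?", (99, "xx"))

# delete from tablename where field1='xx';
cur.execute("DELETE FROM tablename WHERE field1 = ?", ("yy",))

# select * from tablename;
rows = cur.execute("SELECT * FROM tablename").fetchall()
print(rows)  # [('xx', 99)]
```

The order of operations matters here: the update targets the row before the delete removes the other one, so only the updated row survives the final select.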
Exercise: write the MySQL statements to create an HRM database, dropping and recreating it if it already exists, then create a user table in HRM with an auto-increment id and a name column of up to 25 characters.
DROP DATABASE IF EXISTS HRM;
create database HRM;
use HRM;
create table user(id int auto_increment primary key, name varchar(25)); # an auto_increment column must be defined as a key
3. Extending the company's single-server database into a master-slave architecture
A) First ensure the base data is consistent
Master:
mysqldump -uroot -p'123456' --all-databases >/backup/all_db.sql
scp /backup/all_db.sql root@192.168.169.171:~/
Slave:
mysql -uroot -p'123456' < ~/all_db.sql
B) Enable binary logging: vim /etc/my.cnf
[mysqld]
log_bin
server_id = 1 # use 2 on the slave
C) On the master, create a user with replication privileges
create user 'slave'@'192.168.169.171' identified by '123456';
grant replication slave on *.* to 'slave'@'192.168.169.171';
D) On the slave, point replication at the master
CHANGE MASTER TO MASTER_HOST='192.168.169.170' ,
MASTER_USER='slave',
MASTER_PASSWORD='123456',
MASTER_PORT=3306,
MASTER_LOG_FILE='mysql-bin.000002',
MASTER_LOG_POS=120;
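MASTER_LOG_FILE and MASTER_LOG_POS name the binlog file and byte position recorded when the base dump was taken; the slave replays only events from that point on, so nothing already in the dump is applied twice. A toy Python model of this idea (not the real binlog format; the positions and statements here are made up for illustration):

```python
# Toy model: each binlog event carries the byte offset at which it starts.
# The slave applies only events at or after MASTER_LOG_POS.
binlog = [
    (4,   "create database HRM"),                     # already in the base dump
    (120, "insert into user(name) values ('alice')"),  # happened after the dump
    (154, "insert into user(name) values ('bob')"),
]

MASTER_LOG_POS = 120  # position recorded when the dump was taken

replayed = [stmt for pos, stmt in binlog if pos >= MASTER_LOG_POS]
print(len(replayed))  # 2: only events after the dump point are applied
```

If the position were set to 4 instead, the `create database` would be replayed against a server that already has HRM from the dump, which is exactly the duplicate-apply error the position prevents.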
E) Stop the firewall and disable SELinux
service firewalld stop
setenforce 0
F) On the slave, start replication
mysql> start slave;
# check whether replication started successfully
root@(none) 05:57 mysql>show slave status\G # expected output
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.169.170
Master_User: slave
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000002
Read_Master_Log_Pos: 154
Relay_Log_File: node1-relay-bin.000004
Relay_Log_Pos: 320
Relay_Master_Log_File: mysql-bin.000002
Slave_IO_Running: Yes # IO thread is running
Slave_SQL_Running: Yes # SQL thread is running
4. Master-master mode walkthrough
The steps are the same as above, with one difference in configuration:
give the two masters different auto-increment starting offsets and the same increment step, so the ids each side generates never collide.
master A configuration: auto-increment-offset=1
auto-increment-increment=2
master B configuration: auto-increment-offset=2
auto-increment-increment=2
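With increment 2 and offsets 1 and 2, master A generates odd ids and master B even ids, so concurrent writes on both masters never clash on the primary key. A quick sketch of the resulting sequences:

```python
# Simulate MySQL's auto-increment sequence for a given offset and step.
def auto_ids(offset: int, increment: int, n: int) -> list[int]:
    return [offset + i * increment for i in range(n)]

ids_a = auto_ids(offset=1, increment=2, n=5)  # master A: auto-increment-offset=1
ids_b = auto_ids(offset=2, increment=2, n=5)  # master B: auto-increment-offset=2

print(ids_a)  # [1, 3, 5, 7, 9]
print(ids_b)  # [2, 4, 6, 8, 10]
print(set(ids_a) & set(ids_b))  # set(): the sequences are disjoint
```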
5. Create a user kf for the development team, password Hello88, granted SELECT-only read access
create user kf identified by "Hello88";
grant select on *.* to kf@'%';
6. Database backup command
mysqldump
-h specify the ip of the server holding the data to back up
-P specify the server port
-u -p specify the username and password
-A back up all databases
-B specify the database(s) to back up
7. Extending the architecture: read/write splitting, with one master handling writes and one slave handling reads
A) Install MySQL Router on the middle (proxy) server
# centos7
wget https://dev.mysql.com/get/Downloads/MySQL-Router/mysql-router-community-8.0.23-1.el7.x86_64.rpm
# centos8
wget https://dev.mysql.com/get/Downloads/MySQL-Router/mysql-router-community-8.0.23-1.el8.x86_64.rpm
rpm -ivh mysql-router-community-8.0.23-1.el8.x86_64.rpm
B) Edit the configuration file on the middle server
vim /etc/mysqlrouter/mysqlrouter.conf
[routing:read_write]
bind_address = 192.168.169.138 # this machine's (the middle server's) ip
bind_port = 7001
mode = read-write
destinations = 192.168.169.170:3306 # the master's ip
max_connections = 65535
max_connect_errors = 100
client_connect_timeout = 2
[routing:read_only]
bind_address = 192.168.169.138 # this machine's (the middle server's) ip
bind_port = 7002
mode = read-only
destinations = 192.168.169.171:3306 # the slave's ip
max_connections = 65535
max_connect_errors = 100
client_connect_timeout = 2
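With this config MySQL Router does no SQL inspection: it forwards whatever arrives on port 7001 to the master and whatever arrives on 7002 to the slave, so the application decides which port to use per statement. A minimal sketch of that application-side decision (the `pick_port` helper is hypothetical, not part of MySQL Router; the ports come from the config above):

```python
READ_WRITE_PORT = 7001  # [routing:read_write] -> master
READ_ONLY_PORT = 7002   # [routing:read_only]  -> slave

def pick_port(sql: str) -> int:
    """Naive per-statement routing: reads to the slave, everything else to the master."""
    first_word = sql.lstrip().split(None, 1)[0].lower()
    return READ_ONLY_PORT if first_word in ("select", "show") else READ_WRITE_PORT

print(pick_port("select * from hejin"))                        # 7002
print(pick_port("insert into hejin(id,name) values(3,'zzz')"))  # 7001
```

Real applications usually make this choice at the connection-pool level rather than per statement, but the principle is the same: the split lives in the client, not the router.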
C) service mysqlrouter start # start the mysqlrouter service
netstat -anplut |grep mysqlrouter # check that ports 7001 and 7002 are listening
D) On the master, create two users: one read-write, one read-only
grant all on *.* to "u_rw"@'%' identified by '123456';
grant select on *.* to "u_read"@'%' identified by '123456';
E) Test read-write
1. mysql -u'u_rw' -p'123456' -P7001 -h 192.168.169.138
2. use hejin
3. u_rw@hejin 04:47 mysql>insert into hejin(id,name) values(3,'zzz'); # write
Query OK, 1 row affected (0.00 sec)
4. u_rw@hejin 04:47 mysql>select * from hejin; # read
+------+------+
| id | name |
+------+------+
| 1 | xxx |
| 2 | yyy |
| 3 | zzz |
+------+------+
F) Test read-only
mysql -u'u_read' -p'123456' -P7002 -h 192.168.169.138
1. The user has no write permission
u_read@hejin 04:55 mysql>insert into hejin(id,name) values(4,'aaa');
ERROR 1142 (42000): INSERT command denied to user 'u_read'@'192.168.169.138' for table 'hejin'
2. The user does have read permission
u_read@hejin 04:55 mysql>select * from hejin;
+------+------+
| id | name |
+------+------+
| 1 | xxx |
| 2 | yyy |
| 3 | zzz |
| 4 | aaa |
| 4 | aaa |
| 4 | aaa |
+------+------+
# Note: read/write control is enforced not by mysqlrouter but by MySQL account privileges; mysqlrouter only does the traffic splitting.
8. MHA high-availability implementation
For details see: MySQL high-availability architecture with MHA
Preparation
manager:192.168.169.169 # the controller node
master:192.168.169.170
slave1:192.168.169.171
slave2:192.168.169.172
A) Configuration on the master
vim /etc/my.cnf
[mysqld]
server-id = 1 # each node in the replication cluster must have a unique id
log-bin = master-log # enable the binary log
relay-log = relay-log # enable the relay log
B) Configuration on the slaves
vim /etc/my.cnf
[mysqld]
server-id = 2 # each node in the replication cluster must have a unique id
relay-log = relay-log # enable the relay log
log-bin = master-log # enable the binary log
read_only = ON # make the slave read-only
relay_log_purge = 0 # do not automatically purge relay logs that are no longer needed
log_slave_updates = 1 # write replicated updates into the slave's own binary log
C) Restart the MySQL service
service mysqld restart
# log in to MySQL on each slave
CHANGE MASTER TO MASTER_HOST='192.168.169.170' ,
MASTER_USER='slave',
MASTER_PASSWORD='123456',
MASTER_PORT=3306,
MASTER_LOG_FILE='mysql-bin.000003',
MASTER_LOG_POS=120;
# start replication
start slave;
Pitfall hit here: after start slave on slave2, the IO thread stayed in the Connecting state and could not reach the master.
Cause and fix: the replication account had only been created for 192.168.169.171, so create it for slave2's address as well:
create user 'slave'@'192.168.169.172' identified by '123456';
grant replication slave on *.* to 'slave'@'192.168.169.172';
D) Install the MHA packages
Run on the master: grant all on *.* to 'mhaadmin'@'192.168.%.%' identified by '123456';
Set up passwordless SSH between all machines: ssh-keygen -t rsa
ssh-copy-id -i .ssh/id_rsa.pub root@192.168.169.170 (repeat for .171, .172, and so on)
E) Install the mha4mysql packages
All four nodes need: mha4mysql-node-0.56-0.el6.noarch.rpm
The manager node additionally needs: mha4mysql-manager-0.56-0.el6.noarch.rpm
yum install -y mha4mysql-node-0.56-0.el6.noarch.rpm
yum install -y mha4mysql-manager-0.56-0.el6.noarch.rpm
Note: on the manager node, be sure to install mha4mysql-node first and mha4mysql-manager second; otherwise installing mha4mysql-manager will fail and block the rest of the setup.
F) Define the MHA management configuration file on the manager node
Create a dedicated management user for MHA for later use; create it on the MySQL master node and the three slave nodes will pick it up automatically via replication:
mkdir /etc/mha_master
vim /etc/mha_master/mha.cnf
The configuration file contents are as follows
[server default]
user=mhaadmin
password=123456
manager_workdir=/etc/mha_master/app1
manager_log=/etc/mha_master/manager.log
remote_workdir=/mydata/mha_master/app1
ssh_user=root
repl_user=slave
repl_password=123456
ping_interval=1
[server1]
hostname=192.168.169.170
ssh_port=22
candidate_master=1
[server2]
hostname=192.168.169.171
ssh_port=22
candidate_master=1
[server3]
hostname=192.168.169.172
ssh_port=22
candidate_master=1
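The mha.cnf above is INI-style, so it can be read with Python's configparser. This sketch (an illustration, not part of the MHA tooling) pulls out the hosts flagged candidate_master=1, i.e. the nodes MHA is allowed to promote during failover:

```python
import configparser

# Abridged copy of the mha.cnf defined above.
MHA_CNF = """
[server default]
user=mhaadmin
password=123456
ping_interval=1

[server1]
hostname=192.168.169.170
candidate_master=1

[server2]
hostname=192.168.169.171
candidate_master=1

[server3]
hostname=192.168.169.172
candidate_master=1
"""

cfg = configparser.ConfigParser()
cfg.read_string(MHA_CNF)

# Only [serverN] sections carry a hostname; [server default] is skipped
# because it has no candidate_master key.
candidates = [
    cfg[s]["hostname"]
    for s in cfg.sections()
    if cfg[s].get("candidate_master") == "1"
]
print(candidates)  # all three data nodes are promotable
```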
G) Run checks against the four nodes
- Check that the SSH trust between the nodes is configured correctly:
on the manager, run masterha_check_ssh -conf=/etc/mha_master/mha.cnf
If the last line shows [info] All SSH connection tests passed successfully., the check passed.
- Check the connection parameters of the managed MySQL replication cluster:
on the manager, run masterha_check_repl -conf=/etc/mha_master/mha.cnf
If this check fails with MySQL Replication Health is NOT OK!,
it may be because the replication account is missing on the slave nodes: in this architecture any slave can become the master, so the account must exist on every node.
In that case, run the following on the master again:
grant replication slave,replication client on *.* to 'slave'@'192.168.%.%' identified by '123456';
flush privileges;
If the check still fails, see https://blog.csdn.net/zhuoweichen1/article/details/126599763 for a solution.
H) Start MHA on the manager node
nohup masterha_manager -conf=/etc/mha_master/mha.cnf &> /etc/mha_master/manager.log &
- Check the master node status
[root@localhost ~]# masterha_check_status -conf=/etc/mha_master/mha.cnf
mha (pid:37819) is running(0:PING_OK), master:192.168.169.170
To stop MHA, use masterha_stop -conf=/etc/mha_master/mha.cnf
I) Test failover
- On the master node, stop the mysql service to simulate a crash of the primary
[root@localhost ~]# service mysqld stop
Redirecting to /bin/systemctl stop mysqld.service
- On the manager node, inspect the log
[root@localhost ~]# tail -200 /etc/mha_master/manager.log
...
Master 192.168.169.170(192.168.169.170:3306) is down!
Check MHA Manager logs at localhost.localdomain:/etc/mha_master/manager.log for details.
Started automated(non-interactive) failover.
The latest slave 192.168.169.171(192.168.169.171:3306) has all relay logs for recovery.
Selected 192.168.169.171(192.168.169.171:3306) as a new master.
192.168.169.171(192.168.169.171:3306): OK: Applying all logs succeeded.
192.168.169.172(192.168.169.172:3306): This host has the latest relay log events.
Generating relay diff files from the latest slave succeeded.
192.168.169.172(192.168.169.172:3306): OK: Applying all logs succeeded. Slave started, replicating from 192.168.169.171(192.168.169.171:3306)
192.168.169.171(192.168.169.171:3306): Resetting slave info succeeded.
## This shows the manager detected the failure of node 192.168.169.170 and automatically performed failover, promoting 192.168.169.171 to the new master.
Master failover to 192.168.169.171(192.168.169.171:3306) completed successfully.
[2]- Done nohup masterha_manager -conf=/etc/mha_master/mha.cnf &>/etc/mha_master/manager.log
# Note: after failover completes, the manager stops automatically
[root@localhost ~]# masterha_check_status -conf=/etc/mha_master/mha.cnf
mha is stopped(2:NOT_RUNNING).
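The promotion step in the log above ("The latest slave ... has all relay logs for recovery. Selected ... as a new master.") can be modeled in a few lines. This is a deliberately simplified sketch: real MHA also weighs candidate_master flags, applies relay-log diffs to lagging slaves, and so on.

```python
# Simplified model of new-master selection: among the surviving slaves,
# promote the one that has read furthest in the dead master's binlog.
# Each entry is (binlog file, read position), mirroring Read_Master_Log_Pos.
slaves = {
    "192.168.169.171": ("mysql-bin.000002", 154),  # most up to date
    "192.168.169.172": ("mysql-bin.000002", 120),  # lagging
}

# Tuples compare file name first, then position, so the max is the
# slave furthest along in the newest binlog file.
new_master = max(slaves, key=lambda host: slaves[host])
print(new_master)  # 192.168.169.171, matching the failover log above
```

After selection, MHA brings the lagging slaves up to the new master's position (the "Generating relay diff files" step in the log) before repointing them at it.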