MySQL Master-Slave Replication

Section 1: Why Use MySQL Master-Slave Replication

- In a system with complex business logic, a single SQL statement may need to lock a table, temporarily making the read service unavailable and disrupting running business. With master-slave replication, the master handles writes and the slaves handle reads, so even when the master locks a table, reads against the slaves keep the business running normally.
- Hot backup of data: when the master goes down, a slave can take over promptly, keeping the business available.
- Architectural scaling: as business volume grows, I/O access on a single machine becomes too frequent for one server to handle. Spreading the data across multiple databases lowers the disk I/O frequency and improves the I/O performance of each individual machine.
ALTER USER root@'localhost' IDENTIFIED WITH mysql_native_password BY 'root'; -- change the root password
Section 2: MySQL Replication Flow

- Update events on the master db (UPDATE, INSERT, DELETE) are written to the binlog.
- After the slave starts, it creates an I/O thread to read the master's binlog content.
- The master creates a binlog dump thread and sends the binlog content to the slave.
- The slave reads the binlog content sent by the master and writes it into its relay log.
- The slave creates a SQL thread that reads from the relay log and, starting at the Exec_Master_Log_Pos position, executes the update events it reads, applying the changes to the slave's db.
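The steps above can be illustrated with a toy in-memory simulation (a hypothetical model for intuition only, not how MySQL is implemented; the class name ReplicationSketch is made up): the binlog and relay log are event lists, and the slave SQL thread replays relay-log events starting from its last executed position, analogous to Exec_Master_Log_Pos.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class ReplicationSketch {

    // slave SQL thread: replay relay-log events starting at execPos
    // (analogous to Exec_Master_Log_Pos) and return the slave's state
    public static Map<String, String> replay(List<String[]> relayLog,
                                             Map<String, String> slaveDb, int execPos) {
        while (execPos < relayLog.size()) {
            String[] event = relayLog.get(execPos++);
            slaveDb.put(event[0], event[1]); // apply one update event
        }
        return slaveDb;
    }

    public static void main(String[] args) {
        Map<String, String> masterDb = new HashMap<>();
        List<String[]> binlog = new ArrayList<>();

        // master: every update event is applied and appended to the binlog
        masterDb.put("id1", "Alice");
        binlog.add(new String[]{"id1", "Alice"});
        masterDb.put("id2", "Bob");
        binlog.add(new String[]{"id2", "Bob"});

        // slave I/O thread: copy the events the dump thread sends into the relay log
        List<String[]> relayLog = new ArrayList<>(binlog);

        // slave SQL thread: replay from position 0
        Map<String, String> slaveDb = replay(relayLog, new HashMap<>(), 0);
        System.out.println(slaveDb.equals(masterDb)); // prints true
    }
}
```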
Section 3: How MySQL Replication Works

MySQL replication is an asynchronous process: the master sends update events to the slave, and the slave reads and executes those update records so that its content stays consistent with the master's.

**binlog:** the binary log, a binary file on the master that stores the log of all update events. From the moment the database service starts, the binlog records every change to the database (both structure and content). On the master, every update event is written to the binlog in order, and the binlog is later pushed to the slaves as the data source they replicate from.

**Binlog dump thread:** whenever a slave connects to the master, the master creates a thread to send binlog content to that slave. For each SQL event about to be sent to the slave, the dump thread takes a lock on it; once the thread has finished reading the event, that is, once the event has been completely sent to the slave, the lock is released.

On the slave, when replication starts, the slave creates a slave I/O thread and a slave SQL thread to do the replication work.

**Slave I/O thread:** after START SLAVE is executed on the slave, the slave creates an I/O thread that connects to the master and asks it to send the update records in its binlog. The slave I/O thread reads the updates sent by the master's binlog dump thread and copies them into local files, including the relay log.

**Slave SQL thread:** the slave creates a SQL thread that reads the update events the I/O thread wrote to the relay log and executes them.
Section 4: Setting Up MySQL Replication

A MySQL instance on the local machine acts as the slave; a MySQL instance deployed on Alibaba Cloud acts as the master.

1. Grant the master's root account remote access

USE mysql;
UPDATE `user` SET host='%' WHERE user='root';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%';
FLUSH PRIVILEGES;
2. Create a replication account for the slave
CREATE USER 'sync'@'%' IDENTIFIED WITH mysql_native_password BY 'root';
GRANT REPLICATION SLAVE ON *.* TO 'sync'@'%';
FLUSH PRIVILEGES;
3. Edit the master's configuration

vi /etc/my.cnf

Append the following at the end:

log-bin=master-bin
server-id=30001

Restart the MySQL service so the configuration takes effect:

systemctl restart mysqld

4. Check the master's status
SHOW MASTER STATUS;
Zip-archive (no-installer) setup: in the MySQL install directory mysql-8.0.29-winx64, create a my.ini file with the following content:

[mysqld]
character-set-server=utf8
bind-address=0.0.0.0
port=3306
basedir=D:/mysql-8.0.29-winx64/
datadir=D:/mysql-8.0.29-winx64/data/
max_connections=2000
default-storage-engine=INNODB
log-bin=slave-bin
server-id=21000
[mysql]
default-character-set=utf8
[mysql.server]
default-character-set=utf8
[client]
default-character-set=utf8

First run mysqld.exe --initialize-insecure and wait for initialization to finish; afterwards a data folder will appear. Register the service with mysqld install <service-name>, start it with net start <service-name>, then connect with mysql -u root -p. If that command cannot connect, connect directly with Navicat instead. Then continue from step 6 below.
5. Edit the slave's configuration

Configuration file path: C:\ProgramData\MySQL\MySQL Server 8.0\my.ini

# change the values of these two options
log-bin=slave-bin
server-id=30002
6. Link the slave to the master

STOP SLAVE; # stop the slave so the master settings are applied cleanly
CHANGE MASTER TO master_host = '121.199.174.183',
    master_user = 'sync',
    master_password = 'root',
    master_log_file = 'master-bin.000001',
    master_log_pos = 156;
START SLAVE; # start the slave
7. Check the slave's status

SHOW SLAVE STATUS;

8. Test

Create a database and a table on the master, then check that they are synchronized to the slave.
Section 5: MySQL Read/Write Splitting with Spring Boot

1. AbstractRoutingDataSource

This is an abstract class. It does not obtain connections itself; instead, it selects one of the routable data sources provided to it and obtains a connection from that source.
package com.qf.mysql.replication.datasource;

import org.springframework.jdbc.datasource.lookup.AbstractRoutingDataSource;
import javax.sql.DataSource;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class ErpRoutingDataSource extends AbstractRoutingDataSource {

    private int index;

    // This method decides the key used to look up a data source. With multiple
    // data sources, each key maps to one data source, so the returned key
    // determines which database is used.
    @Override
    protected Object determineCurrentLookupKey() {
        Map<Object, DataSource> resolvedDataSources = getResolvedDataSources();
        DataSourceType type = DataSourceTypeHolder.get();
        Set<Object> objects = resolvedDataSources.keySet();
        List<DataSourceKey> dataSourceKeys = new ArrayList<>();
        for (Object key : objects) {
            if (((DataSourceKey) key).getType() == type) {
                dataSourceKeys.add((DataSourceKey) key);
            }
        }
        Object dataSourceKey;
        if (dataSourceKeys.isEmpty()) {
            throw new RuntimeException("No data source is currently available");
        } else if (dataSourceKeys.size() == 1) {
            dataSourceKey = dataSourceKeys.get(0);
        } else { // multiple data sources: round-robin for load balancing
            if (index >= dataSourceKeys.size()) index = 0;
            dataSourceKey = dataSourceKeys.get(index++);
        }
        System.out.println(dataSourceKey);
        return dataSourceKey;
    }
}
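The round-robin branch of determineCurrentLookupKey can be exercised in isolation. This standalone sketch (the class name RoundRobinDemo is made up) reproduces the same index logic:

```java
import java.util.List;

public class RoundRobinDemo {

    private int index;

    // same selection logic as determineCurrentLookupKey's multi-source branch
    public <T> T next(List<T> candidates) {
        if (candidates.isEmpty()) throw new RuntimeException("no candidates");
        if (candidates.size() == 1) return candidates.get(0);
        if (index >= candidates.size()) index = 0;
        return candidates.get(index++);
    }

    public static void main(String[] args) {
        RoundRobinDemo rr = new RoundRobinDemo();
        List<String> slaves = List.of("slave1", "slave2");
        System.out.println(rr.next(slaves)); // slave1
        System.out.println(rr.next(slaves)); // slave2
        System.out.println(rr.next(slaves)); // slave1 again
    }
}
```

Note that the shared index field is not synchronized, so under concurrent requests the rotation is only approximate; the original class has the same property.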
package com.qf.mysql.replication.datasource;

import lombok.AllArgsConstructor;
import lombok.Data;

@Data
@AllArgsConstructor
public class DataSourceKey {
    private int id;
    private DataSourceType type;
}
package com.qf.mysql.replication.datasource;

/**
 * Data source type
 */
public enum DataSourceType {
    MASTER, SLAVE
}
package com.qf.mysql.replication.datasource;

// Holder for the data source type; uses a ThreadLocal to keep a
// thread-local variable for the current thread
public class DataSourceTypeHolder {

    private static ThreadLocal<DataSourceType> holder = new ThreadLocal<>();

    public static void set(DataSourceType type) {
        holder.set(type);
    }

    public static DataSourceType get() {
        DataSourceType type = holder.get();
        if (type == null) return DataSourceType.MASTER;
        return type;
    }

    public static void remove() {
        holder.remove();
    }
}
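Because the holder keeps the type in a ThreadLocal, the data source type set for one request thread never leaks into another. A minimal standalone demonstration of that property (the class name ThreadLocalDemo and its nested Type enum are stand-ins, independent of the project's classes):

```java
public class ThreadLocalDemo {

    public enum Type { MASTER, SLAVE }

    private static final ThreadLocal<Type> HOLDER = new ThreadLocal<>();

    public static void set(Type type) { HOLDER.set(type); }

    // default to MASTER when nothing was set, just like DataSourceTypeHolder
    public static Type get() {
        Type t = HOLDER.get();
        return t == null ? Type.MASTER : t;
    }

    public static void remove() { HOLDER.remove(); }

    public static void main(String[] args) throws InterruptedException {
        set(Type.SLAVE); // this thread switches to SLAVE

        Type[] seenElsewhere = new Type[1];
        Thread other = new Thread(() -> seenElsewhere[0] = get());
        other.start();
        other.join();

        System.out.println(get());            // SLAVE: set in this thread
        System.out.println(seenElsewhere[0]); // MASTER: the other thread never saw SLAVE
        remove(); // clean up, as the aspect does in its finally block
    }
}
```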
2. yml configuration

spring:
  datasource:
    master:
      driverClassName: com.mysql.cj.jdbc.Driver
      url: jdbc:mysql://121.199.174.183:3306/sync?serverTimezone=UTC
      username: root
      password: QFcd@2111
    slave1:
      driverClassName: com.mysql.cj.jdbc.Driver
      url: jdbc:mysql://localhost:3306/sync?serverTimezone=UTC
      username: root
      password: root
    slave2:
      driverClassName: com.mysql.cj.jdbc.Driver
      url: jdbc:mysql://localhost:3306/sync?serverTimezone=UTC
      username: root
      password: root
3. 数据源配置
package com.qf.mysql.replication.config;
import com.alibaba.druid.pool.DruidDataSource;
import com.qf.mysql.replication.datasource.DataSourceKey;
import com.qf.mysql.replication.datasource.DataSourceType;
import com.qf.mysql.replication.datasource.ErpRoutingDataSource;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import javax.sql.DataSource;
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;
@Configuration
public class DataSourceConfig {
@Bean
@ConfigurationProperties(prefix = "spring.datasource.master")
public DruidDataSource master(){
return new DruidDataSource();
}
@Bean
@ConfigurationProperties(prefix = "spring.datasource.slave1")
public DruidDataSource slave1(){
return new DruidDataSource();
}
@Bean
@ConfigurationProperties(prefix = "spring.datasource.slave2")
public DruidDataSource slave2(){
return new DruidDataSource();
}
@Bean
public ErpRoutingDataSource erpRoutingDataSource(@Qualifier("master")DataSource master,
@Qualifier("slave1")DataSource slave1,
@Qualifier("slave2")DataSource slave2){
ErpRoutingDataSource dataSource = new ErpRoutingDataSource();
Map<Object, Object> dataSourceMap = new HashMap<>();
dataSourceMap.put(new DataSourceKey(0, DataSourceType.MASTER), master);
dataSourceMap.put(new DataSourceKey(1, DataSourceType.SLAVE), slave1);
dataSourceMap.put(new DataSourceKey(2, DataSourceType.SLAVE), slave2);
//主从复制可选择的数据源
dataSource.setTargetDataSources(dataSourceMap);
//默认使用的数据源
dataSource.setDefaultTargetDataSource(master);
return dataSource;
}
}
4. Define the service-layer aspect

The service-layer aspect switches the data source type for each service call. A first version dispatches on two marker annotations, @Query and @Update:

package com.qf.mysql.replication.aop;

import com.qf.mysql.replication.anno.Query;
import com.qf.mysql.replication.anno.Update;
import com.qf.mysql.replication.datasource.DataSourceType;
import com.qf.mysql.replication.datasource.DataSourceTypeHolder;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Pointcut;
import org.aspectj.lang.reflect.MethodSignature;
import org.springframework.core.annotation.Order;
import org.springframework.stereotype.Component;
import java.lang.reflect.Method;

@Aspect
@Component
@Order(0)
public class DataSourceAspect {

    @Pointcut("execution(* com.qf.mysql.replication.service..*(..))")
    public void pointcut() {}

    @Around("pointcut()")
    public Object around(ProceedingJoinPoint pjp) throws Throwable {
        MethodSignature signature = (MethodSignature) pjp.getSignature();
        Method method = signature.getMethod();
        if (method.isAnnotationPresent(Query.class)) {
            DataSourceTypeHolder.set(DataSourceType.SLAVE);
        } else if (method.isAnnotationPresent(Update.class)) {
            DataSourceTypeHolder.set(DataSourceType.MASTER);
        }
        try {
            return pjp.proceed();
        } finally {
            DataSourceTypeHolder.remove();
        }
    }
}
A more flexible version replaces the two marker annotations with a single @DataSourceChoose annotation that carries the target type:

package com.qf.mysql.replication.aop;

import com.qf.mysql.replication.anno.DataSourceChoose;
import com.qf.mysql.replication.datasource.DataSourceTypeHolder;
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;
import org.aspectj.lang.annotation.Pointcut;
import org.aspectj.lang.reflect.MethodSignature;
import org.springframework.core.annotation.Order;
import org.springframework.stereotype.Component;
import java.lang.reflect.Method;

@Aspect
@Component
@Order(0)
public class DataSourceAspect {

    @Pointcut("execution(* com.qf.mysql.replication.service..*(..))")
    public void pointcut() {}

    @Around("pointcut()")
    public Object around(ProceedingJoinPoint pjp) throws Throwable {
        MethodSignature signature = (MethodSignature) pjp.getSignature();
        Method method = signature.getMethod();
        DataSourceChoose dataSourceChoose = method.getAnnotation(DataSourceChoose.class);
        if (dataSourceChoose != null) {
            DataSourceTypeHolder.set(dataSourceChoose.value());
        }
        try {
            return pjp.proceed();
        } finally {
            DataSourceTypeHolder.remove();
        }
    }
}
package com.qf.mysql.replication.anno;

import com.qf.mysql.replication.datasource.DataSourceType;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
public @interface DataSourceChoose {
    DataSourceType value();
}
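The aspect reads this annotation reflectively from the intercepted method. A minimal self-contained sketch of that lookup (using a local stand-in enum and annotation rather than the project's classes; AnnotationLookupDemo is a made-up name):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;

public class AnnotationLookupDemo {

    public enum Type { MASTER, SLAVE }

    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    public @interface Choose { Type value(); }

    public static class Service {
        @Choose(Type.SLAVE)
        public void search() {}

        public void plain() {}
    }

    // mirrors what the @Around advice does with MethodSignature#getMethod()
    public static Type resolve(Method method) {
        Choose choose = method.getAnnotation(Choose.class);
        return choose != null ? choose.value() : Type.MASTER; // fall back to the default
    }

    public static Type resolveByName(String methodName) {
        try {
            return resolve(Service.class.getMethod(methodName));
        } catch (NoSuchMethodException e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(resolveByName("search")); // SLAVE
        System.out.println(resolveByName("plain"));  // MASTER
    }
}
```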
5. Transaction management configuration

package com.qf.mysql.replication.config;

import org.apache.ibatis.session.SqlSessionFactory;
import org.mybatis.spring.SqlSessionFactoryBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.support.PathMatchingResourcePatternResolver;
import org.springframework.jdbc.datasource.DataSourceTransactionManager;
import org.springframework.transaction.PlatformTransactionManager;
import org.springframework.transaction.annotation.EnableTransactionManagement;
import javax.annotation.Resource;
import javax.sql.DataSource;

@Configuration
@EnableTransactionManagement
public class MybatisConfig {

    @Resource(name = "erpRoutingDataSource")
    private DataSource erpRoutingDataSource;

    // SqlSessionFactory configuration
    @Bean
    public SqlSessionFactory sqlSessionFactory() throws Exception {
        SqlSessionFactoryBean factoryBean = new SqlSessionFactoryBean();
        factoryBean.setDataSource(erpRoutingDataSource);
        factoryBean.setMapperLocations(new PathMatchingResourcePatternResolver()
                .getResources("classpath:mapper/*Mapper.xml"));
        factoryBean.setTypeAliasesPackage("com.qf.mysql.replication.pojo");
        return factoryBean.getObject();
    }

    // the transaction manager that handles transactions across the routed data sources
    @Bean
    public PlatformTransactionManager platformTransactionManager() {
        return new DataSourceTransactionManager(erpRoutingDataSource);
    }
}
6. Service code

package com.qf.mysql.replication.service;

import com.qf.mysql.replication.pojo.User;
import java.util.List;

public interface UserService {
    int addUser(User user);
    List<User> searchUsers();
}

package com.qf.mysql.replication.service.impl;

import com.qf.mysql.replication.anno.DataSourceChoose;
import com.qf.mysql.replication.datasource.DataSourceType;
import com.qf.mysql.replication.mapper.UserMapper;
import com.qf.mysql.replication.pojo.User;
import com.qf.mysql.replication.service.UserService;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;
import java.util.List;

@Service
public class UserServiceImpl implements UserService {

    @Autowired
    private UserMapper userMapper;

    @Transactional(rollbackFor = Exception.class)
    @DataSourceChoose(DataSourceType.MASTER)
    @Override
    public int addUser(User user) {
        // could first check whether the user already exists before saving
        return userMapper.addUser(user);
    }

    @Transactional(readOnly = true)
    @DataSourceChoose(DataSourceType.SLAVE)
    @Override
    public List<User> searchUsers() {
        return userMapper.searchUsers();
    }
}
package com.qf.mysql.replication.pojo;

import lombok.Data;

@Data
public class User {
    private String username;
    private String name;
    private String password;
    private int state;
    private long tenantId;
    private Long roleId;
}

package com.qf.mysql.replication.mapper;

import com.qf.mysql.replication.pojo.User;
import org.apache.ibatis.annotations.Mapper;
import org.apache.ibatis.annotations.Param;
import java.util.List;

@Mapper
public interface UserMapper {
    int addUser(@Param("user") User user);
    List<User> searchUsers();
}
<?xml version="1.0" encoding="UTF-8" ?>
<!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN"
        "http://mybatis.org/dtd/mybatis-3-mapper.dtd">
<!-- namespace = fully qualified name of the interface this mapper implements -->
<mapper namespace="com.qf.mysql.replication.mapper.UserMapper">
    <select id="searchUsers" resultType="user">
        SELECT
            username,
            name,
            `password`,
            state,
            tenant_id tenantId,
            role_id roleId
        FROM
            `user`
    </select>
    <insert id="addUser">
        INSERT INTO `user`(
            username,
            name,
            `password`,
            state,
            tenant_id,
            role_id
        ) VALUES (
            #{user.username},
            #{user.name},
            #{user.password},
            #{user.state},
            #{user.tenantId},
            #{user.roleId}
        )
    </insert>
</mapper>
7. Test cases

package com.qf.mysql.replication;

import com.qf.mysql.replication.pojo.User;
import com.qf.mysql.replication.service.UserService;
import org.junit.jupiter.api.Test;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.test.context.SpringBootTest;

@SpringBootTest
class MysqlReplicationApplicationTests {

    @Autowired
    private UserService userService;

    @Test
    void addUser() {
        User user = new User();
        user.setName("test");
        user.setUsername("test1");
        user.setPassword("123456");
        user.setRoleId(1L);
        user.setState(0);
        user.setTenantId(0);
        userService.addUser(user);
    }

    @Test
    void search() {
        userService.searchUsers().forEach(System.out::println);
        System.out.println("================================");
        userService.searchUsers().forEach(System.out::println);
        System.out.println("================================");
        userService.searchUsers().forEach(System.out::println);
    }
}
Section 6: canal

canal is a data-synchronization middleware developed by Alibaba. Its main use is parsing MySQL's incremental log (the binlog) to provide incremental data subscription and consumption.

1. How canal works

- canal implements the MySQL slave side of the replication protocol: it masquerades as a MySQL slave and sends a dump request to the MySQL master
- the MySQL master receives the dump request and starts pushing the binary log to the slave (i.e., to canal)
- canal parses the binary log objects (raw byte streams)
2. Setting up canal

2.1 Create a sync account on the master

CREATE USER 'canal'@'%' IDENTIFIED WITH mysql_native_password BY 'root';
GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'canal'@'%';
FLUSH PRIVILEGES;

2.2 Configure canal

Edit the conf/example/instance.properties file:

# slaveId can be any value not used by another replica
canal.instance.mysql.slaveId=1234567
canal.instance.dbUsername=canal
canal.instance.dbPassword=root

Start canal:

- run startup.bat in canal's bin directory
3. Using canal from Java

Add the dependencies:

<dependency>
    <groupId>com.alibaba</groupId>
    <artifactId>fastjson</artifactId>
    <version>2.0.12</version>
</dependency>
<dependency>
    <groupId>com.alibaba.otter</groupId>
    <artifactId>canal.client</artifactId>
    <version>1.1.2</version>
</dependency>
Write the test code:

package com.qf.rabbitmq;

import com.alibaba.fastjson.JSONObject;
import com.alibaba.otter.canal.client.CanalConnector;
import com.alibaba.otter.canal.client.CanalConnectors;
import com.alibaba.otter.canal.protocol.CanalEntry;
import com.alibaba.otter.canal.protocol.Message;
import com.google.protobuf.ByteString;
import com.google.protobuf.InvalidProtocolBufferException;
import java.net.InetSocketAddress;
import java.net.SocketAddress;
import java.util.List;
import java.util.Optional;
import java.util.concurrent.TimeUnit;

public class CanalTest {

    public static void main(String[] args) {
        // network address of the canal server
        SocketAddress address = new InetSocketAddress("127.0.0.1", 11111);
        // canal connector
        CanalConnector connector = CanalConnectors.newSingleConnector(address, "example", "canal", "root");
        connector.connect(); // establish the connection
        connector.subscribe("shopping-goods.*"); // subscribe to every table in the shopping-goods database
        // roll back to the last position that was not acked; like a bookmark,
        // the next fetch resumes from the last entry that was not acknowledged
        connector.rollback();
        while (true) {
            // try to fetch up to 100 entries; there may be fewer, or none at all
            Message message = connector.getWithoutAck(100, 10L, TimeUnit.SECONDS);
            List<CanalEntry.Entry> entries = message.getEntries(); // all fetched entries
            if (entries.isEmpty()) {
                System.out.println("No data changes yet");
            } else {
                processEntries(entries);
            }
            // acknowledge the message, marking these changes as processed
            connector.ack(message.getId());
        }
    }

    private static void processEntries(List<CanalEntry.Entry> entries) {
        for (CanalEntry.Entry entry : entries) {
            // table name of the entry
            String tableName = entry.getHeader().getTableName();
            // entry type
            CanalEntry.EntryType entryType = entry.getEntryType();
            if (entryType == CanalEntry.EntryType.ROWDATA) { // the entry is row data
                ByteString storeValue = entry.getStoreValue(); // serialized payload
                try {
                    // parse the payload into the changed-row information
                    CanalEntry.RowChange change = CanalEntry.RowChange.parseFrom(storeValue);
                    // the event type that triggered the change
                    CanalEntry.EventType eventType = change.getEventType();
                    // all changed rows
                    List<CanalEntry.RowData> rowDataList = change.getRowDatasList();
                    for (CanalEntry.RowData rowData : rowDataList) {
                        processRowData(eventType, rowData);
                    }
                } catch (InvalidProtocolBufferException e) {
                    e.printStackTrace();
                }
            }
        }
    }

    private static void processRowData(CanalEntry.EventType eventType, CanalEntry.RowData rowData) {
        List<CanalEntry.Column> beforeColumnsList = rowData.getBeforeColumnsList();
        if (eventType == CanalEntry.EventType.DELETE) { // delete event
            Optional<CanalEntry.Column> opt = beforeColumnsList.stream()
                    .filter(column -> "id".equals(column.getName())).findFirst();
            if (opt.isPresent()) {
                // extract the id
                Long id = Long.valueOf(opt.get().getValue());
                // delete the matching document from ES by id; decide whether it is a sku id or a spu id
                System.out.println("Delete the ES document with this id: " + id);
            }
        } else if (eventType == CanalEntry.EventType.UPDATE) { // update event
            // an update may be an audit-status change or a data change
            List<CanalEntry.Column> afterColumnsList = rowData.getAfterColumnsList();
            JSONObject json = new JSONObject();
            afterColumnsList.forEach(column -> json.put(column.getName(), column.getValue()));
            System.out.println("Observed changed data: " + json);
        }
    }
}