## Read/write splitting: relieve pressure on the database by letting the master handle transactional inserts, updates, and deletes while the slave handles queries.
**Step 1:** First deploy two MySQL databases and set up master-slave replication. For details, see my earlier article:
Implementing MySQL master-slave replication in a Docker container environment
**Step 2:** Add the following dependencies to the pom file:
<dependencies>
    <dependency>
        <groupId>com.alibaba</groupId>
        <artifactId>druid-spring-boot-starter</artifactId>
        <version>1.1.10</version>
    </dependency>
    <dependency>
        <groupId>org.mybatis.spring.boot</groupId>
        <artifactId>mybatis-spring-boot-starter</artifactId>
        <version>1.3.2</version>
    </dependency>
    <dependency>
        <groupId>tk.mybatis</groupId>
        <artifactId>mapper-spring-boot-starter</artifactId>
        <version>2.1.5</version>
    </dependency>
    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
        <version>8.0.16</version>
    </dependency>
    <!-- dependencies needed for the dynamic data source ### start -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-jdbc</artifactId>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-aop</artifactId>
        <scope>provided</scope>
    </dependency>
    <!-- dependencies needed for the dynamic data source ### end -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.projectlombok</groupId>
        <artifactId>lombok</artifactId>
        <optional>true</optional>
    </dependency>
    <dependency>
        <groupId>com.alibaba</groupId>
        <artifactId>fastjson</artifactId>
        <version>1.2.4</version>
    </dependency>
</dependencies>
**Step 3:** Add the data source configuration for the master and the slave:
spring:
  datasource:
    type: com.alibaba.druid.pool.DruidDataSource
    driver-class-name: com.mysql.cj.jdbc.Driver
    master:
      url: jdbc:mysql://127.0.0.1:3306/test?characterEncoding=utf-8&useSSL=false
      username: root
      password:
    slave:
      url: jdbc:mysql://127.0.0.1:3307/test?characterEncoding=utf-8&useSSL=false
      username: root
      password:
**Step 4:** Create an enum to stand in for the master/slave identifiers, which is easier to work with:
@Getter
public enum DynamicDataSourceEnum {
    MASTER("master"),
    SLAVE("slave");

    private final String dataSourceName;

    DynamicDataSourceEnum(String dataSourceName) {
        this.dataSourceName = dataSourceName;
    }
}
**Step 5:** The data source configuration class DataSourceConfig, which defines two data sources, masterDb and slaveDb:
@Configuration
@MapperScan(basePackages = "com.xjt.proxy.mapper", sqlSessionTemplateRef = "sqlTemplate")
public class DataSourceConfig {

    // master data source
    @Bean
    @ConfigurationProperties(prefix = "spring.datasource.master")
    public DataSource masterDb() {
        return DruidDataSourceBuilder.create().build();
    }

    // slave data source
    @Bean
    @ConditionalOnProperty(prefix = "spring.datasource", name = "slave", matchIfMissing = true)
    @ConfigurationProperties(prefix = "spring.datasource.slave")
    public DataSource slaveDb() {
        return DruidDataSourceBuilder.create().build();
    }

    // dynamic master/slave routing data source
    @Bean
    public DynamicDataSource dynamicDb(@Qualifier("masterDb") DataSource masterDataSource,
            @Autowired(required = false) @Qualifier("slaveDb") DataSource slaveDataSource) {
        DynamicDataSource dynamicDataSource = new DynamicDataSource();
        Map<Object, Object> targetDataSources = new HashMap<>();
        targetDataSources.put(DynamicDataSourceEnum.MASTER.getDataSourceName(), masterDataSource);
        // The master is the default. The slave DataSource is optional: only when
        // slaveDataSource is not null do we register it as a switchable target.
        if (slaveDataSource != null) {
            targetDataSources.put(DynamicDataSourceEnum.SLAVE.getDataSourceName(), slaveDataSource);
        }
        dynamicDataSource.setTargetDataSources(targetDataSources);
        dynamicDataSource.setDefaultTargetDataSource(masterDataSource);
        return dynamicDataSource;
    }

    @Bean
    public SqlSessionFactory sessionFactory(@Qualifier("dynamicDb") DataSource dynamicDataSource) throws Exception {
        SqlSessionFactoryBean bean = new SqlSessionFactoryBean();
        bean.setMapperLocations(
                new PathMatchingResourcePatternResolver().getResources("classpath*:mappers/*.xml"));
        bean.setDataSource(dynamicDataSource);
        return bean.getObject();
    }

    @Bean
    public SqlSessionTemplate sqlTemplate(@Qualifier("sessionFactory") SqlSessionFactory sqlSessionFactory) {
        return new SqlSessionTemplate(sqlSessionFactory);
    }

    @Bean(name = "dataSourceTx")
    public DataSourceTransactionManager dataSourceTx(@Qualifier("dynamicDb") DataSource dynamicDataSource) {
        DataSourceTransactionManager dataSourceTransactionManager = new DataSourceTransactionManager();
        dataSourceTransactionManager.setDataSource(dynamicDataSource);
        return dataSourceTransactionManager;
    }
}
**Step 6:** Setting and retrieving the routing key
The routing key lets us look up the matching data source. A ThreadLocal stores the data source key per thread, so each thread can retrieve its own key whenever needed:
public class DataSourceContextHolder {

    private static final ThreadLocal<String> DYNAMIC_DATASOURCE_CONTEXT = new ThreadLocal<>();

    public static void set(String datasourceType) {
        DYNAMIC_DATASOURCE_CONTEXT.set(datasourceType);
    }

    public static String get() {
        return DYNAMIC_DATASOURCE_CONTEXT.get();
    }

    public static void clear() {
        DYNAMIC_DATASOURCE_CONTEXT.remove();
    }
}
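Because the routing key lives in a ThreadLocal, each request thread carries its own key and threads never see each other's value. A minimal, self-contained sketch of that behavior (class and method names here are hypothetical, not part of the project):

```java
// Sketch: a ThreadLocal value is visible only to the thread that set it.
public class ThreadLocalDemo {
    static final ThreadLocal<String> CTX = new ThreadLocal<>();

    // Run a task on a fresh thread and return the key that thread observed.
    static String observeOn(String key) {
        final String[] seen = new String[1];
        Thread t = new Thread(() -> {
            CTX.set(key);        // set the routing key on this worker thread only
            seen[0] = CTX.get();
        });
        t.start();
        try {
            t.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return seen[0];
    }

    public static void main(String[] args) {
        System.out.println(observeOn("master")); // master
        System.out.println(observeOn("slave"));  // slave
        System.out.println(CTX.get());           // null: main thread never set a key
    }
}
```

One caveat this implies: the key does not propagate to child threads or thread-pool workers, so any async code must set the routing key again on the thread that actually runs the query.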
Retrieving the routing key:
public class DynamicDataSource extends AbstractRoutingDataSource {

    @Override
    protected Object determineCurrentLookupKey() {
        return DataSourceContextHolder.get();
    }
}
AbstractRoutingDataSource routes to a concrete data source based on a lookup key. Internally it maintains a set of target data sources, keeps the mapping from routing keys to targets, and exposes a key-based lookup that resolves the data source to use.
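The resolution step can be pictured as a plain map lookup with a default fallback. The following is a hypothetical miniature, not Spring's actual implementation; it only mirrors the key-to-target mapping and the default-target fallback this article configures:

```java
import java.util.HashMap;
import java.util.Map;

// Miniature of the lookup AbstractRoutingDataSource performs: find the current
// key in the target map, fall back to the default when no key was set.
public class LookupSketch {
    static final Map<String, String> TARGETS = new HashMap<>();
    static final String DEFAULT_TARGET = "masterDataSource"; // the article's default

    static {
        TARGETS.put("master", "masterDataSource");
        TARGETS.put("slave", "slaveDataSource");
    }

    static String resolve(String lookupKey) {
        String ds = lookupKey == null ? null : TARGETS.get(lookupKey);
        return ds != null ? ds : DEFAULT_TARGET;
    }

    public static void main(String[] args) {
        System.out.println(resolve("slave")); // slaveDataSource
        System.out.println(resolve(null));    // masterDataSource (default)
    }
}
```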
**Step 7:** A custom data source annotation
To make switching convenient, we write an annotation that carries the enum value of the data source to use; the default is the master, DynamicDataSourceEnum.MASTER:
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@Documented
public @interface DataSourceSelector {

    DynamicDataSourceEnum value() default DynamicDataSourceEnum.MASTER;

    boolean clear() default true;
}
**Step 8:** Switching the data source with AOP
Define an aspect that performs the data source switch for any method carrying the annotation:
@Slf4j
@Aspect
@Order(value = 1)
@Component
public class DataSourceContextAop {

    // package path of the custom annotation
    @Around("@annotation(com.jiahao.mall.face.DataSourceSelector)")
    public Object setDynamicDataSource(ProceedingJoinPoint pjp) throws Throwable {
        boolean clear = true;
        try {
            Method method = this.getMethod(pjp);
            DataSourceSelector dataSourceImport = method.getAnnotation(DataSourceSelector.class);
            if (dataSourceImport == null) {
                // With JDK interface proxies the signature method is the interface
                // method and may not carry the annotation; fall back to default routing.
                return pjp.proceed();
            }
            clear = dataSourceImport.clear();
            DataSourceContextHolder.set(dataSourceImport.value().getDataSourceName());
            log.info("======== data source switched to: {}", dataSourceImport.value().getDataSourceName());
            return pjp.proceed();
        } finally {
            if (clear) {
                DataSourceContextHolder.clear();
            }
        }
    }

    private Method getMethod(JoinPoint pjp) {
        MethodSignature signature = (MethodSignature) pjp.getSignature();
        return signature.getMethod();
    }
}
**Step 9:** With Spring Boot and AOP wired together, read/write splitting is in place: reads go to the slave and writes go to the master.
Here is a simple ServiceImpl with a query method and an insert method; the insert method carries the master annotation and the query method the slave annotation:
@Service
public class AntDashboardsServiceImpl implements AntDashboardsService {

    @Autowired
    private AntDashboardsMapper antDashboardsMapper;

    @Override
    public ResponseVo<AntDashboardVo> delete(Integer id) {
        return null;
    }

    @Override
    @DataSourceSelector(value = DynamicDataSourceEnum.MASTER)
    public ResponseVo<AntDashboardVo> insert(AntDashboards antDashboards) {
        antDashboardsMapper.insert(antDashboards);
        return ResponseVo.success();
    }

    @Override
    @DataSourceSelector(value = DynamicDataSourceEnum.SLAVE)
    public ResponseVo<PageInfo> selectAll() {
        List<AntDashboards> result = antDashboardsMapper.selectAll();
        List<AntDashboardVo> antDashboardVoList = result.stream()
                .map(e -> {
                    AntDashboardVo antDashboardVo = new AntDashboardVo();
                    BeanUtils.copyProperties(e, antDashboardVo);
                    return antDashboardVo;
                })
                .collect(Collectors.toList());
        PageInfo pageInfo = new PageInfo<>(result);
        pageInfo.setList(antDashboardVoList);
        return ResponseVo.success(pageInfo);
    }
}
Now let's test it:
@RestController
@RequestMapping("/dashboards")
public class AntDashboardsController {

    @Autowired
    private AntDashboardsService antDashboardsService;

    @GetMapping("/selectAll")
    public ResponseVo<PageInfo> selectAll() {
        ResponseVo<PageInfo> responseVo = antDashboardsService.selectAll();
        System.out.println("======= query endpoint =======");
        return responseVo;
    }

    @GetMapping("/insert")
    public ResponseVo insert() {
        AntDashboards dashboards = new AntDashboards();
        dashboards.setName("test");
        dashboards.setCreatedAt(new Date());
        dashboards.setUpdatedAt(new Date());
        ResponseVo<AntDashboardVo> responseVo = antDashboardsService.insert(dashboards);
        System.out.println("======= insert endpoint =======");
        return responseVo;
    }
}
1. Test the query endpoint:
2. Test the insert endpoint:
After running both, comparing the databases shows the data changed on both the master and the slave, which confirms that read/write splitting works.
Read/write splitting exists to relieve pressure on the database, but it must be built on data consistency: the master and slave must hold the same data. If a method involves any write logic, every database operation inside that method should go to the master.
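The rule above can be sketched without Spring at all. In this hypothetical, self-contained example (names are made up, not from the project), a write method pins the routing key to "master" for its whole body, so a read issued right after the write also resolves to the master rather than a possibly lagging slave:

```java
// Sketch of pinning a write method to the master for its entire body.
public class WritePinningSketch {
    static final ThreadLocal<String> CTX = new ThreadLocal<>();

    static String currentTarget() {
        String key = CTX.get();
        return key != null ? key : "master"; // master is the default target
    }

    static String[] updateAndReload() {
        CTX.set("master");                        // what @DataSourceSelector(MASTER) does
        try {
            String writeTarget = currentTarget(); // the write itself
            String readTarget = currentTarget();  // the follow-up read: also master
            return new String[] { writeTarget, readTarget };
        } finally {
            CTX.remove();                         // what clear() does in the aspect
        }
    }

    public static void main(String[] args) {
        String[] targets = updateAndReload();
        System.out.println(targets[0] + " " + targets[1]); // master master
    }
}
```

If the read inside such a method went to the slave instead, it could miss the row the method just wrote, because replication is asynchronous.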