Preface: According to Alibaba's《Java Development Manual》, splitting a database or table is only recommended once a single table exceeds 5 million rows or 2 GB in size.
We usually start with ShardingSphere's official introduction on its website, and review the documentation on database and table sharding there.

Following the official documentation, we will run a verification test here, covering only table sharding within a single database.

First, the pom file. The project uses Spring Boot 2.1 with a MySQL database and, for convenience, MyBatis-Plus:
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.1.17.RELEASE</version>
        <relativePath/> <!-- lookup parent from repository -->
    </parent>
    <groupId>com.example</groupId>
    <artifactId>springboot-jdbc</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>springboot-jdbc</name>
    <description>Demo project for Spring Boot</description>
    <properties>
        <java.version>1.8</java.version>
    </properties>
    <dependencies>
        <dependency>
            <groupId>org.apache.shardingsphere</groupId>
            <artifactId>sharding-jdbc-spring-boot-starter</artifactId>
            <version>4.0.0-RC1</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter</artifactId>
        </dependency>
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <scope>runtime</scope>
        </dependency>
        <dependency>
            <groupId>com.alibaba</groupId>
            <artifactId>druid-spring-boot-starter</artifactId>
            <version>1.1.20</version>
        </dependency>
        <dependency>
            <groupId>com.baomidou</groupId>
            <artifactId>mybatis-plus-boot-starter</artifactId>
            <version>3.3.1</version>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
        </dependency>
        <!-- Spring Boot web container -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
</project>
For local testing, three order tables (order_info_0, order_info_1, and order_info_2) were created in a single database.
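The controller shown later depends on an OrderInfo entity and an OrderInfoMapper that the post does not list. A minimal sketch of both is given below; the field names (id, amount, status) are inferred from the table and the controller code, so treat them as assumptions rather than the original author's classes.

```java
package com.example.springbootjdbc.entity;

import com.baomidou.mybatisplus.annotation.IdType;
import com.baomidou.mybatisplus.annotation.TableId;
import com.baomidou.mybatisplus.annotation.TableName;
import lombok.Data;

/**
 * Order entity mapped to the logical table order_info.
 * The id is generated by ShardingSphere's SNOWFLAKE key generator,
 * so MyBatis-Plus must not generate it itself: IdType.INPUT leaves
 * the column to the configured key generator.
 */
@Data
@TableName("order_info")
public class OrderInfo {
    @TableId(type = IdType.INPUT)
    private Long id;
    private String amount;
    private String status;
}
```

```java
package com.example.springbootjdbc.mapper;

import com.baomidou.mybatisplus.core.mapper.BaseMapper;
import com.example.springbootjdbc.entity.OrderInfo;
import org.apache.ibatis.annotations.Mapper;

/** Plain MyBatis-Plus mapper; BaseMapper supplies the insert() used in the controller. */
@Mapper
public interface OrderInfoMapper extends BaseMapper<OrderInfo> {
}
```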
Next, the Spring Boot configuration file, set up for sharding a single table within one database:
server.port=8088
# Allow overriding duplicate bean definitions
spring.main.allow-bean-definition-overriding=true
# Register the data source names
spring.shardingsphere.datasource.names=m1
# Data source details
spring.shardingsphere.datasource.m1.type=com.alibaba.druid.pool.DruidDataSource
spring.shardingsphere.datasource.m1.driver-class-name=com.mysql.cj.jdbc.Driver
spring.shardingsphere.datasource.m1.url=jdbc:mysql://127.0.0.1:3306/mytest?characterEncoding=utf-8
spring.shardingsphere.datasource.m1.username=root
spring.shardingsphere.datasource.m1.password=Root_1234
# Actual data nodes: physical tables order_info_0 through order_info_2
spring.shardingsphere.sharding.tables.order_info.actual-data-nodes=m1.order_info_$->{0..2}
# Key generation for order_info.id: snowflake algorithm
spring.shardingsphere.sharding.tables.order_info.key-generator.column=id
spring.shardingsphere.sharding.tables.order_info.key-generator.type=SNOWFLAKE
# Table sharding strategy: inline expression on the id column
spring.shardingsphere.sharding.tables.order_info.table-strategy.inline.sharding-column=id
spring.shardingsphere.sharding.tables.order_info.table-strategy.inline.algorithm-expression=order_info_$->{id % 3}
# Print the actual SQL to the log
spring.shardingsphere.props.sql.show=true
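To make the inline expression concrete: the routing rule above is simply a modulo on the generated id. The following dependency-free sketch mirrors that logic (the helper name `routeTable` is ours for illustration, not a ShardingSphere API):

```java
public class RoutingDemo {
    // Mirrors the inline expression order_info_$->{id % 3}:
    // the physical table is chosen by the id's remainder mod 3.
    static String routeTable(long id) {
        return "order_info_" + (id % 3);
    }

    public static void main(String[] args) {
        // Three consecutive snowflake-style ids land in three different tables.
        for (long id = 530000000000000000L; id < 530000000000000003L; id++) {
            System.out.println(id + " -> " + routeTable(id));
        }
    }
}
```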
Then write a simple controller to exercise it:
package com.example.springbootjdbc.controller;

import com.example.springbootjdbc.entity.OrderInfo;
import com.example.springbootjdbc.mapper.OrderInfoMapper;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

/**
 * @version V1.0
 * @author: hqk
 * @date: 2020/10/06 22:50
 * @Description: single-database table sharding test
 */
@RestController
public class TestController {

    @Autowired
    private OrderInfoMapper orderInfoMapper;

    @RequestMapping("test")
    public String gettest() {
        // Insert ten orders; ShardingSphere generates each snowflake id
        // and routes the row to order_info_{id % 3}.
        for (int i = 0; i < 10; i++) {
            OrderInfo orderInfo = new OrderInfo();
            orderInfo.setAmount("amount" + i);
            orderInfo.setStatus("status" + i);
            orderInfoMapper.insert(orderInfo);
        }
        return "0";
    }
}
Calling /test and checking the database shows the ten rows distributed across the three order_info tables, confirming the sharding configuration works.

The full source code has been pushed to Gitee.