Source code: GitHub repository
1. Quick and dirty: decide the split once up front
At the design stage, estimate the peak data volume the project will reach, and size the split accordingly: 4*4, 8*8, 16*16, 32*32, and so on.
For example, splitting into 16*16 from the start gives 16*16 = 256 tables, which can all initially live in multiple database schemas on a single server (one schema can hold more than 16 tables). As the data grows, add servers and migrate whole databases onto them; only the datasource configuration needs to point at the new servers, with no code changes.
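The routing for such a fixed 16*16 layout can be sketched in a few lines. `StaticRouter`, `dbIndex`, and `tableIndex` are hypothetical names, and the modulo formula is one common choice rather than the project's definitive scheme:

```java
// Minimal sketch of static 16*16 routing, assuming ids are spread by modulo.
// StaticRouter/dbIndex/tableIndex are illustrative names, not project code.
public class StaticRouter {

    static final int DB_COUNT = 16;    // logical databases
    static final int TABLE_COUNT = 16; // tables per database

    // Which of the 16 logical databases holds this id (spread over 256 slots).
    static int dbIndex(long id) {
        return (int) (id % (DB_COUNT * TABLE_COUNT)) / TABLE_COUNT;
    }

    // Which table inside that database holds this id.
    static int tableIndex(long id) {
        return (int) (id % TABLE_COUNT);
    }

    public static void main(String[] args) {
        long id = 1234;
        // When databases migrate to new servers, only the db-to-server mapping
        // in the configuration changes; this formula stays fixed.
        System.out.println("db_" + dbIndex(id) + ".tb_" + tableIndex(id));
    }
}
```

Because the formula never changes, moving a database to a new server is purely a configuration edit, which is the whole appeal of this approach.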
2. Doubling expansion
For example, phase one splits into 4 tables with id mod 4 routing. When a single table grows past 5 million rows, double the split: create tb_4 through tb_7, manually migrate half of the rows out of tb_0, tb_1, tb_2, and tb_3 into tb_4, tb_5, tb_6, and tb_7, then change the sharding rule configuration to id mod 8. The same procedure extends to 16, 32, and so on.
Before the split (route by id mod 4):

| Table | Ids stored |
| ----- | ---------- |
| TB_0  | 4, 8, 12, 16, 20, 24, ... |
| TB_1  | 1, 5, 9, 13, 17, 21, 25, ... |
| TB_2  | 2, 6, 10, 14, 18, 22, 26, ... |
| TB_3  | 3, 7, 11, 15, 19, 23, 27, ... |

After the split (route by id mod 8):

| Table | Ids stored |
| ----- | ---------- |
| TB_0  | 8, 16, 24, 32, 40, 48, ... |
| TB_1  | 1, 9, 17, 25, 33, 41, 49, ... |
| TB_2  | 2, 10, 18, 26, 34, 42, 50, ... |
| TB_3  | 3, 11, 19, 27, 35, 43, 51, ... |
| TB_4  | 4, 12, 20, 28, 36, 44, 52, ... |
| TB_5  | 5, 13, 21, 29, 37, 45, 53, ... |
| TB_6  | 6, 14, 22, 30, 38, 46, 54, ... |
| TB_7  | 7, 15, 23, 31, 39, 47, 55, ... |
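The migration rule behind the tables above can be stated precisely: a row must move exactly when its mod-8 slot differs from its mod-4 slot, which selects half of each old table. A minimal sketch (class and method names are illustrative):

```java
// Sketch of the mod-4 -> mod-8 doubling step: a row moves exactly when
// id % 8 != id % 4, i.e. half of each old table's rows.
public class DoublingMigration {

    static String oldTable(long id) {
        return "TB_" + (id % 4);
    }

    static String newTable(long id) {
        return "TB_" + (id % 8);
    }

    static boolean mustMove(long id) {
        return id % 8 != id % 4; // true for ids whose id % 8 falls in 4..7
    }

    public static void main(String[] args) {
        for (long id = 1; id <= 8; id++) {
            System.out.printf("id=%d: %s -> %s (move=%b)%n",
                    id, oldTable(id), newTable(id), mustMove(id));
        }
    }
}
```

This is why only tb_4 through tb_7 receive data during the migration: a row either stays in its old slot or lands in old slot + 4.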
3. Dynamic expansion without downtime
3.1. Design idea
Adapted from: http://www.uml.org.cn/sjjm/201211212.asp
Three metadata tables, tb_sharding_group, tb_sharding, and tb_fragment, are added so that the instance's sharding rules can be changed dynamically. The design is as follows:
At any given time, only one tb_sharding_group row may have its write_able flag set to true (stored as 1 in the database), marking that group of sharding rules as the currently active one.
tb_sharding_group holds the overall rule set, tb_sharding holds the database-level rules, and tb_fragment holds the table-level rules.
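The sharding algorithms later in this section call getStartId(), getEndId(), getTableSuffix(), and getShard().getDbSuffix() on the cached entities. The classes below are a self-contained sketch inferred from those calls; the real project's field names and relationships may differ:

```java
// Assumed shape of a tb_fragment row and its parent tb_sharding row, inferred
// from the getters used by the sharding algorithms below. Illustrative only.
public class FragmentEntity {

    private final long startId;       // tb_fragment.start_id: exclusive lower bound
    private final long endId;         // tb_fragment.end_id: inclusive upper bound
    private final String tableSuffix; // physical table suffix, e.g. "0" -> tb_userinfo_0
    private final ShardEntity shard;  // owning tb_sharding row (database-level rule)

    public FragmentEntity(long startId, long endId, String tableSuffix, ShardEntity shard) {
        this.startId = startId;
        this.endId = endId;
        this.tableSuffix = tableSuffix;
        this.shard = shard;
    }

    public long getStartId() { return startId; }
    public long getEndId() { return endId; }
    public String getTableSuffix() { return tableSuffix; }
    public ShardEntity getShard() { return shard; }

    // Mirrors the (startId, endId] range check used by the algorithms below.
    public boolean covers(long id) {
        return id > startId && id <= endId;
    }

    public static class ShardEntity {
        private final String dbSuffix; // database suffix, e.g. "0" -> db0

        public ShardEntity(String dbSuffix) {
            this.dbSuffix = dbSuffix;
        }

        public String getDbSuffix() { return dbSuffix; }
    }
}
```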
3.2. Goal
Suppose project *** expects 30 million (3000W) user records in phase one, and two database servers are available with 4 GB and 8 GB of memory. Records 0 to 12 million (1200W) can then go to the 4 GB server, and 12 to 30 million (1200W-3000W) to the 8 GB server.
When the data volume approaches the 30-million capacity, add servers and insert the corresponding custom sharding rules into the metadata tables (tb_sharding_group, tb_sharding, tb_fragment). The demo here adds two more servers (two databases) covering 3000W-5000W and 5000W-7000W; when the next ceiling is reached, repeat the same procedure.
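The resulting range-to-database routing amounts to the following sketch (ids in units of 10,000, so 3000 stands for 30 million, matching the demo metadata; `RangeRouting` and `dbFor` are hypothetical names):

```java
// Sketch of the range routing described above; ids are in units of 10,000
// (3000 == 30 million), matching the demo data. Illustrative helper only.
public class RangeRouting {

    static String dbFor(long id) {
        if (id > 0 && id <= 1200) return "db0";    // 4 GB server, phase one
        if (id > 1200 && id <= 3000) return "db1"; // 8 GB server, phase one
        if (id > 3000 && id <= 5000) return "db2"; // added during expansion
        if (id > 5000 && id <= 7000) return "db3"; // added during expansion
        throw new IllegalArgumentException("no fragment covers id " + id);
    }

    public static void main(String[] args) {
        System.out.println(dbFor(800) + " " + dbFor(2500) + " " + dbFor(6500));
    }
}
```

In the real system this mapping is not hard-coded: it lives in the metadata tables and is loaded into Redis, so expansion only inserts new rows.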
The metadata rows for the demo look as follows (demo project: start_id, end_id, and similar fields use 3000 as shorthand for 3000W, i.e. 30 million):
3.3. Implementation
(1) Data source connection configuration
spring.shardingsphere.datasource.names=basic,db0,db1,db2,db3
# Basic (metadata) database connection
spring.shardingsphere.datasource.basic.type=com.alibaba.druid.pool.DruidDataSource
spring.shardingsphere.datasource.basic.driver-class-name=com.mysql.cj.jdbc.Driver
spring.shardingsphere.datasource.basic.url=jdbc:mysql://localhost:3306/srd_study_basic?serverTimezone=Asia/Shanghai&useSSL=false&useUnicode=true&characterEncoding=UTF-8
spring.shardingsphere.datasource.basic.username=root
spring.shardingsphere.datasource.basic.password=Sharding123!
spring.shardingsphere.datasource.basic.max-active=16
# Business database 1 connection
spring.shardingsphere.datasource.db0.type=com.alibaba.druid.pool.DruidDataSource
spring.shardingsphere.datasource.db0.driver-class-name=com.mysql.cj.jdbc.Driver
spring.shardingsphere.datasource.db0.url=jdbc:mysql://localhost:3306/srd_study_db0?serverTimezone=Asia/Shanghai&useSSL=false&useUnicode=true&characterEncoding=UTF-8
spring.shardingsphere.datasource.db0.username=root
spring.shardingsphere.datasource.db0.password=Sharding123!
spring.shardingsphere.datasource.db0.max-active=16
# Business database 2 connection
spring.shardingsphere.datasource.db1.type=com.alibaba.druid.pool.DruidDataSource
spring.shardingsphere.datasource.db1.driver-class-name=com.mysql.cj.jdbc.Driver
spring.shardingsphere.datasource.db1.url=jdbc:mysql://localhost:3306/srd_study_db1?serverTimezone=Asia/Shanghai&useSSL=false&useUnicode=true&characterEncoding=UTF-8
spring.shardingsphere.datasource.db1.username=root
spring.shardingsphere.datasource.db1.password=Sharding123!
spring.shardingsphere.datasource.db1.max-active=16
# Business database 3 connection
spring.shardingsphere.datasource.db2.type=com.alibaba.druid.pool.DruidDataSource
spring.shardingsphere.datasource.db2.driver-class-name=com.mysql.cj.jdbc.Driver
spring.shardingsphere.datasource.db2.url=jdbc:mysql://localhost:3306/srd_study_db2?serverTimezone=Asia/Shanghai&useSSL=false&useUnicode=true&characterEncoding=UTF-8
spring.shardingsphere.datasource.db2.username=root
spring.shardingsphere.datasource.db2.password=Sharding123!
spring.shardingsphere.datasource.db2.max-active=16
# Business database 4 connection
spring.shardingsphere.datasource.db3.type=com.alibaba.druid.pool.DruidDataSource
spring.shardingsphere.datasource.db3.driver-class-name=com.mysql.cj.jdbc.Driver
spring.shardingsphere.datasource.db3.url=jdbc:mysql://localhost:3306/srd_study_db3?serverTimezone=Asia/Shanghai&useSSL=false&useUnicode=true&characterEncoding=UTF-8
spring.shardingsphere.datasource.db3.username=root
spring.shardingsphere.datasource.db3.password=Sharding123!
spring.shardingsphere.datasource.db3.max-active=16
# Pin the metadata tables to the basic database (one node each)
spring.shardingsphere.sharding.tables.tb_sharding_group.actual-data-nodes=basic.tb_sharding_group
spring.shardingsphere.sharding.tables.tb_sharding.actual-data-nodes=basic.tb_sharding
spring.shardingsphere.sharding.tables.tb_fragment.actual-data-nodes=basic.tb_fragment
# Enable ShardingSphere SQL logging
spring.shardingsphere.props.sql.show=true
(2) Custom sharding configuration
# Map logical tables to their physical data nodes
spring.shardingsphere.sharding.tables.tb_userinfo.actual-data-nodes=db$->{0..3}.tb_userinfo_$->{0..3}
spring.shardingsphere.sharding.tables.tb_address.actual-data-nodes=db$->{0..3}.tb_address_$->{0..3}
# Custom database sharding (user table)
spring.shardingsphere.sharding.tables.tb_userinfo.database-strategy.standard.sharding-column=id
spring.shardingsphere.sharding.tables.tb_userinfo.database-strategy.standard.precise-algorithm-class-name=com.xiudoua.micro.study.config.DatabaseShardingAlgorithm
# Custom table sharding (user table)
spring.shardingsphere.sharding.tables.tb_userinfo.table-strategy.standard.sharding-column=id
spring.shardingsphere.sharding.tables.tb_userinfo.table-strategy.standard.precise-algorithm-class-name=com.xiudoua.micro.study.config.TableShardingAlgorithm
# Custom database sharding (address table)
spring.shardingsphere.sharding.tables.tb_address.database-strategy.standard.sharding-column=user_id
spring.shardingsphere.sharding.tables.tb_address.database-strategy.standard.precise-algorithm-class-name=com.xiudoua.micro.study.config.DatabaseShardingAlgorithm
# Custom table sharding (address table)
spring.shardingsphere.sharding.tables.tb_address.table-strategy.standard.sharding-column=user_id
spring.shardingsphere.sharding.tables.tb_address.table-strategy.standard.precise-algorithm-class-name=com.xiudoua.micro.study.config.TableShardingAlgorithm
(3) Load the active fragment data into Redis at startup
@Order(1)
@Component
public class LoadFragmentData implements CommandLineRunner {

    private static final Logger logger = LoggerFactory.getLogger(LoadFragmentData.class);

    @Autowired
    private IFragmentService fragmentService;

    @Autowired
    private RedisTemplate<String, String> redisTemplate;

    @Override
    public void run(String... args) throws Exception {
        List<FragmentEntity> fragmentList = fragmentService.findEffectiveList();
        if (fragmentList != null && !fragmentList.isEmpty()) {
            // Rebuild the cache from scratch so stale fragments are dropped
            redisTemplate.delete(CommonConstant.REDIS_KEY_EXPAND_HASH_DATA);
            Map<String, String> map = new HashMap<>();
            for (FragmentEntity fragment : fragmentList) {
                // Hash field "startId_endId" lets the algorithms match ranges without deserializing
                map.put(fragment.getStartId() + "_" + fragment.getEndId(), JsonUtil.objectToJson(fragment));
            }
            redisTemplate.opsForHash().putAll(CommonConstant.REDIS_KEY_EXPAND_HASH_DATA, map);
            logger.info("Loaded {} sharding fragment records into the cache!", fragmentList.size());
        }
    }
}
(4) Database sharding algorithm
public class DatabaseShardingAlgorithm implements PreciseShardingAlgorithm<Long> {

    private RedisTemplate<String, String> redisTemplate;

    @Override
    public String doSharding(Collection<String> availableTargetNames, PreciseShardingValue<Long> shardingValue) {
        return getDbSuffix(shardingValue.getValue());
    }

    /**
     * Look up the database suffix configured for the given sharding value.
     * @param value the sharding column value
     * @return the matching database suffix, or null if no fragment covers the value
     */
    @SuppressWarnings("unchecked")
    private String getDbSuffix(Long value) {
        String resKey = null;
        redisTemplate = ApplicationContextHolder.getBean("stringRedisTemplate", RedisTemplate.class);
        HashOperations<String, String, String> opsHash = redisTemplate.opsForHash();
        Set<String> fragmentKeySet = opsHash.keys(CommonConstant.REDIS_KEY_EXPAND_HASH_DATA);
        if (fragmentKeySet == null || fragmentKeySet.isEmpty()) {
            return null;
        }
        for (String fragmentKey : fragmentKeySet) {
            // Hash fields are "startId_endId"; a fragment covers the range (startId, endId]
            String[] idArr = fragmentKey.split("_");
            Long startId = Long.parseLong(idArr[0]);
            Long endId = Long.parseLong(idArr[1]);
            if (value > startId && value <= endId) {
                String fragmentStr = opsHash.get(CommonConstant.REDIS_KEY_EXPAND_HASH_DATA, fragmentKey);
                FragmentEntity fragment = JsonUtil.string2Obj(fragmentStr, FragmentEntity.class);
                resKey = fragment.getShard().getDbSuffix();
                break;
            }
        }
        return resKey;
    }
}
(5) Table sharding algorithm
public class TableShardingAlgorithm implements PreciseShardingAlgorithm<Long> {

    private RedisTemplate<String, String> redisTemplate;

    @Override
    public String doSharding(Collection<String> availableTargetNames, PreciseShardingValue<Long> shardingValue) {
        String shardTableName = null;
        String tableSuffix = getTableSuffix(shardingValue.getValue());
        for (String tableName : availableTargetNames) {
            if (tableName.contains(tableSuffix)) {
                shardTableName = tableName;
                break;
            }
        }
        return shardTableName;
    }

    /**
     * Look up the table suffix configured for the given sharding value.
     * @param value the sharding column value
     * @return the matching table suffix, or null if no fragment covers the value
     */
    @SuppressWarnings("unchecked")
    private String getTableSuffix(Long value) {
        String resKey = null;
        redisTemplate = ApplicationContextHolder.getBean("stringRedisTemplate", RedisTemplate.class);
        HashOperations<String, String, String> opsHash = redisTemplate.opsForHash();
        Set<String> fragmentKeySet = opsHash.keys(CommonConstant.REDIS_KEY_EXPAND_HASH_DATA);
        if (fragmentKeySet == null || fragmentKeySet.isEmpty()) {
            return null;
        }
        for (String fragmentKey : fragmentKeySet) {
            // Hash fields are "startId_endId"; a fragment covers the range (startId, endId]
            String[] idArr = fragmentKey.split("_");
            Long startId = Long.parseLong(idArr[0]);
            Long endId = Long.parseLong(idArr[1]);
            if (value > startId && value <= endId) {
                String fragmentStr = opsHash.get(CommonConstant.REDIS_KEY_EXPAND_HASH_DATA, fragmentKey);
                FragmentEntity fragment = JsonUtil.string2Obj(fragmentStr, FragmentEntity.class);
                resKey = fragment.getTableSuffix();
                break;
            }
        }
        return resKey;
    }
}
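One detail worth noting in doSharding: String.contains is sufficient while the suffixes are 0..3, but with ten or more tables the suffix "1" would also match "tb_userinfo_10". A stricter match can be sketched as follows (`SuffixMatch` is an illustrative name, not project code):

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.List;

// Stricter variant of the suffix match in doSharding: contains("1") could also
// match tb_userinfo_10 once there are ten or more tables, whereas matching on
// endsWith("_" + suffix) cannot. Illustrative sketch only.
public class SuffixMatch {

    static String pick(Collection<String> availableTargetNames, String tableSuffix) {
        for (String name : availableTargetNames) {
            if (name.endsWith("_" + tableSuffix)) {
                return name;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        List<String> names = Arrays.asList("tb_userinfo_10", "tb_userinfo_1");
        // contains("1") would have picked tb_userinfo_10 first here
        System.out.println(pick(names, "1")); // prints tb_userinfo_1
    }
}
```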
(6) Unit test
@RunWith(SpringRunner.class)
@SpringBootTest(classes = InitApp.class)
@FixMethodOrder(MethodSorters.NAME_ASCENDING)
public class ExpandTest {

    private static final Logger logger = LoggerFactory.getLogger(ExpandTest.class);

    @Autowired
    private IUserinfoService userinfoService;

    @Autowired
    private IAddressService addressService;

    @Autowired
    private RedisTemplate<String, String> redisTemplate;

    @Test
    public void testSave() {
        for (int i = 1; i < 4001; i++) {
            // Ids come from a Redis counter so they are globally unique across shards
            Long userId = redisTemplate.opsForValue().increment(IdGeneraterKey.REDIS_KEY_GENERATER_USERINFO, 1);
            UserinfoVO userinfo = new UserinfoVO("Tony" + i, "11111", "test@qq.com");
            userinfo.setId(userId);
            userId = userinfoService.save(userinfo);
            if (userId == null) {
                // Roll the counter back if the insert failed
                redisTemplate.opsForValue().decrement(IdGeneraterKey.REDIS_KEY_GENERATER_USERINFO, 1);
            } else {
                logger.info("=====Unit test inserted user data, user ID={}=====", userId);
            }
            // Each user gets one or two addresses
            int num = new Random().nextInt(2) + 1;
            List<AddressVO> addressList = new ArrayList<>();
            for (int j = 0; j < num; j++) {
                Long addressId = redisTemplate.opsForValue().increment(IdGeneraterKey.REDIS_KEY_GENERATER_ADDRESS, 1);
                AddressVO address = new AddressVO("Test address " + j, "155****1234");
                address.setId(addressId);
                address.setUserinfo(userinfo);
                addressList.add(address);
            }
            int successCount = addressService.batchSave(addressList);
            // Return unused ids for any addresses that failed to insert
            redisTemplate.opsForValue().decrement(IdGeneraterKey.REDIS_KEY_GENERATER_ADDRESS, addressList.size() - successCount);
            logger.info("=====Unit test inserted address data for user {}; {} rows succeeded=====", userId, successCount);
        }
    }
}