The project needed database and table sharding, and I found the ShardingSphere plugin online. Integration turned out to be fairly bumpy, and today I finally got it working. I'll save the introduction to ShardingSphere for later and go straight to the code.
I. Add the dependencies (I'm using v4.0.0)
<!-- ShardingSphere integration -->
<dependency>
<groupId>org.apache.shardingsphere</groupId>
<artifactId>sharding-jdbc-core</artifactId>
<version>${sharding-sphere.version}</version>
</dependency>
<!-- for spring namespace -->
<dependency>
<groupId>org.apache.shardingsphere</groupId>
<artifactId>sharding-jdbc-spring-namespace</artifactId>
<version>${sharding-sphere.version}</version>
</dependency>
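The `${sharding-sphere.version}` placeholder needs a matching entry in the POM's `<properties>` section; with the version noted above, it would be:

```xml
<properties>
    <sharding-sphere.version>4.0.0</sharding-sphere.version>
</properties>
```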
II. Since the project is Spring MVC, the sharding rules are configured through the Spring namespace, as follows:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xmlns:context="http://www.springframework.org/schema/context"
xmlns:tx="http://www.springframework.org/schema/tx"
xmlns:sharding="http://shardingsphere.apache.org/schema/shardingsphere/sharding"
xmlns="http://www.springframework.org/schema/beans"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-4.0.xsd
http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-4.0.xsd
http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-4.0.xsd
http://shardingsphere.apache.org/schema/shardingsphere/sharding http://shardingsphere.apache.org/schema/shardingsphere/sharding/sharding.xsd">
<context:component-scan base-package="com.*.dal.dao"/>
<!-- datasource -->
<bean id="dataSource" class="com.alibaba.druid.pool.DruidDataSource"
init-method="init" destroy-method="close">
<property name="driverClassName" value="${jdbc.driver}"/>
<property name="url" value="${jdbc.url}"/>
<property name="username" value="${jdbc.username}"/>
<property name="password" value="${jdbc.password}"/>
<property name="filters" value="config,stat"/>
<property name="connectionInitSqls" value="set names utf8mb4;"/>
<property name="connectionProperties"
value="config.decrypt=${druid.config.decrypt};config.decrypt.key=${druid.config.decrypt.publicKey}"/>
<!-- maximum number of connections -->
<property name="maxActive" value="50"/>
<!-- initial pool size -->
<property name="initialSize" value="10"/>
<property name="maxWait" value="60000"/>
<property name="minIdle" value="5"/>
<property name="timeBetweenEvictionRunsMillis" value="60000"/>
<property name="minEvictableIdleTimeMillis" value="300000"/>
<property name="testWhileIdle" value="true"/>
<property name="testOnBorrow" value="true"/>
<property name="testOnReturn" value="true"/>
<!-- reclaim connections held longer than the timeout -->
<property name="removeAbandoned" value="true"/>
<!-- abandon timeout in seconds: 1800 s = 30 minutes -->
<property name="removeAbandonedTimeout" value="1800"/>
<!-- log an error when an abandoned connection is closed -->
<property name="logAbandoned" value="true"/>
</bean>
<!-- sql session factory -->
<bean id="sqlSessionFactory" class="org.mybatis.spring.SqlSessionFactoryBean">
<property name="dataSource" ref="shardingDataSource"/>
<property name="mapperLocations" >
<list>
<value>classpath*:mapper/*.xml</value>
<value>classpath*:mapper/**/*.xml</value>
</list>
</property>
<property name="configLocation" value="classpath:mybatis.xml"/>
<property name="plugins">
<bean class="com.github.pagehelper.PageInterceptor">
<property name="properties">
<value>
helperDialect=mysql
</value>
</property>
</bean>
</property>
</bean>
<bean class="org.mybatis.spring.mapper.MapperScannerConfigurer">
<property name="basePackage" value="com.*.dal.dao"/>
</bean>
<!-- transaction manager -->
<bean id="transactionManager"
class="org.springframework.jdbc.datasource.DataSourceTransactionManager">
<property name="dataSource" ref="shardingDataSource"/>
</bean>
<!-- sharding integration -->
<bean id="shardingTableAlgorithm" class="com.*.dal.sharding.ShardingTableAlgorithm" />
<bean id="shardingRangTableAlgorithm" class="com.*.dal.sharding.ShardingRangTableAlgorithm" />
<sharding:standard-strategy id="orderTableStrategy" sharding-column="pay_at" precise-algorithm-ref="shardingTableAlgorithm" range-algorithm-ref="shardingRangTableAlgorithm"/>
<sharding:data-source id="shardingDataSource">
<sharding:sharding-rule data-source-names="dataSource" >
<sharding:table-rules>
<sharding:table-rule logic-table="tb_order" actual-data-nodes="dataSource.tb_order,dataSource.tb_order20$->{20..30}0$->{1..9},dataSource.tb_order20$->{20..30}$->{10..12}" table-strategy-ref="orderTableStrategy" />
</sharding:table-rules>
</sharding:sharding-rule>
</sharding:data-source>
</beans>
I use StandardShardingStrategy. The standard sharding strategy supports the =, >, <, >=, <=, IN and BETWEEN AND operators in SQL. It supports only a single sharding column and provides two algorithm interfaces: PreciseShardingAlgorithm and RangeShardingAlgorithm. PreciseShardingAlgorithm is mandatory and handles = and IN. RangeShardingAlgorithm is optional and handles BETWEEN AND, >, <, >= and <=; if it is not configured, BETWEEN AND conditions in SQL are routed to all nodes.
I implemented both a precise and a range sharding algorithm. The code is as follows:
/**
 * Precise table-sharding algorithm: routes = and IN queries on pay_at
 * to the monthly table named tb_order + yyyyMM.
 * @author chuhx
 * @date 2020/3/6 11:48
 **/
@Component
public class ShardingTableAlgorithm implements PreciseShardingAlgorithm<Date> {
    @Override
    public String doSharding(Collection<String> tableNames, PreciseShardingValue<Date> preciseShardingValue) {
        // fall back to the logic table when there is no usable sharding value
        String tableName = preciseShardingValue.getLogicTableName();
        if (preciseShardingValue.getValue() == null) {
            return tableName;
        }
        String sdValue = DateUtils.format(preciseShardingValue.getValue(), "yyyyMMddHHmmss");
        if (StringUtils.isBlank(sdValue)) {
            return tableName;
        }
        // the first six digits are yyyyMM, e.g. tb_order202003
        return tableName + sdValue.substring(0, 6);
    }
}
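The suffix rule the precise algorithm applies (append the yyyyMM of pay_at to the logic table name) can be sketched standalone with java.time; `SuffixDemo` and `route` are illustrative names, not project code, and the project's own `DateUtils` helper is replaced by `DateTimeFormatter`:

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class SuffixDemo {
    // Mirror the precise algorithm: logic table name + yyyyMM of the pay date.
    static String route(LocalDate payAt) {
        return "tb_order" + payAt.format(DateTimeFormatter.ofPattern("yyyyMM"));
    }

    public static void main(String[] args) {
        // a pay_at of 2020-03-06 lands in tb_order202003
        System.out.println(route(LocalDate.of(2020, 3, 6)));
    }
}
```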
/**
 * Range table-sharding algorithm: routes BETWEEN AND, >, <, >=, <= queries
 * on pay_at to every monthly table the range covers.
 * @author chuhx
 * @date 2020/3/9 17:07
 **/
public class ShardingRangTableAlgorithm implements RangeShardingAlgorithm<Date> {
    @Override
    public Collection<String> doSharding(Collection<String> availableTargetNames, RangeShardingValue<Date> shardingValue) {
        String tableName = "tb_order";
        Range<Date> range = shardingValue.getValueRange();
        // an open-ended condition such as pay_at < ? has no lower bound: route to all tables
        if (!range.hasLowerBound()) {
            return availableTargetNames;
        }
        Date startDate = range.lowerEndpoint();
        // cap the upper bound at "now": tables for future months do not exist yet
        Date endDate = range.hasUpperBound() ? range.upperEndpoint() : new Date();
        if (endDate.after(new Date())) {
            endDate = new Date();
        }
        Set<String> dateSet = new HashSet<>();
        dateSet.add(tableName);
        // walk the range one day at a time and collect each month's table name
        for (; startDate.before(endDate); startDate = DateUtils.dateAdd(DateUtils.DATE_INTERVAL_DAY, startDate, 1)) {
            String sdValue = DateUtils.format(startDate, "yyyyMMddHHmmss");
            if (StringUtils.isNotBlank(sdValue) && sdValue.length() > 6) {
                dateSet.add(tableName + sdValue.substring(0, 6));
            }
        }
        return dateSet;
    }
}
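For intuition, the set of tables the range algorithm returns can be reproduced standalone by stepping month by month with java.time (the project code steps day by day over java.util.Date; this rewrite is illustrative only, and `RangeRouteDemo` is a hypothetical name):

```java
import java.time.YearMonth;
import java.util.LinkedHashSet;
import java.util.Set;

public class RangeRouteDemo {
    // Collect every monthly table between two pay_at months, inclusive.
    static Set<String> route(YearMonth from, YearMonth to) {
        Set<String> tables = new LinkedHashSet<>();
        for (YearMonth m = from; !m.isAfter(to); m = m.plusMonths(1)) {
            tables.add(String.format("tb_order%04d%02d", m.getYear(), m.getMonthValue()));
        }
        return tables;
    }

    public static void main(String[] args) {
        // a BETWEEN spanning Jan-Mar 2020 touches three monthly tables
        System.out.println(route(YearMonth.of(2020, 1), YearMonth.of(2020, 3)));
    }
}
```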
That is all the code; next, write a method to test it. Since the tables are split by pay time, a simple range query is enough.
/**
 * Query orders by pay time range.
 * @param payStartDate start of the pay_at range
 * @param payEndDate end of the pay_at range
 * @return matching orders
 */
List<Order> selectOrderListByPayAt1(@Param("payStartDate") Date payStartDate, @Param("payEndDate") Date payEndDate);
@ResponseBody
@RequestMapping(value = "/test", method = RequestMethod.GET)
public String queryOrdersByPayAt() {
    // query the last 30 days of orders by pay time
    Date endDate = new Date();
    Date startDate = DateUtils.dateAdd(DateUtils.DATE_INTERVAL_DAY, endDate, -30);
    List<Order> order = orderMapper.selectOrderListByPayAt1(startDate, endDate);
    return "orderInfo=" + JSON.toJSONString(order);
}
Run it, and everything works.
III. Pitfalls during integration
1. At first I did not specify the actual data nodes, and at runtime it kept failing with an error about two data sources. It took quite a while to track this down.
2. Because sharding is by time and range lookups are required, I also had to implement the range sharding algorithm.
3. A scheduled job can create the next month's table automatically.
4. Point the transaction manager and every other data source reference at shardingDataSource.
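The monthly table-creation job from pitfall 3 can be sketched as follows. This is a minimal sketch under assumptions: `CREATE TABLE ... LIKE` clones the base table's schema, `MonthlyTableJob` is a hypothetical name, and the scheduling (e.g. Spring `@Scheduled` or cron) and JDBC execution are omitted:

```java
import java.time.YearMonth;

public class MonthlyTableJob {
    // Build the DDL for next month's shard; run this near the end of each month.
    static String nextMonthDdl(YearMonth current) {
        YearMonth next = current.plusMonths(1);
        String table = String.format("tb_order%04d%02d", next.getYear(), next.getMonthValue());
        // clone the base table's structure so the new shard matches tb_order
        return "CREATE TABLE IF NOT EXISTS " + table + " LIKE tb_order";
    }

    public static void main(String[] args) {
        System.out.println(nextMonthDdl(YearMonth.of(2020, 3)));
    }
}
```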
IV. Conclusion
When adopting a third-party plugin, read the official documentation and source code carefully instead of grabbing at the first thing that seems to work. As the saying goes, sharpening the axe does not delay the chopping of wood. Other people's tools do save effort; the key is that they fit your case. I am recording this for future reference, to help both myself and others.