Preface: a business requirement at our company involved multiple data sources, and we chose Atomikos to guarantee transactional consistency across them. The configuration below was pieced together from resources found online; it only ensures the business logic works correctly — I have not studied the underlying principles in depth.
----What is XA?
XA is a distributed-transaction architecture (or protocol) proposed by the X/Open consortium. It mainly defines the interface between a (global) Transaction Manager and (local) Resource Managers. The XA interface is a bidirectional system interface that forms the communication bridge between the Transaction Manager and one or more Resource Managers. In other words, within a single XA transaction we can manage several resources transactionally — for example, one system accessing multiple databases, or accessing both a database and a resource such as a message broker — and have all of them either commit together or roll back together. XA is not a Java specification but a general one; most databases and many message-oriented middleware products support it today.
JTA is the specification, conforming to XA, intended for Java development. So when we say we use JTA to implement distributed transactions, we mean using the JTA specification to make multiple databases, message brokers, and other resources within a system behave transactionally.
JTA (Java Transaction API) is a J2EE programming-interface specification — the Java realization of the XA protocol. It mainly defines:
A transaction manager interface, javax.transaction.TransactionManager, covering operations such as beginning, committing, and rolling back a transaction.
A resource interface conforming to XA, javax.transaction.xa.XAResource: for a resource to participate in JTA transactions, it must implement this interface, including the methods related to two-phase commit.
An application that implements transactions through the JTA interface needs a container that implements JTA at runtime. Usually that is a J2EE application server such as JBoss or WebSphere, but there are also standalone frameworks that implement JTA — Atomikos and Bitronix both ship JTA implementations as plain jars — which lets us run JTA-based applications on servlet containers like Tomcat or Jetty.
As mentioned in the earlier distinction between local and external transactions, a JTA transaction is an external transaction and can span multiple resources. It drives two-phase commit precisely through the XAResource implemented by each resource. If you are curious, take a look at that interface: besides commit and rollback it also declares end(), forget(), isSameRM(), prepare(), and more. The method list alone hints at how complex implementing two-phase transactions is.
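To make the prepare/commit choreography concrete, here is a minimal, framework-free sketch of the two-phase commit flow that a JTA transaction manager drives through each resource. This is a toy illustration, not Atomikos internals; the `Resource` interface and all names are simplified stand-ins for javax.transaction.xa.XAResource.

```java
import java.util.List;

// Toy model of two-phase commit: phase 1 asks every resource to prepare
// (vote on whether it can commit); phase 2 commits only if all voted yes,
// otherwise every resource is rolled back.
public class TwoPhaseCommitSketch {

    // Stand-in for a resource manager; the real contract is
    // javax.transaction.xa.XAResource, which has many more methods.
    public interface Resource {
        boolean prepare();   // phase 1: "can you commit?"
        void commit();       // phase 2a: all voted yes
        void rollback();     // phase 2b: someone voted no
    }

    public static boolean runTransaction(List<Resource> resources) {
        // Phase 1: every resource must vote yes.
        for (Resource r : resources) {
            if (!r.prepare()) {
                // One "no" vote aborts the whole global transaction.
                resources.forEach(Resource::rollback);
                return false;
            }
        }
        // Phase 2: unanimous yes, so all resources commit.
        resources.forEach(Resource::commit);
        return true;
    }
}
```

The real protocol additionally has to survive crashes between the two phases, which is why Atomikos keeps a transaction log on disk (see the tmlog files discussed at the end of this article).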
1· Add the dependency
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-jta-atomikos</artifactId>
</dependency>
2· Data source configuration file
spring:
  datasource:
    # database access settings; use the Druid connection pool
    type: com.alibaba.druid.pool.DruidDataSource
    systemData:
      #type: com.alibaba.druid.pool.xa.DruidXADataSource
      #type: com.alibaba.druid.pool.DruidDataSource
      #driverClassName: com.mysql.cj.jdbc.Driver
      url: jdbc:mysql://192.168.13.xxx:3306/oa_salary_test?allowMultiQueries=true&useUnicode=true&characterEncoding=utf8&serverTimezone=GMT%2B8&useSSL=false&autoReconnect=true&failOverReadOnly=false
      username: root
      password: 123456
      initialSize: 5
      minIdle: 5
      maxActive: 20
      maxWait: 30000
      timeBetweenEvictionRunsMillis: 60000
      minEvictableIdleTimeMillis: 300000
      validationQuery: select 1
      validationQueryTimeout: 10000
      testWhileIdle: true
      testOnBorrow: false
      testOnReturn: false
      poolPreparedStatements: true
      maxPoolPreparedStatementPerConnectionSize: 20
      filters: stat,wall
      connectionProperties: druid.stat.mergeSql=true;druid.stat.slowSqlMillis=5000
    # dedicated database for the material module
    materialData:
      #driverClassName: com.mysql.cj.jdbc.Driver
      url: jdbc:mysql://192.168.13.xxx:3306/cloud_rd?allowMultiQueries=true&useUnicode=true&characterEncoding=utf8&serverTimezone=GMT%2B8&useSSL=false&autoReconnect=true&failOverReadOnly=false
      username: root
      password: 123456
      initialSize: 5
      minIdle: 5
      maxActive: 20
      maxWait: 30000
      timeBetweenEvictionRunsMillis: 60000
      minEvictableIdleTimeMillis: 300000
      validationQuery: select 1
      validationQueryTimeout: 10000
      testWhileIdle: true
      testOnBorrow: false
      testOnReturn: false
      poolPreparedStatements: true
      maxPoolPreparedStatementPerConnectionSize: 20
      filters: stat,wall
      connectionProperties: druid.stat.mergeSql=true;druid.stat.slowSqlMillis=5000
3· Data source configuration class
package com.dls.dynamicDataSourceConfig;

import com.alibaba.druid.filter.stat.StatFilter;
import com.alibaba.druid.support.http.StatViewServlet;
import com.alibaba.druid.support.http.WebStatFilter;
import com.alibaba.druid.wall.WallConfig;
import com.alibaba.druid.wall.WallFilter;
import com.atomikos.icatch.jta.UserTransactionImp;
import com.atomikos.icatch.jta.UserTransactionManager;
import java.util.Properties;
import javax.sql.DataSource;
import javax.transaction.UserTransaction;
import org.springframework.boot.jta.atomikos.AtomikosDataSourceBean;
import org.springframework.boot.web.servlet.FilterRegistrationBean;
import org.springframework.boot.web.servlet.ServletRegistrationBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.core.env.Environment;
import org.springframework.transaction.jta.JtaTransactionManager;

/**
 * Created with IntelliJ IDEA. Created by yutang 2021/3/26 10:43. Description: configures all data sources.
 */
@Configuration
public class DataSourceConfig {

    /**
     * Data source for the common (system) module.
     *
     * @param env the Spring environment
     * @return an XA-aware Atomikos data source
     */
    @Bean(name = "systemDataSource")
    @Primary
    public DataSource systemDataSource(Environment env) {
        AtomikosDataSourceBean ds = new AtomikosDataSourceBean();
        Properties prop = build(env, "spring.datasource.systemData.");
        ds.setXaDataSourceClassName("com.alibaba.druid.pool.xa.DruidXADataSource");
        ds.setUniqueResourceName("systemDb");
        ds.setPoolSize(5);
        ds.setXaProperties(prop);
        return ds;
    }

    /**
     * Data source for the material module.
     *
     * @param env the Spring environment
     * @return an XA-aware Atomikos data source
     */
    @Bean(name = "materialDataSource")
    public DataSource materialDataSource(Environment env) {
        AtomikosDataSourceBean ds = new AtomikosDataSourceBean();
        Properties prop = build(env, "spring.datasource.materialData.");
        ds.setXaDataSourceClassName("com.alibaba.druid.pool.xa.DruidXADataSource");
        ds.setUniqueResourceName("materialDb");
        ds.setPoolSize(5);
        ds.setXaProperties(prop);
        return ds;
    }

    /**
     * The single JTA transaction manager shared by all data sources.
     */
    @Bean
    @Primary // overrides the transaction manager defined in the common module
    public JtaTransactionManager jtaTransactionManager() {
        UserTransactionManager userTransactionManager = new UserTransactionManager();
        UserTransaction userTransaction = new UserTransactionImp();
        return new JtaTransactionManager(userTransaction, userTransactionManager);
    }

    /**
     * Shared helper: reads the pool settings under the given prefix into a Properties object.
     *
     * @param env    the environment
     * @param prefix the property prefix, e.g. "spring.datasource.systemData."
     * @return java.util.Properties
     */
    private Properties build(Environment env, String prefix) {
        Properties prop = new Properties();
        prop.put("url", env.getProperty(prefix + "url"));
        prop.put("username", env.getProperty(prefix + "username"));
        prop.put("password", env.getProperty(prefix + "password"));
        prop.put("initialSize", env.getProperty(prefix + "initialSize", Integer.class));
        prop.put("minIdle", env.getProperty(prefix + "minIdle", Integer.class));
        prop.put("maxActive", env.getProperty(prefix + "maxActive", Integer.class));
        prop.put("maxWait", env.getProperty(prefix + "maxWait", Integer.class));
        prop.put("timeBetweenEvictionRunsMillis", env.getProperty(prefix + "timeBetweenEvictionRunsMillis", Integer.class));
        prop.put("minEvictableIdleTimeMillis", env.getProperty(prefix + "minEvictableIdleTimeMillis", Integer.class));
        prop.put("validationQuery", env.getProperty(prefix + "validationQuery"));
        prop.put("validationQueryTimeout", env.getProperty(prefix + "validationQueryTimeout", Integer.class));
        prop.put("testWhileIdle", env.getProperty(prefix + "testWhileIdle", Boolean.class));
        prop.put("testOnBorrow", env.getProperty(prefix + "testOnBorrow", Boolean.class));
        prop.put("testOnReturn", env.getProperty(prefix + "testOnReturn", Boolean.class));
        prop.put("poolPreparedStatements", env.getProperty(prefix + "poolPreparedStatements", Boolean.class));
        prop.put("maxPoolPreparedStatementPerConnectionSize", env.getProperty(prefix + "maxPoolPreparedStatementPerConnectionSize", Integer.class));
        prop.put("filters", env.getProperty(prefix + "filters"));
        prop.put("connectionProperties", env.getProperty(prefix + "connectionProperties"));
        return prop;
    }

    /**
     * Registers the Druid monitoring console at /druid/*.
     */
    @Bean
    public ServletRegistrationBean druidServlet() {
        ServletRegistrationBean servletRegistrationBean = new ServletRegistrationBean(new StatViewServlet(), "/druid/*");
        // Uncomment the next two lines to require a login for the Druid console.
        //servletRegistrationBean.addInitParameter("loginUsername", "admin");
        //servletRegistrationBean.addInitParameter("loginPassword", "admin");
        return servletRegistrationBean;
    }

    /**
     * Registers the Druid web-stat filter for all URLs except static assets and the console itself.
     */
    @Bean
    public FilterRegistrationBean filterRegistrationBean() {
        FilterRegistrationBean filterRegistrationBean = new FilterRegistrationBean();
        filterRegistrationBean.setFilter(new WebStatFilter());
        filterRegistrationBean.addUrlPatterns("/*");
        filterRegistrationBean.addInitParameter("exclusions", "*.js,*.gif,*.jpg,*.png,*.css,*.ico,/druid/*");
        filterRegistrationBean.addInitParameter("profileEnable", "true");
        return filterRegistrationBean;
    }

    @Bean
    public StatFilter statFilter() {
        StatFilter statFilter = new StatFilter();
        statFilter.setLogSlowSql(true);    // log any statement whose execution time exceeds slowSqlMillis
        statFilter.setMergeSql(true);      // merge SQL that differs only in parameter values
        statFilter.setSlowSqlMillis(1000); // default is 3000 ms; treat anything over 1 s as slow
        return statFilter;
    }

    @Bean
    public WallFilter wallFilter() {
        WallFilter wallFilter = new WallFilter();
        // allow multiple SQL statements per execution
        WallConfig config = new WallConfig();
        config.setMultiStatementAllow(true);
        wallFilter.setConfig(config);
        return wallFilter;
    }
}
4· Per-data-source package configuration
package com.dls.dynamicDataSourceConfig;

import com.baomidou.mybatisplus.extension.plugins.PaginationInterceptor;
import com.baomidou.mybatisplus.extension.spring.MybatisSqlSessionFactoryBean;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import javax.sql.DataSource;
import org.apache.ibatis.plugin.Interceptor;
import org.apache.ibatis.session.SqlSessionFactory;
import org.mybatis.spring.SqlSessionTemplate;
import org.mybatis.spring.annotation.MapperScan;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.core.io.Resource;
import org.springframework.core.io.support.PathMatchingResourcePatternResolver;

/**
 * Created with IntelliJ IDEA. Created by yutang 2021/3/25 13:58. Description: data source for the common (system) module.
 */
@Configuration
@MapperScan(basePackages = {"com.dls.common.dao", "com.dls.job.dao",
        "com.dls.production.**.dao", "com.dls.sale.**.dao",
        "com.dls.sysm.android.dao", "com.dls.sysm.bankCard.dao",
        "com.dls.sysm.dataDictionary.dao", "com.dls.sysm.log.dao",
        "com.dls.sysm.logAndSafety.dao", "com.dls.sysm.permission.dao",
        "com.dls.sysm.productPrice.dao",
        "com.dls.sysm.pushMsg.dao"}, sqlSessionFactoryRef = "systemSqlSessionFactory")
public class SystemDataSourceConfig {

    private String[] mapperLocation = {"classpath*:com/dls/job/dao/xml/*.xml",
            "classpath*:com/dls/production/**/dao/xml/*.xml",
            "classpath*:com/dls/sale/**/dao/xml/*.xml",
            "classpath*:com/dls/sysm/android/dao/xml/*.xml",
            "classpath*:com/dls/sysm/bankCard/dao/xml/*.xml",
            "classpath*:com/dls/sysm/dataDictionary/dao/xml/*.xml",
            "classpath*:com/dls/sysm/log/dao/xml/*.xml",
            "classpath*:com/dls/sysm/logAndSafety/dao/xml/*.xml",
            "classpath*:com/dls/sysm/permission/dao/xml/*.xml",
            "classpath*:com/dls/sysm/productPrice/dao/xml/*.xml",
            "classpath*:com/dls/sysm/pushMsg/dao/xml/*.xml"};

    private String domainPackage = "com.dls.**.entity.**,com.dls.**.vo.**";

    @Autowired
    @Qualifier("systemDataSource")
    private DataSource systemDataSource;

    @Bean(name = "systemSqlSessionFactory")
    @Primary
    public SqlSessionFactory systemSqlSessionFactory(
            @Qualifier("systemPaginationInterceptor") PaginationInterceptor paginationInterceptor)
            throws Exception {
        // Note: this is the MyBatis-Plus factory, not MyBatis's own SqlSessionFactoryBean.
        MybatisSqlSessionFactoryBean bean = new MybatisSqlSessionFactoryBean();
        bean.setDataSource(systemDataSource);
        // Resolve the locations of the Mapper.xml files.
        PathMatchingResourcePatternResolver resolver = new PathMatchingResourcePatternResolver();
        List<Resource> resources = new ArrayList<>();
        for (String location : mapperLocation) {
            resources.addAll(Arrays.asList(resolver.getResources(location)));
        }
        bean.setMapperLocations(resources.toArray(new Resource[0]));
        //bean.getObject().getConfiguration().setMapUnderscoreToCamelCase(true);
        bean.setTypeAliasesPackage(domainPackage);
        bean.setPlugins(new Interceptor[]{paginationInterceptor});
        return bean.getObject();
    }

    /**
     * Pagination plugin.
     */
    @Bean("systemPaginationInterceptor")
    public PaginationInterceptor paginationInterceptor() {
        return new PaginationInterceptor();
    }

    @Bean(name = "systemSqlSessionTemplate")
    @Primary
    public SqlSessionTemplate systemSqlSessionTemplate(
            @Qualifier("systemSqlSessionFactory") SqlSessionFactory sqlSessionFactory) throws Exception {
        return new SqlSessionTemplate(sqlSessionFactory);
    }
}
Because this is an old project whose modules were never cleanly separated, the package scanning is messy.
The basePackages of @MapperScan cannot be fed from a runtime array variable — annotation attribute values must be compile-time constants — so every package has to be written inline. Likewise, the multiple mapperLocation patterns have to be resolved in a loop. There may be a better way to handle these two settings; for now this works.
package com.dls.dynamicDataSourceConfig;

import com.baomidou.mybatisplus.extension.plugins.PaginationInterceptor;
import com.baomidou.mybatisplus.extension.spring.MybatisSqlSessionFactoryBean;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import javax.sql.DataSource;
import org.apache.ibatis.plugin.Interceptor;
import org.apache.ibatis.session.SqlSessionFactory;
import org.mybatis.spring.SqlSessionTemplate;
import org.mybatis.spring.annotation.MapperScan;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.Resource;
import org.springframework.core.io.support.PathMatchingResourcePatternResolver;

/**
 * Created with IntelliJ IDEA. Created by yutang 2021/3/25 13:58. Description: data source for the material module.
 */
@Configuration
@MapperScan(basePackages = {"com.dls.material.dao", "com.dls.sysm.activity.dao",
        "com.dls.common.dao"}, sqlSessionFactoryRef = "materialSqlSessionFactory")
public class MaterialDataSourceConfig {

    private String[] mapperLocation = {"classpath*:com/dls/material/dao/xml/*.xml",
            "classpath*:com/dls/sysm/activity/dao/xml/*.xml",
            "classpath*:com/dls/common/dao/xml/*.xml"};

    //private static final String DOMAIN_PACKAGE = "com.dls.job.material.entity.**,com.dls.material.vo.**,"
    //        + "com.dls.common.entity.**,com.dls.sysm.activity.entity.**,com.dls.sysm.activity.vo.**";

    @Autowired
    @Qualifier("materialDataSource")
    private DataSource materialDataSource;

    @Bean(name = "materialSqlSessionFactory")
    public SqlSessionFactory materialSqlSessionFactory(
            @Qualifier("materialPaginationInterceptor") PaginationInterceptor paginationInterceptor)
            throws Exception {
        MybatisSqlSessionFactoryBean bean = new MybatisSqlSessionFactoryBean();
        bean.setDataSource(materialDataSource);
        // Resolve the locations of the Mapper.xml files.
        PathMatchingResourcePatternResolver resolver = new PathMatchingResourcePatternResolver();
        List<Resource> resources = new ArrayList<>();
        for (String location : mapperLocation) {
            resources.addAll(Arrays.asList(resolver.getResources(location)));
        }
        bean.setMapperLocations(resources.toArray(new Resource[0]));
        //bean.getObject().getConfiguration().setMapUnderscoreToCamelCase(true);
        //bean.setTypeAliasesPackage(DOMAIN_PACKAGE);
        bean.setPlugins(new Interceptor[]{paginationInterceptor});
        return bean.getObject();
    }

    /**
     * Pagination plugin.
     */
    @Bean("materialPaginationInterceptor")
    public PaginationInterceptor paginationInterceptor() {
        return new PaginationInterceptor();
    }

    @Bean(name = "materialSqlSessionTemplate")
    public SqlSessionTemplate materialSqlSessionTemplate(
            @Qualifier("materialSqlSessionFactory") SqlSessionFactory sqlSessionFactory)
            throws Exception {
        return new SqlSessionTemplate(sqlSessionFactory);
    }
}
That completes the configuration. Usage is the same as usual: annotate any method that needs a transaction with @Transactional.
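As a sketch of what that looks like (the service class, mapper interfaces, and method names below are hypothetical, not from the project): a single @Transactional method that writes through mappers bound to both SqlSessionFactorys will commit or roll back on both databases together, because both AtomikosDataSourceBeans enlist in the one JTA transaction.

```java
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class SalaryMaterialService {

    // Hypothetical MyBatis mappers: SalaryMapper would live under a package
    // scanned for systemSqlSessionFactory, MaterialMapper under one scanned
    // for materialSqlSessionFactory.
    public interface SalaryMapper { void insertIssueRecord(long employeeId); }
    public interface MaterialMapper { void decreaseStock(long materialId); }

    private final SalaryMapper salaryMapper;     // writes to the systemData database
    private final MaterialMapper materialMapper; // writes to the materialData database

    public SalaryMaterialService(SalaryMapper salaryMapper, MaterialMapper materialMapper) {
        this.salaryMapper = salaryMapper;
        this.materialMapper = materialMapper;
    }

    // One method, one JTA transaction, two XA data sources: if either write
    // throws, Atomikos rolls back both databases.
    @Transactional
    public void issueMaterial(long employeeId, long materialId) {
        materialMapper.decreaseStock(materialId);
        salaryMapper.insertIssueRecord(employeeId);
    }
}
```

The key point is that nothing datasource-specific appears here: the @Primary JtaTransactionManager configured earlier is what Spring picks up for @Transactional.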
Taming the log output:
Once configured, Atomikos keeps printing INFO-level logs to the console by default, which is annoying. After half a day of trying Atomikos-side settings found online without success, I solved it through Spring's own logging configuration instead.
Just add the following to the configuration file:
# log levels: TRACE < DEBUG < INFO < WARN < ERROR < FATAL
logging:
  level:
    com.atomikos: WARN
(Reposted) Related bugs:
Problem:
1· Running the Spring Boot + Atomikos project from IDEA, the console keeps printing:
com.atomikos.icatch.HeurHazardException: Heuristic Exception:
Fix:
Delete the tmlog.lck and tmlog782.log files under the project directory.
Problem:
2· If two or more Spring Boot projects each configure multiple data sources through Atomikos and run under the same Tomcat, they fail with:
com.atomikos.recovery.LogException: Log already in use? tmlog in D:\eclipse\transaction-logs
Cause:
By default Atomikos writes its transaction log to tomcat\transaction-logs\tmlog.lck and tomcat\transaction-logs\tmlog0.log
(when started from Eclipse, to D:\eclipse\transaction-logs\).
The projects therefore share the same log file: the first project to start locks it, and the later ones cannot write to it.
Possible fixes:
change the default log file name;
change the default log file path;
or disable transaction logging.
All of these map to properties on org.springframework.boot.jta.atomikos.AtomikosProperties.
Solution (pick any one of the three):
Add the corresponding line to application.properties (or whichever property file you use).
Change the log file name (ideally keep it consistent with the project name):
spring.jta.atomikos.properties.log-base-name=test
Change the log file path:
spring.jta.atomikos.properties.log-base-dir=./log/test1
Disable transaction logging (keep it enabled in only one project, disabled in the others):
spring.jta.atomikos.properties.enable-logging=false
And that is everything. The project's concurrency is low, so I have not looked into tuning yet. Feel free to leave a comment if anything is unclear.