Official documentation: https://github.com/seata/seata-samples/blob/master/doc/quick-integration-with-spring-cloud.md
1. Importing the undo_log table structure
The core of Seata's AT mode is parsing the business SQL and turning it into undo_log records, so the database of every microservice that takes part in Seata distributed transactions needs this table. Import the following table structure into each microservice's database:
CREATE TABLE `undo_log` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `branch_id` bigint(20) NOT NULL,
  `xid` varchar(100) NOT NULL,
  `rollback_info` longblob NOT NULL,
  `log_status` int(11) NOT NULL,
  `log_created` datetime NOT NULL,
  `log_modified` datetime NOT NULL,
  `ext` varchar(100) DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `idx_unionkey` (`xid`, `branch_id`)
) ENGINE=InnoDB AUTO_INCREMENT=200 DEFAULT CHARSET=utf8;
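In AT mode, Seata writes a row into this table within the same local transaction as the business SQL (the before and after images of the affected rows go into rollback_info). On a global rollback the image is replayed to undo the change; on a global commit the row is simply deleted asynchronously.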
2. Setting up the Seata project
As the diagram above shows, not every microservice project needs distributed transactions, so we can create a standalone distributed-transaction module; whenever a microservice needs distributed transaction support, it simply depends on that module.
Set up a changgou-transaction-Seata module to provide Seata distributed transaction support.
(1) pom.xml dependencies
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <parent>
        <artifactId>changgou-parent</artifactId>
        <groupId>com.changgou</groupId>
        <version>1.0-SNAPSHOT</version>
    </parent>
    <modelVersion>4.0.0</modelVersion>

    <artifactId>changgou-transaction-Seata</artifactId>

    <dependencies>
        <dependency>
            <groupId>com.alibaba.cloud</groupId>
            <artifactId>spring-cloud-alibaba-seata</artifactId>
        </dependency>
        <dependency>
            <groupId>com.zaxxer</groupId>
            <artifactId>HikariCP</artifactId>
        </dependency>
    </dependencies>

    <dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>com.alibaba.cloud</groupId>
                <artifactId>spring-cloud-alibaba-dependencies</artifactId>
                <version>2.1.1.RELEASE</version>
                <type>pom</type>
                <scope>import</scope>
            </dependency>
        </dependencies>
    </dependencyManagement>
</project>
The data source used is HikariCP.
(2) Start seata-server
Download the matching version of Seata-Server from https://github.com/seata/seata/releases; I used 1.4.2. Run seata-server.bat under the bin directory (seata-server.sh on Linux/macOS); by default it listens on port 8091:
(3) Following the official documentation, provide file.conf and registry.conf:
file.conf:
transport {
  # tcp, udt, unix-domain-socket
  type = "TCP"
  # NIO, NATIVE
  server = "NIO"
  # enable heartbeat
  heartbeat = true
  # thread factory for netty
  thread-factory {
    boss-thread-prefix = "NettyBoss"
    worker-thread-prefix = "NettyServerNIOWorker"
    server-executor-thread-prefix = "NettyServerBizHandler"
    share-boss-worker = false
    client-selector-thread-prefix = "NettyClientSelector"
    client-selector-thread-size = 1
    client-worker-thread-prefix = "NettyClientWorkerThread"
    # netty boss thread size, will not be used for UDT
    boss-thread-size = 1
    # auto, default pin or 8
    worker-thread-size = 8
  }
  shutdown {
    # when destroying the server, wait this many seconds
    wait = 3
  }
  serialization = "seata"
  compressor = "none"
}

service {
  # vgroup -> rgroup
  vgroup_mapping.my_test_tx_group = "default"
  # only supports a single node
  default.grouplist = "127.0.0.1:8091"
  # degrade is currently not supported
  enableDegrade = false
  # disable
  disable = false
  # units ms, s, m, h, d stand for milliseconds, seconds, minutes, hours, days; default is permanent
  max.commit.retry.timeout = "-1"
  max.rollback.retry.timeout = "-1"
}

client {
  async.commit.buffer.limit = 10000
  lock {
    retry.internal = 10
    retry.times = 30
  }
  report.retry.count = 5
}

## transaction log store
store {
  ## store mode: file, db
  mode = "file"

  ## file store
  file {
    dir = "sessionStore"
    # branch session size; if exceeded, first try to compress the lock key, and throw if still exceeded
    max-branch-session-size = 16384
    # global session size; throw if exceeded
    max-global-session-size = 512
    # file buffer size; allocate a new buffer if exceeded
    file-write-buffer-cache-size = 16384
    # batch read size during recovery
    session.reload.read_size = 100
    # async, sync
    flush-disk-mode = async
  }

  ## database store
  db {
    ## an implementation of javax.sql.DataSource, such as DruidDataSource (druid) / BasicDataSource (dbcp), etc.
    datasource = "dbcp"
    ## mysql / oracle / h2 / oceanbase, etc.
    db-type = "mysql"
    url = "jdbc:mysql://127.0.0.1:3306/seata"
    user = "mysql"
    password = "mysql"
    min-conn = 1
    max-conn = 3
    global.table = "global_table"
    branch.table = "branch_table"
    lock-table = "lock_table"
    query-limit = 100
  }
}

lock {
  ## lock store mode: local, remote
  mode = "remote"
  local {
    ## store locks in the user's database
  }
  remote {
    ## store locks in the seata server
  }
}

recovery {
  committing-retry-delay = 30
  asyn-committing-retry-delay = 30
  rollbacking-retry-delay = 30
  timeout-retry-delay = 30
}

transaction {
  undo.data.validation = true
  undo.log.serialization = "jackson"
}

## metrics settings
metrics {
  enabled = false
  registry-type = "compact"
  # multiple exporters are comma separated
  exporter-list = "prometheus"
  exporter-prometheus-port = 9898
}
The transport block defines the Netty-related parameters; the TM and RM communicate with the Seata server over Netty.
Also note that the service block contains default.grouplist = "127.0.0.1:8091", which is the address seata-server starts on. If the server runs locally, nothing needs to change; if you later deploy it to a cloud server, change this to the corresponding address.
registry.conf:
registry {
  # file, nacos, eureka, redis, zk, consul, etcd3, sofa
  type = "file"

  nacos {
    serverAddr = "localhost"
    namespace = "public"
    cluster = "default"
  }
  eureka {
    serviceUrl = "http://localhost:8761/eureka"
    application = "default"
    weight = "1"
  }
  redis {
    serverAddr = "localhost:6379"
    db = "0"
  }
  zk {
    cluster = "default"
    serverAddr = "127.0.0.1:2181"
    session.timeout = 6000
    connect.timeout = 2000
  }
  consul {
    cluster = "default"
    serverAddr = "127.0.0.1:8500"
  }
  etcd3 {
    cluster = "default"
    serverAddr = "http://localhost:2379"
  }
  sofa {
    serverAddr = "127.0.0.1:9603"
    application = "default"
    region = "DEFAULT_ZONE"
    datacenter = "DefaultDataCenter"
    cluster = "default"
    group = "SEATA_GROUP"
    addressWaitTime = "3000"
  }
  file {
    name = "file.conf"
  }
}

config {
  # file, nacos, apollo, zk, consul, etcd3
  type = "file"

  nacos {
    serverAddr = "localhost"
    namespace = "public"
    cluster = "default"
  }
  consul {
    serverAddr = "127.0.0.1:8500"
  }
  apollo {
    app.id = "seata-server"
    apollo.meta = "http://192.168.1.204:8801"
  }
  zk {
    serverAddr = "127.0.0.1:2181"
    session.timeout = 6000
    connect.timeout = 2000
  }
  etcd3 {
    serverAddr = "http://localhost:2379"
  }
  file {
    name = "file.conf"
  }
}
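Both files go on the classpath; following the module layout above, that presumably means src/main/resources of changgou-transaction-Seata, so every microservice that depends on the module picks them up automatically.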
(4) Provide the configuration class:
import com.zaxxer.hikari.HikariDataSource;
import io.seata.rm.datasource.DataSourceProxy;
import io.seata.spring.annotation.GlobalTransactionScanner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;
import org.springframework.core.env.Environment;

@Configuration
public class SeataAutoConfiguration {

    /***
     * Creates the proxy data source.
     * The proxy binds undo_log writes to the local business transaction.
     * @param environment environment configuration
     * @return the proxied data source
     *
     * @Primary: when an interface has several implementations and none is
     * explicitly selected, @Primary marks this bean as the default.
     */
    @Primary
    @Bean
    public DataSourceProxy dataSource(Environment environment) {
        // Create the underlying data source
        HikariDataSource dataSource = new HikariDataSource();
        // JDBC connection URL
        dataSource.setJdbcUrl(environment.getProperty("spring.datasource.url"));
        // Database driver
        dataSource.setDriverClassName(environment.getProperty("spring.datasource.driver-class-name"));
        // Database username
        dataSource.setUsername(environment.getProperty("spring.datasource.username"));
        // Database password
        dataSource.setPassword(environment.getProperty("spring.datasource.password"));
        // Wrap the data source in a Seata proxy
        return new DataSourceProxy(dataSource);
    }

    /***
     * Global transaction scanner.
     * Detects methods annotated with @GlobalTransactional and controls the
     * transaction through AOP.
     * @param environment environment configuration
     * @return the global transaction scanner
     */
    @Bean
    public GlobalTransactionScanner globalTransactionScanner(Environment environment) {
        // Transaction group name
        String applicationName = environment.getProperty("spring.application.name");
        String groupName = environment.getProperty("seata.group.name");
        if (applicationName == null) {
            return new GlobalTransactionScanner(groupName == null ? "my_test_tx_group" : groupName);
        } else {
            return new GlobalTransactionScanner(applicationName, groupName == null ? "my_test_tx_group" : groupName);
        }
    }
}
As you can see, the dataSource method creates the proxy data source, and the globalTransactionScanner method creates a global transaction scanner that detects methods annotated with @GlobalTransactional and then controls the transaction through AOP.
(5) Create a META-INF directory under resources, create a spring.factories file in that directory, and add the following configuration:
org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
com.changgou.seata.SeataAutoConfiguration
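With this entry, Spring Boot auto-configuration loads SeataAutoConfiguration in every microservice that depends on the module, with no @Import or component scanning required.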
3. Testing the distributed transaction
(1) Add the changgou-transaction-Seata dependency to each microservice that needs distributed transactions: order, goods, and user.
<dependency>
    <groupId>com.changgou</groupId>
    <artifactId>changgou-transaction-Seata</artifactId>
    <version>1.0-SNAPSHOT</version>
</dependency>
(2) Add the @GlobalTransactional(name = "add") annotation to the add method of OrderServiceImpl in the order microservice:
Applying @GlobalTransactional to the initiating business method opens the global transaction; Seata then attaches the transaction's xid, via an interceptor, to each request sent to the other services, which is what ties the calls into one distributed transaction. A sketch of the annotated method is shown below.
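The original shows this step as a screenshot, so here is a minimal sketch of the annotated initiator; OrderService, Order, orderMapper, SkuFeign, and UserFeign are hypothetical names used for illustration, not the project's actual classes:
import io.seata.spring.annotation.GlobalTransactional;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

@Service
public class OrderServiceImpl implements OrderService {

    // Hypothetical Feign clients for the goods and user microservices
    @Autowired
    private SkuFeign skuFeign;
    @Autowired
    private UserFeign userFeign;

    @GlobalTransactional(name = "add")
    @Override
    public void add(Order order) {
        // Local branch: persist the order through the proxied data source
        orderMapper.insertSelective(order);
        // Remote branches: each downstream call carries the xid in its header
        skuFeign.decrCount(order.getUsername());
        userFeign.addPoints(order.getUsername(), 10);
        // An exception here or in any branch rolls everything back via undo_log
    }
}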
Several microservice calls are involved here, so first record the current database values, then run a test. If the console prints that the order was added and the stock was decreased, the order and goods transactions completed normally. Next we plant an exception in the add-points method: if adding points fails while the data in the goods microservice stays unchanged, the distributed transaction control works.
Modify the user microservice to throw an exception where user points are added; the code is as follows:
@Transactional(rollbackFor = Exception.class)
@Override
public void addPoints(String username, Integer points) {
    System.out.println("===========");
    int rows = userMapper.addPoints(username, points);
    System.out.println("rows affected: " + rows);
    // Deliberately throw an ArithmeticException (/ by zero) to test the rollback
    int error = 1 / 0;
}
First comment out the exception line and run the program. The console prints that the order was added, the stock was decreased, and the points were added, and checking the database confirms that the stock and points values did change. Now uncomment the exception and restart the user microservice.
First add an item to the shopping cart so that the corresponding record is created in Redis:
Then place an order:
You can see that the user service throws the / by zero exception, and in the database neither the stock nor the points changed. The rollback information also appears in the seata-server console:
The xid is the unique identifier of a global transaction, composed as ip:port:sequence (for example, 192.168.0.100:8091:2000042936).
This shows that the distributed transaction works.
4. Usage differences between Seata and Fescar
Whenever one microservice calls another, controlling the global transaction means the TM asks the TC to generate an XID, and every subsequent call to another microservice has to carry that XID along. So on each incoming request we can read the XID from the header and pass it on to the next microservice.
Back when Seata was still called Fescar, you had to write this plumbing yourself, binding an XID to each request thread, for example:
import java.io.IOException;

import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import com.alibaba.fescar.core.context.RootContext;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.util.StringUtils;
import org.springframework.web.filter.OncePerRequestFilter;

public class FescarRMRequestFilter extends OncePerRequestFilter {

    private static final Logger LOGGER = LoggerFactory.getLogger(FescarRMRequestFilter.class);

    /**
     * Binds the XID carried in the request header to the current thread.
     */
    @Override
    protected void doFilterInternal(HttpServletRequest request, HttpServletResponse response, FilterChain filterChain)
            throws ServletException, IOException {
        String currentXID = request.getHeader(FescarAutoConfiguration.FESCAR_XID);
        if (!StringUtils.isEmpty(currentXID)) {
            RootContext.bind(currentXID);
            LOGGER.info("Bound XID to current thread: " + currentXID);
        }
        try {
            filterChain.doFilter(request, response);
        } finally {
            // Unbind after the request so the (pooled) thread can be reused safely
            String unbindXID = RootContext.unbind();
            if (unbindXID != null) {
                LOGGER.info("Unbound XID from current thread: " + unbindXID);
                if (!unbindXID.equals(currentXID)) {
                    LOGGER.info("XID of the current thread changed during the request");
                }
            } else if (currentXID != null) {
                // An XID was bound on entry but is gone now: it changed mid-request
                LOGGER.info("XID of the current thread changed during the request");
            }
        }
    }
}
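The original does not show how this filter was registered; a minimal sketch using Spring Boot's standard FilterRegistrationBean (the bean method name and URL pattern are assumptions) might look like this:
import org.springframework.boot.web.servlet.FilterRegistrationBean;
import org.springframework.context.annotation.Bean;

// Hypothetical registration; place this in a @Configuration class
@Bean
public FilterRegistrationBean fescarXidFilterRegistration() {
    FilterRegistrationBean registration = new FilterRegistrationBean(new FescarRMRequestFilter());
    // Bind/unbind the XID around every incoming request
    registration.addUrlPatterns("/*");
    return registration;
}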
Create a FescarRestInterceptor that attaches the XID to every outgoing request to another microservice (it covers both Feign and RestTemplate clients):
import java.io.IOException;
import java.util.Collections;

import com.alibaba.fescar.core.context.RootContext;
import feign.RequestInterceptor;
import feign.RequestTemplate;
import org.springframework.http.HttpHeaders;
import org.springframework.http.HttpRequest;
import org.springframework.http.client.ClientHttpRequestExecution;
import org.springframework.http.client.ClientHttpRequestInterceptor;
import org.springframework.http.client.ClientHttpResponse;
import org.springframework.util.StringUtils;

public class FescarRestInterceptor implements RequestInterceptor, ClientHttpRequestInterceptor {

    // Feign side: add the XID header to every Feign request
    @Override
    public void apply(RequestTemplate requestTemplate) {
        String xid = RootContext.getXID();
        if (!StringUtils.isEmpty(xid)) {
            requestTemplate.header(FescarAutoConfiguration.FESCAR_XID, xid);
        }
    }

    // RestTemplate side: add the XID header to every RestTemplate request
    @Override
    public ClientHttpResponse intercept(HttpRequest request, byte[] body, ClientHttpRequestExecution execution)
            throws IOException {
        String xid = RootContext.getXID();
        if (!StringUtils.isEmpty(xid)) {
            HttpHeaders headers = request.getHeaders();
            headers.put(FescarAutoConfiguration.FESCAR_XID, Collections.singletonList(xid));
        }
        return execution.execute(request, body);
    }
}
还需要获取 XID,并将 XID 传递到下一个微服务:
/***
* 每次微服务和微服务之间相互调用
* 要想控制全局事务,每次 TM 都会请求 TC 生成一个 XID,每次执行下一个事务,也就是调用其他微服务的时候都需要将该 XID 传递过去
* 所以我们可以每次请求的时候,都获取头中的 XID,并将 XID 传递到下一个微服务
* @param restTemplates
* @return
*/
@ConditionalOnBean({RestTemplate.class})
@Bean
public Object addFescarInterceptor(Collection<RestTemplate> restTemplates){
restTemplates.stream()
.forEach(restTemplate -> {
List<ClientHttpRequestInterceptor> interceptors = restTemplate.getInterceptors();
if(interceptors != null){
interceptors.add(fescarRestInterceptor());
}
});
return new Object();
}
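With spring-cloud-alibaba-seata, none of this manual plumbing is needed anymore: the starter ships interceptors for Feign and RestTemplate, plus a Spring MVC handler interceptor, that propagate and bind the XID header automatically. That, in practice, is the main usage difference between today's Seata integration and the old Fescar setup.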