Seata High-Availability Demo

Background

Official documentation: Seata docs

Official GitHub repository: Seata

Official samples repository: Seata-Samples

Version compatibility: see the recommended release-version dependency matrix

Nacos version: 2.1.0
Seata version: 1.5.1

Server setup

Installation

Download the 1.5.1 zip package and unzip it locally.

Local file mode

The server defaults to the local file mode, which is not covered here.

Nacos config center + Nacos registry + MySQL (high-availability mode)

Setting up Nacos itself is out of scope here.

Create the MySQL database for the Seata server

See the official docs: high-availability deployment
SQL script: Mysql.sql

-- -------------------------------- The script used when storeMode is 'db' --------------------------------
-- the table to store GlobalSession data
CREATE TABLE IF NOT EXISTS `global_table`
(
    `xid`                       VARCHAR(128) NOT NULL,
    `transaction_id`            BIGINT,
    `status`                    TINYINT      NOT NULL,
    `application_id`            VARCHAR(32),
    `transaction_service_group` VARCHAR(32),
    `transaction_name`          VARCHAR(128),
    `timeout`                   INT,
    `begin_time`                BIGINT,
    `application_data`          VARCHAR(2000),
    `gmt_create`                DATETIME,
    `gmt_modified`              DATETIME,
    PRIMARY KEY (`xid`),
    KEY `idx_status_gmt_modified` (`status` , `gmt_modified`),
    KEY `idx_transaction_id` (`transaction_id`)
) ENGINE = InnoDB
  DEFAULT CHARSET = utf8mb4;

-- the table to store BranchSession data
CREATE TABLE IF NOT EXISTS `branch_table`
(
    `branch_id`         BIGINT       NOT NULL,
    `xid`               VARCHAR(128) NOT NULL,
    `transaction_id`    BIGINT,
    `resource_group_id` VARCHAR(32),
    `resource_id`       VARCHAR(256),
    `branch_type`       VARCHAR(8),
    `status`            TINYINT,
    `client_id`         VARCHAR(64),
    `application_data`  VARCHAR(2000),
    `gmt_create`        DATETIME(6),
    `gmt_modified`      DATETIME(6),
    PRIMARY KEY (`branch_id`),
    KEY `idx_xid` (`xid`)
) ENGINE = InnoDB
  DEFAULT CHARSET = utf8mb4;

-- the table to store lock data
CREATE TABLE IF NOT EXISTS `lock_table`
(
    `row_key`        VARCHAR(128) NOT NULL,
    `xid`            VARCHAR(128),
    `transaction_id` BIGINT,
    `branch_id`      BIGINT       NOT NULL,
    `resource_id`    VARCHAR(256),
    `table_name`     VARCHAR(32),
    `pk`             VARCHAR(36),
    `status`         TINYINT      NOT NULL DEFAULT '0' COMMENT '0:locked ,1:rollbacking',
    `gmt_create`     DATETIME,
    `gmt_modified`   DATETIME,
    PRIMARY KEY (`row_key`),
    KEY `idx_status` (`status`),
    KEY `idx_branch_id` (`branch_id`),
    KEY `idx_xid_and_branch_id` (`xid` , `branch_id`)
) ENGINE = InnoDB
  DEFAULT CHARSET = utf8mb4;

CREATE TABLE IF NOT EXISTS `distributed_lock`
(
    `lock_key`       CHAR(20) NOT NULL,
    `lock_value`     VARCHAR(20) NOT NULL,
    `expire`         BIGINT,
    primary key (`lock_key`)
) ENGINE = InnoDB
  DEFAULT CHARSET = utf8mb4;

INSERT INTO `distributed_lock` (lock_key, lock_value, expire) VALUES ('AsyncCommitting', ' ', 0);
INSERT INTO `distributed_lock` (lock_key, lock_value, expire) VALUES ('RetryCommitting', ' ', 0);
INSERT INTO `distributed_lock` (lock_key, lock_value, expire) VALUES ('RetryRollbacking', ' ', 0);
INSERT INTO `distributed_lock` (lock_key, lock_value, expire) VALUES ('TxTimeoutCheck', ' ', 0);
Modify the Seata server configuration

Switch the config center to Nacos

  • Open the Nacos console and add the configuration described below

1. Create the seata-server-dev namespace
(screenshot)

2. Add a seataServer.yml configuration with the following content:

store:
  # support: file 、 db 、 redis
  mode: db
  session:
    mode: db
  lock:
    mode: db
  db:
    datasource: druid
    dbType: mysql
    driverClassName: com.mysql.jdbc.Driver
    url: jdbc:mysql://127.0.0.1:3306/seata?rewriteBatchedStatements=true
    user: root
    password: root
    minConn: 5
    maxConn: 100
    globalTable: global_table
    branchTable: branch_table
    lockTable: lock_table
    distributedLockTable: distributed_lock
    queryLimit: 100
    maxWait: 5000

(screenshot)

Note:
Configure these entries exactly as shown in the sample above. Seata's configuration loading does not adapt key naming styles for content stored in Nacos; for details, see the constants in the io.seata.common.ConfigurationKeys class in the Seata source.
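To illustrate why the exact key names matter, here is a minimal, hypothetical sketch (the class name and code are illustrative, not Seata's actual loader) of how nested YAML collapses into the flat, dot-separated keys that Seata looks up, such as store.db.driverClassName. A renamed key like driver-class-name would flatten to a name Seata never reads.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative only: flatten a nested config tree into dot-separated keys,
// mirroring the flat names Seata resolves (e.g. "store.db.driverClassName").
public class FlattenConfig {
    @SuppressWarnings("unchecked")
    public static Map<String, Object> flatten(String prefix, Map<String, Object> tree) {
        Map<String, Object> flat = new LinkedHashMap<>();
        for (Map.Entry<String, Object> e : tree.entrySet()) {
            String key = prefix.isEmpty() ? e.getKey() : prefix + "." + e.getKey();
            if (e.getValue() instanceof Map) {
                // descend into nested sections such as store.db
                flat.putAll(flatten(key, (Map<String, Object>) e.getValue()));
            } else {
                flat.put(key, e.getValue());
            }
        }
        return flat;
    }

    public static void main(String[] args) {
        Map<String, Object> db = new LinkedHashMap<>();
        db.put("driverClassName", "com.mysql.jdbc.Driver");
        db.put("queryLimit", 100);
        Map<String, Object> store = new LinkedHashMap<>();
        store.put("mode", "db");
        store.put("db", db);
        Map<String, Object> root = new LinkedHashMap<>();
        root.put("store", store);
        System.out.println(flatten("", root));
        // {store.mode=db, store.db.driverClassName=com.mysql.jdbc.Driver, store.db.queryLimit=100}
    }
}
```

A kebab-case key such as driver-class-name would produce store.db.driver-class-name, which does not match any constant in ConfigurationKeys, so the value would silently be ignored.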

  • Edit the seata\conf\application.yml configuration file
    1. Set seata.config.type to nacos
    2. Update the nacos section

The relevant configuration ends up as follows:

seata:
  config:
    # support: file 、 nacos 、 consul 、 apollo 、 zk  、 etcd3
    type: nacos
    nacos:
      server-addr: 127.0.0.1:8848
      namespace: seata-server-dev
      group: SEATA_GROUP
      username: nacos
      password: nacos
      ##if use MSE Nacos with auth, mutex with username/password attribute
      #access-key: ""
      #secret-key: ""
      data-id: seataServer.yml
Switch the registry to Nacos
  • Edit the seata\conf\application.yml configuration file
    1. Set seata.registry.type to nacos
    2. Update the nacos section

The relevant configuration ends up as follows:

seata:
  registry:
    # support: file 、 nacos 、 eureka 、 redis 、 zk  、 consul 、 etcd3 、 sofa
    type: nacos
    #preferred-networks: 30.240.*
    nacos:
      application: seata-server
      server-addr: 127.0.0.1:8848
      group: SEATA_GROUP
      namespace: seata-server-dev
      cluster: default
      username: nacos
      password: nacos
      ##if use MSE Nacos with auth, mutex with username/password attribute
      #access-key: ""
      #secret-key: ""
Start the Seata server

Double-click seata\bin\seata-server.bat to start the Seata server.
The console listens on port 7091 by default; the service port used for registration defaults to 8091.

Seata console:
(screenshot)

Nacos console:

  • Config listeners

(screenshot)

  • Service registration

(screenshots)
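Registration can also be confirmed without the console via Nacos' open API, which serves the instance list of a service. The sketch below only builds the request object; actually sending it (with HttpClient.send) assumes a Nacos server running on 127.0.0.1:8848 and that the namespace was created with the id seata-server-dev, as in this guide. The class name is illustrative.

```java
import java.net.URI;
import java.net.http.HttpRequest;

// Build a GET request against the Nacos open API instance-list endpoint
// to check that seata-server registered under SEATA_GROUP in the
// seata-server-dev namespace. Sending it requires a running Nacos.
public class NacosInstanceCheck {
    public static HttpRequest instanceListRequest(String serverAddr, String namespaceId) {
        String url = "http://" + serverAddr + "/nacos/v1/ns/instance/list"
                + "?serviceName=seata-server"
                + "&groupName=SEATA_GROUP"
                + "&namespaceId=" + namespaceId;
        return HttpRequest.newBuilder(URI.create(url)).GET().build();
    }

    public static void main(String[] args) {
        System.out.println(instanceListRequest("127.0.0.1:8848", "seata-server-dev").uri());
        // To actually send (needs a live Nacos):
        // java.net.http.HttpClient.newHttpClient()
        //     .send(instanceListRequest("127.0.0.1:8848", "seata-server-dev"),
        //           java.net.http.HttpResponse.BodyHandlers.ofString());
    }
}
```

A healthy response contains a hosts array with the Seata server's ip and port 8091.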

Run the client demo

Deploy the demo project

  • Clone the project locally: seata-samples

  • Import the project into IDEA and use the springboot-dubbo-seata module as the demo to test

  • Edit pom.xml: disable the dubbo module and enable the springboot-dubbo-seata module

    <modules>
<!--        <module>dubbo</module>-->
        <!-- <module>springboot</module> -->
        <!-- <module>nacos</module> -->
         <module>springboot-dubbo-seata</module>
        <!-- <module>tcc</module> -->
        <!-- <module>springcloud-jpa-seata</module> -->
        <!-- <module>nutzboot-dubbo-fescar</module> -->
        <!-- <module>ha</module> -->
        <!-- <module>springcloud-eureka-seata</module> -->
        <!-- <module>multiple-datasource</module> -->
        <!-- <module>springboot-mybatis</module> -->
        <!-- <module>springcloud-nacos-seata</module> -->
        <!-- <module>api</module> -->
        <!-- <module>springboot-shardingsphere-seata</module> -->
        <!-- <module>multiple-datasource-mybatis-plus</module> -->
        <!-- <module>saga</module> -->
        <!-- <module>spring-cloud-alibaba-samples</module> -->
        <!-- <module>seata-spring-boot-starter-samples</module> -->
        <!-- <module>springboot-dubbo-seata-zk</module> -->
        <!-- <module>seata-xa</module> -->
        <!-- <module>seata-samples-jit</module> -->
        <!-- <module>springcloud-seata-sharding-jdbc-mybatis-plus-samples</module> --> 
    </modules>

  • In springboot-dubbo-seata/pom.xml, change the Seata version to 1.5.1
<seata.version>1.5.1</seata.version>
  • Import springboot-dubbo-seata/sql/db_seata.sql into the database
/*
Navicat MySQL Data Transfer

Source Server         : account
Source Server Version : 50614
Source Host           : localhost:3306
Source Database       : db_gts_fescar

Target Server Type    : MYSQL
Target Server Version : 50614
File Encoding         : 65001

Date: 2019-01-26 10:23:10
*/

SET FOREIGN_KEY_CHECKS=0;

-- ----------------------------
-- Table structure for t_account
-- ----------------------------
DROP TABLE IF EXISTS `t_account`;
CREATE TABLE `t_account`
(
    `id`      int(11) NOT NULL AUTO_INCREMENT,
    `user_id` varchar(255) DEFAULT NULL,
    `amount`  double(14, 2) DEFAULT '0.00',
    PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_account
-- ----------------------------
INSERT INTO `t_account`
VALUES ('1', '1', '4000.00');

-- ----------------------------
-- Table structure for t_order
-- ----------------------------
DROP TABLE IF EXISTS `t_order`;
CREATE TABLE `t_order`
(
    `id`             int(11) NOT NULL AUTO_INCREMENT,
    `order_no`       varchar(255) DEFAULT NULL,
    `user_id`        varchar(255) DEFAULT NULL,
    `commodity_code` varchar(255) DEFAULT NULL,
    `count`          int(11) DEFAULT '0',
    `amount`         double(14, 2) DEFAULT '0.00',
    PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=64 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_order
-- ----------------------------

-- ----------------------------
-- Table structure for t_stock
-- ----------------------------
DROP TABLE IF EXISTS `t_stock`;
CREATE TABLE `t_stock`
(
    `id`             int(11) NOT NULL AUTO_INCREMENT,
    `commodity_code` varchar(255) DEFAULT NULL,
    `name`           varchar(255) DEFAULT NULL,
    `count`          int(11) DEFAULT '0',
    PRIMARY KEY (`id`),
    UNIQUE KEY `commodity_code` (`commodity_code`)
) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of t_stock
-- ----------------------------
INSERT INTO `t_stock`
VALUES ('1', 'C201901140001', '水杯', '1000');

-- ----------------------------
-- Table structure for undo_log
-- note: since 0.3.0+ this table has the unique index ux_undo_log
-- ----------------------------
DROP TABLE IF EXISTS `undo_log`;
CREATE TABLE `undo_log`
(
    `id`            bigint(20) NOT NULL AUTO_INCREMENT,
    `branch_id`     bigint(20) NOT NULL,
    `xid`           varchar(100) NOT NULL,
    `context`       varchar(128) NOT NULL,
    `rollback_info` longblob     NOT NULL,
    `log_status`    int(11) NOT NULL,
    `log_created`   datetime     NOT NULL,
    `log_modified`  datetime     NOT NULL,
    PRIMARY KEY (`id`),
    UNIQUE KEY `ux_undo_log` (`xid`,`branch_id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;

-- ----------------------------
-- Records of undo_log
-- ----------------------------
SET FOREIGN_KEY_CHECKS=1;

(screenshot)

Local file mode

The demo's default template uses the local file mode; skipped here.

Nacos + Seata high-availability mode

Configure the Dubbo registry and the Spring Cloud Nacos discovery server

Edit each sub-module's application.properties to point the Dubbo registry and the Spring Cloud Nacos discovery at Nacos, and to configure the data source.

  • samples-account module
server.port=8102
spring.application.name=dubbo-account-example
#====================================Dubbo config===============================================
dubbo.application.id=dubbo-account-example
dubbo.application.name=dubbo-account-example
dubbo.protocol.id=dubbo
dubbo.protocol.name=dubbo
dubbo.registry.id=dubbo-account-example-registry
dubbo.registry.address=nacos://127.0.0.1:8848
dubbo.protocol.port=20880
dubbo.application.qosEnable=false
#===================================registry config==========================================
#Nacos registry
spring.cloud.nacos.discovery.server-addr=127.0.0.1:8848
management.endpoints.web.exposure.include=*
#====================================mysql config============================================
spring.datasource.driver-class-name=com.mysql.jdbc.Driver
spring.datasource.url=jdbc:mysql://127.0.0.1:3306/seatatest?useSSL=false&useUnicode=true&characterEncoding=utf-8&allowMultiQueries=true
spring.datasource.username=root
spring.datasource.password=root
#=====================================mybatis config======================================
mybatis.mapper-locations=classpath*:/mapper/*.xml

  • samples-order module
server.port=8101
spring.application.name=dubbo-order-example
#====================================Dubbo config===============================================
dubbo.application.id=dubbo-order-example
dubbo.application.name=dubbo-order-example
dubbo.protocol.id=dubbo
dubbo.protocol.name=dubbo
dubbo.registry.id=dubbo-order-example-registry
dubbo.registry.address=nacos://127.0.0.1:8848
dubbo.protocol.port=20881
dubbo.application.qosEnable=false
#===================================registry config==========================================
#Nacos registry
spring.cloud.nacos.discovery.server-addr=127.0.0.1:8848
management.endpoints.web.exposure.include=*
#====================================mysql =============================================
spring.datasource.driver-class-name=com.mysql.jdbc.Driver
spring.datasource.url=jdbc:mysql://127.0.0.1:3306/seatatest?useSSL=false&useUnicode=true&characterEncoding=utf-8&allowMultiQueries=true
spring.datasource.username=root
spring.datasource.password=root
#=====================================mybatis config======================================
mybatis.mapper-locations=classpath*:/mapper/*.xml

  • samples-stock module
server.port=8100
spring.application.name=dubbo-stock-example
#====================================Dubbo config===============================================
dubbo.application.id=dubbo-stock-example
dubbo.application.name=dubbo-stock-example
dubbo.protocol.id=dubbo
dubbo.protocol.name=dubbo
dubbo.registry.id=dubbo-stock-example-registry
dubbo.registry.address=nacos://127.0.0.1:8848
dubbo.protocol.port=20882
dubbo.application.qosEnable=false
#===================================registry config==========================================
#Nacos registry
spring.cloud.nacos.discovery.server-addr=127.0.0.1:8848
management.endpoints.web.exposure.include=*
#====================================mysql config===========================================
spring.datasource.driver-class-name=com.mysql.jdbc.Driver
spring.datasource.url=jdbc:mysql://127.0.0.1:3306/seatatest?useSSL=false&useUnicode=true&characterEncoding=utf-8&allowMultiQueries=true
spring.datasource.username=root
spring.datasource.password=root
#=====================================mybatis config======================================
mybatis.mapper-locations=classpath*:/mapper/*.xml

  • samples-business module
server.port=8104
spring.application.name=dubbo-business-example
#============================dubbo config==============================================
dubbo.application.id=dubbo-business-example
dubbo.application.name=dubbo-business-example
dubbo.protocol.id=dubbo
dubbo.protocol.name=dubbo
dubbo.protocol.port=10001
dubbo.registry.id=dubbo-business-example-registry
dubbo.registry.address=nacos://127.0.0.1:8848
dubbo.provider.version=1.0.0
dubbo.application.qosEnable=false
#==================================nacos==============================================
spring.cloud.nacos.discovery.server-addr=127.0.0.1:8848
management.endpoints.web.exposure.include=*

Modify each module's SeataAutoConfig.java, changing the default transaction group to default_tx_group

From Seata 1.5.1 on, the default transaction service group is default_tx_group rather than the earlier my_test_tx_group, so the group name passed into GlobalTransactionScanner at initialization must be updated accordingly.

The Nacos-side configuration also defaults to default_tx_group.
(screenshot)

Relevant code:

    @Bean
    public GlobalTransactionScanner globalTransactionScanner() {
        return new GlobalTransactionScanner("order-gts-seata-example", "default_tx_group");
    }
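The group name passed here is not used directly to find the server: Seata's clients resolve the transaction service group to a registry cluster through a config key of the form service.vgroupMapping.&lt;group&gt; (value "default" in this setup, matching the server's cluster). The helper class below is only an illustrative sketch of that key construction.

```java
// Illustrative sketch: the config key through which Seata clients map a
// transaction service group to a registry cluster name. The value stored
// under this key ("default" here) must match the cluster that the Seata
// server registered with in Nacos.
public class TxGroupMapping {
    public static String mappingKey(String txServiceGroup) {
        return "service.vgroupMapping." + txServiceGroup;
    }

    public static void main(String[] args) {
        System.out.println(mappingKey("default_tx_group"));
        // service.vgroupMapping.default_tx_group
    }
}
```

If this mapping is missing from the uploaded client configuration, the client cannot locate any TC instance even though registration looks fine in the Nacos console.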
Modify each module's registry.conf file to configure Nacos as both the registry and the config center
registry {
  # file 、nacos 、eureka、redis、zk、consul、etcd3、sofa
  type = "nacos"

  nacos {
      application = "seata-server"
      serverAddr = "127.0.0.1:8848"
      group = "SEATA_GROUP"
      cluster = "default"
      namespace = "seata-server-dev"
      username = "nacos"
      password = "nacos"
  }
  
}

config {
  # file、nacos 、apollo、zk、consul、etcd3、springCloudConfig
  type = "nacos"

  nacos {
      serverAddr = "127.0.0.1:8848"
      namespace = "seata-client-dev"
      group = "SEATA_GROUP"
      username = "nacos"
      password = "nacos"
  }
  
}

Upload the Seata client configuration to Nacos

See the appendix: uploading the Seata client configuration to Nacos

Once the upload succeeds, the file.conf file in each module can be deleted.

Start the demo

Start each module in turn. If a module fails to start, rebuild the project and start it again; in practice the demo itself starts up normally.

Testing

Use any tool that can send HTTP requests to issue the following request:

POST http://localhost:8104/business/dubbo/buy

{
  "userId": "1",
  "commodityCode": "C201901140001",
  "name": "水杯",
  "count": 1,
  "amount": 100
}
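For reference, the same request can be issued from plain Java with the JDK 11+ HttpClient API. The sketch below only constructs the request object; actually sending it assumes the demo modules (plus Seata and Nacos) are running locally. The class name is illustrative.

```java
import java.net.URI;
import java.net.http.HttpRequest;

// Build the demo's "buy" request against the samples-business module.
public class BuyRequest {
    public static HttpRequest build() {
        String json = "{\"userId\":\"1\",\"commodityCode\":\"C201901140001\","
                + "\"name\":\"水杯\",\"count\":1,\"amount\":100}";
        return HttpRequest.newBuilder(URI.create("http://localhost:8104/business/dubbo/buy"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(json))
                .build();
    }

    public static void main(String[] args) {
        System.out.println(build().method() + " " + build().uri());
        // To actually send (needs the running demo):
        // java.net.http.HttpClient.newHttpClient()
        //     .send(build(), java.net.http.HttpResponse.BodyHandlers.ofString());
    }
}
```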

(screenshot)

Test a normal commit

With no code changes, sending the request produces the normal commit logs below.

  • samples-business module
2022-07-11 17:33:52.129  INFO 33772 --- [nio-8104-exec-8] i.s.s.i.c.controller.BusinessController  : 请求参数:BusinessDTO(userId=1, commodityCode=C201901140001, name=水杯, count=1, amount=100)
2022-07-11 17:33:52.168  INFO 33772 --- [nio-8104-exec-8] i.seata.tm.api.DefaultGlobalTransaction  : Begin new global transaction [188.188.188.11:8091:6980862426907865301]
开始全局事务,XID = 188.188.188.11:8091:6980862426907865301
2022-07-11 17:33:53.036  INFO 33772 --- [nio-8104-exec-8] i.seata.tm.api.DefaultGlobalTransaction  : Suspending current transaction, xid = 188.188.188.11:8091:6980862426907865301
2022-07-11 17:33:53.036  INFO 33772 --- [nio-8104-exec-8] i.seata.tm.api.DefaultGlobalTransaction  : [188.188.188.11:8091:6980862426907865301] commit status: Committed
2022-07-11 17:34:16.347  INFO 33772 --- [eoutChecker_2_1] i.s.c.r.netty.NettyClientChannelManager  : will connect to 188.188.188.11:8091
2022-07-11 17:34:16.348  INFO 33772 --- [eoutChecker_2_1] i.s.core.rpc.netty.NettyPoolableFactory  : NettyPool create channel to transactionRole:RMROLE,address:188.188.188.11:8091,msg:< RegisterRMRequest{resourceIds='null', applicationId='dubbo-gts-seata-example', transactionServiceGroup='default_tx_group'} >
2022-07-11 17:34:16.355  INFO 33772 --- [eoutChecker_2_1] i.s.c.rpc.netty.RmNettyRemotingClient    : register RM success. client version:1.5.1, server version:1.5.1,channel:[id: 0xc4082ce7, L:/188.188.188.11:51803 - R:/188.188.188.11:8091]
2022-07-11 17:34:16.356  INFO 33772 --- [eoutChecker_2_1] i.s.core.rpc.netty.NettyPoolableFactory  : register success, cost 5 ms, version:1.5.1,role:RMROLE,channel:[id: 0xc4082ce7, L:/188.188.188.11:51803 - R:/188.188.188.11:8091]
  • samples-stock module
全局事务id :188.188.188.11:8091:6980862426907865301
2022-07-11 17:33:53.794  INFO 27916 --- [ch_RMROLE_1_7_8] i.s.c.r.p.c.RmBranchCommitProcessor      : rm client handle branch commit process:xid=188.188.188.11:8091:6980862426907865301,branchId=6980862426907865303,branchType=AT,resourceId=jdbc:mysql://127.0.0.1:3306/seatatest,applicationData={"autoCommit":false}
2022-07-11 17:33:53.794  INFO 27916 --- [ch_RMROLE_1_7_8] io.seata.rm.AbstractRMHandler            : Branch committing: 188.188.188.11:8091:6980862426907865301 6980862426907865303 jdbc:mysql://127.0.0.1:3306/seatatest {"autoCommit":false}
2022-07-11 17:33:53.794  INFO 27916 --- [ch_RMROLE_1_7_8] io.seata.rm.AbstractRMHandler            : Branch commit result: PhaseTwo_Committed
  • samples-account module
全局事务id :188.188.188.11:8091:6980862426907865301
2022-07-11 17:33:53.825  INFO 9940 --- [ch_RMROLE_1_1_8] i.s.c.r.p.c.RmBranchCommitProcessor      : rm client handle branch commit process:xid=188.188.188.11:8091:6980862426907865301,branchId=6980862426907865305,branchType=AT,resourceId=jdbc:mysql://127.0.0.1:3306/seatatest,applicationData=null
2022-07-11 17:33:53.825  INFO 9940 --- [ch_RMROLE_1_1_8] io.seata.rm.AbstractRMHandler            : Branch committing: 188.188.188.11:8091:6980862426907865301 6980862426907865305 jdbc:mysql://127.0.0.1:3306/seatatest null
2022-07-11 17:33:53.825  INFO 9940 --- [ch_RMROLE_1_1_8] io.seata.rm.AbstractRMHandler            : Branch commit result: PhaseTwo_Committed
  • samples-order module
全局事务id :188.188.188.11:8091:6980862426907865301
2022-07-11 17:33:53.849  INFO 10592 --- [ch_RMROLE_1_5_8] i.s.c.r.p.c.RmBranchCommitProcessor      : rm client handle branch commit process:xid=188.188.188.11:8091:6980862426907865301,branchId=6980862426907865310,branchType=AT,resourceId=jdbc:mysql://127.0.0.1:3306/seatatest,applicationData={"autoCommit":false,"skipCheckLock":true}
2022-07-11 17:33:53.849  INFO 10592 --- [ch_RMROLE_1_5_8] io.seata.rm.AbstractRMHandler            : Branch committing: 188.188.188.11:8091:6980862426907865301 6980862426907865310 jdbc:mysql://127.0.0.1:3306/seatatest {"autoCommit":false,"skipCheckLock":true}
2022-07-11 17:33:53.849  INFO 10592 --- [ch_RMROLE_1_5_8] io.seata.rm.AbstractRMHandler            : Branch commit result: PhaseTwo_Committed
  • Server logs

17:33:52.131  INFO --- [     batchLoggerPrint_1_1] i.s.c.r.p.server.BatchLogHandler         : timeout=300000,transactionName=dubbo-gts-seata-example,clientIp:188.188.188.11,vgroup:default_tx_group
17:33:52.166  INFO --- [verHandlerThread_1_30_500] i.s.s.coordinator.DefaultCoordinator     : Begin new global transaction applicationId: dubbo-gts-seata-example,transactionServiceGroup: default_tx_group, transactionName: dubbo-gts-seata-example,timeout:300000,xid:188.188.188.11:8091:6980862426907865301
17:33:52.252  INFO --- [     batchLoggerPrint_1_1] i.s.c.r.p.server.BatchLogHandler         : SeataMergeMessage xid=188.188.188.11:8091:6980862426907865301,branchType=AT,resourceId=jdbc:mysql://127.0.0.1:3306/seatatest,lockKey=t_stock:1
,clientIp:188.188.188.11,vgroup:default_tx_group
17:33:52.411  INFO --- [verHandlerThread_1_31_500] i.seata.server.coordinator.AbstractCore  : Register branch successfully, xid = 188.188.188.11:8091:6980862426907865301, branchId = 6980862426907865303, resourceId = jdbc:mysql://127.0.0.1:3306/seatatest ,lockKeys = t_stock:1
17:33:52.572  INFO --- [     batchLoggerPrint_1_1] i.s.c.r.p.server.BatchLogHandler         : SeataMergeMessage xid=188.188.188.11:8091:6980862426907865301,branchType=AT,resourceId=jdbc:mysql://127.0.0.1:3306/seatatest,lockKey=t_account:1
,clientIp:188.188.188.11,vgroup:default_tx_group
17:33:52.736  INFO --- [verHandlerThread_1_32_500] i.seata.server.coordinator.AbstractCore  : Register branch successfully, xid = 188.188.188.11:8091:6980862426907865301, branchId = 6980862426907865305, resourceId = jdbc:mysql://127.0.0.1:3306/seatatest ,lockKeys = t_account:1
17:33:52.840  INFO --- [     batchLoggerPrint_1_1] i.s.c.r.p.server.BatchLogHandler         : SeataMergeMessage xid=188.188.188.11:8091:6980862426907865301,branchType=AT,resourceId=jdbc:mysql://127.0.0.1:3306/seatatest,lockKey=t_order:104
,clientIp:188.188.188.11,vgroup:default_tx_group
17:33:52.921  INFO --- [verHandlerThread_1_33_500] i.seata.server.coordinator.AbstractCore  : Register branch successfully, xid = 188.188.188.11:8091:6980862426907865301, branchId = 6980862426907865310, resourceId = jdbc:mysql://127.0.0.1:3306/seatatest ,lockKeys = t_order:104
17:33:52.969  INFO --- [     batchLoggerPrint_1_1] i.s.c.r.p.server.BatchLogHandler         : xid=188.188.188.11:8091:6980862426907865301,extraData=null,clientIp:188.188.188.11,vgroup:default_tx_group
17:33:53.938  INFO --- [      AsyncCommitting_1_1] io.seata.server.coordinator.DefaultCore  : Committing global transaction is successfully done, xid = 188.188.188.11:8091:6980862426907865301.
17:34:16.354  INFO --- [verHandlerThread_1_35_500] i.s.c.r.processor.server.RegRmProcessor  : RM register success,message:RegisterRMRequest{resourceIds='null', applicationId='dubbo-gts-seata-example', transactionServiceGroup='default_tx_group'},channel:[id: 0x21d341a8, L:/188.188.188.11:8091 - R:/188.188.188.11:51803],client version:1.5.1
17:35:37.851  INFO --- [     RetryRollbacking_1_1] io.seata.server.coordinator.DefaultCore  : Rollback global transaction successfully, xid = 188.188.188.11:8091:6980862426907865251.
17:35:53.869  INFO --- [     RetryRollbacking_1_1] io.seata.server.coordinator.DefaultCore  : Rollback global transaction successfully, xid = 188.188.188.11:8091:6980862426907865275.

Test rollback on exception

Uncomment lines 73–75 of io.seata.samples.integration.call.service.BusinessServiceImpl#handleBusiness (the commented-out code), restart the samples-business module, and resend the previous request.

  • samples-business module
2022-07-11 18:09:12.501  INFO 30620 --- [nio-8104-exec-1] i.s.s.i.c.controller.BusinessController  : 请求参数:BusinessDTO(userId=1, commodityCode=C201901140001, name=水杯, count=1, amount=100)
2022-07-11 18:09:12.666  INFO 30620 --- [nio-8104-exec-1] io.seata.tm.TransactionManagerHolder     : TransactionManager Singleton io.seata.tm.DefaultTransactionManager@7b230f8
2022-07-11 18:09:12.721  INFO 30620 --- [nio-8104-exec-1] i.seata.tm.api.DefaultGlobalTransaction  : Begin new global transaction [188.188.188.11:8091:6980862426907865541]
开始全局事务,XID = 188.188.188.11:8091:6980862426907865541
2022-07-11 18:09:14.469  INFO 30620 --- [nio-8104-exec-1] i.seata.tm.api.DefaultGlobalTransaction  : Suspending current transaction, xid = 188.188.188.11:8091:6980862426907865541
2022-07-11 18:09:14.469  INFO 30620 --- [nio-8104-exec-1] i.seata.tm.api.DefaultGlobalTransaction  : [188.188.188.11:8091:6980862426907865541] rollback status: Rollbacked
2022-07-11 18:09:14.480 ERROR 30620 --- [nio-8104-exec-1] o.a.c.c.C.[.[.[/].[dispatcherServlet]    : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is java.lang.RuntimeException: 测试抛异常后,分布式事务回滚!] with root cause

java.lang.RuntimeException: 测试抛异常后,分布式事务回滚!
	at io.seata.samples.integration.call.service.BusinessServiceImpl.handleBusiness(BusinessServiceImpl.java:73) ~[classes/:na]
	at io.seata.samples.integration.call.service.BusinessServiceImpl$$FastClassBySpringCGLIB$$2ab3d645.invoke(<generated>) ~[classes/:na]
	at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204) ~[spring-core-5.0.8.RELEASE.jar:5.0.8.RELEASE]
	at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:746) ~[spring-aop-5.0.8.RELEASE.jar:5.0.8.RELEASE]
	at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163) ~[spring-aop-5.0.8.RELEASE.jar:5.0.8.RELEASE]
	at io.seata.spring.annotation.GlobalTransactionalInterceptor$2.execute(GlobalTransactionalInterceptor.java:205) ~[seata-all-1.5.1.jar:1.5.1]
	at io.seata.tm.api.TransactionalTemplate.execute(TransactionalTemplate.java:127) ~[seata-all-1.5.1.jar:1.5.1]
	at io.seata.spring.annotation.GlobalTransactionalInterceptor.handleGlobalTransaction(GlobalTransactionalInterceptor.java:202) ~[seata-all-1.5.1.jar:1.5.1]
	at io.seata.spring.annotation.GlobalTransactionalInterceptor.invoke(GlobalTransactionalInterceptor.java:172) ~[seata-all-1.5.1.jar:1.5.1]
	at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:185) ~[spring-aop-5.0.8.RELEASE.jar:5.0.8.RELEASE]
	at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:688) ~[spring-aop-5.0.8.RELEASE.jar:5.0.8.RELEASE]
	at io.seata.samples.integration.call.service.BusinessServiceImpl$$EnhancerBySpringCGLIB$$1356cc55.handleBusiness(<generated>) ~[classes/:na]
	at io.seata.samples.integration.call.controller.BusinessController.handleBusiness(BusinessController.java:54) ~[classes/:na]
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_202]
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_202]
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_202]
	at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_202]
	at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:209) ~[spring-web-5.0.12.RELEASE.jar:5.0.12.RELEASE]
	at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:136) ~[spring-web-5.0.12.RELEASE.jar:5.0.12.RELEASE]
	at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:102) ~[spring-webmvc-5.0.12.RELEASE.jar:5.0.12.RELEASE]
	at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:891) ~[spring-webmvc-5.0.12.RELEASE.jar:5.0.12.RELEASE]
	at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:797) ~[spring-webmvc-5.0.12.RELEASE.jar:5.0.12.RELEASE]
	at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:87) ~[spring-webmvc-5.0.12.RELEASE.jar:5.0.12.RELEASE]
	at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:991) ~[spring-webmvc-5.0.12.RELEASE.jar:5.0.12.RELEASE]
	at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:925) ~[spring-webmvc-5.0.12.RELEASE.jar:5.0.12.RELEASE]
	at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:981) ~[spring-webmvc-5.0.12.RELEASE.jar:5.0.12.RELEASE]
	at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:884) ~[spring-webmvc-5.0.12.RELEASE.jar:5.0.12.RELEASE]
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:661) ~[tomcat-embed-core-8.5.37.jar:8.5.37]
	at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:858) ~[spring-webmvc-5.0.12.RELEASE.jar:5.0.12.RELEASE]
	at javax.servlet.http.HttpServlet.service(HttpServlet.java:742) ~[tomcat-embed-core-8.5.37.jar:8.5.37]
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:231) ~[tomcat-embed-core-8.5.37.jar:8.5.37]
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-8.5.37.jar:8.5.37]
	at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52) ~[tomcat-embed-websocket-8.5.37.jar:8.5.37]
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[tomcat-embed-core-8.5.37.jar:8.5.37]
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-8.5.37.jar:8.5.37]
	at org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:99) ~[spring-web-5.0.12.RELEASE.jar:5.0.12.RELEASE]
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) ~[spring-web-5.0.12.RELEASE.jar:5.0.12.RELEASE]
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[tomcat-embed-core-8.5.37.jar:8.5.37]
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-8.5.37.jar:8.5.37]
	at org.springframework.web.filter.HttpPutFormContentFilter.doFilterInternal(HttpPutFormContentFilter.java:109) ~[spring-web-5.0.12.RELEASE.jar:5.0.12.RELEASE]
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) ~[spring-web-5.0.12.RELEASE.jar:5.0.12.RELEASE]
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[tomcat-embed-core-8.5.37.jar:8.5.37]
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-8.5.37.jar:8.5.37]
	at org.springframework.web.filter.HiddenHttpMethodFilter.doFilterInternal(HiddenHttpMethodFilter.java:93) ~[spring-web-5.0.12.RELEASE.jar:5.0.12.RELEASE]
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) ~[spring-web-5.0.12.RELEASE.jar:5.0.12.RELEASE]
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[tomcat-embed-core-8.5.37.jar:8.5.37]
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-8.5.37.jar:8.5.37]
	at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:200) ~[spring-web-5.0.12.RELEASE.jar:5.0.12.RELEASE]
	at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) ~[spring-web-5.0.12.RELEASE.jar:5.0.12.RELEASE]
	at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:193) ~[tomcat-embed-core-8.5.37.jar:8.5.37]
	at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:166) ~[tomcat-embed-core-8.5.37.jar:8.5.37]
	at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:198) ~[tomcat-embed-core-8.5.37.jar:8.5.37]
	at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:96) [tomcat-embed-core-8.5.37.jar:8.5.37]
	at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:493) [tomcat-embed-core-8.5.37.jar:8.5.37]
	at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:140) [tomcat-embed-core-8.5.37.jar:8.5.37]
	at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:81) [tomcat-embed-core-8.5.37.jar:8.5.37]
	at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:87) [tomcat-embed-core-8.5.37.jar:8.5.37]
	at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:342) [tomcat-embed-core-8.5.37.jar:8.5.37]
	at org.apache.coyote.http11.Http11Processor.service(Http11Processor.java:800) [tomcat-embed-core-8.5.37.jar:8.5.37]
	at org.apache.coyote.AbstractProcessorLight.process(AbstractProcessorLight.java:66) [tomcat-embed-core-8.5.37.jar:8.5.37]
	at org.apache.coyote.AbstractProtocol$ConnectionHandler.process(AbstractProtocol.java:806) [tomcat-embed-core-8.5.37.jar:8.5.37]
	at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1498) [tomcat-embed-core-8.5.37.jar:8.5.37]
	at org.apache.tomcat.util.net.SocketProcessorBase.run(SocketProcessorBase.java:49) [tomcat-embed-core-8.5.37.jar:8.5.37]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [na:1.8.0_202]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [na:1.8.0_202]
	at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) [tomcat-embed-core-8.5.37.jar:8.5.37]
	at java.lang.Thread.run(Thread.java:748) [na:1.8.0_202]

2022-07-11 18:09:16.891  INFO 30620 --- [eoutChecker_2_1] i.s.c.r.netty.NettyClientChannelManager  : will connect to 188.188.188.11:8091
2022-07-11 18:09:16.892  INFO 30620 --- [eoutChecker_2_1] i.s.core.rpc.netty.NettyPoolableFactory  : NettyPool create channel to transactionRole:RMROLE,address:188.188.188.11:8091,msg:< RegisterRMRequest{resourceIds='null', applicationId='dubbo-gts-seata-example', transactionServiceGroup='default_tx_group'} >
2022-07-11 18:09:16.903  INFO 30620 --- [eoutChecker_2_1] i.s.c.rpc.netty.RmNettyRemotingClient    : register RM success. client version:1.5.1, server version:1.5.1,channel:[id: 0xba19e32d, L:/188.188.188.11:57166 - R:/188.188.188.11:8091]
2022-07-11 18:09:16.903  INFO 30620 --- [eoutChecker_2_1] i.s.core.rpc.netty.NettyPoolableFactory  : register success, cost 6 ms, version:1.5.1,role:RMROLE,channel:[id: 0xba19e32d, L:/188.188.188.11:57166 - R:/188.188.188.11:8091]
  • samples-stock module
Global transaction id: 188.188.188.11:8091:6980862426907865541
2022-07-11 18:09:14.298  INFO 27916 --- [ch_RMROLE_1_8_8] i.s.c.r.p.c.RmBranchRollbackProcessor    : rm handle branch rollback process:xid=188.188.188.11:8091:6980862426907865541,branchId=6980862426907865544,branchType=AT,resourceId=jdbc:mysql://127.0.0.1:3306/seatatest,applicationData={"autoCommit":false}
2022-07-11 18:09:14.298  INFO 27916 --- [ch_RMROLE_1_8_8] io.seata.rm.AbstractRMHandler            : Branch Rollbacking: 188.188.188.11:8091:6980862426907865541 6980862426907865544 jdbc:mysql://127.0.0.1:3306/seatatest
2022-07-11 18:09:14.403  INFO 27916 --- [ch_RMROLE_1_8_8] i.s.r.d.undo.AbstractUndoLogManager      : xid 188.188.188.11:8091:6980862426907865541 branch 6980862426907865544, undo_log deleted with GlobalFinished
2022-07-11 18:09:14.420  INFO 27916 --- [ch_RMROLE_1_8_8] io.seata.rm.AbstractRMHandler            : Branch Rollbacked result: PhaseTwo_Rollbacked

  • samples-account module
Global transaction id: 188.188.188.11:8091:6980862426907865541
2022-07-11 18:09:14.137  INFO 9940 --- [ch_RMROLE_1_2_8] i.s.c.r.p.c.RmBranchRollbackProcessor    : rm handle branch rollback process:xid=188.188.188.11:8091:6980862426907865541,branchId=6980862426907865546,branchType=AT,resourceId=jdbc:mysql://127.0.0.1:3306/seatatest,applicationData=null
2022-07-11 18:09:14.137  INFO 9940 --- [ch_RMROLE_1_2_8] io.seata.rm.AbstractRMHandler            : Branch Rollbacking: 188.188.188.11:8091:6980862426907865541 6980862426907865546 jdbc:mysql://127.0.0.1:3306/seatatest
2022-07-11 18:09:14.236  INFO 9940 --- [ch_RMROLE_1_2_8] i.s.r.d.undo.AbstractUndoLogManager      : xid 188.188.188.11:8091:6980862426907865541 branch 6980862426907865546, undo_log deleted with GlobalFinished
2022-07-11 18:09:14.251  INFO 9940 --- [ch_RMROLE_1_2_8] io.seata.rm.AbstractRMHandler            : Branch Rollbacked result: PhaseTwo_Rollbacked

  • samples-order module
Global transaction id: 188.188.188.11:8091:6980862426907865541
2022-07-11 18:09:13.976  INFO 10592 --- [ch_RMROLE_1_6_8] i.s.c.r.p.c.RmBranchRollbackProcessor    : rm handle branch rollback process:xid=188.188.188.11:8091:6980862426907865541,branchId=6980862426907865548,branchType=AT,resourceId=jdbc:mysql://127.0.0.1:3306/seatatest,applicationData={"autoCommit":false,"skipCheckLock":true}
2022-07-11 18:09:13.976  INFO 10592 --- [ch_RMROLE_1_6_8] io.seata.rm.AbstractRMHandler            : Branch Rollbacking: 188.188.188.11:8091:6980862426907865541 6980862426907865548 jdbc:mysql://127.0.0.1:3306/seatatest
2022-07-11 18:09:14.064  INFO 10592 --- [ch_RMROLE_1_6_8] i.s.r.d.undo.AbstractUndoLogManager      : xid 188.188.188.11:8091:6980862426907865541 branch 6980862426907865548, undo_log deleted with GlobalFinished
2022-07-11 18:09:14.080  INFO 10592 --- [ch_RMROLE_1_6_8] io.seata.rm.AbstractRMHandler            : Branch Rollbacked result: PhaseTwo_Rollbacked

  • Server log

18:09:12.670  INFO --- [     batchLoggerPrint_1_1] i.s.c.r.p.server.BatchLogHandler         : timeout=300000,transactionName=dubbo-gts-seata-example,clientIp:188.188.188.11,vgroup:default_tx_group
18:09:12.711  INFO --- [verHandlerThread_1_36_500] i.s.s.coordinator.DefaultCoordinator     : Begin new global transaction applicationId: dubbo-gts-seata-example,transactionServiceGroup: default_tx_group, transactionName: dubbo-gts-seata-example,timeout:300000,xid:188.188.188.11:8091:6980862426907865541
18:09:13.010  INFO --- [     batchLoggerPrint_1_1] i.s.c.r.p.server.BatchLogHandler         : SeataMergeMessage xid=188.188.188.11:8091:6980862426907865541,branchType=AT,resourceId=jdbc:mysql://127.0.0.1:3306/seatatest,lockKey=t_stock:1
,clientIp:188.188.188.11,vgroup:default_tx_group
18:09:13.117  INFO --- [verHandlerThread_1_37_500] i.seata.server.coordinator.AbstractCore  : Register branch successfully, xid = 188.188.188.11:8091:6980862426907865541, branchId = 6980862426907865544, resourceId = jdbc:mysql://127.0.0.1:3306/seatatest ,lockKeys = t_stock:1
18:09:13.388  INFO --- [     batchLoggerPrint_1_1] i.s.c.r.p.server.BatchLogHandler         : SeataMergeMessage xid=188.188.188.11:8091:6980862426907865541,branchType=AT,resourceId=jdbc:mysql://127.0.0.1:3306/seatatest,lockKey=t_account:1
,clientIp:188.188.188.11,vgroup:default_tx_group
18:09:13.500  INFO --- [verHandlerThread_1_38_500] i.seata.server.coordinator.AbstractCore  : Register branch successfully, xid = 188.188.188.11:8091:6980862426907865541, branchId = 6980862426907865546, resourceId = jdbc:mysql://127.0.0.1:3306/seatatest ,lockKeys = t_account:1
18:09:13.734  INFO --- [     batchLoggerPrint_1_1] i.s.c.r.p.server.BatchLogHandler         : SeataMergeMessage xid=188.188.188.11:8091:6980862426907865541,branchType=AT,resourceId=jdbc:mysql://127.0.0.1:3306/seatatest,lockKey=t_order:105
,clientIp:188.188.188.11,vgroup:default_tx_group
18:09:13.836  INFO --- [verHandlerThread_1_39_500] i.seata.server.coordinator.AbstractCore  : Register branch successfully, xid = 188.188.188.11:8091:6980862426907865541, branchId = 6980862426907865548, resourceId = jdbc:mysql://127.0.0.1:3306/seatatest ,lockKeys = t_order:105
18:09:13.888  INFO --- [     batchLoggerPrint_1_1] i.s.c.r.p.server.BatchLogHandler         : xid=188.188.188.11:8091:6980862426907865541,extraData=null,clientIp:188.188.188.11,vgroup:default_tx_group
18:09:14.127  INFO --- [verHandlerThread_1_40_500] io.seata.server.coordinator.DefaultCore  : Rollback branch transaction successfully, xid = 188.188.188.11:8091:6980862426907865541 branchId = 6980862426907865548
18:09:14.295  INFO --- [verHandlerThread_1_40_500] io.seata.server.coordinator.DefaultCore  : Rollback branch transaction successfully, xid = 188.188.188.11:8091:6980862426907865541 branchId = 6980862426907865546
18:09:14.466  INFO --- [verHandlerThread_1_40_500] io.seata.server.coordinator.DefaultCore  : Rollback branch transaction successfully, xid = 188.188.188.11:8091:6980862426907865541 branchId = 6980862426907865544
18:09:14.467  INFO --- [verHandlerThread_1_40_500] io.seata.server.coordinator.DefaultCore  : Rollback global transaction successfully, xid = 188.188.188.11:8091:6980862426907865541.
18:09:16.901  INFO --- [verHandlerThread_1_41_500] i.s.c.r.processor.server.RegRmProcessor  : RM register success,message:RegisterRMRequest{resourceIds='null', applicationId='dubbo-gts-seata-example', transactionServiceGroup='default_tx_group'},channel:[id: 0xfc55c990, L:/188.188.188.11:8091 - R:/188.188.188.11:57166],client version:1.5.1

Pitfalls

Nacos configuration keys must be camelCase; kebab-case (dash-separated) keys are not recognized

Seata does not adapt key naming styles when loading configuration from Nacos. See the constants in Seata's io.seata.common.ConfigurationKeys class: a configuration key is loaded only if its name matches the constant exactly.

Do not base the Nacos configuration on the application.example.yml file shipped in the Seata server package; use the config.txt file instead. If you do use YAML-format configuration, remember to also change seata.config.nacos.data-id so that the format of the configuration in Nacos matches what Seata expects; otherwise it will not be recognized.
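For illustration, in the seata.properties stored in Nacos only camelCase keys matching the ConfigurationKeys constants are picked up; the kebab-case spellings below are hypothetical and shown only for contrast:

```properties
# Loaded: key matches the constant defined in io.seata.common.ConfigurationKeys
client.rm.lock.retryInterval=10
client.rm.asyncCommitBufferLimit=10000

# Silently ignored: kebab-case variants (shown only for contrast, do not use)
# client.rm.lock.retry-interval=10
# client.rm.async-commit-buffer-limit=10000
```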

The client must have a registry.conf configuration file

Description

The client must have a registry.conf-style configuration file to configure its registry center and configuration center; they cannot be configured directly in the Spring Boot configuration file.

The reason is that the io.seata.config.ConfigurationFactory#load method uses FileConfiguration to load the configuration:

private static void load() {
        String seataConfigName = System.getProperty(SYSTEM_PROPERTY_SEATA_CONFIG_NAME);
        if (seataConfigName == null) {
            seataConfigName = System.getenv(ENV_SEATA_CONFIG_NAME);
        }
        if (seataConfigName == null) {
            seataConfigName = REGISTRY_CONF_DEFAULT;
        }
        String envValue = System.getProperty(ENV_PROPERTY_KEY);
        if (envValue == null) {
            envValue = System.getenv(ENV_SYSTEM_KEY);
        }
        //The following statement loads the `registry.conf` configuration file
        Configuration configuration = (envValue == null) ? new FileConfiguration(seataConfigName,
                false) : new FileConfiguration(seataConfigName + "-" + envValue, false);
        Configuration extConfiguration = null;
        try {
            extConfiguration = EnhancedServiceLoader.load(ExtConfigurationProvider.class).provide(configuration);
            if (LOGGER.isInfoEnabled()) {
                LOGGER.info("load Configuration from :{}", extConfiguration == null ?
                    configuration.getClass().getSimpleName() : "Spring Configuration");
            }
        } catch (EnhancedServiceNotFoundException ignore) {

        } catch (Exception e) {
            LOGGER.error("failed to load extConfiguration:{}", e.getMessage(), e);
        }
        CURRENT_FILE_INSTANCE = extConfiguration == null ? configuration : extConfiguration;
    }

On the server side, the seata-config-core jar carries a default registry.conf on the classpath, but the jars pulled in via seata-all do not contain that file. As a result, the io.seata.config.ConfigurationFactory#buildConfiguration method cannot obtain the seata.config.type setting, and configuration loading fails.
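For reference, a minimal registry.conf that would make a seata-all client work might look like the sketch below; the values mirror the Nacos settings used elsewhere in this demo (namespaces, group, credentials) and should be adjusted to your environment:

```
registry {
  type = "nacos"
  nacos {
    application = "seata-server"
    serverAddr = "127.0.0.1:8848"
    group = "SEATA_GROUP"
    namespace = "seata-server-dev"
    username = "nacos"
    password = "nacos"
  }
}

config {
  type = "nacos"
  nacos {
    serverAddr = "127.0.0.1:8848"
    namespace = "seata-client-dev"
    group = "SEATA_GROUP"
    username = "nacos"
    password = "nacos"
    dataId = "seata.properties"
  }
}
```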

private static Configuration buildConfiguration() {
        String configTypeName = CURRENT_FILE_INSTANCE.getConfig(
                ConfigurationKeys.FILE_ROOT_CONFIG + ConfigurationKeys.FILE_CONFIG_SPLIT_CHAR
                        + ConfigurationKeys.FILE_ROOT_TYPE);

        if (StringUtils.isBlank(configTypeName)) {
            throw new NotSupportYetException("config type can not be null");
        }
        ConfigType configType = ConfigType.getType(configTypeName);

        Configuration extConfiguration = null;
        Configuration configuration;
        if (ConfigType.File == configType) {
            String pathDataId = String.join(ConfigurationKeys.FILE_CONFIG_SPLIT_CHAR,
                    ConfigurationKeys.FILE_ROOT_CONFIG, FILE_TYPE, NAME_KEY);
            String name = CURRENT_FILE_INSTANCE.getConfig(pathDataId);
            configuration = new FileConfiguration(name);
            try {
                extConfiguration = EnhancedServiceLoader.load(ExtConfigurationProvider.class).provide(configuration);
                if (LOGGER.isInfoEnabled()) {
                    LOGGER.info("load Configuration from :{}", extConfiguration == null ?
                        configuration.getClass().getSimpleName() : "Spring Configuration");
                }
            } catch (EnhancedServiceNotFoundException ignore) {

            } catch (Exception e) {
                LOGGER.error("failed to load extConfiguration:{}", e.getMessage(), e);
            }
        } else {
            configuration = EnhancedServiceLoader
                    .load(ConfigurationProvider.class, Objects.requireNonNull(configType).name()).provide();
        }
        try {
            Configuration configurationCache;
            if (null != extConfiguration) {
                configurationCache = ConfigurationCache.getInstance().proxy(extConfiguration);
            } else {
                configurationCache = ConfigurationCache.getInstance().proxy(configuration);
            }
            if (null != configurationCache) {
                extConfiguration = configurationCache;
            }
        } catch (EnhancedServiceNotFoundException ignore) {

        } catch (Exception e) {
            LOGGER.error("failed to load configurationCacheProvider:{}", e.getMessage(), e);
        }
        return null == extConfiguration ? configuration : extConfiguration;
    }

Solution

Instead of pulling in seata-all, use seata-spring-autoconfigure-client.

Add the seata-spring-autoconfigure-client dependency:

        <dependency>
            <groupId>io.seata</groupId>
            <artifactId>seata-spring-autoconfigure-client</artifactId>
            <version>${seata.version}</version>
        </dependency>

If the project does not already have a spring-tx dependency, also add the matching version of spring-tx:

        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-tx</artifactId>
            <version>5.0.8.RELEASE</version>
            <scope>compile</scope>
            <exclusions>
                <exclusion>
                    <artifactId>jcl-over-slf4j</artifactId>
                    <groupId>org.slf4j</groupId>
                </exclusion>
            </exclusions>
        </dependency>

With that in place, Seata's registry center and configuration center can be configured in the Spring Boot configuration file:

seata.config.type=nacos
seata.config.nacos.namespace=seata-client-dev
seata.config.nacos.server-addr=127.0.0.1:8848
seata.config.nacos.group=SEATA_GROUP
seata.config.nacos.username=nacos
seata.config.nacos.password=nacos
##if use MSE Nacos with auth, mutex with username/password attribute
#seata.config.nacos.access-key=
#seata.config.nacos.secret-key=
#seata.config.nacos.data-id=seata.properties

#seata.config.custom.name=

seata.registry.type=nacos
seata.registry.nacos.application=seata-server
seata.registry.nacos.server-addr=127.0.0.1:8848
seata.registry.nacos.group=SEATA_GROUP
seata.registry.nacos.namespace=seata-server-dev
seata.registry.nacos.username=nacos
seata.registry.nacos.password=nacos
##if use MSE Nacos with auth, mutex with username/password attribute
#seata.registry.nacos.access-key=
#seata.registry.nacos.secret-key=
##if use Nacos naming meta-data for SLB  service registry, specify nacos address pattern rules here
#seata.registry.nacos.slb-pattern=

Appendix

Uploading the Seata client configuration to Nacos

Upload the Seata configuration to the seata-client-dev namespace in Nacos.

Download the configuration file locally:

config.txt

Contents:

#For details about configuration items, see https://seata.io/zh-cn/docs/user/configurations.html
#Transport configuration, for client and server
transport.type=TCP
transport.server=NIO
transport.heartbeat=true
transport.enableTmClientBatchSendRequest=false
transport.enableRmClientBatchSendRequest=true
transport.enableTcServerBatchSendResponse=false
transport.rpcRmRequestTimeout=30000
transport.rpcTmRequestTimeout=30000
transport.rpcTcRequestTimeout=30000
transport.threadFactory.bossThreadPrefix=NettyBoss
transport.threadFactory.workerThreadPrefix=NettyServerNIOWorker
transport.threadFactory.serverExecutorThreadPrefix=NettyServerBizHandler
transport.threadFactory.shareBossWorker=false
transport.threadFactory.clientSelectorThreadPrefix=NettyClientSelector
transport.threadFactory.clientSelectorThreadSize=1
transport.threadFactory.clientWorkerThreadPrefix=NettyClientWorkerThread
transport.threadFactory.bossThreadSize=1
transport.threadFactory.workerThreadSize=default
transport.shutdown.wait=3
transport.serialization=seata
transport.compressor=none

#Transaction routing rules configuration, only for the client
service.vgroupMapping.default_tx_group=default
#If you use a registry, you can ignore it
service.default.grouplist=127.0.0.1:8091
service.enableDegrade=false
service.disableGlobalTransaction=false

#Transaction rule configuration, only for the client
client.rm.asyncCommitBufferLimit=10000
client.rm.lock.retryInterval=10
client.rm.lock.retryTimes=30
client.rm.lock.retryPolicyBranchRollbackOnConflict=true
client.rm.reportRetryCount=5
client.rm.tableMetaCheckEnable=true
client.rm.tableMetaCheckerInterval=60000
client.rm.sqlParserType=druid
client.rm.reportSuccessEnable=false
client.rm.sagaBranchRegisterEnable=false
client.rm.sagaJsonParser=fastjson
client.rm.tccActionInterceptorOrder=-2147482648
client.tm.commitRetryCount=5
client.tm.rollbackRetryCount=5
client.tm.defaultGlobalTransactionTimeout=60000
client.tm.degradeCheck=false
client.tm.degradeCheckAllowTimes=10
client.tm.degradeCheckPeriod=2000
client.tm.interceptorOrder=-2147482648
client.undo.dataValidation=true
client.undo.logSerialization=jackson
client.undo.onlyCareUpdateColumns=true
server.undo.logSaveDays=7
server.undo.logDeletePeriod=86400000
client.undo.logTable=undo_log
client.undo.compress.enable=true
client.undo.compress.type=zip
client.undo.compress.threshold=64k
#For TCC transaction mode
tcc.fence.logTableName=tcc_fence_log
tcc.fence.cleanPeriod=1h

#Log rule configuration, for client and server
log.exceptionRate=100

#Transaction storage configuration, only for the server. The file, DB, and redis configuration values are optional.
store.mode=file
store.lock.mode=file
store.session.mode=file
#Used for password encryption
store.publicKey=

#If `store.mode,store.lock.mode,store.session.mode` are not equal to `file`, you can remove the configuration block.
store.file.dir=file_store/data
store.file.maxBranchSessionSize=16384
store.file.maxGlobalSessionSize=512
store.file.fileWriteBufferCacheSize=16384
store.file.flushDiskMode=async
store.file.sessionReloadReadSize=100

#These configurations are required if the `store mode` is `db`. If `store.mode,store.lock.mode,store.session.mode` are not equal to `db`, you can remove the configuration block.
store.db.datasource=druid
store.db.dbType=mysql
store.db.driverClassName=com.mysql.jdbc.Driver
store.db.url=jdbc:mysql://127.0.0.1:3306/seata?useUnicode=true&rewriteBatchedStatements=true
store.db.user=username
store.db.password=password
store.db.minConn=5
store.db.maxConn=30
store.db.globalTable=global_table
store.db.branchTable=branch_table
store.db.distributedLockTable=distributed_lock
store.db.queryLimit=100
store.db.lockTable=lock_table
store.db.maxWait=5000

#These configurations are required if the `store mode` is `redis`. If `store.mode,store.lock.mode,store.session.mode` are not equal to `redis`, you can remove the configuration block.
store.redis.mode=single
store.redis.single.host=127.0.0.1
store.redis.single.port=6379
store.redis.sentinel.masterName=
store.redis.sentinel.sentinelHosts=
store.redis.maxConn=10
store.redis.minConn=1
store.redis.maxTotal=100
store.redis.database=0
store.redis.password=
store.redis.queryLimit=100

#Transaction rule configuration, only for the server
server.recovery.committingRetryPeriod=1000
server.recovery.asynCommittingRetryPeriod=1000
server.recovery.rollbackingRetryPeriod=1000
server.recovery.timeoutRetryPeriod=1000
server.maxCommitRetryTimeout=-1
server.maxRollbackRetryTimeout=-1
server.rollbackRetryTimeoutUnlockEnable=false
server.distributedLockExpireTime=10000
server.xaerNotaRetryTimeout=60000
server.session.branchAsyncQueueSize=5000
server.session.enableBranchAsyncRemove=false
server.enableParallelRequestHandle=false

#Metrics configuration, only for the server
metrics.enabled=false
metrics.registryType=compact
metrics.exporterList=prometheus
metrics.exporterPrometheusPort=9898

Download the script locally:

nacos-config.sh

Modify the following section of nacos-config.sh.
From:

for line in $(cat $(dirname "$PWD")/config.txt | sed s/[[:space:]]//g); do
    if [[ "$line" =~ ^"${COMMENT_START}".*  ]]; then
      continue
    fi
    count=`expr $count + 1`
	  key=${line%%=*}
    value=${line#*=}
	  addConfig "${key}" "${value}"
done

To:

for line in $(cat ./config.txt | sed s/[[:space:]]//g); do
    if [[ "$line" =~ ^"${COMMENT_START}".*  ]]; then
      continue
    fi
    count=`expr $count + 1`
	  key=${line%%=*}
    value=${line#*=}
	  addConfig "${key}" "${value}"
done
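The loop relies on Bash parameter expansion to split each line at the first `=` only, which matters because values such as the JDBC URL themselves contain `=`. A standalone sketch of that split (the `parse_key`/`parse_value` helper names are illustrative, not part of the script):

```shell
# key:   ${line%%=*} strips the longest suffix starting at '='  -> text before the first '='
# value: ${line#*=}  strips the shortest prefix ending at '='   -> text after the first '='
parse_key()   { printf '%s\n' "${1%%=*}"; }
parse_value() { printf '%s\n' "${1#*=}"; }

line="store.db.url=jdbc:mysql://127.0.0.1:3306/seata?useUnicode=true"
parse_key   "$line"   # store.db.url
parse_value "$line"   # jdbc:mysql://127.0.0.1:3306/seata?useUnicode=true
```

Note that a naive split on every `=` would truncate the JDBC URL at `useUnicode`.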

Run the upload

Open a Bash shell.

Run the command:

sh nacos-config.sh -h 127.0.0.1 -p 8848 -g SEATA_GROUP -t seata-client-dev -u nacos -w nacos

Upload result:

(screenshot of the upload result omitted; the original image link is broken)

The 4 failures are configuration items whose values are empty (store.publicKey, store.redis.password, store.redis.sentinel.masterName, store.redis.sentinel.sentinelHosts).
