spring cloud alibaba project + Seata distributed transactions
1 Versions used
| Name | Version |
| --- | --- |
| spring cloud alibaba | 2021.0.4.0 |
| nacos | 2.0.4 |
| spring cloud gateway | 3.1.6 |
| openFeign | 3.1.4 |
| seata | 1.5.2 |
| Deployment environment | windows |

Other components are not listed one by one.
Goal: use Seata to implement distributed transaction management.
1.1 Seata distributed transaction setup steps
- Set up Nacos as the registry and configuration center
- Deploy the TC component (seata-server)
- Deploy the TM component (the microservice whose class carries the @GlobalTransactional annotation acts as the TM)
- Deploy the RM components (the microservices)
- Verify the distributed transaction
2 Deployment
2.1 Nacos registry and configuration center
This is not covered in detail here; the official documentation explains it and it is straightforward.
After deployment, note the nacos address; in this example it is 127.0.0.1:8848.
2.2 Deploy the TC component (seata-server)
2.2.1 Download seata-server
- Download: pick version 1.5.2 and the package for your platform; this guide runs on Windows, so seata-server-1.5.2.zip
- Unzip it, then adjust the configuration
- Modify config/application.yml, using config/application.example.yml as the reference
- Points to modify: port, config, registry, and store; copy the corresponding nodes from application.example.yml into application.yml. This guide uses the nacos configuration; if you want the file-based configuration, look at the official Seata samples, which mostly use file configuration and cover usage in various environments (well worth reading)
```yaml
server:
  port: 17091  # changed; the default is 7091

spring:
  application:
    name: seata-server  # no need to change

logging:
  config: classpath:logback-spring.xml  # logging configuration
  file:
    # path: ${user.home}/logs/seata
    path: D:/data/seata-server/logs/seata  # log output location; makes troubleshooting easier
  extend:
    logstash-appender:
      destination: 127.0.0.1:4560
    kafka-appender:
      bootstrap-servers: 127.0.0.1:9092
      topic: logback_to_logstash

console:  # no change needed
  user:
    username: seata
    password: seata

seata:
  config:  # how seata loads its own configuration; nacos here
    # support: nacos, consul, apollo, zk, etcd3
    type: nacos  # changed to nacos; the default is file
    nacos:
      server-addr: 127.0.0.1:8848  # nacos address
      namespace: bea5bdf4-a9ad-451d-8610-769092ac24bf  # nacos namespace, customizable; must match the config file created in nacos later
      group: SEATA_GROUP  # nacos group, customizable; must match the config file created in nacos later
      username: nacos  # nacos credentials
      password: nacos
      data-id: seataServer.properties  # data-id of the config file to be created in nacos later
  registry:  # registration; nacos, largely the same as above
    # support: nacos, eureka, redis, zk, consul, etcd3, sofa
    type: nacos  # changed to nacos; the default is file
    nacos:
      application: seata-server
      server-addr: 127.0.0.1:8848
      group: SEATA_GROUP
      namespace: bea5bdf4-a9ad-451d-8610-769092ac24bf
      cluster: default
      username: nacos
      password: nacos
      ##if use MSE Nacos with auth, mutex with username/password attribute
      #access-key: ""
      #secret-key: ""
  server:
    service-port: 18091  # the port the microservices connect to; if not configured it defaults to server.port + 1000
    max-commit-retry-timeout: -1
    max-rollback-retry-timeout: -1
    rollback-retry-timeout-unlock-enable: false
    enable-check-auth: true
    enable-parallel-request-handle: true
    retry-dead-threshold: 130000
    xaer-nota-retry-timeout: 60000
    recovery:
      handle-all-session-period: 1000
    undo:
      log-save-days: 7
      log-delete-period: 86400000
    session:
      branch-async-queue-size: 5000   # branch async remove queue size
      enable-branch-async-remove: false  # enable asynchronous removal of branch sessions
  store:  # storage mode; db here, which requires running a script (where to find it is covered below)
    # support: file, db, redis
    mode: db  # changed to db; adjust the settings below accordingly
    db:
      datasource: druid
      db-type: mysql
      driver-class-name: com.mysql.cj.jdbc.Driver
      url: jdbc:mysql://127.0.0.1:3306/seata?rewriteBatchedStatements=true&serverTimezone=Asia/Shanghai
      user: root
      password: 123456
      min-conn: 5
      max-conn: 100
      global-table: seata_global            # originally global_table; if you rename it, rename it in the SQL script below too
      branch-table: seata_branch            # originally branch_table; same note
      lock-table: seata_lock                # originally lock_table; same note
      distributed-lock-table: seata_distributed  # originally distributed_lock; same note
      query-limit: 100
      max-wait: 5000
  # server:
  #   service-port: 8091 # If not configured, the default is '${server.port} + 1000'
  security:
    secretKey: SeataSecretKey0c382ef121d778043159209298fd40bf3850a017
    tokenValidityInMilliseconds: 1800000
    ignore:  # adjust as needed; the default is usually fine
      urls: /,/**/*.css,/**/*.js,/**/*.html,/**/*.map,/**/*.svg,/**/*.png,/**/*.ico,/console-fe/public/**,/api/v1/auth/login
  metrics:  # copied from the template configuration
    enabled: false
    registry-type: compact
    exporter-list: prometheus
    exporter-prometheus-port: 9898
  transport:
    rpc-tc-request-timeout: 30000
    enable-tc-server-batch-send-response: false
    shutdown:
      wait: 3
    thread-factory:
      boss-thread-prefix: NettyBoss
      worker-thread-prefix: NettyServerNIOWorker
      boss-thread-size: 1
```
2.2.2 Configure seataServer.properties in nacos
- Path: find config.txt under <seata-server unzip dir>\script\config-center and make the following changes
- Modify the service configuration
```properties
#Very important: change default_tx_group here to my_tx_group and remember it for later
service.vgroupMapping.default_tx_group=default
#If you use a registry, you can ignore it
#Change this to the server.service-port configured in application.yml (18091)
service.default.grouplist=127.0.0.1:18091
service.enableDegrade=false
service.disableGlobalTransaction=false
```
- store settings
```properties
#Transaction storage configuration, only for the server. The file, DB, and redis configuration values are optional.
#Change the three modes below from file to db
store.mode=file
store.lock.mode=file
store.session.mode=file
#Used for password encryption
store.publicKey=
#Update the datasource settings below to match your database
store.db.datasource=druid
store.db.dbType=mysql
store.db.driverClassName=com.mysql.jdbc.Driver
store.db.url=jdbc:mysql://127.0.0.1:3306/seata?useUnicode=true&rewriteBatchedStatements=true
store.db.user=username
store.db.password=password
store.db.minConn=5
store.db.maxConn=30
#Do not forget to change the four table names below to match application.yml
store.db.globalTable=global_table
store.db.branchTable=branch_table
store.db.distributedLockTable=distributed_lock
store.db.lockTable=lock_table
store.db.queryLimit=100
store.db.maxWait=5000
```
- The final configuration after the changes:
```properties
#Transport configuration, for client and server
transport.type=TCP
transport.server=NIO
transport.heartbeat=true
transport.enableTmClientBatchSendRequest=false
transport.enableRmClientBatchSendRequest=true
transport.enableTcServerBatchSendResponse=false
transport.rpcRmRequestTimeout=30000
transport.rpcTmRequestTimeout=30000
transport.rpcTcRequestTimeout=30000
transport.threadFactory.bossThreadPrefix=NettyBoss
transport.threadFactory.workerThreadPrefix=NettyServerNIOWorker
transport.threadFactory.serverExecutorThreadPrefix=NettyServerBizHandler
transport.threadFactory.shareBossWorker=false
transport.threadFactory.clientSelectorThreadPrefix=NettyClientSelector
transport.threadFactory.clientSelectorThreadSize=1
transport.threadFactory.clientWorkerThreadPrefix=NettyClientWorkerThread
transport.threadFactory.bossThreadSize=1
transport.threadFactory.workerThreadSize=default
transport.shutdown.wait=3
transport.serialization=seata
transport.compressor=none

#Transaction routing rules configuration, only for the client
#service.vgroupMapping.default_tx_group=default
#mark: changed here
service.vgroupMapping.my_tx_group=default
#If you use a registry, you can ignore it
#mark: changed here
service.default.grouplist=127.0.0.1:18091
service.enableDegrade=false
service.disableGlobalTransaction=false

#Transaction rule configuration, only for the client
client.rm.asyncCommitBufferLimit=10000
client.rm.lock.retryInterval=10
client.rm.lock.retryTimes=30
client.rm.lock.retryPolicyBranchRollbackOnConflict=true
client.rm.reportRetryCount=5
client.rm.tableMetaCheckEnable=true
client.rm.tableMetaCheckerInterval=60000
client.rm.sqlParserType=druid
client.rm.reportSuccessEnable=false
client.rm.sagaBranchRegisterEnable=false
client.rm.sagaJsonParser=fastjson
client.rm.tccActionInterceptorOrder=-2147482648
client.tm.commitRetryCount=5
client.tm.rollbackRetryCount=5
client.tm.defaultGlobalTransactionTimeout=60000
client.tm.degradeCheck=false
client.tm.degradeCheckAllowTimes=10
client.tm.degradeCheckPeriod=2000
client.tm.interceptorOrder=-2147482648
client.undo.dataValidation=true
client.undo.logSerialization=jackson
client.undo.onlyCareUpdateColumns=true
server.undo.logSaveDays=7
server.undo.logDeletePeriod=86400000
client.undo.logTable=undo_log
client.undo.compress.enable=true
client.undo.compress.type=zip
client.undo.compress.threshold=64k

#For TCC transaction mode
tcc.fence.logTableName=tcc_fence_log
tcc.fence.cleanPeriod=1h

#Log rule configuration, for client and server
log.exceptionRate=100

#Transaction storage configuration, only for the server. The file, DB, and redis configuration values are optional.
#mark: the three modes below were changed here
store.mode=DB
store.lock.mode=DB
store.session.mode=DB
#Used for password encryption
store.publicKey=
#These configurations are required if the `store mode` is `db`. If `store.mode,store.lock.mode,store.session.mode` are not equal to `db`, you can remove the configuration block.
store.db.datasource=druid
store.db.dbType=mysql
#mark: changed here because MySQL 8 is used; with MySQL 5 the driver does not need to change
store.db.driverClassName=com.mysql.cj.jdbc.Driver
store.db.url=jdbc:mysql://127.0.0.1:3306/seata?useUnicode=true&rewriteBatchedStatements=true&serverTimezone=Asia/Shanghai
store.db.user=root
store.db.password=123456
store.db.minConn=5
store.db.maxConn=30
#mark: the four table names below were changed here to match application.yml
store.db.globalTable=seata_global
store.db.branchTable=seata_branch
store.db.distributedLockTable=seata_distributed
store.db.queryLimit=100
store.db.lockTable=seata_lock
store.db.maxWait=5000

#Transaction rule configuration, only for the server
server.recovery.committingRetryPeriod=1000
server.recovery.asynCommittingRetryPeriod=1000
server.recovery.rollbackingRetryPeriod=1000
server.recovery.timeoutRetryPeriod=1000
server.maxCommitRetryTimeout=-1
server.maxRollbackRetryTimeout=-1
server.rollbackRetryTimeoutUnlockEnable=false
server.distributedLockExpireTime=10000
server.xaerNotaRetryTimeout=60000
server.session.branchAsyncQueueSize=5000
server.session.enableBranchAsyncRemove=false
server.enableParallelRequestHandle=false

#Metrics configuration, only for the server
metrics.enabled=false
metrics.registryType=compact
metrics.exporterList=prometheus
metrics.exporterPrometheusPort=9898
```
- Push the configuration to nacos (the console screenshots are omitted here)
- If service.vgroupMapping is not configured in nacos, the microservices will not be able to find seata-server
- In the seata namespace, add the configuration above; the data-id must match the one configured earlier (seataServer.properties) and the group must be SEATA_GROUP
- Run the MySQL scripts
- Create the seata database:
```sql
CREATE DATABASE `seata` /*!40100 DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci */ /*!80016 DEFAULT ENCRYPTION='N' */;
```
- Run the table script: find mysql.sql under <seata-server unzip dir>\script\server\db, copy it, and change the table names as follows
```sql
-- rename the tables to match the names in application.yml
CREATE TABLE IF NOT EXISTS `seata_global`
(
    `xid`                       VARCHAR(128) NOT NULL,
    `transaction_id`            BIGINT,
    `status`                    TINYINT      NOT NULL,
    `application_id`            VARCHAR(32),
    `transaction_service_group` VARCHAR(32),
    `transaction_name`          VARCHAR(128),
    `timeout`                   INT,
    `begin_time`                BIGINT,
    `application_data`          VARCHAR(2000),
    `gmt_create`                DATETIME,
    `gmt_modified`              DATETIME,
    PRIMARY KEY (`xid`),
    KEY `idx_status_gmt_modified` (`status`, `gmt_modified`),
    KEY `idx_transaction_id` (`transaction_id`)
) ENGINE = InnoDB DEFAULT CHARSET = utf8mb4;

-- the table to store BranchSession data
CREATE TABLE IF NOT EXISTS `seata_branch`
(
    `branch_id`         BIGINT       NOT NULL,
    `xid`               VARCHAR(128) NOT NULL,
    `transaction_id`    BIGINT,
    `resource_group_id` VARCHAR(32),
    `resource_id`       VARCHAR(256),
    `branch_type`       VARCHAR(8),
    `status`            TINYINT,
    `client_id`         VARCHAR(64),
    `application_data`  VARCHAR(2000),
    `gmt_create`        DATETIME(6),
    `gmt_modified`      DATETIME(6),
    PRIMARY KEY (`branch_id`),
    KEY `idx_xid` (`xid`)
) ENGINE = InnoDB DEFAULT CHARSET = utf8mb4;

CREATE TABLE IF NOT EXISTS `seata_lock`
(
    `row_key`        VARCHAR(128) NOT NULL,
    `xid`            VARCHAR(128),
    `transaction_id` BIGINT,
    `branch_id`      BIGINT       NOT NULL,
    `resource_id`    VARCHAR(256),
    `table_name`     VARCHAR(32),
    `pk`             VARCHAR(36),
    `status`         TINYINT      NOT NULL DEFAULT '0' COMMENT '0:locked ,1:rollbacking',
    `gmt_create`     DATETIME,
    `gmt_modified`   DATETIME,
    PRIMARY KEY (`row_key`),
    KEY `idx_status` (`status`),
    KEY `idx_branch_id` (`branch_id`),
    KEY `idx_xid_and_branch_id` (`xid`, `branch_id`)
) ENGINE = InnoDB DEFAULT CHARSET = utf8mb4;

CREATE TABLE IF NOT EXISTS `seata_distributed`
(
    `lock_key`   CHAR(20)    NOT NULL,
    `lock_value` VARCHAR(20) NOT NULL,
    `expire`     BIGINT,
    PRIMARY KEY (`lock_key`)
) ENGINE = InnoDB DEFAULT CHARSET = utf8mb4;

INSERT INTO `seata_distributed` (lock_key, lock_value, expire) VALUES ('AsyncCommitting', ' ', 0);
INSERT INTO `seata_distributed` (lock_key, lock_value, expire) VALUES ('RetryCommitting', ' ', 0);
INSERT INTO `seata_distributed` (lock_key, lock_value, expire) VALUES ('RetryRollbacking', ' ', 0);
INSERT INTO `seata_distributed` (lock_key, lock_value, expire) VALUES ('TxTimeoutCheck', ' ', 0);
```
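One step the server-side scripts do not cover: in AT mode, every business database (the databases service a and service b write to) also needs an undo_log table, which the RM uses to store rollback snapshots; its name must match client.undo.logTable (undo_log in the configuration above). A sketch based on the Seata 1.5 client script (script/client/at/db/mysql.sql in the Seata source repository; verify against your version):

```sql
-- for AT mode you must init this sql for every business database; the seata server does not need it
CREATE TABLE IF NOT EXISTS `undo_log`
(
    `branch_id`     BIGINT       NOT NULL COMMENT 'branch transaction id',
    `xid`           VARCHAR(128) NOT NULL COMMENT 'global transaction id',
    `context`       VARCHAR(128) NOT NULL COMMENT 'undo_log context, such as serialization',
    `rollback_info` LONGBLOB     NOT NULL COMMENT 'rollback info',
    `log_status`    INT(11)      NOT NULL COMMENT '0:normal status, 1:defense status',
    `log_created`   DATETIME(6)  NOT NULL COMMENT 'create datetime',
    `log_modified`  DATETIME(6)  NOT NULL COMMENT 'modify datetime',
    UNIQUE KEY `ux_undo_log` (`xid`, `branch_id`)
) ENGINE = InnoDB DEFAULT CHARSET = utf8mb4 COMMENT = 'AT transaction mode undo table';
```

Without this table, branch transactions in AT mode fail as soon as the data source proxy tries to write a rollback snapshot.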
2.2.3 Start seata-server
Run seata-server.bat under seata-server/bin. After startup (screenshot omitted) the banner may show garbled characters; this can be ignored.
3 Deploy the TM and RM
3.1 Enable seata in the microservices
Modify service a and service b to support seata distributed transactions; the following steps must be applied to both services.
3.1.1 Add the dependency
Add spring-cloud-starter-alibaba-seata:
```xml
<dependency>
    <groupId>com.alibaba.cloud</groupId>
    <artifactId>spring-cloud-starter-alibaba-seata</artifactId>
    <version>2021.0.4.0</version>
</dependency>
```
3.1.2 Modify application.yml
```yaml
seata:
  # enable seata distributed transactions
  enabled: true
  # transaction service group name; must be the my_tx_group configured earlier
  tx-service-group: my_tx_group
  # whether to enable automatic data source proxying
  enable-auto-data-source-proxy: true
  # transaction service settings
  service:
    disable-global-transaction: false
    enable-degrade: false
    vgroup-mapping:
      my_tx_group: default  # must match my_tx_group above, and the value must be default
    grouplist:
      default: 127.0.0.1:18091  # seata-server address
  # nacos configuration center
  config:
    type: nacos
    nacos:
      namespace: bea5bdf4-a9ad-451d-8610-769092ac24bf
      group: SEATA_GROUP
      server-addr: 127.0.0.1:8848
      username: nacos
      password: nacos
  registry:
    type: nacos
    nacos:
      application: seata-server
      namespace: bea5bdf4-a9ad-451d-8610-769092ac24bf
      group: SEATA_GROUP
      cluster: default
      server-addr: 127.0.0.1:8848
      username: nacos
      password: nacos
```
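To make the lookup chain behind these settings explicit (a sketch using this guide's names, my_tx_group and cluster default): the client's seata.tx-service-group value is used as the suffix of a service.vgroupMapping.* key, and that key's value is the name of the TC cluster that is then resolved through the nacos registry.

```properties
# 1. The client's application.yml declares its group: seata.tx-service-group=my_tx_group
# 2. The client reads this key (from nacos, or from its local service.vgroup-mapping) to find the TC cluster:
service.vgroupMapping.my_tx_group=default
# 3. "default" is the cluster under which seata-server registered itself in nacos
#    (registry.nacos.cluster=default in the server's application.yml),
#    so the client ends up connecting to 127.0.0.1:18091
```

This is why a missing service.vgroupMapping entry in nacos leaves the microservices unable to locate seata-server.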
3.2 Add the @GlobalTransactional annotation to the method
- Add the @EnableTransactionManagement annotation to the application classes of service a and service b
- aService: add the global transaction annotation
```java
@GlobalTransactional
@Transactional
public List<UserDTO> saveForSeata(List<UserDTO> userList) throws AppException {
    // save in service a
    userMapper.saveBatch(userList);
    // save in service b through the openFeign client (the XID is propagated automatically)
    ResponseDTO dto = bFeignService.seataSave(userList);
    return userList;
}
```
- bService: add the @Transactional annotation
```java
@Transactional // add this annotation
public int seataSaveData(List<PurOrderSaveBiz> bizList) throws AppException {
    purOrderDetailMapper.saveBatch(bizList);
    return bizList.size();
}
```
4 Verify the distributed transaction
- Make bService throw an exception; if the data in aService is not saved either, global transaction management is working.
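To observe what happens, you can also query the TC tables created earlier while a global transaction is in flight (for example, pause the b-service call in a debugger): a row should appear in seata_global and one per branch in seata_branch, and both should disappear after the commit or rollback completes. A sketch; the column names follow the table script above:

```sql
-- run against the seata database while a global transaction is in flight
SELECT xid, status, transaction_service_group FROM seata_global;
SELECT branch_id, xid, resource_id, status FROM seata_branch;
```

Empty results after the rollback, together with no new rows in the business tables, confirm the global transaction was rolled back end to end.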