1. Spring Cloud Alibaba Seata: Handling Distributed Transactions
1.1 Where the Problem Comes From
Previously a single application talked to a single database, with both deployed on the same machine. In a distributed microservice architecture, however, the original three modules are split into three independent applications, each backed by its own data source:
1. Storage service: deducts stock for a given product.
2. Order service: creates an order from the purchase request.
3. Account service: deducts the amount from the user's account balance.
Each of these operations writes to its own database, and guaranteeing global consistency across them is hard. Whenever one business operation spans multiple data sources, or requires remote calls across multiple systems, a distributed transaction problem arises.
1.2 What Is Seata?
Seata is an open-source distributed transaction solution that provides high-performance, easy-to-use distributed transaction services for microservice architectures.
1.3 Core Concepts
Seata's processing model is "one ID + three components". The one ID is the XID (Transaction ID), a globally unique identifier for a distributed transaction.
The three components:
1. TC (Transaction Coordinator)
Maintains the state of global and branch transactions and drives global commit or rollback.
2. TM (Transaction Manager)
Defines the scope of a global transaction: begins the global transaction, and commits or rolls it back.
3. RM (Resource Manager)
Manages the resources that branch transactions operate on, talks to the TC to register branch transactions and report their status, and drives branch commit or rollback.
A typical lifecycle:
- The TM asks the TC to open a global transaction; once created, a globally unique XID is generated.
- The XID is propagated along the microservice call chain in the request context.
- Each RM registers its branch transaction with the TC, bringing it under the global transaction identified by the XID.
- The TM issues a global commit or rollback decision for the XID to the TC.
- The TC drives all branch transactions under the XID to complete their commit or rollback.
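The five steps above can be sketched as a toy, in-memory coordinator. This is only an illustration of the protocol, not Seata's actual implementation; all class and method names below are made up:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.atomic.AtomicLong;

// Toy in-memory coordinator illustrating the "one ID + three components" flow:
// TM begins -> TC issues XID -> RMs register branches -> TM decides -> TC drives
// every branch to commit or roll back. Not Seata's real implementation.
class MiniTransactionCoordinator {
    private final AtomicLong counter = new AtomicLong();
    private final Map<String, List<String>> branches = new HashMap<>();

    // TM asks the TC to open a global transaction; TC returns a unique XID.
    String begin() {
        String xid = "xid-" + counter.incrementAndGet();
        branches.put(xid, new ArrayList<>());
        return xid;
    }

    // An RM registers its branch under the XID's global transaction.
    void registerBranch(String xid, String resource) {
        branches.get(xid).add(resource);
    }

    // TM issues the global decision; the TC drives every branch the same way.
    List<String> resolve(String xid, boolean commit) {
        List<String> log = new ArrayList<>();
        for (String resource : branches.remove(xid)) {
            log.add(resource + ":" + (commit ? "commit" : "rollback"));
        }
        return log;
    }
}

public class MiniSeataFlow {
    public static void main(String[] args) {
        MiniTransactionCoordinator tc = new MiniTransactionCoordinator();
        String xid = tc.begin();                   // TM -> TC: begin, get XID
        tc.registerBranch(xid, "storage-db");      // RM of the storage service
        tc.registerBranch(xid, "order-db");        // RM of the order service
        tc.registerBranch(xid, "account-db");      // RM of the account service
        System.out.println(tc.resolve(xid, true)); // TM -> TC: global commit
        // prints [storage-db:commit, order-db:commit, account-db:commit]
    }
}
```

In real Seata the XID propagation and branch registration happen transparently inside the RPC framework and data-source proxy; the sketch only shows who talks to whom.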
2. Installing Seata with Docker
Prerequisites:
- MySQL (or another supported database)
- Nacos (service registry, discovery, and configuration center; see the Nacos usage notes)
- Docker
Search for the Seata image:
# docker search seata
NAME DESCRIPTION STARS OFFICIAL AUTOMATED
seataio/seata-server Distributed transaction solution with high p… 69
seatable/seatable Note: this docker repository is no longer up… 27
seatable/seatable-developer Beyond Spreadsheet – official Docker image f… 18
seatable/seatable-enterprise Beyond Spreadsheet – official Docker image f… 8
seatable/python-runner Beyond Spreadsheet – official Docker image f… 2
fancyfong/seata 1
zhaoyunxing/seata 分布式事物中间件-seata 1
vqui/seatable-python-runner https://github.com/vquie/seatable-python-run… 0
vqui/seatable-faas-scheduler https://github.com/vquie/seatable-faas-sched… 0
seatable/seatable-syncer-test 0
seatable/seatable-faas-scheduler-test 0
guaiwolo/seata-server 0
seatable/seatable-enterprise-testing 0
hellowoodes/seata Seata Server for Alibaba Seata 0
wjq1028cs2/seata-server fix a bug from seataio/seata-server 0
ssgssg/seata seata 0
seatable/dtable-server-proxy 0
shuogesha/seata1.1.0 seata1.1.0 0
seatable/seatable-faas-scheduler Beyond Spreadsheet – official Docker image f… 0
lovechen/seatable-developer Beyond Spreadsheet – official Docker image f… 0
infinivision/seata 0
seatabay/ubuntu-nodejs 0
majiajue/seata seata-server 0
wildwind113/seata 0
levygat2b/seatable-components a repo for the seatable image separated into… 0
Use the first result and pull the image:
# docker pull seataio/seata-server:1.5.2
Start the container:
# docker run -it -d --name=seata -p 8091:8091 -e SEATA_PORT=8091 seataio/seata-server:1.5.2
Copy the configuration files out of the container so they can be edited:
# docker cp seata:/seata-server/ /usr/local/docker/seata/
# cd /usr/local/docker/seata/seata-server/resources/
# vim application.yml
The values that need to be changed are marked with #{}:
server:
  port: 7091

spring:
  application:
    name: seata-server

logging:
  config: classpath:logback-spring.xml
  file:
    path: ${user.home}/logs/seata
  extend:
    logstash-appender:
      destination: 127.0.0.1:4560
    kafka-appender:
      bootstrap-servers: 127.0.0.1:9092
      topic: logback_to_logstash

console:
  user:
    username: seata
    password: seata

seata:
  config:
    # support: nacos, consul, apollo, zk, etcd3
    type: nacos
    nacos:
      server-addr: #{127.0.0.1}:#{8848}
      namespace: #{seata}
      group: SEATA_GROUP
      username: #{username}
      password: #{password}
  registry:
    # support: nacos, eureka, redis, zk, consul, etcd3, sofa
    type: nacos
    preferred-networks: 30.240.*
    nacos:
      application: seata-server
      server-addr: #{127.0.0.1}:#{8848}
      group: SEATA_GROUP
      namespace: #{seata}
      cluster: default
      username: #{username}
      password: #{password}
  store:
    # support: file, db, redis
    mode: db
    session:
      mode: db
    lock:
      mode: db
    file:
      dir: sessionStore
      max-branch-session-size: 16384
      max-global-session-size: 512
      file-write-buffer-cache-size: 16384
      session-reload-read-size: 100
      flush-disk-mode: async
    db:
      datasource: druid
      db-type: mysql
      driver-class-name: com.mysql.jdbc.Driver
      url: jdbc:mysql://#{127.0.0.1}:#{3306}/seata?rewriteBatchedStatements=true
      user: #{username}
      password: #{password}
      min-conn: 5
      max-conn: 100
      global-table: global_table
      branch-table: branch_table
      lock-table: lock_table
      distributed-lock-table: distributed_lock
      query-limit: 100
      max-wait: 5000
  server:
    service-port: 8091 #If not configured, the default is '${server.port} + 1000'
    max-commit-retry-timeout: -1
    max-rollback-retry-timeout: -1
    rollback-retry-timeout-unlock-enable: false
    enable-check-auth: true
    enable-parallel-request-handle: true
    retry-dead-threshold: 130000
    xaer-nota-retry-timeout: 60000
    recovery:
      handle-all-session-period: 1000
    undo:
      log-save-days: 7
      log-delete-period: 86400000
    session:
      branch-async-queue-size: 5000 #branch async remove queue size
      enable-branch-async-remove: false #enable to asynchronous remove branchSession
  # server:
  #   service-port: 8091 #If not configured, the default is '${server.port} + 1000'
  metrics:
    enabled: false
    registry-type: compact
    exporter-list: prometheus
    exporter-prometheus-port: 9898
  transport:
    rpc-tc-request-timeout: 30000
    enable-tc-server-batch-send-response: false
    shutdown:
      wait: 3
    thread-factory:
      boss-thread-prefix: NettyBoss
      worker-thread-prefix: NettyServerNIOWorker
      boss-thread-size: 1
  security:
    secretKey: SeataSecretKey0c382ef121d778043159209298fd40bf3850a017
    tokenValidityInMilliseconds: 1800000
    ignore:
      urls: /,/**/*.css,/**/*.js,/**/*.html,/**/*.map,/**/*.svg,/**/*.png,/**/*.ico,/console-fe/public/**,/api/v1/auth/login
Upload the configuration to Nacos:
# mkdir -p /seata/conf
# mkdir -p /seata/bin
# cd /seata/conf
# vim file.conf
## transaction log store, only used in seata-server
store {
  ## store mode: file, db, redis
  mode = "db"  ## default is file
  ## file store property
  file {
    ## store location dir
    dir = "sessionStore"
    # branch session size, if exceeded first try compress lockkey, still exceeded throws exceptions
    maxBranchSessionSize = 16384
    # global session size, if exceeded throws exceptions
    maxGlobalSessionSize = 512
    # file buffer size, if exceeded allocate new buffer
    fileWriteBufferCacheSize = 16384
    # when recover batch read size
    sessionReloadReadSize = 100
    # async, sync
    flushDiskMode = async
  }
  ## database store property
  db {
    datasource = "druid"
    ## mysql/oracle/postgresql/h2/oceanbase etc.
    dbType = "mysql"
    driverClassName = "com.mysql.jdbc.Driver"
    url = "jdbc:mysql://127.0.0.1:3306/seata?serverTimezone=GMT%2B8"
    ## keep the quotes; do not remove them
    user = "#{username}"
    password = "#{password}"
    minConn = 5
    maxConn = 30
    globalTable = "global_table"
    branchTable = "branch_table"
    lockTable = "lock_table"
    queryLimit = 100
    maxWait = 5000
  }
}
# vim registry.conf
# Registry block: change type from file to nacos so seata-server registers itself in Nacos
registry {
  type = "nacos"
  nacos {
    application = "seata-server"
    serverAddr = "127.0.0.1:8848"
    group = "SEATA_GROUP"
    namespace = "seata"
    cluster = "default"
    username = "#{username}"
    password = "#{password}"
  }
}
config {
  # file, nacos, apollo, zk, consul, etcd3
  type = "file"
  nacos {
    serverAddr = "127.0.0.1:8848"
    namespace = "seata"
    group = "SEATA_GROUP"
    cluster = "default"
    username = "#{username}"
    password = "#{password}"
  }
  file {
    name = "file.conf"
  }
}
# cd /seata
# vim config.txt
#For details about configuration items, see https://seata.io/zh-cn/docs/user/configurations.html
#Transport configuration, for client and server
transport.type=TCP
transport.server=NIO
transport.heartbeat=true
transport.enableTmClientBatchSendRequest=false
transport.enableRmClientBatchSendRequest=true
transport.enableTcServerBatchSendResponse=false
transport.rpcRmRequestTimeout=30000
transport.rpcTmRequestTimeout=30000
transport.rpcTcRequestTimeout=30000
transport.threadFactory.bossThreadPrefix=NettyBoss
transport.threadFactory.workerThreadPrefix=NettyServerNIOWorker
transport.threadFactory.serverExecutorThreadPrefix=NettyServerBizHandler
transport.threadFactory.shareBossWorker=false
transport.threadFactory.clientSelectorThreadPrefix=NettyClientSelector
transport.threadFactory.clientSelectorThreadSize=1
transport.threadFactory.clientWorkerThreadPrefix=NettyClientWorkerThread
transport.threadFactory.bossThreadSize=1
transport.threadFactory.workerThreadSize=default
transport.shutdown.wait=3
transport.serialization=seata
transport.compressor=none
#Transaction routing rules configuration, only for the client
service.vgroupMapping.default_tx_group=default
#If you use a registry, you can ignore it
service.default.grouplist=127.0.0.1:8091
service.enableDegrade=false
service.disableGlobalTransaction=false
#Transaction rule configuration, only for the client
client.rm.asyncCommitBufferLimit=10000
client.rm.lock.retryInterval=10
client.rm.lock.retryTimes=30
client.rm.lock.retryPolicyBranchRollbackOnConflict=true
client.rm.reportRetryCount=5
client.rm.tableMetaCheckEnable=true
client.rm.tableMetaCheckerInterval=60000
client.rm.sqlParserType=druid
client.rm.reportSuccessEnable=false
client.rm.sagaBranchRegisterEnable=false
client.rm.sagaJsonParser=fastjson
client.rm.tccActionInterceptorOrder=-2147482648
client.tm.commitRetryCount=5
client.tm.rollbackRetryCount=5
client.tm.defaultGlobalTransactionTimeout=60000
client.tm.degradeCheck=false
client.tm.degradeCheckAllowTimes=10
client.tm.degradeCheckPeriod=2000
client.tm.interceptorOrder=-2147482648
client.undo.dataValidation=true
client.undo.logSerialization=jackson
client.undo.onlyCareUpdateColumns=true
server.undo.logSaveDays=7
server.undo.logDeletePeriod=86400000
client.undo.logTable=undo_log
client.undo.compress.enable=true
client.undo.compress.type=zip
client.undo.compress.threshold=64k
#For TCC transaction mode
tcc.fence.logTableName=tcc_fence_log
tcc.fence.cleanPeriod=1h
#Log rule configuration, for client and server
log.exceptionRate=100
#Transaction storage configuration, only for the server. The file, db, and redis configuration values are optional.
store.mode=db
store.lock.mode=db
store.session.mode=db
#Used for password encryption
#If `store.mode,store.lock.mode,store.session.mode` are not equal to `file`, you can remove the configuration block.
store.file.dir=file_store/data
store.file.maxBranchSessionSize=16384
store.file.maxGlobalSessionSize=512
store.file.fileWriteBufferCacheSize=16384
store.file.flushDiskMode=async
store.file.sessionReloadReadSize=100
#These configurations are required if the `store mode` is `db`. If `store.mode,store.lock.mode,store.session.mode` are not equal to `db`, you can remove the configuration block.
store.db.datasource=druid
store.db.dbType=mysql
store.db.driverClassName=com.mysql.jdbc.Driver
store.db.url=jdbc:mysql://127.0.0.1:3306/seata?useUnicode=true&rewriteBatchedStatements=true
store.db.user=#{username}
store.db.password=#{password}
store.db.minConn=5
store.db.maxConn=30
store.db.globalTable=global_table
store.db.branchTable=branch_table
store.db.distributedLockTable=distributed_lock
store.db.queryLimit=100
store.db.lockTable=lock_table
store.db.maxWait=5000
#These configurations are required if the `store mode` is `redis`. If `store.mode,store.lock.mode,store.session.mode` are not equal to `redis`, you can remove the configuration block.
#Transaction rule configuration, only for the server
server.recovery.committingRetryPeriod=1000
server.recovery.asynCommittingRetryPeriod=1000
server.recovery.rollbackingRetryPeriod=1000
server.recovery.timeoutRetryPeriod=1000
server.maxCommitRetryTimeout=-1
server.maxRollbackRetryTimeout=-1
server.rollbackRetryTimeoutUnlockEnable=false
server.distributedLockExpireTime=10000
server.xaerNotaRetryTimeout=60000
server.session.branchAsyncQueueSize=5000
server.session.enableBranchAsyncRemove=false
server.enableParallelRequestHandle=false
#Metrics configuration, only for the server
metrics.enabled=false
metrics.registryType=compact
metrics.exporterList=prometheus
metrics.exporterPrometheusPort=9898
# cd /seata/bin/
# vim nacos-config.sh
#!/usr/bin/env bash

while getopts ":h:p:g:t:u:w:" opt
do
  case $opt in
  h)
    host=$OPTARG
    ;;
  p)
    port=$OPTARG
    ;;
  g)
    group=$OPTARG
    ;;
  t)
    tenant=$OPTARG
    ;;
  u)
    username=$OPTARG
    ;;
  w)
    password=$OPTARG
    ;;
  ?)
    echo " USAGE OPTION: $0 [-h host] [-p port] [-g group] [-t tenant] [-u username] [-w password] "
    exit 1
    ;;
  esac
done

if [[ -z ${host} ]]; then
  host=localhost
fi
if [[ -z ${port} ]]; then
  port=8848
fi
if [[ -z ${group} ]]; then
  group="SEATA_GROUP"
fi
if [[ -z ${tenant} ]]; then
  tenant=""
fi
if [[ -z ${username} ]]; then
  username=""
fi
if [[ -z ${password} ]]; then
  password=""
fi

nacosAddr=$host:$port
contentType="content-type:application/json;charset=UTF-8"

echo "set nacosAddr=$nacosAddr"
echo "set group=$group"

failCount=0
tempLog=$(mktemp -u)

function addConfig() {
  curl -X POST -H "${contentType}" "http://$nacosAddr/nacos/v1/cs/configs?dataId=$1&group=$group&content=$2&tenant=$tenant&username=$username&password=$password" >"${tempLog}" 2>/dev/null
  if [[ -z $(cat "${tempLog}") ]]; then
    echo " Please check the cluster status. "
    exit 1
  fi
  if [[ $(cat "${tempLog}") =~ "true" ]]; then
    echo "Set $1=$2 successfully "
  else
    echo "Set $1=$2 failure "
    (( failCount++ ))
  fi
}

count=0
for line in $(cat $(dirname "$PWD")/config.txt | sed s/[[:space:]]//g); do
  (( count++ ))
  key=${line%%=*}
  value=${line#*=}
  addConfig "${key}" "${value}"
done

echo "========================================================================="
echo " Complete initialization parameters, total-count:$count , failure-count:$failCount "
echo "========================================================================="

if [[ ${failCount} -eq 0 ]]; then
  echo " Init nacos config finished, please start seata-server. "
else
  echo " init nacos config fail. "
fi
Note: config.txt and nacos-config.sh must not sit in the same directory. The script reads config.txt from the parent directory of where it is run, so keeping both files together makes it fail.
# cd /seata/bin/
# sh nacos-config.sh -h 127.0.0.1 -p 8848 -g SEATA_GROUP -t #{namespace} -u #{username} -w #{password}
Refresh the Nacos configuration page; if the entries appear there, the upload succeeded.
-- Remove the old container
# docker stop seata
# docker rm seata
# docker run -it -d --name=seata --restart=always \
    -v /usr/local/docker/seata/seata-server/:/seata-server/ \
    -e SEATA_IP=124.223.23.120 -p 8091:8091 \
    -e SEATA_PORT=8091 seataio/seata-server:1.5.2
-- Check the container startup logs
# docker logs -f seata
3. Integrating Seata with Spring Boot
3.1 Import the Dependencies
<!-- version locked by dependency management -->
<dependency>
    <groupId>com.alibaba.cloud</groupId>
    <artifactId>spring-cloud-starter-alibaba-seata</artifactId>
</dependency>
<!-- keep this version in sync with the installed seata-server -->
<dependency>
    <groupId>io.seata</groupId>
    <artifactId>seata-spring-boot-starter</artifactId>
    <version>1.5.2</version>
    <exclusions>
        <exclusion>
            <artifactId>asm</artifactId>
            <groupId>org.ow2.asm</groupId>
        </exclusion>
    </exclusions>
</dependency>
3.2 Configuration (YAML)
seata:
  enabled: true
  enable-auto-data-source-proxy: false
  tx-service-group: dekun_my_gyl_group
  service:
    disable-global-transaction: false
    vgroup-mapping:
      dekun_my_gyl_group: default
    # grouplist:
    #   default: ip:8091
  registry:
    type: nacos
    nacos:
      application: seata-server
      server-addr: 124.223.23.120:8848
      namespace: seata
      group: SEATA_GROUP
      username: nacos
      password: nacos
      # optional; defaults to 'default'
      cluster: default
  config:
    nacos:
      server-addr: 124.223.23.120:8848
      namespace: seata
      group: SEATA_GROUP
      username: nacos
      password: nacos
  # client:
  #   rm:
  #     report-success-enable: false
3.3 Usage
Add @GlobalTransactional to any method that needs distributed transaction control across microservices.
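For example, a hypothetical order service that writes the order locally and then calls the storage and account services remotely. All type names below (OrderMapper, StorageClient, AccountClient) are illustrative, not from this project:

```java
import io.seata.spring.annotation.GlobalTransactional;
import org.springframework.stereotype.Service;

// Hypothetical service: the mapper and Feign clients are illustrative names.
@Service
public class OrderService {

    private final OrderMapper orderMapper;      // local order database
    private final StorageClient storageClient;  // remote storage service
    private final AccountClient accountClient;  // remote account service

    public OrderService(OrderMapper orderMapper,
                        StorageClient storageClient,
                        AccountClient accountClient) {
        this.orderMapper = orderMapper;
        this.storageClient = storageClient;
        this.accountClient = accountClient;
    }

    // Opens a global transaction: the XID is propagated to the remote
    // services, and any exception here rolls back all three branches.
    @GlobalTransactional(name = "create-order", rollbackFor = Exception.class)
    public void createOrder(Long userId, Long productId, int count, long money) {
        orderMapper.insert(userId, productId, count, money); // branch 1: order DB
        storageClient.deduct(productId, count);              // branch 2: storage DB
        accountClient.debit(userId, money);                  // branch 3: account DB
    }
}
```

Only the entry method of the business flow needs the annotation; the downstream services join the global transaction automatically through the propagated XID.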
3.4 Troubleshooting Notes
3.4.1 feignHystrixBuilder
Error log:
Description:
The bean 'feignHystrixBuilder', defined in class path resource [org/springframework/cloud/sleuth/instrument/web/client/feign/TraceFeignClientAutoConfiguration.class], could not be registered. A bean with that name has already been defined in class path resource [com/alibaba/cloud/seata/feign/SeataFeignClientAutoConfiguration.class] and overriding is disabled.
Action:
Consider renaming one of the beans or enabling overriding by setting spring.main.allow-bean-definition-overriding=true
Cause: a conflict between Sleuth and Seata.
Fix: add the following to the yml configuration:
spring:
  sleuth:
    feign:
      enabled: false
3.4.2 globalTransactionScanner
Error log:
Error creating bean with name 'globalTransactionScanner' defined in class path resource [io/seata/spring/boot/autoconfigure/SeataAutoConfiguration.class]: Bean instantiation via factory method failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [io.seata.spring.annotation.GlobalTransactionScanner]: Factory method 'globalTransactionScanner' threw exception; nested exception is java.lang.ExceptionInInitializerError
Cause: JDK version mismatch.
Fix: this article uses JDK 8; after switching to it, startup succeeded. On JDK versions above 8, add the JVM startup option:
--add-opens java.base/java.lang=ALL-UNNAMED