Using Nacos as Seata's Registry and Configuration Center
1. Start seata-server
- Download seata-server (use the prebuilt release package)
  Seata download page -
- Edit conf/registry.conf under the seata-server directory
registry {
  # file, nacos, eureka, redis, zk, consul, etcd3, sofa
  type = "nacos"                    # use Nacos as the registry

  nacos {
    application = "seata-server"    # service name Seata registers under
    serverAddr = "127.0.0.1:8848"   # Nacos server address
    group = "SEATA_GROUP"           # Nacos service group
    namespace = ""                  # Nacos namespace; empty means the default "public"
    cluster = "default"
    username = "nacos"              # only needed when Nacos auth is enabled
    password = "nacos"              # only needed when Nacos auth is enabled
  }
  eureka {
    serviceUrl = "http://localhost:8761/eureka"
    application = "default"
    weight = "1"
  }
  redis {
    serverAddr = "localhost:6379"
    db = 0
    password = ""
    cluster = "default"
    timeout = 0
  }
  zk {
    cluster = "default"
    serverAddr = "127.0.0.1:2181"
    sessionTimeout = 6000
    connectTimeout = 2000
    username = ""
    password = ""
  }
  consul {
    cluster = "default"
    serverAddr = "127.0.0.1:8500"
    aclToken = ""
  }
  etcd3 {
    cluster = "default"
    serverAddr = "http://localhost:2379"
  }
  sofa {
    serverAddr = "127.0.0.1:9603"
    application = "default"
    region = "DEFAULT_ZONE"
    datacenter = "DefaultDataCenter"
    cluster = "default"
    group = "SEATA_GROUP"
    addressWaitTime = "3000"
  }
  file {
    name = "file.conf"
  }
}
config {
  # file, nacos, apollo, zk, consul, etcd3
  type = "nacos"                       # use Nacos as the configuration center

  nacos {
    serverAddr = "127.0.0.1:8848"
    namespace = ""
    group = "SEATA_GROUP"
    username = "nacos"
    password = "nacos"
    dataId = "seataServer.properties"  # Nacos config to load; created in the Nacos console in step 2
  }
  consul {
    serverAddr = "127.0.0.1:8500"
    aclToken = ""
  }
  apollo {
    appId = "seata-server"
    ## apolloConfigService will cover apolloMeta
    apolloMeta = "http://192.168.1.204:8801"
    apolloConfigService = "http://192.168.1.204:8080"
    namespace = "application"
    apolloAccesskeySecret = ""
    cluster = "seata"
  }
  zk {
    serverAddr = "127.0.0.1:2181"
    sessionTimeout = 6000
    connectTimeout = 2000
    username = ""
    password = ""
    nodePath = "/seata/seata.properties"
  }
  etcd3 {
    serverAddr = "http://localhost:2379"
  }
  file {
    name = "file.conf"
  }
}
- Start seata-server
  Run the .bat (Windows) or .sh script under seata-server-1.4.2\bin directly; on 1.4.x the script also accepts overrides such as -h (host), -p (port), and -m (store mode). Once seata-server appears in the Nacos service list, the server has started successfully.
2. Seata configuration
- In the Nacos console, create a configuration with data-id seataServer.properties, matching the dataId set in seata-server's conf above.
  Configuration content (you mainly need to change the commented lines):
transport.type=TCP
transport.server=NIO
transport.heartbeat=true
transport.enableClientBatchSendRequest=false
transport.threadFactory.bossThreadPrefix=NettyBoss
transport.threadFactory.workerThreadPrefix=NettyServerNIOWorker
transport.threadFactory.serverExecutorThreadPrefix=NettyServerBizHandler
transport.threadFactory.shareBossWorker=false
transport.threadFactory.clientSelectorThreadPrefix=NettyClientSelector
transport.threadFactory.clientSelectorThreadSize=1
transport.threadFactory.clientWorkerThreadPrefix=NettyClientWorkerThread
transport.threadFactory.bossThreadSize=1
transport.threadFactory.workerThreadSize=default
transport.shutdown.wait=3
# must match the transaction group used by the client services
service.vgroupMapping.my_test_tx_group=default
service.default.grouplist=127.0.0.1:8091
service.enableDegrade=false
service.disableGlobalTransaction=false
client.rm.asyncCommitBufferLimit=10000
client.rm.lock.retryInterval=10
client.rm.lock.retryTimes=30
client.rm.lock.retryPolicyBranchRollbackOnConflict=true
client.rm.reportRetryCount=5
client.rm.tableMetaCheckEnable=false
client.rm.tableMetaCheckerInterval=60000
client.rm.sqlParserType=druid
client.rm.reportSuccessEnable=false
client.rm.sagaBranchRegisterEnable=false
client.tm.commitRetryCount=5
client.tm.rollbackRetryCount=5
client.tm.defaultGlobalTransactionTimeout=60000
client.tm.degradeCheck=false
client.tm.degradeCheckAllowTimes=10
client.tm.degradeCheckPeriod=2000
# switch the session store to db mode
store.mode=db
store.publicKey=
store.file.dir=file_store/data
store.file.maxBranchSessionSize=16384
store.file.maxGlobalSessionSize=512
store.file.fileWriteBufferCacheSize=16384
store.file.flushDiskMode=async
store.file.sessionReloadReadSize=100
# database connection pool type
store.db.datasource=druid
# database type
store.db.dbType=mysql
store.db.driverClassName=com.mysql.jdbc.Driver
# your own database connection
store.db.url=jdbc:mysql://127.0.0.1:3306/seata?useUnicode=true&rewriteBatchedStatements=true
# username
store.db.user=root
# password
store.db.password=root
store.db.minConn=5
store.db.maxConn=30
store.db.globalTable=global_table
store.db.branchTable=branch_table
store.db.queryLimit=100
store.db.lockTable=lock_table
store.db.maxWait=5000
store.redis.mode=single
store.redis.single.host=127.0.0.1
store.redis.single.port=6379
store.redis.sentinel.masterName=
store.redis.sentinel.sentinelHosts=
store.redis.maxConn=10
store.redis.minConn=1
store.redis.maxTotal=100
store.redis.database=0
store.redis.password=
store.redis.queryLimit=100
server.recovery.committingRetryPeriod=1000
server.recovery.asynCommittingRetryPeriod=1000
server.recovery.rollbackingRetryPeriod=1000
server.recovery.timeoutRetryPeriod=1000
server.maxCommitRetryTimeout=-1
server.maxRollbackRetryTimeout=-1
server.rollbackRetryTimeoutUnlockEnable=false
client.undo.dataValidation=true
client.undo.logSerialization=jackson
client.undo.onlyCareUpdateColumns=true
server.undo.logSaveDays=7
server.undo.logDeletePeriod=86400000
client.undo.logTable=undo_log
client.undo.compress.enable=true
client.undo.compress.type=zip
client.undo.compress.threshold=64k
log.exceptionRate=100
transport.serialization=seata
transport.compressor=none
metrics.enabled=false
metrics.registryType=compact
metrics.exporterList=prometheus
metrics.exporterPrometheusPort=9898
Note: after adding or changing this configuration, restart seata-server for it to take effect.
3. Create the MySQL tables Seata needs
Import the Seata table scripts into your databases. Because Seata's undo log shares the local transaction with your business SQL, the client-side table should live in the same database as your business tables. (In my own testing, a transaction could still roll back without adding the Seata client tables when the rollback records were not needed, but sticking to the official scripts is the safe path.)
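For reference, the AT-mode client script that ships with Seata 1.4.x (script/client/at/db/mysql.sql in the source distribution; verify against your release) looks roughly like the sketch below. In db store mode the server additionally needs global_table, branch_table, and lock_table from script/server/db/mysql.sql.

```sql
-- AT-mode rollback log; create this in EVERY business database
CREATE TABLE IF NOT EXISTS `undo_log`
(
    `branch_id`     BIGINT       NOT NULL COMMENT 'branch transaction id',
    `xid`           VARCHAR(128) NOT NULL COMMENT 'global transaction id',
    `context`       VARCHAR(128) NOT NULL COMMENT 'undo_log context, such as serialization',
    `rollback_info` LONGBLOB     NOT NULL COMMENT 'rollback info',
    `log_status`    INT          NOT NULL COMMENT '0: normal status, 1: defense status',
    `log_created`   DATETIME(6)  NOT NULL COMMENT 'create datetime',
    `log_modified`  DATETIME(6)  NOT NULL COMMENT 'modify datetime',
    UNIQUE KEY `ux_undo_log` (`xid`, `branch_id`)
) ENGINE = InnoDB COMMENT = 'AT transaction mode undo table';
```

The unique key on (xid, branch_id) is what lets the resource manager find and delete the undo records for a branch on commit or rollback.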
4. Client service configuration
Every service participating in the distributed transaction needs the following.
- Maven dependencies

<!-- Seata distributed transactions -->
<dependency>
    <groupId>com.alibaba.cloud</groupId>
    <artifactId>spring-cloud-starter-alibaba-seata</artifactId>
    <exclusions>
        <!-- exclude the transitive starter so the Seata version can be pinned -->
        <exclusion>
            <groupId>io.seata</groupId>
            <artifactId>seata-spring-boot-starter</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<!-- re-import the starter at the version matching seata-server (1.4.2) -->
<dependency>
    <groupId>io.seata</groupId>
    <artifactId>seata-spring-boot-starter</artifactId>
    <version>1.4.2</version>
</dependency>
- Application properties

# ** Seata configuration **
seata.enabled=true
seata.application-id=seata-server
# client and server must share the same transaction group
seata.tx-service-group=my_test_tx_group
# automatic data-source proxying for AT mode
seata.enable-auto-data-source-proxy=true
seata.service.vgroup-mapping.my_test_tx_group=default

# ** pull Seata config from Nacos **
seata.config.type=nacos
# set the namespace explicitly to avoid noisy logs
seata.config.nacos.namespace=
seata.config.nacos.data-id=seataServer.properties
seata.config.nacos.server-addr=localhost:8848
seata.config.nacos.group=SEATA_GROUP
seata.config.nacos.username=nacos
seata.config.nacos.password=nacos

# ** register the Seata client with Nacos **
seata.registry.type=nacos
# set the namespace explicitly to avoid noisy logs
seata.registry.nacos.namespace=
# seata-server's service name in Nacos
seata.registry.nacos.application=seata-server
seata.registry.nacos.group=SEATA_GROUP
seata.registry.nacos.server-addr=localhost:8848
seata.registry.nacos.cluster=default
seata.registry.nacos.username=nacos
seata.registry.nacos.password=nacos
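If a service uses application.yml instead of .properties, the same settings map to the YAML below. This is a sketch assuming the property names of seata-spring-boot-starter 1.4.2; the values mirror this walkthrough, so adjust hosts and credentials to your environment.

```yaml
seata:
  enabled: true
  application-id: seata-server
  tx-service-group: my_test_tx_group      # must match the server's vgroupMapping key
  enable-auto-data-source-proxy: true     # AT-mode data-source proxy
  service:
    vgroup-mapping:
      my_test_tx_group: default
  config:
    type: nacos
    nacos:
      namespace: ""
      data-id: seataServer.properties
      server-addr: localhost:8848
      group: SEATA_GROUP
      username: nacos
      password: nacos
  registry:
    type: nacos
    nacos:
      namespace: ""
      application: seata-server           # seata-server's service name in Nacos
      server-addr: localhost:8848
      group: SEATA_GROUP
      cluster: default
      username: nacos
      password: nacos
```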
5. The @GlobalTransactional global transaction
Annotate @GlobalTransactional on the business method that needs a distributed transaction:
@GetMapping("/pay")
@GlobalTransactional(name = "seata-server")
@Transactional
public Result userPay() {
    SysUser sysUser = new SysUser();
    sysUser.setUserName("123");
    sysUserService.save(sysUser);
    // fetch the global transaction id (XID)
    String xid = GlobalTransactionContext.getCurrentOrCreate().getXid();
    log.info("xid:{}", xid);
    // remote call via OpenFeign joins the same global transaction
    testService.test();
    // simulated failure: the save above and the remote call's changes are rolled back
    int i = 1 / 0;
    // user payment
    return Result.success("ok");
}
The default AT mode does not intrude on business code. TCC and XA modes require adapting your code to the pattern and are therefore much more invasive, but they give finer control; TCC in particular can achieve higher performance.
6. Caveats
Do not catch exceptions inside a distributed transaction, or it will not roll back: Seata's transaction manager only triggers rollback when the exception propagates out of the annotated method. For the same reason, avoid relying on a Spring Boot global exception handler here; it can likewise leave the global transaction uncommitted to rollback.
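The pitfall above can be illustrated without Seata at all. In the plain-Java sketch below (all names are hypothetical), txInterceptor stands in for the @GlobalTransactional aspect: it commits on a normal return and rolls back only when an exception escapes the business method. Swallowing the exception inside the method makes the "interceptor" see a clean return, so nothing is rolled back.

```java
// Plain-Java illustration of why swallowed exceptions prevent rollback.
// "txInterceptor" stands in for Seata's @GlobalTransactional aspect.
public class RollbackDemo {

    @FunctionalInterface
    interface Business { void run() throws Exception; }

    /** Mimics the TM interceptor: commit on normal return, roll back on exception. */
    static String txInterceptor(Business business) {
        try {
            business.run();
            return "commit";      // no exception escaped -> TM commits
        } catch (Exception e) {
            return "rollback";    // exception escaped -> TM rolls back
        }
    }

    public static void main(String[] args) {
        // WRONG: the business method swallows the failure itself
        String swallowed = txInterceptor(() -> {
            try {
                throw new IllegalStateException("payment failed");
            } catch (IllegalStateException e) {
                // logged and swallowed -> the interceptor never sees it
            }
        });

        // RIGHT: let the exception propagate to the interceptor
        String propagated = txInterceptor(() -> {
            throw new IllegalStateException("payment failed");
        });

        System.out.println("swallowed -> " + swallowed);   // commit (no rollback!)
        System.out.println("propagated -> " + propagated); // rollback
    }
}
```

With real Seata, the fix is the same: rethrow (or never catch) the business exception, or roll back explicitly via the API of your Seata version (e.g. reloading the transaction by XID and calling rollback) before returning a normal response.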