1. Introduction
As distributed microservice architectures mature, they lower coupling between business modules and improve system availability, which makes distributed transactions correspondingly important. Since this project already uses Dubbo as its distributed service framework, Seata (also from the Alibaba ecosystem) was chosen as the distributed transaction solution. In practice Seata has proven quite lightweight, but systematic material on this particular integration is scarce online, so this article walks through it step by step.
If you use Nacos as the registry instead, see: a distributed transaction solution for a microservice architecture built on Spring Boot + Nacos + Seata + Dubbo.
2. Zookeeper
ZooKeeper website: https://zookeeper.apache.org/
1) What is ZooKeeper?
From the official site:
ZooKeeper is a centralized service for maintaining configuration information, naming, providing distributed synchronization, and providing group services. All of these kinds of services are used in some form or another by distributed applications. Each time they are implemented there is a lot of work that goes into fixing the bugs and race conditions that are inevitable. Because of the difficulty of implementing these kinds of services, applications initially usually skimp on them, which make them brittle in the presence of change and difficult to manage. Even when done correctly, different implementations of these services lead to management complexity when the applications are deployed.
2) ZooKeeper server deployment
The deployment steps are the same on Windows and Linux; see this earlier post: Spring Boot + Dubbo (ZooKeeper) + Nginx front/back-end separation, Linux server environment setup.
3. Seata
Seata website: https://seata.io/zh-cn/docs/overview/what-is-seata.html
1) What is Seata?
Seata is an open-source distributed transaction solution that aims to provide high-performance, easy-to-use distributed transaction services. It offers the AT, TCC, SAGA, and XA transaction modes, giving users a one-stop distributed transaction solution.
2) Seata server deployment
First download the Seata release: https://seata.io/zh-cn/blog/download.html
1. The latest version at the time of writing is 1.2.0; download and unzip it. Both the source and binary packages are needed.
2. The source download gives seata-1.2.0.zip; unzip it.
3. Go to \seata-1.2.0\script\server\db and execute the SQL script matching your database.
4. Go to \seata-1.2.0\script\config-center and edit config.txt.
Change the storage mode to database mode (store.mode=db) and set the database credentials (store.db.user=xxx, store.db.password=xxx):
transport.type=TCP
transport.server=NIO
transport.heartbeat=true
transport.enableClientBatchSendRequest=false
transport.threadFactory.bossThreadPrefix=NettyBoss
transport.threadFactory.workerThreadPrefix=NettyServerNIOWorker
transport.threadFactory.serverExecutorThreadPrefix=NettyServerBizHandler
transport.threadFactory.shareBossWorker=false
transport.threadFactory.clientSelectorThreadPrefix=NettyClientSelector
transport.threadFactory.clientSelectorThreadSize=1
transport.threadFactory.clientWorkerThreadPrefix=NettyClientWorkerThread
transport.threadFactory.bossThreadSize=1
transport.threadFactory.workerThreadSize=default
transport.shutdown.wait=3
service.vgroupMapping.my_test_tx_group=default
service.default.grouplist=127.0.0.1:8091
service.enableDegrade=false
service.disableGlobalTransaction=false
client.rm.asyncCommitBufferLimit=10000
client.rm.lock.retryInterval=10
client.rm.lock.retryTimes=30
client.rm.lock.retryPolicyBranchRollbackOnConflict=true
client.rm.reportRetryCount=5
client.rm.tableMetaCheckEnable=false
client.rm.sqlParserType=druid
client.rm.reportSuccessEnable=false
client.rm.sagaBranchRegisterEnable=false
client.tm.commitRetryCount=5
client.tm.rollbackRetryCount=5
store.mode=db
store.file.dir=file_store/data
store.file.maxBranchSessionSize=16384
store.file.maxGlobalSessionSize=512
store.file.fileWriteBufferCacheSize=16384
store.file.flushDiskMode=async
store.file.sessionReloadReadSize=100
store.db.datasource=druid
store.db.dbType=mysql
store.db.driverClassName=com.mysql.jdbc.Driver
store.db.url=jdbc:mysql://127.0.0.1:3306/seata?useUnicode=true
store.db.user=root
store.db.password=yuyi
store.db.minConn=5
store.db.maxConn=30
store.db.globalTable=global_table
store.db.branchTable=branch_table
store.db.queryLimit=100
store.db.lockTable=lock_table
store.db.maxWait=5000
server.recovery.committingRetryPeriod=1000
server.recovery.asynCommittingRetryPeriod=1000
server.recovery.rollbackingRetryPeriod=1000
server.recovery.timeoutRetryPeriod=1000
server.maxCommitRetryTimeout=-1
server.maxRollbackRetryTimeout=-1
server.rollbackRetryTimeoutUnlockEnable=false
client.undo.dataValidation=true
client.undo.logSerialization=jackson
server.undo.logSaveDays=7
server.undo.logDeletePeriod=86400000
client.undo.logTable=undo_log
client.log.exceptionRate=100
transport.serialization=seata
transport.compressor=none
metrics.enabled=false
metrics.registryType=compact
metrics.exporterList=prometheus
metrics.exporterPrometheusPort=9898
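A mistyped store section is a common cause of server startup failures, so it can be worth sanity-checking config.txt before importing it. Below is a minimal, hypothetical helper (not part of Seata) that verifies db mode has the datasource keys it needs; the class and method names are my own.

```java
import java.util.Properties;

// Hypothetical helper: verify that a loaded config.txt is internally
// consistent before pushing it to the config center. Not part of Seata.
public class SeataConfigCheck {

    static void check(Properties p) {
        String mode = p.getProperty("store.mode", "file");
        if ("db".equals(mode)) {
            // db mode needs a usable datasource; fail fast on missing keys
            for (String key : new String[]{"store.db.url", "store.db.user", "store.db.password"}) {
                String value = p.getProperty(key);
                if (value == null || value.isEmpty()) {
                    throw new IllegalStateException("store.mode=db but " + key + " is missing");
                }
            }
        }
        System.out.println("store.mode=" + mode + " looks consistent");
    }

    public static void main(String[] args) {
        // In-memory demo mirroring the values set in config.txt above
        Properties p = new Properties();
        p.setProperty("store.mode", "db");
        p.setProperty("store.db.url", "jdbc:mysql://127.0.0.1:3306/seata?useUnicode=true");
        p.setProperty("store.db.user", "root");
        p.setProperty("store.db.password", "yuyi");
        check(p);
    }
}
```

In practice you would load the real config.txt with `Properties.load` and pass it to `check` before running the import step described next.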
5. Go to \seata-1.2.0\script\config-center\zk. There is no script for Windows that pushes the Seata configuration into ZooKeeper, so we write a small program to import it ourselves.
6. Go to \seata-1.2.0\script\config-center, copy config.txt into the project's resources directory, and rename it seata-config.properties.
7. Create a SeataRegisterZookeeper.java class whose main method registers the Seata configuration in ZooKeeper.
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;
import java.util.Set;

import lombok.extern.slf4j.Slf4j;
import org.I0Itec.zkclient.ZkClient;
import org.apache.zookeeper.CreateMode;
import org.springframework.util.ResourceUtils;

/**
 * <p>
 * Registers the Seata configuration file into ZooKeeper.
 * </p>
 *
 * @author yui (1060771195@qq.com)
 */
@Slf4j
public class SeataRegisterZookeeper {

    private static volatile ZkClient zkClient;

    public static void main(String[] args) {
        if (zkClient == null) {
            zkClient = new ZkClient("127.0.0.1:2181", 6000, 2000);
        }
        // Seata's ZookeeperConfiguration reads config from the /seata root node
        if (!zkClient.exists("/seata")) {
            zkClient.createPersistent("/seata", true);
        }
        // Load seata-config.properties from the classpath
        Properties properties = new Properties();
        try (InputStream in = new FileInputStream(
                ResourceUtils.getFile("classpath:seata-config.properties"))) {
            properties.load(in);
            // Write each key/value pair as a child node of /seata
            Set<Object> keys = properties.keySet();
            for (Object key : keys) {
                boolean ok = putConfig(key.toString(), properties.get(key).toString());
                log.info("{}={} result={}", key, properties.get(key), ok);
            }
        } catch (IOException e) {
            log.error("failed to load seata-config.properties", e);
        }
    }

    public static boolean putConfig(final String dataId, final String content) {
        String path = "/seata/" + dataId;
        if (!zkClient.exists(path)) {
            zkClient.create(path, content, CreateMode.PERSISTENT);
        } else {
            zkClient.writeData(path, content);
        }
        return true;
    }
}
This follows the approach of the ZookeeperConfiguration class in the seata-all.jar source.
8. The binary download gives seata-server-1.2.0.zip; unzip it.
9. Go to the conf directory and edit file.conf and registry.conf.
In file.conf, change mode to "db" and set the database account and password:
## transaction log store, only used in seata-server
store {
## store mode: file、db
mode = "db"
## file store property
file {
## store location dir
dir = "sessionStore"
# branch session size , if exceeded first try compress lockkey, still exceeded throws exceptions
maxBranchSessionSize = 16384
# globe session size , if exceeded throws exceptions
maxGlobalSessionSize = 512
# file buffer size , if exceeded allocate new buffer
fileWriteBufferCacheSize = 16384
# when recover batch read size
sessionReloadReadSize = 100
# async, sync
flushDiskMode = async
}
## database store property
db {
## the implement of javax.sql.DataSource, such as DruidDataSource(druid)/BasicDataSource(dbcp) etc.
datasource = "druid"
## mysql/oracle/postgresql/h2/oceanbase etc.
dbType = "mysql"
driverClassName = "com.mysql.jdbc.Driver"
url = "jdbc:mysql://127.0.0.1:3306/seata"
user = "root"
password = "yuyi"
minConn = 5
maxConn = 30
globalTable = "global_table"
branchTable = "branch_table"
lockTable = "lock_table"
queryLimit = 100
maxWait = 5000
}
}
In registry.conf, change type to "zk" (in both the registry and config sections):
registry {
# file 、nacos 、eureka、redis、zk、consul、etcd3、sofa
type = "zk"
nacos {
application = "seata-server"
serverAddr = "localhost"
namespace = ""
cluster = "default"
username = ""
password = ""
}
eureka {
serviceUrl = "http://localhost:8761/eureka"
application = "default"
weight = "1"
}
redis {
serverAddr = "localhost:6379"
db = 0
password = ""
cluster = "default"
timeout = 0
}
zk {
cluster = "default"
serverAddr = "127.0.0.1:2181"
sessionTimeout = 6000
connectTimeout = 2000
username = ""
password = ""
}
consul {
cluster = "default"
serverAddr = "127.0.0.1:8500"
}
etcd3 {
cluster = "default"
serverAddr = "http://localhost:2379"
}
sofa {
serverAddr = "127.0.0.1:9603"
application = "default"
region = "DEFAULT_ZONE"
datacenter = "DefaultDataCenter"
cluster = "default"
group = "SEATA_GROUP"
addressWaitTime = "3000"
}
file {
name = "file.conf"
}
}
config {
# file、nacos 、apollo、zk、consul、etcd3
type = "zk"
nacos {
serverAddr = "localhost:8848"
namespace = ""
group = "SEATA_GROUP"
username = ""
password = ""
}
consul {
serverAddr = "127.0.0.1:8500"
}
apollo {
appId = "seata-server"
apolloMeta = "http://192.168.1.204:8801"
namespace = "application"
}
zk {
serverAddr = "127.0.0.1:2181"
sessionTimeout = 6000
connectTimeout = 2000
username = ""
password = ""
}
etcd3 {
serverAddr = "http://localhost:2379"
}
file {
name = "file.conf"
}
}
10. Go to the bin directory and double-click seata-server.bat to start the Seata server.
4. Integration
1. Add the Seata, Dubbo, and ZooKeeper dependencies to the Maven pom:
<!-- 分布式事务-->
<dependency>
<groupId>io.seata</groupId>
<artifactId>seata-spring-boot-starter</artifactId>
</dependency>
<!-- dubbo -->
<dependency>
<groupId>org.apache.dubbo</groupId>
<artifactId>dubbo-spring-boot-starter</artifactId>
</dependency>
<dependency>
<groupId>org.apache.dubbo</groupId>
<artifactId>dubbo</artifactId>
</dependency>
<!-- zookeeper -->
<dependency>
<groupId>com.101tec</groupId>
<artifactId>zkclient</artifactId>
</dependency>
<dependency>
<groupId>org.apache.dubbo</groupId>
<artifactId>dubbo-dependencies-zookeeper</artifactId>
<type>pom</type>
</dependency>
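The snippet above omits version numbers, which are presumably managed in a parent pom or a dependencyManagement section. If your build does not manage them, a sketch like the following could pin them; the Seata version matches the 1.2.0 server used above, while the Dubbo and zkclient versions are assumptions to verify against your project.

```xml
<!-- Assumed versions: seata matches the 1.2.0 server above; the Dubbo
     and zkclient versions are illustrative and should be verified. -->
<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>io.seata</groupId>
            <artifactId>seata-spring-boot-starter</artifactId>
            <version>1.2.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.dubbo</groupId>
            <artifactId>dubbo-spring-boot-starter</artifactId>
            <version>2.7.7</version>
        </dependency>
        <dependency>
            <groupId>com.101tec</groupId>
            <artifactId>zkclient</artifactId>
            <version>0.11</version>
        </dependency>
    </dependencies>
</dependencyManagement>
```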
2. In application.yml, configure ZooKeeper as the Dubbo registry:
dubbo:
application:
name: ${spring.application.name}
protocol:
name: dubbo
port: 20801
registry:
address: zookeeper://127.0.0.1:2181
file: target/dubbo-registry/dubbo-registry.properties
config-center:
address: ${dubbo.registry.address}
metadata-report:
address: ${dubbo.registry.address}
scan:
basePackages: yui
3. Go to \seata-1.2.0\script\client\spring and copy the contents of its application.yml into the project's application.yml:
seata:
enabled: true
application-id: ${spring.application.name}
tx-service-group: my_test_tx_group
enable-auto-data-source-proxy: true
use-jdk-proxy: false
client:
rm:
async-commit-buffer-limit: 1000
report-retry-count: 5
table-meta-check-enable: false
report-success-enable: false
lock:
retry-interval: 10
retry-times: 30
retry-policy-branch-rollback-on-conflict: true
tm:
commit-retry-count: 5
rollback-retry-count: 5
undo:
data-validation: true
log-serialization: jackson
log-table: undo_log
log:
exceptionRate: 100
service:
vgroup-mapping:
my_test_tx_group: default
grouplist:
default: 127.0.0.1:8091
enable-degrade: false
disable-global-transaction: false
transport:
shutdown:
wait: 3
thread-factory:
boss-thread-prefix: NettyBoss
worker-thread-prefix: NettyServerNIOWorker
server-executor-thread-prefix: NettyServerBizHandler
share-boss-worker: false
client-selector-thread-prefix: NettyClientSelector
client-selector-thread-size: 1
client-worker-thread-prefix: NettyClientWorkerThread
worker-thread-size: default
boss-thread-size: 1
type: TCP
server: NIO
heartbeat: true
serialization: seata
compressor: none
enable-client-batch-send-request: true
config:
type: zk
nacos:
namespace:
server-addr: localhost:8848
group: SEATA_GROUP
zk:
server-addr: 127.0.0.1:2181
session-timeout: 6000
connect-timeout: 2000
registry:
type: zk
nacos:
server-addr: localhost:8848
namespace:
zk:
server-addr: 127.0.0.1:2181
session-timeout: 6000
connect-timeout: 2000
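Two of the keys above work as a pair: the client first resolves its tx-service-group (my_test_tx_group) to a cluster name via vgroup-mapping, then resolves that cluster to TC server addresses. A minimal sketch of that two-step lookup, with hypothetical class and method names (not Seata's actual implementation):

```java
import java.util.Map;

// Hypothetical sketch of how a Seata client resolves its transaction
// group to a TC (transaction coordinator) address. Not Seata's real code.
public class TxGroupResolver {

    // config holds flattened keys, as in config.txt
    static String resolve(Map<String, String> config, String txServiceGroup) {
        // step 1: tx-service-group -> cluster name (my_test_tx_group -> default)
        String cluster = config.get("service.vgroupMapping." + txServiceGroup);
        if (cluster == null) {
            throw new IllegalStateException("no vgroupMapping for " + txServiceGroup);
        }
        // step 2: cluster name -> grouplist address (direct/file mode)
        return config.get("service." + cluster + ".grouplist");
    }

    public static void main(String[] args) {
        Map<String, String> config = Map.of(
                "service.vgroupMapping.my_test_tx_group", "default",
                "service.default.grouplist", "127.0.0.1:8091");
        System.out.println(resolve(config, "my_test_tx_group")); // 127.0.0.1:8091
    }
}
```

Note that with `registry.type: zk`, step 2 normally goes through ZooKeeper (the cluster name is looked up in the registry); the grouplist entry only applies in direct/file mode.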
4. Starting the service fails with the following error:
org.I0Itec.zkclient.exception.ZkTimeoutException: Unable to connect to zookeeper server '127.0.0.1:2181' with timeout of 0 ms
There is something odd here (whether a mistake in my configuration or a gap in Seata's ZooKeeper support, I am not sure): the Seata ZooKeeper settings in application.yml did not take effect, so when the ZkClient was initialized both sessionTimeout and connectTimeout were 0, producing the error above.
The workaround is to copy registry.conf from the Seata server's conf directory into the project's resources directory. After that, the service starts without the error.
registry {
# file 、nacos 、eureka、redis、zk、consul、etcd3、sofa
type = "zk"
nacos {
serverAddr = "localhost:8848"
namespace = ""
cluster = "default"
}
eureka {
serviceUrl = "http://localhost:8761/eureka"
application = "default"
weight = "1"
}
redis {
serverAddr = "localhost:6379"
db = "0"
}
zk {
cluster = "default"
serverAddr = "127.0.0.1:2181"
session.timeout = 6000
connect.timeout = 2000
}
consul {
cluster = "default"
serverAddr = "127.0.0.1:8500"
}
etcd3 {
cluster = "default"
serverAddr = "http://localhost:2379"
}
sofa {
serverAddr = "127.0.0.1:9603"
application = "default"
region = "DEFAULT_ZONE"
datacenter = "DefaultDataCenter"
cluster = "default"
group = "SEATA_GROUP"
addressWaitTime = "3000"
}
file {
name = "file.conf"
}
}
config {
# file、nacos 、apollo、zk、consul、etcd3
type = "zk"
nacos {
serverAddr = "localhost:8848"
namespace = ""
group = "SEATA_GROUP"
}
consul {
serverAddr = "127.0.0.1:8500"
}
apollo {
app.id = "seata-server"
apollo.meta = "http://192.168.1.204:8801"
namespace = "application"
}
zk {
serverAddr = "127.0.0.1:2181"
session.timeout = 6000
connect.timeout = 2000
}
etcd3 {
serverAddr = "http://localhost:2379"
}
file {
name = "file.conf"
}
}
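The timeout-of-0 symptom above suggests the zk timeout keys resolved to nothing and defaulted to 0. A defensive pattern when wiring up such a client yourself is to fall back to explicit defaults before constructing it. This is a sketch of that pattern only, not Seata's actual resolution logic, and the key names mirror the registry.conf entries above.

```java
import java.util.Properties;

// Sketch: read zk timeouts with explicit defaults so a missing key yields
// a usable value instead of 0 ms. Not Seata's actual resolution logic.
public class ZkTimeouts {

    static int intOrDefault(Properties p, String key, int def) {
        String v = p.getProperty(key);
        if (v == null || v.isEmpty()) {
            return def;
        }
        return Integer.parseInt(v);
    }

    public static void main(String[] args) {
        Properties p = new Properties(); // imagine registry.conf failed to load
        int sessionTimeout = intOrDefault(p, "registry.zk.session.timeout", 6000);
        int connectTimeout = intOrDefault(p, "registry.zk.connect.timeout", 2000);
        System.out.println(sessionTimeout + " " + connectTimeout); // 6000 2000
        // new ZkClient(serverAddr, sessionTimeout, connectTimeout) would
        // then never be handed a 0 ms timeout.
    }
}
```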
5. Testing
This test needs at least two running services; here the yui3-system-provider and yui3-test-web services are used.
The test class is TestEMgrImpl. To use a distributed transaction, annotate the method with @GlobalTransactional(rollbackFor = Exception.class):
@Slf4j
@Service
public class TestEMgrImpl extends BaseMgrImpl<TestEDao, TestEVo, TestEDto>
        implements TestEMgr {

    @Reference
    private SysAdMgr sysAdMgr;

    @GlobalTransactional(rollbackFor = Exception.class)
    @Override
    public void testSeata() {
        // Update a row through the remote Dubbo service
        SysAdDto adDto = sysAdMgr.getById(4342213847225347L);
        adDto.getSysAdVo().setNm("seata test");
        sysAdMgr.update(adDto.getSysAdVo());
        // Update a local row
        TestEDto dto = getById(4635127064400896L);
        dto.getTestEVo().setA15("seata test");
        update(dto.getTestEVo());
        // Deliberately throw to trigger a global rollback
        BssExpUtils.error("@GlobalTransactional test", log);
    }
}
1. First note the original data in the t_sys_ad table.
2. Run the test in debug mode. After execution reaches BssExpUtils.error("@GlobalTransactional test", log), check t_sys_ad again: the row has been updated, and the row in t_test_e has been updated as well.
3. The branch_table in the seata database now records the data needed for rollback.
4. Finally the exception is thrown and the data is rolled back to the state in step 1, showing that the distributed transaction applied a compensating operation after the error.
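Conceptually, what happened in the steps above is Seata's AT mode at work: when a branch updates a row, a before-image is written to undo_log (and branch_table tracks the branch); on global rollback, the before-image is applied back as a compensating update. A toy in-memory simulation of that idea, purely for illustration (not Seata's actual mechanism):

```java
import java.util.HashMap;
import java.util.Map;

// Toy simulation of AT-mode rollback: capture a before-image, apply an
// update, then restore the before-image when the global transaction
// fails. Illustration only, not Seata's actual undo-log implementation.
public class UndoLogDemo {

    // Apply an update to the row, returning the saved before-image
    static Map<String, String> updateWithUndoLog(Map<String, String> row, String col, String value) {
        Map<String, String> beforeImage = new HashMap<>(row); // the "undo_log" entry
        row.put(col, value);
        return beforeImage;
    }

    // Global rollback: restore the before-image (the compensating operation)
    static void rollback(Map<String, String> row, Map<String, String> beforeImage) {
        row.clear();
        row.putAll(beforeImage);
    }

    public static void main(String[] args) {
        Map<String, String> row = new HashMap<>();
        row.put("nm", "original name");

        Map<String, String> undo = updateWithUndoLog(row, "nm", "seata test");
        System.out.println(row.get("nm")); // seata test

        rollback(row, undo);
        System.out.println(row.get("nm")); // original name
    }
}
```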
6. Summary
ZooKeeper as the registry has a few pitfalls, but with a little care they can all be worked around effectively.
Seata is a fairly lightweight, loosely coupled distributed transaction solution, and being able to customize the table names (e.g. undo_log) is a friendly touch.