Distributed Transaction Solutions and Seata

1. How Distributed Transactions Arise

Transactions are opened, committed, and rolled back through a JDBC Connection object. Within one JVM, when @Transactional is used, Spring does not return the connection to the pool until the business method finishes; across JVMs, however, each service's Spring context manages its own connections, i.e. they are separate local transactions. Microservices should be split with high cohesion and low coupling to minimize distributed transactions, but as a system grows, distributed-transaction scenarios inevitably multiply, so a simple distributed transaction solution is needed.
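
To make the problem concrete, here is a minimal sketch (the names OrderService and stock-service are hypothetical): a local @Transactional method calls a remote service, and rolling back the local write does nothing to the change already committed in the other JVM.

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;
import org.springframework.web.client.RestTemplate;

@Service
public class OrderService {

    private final RestTemplate restTemplate = new RestTemplate();

    @Transactional
    public void createOrder() {
        // 1. local write joins this JVM's transaction; the connection is held until the method returns
        //    e.g. orderMapper.insert(order);
        // 2. remote write: executed and committed by another JVM, i.e. a separate local transaction
        restTemplate.put("http://stock-service/stock/deduct", null);
        // 3. the exception below rolls back step 1 only; the remote change from step 2 survives
        throw new IllegalStateException("order failed after stock was deducted");
    }
}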

2. Common Distributed Transaction Solutions

  • Local transaction: if the services have been split but the database has not, the system is still simple, and only a few scenarios involve distributed transactions, you can consider sacrificing some decoupling by putting SQL into a domain model it does not really belong to; for example, to avoid a distributed transaction, userMapper may end up containing both a select user and a select order statement.
  • MQ-based eventual consistency: good performance, and usually near-real-time consistency; a delay only appears when the MQ consumer cannot keep up. It also demands more from developers. The approach suits long transactions such as e-commerce checkout (the payment and the "payment succeeded" message sit in one local transaction). With Kafka, for example, as long as a message is not consumed successfully (i.e. processing throws), the consumer keeps retrying that message until it is handled and committed. The consumer side may therefore fail repeatedly, which calls for monitoring and alerting so the problem is found and fixed promptly; if it stays unresolved, the data remains inconsistent for a long time.
  • TCC (Try, Confirm, Cancel): very intrusive; you must write your own compensation interfaces. If placing an order fails, for instance, you have to restore the inventory in the cancel branch, and the database operations inside cancel can themselves fail (see the interface sketch after this list).
  • XA transactions: a CP model with strong consistency and relatively poor performance, usually involving a global transaction manager such as the seata-server described later. When concurrency is low, strong consistency is required, some performance loss is acceptable, and you want a simple way to handle distributed transactions, XA is the right choice.
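
As a rough illustration of how intrusive TCC is, the hypothetical interface below (not tied to Seata's TCC support or any particular framework) sketches what a stock service would have to expose:

// Every business action needs its own confirm and cancel compensation,
// and cancel itself touches the database and can fail.
public interface StockTccAction {

    /** Try: reserve inventory, e.g. move the quantity into a "frozen" column. */
    boolean tryDeduct(String orderId, long productId, int quantity);

    /** Confirm: the global transaction succeeded, turn the reservation into a real deduction. */
    boolean confirmDeduct(String orderId);

    /** Cancel: the global transaction failed, release the reservation; must be idempotent
        and can itself fail, which is exactly the pain point mentioned above. */
    boolean cancelDeduct(String orderId);
}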

3. Distributed Transaction Middleware: Seata, LCN, Alibaba's GTS (not open-sourced), ShardingSphere's XA, etc.

LCN was developing well a couple of years ago, but its official site is now unreachable; with no open-source funding and Alibaba's Seata arriving, its GitHub repo has announced that maintenance has stopped. LCN was very convenient to use: a single global-transaction annotation was enough (see reference). GTS was never open-sourced by Alibaba, so it is not an option, but Alibaba did open-source Seata, which is still quite active (GitHub repo, official docs). Seata offers an AT (automatic transaction) mode:

Phase 1: the business data and the rollback log are committed in the same local transaction, then the local lock and connection are released.

Phase 2: commit: asynchronous, completes very quickly; rollback: reverse compensation driven by the phase-1 rollback log.
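
The following is a conceptual sketch, not Seata source code, of what the AT-mode DataSourceProxy roughly does around a single UPDATE in phase 1; every helper method is an illustrative stub (the real logic lives in io.seata.rm.datasource):

import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.Collections;
import java.util.List;
import java.util.Map;

public class AtModePhaseOneSketch {

    void executeInGlobalTransaction(Connection conn, String businessSql, String xid) throws SQLException {
        conn.setAutoCommit(false);
        List<Map<String, Object>> beforeImage = snapshotAffectedRows(conn, businessSql); // before image
        try (Statement st = conn.createStatement()) {
            st.executeUpdate(businessSql);                                               // business SQL
        }
        List<Map<String, Object>> afterImage = snapshotAffectedRows(conn, businessSql);  // after image
        writeUndoLog(conn, xid, beforeImage, afterImage);   // undo log written in the SAME local transaction
        registerBranch(xid);                                // report the branch to the TC, acquire the global row lock
        conn.commit();                                      // phase 1 ends: local lock and connection released
        // Phase 2 (driven by the TC): global commit -> delete the undo log asynchronously;
        // global rollback -> replay the before image as reverse compensation.
    }

    // illustrative stubs
    List<Map<String, Object>> snapshotAffectedRows(Connection conn, String sql) { return Collections.emptyList(); }
    void writeUndoLog(Connection conn, String xid, Object before, Object after) { }
    void registerBranch(String xid) { }
}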

4. Basic Walkthrough of Spring Cloud + Seata Integration (official docs)

4.1 Installing the Seata server: download from https://github.com/seata/seata/releases (version 1.1.0 is used in this walkthrough); Docker works as well:

1. Start with the defaults first
docker run --name seata-server \
        -p 8091:8091 \
        seataio/seata-server  
2. Copy the whole resources directory under /seata-server in the container to the host for editing: docker cp seata-server:/seata-server/resources/  /root/seata/resources
3. Start for real
docker run --name seata-server \
        -p 8091:8091 \
        -e STORE_MODE=file \
        -v /root/seata/resources:/seata-server/resources  \
        seataio/seata-server

For the non-Docker install, extract the archive, enter the config directory, and edit the configuration. Start with registry.conf: set both the registry and config type to file. Then start from the bin directory: ./seata-server.sh

registry {
  # file 、nacos 、eureka、redis、zk、consul、etcd3、sofa
  type = "file"
  file {
    name = "file.conf"
  }
}

config {
  # file、nacos 、apollo、zk、consul、etcd3、springCloudConfig
  type = "file"
  file {
    name = "file.conf"
  }
}

file.conf

transport {
  # tcp udt unix-domain-socket
  type = "TCP"
  #NIO NATIVE
  server = "NIO"
  #enable heartbeat
  heartbeat = true
  # the client batch send request enable
  enableClientBatchSendRequest = true
  #thread factory for netty
  threadFactory {
    bossThreadPrefix = "NettyBoss"
    workerThreadPrefix = "NettyServerNIOWorker"
    serverExecutorThread-prefix = "NettyServerBizHandler"
    shareBossWorker = false
    clientSelectorThreadPrefix = "NettyClientSelector"
    clientSelectorThreadSize = 1
    clientWorkerThreadPrefix = "NettyClientWorkerThread"
    # netty boss thread size,will not be used for UDT
    bossThreadSize = 1
    #auto default pin or 8
    workerThreadSize = "default"
  }
  shutdown {
    # when destroy server, wait seconds
    wait = 3
  }
  serialization = "seata"
  compressor = "none"
}
service {
  #transaction service group mapping
  vgroupMapping.AAAA="default"
  #only support when registry.type=file, please don't set multiple addresses
  default.grouplist = "192.168.203.132:8091"
  #degrade, current not support
  enableDegrade = false
  #disable seata
  disableGlobalTransaction = false
}

client {
  rm {
    asyncCommitBufferLimit = 10000
    lock {
      retryInterval = 10
      retryTimes = 30
      retryPolicyBranchRollbackOnConflict = true
    }
    reportRetryCount = 5
    tableMetaCheckEnable = false
    reportSuccessEnable = false
  }
  tm {
    commitRetryCount = 5
    rollbackRetryCount = 5
  }
  undo {
    dataValidation = true
    logSerialization = "jackson"
    logTable = "undo_log"
  }
  log {
    exceptionRate = 100
  }
}

4.2 Integrating Seata into a Spring Cloud 2.x microservice

Add the dependencies:

       <dependency>
            <groupId>com.alibaba.cloud</groupId>
            <artifactId>spring-cloud-starter-alibaba-nacos-discovery</artifactId>
            <version>2.0.0.RELEASE</version>
        </dependency>
    
        <!--seata-->
        <dependency>
            <groupId>com.alibaba.cloud</groupId>
            <artifactId>spring-cloud-alibaba-seata</artifactId>
            <version>2.0.0.RELEASE</version>
            <exclusions>
                <exclusion>
                    <groupId>io.seata</groupId>
                    <artifactId>seata-all</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>io.seata</groupId>
            <artifactId>seata-all</artifactId>
            <version>1.1.0</version>
        </dependency>

Configuration files: add registry.conf and file.conf under resources, with the same content as on the Seata server.

spring:      
  cloud:
    nacos:
      discovery:
        server-addr: 192.168.203.132:8848
    alibaba:
      seata:
        tx-service-group: AAAA

Startup class

@SpringBootApplication(exclude = DataSourceAutoConfiguration.class)
@EnableDiscoveryClient
@MapperScan("com.XXX.user.mapper")
=============================================================================
//DataSource auto-configuration is excluded above, so the datasource must be configured manually, for example:

import com.alibaba.druid.pool.DruidDataSource;
import com.baomidou.mybatisplus.autoconfigure.MybatisPlusProperties;
import com.baomidou.mybatisplus.extension.spring.MybatisSqlSessionFactoryBean;
import io.seata.rm.datasource.DataSourceProxy;
import javax.sql.DataSource;
import org.apache.commons.lang3.StringUtils;
import org.apache.ibatis.session.SqlSessionFactory;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.io.Resource;
import org.springframework.core.io.support.PathMatchingResourcePatternResolver;
import org.springframework.core.io.support.ResourcePatternResolver;


@Configuration
@EnableConfigurationProperties({MybatisPlusProperties.class})
public class DataSourceConfiguration {

    // The raw Druid datasource, bound to spring.datasource.* properties
    @Bean
    @ConfigurationProperties(prefix = "spring.datasource")
    public DataSource dataSource() {
        return new DruidDataSource();
    }

    // Wrap the datasource in Seata's DataSourceProxy so AT mode can record undo logs
    @Bean
    public DataSourceProxy dataSourceProxy(DataSource dataSource) {
        return new DataSourceProxy(dataSource);
    }

    // MyBatis-Plus must use the proxied datasource, otherwise branch transactions are not registered
    @Bean
    public SqlSessionFactory sqlSessionFactoryBean(DataSourceProxy dataSourceProxy,
        MybatisPlusProperties mybatisProperties) throws Exception {
        MybatisSqlSessionFactoryBean bean = new MybatisSqlSessionFactoryBean();
        bean.setDataSource(dataSourceProxy);
        ResourcePatternResolver resolver = new PathMatchingResourcePatternResolver();

        Resource[] mapperLocations = resolver
            .getResources(mybatisProperties.getMapperLocations()[0]);
        bean.setMapperLocations(mapperLocations);

        if (StringUtils.isNotBlank(mybatisProperties.getConfigLocation())) {
            Resource[] resources = resolver.getResources(mybatisProperties.getConfigLocation());
            bean.setConfigLocation(resources[0]);
        }
        return bean.getObject();
    }
}

4.3 Startup and testing

Service A

    @Override
    @GlobalTransactional
    public void updateUserName() {
        userMapper.updateUserName();                          // service A updates one record first
        restTemplate.put("http://user8887/user/role", null);  // then calls service B
        throw new CommonException("service A exception");
    }


Service B

    @Override
    @GlobalTransactional
    public void updateRole() {
        userMapper.updateRoleName();
        int a = 1 / 0; // service B throws
    }

Start services A and B. The default applicationId is spring.application.name, and the transaction group is AAAA.

!. Failure scenario: the Seata server is down. Services A and B both immediately log errors: can not connect to 192.168.203.132:8091 cause:can not register RM,err:can not connect to services-server, and API requests return the same error. Once the Seata server recovers, A and B stop printing the error right away.

A reasonable guess: @GlobalTransactional is implemented with AOP, and it throws as soon as the server cannot be reached.

!!. Failure scenario: service B is down. Service A throws io.seata.common.exception.FrameworkException: No available service, A rolls back normally, and both the Seata server and service A print rollbacked-style logs.

!!!. Put a breakpoint on the restTemplate call to service B and verify that A's update has already taken effect; at that point the record A modified can still be changed by hand, so it is not locked. Put a breakpoint on throw new CommonException("service A exception"): by then B's change has taken effect and is likewise still modifiable. After A throws, the method finishes and the rollback begins, and here lies a serious pitfall: suppose the original username is zs1 and userMapper.updateUserName() changes it to zs2; if you manually change zs2 to zs3 while stopped at the breakpoint, the rollback cannot complete once the method ends and Seata retries in an endless loop. This is what the docs call dirty data. Manually changing zs3 back to zs2 ends the retry loop and the rollback to zs1 succeeds.

To avoid producing such dirty data, combine the global transaction with the local @Transactional annotation. Global transaction begin/commit/rollback events can be listened to: the default implementation is DefaultFailureHandlerImpl; to customize it, implement the FailureHandler interface and register your own GlobalTransactionScanner (a hedged sketch follows). The official position is that dirty data must be handled manually: fix the data according to the log hints or delete the corresponding undo record (a custom FailureHandler can send an email or other notification).
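
A minimal sketch of such a listener, assuming the io.seata.tm.api.FailureHandler / DefaultFailureHandlerImpl API and the GlobalTransactionScanner(applicationId, txServiceGroup, failureHandler) constructor of seata-all 1.1.0 (verify against your version; the applicationId "user-service" and the alerting call are placeholders):

import io.seata.spring.annotation.GlobalTransactionScanner;
import io.seata.tm.api.DefaultFailureHandlerImpl;
import io.seata.tm.api.GlobalTransaction;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class SeataFailureAlertConfiguration {

    // Extend the default handler and hook the rollback-failure callback for alerting
    static class AlertingFailureHandler extends DefaultFailureHandlerImpl {
        @Override
        public void onRollbackFailure(GlobalTransaction tx, Throwable cause) {
            super.onRollbackFailure(tx, cause);
            // placeholder: send an email / IM alert so the dirty data gets fixed quickly
            System.err.println("global rollback failed, xid=" + tx.getXid() + ", cause=" + cause);
        }
    }

    // applicationId and tx-service-group must match the client configuration (AAAA here)
    @Bean
    public GlobalTransactionScanner globalTransactionScanner() {
        return new GlobalTransactionScanner("user-service", "AAAA", new AlertingFailureHandler());
    }
}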

4.4 Seata has painful version differences. Moving from spring-cloud-alibaba-seata 2.0.0 to 2.2.0, registry.conf and file.conf can be dropped and replaced by application.yml configuration; most settings have defaults.

      <dependency>
            <groupId>com.alibaba.cloud</groupId>
            <artifactId>spring-cloud-alibaba-seata</artifactId>
            <version>2.2.0.RELEASE</version>
        </dependency>

 


spring:  
  cloud:
    nacos:
      discovery:
        server-addr: 192.168.203.132:8848
    alibaba:
      seata:
        tx-service-group: AAAA
        
#Drop registry.conf and file.conf; most parameters keep their defaults. Mainly configure the transaction group, the Seata server address, and disable datasource auto-proxying (auto-proxy is on by default but currently breaks startup, so the datasource still has to be configured by hand; the docs say this will be addressed in a later release)
seata:
  client:
    support:
      spring:
        datasource-autoproxy: false
  tx-service-group: AAAA
  service:
    grouplist: 192.168.203.132:8091

Add one line to the startup class's main method, otherwise a warning is printed endlessly (this should be optimized in a later release): System.setProperty("service.disableGlobalTransaction", "false");
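
A minimal sketch of the startup class described above (the class name UserApplication is illustrative):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration;

@SpringBootApplication(exclude = DataSourceAutoConfiguration.class)
public class UserApplication {
    public static void main(String[] args) {
        // set before SpringApplication.run so the Seata client stops printing the warning
        System.setProperty("service.disableGlobalTransaction", "false");
        SpringApplication.run(UserApplication.class, args);
    }
}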

4.5 Storing the TC's global-transaction session data in a DB and using Nacos as the config center (everything above used file mode)

Edit the Seata server's main configuration, registry.conf:

#No port is specified here, presumably because it is hard-coded to 8848; namespace can be "" or simply omitted
registry {
  type = "nacos"
  nacos {
    serverAddr = "192.168.203.132"
    namespace = ""
    cluster = "default"
  }
}
config {
  type = "nacos"
  nacos {
    serverAddr = "192.168.203.132"
    namespace = ""
  }
}

Previously the TC's settings lived in file.conf; they now move to Nacos. Prepare them in nacos-config.txt first (the db.* entries matter most). Enter the Nacos bin directory and start it with sh startup.sh -m standalone, then push the settings into Nacos in bulk with sh nacos-config.sh 192.168.203.132 (under the hood these are just POST requests); you can also edit them by hand in the Nacos console. A few keys changed from dash-separated to camelCase, e.g. dbType and driverClassName, so other versions may report errors.

transport.type=TCP
transport.server=NIO
transport.heartbeat=true
transport.thread-factory.boss-thread-prefix=NettyBoss
transport.thread-factory.worker-thread-prefix=NettyServerNIOWorker
transport.thread-factory.server-executor-thread-prefix=NettyServerBizHandler
transport.thread-factory.share-boss-worker=false
transport.thread-factory.client-selector-thread-prefix=NettyClientSelector
transport.thread-factory.client-selector-thread-size=1
transport.thread-factory.client-worker-thread-prefix=NettyClientWorkerThread
transport.thread-factory.boss-thread-size=1
transport.thread-factory.worker-thread-size=8
transport.shutdown.wait=3

service.vgroup_mapping.AAAA=default

service.enableDegrade=false
service.disable=false
service.max.commit.retry.timeout=-1
service.max.rollback.retry.timeout=-1
client.async.commit.buffer.limit=10000
client.lock.retry.internal=10
client.lock.retry.times=30
client.lock.retry.policy.branch-rollback-on-conflict=true
client.table.meta.check.enable=true
client.report.retry.count=5
client.tm.commit.retry.count=1
client.tm.rollback.retry.count=1
store.mode=db
store.file.dir=file_store/data
store.file.max-branch-session-size=16384
store.file.max-global-session-size=512
store.file.file-write-buffer-cache-size=16384
store.file.flush-disk-mode=async
store.file.session.reload.read_size=100
store.db.datasource=dbcp

store.db.dbType=mysql
store.db.driverClassName=com.mysql.jdbc.Driver
store.db.url=jdbc:mysql://127.0.0.1:3306/seata?useUnicode=true
store.db.user=root
store.db.password=123456

store.db.min-conn=1
store.db.max-conn=3
store.db.global.table=global_table
store.db.branch.table=branch_table
store.db.query-limit=100
store.db.lock-table=lock_table
recovery.committing-retry-period=1000
recovery.asyn-committing-retry-period=1000
recovery.rollbacking-retry-period=1000
recovery.timeout-retry-period=1000
transaction.undo.data.validation=true
transaction.undo.log.serialization=jackson
transaction.undo.log.save.days=7
transaction.undo.log.delete.period=86400000
transaction.undo.log.table=undo_log
transport.serialization=seata
transport.compressor=none
metrics.enabled=false
metrics.registry-type=compact
metrics.exporter-list=prometheus
metrics.exporter-prometheus-port=9898
support.spring.datasource.autoproxy=false

Restart, then open the Nacos console at 192.168.203.132:8848/nacos (username/password: nacos/nacos).
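
Optionally, the pushed values can also be checked programmatically; this sketch assumes the Nacos Java SDK is on the classpath and that nacos-config.sh published each property as its own dataId under group SEATA_GROUP:

import com.alibaba.nacos.api.NacosFactory;
import com.alibaba.nacos.api.config.ConfigService;

public class SeataNacosConfigCheck {
    public static void main(String[] args) throws Exception {
        ConfigService configService = NacosFactory.createConfigService("192.168.203.132:8848");
        // each line of nacos-config.txt becomes one dataId in group SEATA_GROUP
        String value = configService.getConfig("service.vgroup_mapping.AAAA", "SEATA_GROUP", 3000);
        System.out.println("service.vgroup_mapping.AAAA = " + value); // expected: default
    }
}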

seata-server :docker run -d --name seata-server  --net host  -v /root/seata/resources:/seata-server/resources   seataio/seata-server

Spring Cloud client configuration changes:

spring:  
  cloud:
    nacos:
      discovery:
        server-addr: 192.168.203.132:8848
    alibaba:
      seata:
        tx-service-group: AAAA

# seata configuration
seata:
  client:
    support:
      spring:
        datasource-autoproxy: false
  tx-service-group: AAAA
  service:
    grouplist: 192.168.203.132:8091
    disable-global-transaction: false
  registry:
    type: nacos
    nacos:
      server-addr: 192.168.203.132
  config:
    nacos:
      server-addr: 192.168.203.132
    type: nacos

Test: put a breakpoint where service A throws, and observe the temporary global-transaction record written to undo_log in the business database; release the breakpoint, the transaction runs to completion, and the undo_log record is deleted.
