Distributed Systems (3): Setting Up a Distributed Transaction Service (Getting Started)

Following up on the earlier post on distributed transactions, this article records the process of setting up the distributed transaction services.

1. Preparing the microservice environment

Service registry / config center: Nacos

Service invocation and load balancing: OpenFeign + Ribbon

Demo services

  • home-manager: think of it as the user information service
  • home-order: think of it as the user order service

Scenario

A user's points are deducted, and an order record is added.

Middleware

Redis, MySQL

2. tx-lcn

Setting up distributed transactions with tx-lcn; the official quick-start guide is here: https://www.codingapi.com/docs/txlcn-start/

There are two kinds of services: the transaction manager (TM) and the transaction client (TC), each backed by its own jar dependency.

Note: the steps are the usual ones: add dependencies, add annotations, add configuration, write code.

2.1 Setting up the TM

The TM can be stood up quickly as a Spring Boot service.

2.1.1 Add dependencies
    <dependencies>
         <!-- spring cloud alibaba: an aggregate of shared dependencies -->
        <dependency>
            <groupId>com.ffs</groupId>
            <artifactId>home-base</artifactId>
            <version>1.0-SNAPSHOT</version>
        </dependency>

        <!-- tx-lcn TM (transaction manager) -->
        <dependency>
            <groupId>com.codingapi.txlcn</groupId>
            <artifactId>txlcn-tm</artifactId>
            <version>${txlcn.version}</version>
        </dependency>

        <dependency>
            <groupId>com.codingapi.txlcn</groupId>
            <artifactId>txlcn-tc</artifactId>
            <version>${txlcn.version}</version>
        </dependency>
        <dependency>
            <groupId>com.codingapi.txlcn</groupId>
            <artifactId>txlcn-txmsg-netty</artifactId>
            <version>${txlcn.version}</version>
        </dependency>


        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
            <exclusions>
                <exclusion>
                    <groupId>org.junit.vintage</groupId>
                    <artifactId>junit-vintage-engine</artifactId>
                </exclusion>
            </exclusions>
        </dependency>

    </dependencies>
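
The ${txlcn.version} placeholder above has to be defined in the POM's <properties> section; a minimal sketch, assuming the commonly used 5.0.2.RELEASE (adjust to the release you actually depend on):

    <properties>
        <txlcn.version>5.0.2.RELEASE</txlcn.version>
    </properties>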
2.1.2 Add annotations
@EnableTransactionManagement // enable local transaction management
@SpringBootApplication
@EnableTransactionManagerServer // tx-lcn: marks this application as the TM
@Slf4j
@EnableDiscoveryClient // enable service registration and discovery
public class LcnTmApplication {

    public static void main(String[] args) {
        SpringApplication.run(LcnTmApplication.class, args);
    }
}

2.1.3 Add configuration

Database initialization (create the home_tx_manager database referenced below first, then run this script):

DROP TABLE IF EXISTS `t_tx_exception`;
CREATE TABLE `t_tx_exception`  (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `group_id` varchar(32) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  `unit_id` varchar(32) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  `mod_id` varchar(32) CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci NULL DEFAULT NULL,
  `transaction_state` tinyint(4) NULL DEFAULT NULL,
  `registrar` tinyint(4) NULL DEFAULT NULL COMMENT '-1 未知 0 Manager 通知事务失败, 1 client询问事务状态失败2 事务发起方关闭事务组失败',
  `ex_state` tinyint(4) NULL DEFAULT NULL COMMENT '0 待处理 1已处理',
  `create_time` datetime(0) NULL DEFAULT NULL,
  PRIMARY KEY (`id`) USING BTREE
) ENGINE = InnoDB AUTO_INCREMENT = 967 CHARACTER SET = utf8mb4 COLLATE = utf8mb4_general_ci ROW_FORMAT = Dynamic;

SET FOREIGN_KEY_CHECKS = 1;

Nacos configuration (properties):

tx-lcn.manager.host=127.0.0.1
### Netty port used for distributed transaction communication between the TM and its clients
tx-lcn.manager.port=20640   
tx-lcn.manager.dtx-time=60000

# Web port of the TM admin console (a visual dashboard); choose any free port.
server.port=20630

# The TM needs a database to record distributed transaction state.
spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver
spring.datasource.url=jdbc:mysql://192.168.141.131:3306/home_tx_manager?characterEncoding=UTF-8&serverTimezone=Asia/Shanghai
spring.datasource.username=root
spring.datasource.password=root

# The TM relies on Redis for transaction coordination, especially for the TCC and TXC models.
spring.redis.host=192.168.141.141
spring.redis.port=6666
spring.redis.database=0

# Spring application name.
spring.application.name=home-lcn-tm

# Login password for the TM's web console (there is no username). Defaults to codingapi.
tx-lcn.manager.admin-key=ffs
# Logging: set to true if the TM should record logs, then provide the settings below.
tx-lcn.logger.enabled=true

# Database connection for the logging feature; it does not have to be the same data source as the transaction-state one configured above.
tx-lcn.logger.driver-class-name=com.mysql.cj.jdbc.Driver
tx-lcn.logger.jdbc-url=jdbc:mysql://192.168.141.131:3306/home_tx_manager?characterEncoding=UTF-8&serverTimezone=Asia/Shanghai
tx-lcn.logger.username=root
tx-lcn.logger.password=root
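
The TM is also annotated with @EnableDiscoveryClient, so its configuration needs the Nacos discovery address somewhere as well; a minimal sketch, assuming the same Nacos instance used later in this article:

spring.cloud.nacos.discovery.server-addr=192.168.141.131:8848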

2.1.4 Write code

Apart from the startup class above, the TM service needs no code at all.

2.1.5 Startup successful

http://localhost:20630/admin/index.html#/

(screenshot: the TM admin console)

2.2 Setting up the TCs

TC stands for transaction client; these are simply our business nodes. Here there are two services:

home-manager and home-order

Scenario:

A user's points are deducted and an order record is added.

2.3 Setting up TC #1 (home-manager)

2.3.1 Add dependencies

        <!-- lcn -->
        <dependency>
            <groupId>com.codingapi.txlcn</groupId>
            <artifactId>txlcn-tc</artifactId>
            <version>${txlcn.version}</version>
        </dependency>

        <dependency>
            <groupId>com.codingapi.txlcn</groupId>
            <artifactId>txlcn-txmsg-netty</artifactId>
            <version>${txlcn.version}</version>
        </dependency>
2.3.2 Add annotations
@SpringBootApplication // Spring Boot application
@EnableTransactionManagement // enable local transaction management
@EnableDiscoveryClient // enable service registration and discovery
@EnableDistributedTransaction // tx-lcn distributed transaction client annotation (this is what differs from the TM)
@MapperScan(basePackages = {"com.ffs.*.mapper"}) // MyBatis mapper interface scanning
@EnableFeignClients(basePackages = {"com.ffs.*.api"}) // discover the Feign interfaces in these packages
@Slf4j
public class ManagerApplication {
2.3.3 Add configuration (Nacos)
#### Address for communicating with the TM node
tm_ip=127.0.0.1
tm_port=20640
### Note: multiple TM nodes are comma-separated (see the example right below)
tx-lcn.client.manager-address=${tm_ip}:${tm_port}
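
For a TM cluster, the client simply lists every TM address; a hypothetical two-node example (the second port is illustrative):

tx-lcn.client.manager-address=127.0.0.1:20640,127.0.0.1:20641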
2.3.4 Write code
@Service
public class UserServiceImpl implements UserService {

    @Autowired
    private UserMapper userMapper;

    @Autowired
    private OrderApi orderApi;


    @Override
    @LcnTransaction // LCN-mode transaction annotation; LCN can be mixed with TCC
    @Transactional(rollbackFor = Exception.class)
    public User payIntegral(UserGiftReq userGiftReq) {

        User user = userMapper.selectById(userGiftReq.getUserId());
        // remote call to home-order to create the order record
        Result<Boolean> stringResult = orderApi.sendGift(userGiftReq);
        if (stringResult.getCode().equals(200)) {
            user.setIntegral(user.getIntegral() - userGiftReq.getReduceIntegral());
        }
        userMapper.updateById(user);
//        int a = 10/0;  // uncomment to force an exception and verify that both services roll back
        return user;
    }
}
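
The OrderApi used above is an OpenFeign client pointing at home-order; it is not shown in the original snippet, so here is a minimal sketch (the request path is an assumption):

@FeignClient(name = "home-order")
public interface OrderApi {

    // remote call that creates the order record on the home-order side
    @PostMapping("/order/sendGift")
    Result<Boolean> sendGift(@RequestBody UserGiftReq userGiftReq);
}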

2.4 Setting up TC #2 (home-order)

2.4.1 Add dependencies

Same as 2.3.1, omitted.

2.4.2 Add annotations

Same as 2.3.2, omitted.

2.4.3 Add configuration (Nacos)

Same as 2.3.3, omitted.

2.4.4 Write code

home-manager runs in LCN mode.

Below is the home-order code when using an LCN transaction:

    @Override
    @LcnTransaction
    @Transactional(rollbackFor = Exception.class)
    public Boolean addOrderInfo(UserGiftReq userGiftReq) {
        OrderInfo orderInfo = new OrderInfo();
        orderInfo.setUserId(userGiftReq.getUserId());
        orderInfo.setReduceIntegral(userGiftReq.getReduceIntegral());
        int insert = orderInfoMapper.insert(orderInfo);
        return insert == 1;
    }

And here is the home-order code when using a TCC transaction:

    @Override
    @TccTransaction
    @Transactional(rollbackFor = Exception.class)
    public Boolean addOrderInfo(UserGiftReq userGiftReq) {
        OrderInfo orderInfo = new OrderInfo();
        orderInfo.setUserId(userGiftReq.getUserId());
        orderInfo.setReduceIntegral(userGiftReq.getReduceIntegral());
        int insert = orderInfoMapper.insert(orderInfo);
        maps.put("ID", userGiftReq.getUserId());  // note: this stores the user id; strictly the generated order id should be kept so the cancel phase deletes the right row
        return insert == 1;
    }
    private static Map<String, Integer> maps = new HashMap<>();

    public Boolean confirmAddOrderInfo(UserGiftReq userGiftReq) {
        System.out.println("confirmAddOrderInfo");
        return true;
    }


    /**
     * Compensating ("reverse") SQL: delete the order inserted in the try phase.
     * @param userGiftReq the original request
     * @return true if the compensation succeeded
     */
    public Boolean cancelAddOrderInfo(UserGiftReq userGiftReq) {
        Integer integer = maps.get("ID");  // not rigorous, for testing only (a static map does not survive restarts or work across instances)
        int i = orderInfoMapper.deleteById(integer);
        return i == 1;
    }
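
By tx-lcn convention the confirm and cancel methods are resolved by name ("confirm" / "cancel" prefixed to the annotated method name), which is why confirmAddOrderInfo and cancelAddOrderInfo above need no extra wiring. As far as I know the 5.0.x annotation also exposes explicit attributes; treat this as a sketch, not the authoritative API:

    @TccTransaction(confirmMethod = "confirmAddOrderInfo", cancelMethod = "cancelAddOrderInfo")
    @Transactional(rollbackFor = Exception.class)
    public Boolean addOrderInfo(UserGiftReq userGiftReq) { ... }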

2.5 Test passed

3. Seata

Official docs; the server jar is downloaded from GitHub:

https://seata.io/zh-cn/docs/ops/deploy-guide-beginner.html

https://github.com/seata/seata/blob/develop/script/server/db/mysql.sql (the SQL script for DB store mode)

Seata has three roles: TC, TM, and RM. The TC (server side) is deployed as a standalone service; the TM and RM (client side) are integrated into the business systems.

Note that this TC is different from the TC above: here it is the Transaction Coordinator, whereas in tx-lcn it was the transaction client.

This deployment uses the high-availability scheme, with two server nodes.

3.1 Deploying the TC (server side)

Seata's high availability relies on a registry, a config center, and a database (per the official docs).

After unpacking, the two configuration files that matter are registry.conf and file.conf:

[root@localhost conf]# pwd
/soft/seata/seata-server-1.4.2/conf
[root@localhost conf]# ls
file.conf  file.conf.example  logback  logback.xml  META-INF  README.md  README-zh.md  registry.conf
3.1.1 Configuration file: registry.conf

First edit registry.conf:

### Registry options: file, nacos, eureka, redis, zk, consul, etcd3, sofa
registry {

  # type = "file"  ## for high availability, use the nacos registry
  type = "nacos"

  nacos {
    application = "seata-server"
    serverAddr = "192.168.141.131:8848"
    group = "SEATA_GROUP"
    namespace = "home"
    cluster = "default"
    username = ""
    password = ""
  }
  eureka {
    serviceUrl = "http://localhost:8761/eureka"
    application = "default"
    weight = "1"
  }
  redis {
    serverAddr = "localhost:6379"
    db = 0
    password = ""
    cluster = "default"
    timeout = 0
  }
  zk {
    cluster = "default"
    serverAddr = "127.0.0.1:2181"
    sessionTimeout = 6000
    connectTimeout = 2000
    username = ""
    password = ""
  }
  consul {
    cluster = "default"
    serverAddr = "127.0.0.1:8500"
    aclToken = ""
  }
  etcd3 {
    cluster = "default"
    serverAddr = "http://localhost:2379"
  }
  sofa {
    serverAddr = "127.0.0.1:9603"
    application = "default"
    region = "DEFAULT_ZONE"
    datacenter = "DefaultDataCenter"
    cluster = "default"
    group = "SEATA_GROUP"
    addressWaitTime = "3000"
  }
  file {
    name = "file.conf"
  }
}

#### Config center options
config {
  # file, nacos, apollo, zk, consul, etcd3
  # type = "file"
  type = "nacos"

  nacos {
    serverAddr = "192.168.141.131:8848"
    namespace = "home"
    group = "SEATA_GROUP"
    username = ""
    password = ""
    dataId = "seataServer.properties"
  }
  consul {
    serverAddr = "127.0.0.1:8500"
    aclToken = ""
  }
  apollo {
    appId = "seata-server"
    ## apolloConfigService will cover apolloMeta
    apolloMeta = "http://192.168.1.204:8801"
    apolloConfigService = "http://192.168.1.204:8080"
    namespace = "application"
    apolloAccesskeySecret = ""
    cluster = "seata"
  }
  zk {
    serverAddr = "127.0.0.1:2181"
    sessionTimeout = 6000
    connectTimeout = 2000
    username = ""
    password = ""
    nodePath = "/seata/seata.properties"
  }
  etcd3 {
    serverAddr = "http://localhost:2379"
  }
  file {
    name = "file.conf"
  }
}

3.1.2 Configuration file: file.conf (not used in the high-availability setup)

This file only takes effect when the file type is selected in registry.conf; it mainly configures how the distributed transaction data is stored.

## transaction log store, only used in seata-server
store {
  ## store mode: file、db、redis
  # mode = "file"
  mode = "db"
  ## rsa decryption public key
  publicKey = ""
  ## file store property
  file {
    ## store location dir
    dir = "sessionStore"
    # branch session size , if exceeded first try compress lockkey, still exceeded throws exceptions
    maxBranchSessionSize = 16384
    # globe session size , if exceeded throws exceptions
    maxGlobalSessionSize = 512
    # file buffer size , if exceeded allocate new buffer
    fileWriteBufferCacheSize = 16384
    # when recover batch read size
    sessionReloadReadSize = 100
    # async, sync
    flushDiskMode = async
  }

  ## database store property
  db {
    ## the implement of javax.sql.DataSource, such as DruidDataSource(druid)/BasicDataSource(dbcp)/HikariDataSource(hikari) etc.
    datasource = "druid"
    ## mysql/oracle/postgresql/h2/oceanbase etc.
    dbType = "mysql"
    driverClassName = "com.mysql.cj.jdbc.Driver"
    ## if using mysql to store the data, recommend add rewriteBatchedStatements=true in jdbc connection param
    url = "jdbc:mysql://192.168.141.131:3306/seata?rewriteBatchedStatements=true"
    user = "root"
    password = "root"
    minConn = 5
    maxConn = 100
    globalTable = "global_table"
    branchTable = "branch_table"
    lockTable = "lock_table"
    queryLimit = 100
    maxWait = 5000
  }

  ## redis store property
  redis {
    ## redis mode: single、sentinel
    mode = "single"
    ## single mode property
    single {
      host = "127.0.0.1"
      port = "6379"
    }
    ## sentinel mode property
    sentinel {
      masterName = ""
      ## such as "10.28.235.65:26379,10.28.235.65:26380,10.28.235.65:26381"
      sentinelHosts = ""
    }
    password = ""
    database = "0"
    minConn = 1
    maxConn = 10
    maxTotal = 100
    queryLimit = 100
  }
}

Note: file.conf (the file-based configuration) is worth reading through as well; it is mostly storage options such as which database to use, so it is not repeated here. One thing to watch out for: with a recent MySQL version the driver class name must be changed from com.mysql.jdbc.Driver to com.mysql.cj.jdbc.Driver; the same applies to the properties stored in Nacos.

3.1.3 Nacos configuration in place of file.conf

Pick one store mode from file.conf, then add the corresponding configuration items to Nacos. MySQL is used here, so the configuration looks as follows.

Nacos configuration (seataServer.properties):

## use the db store mode for high availability
store.mode=db
store.publicKey=
store.db.datasource=druid
store.db.dbType = mysql
store.db.driverClassName = com.mysql.cj.jdbc.Driver
store.db.url = jdbc:mysql://192.168.141.131:3306/seata?rewriteBatchedStatements=true
store.db.user = root
store.db.password = root
store.db.minConn = 5
store.db.maxConn = 100
store.db.globalTable = global_table
store.db.branchTable = branch_table
store.db.lockTable = lock_table
store.db.queryLimit = 100
store.db.maxWait = 5000

# 2. map the transaction service group to the TC cluster named "default"
service.vgroupMapping.my_tx_group = default
service.seata-server.grouplist = 192.168.141.131:8091,192.168.141.131:8888
service.disableGlobalTransaction = false
3.1.4 Startup

Supported startup parameters (from the official docs):

  • -h (--host): IP address to register with the registry. Defaults to the current host IP; setting it explicitly is recommended when the server runs in a cloud or container environment and is accessed externally.
  • -p (--port): port the server listens on. Defaults to 8091.
  • -m (--storeMode): transaction log store mode; file, db, or redis (redis requires seata-server 1.3 or later). Defaults to file.
  • -n (--serverNode): seata-server node ID, e.g. 1, 2, 3... Defaults to 1.
  • -e (--seataEnv): seata-server runtime environment, e.g. dev or test; the server then loads a configuration file such as registry-dev.conf.

Since this is a high-availability setup, start two nodes:

### start the first node
./bin/seata-server.sh -p 8091

### start the second node
./bin/seata-server.sh -p 8888
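
If the servers are reached from other machines (or run inside containers), it can help to pin the registered IP and node ID explicitly; a sketch reusing the host IP from this article:

./bin/seata-server.sh -p 8091 -h 192.168.141.131 -n 1
./bin/seata-server.sh -p 8888 -h 192.168.141.131 -n 2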

Check Nacos: both nodes are up and registered.

(screenshot: the two seata-server instances in the Nacos service list)

3.2 Setting up the RM (client side)

3.2.1 ServerA

Add the dependencies. Below are the dependencies Seata needs:

        <!-- seata: distributed transaction solution -->
        <dependency>
            <groupId>io.seata</groupId>
            <artifactId>seata-spring-boot-starter</artifactId>
            <version>1.4.2</version>
        </dependency>


        <dependency>
            <groupId>com.alibaba.cloud</groupId>
            <artifactId>spring-cloud-starter-alibaba-seata</artifactId>
            <version>2021.1</version>
            <exclusions>
                <exclusion>
                    <groupId>io.seata</groupId>
                    <artifactId>seata-spring-boot-starter</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
3.2.2 Add the configuration file

The configuration is kept in Nacos:

seata:
  enabled: true
  application-id: ${spring.application.name}
  # the client and the server must be in the same transaction group; must match the server-side config
  tx-service-group: my_tx_group
  # automatic data source proxying
  enable-auto-data-source-proxy: true
  # data source proxy mode (the distributed transaction model)
  data-source-proxy-mode: AT
  # transaction group mapping: the value is the TC cluster name and must match the server side
  service:
    vgroup-mapping:
      my_tx_group: default
  # integrate the Nacos config center
  config:
    type: nacos
    nacos:
      server-addr: 192.168.141.131:8848
      group: SEATA_GROUP
      namespace: home
      data-id: seataServer.properties
  # integrate the Nacos registry
  registry:
    type: nacos
    nacos:
      server-addr: 192.168.141.131:8848
      group: SEATA_GROUP
      namespace: home
      # default TC cluster name
      cluster: default
      # service name; must match registry.conf on the server side (i.e. the seata service name in Nacos)
      application: seata-server

(screenshot: the configuration in the Nacos console)

3.2.3 Proxy the data source
@Slf4j
@Configuration
public class DbConfig {

    // plain Druid data source bound to the spring.datasource.* properties
    @ConfigurationProperties(prefix = "spring.datasource")
    @Bean
    public DataSource druidDataSource(){
        DruidDataSource dataSource = new DruidDataSource();
        log.info("create druid dataSource ......");
        return dataSource;
    }

    // wrap it in Seata's DataSourceProxy so SQL is intercepted for AT-mode undo-log handling;
    // note: with enable-auto-data-source-proxy: true the starter also proxies automatically,
    // so when proxying manually like this it is usually safer to turn the automatic proxy off
    @Bean
    @Primary
    public DataSource dataSource (){
        log.info("create seata dataSourceProxy");
        return new DataSourceProxy(druidDataSource());
    }
}
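
The druidDataSource bean above binds to the ordinary spring.datasource properties, so each RM still needs its own data source configuration; a hypothetical example (the database name server_a is illustrative):

spring:
  datasource:
    driver-class-name: com.mysql.cj.jdbc.Driver
    url: jdbc:mysql://192.168.141.131:3306/server_a?characterEncoding=UTF-8&serverTimezone=Asia/Shanghai
    username: root
    password: root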

3.3 ServerB

Same steps as for ServerA above, omitted.

3.4 AT-mode code

Simulated business flow: ServerA --> ServerB. One prerequisite for AT mode is sketched right below.
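
AT mode keeps its rollback data in an undo_log table inside every business database touched by the proxied data source, so that table has to exist in ServerA's and ServerB's databases. A sketch of the commonly used DDL follows; the authoritative script for your Seata version is script/client/at/db/mysql.sql in the Seata repository:

CREATE TABLE IF NOT EXISTS `undo_log`
(
    `branch_id`     BIGINT       NOT NULL COMMENT 'branch transaction id',
    `xid`           VARCHAR(128) NOT NULL COMMENT 'global transaction id',
    `context`       VARCHAR(128) NOT NULL COMMENT 'undo_log context, such as serialization',
    `rollback_info` LONGBLOB     NOT NULL COMMENT 'rollback info',
    `log_status`    INT(11)      NOT NULL COMMENT '0: normal status, 1: defense status',
    `log_created`   DATETIME(6)  NOT NULL COMMENT 'create datetime',
    `log_modified`  DATETIME(6)  NOT NULL COMMENT 'modify datetime',
    UNIQUE KEY `ux_undo_log` (`xid`, `branch_id`)
) ENGINE = InnoDB DEFAULT CHARSET = utf8 COMMENT = 'AT transaction mode undo table';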

ServerA

    @Override
    @Transactional
    @GlobalTransactional(rollbackFor = Exception.class)
    public TabA insertA() {
        // remote call to ServerB inside the same global transaction
        Result<TabB> tabBResult = serverBApi.insertB();
        System.out.println(tabBResult);
        int a = 1/0;  // deliberately throw so ServerB's insert is rolled back by Seata
        SimpleDateFormat dateFormat = new SimpleDateFormat("HH:mm:ss");
        String insertA = "A"+ dateFormat.format(new Date());
        TabA tabA = new TabA();
        tabA.setName(insertA);
        int insert = tabAMapper.insert(tabA);
        return tabA;
    }
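
ServerBApi is an OpenFeign client for ServerB that is not shown in the original post; a minimal sketch (the service name and paths are assumptions):

@FeignClient(name = "server-b")
public interface ServerBApi {

    // remote call that triggers ServerB's local insert inside the same global transaction
    @PostMapping("/b/insertB")
    Result<TabB> insertB();

    // used by the TCC example later on
    @PostMapping("/b/insertBTCC")
    Result<TabB> insertBTCC();
}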

ServerB

    @Override
    @Transactional
    public TabB insertB() {
        SimpleDateFormat dateFormat = new SimpleDateFormat("HH:mm:ss");
        String insertA = "B"+ dateFormat.format(new Date());
        TabB tabB = new TabB();
        tabB.setName(insertA);
        int insert = tabAMapper.insert(tabB);
        return tabB;
    }

Testing the endpoint: either everything succeeds or everything is rolled back.

3.5 TCC-mode code

    // In TCC mode, put @GlobalTransactional on the controller; otherwise businessActionContext will be null in the service layer
    @PostMapping("/insertATCC")
    @GlobalTransactional(rollbackFor = Exception.class)
    public Result<TabA> insertATCC() {
        // passing null is fine here: Seata's TCC interceptor injects the real BusinessActionContext into the parameter
        TabA tabA = aServiceTCC.insertATCC(null);
        int a = 1/0;  // force an exception so the cancel (rollback) phase runs
        return Result.success(tabA);
    }

@LocalTCC
public interface AServiceTCC {

    @TwoPhaseBusinessAction(name = "insertATCC" , commitMethod = "insertATCCCommit" ,rollbackMethod = "insertATCCRollback")
    TabA insertATCC(BusinessActionContext businessActionContext);

    boolean insertATCCCommit(BusinessActionContext businessActionContext);
    boolean insertATCCRollback(BusinessActionContext businessActionContext);
}

//实现类
@Service
public class AServiceTCCImpl implements AServiceTCC {

    @Autowired
    private TabAMapper tabAMapper;

    @Autowired
    private ServerBApi serverBApi;

    @Override
    @Transactional // businessActionContext records what the try phase touched, so the rollback phase can read it back and compensate
    public TabA insertATCC(BusinessActionContext businessActionContext) {
        Result<TabB> tabBResult = serverBApi.insertBTCC();
        System.out.println(tabBResult);
//        int a = 1/0;
        SimpleDateFormat dateFormat = new SimpleDateFormat("HH:mm:ss");
        String insertA = "A"+ dateFormat.format(new Date());
        TabA tabA = new TabA();
        tabA.setName(insertA);
        int insert = tabAMapper.insert(tabA);
        Map<String,Object> idMap = new HashMap<>();
        businessActionContext.setActionContext(idMap);
        idMap.put("ID",tabA.getId());
        return tabA;
    }

    @Override
    public boolean insertATCCCommit(BusinessActionContext businessActionContext) {
        System.out.println("insertATCCCommit");
        return true;
    }

    @Override
    public boolean insertATCCRollback(BusinessActionContext businessActionContext) {
        System.out.println("insertATCCCommit");
        Map<String, Object> actionContext = businessActionContext.getActionContext();
        Integer id = Integer.valueOf(actionContext.get("ID")+"");
        tabAMapper.deleteById(id);
        return true;
    }
}

The ServerB side:

@LocalTCC
public interface BServiceTCC {

    @TwoPhaseBusinessAction(name = "insertBTCC", commitMethod = "insertBTCCCommit", rollbackMethod = "insertBTCCRollback")
    public TabB insertBTCC(BusinessActionContext businessActionContext);

    public boolean insertBTCCCommit(BusinessActionContext businessActionContext);

    public boolean insertBTCCRollback(BusinessActionContext businessActionContext);
}


@Service
public class BServiceTCCImpl implements BServiceTCC {

    @Autowired
    private TabBMapper tabAMapper;

    @Override
    @Transactional
    public TabB insertBTCC(BusinessActionContext businessActionContext) {
        SimpleDateFormat dateFormat = new SimpleDateFormat("HH:mm:ss");
        String insertA = "BTCC"+ dateFormat.format(new Date());
        TabB tabB = new TabB();
        tabB.setName(insertA);
        int insert = tabAMapper.insert(tabB);
        Map<String,Object> idMap = new HashMap<>();
        businessActionContext.setActionContext(idMap);
        idMap.put("ID",tabB.getId());
        return tabB;
    }

    @Override
    @Transactional
    public boolean insertBTCCCommit(BusinessActionContext businessActionContext) {
        System.out.println("insertBTCCCommit");
        return true;
    }

    @Override
    @Transactional
    public boolean insertBTCCRollback(BusinessActionContext businessActionContext) {
        System.out.println("insertBTCCRollback");
        Map<String, Object> actionContext = businessActionContext.getActionContext();
        Integer id = Integer.valueOf(actionContext.get("ID")+"");
        tabAMapper.deleteById(id);
        return true;
    }
}

3.6 XA mode (the code is a small modification of the AT-mode code)

XA is the database's own distributed transaction mechanism and is strongly consistent. The data stays locked for the whole process: from prepare through commit or rollback the transaction manager holds the database locks, so anyone else who wants to modify that row must wait for the lock to be released, which carries a long-transaction risk. It requires database support; the mainstream databases, including MySQL, Oracle, SQL Server, and PostgreSQL, all support XA transactions.

Change the data-source-proxy-mode in the client's Nacos configuration:

  ## the default is AT; change it to XA
  data-source-proxy-mode: XA

Replace the original data source proxy with the XA proxy:

    @Bean
    @Primary
    public DataSource dataSource (){
        log.info("create seata dataSourceProxy");
        return new DataSourceProxyXA(druidDataSource());
    }

Everything else stays unchanged.

3.7 Saga mode

Still a work in progress...

3.8 Summary

AT and XA are low-intrusion distributed transaction modes: adding a single annotation is enough. Both AT and XA work by proxying the data source; XA additionally requires database support.

TCC and Saga are code-intrusive.
