Integrating distributed transactions with spring-cloud-alibaba-seata + Nacos + JPA (Seata 1.2.0)

Note: with seata-spring-boot-starter there is no need to configure a data source proxy manually, and no need to add file.conf and registry.conf files to each microservice.
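For context, this is roughly what the starter auto-configures for you. A minimal sketch (not needed in this setup, shown only to illustrate what the starter's automatic data source proxy replaces), assuming Druid as the connection pool:

```java
import com.alibaba.druid.pool.DruidDataSource;
import io.seata.rm.datasource.DataSourceProxy;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;

// NOT required when using seata-spring-boot-starter -- the starter creates
// this proxy automatically (enable-auto-data-source-proxy: true).
@Configuration
public class DataSourceProxyConfig {

    @Bean
    @ConfigurationProperties(prefix = "spring.datasource")
    public DruidDataSource druidDataSource() {
        return new DruidDataSource();
    }

    // Wrap the real data source so Seata can intercept SQL and write undo_log records
    @Bean
    @Primary
    public DataSourceProxy dataSourceProxy(DruidDataSource druidDataSource) {
        return new DataSourceProxy(druidDataSource);
    }
}
```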

  • Step 1: create the seata database, and add the undo_log transaction rollback table to every business database that participates in Seata
  1. Script to create the seata database tables
CREATE TABLE IF NOT EXISTS `global_table`
(
    `xid`                       VARCHAR(128) NOT NULL,
    `transaction_id`            BIGINT,
    `status`                    TINYINT      NOT NULL,
    `application_id`            VARCHAR(32),
    `transaction_service_group` VARCHAR(32),
    `transaction_name`          VARCHAR(128),
    `timeout`                   INT,
    `begin_time`                BIGINT,
    `application_data`          VARCHAR(2000),
    `gmt_create`                DATETIME,
    `gmt_modified`              DATETIME,
    PRIMARY KEY (`xid`),
    KEY `idx_gmt_modified_status` (`gmt_modified`, `status`),
    KEY `idx_transaction_id` (`transaction_id`)
) ENGINE = InnoDB
  DEFAULT CHARSET = utf8;

-- the table to store BranchSession data
CREATE TABLE IF NOT EXISTS `branch_table`
(
    `branch_id`         BIGINT       NOT NULL,
    `xid`               VARCHAR(128) NOT NULL,
    `transaction_id`    BIGINT,
    `resource_group_id` VARCHAR(32),
    `resource_id`       VARCHAR(256),
    `branch_type`       VARCHAR(8),
    `status`            TINYINT,
    `client_id`         VARCHAR(64),
    `application_data`  VARCHAR(2000),
    `gmt_create`        DATETIME(6),
    `gmt_modified`      DATETIME(6),
    PRIMARY KEY (`branch_id`),
    KEY `idx_xid` (`xid`)
) ENGINE = InnoDB
  DEFAULT CHARSET = utf8;

-- the table to store lock data
CREATE TABLE IF NOT EXISTS `lock_table`
(
    `row_key`        VARCHAR(128) NOT NULL,
    `xid`            VARCHAR(96),
    `transaction_id` BIGINT,
    `branch_id`      BIGINT       NOT NULL,
    `resource_id`    VARCHAR(256),
    `table_name`     VARCHAR(32),
    `pk`             VARCHAR(36),
    `gmt_create`     DATETIME,
    `gmt_modified`   DATETIME,
    PRIMARY KEY (`row_key`),
    KEY `idx_branch_id` (`branch_id`)
) ENGINE = InnoDB
  DEFAULT CHARSET = utf8;

  2. Script to create the undo_log table
CREATE TABLE `undo_log` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `branch_id` bigint(20) NOT NULL,
  `xid` varchar(100) NOT NULL,
  `context` varchar(128) NOT NULL,
  `rollback_info` longblob NOT NULL,
  `log_status` int(11) NOT NULL,
  `log_created` datetime NOT NULL,
  `log_modified` datetime NOT NULL,
  `ext` varchar(100) DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `ux_undo_log` (`xid`,`branch_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COMMENT='transaction rollback log table';
  • Step 2: download seata-server-1.2.0 and nacos-server-1.2.1 from the official release pages
  • Step 3: edit the configuration files under seata-server-1.2.0\seata\conf
  1. In file.conf, change store.mode to db, then update the db connection settings

## transaction log store, only used in seata-server
store {
  ## store mode: file, db
  mode = "db"

  ## file store property
  file {
    ## store location dir
    dir = "sessionStore"
    # branch session size , if exceeded first try compress lockkey, still exceeded throws exceptions
    maxBranchSessionSize = 16384
    # globe session size , if exceeded throws exceptions
    maxGlobalSessionSize = 512
    # file buffer size , if exceeded allocate new buffer
    fileWriteBufferCacheSize = 16384
    # when recover batch read size
    sessionReloadReadSize = 100
    # async, sync
    flushDiskMode = async
  }

  ## database store property
  db {
    ## the implement of javax.sql.DataSource, such as DruidDataSource(druid)/BasicDataSource(dbcp) etc.
    datasource = "druid"
    ## mysql/oracle/postgresql/h2/oceanbase etc.
    dbType = "mysql"
    driverClassName = "com.mysql.cj.jdbc.Driver"
    url = "jdbc:mysql://127.0.0.1:3306/seata?useUnicode=true&characterEncoding=UTF-8"
    user = "root"
    password = "123456"
    minConn = 5
    maxConn = 30
    globalTable = "global_table"
    branchTable = "branch_table"
    lockTable = "lock_table"
    queryLimit = 100
    maxWait = 5000
  }
}

  2. In registry.conf, change registry.type to nacos and config.type to nacos, and point both at your own nacos-server address
registry {
  # file, nacos, eureka, redis, zk, consul, etcd3, sofa
  type = "nacos"

  nacos {
    application = "seata-server"
    serverAddr = "localhost:8848"
    namespace = "public"
    cluster = "default"
    username = ""
    password = ""
  }
  eureka {
    serviceUrl = "http://localhost:8761/eureka"
    application = "default"
    weight = "1"
  }
  redis {
    serverAddr = "localhost:6379"
    db = 0
    password = ""
    cluster = "default"
    timeout = 0
  }
  zk {
    cluster = "default"
    serverAddr = "127.0.0.1:2181"
    sessionTimeout = 6000
    connectTimeout = 2000
    username = ""
    password = ""
  }
  consul {
    cluster = "default"
    serverAddr = "127.0.0.1:8500"
  }
  etcd3 {
    cluster = "default"
    serverAddr = "http://localhost:2379"
  }
  sofa {
    serverAddr = "127.0.0.1:9603"
    application = "default"
    region = "DEFAULT_ZONE"
    datacenter = "DefaultDataCenter"
    cluster = "default"
    group = "SEATA_GROUP"
    addressWaitTime = "3000"
  }
  file {
    name = "file.conf"
  }
}

config {
  # file, nacos, apollo, zk, consul, etcd3
  type = "nacos"

  nacos {
    serverAddr = "localhost:8848"
    namespace = "public"
    group = "SEATA_GROUP"
    username = ""
    password = ""
  }
  consul {
    serverAddr = "127.0.0.1:8500"
  }
  apollo {
    appId = "seata-server"
    apolloMeta = "http://192.168.1.204:8801"
    namespace = "application"
  }
  zk {
    serverAddr = "127.0.0.1:2181"
    sessionTimeout = 6000
    connectTimeout = 2000
    username = ""
    password = ""
  }
  etcd3 {
    serverAddr = "http://localhost:2379"
  }
  file {
    name = "file.conf"
  }
}
  • Step 4: using Nacos as the configuration center, push the Seata configuration (config.txt) into nacos-server
  1. Update the db settings in config.txt; the file lives in the Seata source tree at seata\script\config-center
    Change store.mode=db, store.db.driverClassName, store.db.url, store.db.user, and store.db.password
transport.type=TCP
transport.server=NIO
transport.heartbeat=true
transport.enableClientBatchSendRequest=false
transport.threadFactory.bossThreadPrefix=NettyBoss
transport.threadFactory.workerThreadPrefix=NettyServerNIOWorker
transport.threadFactory.serverExecutorThreadPrefix=NettyServerBizHandler
transport.threadFactory.shareBossWorker=false
transport.threadFactory.clientSelectorThreadPrefix=NettyClientSelector
transport.threadFactory.clientSelectorThreadSize=1
transport.threadFactory.clientWorkerThreadPrefix=NettyClientWorkerThread
transport.threadFactory.bossThreadSize=1
transport.threadFactory.workerThreadSize=default
transport.shutdown.wait=3
service.vgroupMapping.my_test_tx_group=default
service.default.grouplist=127.0.0.1:8091
service.enableDegrade=false
service.disableGlobalTransaction=false
client.rm.asyncCommitBufferLimit=10000
client.rm.lock.retryInterval=10
client.rm.lock.retryTimes=30
client.rm.lock.retryPolicyBranchRollbackOnConflict=true
client.rm.reportRetryCount=5
client.rm.tableMetaCheckEnable=false
client.rm.sqlParserType=druid
client.rm.reportSuccessEnable=false
client.rm.sagaBranchRegisterEnable=false
client.tm.commitRetryCount=5
client.tm.rollbackRetryCount=5
store.mode=db
store.file.dir=file_store/data
store.file.maxBranchSessionSize=16384
store.file.maxGlobalSessionSize=512
store.file.fileWriteBufferCacheSize=16384
store.file.flushDiskMode=async
store.file.sessionReloadReadSize=100
store.db.datasource=druid
store.db.dbType=mysql
store.db.driverClassName=com.mysql.cj.jdbc.Driver
store.db.url=jdbc:mysql://127.0.0.1:3306/seata?useUnicode=true
store.db.user=root
store.db.password=123456
store.db.minConn=5
store.db.maxConn=30
store.db.globalTable=global_table
store.db.branchTable=branch_table
store.db.queryLimit=100
store.db.lockTable=lock_table
store.db.maxWait=5000
server.recovery.committingRetryPeriod=1000
server.recovery.asynCommittingRetryPeriod=1000
server.recovery.rollbackingRetryPeriod=1000
server.recovery.timeoutRetryPeriod=1000
server.maxCommitRetryTimeout=-1
server.maxRollbackRetryTimeout=-1
server.rollbackRetryTimeoutUnlockEnable=false
client.undo.dataValidation=true
client.undo.logSerialization=jackson
client.undo.onlyCareUpdateColumns=true
server.undo.logSaveDays=7
server.undo.logDeletePeriod=86400000
client.undo.logTable=undo_log
client.log.exceptionRate=100
transport.serialization=seata
transport.compressor=none
metrics.enabled=false
metrics.registryType=compact
metrics.exporterList=prometheus
metrics.exporterPrometheusPort=9898
  2. Run the initialization script nacos-config.sh, located in the source tree at seata\script\config-center\nacos
    sh nacos-config.sh <nacos-server address>
sh nacos-config.sh 127.0.0.1
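Under the hood, nacos-config.sh iterates over the key=value lines of config.txt and publishes each one as a separate config entry (dataId = key, group = SEATA_GROUP) through Nacos's HTTP open API. A dry-run sketch of that loop; the real script issues the curl call shown in the comment (Nacos assumed at 127.0.0.1:8848, no auth):

```shell
# Tiny stand-in config.txt so the sketch is self-contained
printf 'store.mode=db\nstore.db.user=root\n' > config.txt

while IFS='=' read -r key value; do
  [ -z "$key" ] && continue        # skip blank lines
  # 'value' keeps everything after the FIRST '=', so values like
  # jdbc:mysql://...?useUnicode=true survive intact
  # real script: curl -X POST "http://127.0.0.1:8848/nacos/v1/cs/configs" \
  #                   -d "dataId=${key}" -d "group=SEATA_GROUP" -d "content=${value}"
  echo "POST /nacos/v1/cs/configs dataId=${key} group=SEATA_GROUP content=${value}"
done < config.txt
```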
  • Step 5: start seata-server and nacos-server
    If no errors appear during startup, both servers are up
  • Step 6: check in the Nacos console that the service registered successfully
    From here on, configuration changes can be made directly in Nacos


  • Step 7: update the pom.xml of every microservice participating in Seata
    Add the Seata dependencies
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-alibaba-nacos-discovery</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-alibaba-nacos-config</artifactId>
        </dependency>
        <!-- distributed transaction management -->
        <dependency>
            <groupId>com.alibaba.cloud</groupId>
            <artifactId>spring-cloud-alibaba-seata</artifactId>
            <version>${alibaba.cloud.version}</version>
            <exclusions>
                <exclusion>
                    <groupId>io.seata</groupId>
                    <artifactId>seata-spring-boot-starter</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>io.seata</groupId>
            <artifactId>seata-spring-boot-starter</artifactId>
            <version>1.2.0</version>
        </dependency>
  • Step 8: update the application.yml of every participating microservice
    Add the Seata configuration; a template lives in the source tree at seata\script\client\spring
    Trim the settings to what you actually need; here everything is copied over
#================ seata config =======================

seata:
  enabled: true
  application-id: ${spring.application.name}
  tx-service-group: my_test_tx_group
  enable-auto-data-source-proxy: true
  use-jdk-proxy: false
  excludes-for-auto-proxying: firstClassNameForExclude,secondClassNameForExclude
  client:
    rm:
      async-commit-buffer-limit: 1000
      report-retry-count: 5
      table-meta-check-enable: false
      report-success-enable: false
      saga-branch-register-enable: false
      lock:
        retry-interval: 10
        retry-times: 30
        retry-policy-branch-rollback-on-conflict: true
    tm:
      commit-retry-count: 5
      rollback-retry-count: 5
    undo:
      data-validation: true
      log-serialization: jackson
      log-table: undo_log
      only-care-update-columns: true
    log:
      exceptionRate: 100
  service:
    vgroupMapping:
      my_test_tx_group: default
    grouplist:
      default: 127.0.0.1:8091
    enable-degrade: false
    disable-global-transaction: false
  transport:
    shutdown:
      wait: 3
    thread-factory:
      boss-thread-prefix: NettyBoss
      worker-thread-prefix: NettyServerNIOWorker
      server-executor-thread-prefix: NettyServerBizHandler
      share-boss-worker: false
      client-selector-thread-prefix: NettyClientSelector
      client-selector-thread-size: 1
      client-worker-thread-prefix: NettyClientWorkerThread
      worker-thread-size: default
      boss-thread-size: 1
    type: TCP
    server: NIO
    heartbeat: true
    serialization: seata
    compressor: none
    enable-client-batch-send-request: true
  config:
    # use nacos as the configuration center
    type: nacos
    nacos:
      namespace: public
      serverAddr: 127.0.0.1:8848
      group: SEATA_GROUP
      userName: ""
      password: ""
  registry:
    # register via nacos; seata-server is discovered through nacos
    type: nacos
    nacos:
      application: seata-server
      server-addr: 127.0.0.1:8848
      namespace: public
      cluster: default
      userName: ""
      password: ""
  • Step 9: add exclude = {DataSourceAutoConfiguration.class} to the Application class of every participating microservice
@SpringBootApplication(exclude = {DataSourceAutoConfiguration.class})
public class Application {
    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }
}
  • Step 10: add interceptors to propagate the distributed transaction xid

SeataHandlerInterceptor

import io.seata.core.context.RootContext;
import org.apache.commons.lang3.StringUtils;
import org.springframework.web.servlet.handler.HandlerInterceptorAdapter;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class SeataHandlerInterceptor extends HandlerInterceptorAdapter {

    /**
     * Pre-handle: bind the global transaction ID from the request header
     * into the current context.
     * @param request HttpServletRequest
     * @param response HttpServletResponse
     * @param handler handler
     * @return whether to continue the chain
     * @throws Exception on error
     */
    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) throws Exception {
        String currentXid = RootContext.getXID();
        String globalXid = request.getHeader(RootContext.KEY_XID);
        if (StringUtils.isBlank(currentXid) && StringUtils.isNotBlank(globalXid)) {
            RootContext.bind(globalXid);
        }
        return true;
    }

    /**
     * After-completion: unbind the xid, and restore it if it was changed
     * during the request.
     * @param request HttpServletRequest
     * @param response HttpServletResponse
     * @param handler handler
     * @param ex exception, if any
     * @throws Exception on error
     */
    @Override
    public void afterCompletion(HttpServletRequest request, HttpServletResponse response, Object handler, Exception ex) throws Exception {
        String globalXid = request.getHeader(RootContext.KEY_XID);
        if (StringUtils.isBlank(globalXid)) {
            return;
        }
        String unBindXid = RootContext.unbind();
        // the xid was changed during the transaction; re-bind it
        if (!globalXid.equalsIgnoreCase(unBindXid)) {
            RootContext.bind(unBindXid);
        }
    }

}

SeataRestTemplateInterceptor

import io.seata.core.context.RootContext;
import lombok.extern.log4j.Log4j2;
import org.apache.commons.lang3.StringUtils;
import org.springframework.http.HttpRequest;
import org.springframework.http.client.ClientHttpRequestExecution;
import org.springframework.http.client.ClientHttpRequestInterceptor;
import org.springframework.http.client.ClientHttpResponse;
import org.springframework.http.client.support.HttpRequestWrapper;

import java.io.IOException;


@Log4j2
public class SeataRestTemplateInterceptor implements ClientHttpRequestInterceptor {

    /**
     * RestTemplate request interceptor:
     * puts the global transaction ID into the request headers.
     * @param request request
     * @param body request body
     * @param execution ClientHttpRequestExecution
     * @return ClientHttpResponse
     * @throws IOException on I/O error
     */
    @Override
    public ClientHttpResponse intercept(HttpRequest request, byte[] body, ClientHttpRequestExecution execution) throws IOException {
        HttpRequestWrapper requestWrapper = new HttpRequestWrapper(request);
        String xid = RootContext.getXID();
        if (StringUtils.isNotBlank(xid)) {
            requestWrapper.getHeaders().add(RootContext.KEY_XID, xid);
            log.info("distributed transaction xid:{}", xid);
        }
        return execution.execute(requestWrapper, body);
    }

}

SeataRestTemplateConfig

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Configuration;
import org.springframework.http.client.ClientHttpRequestInterceptor;
import org.springframework.web.client.RestTemplate;

import javax.annotation.PostConstruct;
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;


@Configuration
public class SeataRestTemplateConfig {

    @Autowired(required = false)
    private Collection<RestTemplate> restTemplates;

    @PostConstruct
    public void init() {
        if (this.restTemplates == null) {
            return;
        }
        // register the xid-propagating interceptor on every RestTemplate bean
        for (RestTemplate restTemplate : this.restTemplates) {
            List<ClientHttpRequestInterceptor> interceptors = new ArrayList<>(restTemplate.getInterceptors());
            interceptors.add(new SeataRestTemplateInterceptor());
            restTemplate.setInterceptors(interceptors);
        }
    }
}

SeataFeignClientInterceptor

import feign.RequestInterceptor;
import feign.RequestTemplate;
import io.seata.core.context.RootContext;
import lombok.extern.log4j.Log4j2;
import org.apache.commons.lang3.StringUtils;

@Log4j2
public class SeataFeignClientInterceptor implements RequestInterceptor {
    @Override
    public void apply(RequestTemplate requestTemplate) {
        String xid = RootContext.getXID();
        if (StringUtils.isNotBlank(xid)) {
            requestTemplate.header(RootContext.KEY_XID, xid);
            log.info("distributed transaction xid:{}", xid);
        }
    }
}

WebConfiguration

import feign.RequestInterceptor;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.InterceptorRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurationSupport;

/***
 * Note: only one WebMvcConfigurationSupport may exist in a project;
 * when there are several, only one of them takes effect.
 */
@Configuration
public class WebConfiguration extends WebMvcConfigurationSupport {

    @Bean
    public RequestInterceptor requestInterceptor() {
        return new SeataFeignClientInterceptor();
    }

    @Override
    protected void addInterceptors(InterceptorRegistry registry) {
        registry.addInterceptor(new SeataHandlerInterceptor()).addPathPatterns("/**");
        super.addInterceptors(registry);
    }

}
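Because extending WebMvcConfigurationSupport disables Spring Boot's MVC auto-configuration and only one such class takes effect, an alternative sketch (my suggestion, not part of the original setup) is to implement WebMvcConfigurer instead, which composes with any other MVC configuration in the project:

```java
import feign.RequestInterceptor;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.InterceptorRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurer;

// Alternative to extending WebMvcConfigurationSupport: all WebMvcConfigurer
// beans are applied, so this does not conflict with other MVC configuration.
@Configuration
public class SeataWebConfig implements WebMvcConfigurer {

    @Bean
    public RequestInterceptor requestInterceptor() {
        return new SeataFeignClientInterceptor();
    }

    @Override
    public void addInterceptors(InterceptorRegistry registry) {
        registry.addInterceptor(new SeataHandlerInterceptor()).addPathPatterns("/**");
    }
}
```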
  • Step 11: because seata-spring-boot-starter is used, there is no need to add file.conf / registry.conf to each microservice, nor to configure the data source proxy manually
  • Step 12: add @GlobalTransactional on the business initiator; the other participating microservices do not need @GlobalTransactional
    The global transaction initiator:
@Log4j2
@Service
public class OrderServiceImpl extends BaseJpaMongoServiceImpl<OrderInfo, Long> implements OrderService {

    @Autowired
    private OrderRepository orderRepository;
    @Autowired
    private RestTemplate restTemplate;
    @Autowired
    HttpServletRequest request;

    private final String PAY_SERVICE_HOST = "http://127.0.0.1:18088/api/v1/verify/order/apportion/s";


    public OrderServiceImpl(BaseJpaRepository<OrderInfo, Long> baseRepository) {
        super(baseRepository);
    }


    @Transactional
    @Override
    public ResultInfo saveRecord(OrderDto record) {
        OrderInfo order = new OrderInfo();
        order.setOrderNumber(String.valueOf(System.currentTimeMillis()));
        order.setOrderName("test order");
        order.setOrderClassify("1");
        order.setOrderStatus((byte) 0);
        order.setOrderRemarks("test order rollback");
        log.info("transaction xid: {}", RootContext.getXID());
        OrderInfo saveObj = this.orderRepository.save(order);
        if (saveObj != null && saveObj.getId() != null) {
            HttpHeaders headers = new HttpHeaders();
            Enumeration<String> headerNames = request.getHeaderNames();
            while (headerNames.hasMoreElements()) {
                String key = (String) headerNames.nextElement();
                String value = request.getHeader(key);
                headers.add(key, value);
            }
            // call the other service
            ResultInfo result = restTemplate.postForObject(PAY_SERVICE_HOST, new HttpEntity<String>(headers), ResultInfo.class);
            log.info(result.getMessage());
            // With annotation-driven distributed transactions, a rollback requires the exception to
            // reach the initiator so that its @GlobalTransactional can detect it: either the provider
            // throws directly, or it returns an error code which the consumer checks and then rolls back on.
            if (!result.getSuccess()) {
                log.info("reloading transaction {} for rollback", RootContext.getXID());
                try {
                    GlobalTransactionContext.reload(RootContext.getXID()).rollback();
                } catch (TransactionException e) {
                    e.printStackTrace();
                }
            }
        }
        //int i = 0/0;
        return ResultUtil.success();
    }


    /**
     * @GlobalTransactional on the initiator's method opens the global transaction; Seata then
     * propagates the xid to the other services through the interceptors, forming the distributed
     * transaction. Remote participants only annotate their methods with @Transactional to open
     * a local transaction.
     * The local transaction must be nested inside the global @GlobalTransactional scope.
     * @param record order dto
     * @return result
     */
    @GlobalTransactional
    @Override
    public ResultInfo test(OrderDto record) {
        // saveRecord carries its own local transaction; nest it inside the global transaction
        return saveRecord(record);
    }

}

Other microservice transaction participants:

@Log4j2
@Service
public class OrderApportionServiceImpl extends BaseJpaMongoServiceImpl<OrderApportion, Long> implements OrderApportionService {

    @Autowired
    private OrderApportionRepository orderApportionRepository;

    public OrderApportionServiceImpl(BaseJpaRepository<OrderApportion, Long> baseRepository) {
        super(baseRepository);
    }

    @Transactional
    @Override
    public ResultInfo saveRecord(OrderApportionDto record) {
        OrderApportion info = new OrderApportion();
        info.setOrderId(1L);
        info.setOrderNumber("10");
        info.setOrderName("test rollback");
        info.setOrderClassify("10");
        info.setStatus((byte) 0);
        info.setUserId(1L);
        info.setApportionNumber("110");
        info.setRemarks("test");
        log.info("transaction xid: {}", RootContext.getXID());
        OrderApportion saveObj = this.orderApportionRepository.save(info);
       // int i = 0/0;
        return ResultUtil.success();
    }

}
  • Step 13: start every business microservice and exercise the rollback path; verify that the rollback succeeds, and in particular that the xid each service logs is identical across all of them
    If the rollback does not happen, the cause varies from setup to setup; consult the documents below to troubleshoot

Reference: https://seata.io/zh-cn/docs/dev/mode/at-mode.html
FAQ: https://seata.io/zh-cn/docs/overview/faq.html
