Distributed Transaction Notes
1. Eventual Consistency Scheme [Asynchronous, Eventually Consistent]
1.1 Flow Diagram
1.2 Business Flow Steps
- Business System A sends a message to the message service, which stores it with status "to-be-confirmed".
- After persisting the message, the message service returns a result (success or failure) to System A.
- System A completes its own business logic (e.g., deducting a fee), then notifies the message service.
- On that notification, the message service changes the message status to "sending" and publishes the message through the message middleware.
- System B receives the message.
- System B completes its local business logic (e.g., adding points), then notifies the message service.
- The message service deletes the stored message.
1.3 Exception Handling
- If step 1 fails, System A retries; if the retries keep failing, raise an alert and stop the business flow. Data is still consistent at this point.
- If the message service stores the "to-be-confirmed" record but the response to System A fails: either System A retries, or a scheduled task repeats the notification.
- A scheduled task scans messages in "sending" status and queries System B for their actual state; processing must be idempotent.
- A scheduled task scans messages in "to-be-confirmed" status and queries System A for their actual state; idempotent as well.
- Once the scheduled task has exhausted its retry count and time budget, the message is moved to a manual-handling queue, monitored via a dead-letter queue.
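The scheduled scans above boil down to one bounded-retry loop. This is a minimal, self-contained sketch under stated assumptions: the in-memory lists, the `Msg` class, and `MAX_RETRIES` are stand-ins for the real message table and manual-handling queue, not part of any framework.

```java
import java.util.*;

// Sketch of the recovery task: messages stuck in "SENDING" are re-delivered a
// bounded number of times, then escalated to a manual-handling queue.
public class RecoveryTask {
    static final int MAX_RETRIES = 5; // hypothetical retry budget

    static class Msg {
        String id;
        String status;
        int retries;
        Msg(String id, String status) { this.id = id; this.status = status; }
    }

    final List<Msg> store = new ArrayList<>();       // stands in for the message table
    final List<Msg> manualQueue = new ArrayList<>(); // stands in for the dead-letter/manual queue

    // One scan pass: re-deliver "SENDING" messages; because the consumer is
    // idempotent, repeated delivery of the same message is safe.
    void scan(java.util.function.Predicate<Msg> deliver) {
        for (Iterator<Msg> it = store.iterator(); it.hasNext(); ) {
            Msg m = it.next();
            if (!"SENDING".equals(m.status)) continue;
            if (deliver.test(m)) {
                it.remove();                       // system B confirmed: delete the message
            } else if (++m.retries >= MAX_RETRIES) {
                it.remove();
                manualQueue.add(m);                // retries exhausted: escalate
            }
        }
    }
}
```

In the real system the `deliver` callback is the reverse query against System B (or A), and the manual queue is fed through a dead-letter exchange.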
1.4 Demo Project Flow
- Order service: create orders, update orders, simulate successful payment.
- User service: create users, update user points.
- Message service: create messages, update messages, run the scheduled tasks.
- Queue service: listens for messages and handles the corresponding business.
- Tooling: RabbitMQ.
- First create an order with status "paying" and send a message to the message service to record it. Once a success response comes back, simulate a successful payment and complete the local transaction, then update the message status. The user service adds the points, and finally the message is deleted. The scheduled tasks cover the failure paths.
2. Best-Effort Notification [Asynchronous, Messages May Be Lost]
2.1 Flow Diagram
2.2 Business Flow Steps
- System A completes its local transaction, then calls the message service asynchronously.
- The message service records the message; a single stored record is enough.
- The message service connects to the queue service to send the message.
- If the response is 200 (or otherwise indicates success), the message is deleted.
- After more than five failed attempts, the message is deleted and dropped into the dead-letter queue.
- Note that the receiver must be idempotent.
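Receiver-side idempotency from the last step can be as simple as a processed-id check before applying the message. A minimal sketch, assuming an in-memory set as a stand-in for the consume-log table (all names here are hypothetical):

```java
import java.util.*;

// Sketch of an idempotent receiver: a message is applied at most once no matter
// how many times the best-effort notifier re-delivers it.
public class IdempotentReceiver {
    private final Set<String> processed = new HashSet<>(); // stands in for the consume-log table
    private int points = 0;

    // Returns true only when the message is applied for the first time.
    public boolean onMessage(String messageId, int pointsToAdd) {
        if (!processed.add(messageId)) return false; // duplicate delivery: skip
        points += pointsToAdd;                       // business effect, applied once
        return true;
    }

    public int getPoints() { return points; }
}
```

In a real service the processed-id check and the business update must sit in the same local transaction, so a crash between them cannot apply the effect twice.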
2.3 Demo Project
- Order service: completes the order, calls the message service.
- Message service: records the message, sends/deletes messages from a scheduled task, calls the queue service.
- Queue service: connects to RabbitMQ to send messages; ACKs can be used.
- User service: receives the user's order message.
3. LCN Solution [Strong Consistency]
Note: the official site is no longer reachable. The latest release is 5.0.2, and the code still works.
https://github.com/codingapi/tx-lcn/releases
https://github.com/codingapi/txlcn-docs/tree/master/docs/zh-cn
3.1 Flow Diagram
Create transaction group
Before the initiator runs its business code, it calls TxManager to create a transaction group object and obtains the transaction identifier, the GroupId.
Join transaction group
After a participant finishes its business method, it reports its module's transaction information to TxManager.
Notify transaction group
After the initiator finishes its business code, it reports its execution status to TxManager. Based on the final transaction status and the group's information, TxManager tells each participating module to commit or roll back, and returns the result to the initiator.
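The create / join / notify sequence above can be sketched as one tiny in-process coordinator. This is an illustration only, under stated assumptions: the names are hypothetical and the real TxManager is a networked service, not a map in memory.

```java
import java.util.*;
import java.util.function.Consumer;

// Sketch of the transaction-group protocol: the initiator opens a group, each
// participant registers a commit/rollback callback, and the final notification
// drives every participant to the same outcome.
public class TxManagerSketch {
    private final Map<String, List<Consumer<Boolean>>> groups = new HashMap<>();

    // Initiator: create the transaction group and get its GroupId.
    public String createGroup() {
        String groupId = UUID.randomUUID().toString();
        groups.put(groupId, new ArrayList<>());
        return groupId;
    }

    // Participant: join after running its business method.
    public void joinGroup(String groupId, Consumer<Boolean> participant) {
        groups.get(groupId).add(participant);
    }

    // Initiator: report the final status; everyone commits (true) or rolls back (false).
    public void notifyGroup(String groupId, boolean commit) {
        for (Consumer<Boolean> p : groups.remove(groupId)) {
            p.accept(commit);
        }
    }
}
```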
3.2 The Three LCN Modes
LCN mode:
Local transactions are handled by proxying the connection; TxManager then coordinates and controls the transaction globally.
Characteristics:
- Low code intrusiveness.
- Limited to modules that hold a local connection object and control the transaction through it.
- Commit and rollback are driven by the local transaction, so data consistency is very strongly guaranteed.
- The proxied connection is only released together with the initiator's transaction, so it is held for a relatively long time.
TCC mode:
Try: attempt the business operation. Confirm: confirm it. Cancel: cancel it.
Characteristics:
- High code intrusiveness: every business operation must implement all three steps.
- Works whether or not a local transaction exists, so it applies broadly.
- Data consistency is entirely in the developer's hands, which puts very high demands on business development.
TXC mode:
Before each SQL statement executes, its details are captured and a lock is created (locks are stored in Redis). On rollback, the captured SQL impact information is used to roll the changes back.
Characteristics:
- Low code intrusiveness.
- Supports SQL only.
- Queries the data affected by each SQL statement first, so it is slower than LCN mode.
- Does not hold database connection resources.
3.3 Demo Project
- Redis instance
- TM project
- Order service
- User service
3.4 Integrating LCN with Spring Boot
3.4.1 Create the project
<dependency>
    <groupId>com.codingapi.txlcn</groupId>
    <artifactId>txlcn-tm</artifactId>
    <version>5.0.2.RELEASE</version>
</dependency>
3.4.2 Add configuration
spring.application.name=TransactionManager
server.port=7970
spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver
spring.datasource.url=jdbc:mysql://127.0.0.1:3306/tx-manager?characterEncoding=UTF-8&serverTimezone=Asia/Shanghai
spring.datasource.username=root
spring.datasource.password=root
spring.jpa.database-platform=org.hibernate.dialect.MySQL5InnoDBDialect
spring.jpa.hibernate.ddl-auto=update
mybatis.configuration.map-underscore-to-camel-case=true
mybatis.configuration.use-generated-keys=true
#tx-lcn.logger.enabled=true
# TxManager Host Ip
#tx-lcn.manager.host=127.0.0.1
# TxClient connection port
#tx-lcn.manager.port=8070
# heartbeat interval (ms)
#tx-lcn.manager.heart-time=15000
# total allowed distributed-transaction execution time
#tx-lcn.manager.dtx-time=30000
# parameter delayed-deletion time, in ms
#tx-lcn.message.netty.attr-delay-time=10000
#tx-lcn.manager.concurrent-level=128
# enable logging
#tx-lcn.logger.enabled=true
#logging.level.com.codingapi=debug
# redis host
#spring.redis.host=127.0.0.1
# redis port
#spring.redis.port=6379
# redis password
#spring.redis.password=
3.4.3 Start the TM project
http://localhost:7970/admin/index.html#/task  Password: codingapi
3.4.4 Annotate the application class
@EnableDistributedTransaction
@SpringBootApplication
public class ChlLoanServiceOrderApplication {
    public static void main(String[] args) {
        SpringApplication.run(ChlLoanServiceOrderApplication.class, args);
    }
}
3.4.5 Create client-A
Create a new project and add the corresponding Maven dependencies.
<!--Step 1: add the LCN dependencies-->
<!--Distributed transaction-->
<dependency>
    <groupId>com.codingapi.txlcn</groupId>
    <artifactId>txlcn-tc</artifactId>
    <version>5.0.2.RELEASE</version>
</dependency>
<dependency>
    <groupId>com.codingapi.txlcn</groupId>
    <artifactId>txlcn-txmsg-netty</artifactId>
    <version>5.0.2.RELEASE</version>
</dependency>
3.4.6 Create client-B
Create a new project and add the same Maven dependencies as above.
3.4.7 Add configuration to both projects
# Enable the LCN load-balancing strategy (an optimization; functionality is unaffected either way)
tx-lcn.ribbon.loadbalancer.dtx.enabled=true
# Defaults to the TM's local default port
tx-lcn.client.manager-address=127.0.0.1:8070
# Enable logging; defaults to false
tx-lcn.logger.enabled=true
3.4.8 Client-A handling
Flow: in Client-A, add a transactional method in the service layer annotated with
@Transactional
@LcnTransaction
- Step 1: run the local transaction: insert the order and mark it as paid.
- Step 2: call client-B to add points for the user.
- On success, both projects get their corresponding rows.
- On failure, both projects roll their data back.
/**
 * Test LCN
 * @return
 */
@Transactional   // local transaction annotation
@LcnTransaction  // distributed transaction annotation
public ResultVO testLcn() throws Exception {
    // Step 1: insert the order
    LoanOrderPO loanOrderPO = new LoanOrderPO();
    loanOrderPO.setConsumeAccount(new BigDecimal(1001));
    loanOrderPO.setCreateTime(System.currentTimeMillis());
    loanOrderPO.setEditTime(System.currentTimeMillis());
    loanOrderPO.setOrderId(UUID.randomUUID().toString());
    loanOrderPO.setUserId("1001");
    loanOrderPOMapper.insertSelective(loanOrderPO);
    // Step 2: add points for the user
    RpTransactionMessage rpTransactionMessage = new RpTransactionMessage();
    String paramJson = JSON.toJSONString(loanOrderPO);
    rpTransactionMessage.setConsumerQueue("order.pay");
    rpTransactionMessage.setCreater("lemon-order");
    rpTransactionMessage.setMessageBody(paramJson);
    rpTransactionMessage.setMessageDataType("json");
    rpTransactionMessage.setMessageId(UUID.randomUUID().toString());
    rpTransactionMessage.setField1("paying");
    paramJson = JSON.toJSONString(rpTransactionMessage);
    String url = "http://127.0.0.1:8092/user/create";
    String result = HttpClientUtil.postBody(url, paramJson);
    ResultVO resultVO = new ResultVO();
    resultVO.setData(result);
    return resultVO;
}
3.4.9 Client-B business handling
Flow: in Client-B, add a transactional method in the service layer annotated with
@Transactional
@LcnTransaction
- First check the idempotency of the request.
- If it has already been processed, query the existing result and return it.
- Otherwise continue with the logic below.
- Run the local transaction and credit the user's points.
- If the local transaction throws, both client-A and client-B roll back.
@LcnTransaction
@Transactional
@Override
public ResultVO addUserCount(RpTransactionMessage rpTransactionMessage) {
    // Has this message been processed already? (message idempotency)
    Example example = new Example(UserLoanConsumeLogPO.class);
    example.createCriteria().andEqualTo("messageId", rpTransactionMessage.getMessageId());
    List<UserLoanConsumeLogPO> list = consumeLogPOMapper.selectByExample(example);
    if (null != list && list.size() > 0) {
        return new ResultVO();
    }
    // Parse the message body
    UserCountForm userCountForm = JSONObject.parseObject(rpTransactionMessage.getMessageBody()).toJavaObject(UserCountForm.class);
    // Check whether the user already exists
    Example useExample = new Example(UserLoanConsumePO.class);
    useExample.createCriteria().andEqualTo("userId", userCountForm.getUserId());
    List<UserLoanConsumePO> userLoanConsumePOList = consumePOMapper.selectByExample(useExample);
    if (null != userLoanConsumePOList && userLoanConsumePOList.size() > 0) {
        // Add points to the existing user
        Example consumePoExample = new Example(UserLoanConsumePO.class);
        consumePoExample.createCriteria().andEqualTo("userId", userCountForm.getUserId());
        UserLoanConsumePO userLoanConsumePO = new UserLoanConsumePO();
        userLoanConsumePO.setConsumeAccount(userLoanConsumePOList.get(0).getConsumeAccount().add(userCountForm.getConsumeAccount()));
        consumePOMapper.updateByExampleSelective(userLoanConsumePO, consumePoExample);
    } else {
        UserLoanConsumePO userLoanConsumePO = new UserLoanConsumePO();
        userLoanConsumePO.setConsumeAccount(userCountForm.getConsumeAccount());
        userLoanConsumePO.setCreateTime(System.currentTimeMillis());
        userLoanConsumePO.setEditTime(System.currentTimeMillis());
        userLoanConsumePO.setUserId(userCountForm.getUserId());
        // Create the user and credit the points
        consumePOMapper.insertSelective(userLoanConsumePO);
    }
    // Record the processed message
    UserLoanConsumeLogPO userLoanConsumeLogPO = new UserLoanConsumeLogPO();
    userLoanConsumeLogPO.setCreateTime(System.currentTimeMillis());
    userLoanConsumeLogPO.setEditTime(System.currentTimeMillis());
    userLoanConsumeLogPO.setMessageId(rpTransactionMessage.getMessageId());
    userLoanConsumeLogPO.setUserId(userCountForm.getUserId());
    consumeLogPOMapper.insertSelective(userLoanConsumeLogPO);
    // Deliberately throw to verify that both sides roll back
    if (1 == 1) {
        throw new BusinessException("safa", "sdfs");
    }
    return new ResultVO();
}
4. Seata Solution [Strong Consistency]
4.1 Latest Version
https://github.com/seata/seata/releases/tag/v1.3.0
http://seata.io/zh-cn/docs/ops/deploy-guide-beginner.html
Currently recommended: Spring Boot 2.2.5, Spring Cloud Hoxton.SR3, Spring Cloud Alibaba 2.2.1.
Terminology:
TC (Transaction Coordinator): maintains the state of global and branch transactions and drives global commit or rollback.
TM (Transaction Manager): defines the scope of a global transaction; begins, commits, or rolls back the global transaction.
RM (Resource Manager): manages the resources used by branch transactions, talks to the TC to register branches and report their status, and drives branch commit or rollback.
4.2 Latest Documentation
http://seata.io/zh-cn/index.html
4.3 Modes Supported by Seata
4.3.1 AT Mode
4.3.1.1 AT Overview
Prerequisite: a relational database that supports local ACID transactions.
Mechanism, phase one: business data and the rollback-log record are committed in the same local transaction, then the local lock and connection are released.
Phase two: commit is asynchronous; on rollback, the log is used to compensate in reverse.
4.3.1.2 Write Isolation
Two global transactions, tx1 and tx2, each update field m of table a; m starts at 1000.
tx1 starts first: it opens a local transaction, takes the local lock, and updates m = 1000 - 100 = 900. Before committing locally it acquires the record's global lock, then commits and releases the local lock. tx2 starts next: it opens a local transaction, takes the local lock, and updates m = 900 - 100 = 800. Before committing locally it tries to take the record's global lock; because tx1 holds it until tx1's global commit, tx2 must wait and retry.
When tx1 commits globally in phase two, it releases the global lock; tx2 can then take the global lock and commit its local transaction.
If tx1 instead rolls back globally in phase two, it must re-acquire the record's local lock to run the compensating update that rolls back the branch.
If at that moment tx2 is still waiting for the record's global lock while holding the local lock, tx1's branch rollback fails. The branch rollback keeps retrying until tx2's wait for the global lock times out; tx2 then gives up the global lock, rolls back its local transaction, and releases the local lock, after which tx1's branch rollback finally succeeds.
Because tx1 holds the global lock until it finishes, dirty writes cannot occur.
4.3.1.4 Read Isolation
On top of a local isolation level of Read Committed (or higher), the default global isolation level of Seata's AT mode is Read Uncommitted.
If a particular scenario requires global Read Committed, Seata currently provides it by proxying SELECT FOR UPDATE statements.
Executing a SELECT FOR UPDATE acquires the global lock; if the global lock is held by another transaction, the local lock is released (the local SELECT FOR UPDATE is rolled back) and the statement is retried. Throughout this process the query blocks until the global lock is acquired, so the data returned has been globally committed.
For overall performance, Seata does not proxy every SELECT statement, only SELECT ... FOR UPDATE.
4.3.1.5 Worked Example
An AT branch runs the business SQL: update product set name = 'GTS' where name = 'TXC';
Phase one: parse the SQL to get its type (UPDATE), the table (product), the condition (where name = 'TXC'), and so on.
Phase one: query the before image, generating a query from the parsed condition to locate the data:
select id, name, since from product where name = 'TXC';
Phase one: execute the business SQL, updating the record's name to 'GTS'.
Phase one: query the after image, locating the data by primary key based on the before-image result:
select id, name, since from product where id = 1;
Insert the rollback log: the before/after images plus the business SQL metadata are combined into one rollback-log record and inserted into the UNDO_LOG table:
{
  "branchId": 641789253,
  "undoItems": [{
    "afterImage": {
      "rows": [{
        "fields": [
          { "name": "id", "type": 4, "value": 1 },
          { "name": "name", "type": 12, "value": "GTS" },
          { "name": "since", "type": 12, "value": "2014" }
        ]
      }],
      "tableName": "product"
    },
    "beforeImage": {
      "rows": [{
        "fields": [
          { "name": "id", "type": 4, "value": 1 },
          { "name": "name", "type": 12, "value": "TXC" },
          { "name": "since", "type": 12, "value": "2014" }
        ]
      }],
      "tableName": "product"
    },
    "sqlType": "UPDATE"
  }],
  "xid": "xid:xxx"
}
- Before committing, register the branch with the TC and request the global lock on the product-table record whose primary key is 1.
- Local commit: the business data update and the UNDO LOG generated in the steps above are committed in the same transaction.
- Report the local commit result to the TC.
- Phase two, rollback received: look up the corresponding UNDO LOG record by XID and Branch ID.
- Data validation: compare the after image in the UNDO LOG with the current data. A difference means the data was modified by something outside this global transaction; handling then depends on the configured policy (described in separate documentation).
- Generate and execute the rollback statement from the before image in the UNDO LOG and the business SQL metadata:
- update product set name = 'TXC' where id = 1;
- Commit the local transaction and report its result (i.e., the branch rollback result) to the TC.
- Phase two, commit received: put the TC's branch commit request into an asynchronous task queue and immediately return success to the TC.
- The asynchronous task then deletes the corresponding UNDO LOG records in batches.
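The before-image / rollback mechanics above can be illustrated with an in-memory table. This is a sketch under stated assumptions (hypothetical names; the real implementation serializes images into the UNDO_LOG table and validates the after image before replaying):

```java
import java.util.*;

// Sketch of undo-log based rollback: before each update the prior value is
// saved; rollback replays the before-images in reverse order.
public class UndoLogSketch {
    private final Map<Integer, String> table = new HashMap<>();           // stands in for `product`
    private final Deque<Map.Entry<Integer, String>> undoLog = new ArrayDeque<>();

    public UndoLogSketch(Map<Integer, String> initial) { table.putAll(initial); }

    public void update(int id, String newValue) {
        undoLog.push(new AbstractMap.SimpleEntry<>(id, table.get(id))); // save before-image
        table.put(id, newValue);                                        // execute business update
    }

    public void rollback() {
        while (!undoLog.isEmpty()) {                                    // replay before-images
            Map.Entry<Integer, String> img = undoLog.pop();
            table.put(img.getKey(), img.getValue());
        }
    }

    public String get(int id) { return table.get(id); }
}
```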
4.3.2 TCC Mode
- Phase one, prepare: call the custom prepare logic.
- Phase two, commit: call the custom commit logic.
- Phase two, rollback: call the custom rollback logic.
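The Try/Confirm/Cancel contract can be shown with a single reservation-style account. A minimal sketch, with no Seata APIs involved (the class and its split into available/frozen balances are an assumption for illustration): Try reserves resources, Confirm consumes the reservation, Cancel releases it.

```java
// Sketch of a TCC participant: phase one freezes the amount; phase two either
// spends the frozen amount (confirm) or returns it to the balance (cancel).
public class TccAccount {
    private int available;
    private int frozen;

    public TccAccount(int balance) { this.available = balance; }

    // Try: reserve the resources; fail fast if there is not enough balance.
    public boolean tryDeduct(int amount) {
        if (available < amount) return false;
        available -= amount;
        frozen += amount;
        return true;
    }

    // Confirm: consume the reservation.
    public void confirm(int amount) { frozen -= amount; }

    // Cancel: release the reservation back to the available balance.
    public void cancel(int amount) { frozen -= amount; available += amount; }

    public int available() { return available; }
    public int frozen() { return frozen; }
}
```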
4.3.3 SAGA Mode
Saga is Seata's long-transaction solution. In Saga mode, every participant in the business flow commits its local transaction; when a participant fails, the participants that already succeeded are compensated. Both the phase-one forward services and the phase-two compensation services are implemented by the business developers.
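The forward-then-compensate behaviour can be sketched in a few lines. This is an illustration only, not Seata's state-machine engine (all names are hypothetical): each step pairs a forward action with a compensation, and the first failure triggers compensation of the completed steps in reverse order.

```java
import java.util.*;
import java.util.function.Supplier;

// Sketch of the Saga pattern: run steps forward; on failure, compensate
// the already-completed steps in reverse.
public class SagaSketch {
    public static class Step {
        final Supplier<Boolean> action;   // forward service; true = committed
        final Runnable compensate;        // compensation service
        public Step(Supplier<Boolean> action, Runnable compensate) {
            this.action = action;
            this.compensate = compensate;
        }
    }

    // Returns true if every step committed, false if a failure was compensated.
    public static boolean run(List<Step> steps) {
        Deque<Step> done = new ArrayDeque<>();
        for (Step s : steps) {
            if (s.action.get()) {
                done.push(s);
            } else {
                while (!done.isEmpty()) done.pop().compensate.run();
                return false;
            }
        }
        return true;
    }
}
```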
4.4 Two Seata Setups
4.4.1 Without third-party dependencies
The client talks to Seata directly; this requires file.conf. If a registry center is used instead, delete the file.conf file.
4.4.2 Registry/database-backed
This setup uses the contents of registry.conf.
4.5 Spring Boot + Nacos + Seata
Run and start Nacos first (not covered in detail here). seata-server's default port is 8091.
sh startup.sh -m standalone
4.5.1 Download seata-server
https://seata.io/zh-cn/blog/download.html
4.5.2 Run the SQL scripts
Download the source; the scripts can also be found under seata\seata-1.3.0\script\server\db.
https://github.com/seata/seata/blob/develop/script/server/db/mysql.sql
4.5.3 Add this SQL to every business database
-- Note: 0.3.0+ adds the unique index ux_undo_log
CREATE TABLE `undo_log` (
`id` bigint(20) NOT NULL AUTO_INCREMENT,
`branch_id` bigint(20) NOT NULL,
`xid` varchar(100) NOT NULL,
`context` varchar(128) NOT NULL,
`rollback_info` longblob NOT NULL,
`log_status` int(11) NOT NULL,
`log_created` datetime NOT NULL,
`log_modified` datetime NOT NULL,
`ext` varchar(100) DEFAULT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `ux_undo_log` (`xid`,`branch_id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;
4.5.4 Modify the original config.txt
The configuration from the source tree needs to be pushed to Nacos; pay attention to the values that must change.
https://github.com/seata/seata/tree/develop/script/config-center/config.txt
The main changes are setting store.mode=db and updating the MySQL-related settings.
transport.type=TCP
transport.server=NIO
transport.heartbeat=true
transport.enableClientBatchSendRequest=false
transport.threadFactory.bossThreadPrefix=NettyBoss
transport.threadFactory.workerThreadPrefix=NettyServerNIOWorker
transport.threadFactory.serverExecutorThreadPrefix=NettyServerBizHandler
transport.threadFactory.shareBossWorker=false
transport.threadFactory.clientSelectorThreadPrefix=NettyClientSelector
transport.threadFactory.clientSelectorThreadSize=1
transport.threadFactory.clientWorkerThreadPrefix=NettyClientWorkerThread
transport.threadFactory.bossThreadSize=1
transport.threadFactory.workerThreadSize=default
transport.shutdown.wait=3
service.vgroupMapping.my_test_tx_group=default
service.default.grouplist=127.0.0.1:8091
service.enableDegrade=false
service.disableGlobalTransaction=false
client.rm.asyncCommitBufferLimit=10000
client.rm.lock.retryInterval=10
client.rm.lock.retryTimes=30
client.rm.lock.retryPolicyBranchRollbackOnConflict=true
client.rm.reportRetryCount=5
client.rm.tableMetaCheckEnable=false
client.rm.sqlParserType=druid
client.rm.reportSuccessEnable=false
client.rm.sagaBranchRegisterEnable=false
client.tm.commitRetryCount=5
client.tm.rollbackRetryCount=5
client.tm.degradeCheck=false
client.tm.degradeCheckAllowTimes=10
client.tm.degradeCheckPeriod=2000
store.mode=db
store.file.dir=file_store/data
store.file.maxBranchSessionSize=16384
store.file.maxGlobalSessionSize=512
store.file.fileWriteBufferCacheSize=16384
store.file.flushDiskMode=async
store.file.sessionReloadReadSize=100
store.db.datasource=druid
store.db.dbType=mysql
store.db.driverClassName=com.mysql.cj.jdbc.Driver
store.db.url=jdbc:mysql://127.0.0.1:3306/seata?useUnicode=true
store.db.user=root
store.db.password=root
store.db.minConn=5
store.db.maxConn=30
store.db.globalTable=global_table
store.db.branchTable=branch_table
store.db.queryLimit=100
store.db.lockTable=lock_table
store.db.maxWait=5000
store.redis.host=127.0.0.1
store.redis.port=6379
store.redis.maxConn=10
store.redis.minConn=1
store.redis.database=0
store.redis.password=null
store.redis.queryLimit=100
server.recovery.committingRetryPeriod=1000
server.recovery.asynCommittingRetryPeriod=1000
server.recovery.rollbackingRetryPeriod=1000
server.recovery.timeoutRetryPeriod=1000
server.maxCommitRetryTimeout=-1
server.maxRollbackRetryTimeout=-1
server.rollbackRetryTimeoutUnlockEnable=false
client.undo.dataValidation=true
client.undo.logSerialization=jackson
client.undo.onlyCareUpdateColumns=true
server.undo.logSaveDays=7
server.undo.logDeletePeriod=86400000
client.undo.logTable=undo_log
client.log.exceptionRate=100
transport.serialization=seata
transport.compressor=none
metrics.enabled=false
metrics.registryType=compact
metrics.exporterList=prometheus
metrics.exporterPrometheusPort=9898
Push the configuration to the running Nacos:
ndmicro@bogon nacos % sh ./nacos-config.sh
4.5.5 Start seata-server
ndmicro@bogon bin % sh ./seata-server.sh
The startup log looks like this:
2020-08-13 18:29:19.970 INFO 13200 --- [eoutChecker_1_1] i.s.c.r.netty.NettyClientChannelManager : will connect to 127.0.0.1:8091
2020-08-13 18:29:19.975 INFO 13200 --- [eoutChecker_1_1] i.s.core.rpc.netty.NettyPoolableFactory : NettyPool create channel to transactionRole:TMROLE,address:127.0.0.1:8091,msg:< RegisterTMRequest{applicationId='order-service', transactionServiceGroup='my_test_tx_group'} >
2020-08-13 18:29:19.978 INFO 13200 --- [eoutChecker_2_1] i.s.c.r.netty.NettyClientChannelManager : will connect to 127.0.0.1:8091
2020-08-13 18:29:19.979 INFO 13200 --- [eoutChecker_2_1] i.s.core.rpc.netty.NettyPoolableFactory : NettyPool create channel to transactionRole:RMROLE,address:127.0.0.1:8091,msg:< RegisterRMRequest{resourceIds='null', applicationId='order-service', transactionServiceGroup='my_test_tx_group'} >
2020-08-13 18:29:20.202 INFO 13200 --- [eoutChecker_2_1] i.s.c.rpc.netty.RmNettyRemotingClient : register RM success. client version:1.3.0, server version:1.3.0,channel:[id: 0xe5a6191b, L:/127.0.0.1:53413 - R:/127.0.0.1:8091]
2020-08-13 18:29:20.202 INFO 13200 --- [eoutChecker_1_1] i.s.c.rpc.netty.TmNettyRemotingClient : register TM success. client version:1.3.0, server version:1.3.0,channel:[id: 0x9b715e24, L:/127.0.0.1:53412 - R:/127.0.0.1:8091]
2020-08-13 18:29:20.212 INFO 13200 --- [eoutChecker_1_1] i.s.core.rpc.netty.NettyPoolableFactory : register success, cost 141 ms, version:1.3.0,role:TMROLE,channel:[id: 0x9b715e24, L:/127.0.0.1:53412 - R:/127.0.0.1:8091]
2020-08-13 18:29:20.212 INFO 13200 --- [eoutChecker_2_1] i.s.core.rpc.netty.NettyPoolableFactory : register success, cost 142 ms, version:1.3.0,role:RMROLE,channel:[id: 0xe5a6191b, L:/127.0.0.1:53413 - R:/127.0.0.1:8091]
4.5.6 Verify the registration in Nacos
4.5.7 Client Maven dependencies
- Depend on seata-all; or
- Depend on seata-spring-boot-starter, which supports yml/properties configuration (the .conf files can be deleted) and already includes seata-all; or
- Depend on spring-cloud-alibaba-seata, which integrates seata and implements xid propagation.
<dependency>
    <groupId>com.alibaba.cloud</groupId>
    <artifactId>spring-cloud-starter-alibaba-seata</artifactId>
    <version>2.2.1.RELEASE</version>
    <exclusions>
        <exclusion>
            <groupId>io.seata</groupId>
            <artifactId>seata-spring-boot-starter</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>io.seata</groupId>
    <artifactId>seata-spring-boot-starter</artifactId>
    <version>1.3.0</version>
</dependency>
4.5.8 Set Up the First System: Orders
<!--Register with nacos-->
<dependency>
    <groupId>com.alibaba.cloud</groupId>
    <artifactId>spring-cloud-starter-alibaba-nacos-discovery</artifactId>
    <version>2.2.1.RELEASE</version>
</dependency>
<!--seata dependencies-->
<dependency>
    <groupId>com.alibaba.cloud</groupId>
    <artifactId>spring-cloud-starter-alibaba-seata</artifactId>
    <version>2.2.1.RELEASE</version>
    <exclusions>
        <exclusion>
            <groupId>io.seata</groupId>
            <artifactId>seata-spring-boot-starter</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>io.seata</groupId>
    <artifactId>seata-spring-boot-starter</artifactId>
    <version>1.3.0</version>
</dependency>
Copy the complete configuration template from the source code (this is the baseline we should modify):
seata:
enabled: true
application-id: applicationName
tx-service-group: my_test_tx_group
enable-auto-data-source-proxy: true
use-jdk-proxy: false
excludes-for-auto-proxying: firstClassNameForExclude,secondClassNameForExclude
client:
rm:
async-commit-buffer-limit: 1000
report-retry-count: 5
table-meta-check-enable: false
report-success-enable: false
saga-branch-register-enable: false
lock:
retry-interval: 10
retry-times: 30
retry-policy-branch-rollback-on-conflict: true
tm:
degrade-check: false
degrade-check-period: 2000
degrade-check-allow-times: 10
commit-retry-count: 5
rollback-retry-count: 5
undo:
data-validation: true
log-serialization: jackson
log-table: undo_log
only-care-update-columns: true
log:
exceptionRate: 100
service:
vgroup-mapping:
my_test_tx_group: default
grouplist:
default: 127.0.0.1:8091
enable-degrade: false
disable-global-transaction: false
transport:
shutdown:
wait: 3
thread-factory:
boss-thread-prefix: NettyBoss
worker-thread-prefix: NettyServerNIOWorker
server-executor-thread-prefix: NettyServerBizHandler
share-boss-worker: false
client-selector-thread-prefix: NettyClientSelector
client-selector-thread-size: 1
client-worker-thread-prefix: NettyClientWorkerThread
worker-thread-size: default
boss-thread-size: 1
type: TCP
server: NIO
heartbeat: true
serialization: seata
compressor: none
enable-client-batch-send-request: true
config:
type: file
consul:
server-addr: 127.0.0.1:8500
apollo:
apollo-meta: http://192.168.1.204:8801
app-id: seata-server
namespace: application
etcd3:
server-addr: http://localhost:2379
nacos:
namespace:
serverAddr: 127.0.0.1:8848
group: SEATA_GROUP
username: ""
password: ""
zk:
server-addr: 127.0.0.1:2181
session-timeout: 6000
connect-timeout: 2000
username: ""
password: ""
registry:
type: file
consul:
server-addr: 127.0.0.1:8500
etcd3:
serverAddr: http://localhost:2379
eureka:
weight: 1
service-url: http://localhost:8761/eureka
nacos:
application: seata-server
server-addr: 127.0.0.1:8848
group : "SEATA_GROUP"
namespace:
username: ""
password: ""
redis:
server-addr: localhost:6379
db: 0
password:
timeout: 0
sofa:
server-addr: 127.0.0.1:9603
region: DEFAULT_ZONE
datacenter: DefaultDataCenter
group: SEATA_GROUP
addressWaitTime: 3000
application: default
zk:
server-addr: 127.0.0.1:2181
session-timeout: 6000
connect-timeout: 2000
username: ""
password: ""
The order service ends up with the following settings:
spring:
datasource:
druid:
url: jdbc:mysql://localhost:3306/seata_order?useUnicode=true&characterEncoding=UTF-8&useJDBCCompliantTimezoneShift=true&useLegacyDatetimeCode=false&serverTimezone=UTC&useSSL=false
username: root
password: root
driver-class-name: com.mysql.cj.jdbc.Driver
initial-size: 5
max-active: 20
min-idle: 5
pool-prepared-statements: true
max-pool-prepared-statement-per-connection-size: 20
max-open-prepared-statements: 22
validation-query: SELECT 1 FROM DUAL
validation-query-timeout: 30000
test-on-borrow: false
test-on-return: false
test-while-idle: true
time-between-eviction-runs-millis: 60000
min-evictable-idle-time-millis: 30000
max-evictable-idle-time-millis: 60000
filters: stat
filter:
stat:
db-type: mysql
enabled: true
log-slow-sql: true
slow-sql-millis: 1000
merge-sql: true
stat-view-servlet:
login-password: root
login-username: root
mybatis-plus:
type-aliases-package: com.example.order.entity
mapper-locations: classpath*:mapper/order/*.xml
server:
port: 8080
seata:
enabled: true
application-id: order-service
tx-service-group: my_test_tx_group
service:
vgroup-mapping:
my_test_tx_group: default
grouplist:
default: 127.0.0.1:8091
config:
type: file
4.5.9 Wiring Up the Data Source and Proxy
The latest JAR already supports data-source proxying, so no manual proxy code is needed. Because this project integrates mybatis-plus, the session factory has to be re-assembled accordingly.
Do not use the plain SqlSessionFactory class here, or it keeps failing with a "load method not found" error; MybatisSqlSessionFactoryBean finally worked.
@Configuration
public class DruidConfig {
    @Value("${spring.datasource.druid.stat-view-servlet.login-username}")
    private String loginUserName;
    @Value("${spring.datasource.druid.stat-view-servlet.login-password}")
    private String loginPassWord;
    @Value("${mybatis-plus.type-aliases-package}")
    private String typePackage;
    @Value("${mybatis-plus.mapper-locations}")
    private String xmlDir;

    /**
     * Use Druid as the (auto-proxied) data source
     */
    @Bean
    @ConfigurationProperties(prefix = "spring.datasource.druid")
    public DataSource druidDataSource() {
        return new DruidDataSource();
    }

    @Bean
    public MybatisSqlSessionFactoryBean sqlSessionFactory() throws Exception {
        MybatisSqlSessionFactoryBean sqlSessionFactoryBean = new MybatisSqlSessionFactoryBean();
        sqlSessionFactoryBean.setDataSource(druidDataSource());
        VFS.addImplClass(SpringBootVFS.class);
        PathMatchingResourcePatternResolver resolver = new PathMatchingResourcePatternResolver();
        sqlSessionFactoryBean.setMapperLocations(resolver.getResources(xmlDir));
        return sqlSessionFactoryBean;
    }

    @Bean
    public PlatformTransactionManager transactionManager() throws SQLException {
        return new DataSourceTransactionManager(druidDataSource());
    }

    /**
     * Filter rules so the Druid console stays reachable
     */
    @Bean
    public FilterRegistrationBean<WebStatFilter> druidStatFilter() {
        FilterRegistrationBean<WebStatFilter> filterRegistrationBean = new FilterRegistrationBean<WebStatFilter>(
                new WebStatFilter());
        // Add the filter pattern
        filterRegistrationBean.addUrlPatterns("/*");
        // Resources the stat filter should skip
        filterRegistrationBean.addInitParameter("exclusions", "*.js,*.gif,*.jpg,*.png,*.css,*.ico,/druid/*");
        return filterRegistrationBean;
    }

    @Bean
    public ServletRegistrationBean<StatViewServlet> druidStatViewServlet() {
        ServletRegistrationBean<StatViewServlet> servletRegistrationBean = new ServletRegistrationBean<StatViewServlet>(
                new StatViewServlet(), "/druid/*");
        servletRegistrationBean.addInitParameter("loginUsername", loginUserName);
        servletRegistrationBean.addInitParameter("loginPassword", loginPassWord);
        servletRegistrationBean.addInitParameter("resetEnable", "false");
        return servletRegistrationBean;
    }
}
4.5.10 Key Business Usage
- The business-flow method needs @GlobalTransactional:
@GlobalTransactional
@Override
public String business(OrderTblPO orderTblPO) throws Exception {
    // insert the order
    addOrder(orderTblPO);
    System.out.println("order begin :" + RootContext.getXID());
    // add the account record
    String result = addAccount(orderTblPO);
    if (result.equals("SUCCESS")) {
        return "SUCCESS";
    } else {
        throw new RuntimeException("The account update failed, so fail here too");
    }
}
- Order-insert logic:
public void addOrder(OrderTblPO orderTblPO) throws Exception {
    // insert the order
    orderTblMapper.insert(orderTblPO);
}
- Account-update call:
String addAccount(OrderTblPO orderTblPO) throws Exception {
    String url = "http://localhost:9898/account/update";
    AccountTblPO accountTblPO = new AccountTblPO();
    accountTblPO.setMoney(orderTblPO.getMoney());
    accountTblPO.setUserId(orderTblPO.getUserId());
    HttpHeaders headers = new HttpHeaders();
    // Submit as a JSON payload: the body must be a String and the Content-Type header must be application/json
    headers.setContentType(MediaType.APPLICATION_JSON_UTF8);
    ObjectMapper mapper = new ObjectMapper();
    String value = mapper.writeValueAsString(accountTblPO);
    HttpEntity<String> requestEntity = new HttpEntity<String>(value, headers);
    ResponseEntity<String> responseEntity = restTemplate.postForEntity(url, requestEntity, String.class);
    return responseEntity.getBody();
}
4.5.11 Account Service Configuration
The account service is configured the same way as the order service. The transaction-group value must be identical on the server side and on every client; the tests use my_test_tx_group.
Key business code:
/**
 * Record an account transaction
 */
@Override
public String updateAccount(AccountTblPO accountTblPO) throws Exception {
    accountTblMapper.insert(accountTblPO);
    if (accountTblPO.getUserId().equals("10087")) {
        throw new RuntimeException("Deliberately throwing to test rollback");
    }
    return "SUCCESS";
}
4.6 Logs from a Successful Run
4.6.1 Order service registration log
2020-08-20 11:01:34.824 INFO 35060 --- [eoutChecker_1_1] i.s.c.r.netty.NettyClientChannelManager : will connect to 127.0.0.1:8091
2020-08-20 11:01:34.828 INFO 35060 --- [eoutChecker_1_1] i.s.core.rpc.netty.NettyPoolableFactory : NettyPool create channel to transactionRole:TMROLE,address:127.0.0.1:8091,msg:< RegisterTMRequest{applicationId='order-service', transactionServiceGroup='my_test_tx_group'} >
2020-08-20 11:01:34.845 INFO 35060 --- [eoutChecker_2_1] i.s.c.r.netty.NettyClientChannelManager : will connect to 127.0.0.1:8091
2020-08-20 11:01:34.846 INFO 35060 --- [eoutChecker_2_1] i.s.core.rpc.netty.NettyPoolableFactory : NettyPool create channel to transactionRole:RMROLE,address:127.0.0.1:8091,msg:< RegisterRMRequest{resourceIds='null', applicationId='order-service', transactionServiceGroup='my_test_tx_group'} >
2020-08-20 11:01:34.979 INFO 35060 --- [eoutChecker_1_1] i.s.c.rpc.netty.TmNettyRemotingClient : register TM success. client version:1.3.0, server version:1.3.0,channel:[id: 0x4b3975a2, L:/127.0.0.1:53522 - R:/127.0.0.1:8091]
2020-08-20 11:01:34.979 INFO 35060 --- [eoutChecker_2_1] i.s.c.rpc.netty.RmNettyRemotingClient : register RM success. client version:1.3.0, server version:1.3.0,channel:[id: 0x245302be, L:/127.0.0.1:53523 - R:/127.0.0.1:8091]
2020-08-20 11:01:34.989 INFO 35060 --- [eoutChecker_2_1] i.s.core.rpc.netty.NettyPoolableFactory : register success, cost 60 ms, version:1.3.0,role:RMROLE,channel:[id: 0x245302be, L:/127.0.0.1:53523 - R:/127.0.0.1:8091]
2020-08-20 11:01:34.989 INFO 35060 --- [eoutChecker_1_1] i.s.core.rpc.netty.NettyPoolableFactory : register success, cost 59 ms, version:1.3.0,role:TMROLE,channel:[id: 0x4b3975a2, L:/127.0.0.1:53522 - R:/127.0.0.1:8091]
4.6.2 Account service registration log
2020-08-20 11:01:36.318 INFO 35062 --- [eoutChecker_1_1] i.s.c.r.netty.NettyClientChannelManager : will connect to 127.0.0.1:8091
2020-08-20 11:01:36.320 INFO 35062 --- [eoutChecker_1_1] i.s.core.rpc.netty.NettyPoolableFactory : NettyPool create channel to transactionRole:TMROLE,address:127.0.0.1:8091,msg:< RegisterTMRequest{applicationId='account-service', transactionServiceGroup='my_test_tx_group'} >
2020-08-20 11:01:36.333 INFO 35062 --- [eoutChecker_2_1] i.s.c.r.netty.NettyClientChannelManager : will connect to 127.0.0.1:8091
2020-08-20 11:01:36.333 INFO 35062 --- [eoutChecker_2_1] i.s.core.rpc.netty.NettyPoolableFactory : NettyPool create channel to transactionRole:RMROLE,address:127.0.0.1:8091,msg:< RegisterRMRequest{resourceIds='null', applicationId='account-service', transactionServiceGroup='my_test_tx_group'} >
2020-08-20 11:01:36.426 INFO 35062 --- [eoutChecker_1_1] i.s.c.rpc.netty.TmNettyRemotingClient : register TM success. client version:1.3.0, server version:1.3.0,channel:[id: 0x9cfcd088, L:/127.0.0.1:53524 - R:/127.0.0.1:8091]
2020-08-20 11:01:36.426 INFO 35062 --- [eoutChecker_2_1] i.s.c.rpc.netty.RmNettyRemotingClient : register RM success. client version:1.3.0, server version:1.3.0,channel:[id: 0xab34bbdf, L:/127.0.0.1:53525 - R:/127.0.0.1:8091]
2020-08-20 11:01:36.435 INFO 35062 --- [eoutChecker_1_1] i.s.core.rpc.netty.NettyPoolableFactory : register success, cost 49 ms, version:1.3.0,role:TMROLE,channel:[id: 0x9cfcd088, L:/127.0.0.1:53524 - R:/127.0.0.1:8091]
2020-08-20 11:01:36.435 INFO 35062 --- [eoutChecker_2_1] i.s.core.rpc.netty.NettyPoolableFactory : register success, cost 50 ms, version:1.3.0,role:RMROLE,channel:[id: 0xab34bbdf, L:/127.0.0.1:53525 - R:/127.0.0.1:8091]
4.6.3 Log from a successful order call
2020-08-20 11:13:29.908 INFO 35060 --- [nio-8080-exec-1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring DispatcherServlet 'dispatcherServlet'
2020-08-20 11:13:29.908 INFO 35060 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet : Initializing Servlet 'dispatcherServlet'
2020-08-20 11:13:29.916 INFO 35060 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet : Completed initialization in 8 ms
2020-08-20 11:13:29.971 INFO 35060 --- [nio-8080-exec-1] io.seata.tm.TransactionManagerHolder : TransactionManager Singleton io.seata.tm.DefaultTransactionManager@646fc986
2020-08-20 11:13:29.986 INFO 35060 --- [nio-8080-exec-1] i.seata.tm.api.DefaultGlobalTransaction : Begin new global transaction [192.168.0.145:8091:39669769082765312]
2020-08-20 11:13:30.189 INFO 35060 --- [nio-8080-exec-1] com.alibaba.druid.pool.DruidDataSource : {dataSource-1} inited
2020-08-20 11:13:30.233 INFO 35060 --- [nio-8080-exec-1] i.s.c.rpc.netty.RmNettyRemotingClient : will register resourceId:jdbc:mysql://localhost:3306/seata_order
2020-08-20 11:13:30.235 INFO 35060 --- [ctor_RMROLE_1_1] io.seata.rm.AbstractRMHandler : the rm client received response msg [version=1.3.0,extraData=null,identified=true,resultCode=null,msg=null] from tc server.
order begin :192.168.0.145:8091:39669769082765312
2020-08-20 11:13:31.236 INFO 35060 --- [nio-8080-exec-1] i.seata.tm.api.DefaultGlobalTransaction : [192.168.0.145:8091:39669769082765312] commit status: Committed
2020-08-20 11:13:31.799 INFO 35060 --- [h_RMROLE_1_1_24] i.s.c.r.p.c.RmBranchCommitProcessor : rm client handle branch commit process:xid=192.168.0.145:8091:39669769082765312,branchId=39669770907287553,branchType=AT,resourceId=jdbc:mysql://localhost:3306/seata_order,applicationData=null
2020-08-20 11:13:31.800 INFO 35060 --- [h_RMROLE_1_1_24] io.seata.rm.AbstractRMHandler : Branch committing: 192.168.0.145:8091:39669769082765312 39669770907287553 jdbc:mysql://localhost:3306/seata_order null
2020-08-20 11:13:31.801 INFO 35060 --- [h_RMROLE_1_1_24] io.seata.rm.AbstractRMHandler : Branch commit result: PhaseTwo_Committed
4.6.4 Account-service log for a successful call
2020-08-20 11:13:30.647 INFO 35062 --- [nio-9898-exec-1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring DispatcherServlet 'dispatcherServlet'
2020-08-20 11:13:30.647 INFO 35062 --- [nio-9898-exec-1] o.s.web.servlet.DispatcherServlet : Initializing Servlet 'dispatcherServlet'
2020-08-20 11:13:30.653 INFO 35062 --- [nio-9898-exec-1] o.s.web.servlet.DispatcherServlet : Completed initialization in 6 ms
開始訪問 account192.168.0.145:8091:39669769082765312
account:192.168.0.145:8091:39669769082765312
2020-08-20 11:13:30.866 INFO 35062 --- [nio-9898-exec-1] com.alibaba.druid.pool.DruidDataSource : {dataSource-1} inited
2020-08-20 11:13:30.906 INFO 35062 --- [nio-9898-exec-1] i.s.c.rpc.netty.RmNettyRemotingClient : will register resourceId:jdbc:mysql://localhost:3306/seata_account
2020-08-20 11:13:30.908 INFO 35062 --- [ctor_RMROLE_1_1] io.seata.rm.AbstractRMHandler : the rm client received response msg [version=1.3.0,extraData=null,identified=true,resultCode=null,msg=null] from tc server.
2020-08-20 11:13:31.218 WARN 35062 --- [nio-9898-exec-1] c.a.c.seata.web.SeataHandlerInterceptor : xid in change during RPC from 192.168.0.145:8091:39669769082765312 to null
2020-08-20 11:13:31.807 INFO 35062 --- [h_RMROLE_1_1_24] i.s.c.r.p.c.RmBranchCommitProcessor : rm client handle branch commit process:xid=192.168.0.145:8091:39669769082765312,branchId=39669774061404161,branchType=AT,resourceId=jdbc:mysql://localhost:3306/seata_account,applicationData=null
2020-08-20 11:13:31.809 INFO 35062 --- [h_RMROLE_1_1_24] io.seata.rm.AbstractRMHandler : Branch committing: 192.168.0.145:8091:39669769082765312 39669774061404161 jdbc:mysql://localhost:3306/seata_account null
2020-08-20 11:13:31.809 INFO 35062 --- [h_RMROLE_1_1_24] io.seata.rm.AbstractRMHandler : Branch commit result: PhaseTwo_Committed
4.6.5 Order-service log for a failed call
2020-08-20 11:16:55.121 INFO 35060 --- [nio-8080-exec-5] i.seata.tm.api.DefaultGlobalTransaction : Begin new global transaction [192.168.0.145:8091:39670629510676480]
order begin :192.168.0.145:8091:39670629510676480
2020-08-20 11:16:55.161 INFO 35060 --- [h_RMROLE_1_2_24] i.s.c.r.p.c.RmBranchRollbackProcessor : rm handle branch rollback process:xid=192.168.0.145:8091:39670629510676480,branchId=39670629561008129,branchType=AT,resourceId=jdbc:mysql://localhost:3306/seata_order,applicationData=null
2020-08-20 11:16:55.162 INFO 35060 --- [h_RMROLE_1_2_24] io.seata.rm.AbstractRMHandler : Branch Rollbacking: 192.168.0.145:8091:39670629510676480 39670629561008129 jdbc:mysql://localhost:3306/seata_order
2020-08-20 11:16:55.227 INFO 35060 --- [h_RMROLE_1_2_24] i.s.r.d.undo.AbstractUndoLogManager : xid 192.168.0.145:8091:39670629510676480 branch 39670629561008129, undo_log deleted with GlobalFinished
2020-08-20 11:16:55.228 INFO 35060 --- [h_RMROLE_1_2_24] io.seata.rm.AbstractRMHandler : Branch Rollbacked result: PhaseTwo_Rollbacked
2020-08-20 11:16:55.235 INFO 35060 --- [nio-8080-exec-5] i.seata.tm.api.DefaultGlobalTransaction : [192.168.0.145:8091:39670629510676480] rollback status: Rollbacked
4.6.7 Account-service log for a failed call
開始訪問 account192.168.0.145:8091:39423456063782912
account:192.168.0.145:8091:39423456063782912
2020-08-19 18:54:44.418 ERROR 32738 --- [nio-9898-exec-3] c.a.druid.pool.DruidAbstractDataSource : discard long time none received connection. , jdbcUrl : jdbc:mysql://localhost:3306/seata_account?useUnicode=true&characterEncoding=UTF-8&useJDBCCompliantTimezoneShift=true&useLegacyDatetimeCode=false&serverTimezone=UTC&useSSL=false, jdbcUrl : jdbc:mysql://localhost:3306/seata_account?useUnicode=true&characterEncoding=UTF-8&useJDBCCompliantTimezoneShift=true&useLegacyDatetimeCode=false&serverTimezone=UTC&useSSL=false, lastPacketReceivedIdleMillis : 67156
2020-08-19 18:54:44.436 WARN 32738 --- [nio-9898-exec-3] c.a.c.seata.web.SeataHandlerInterceptor : xid in change during RPC from 192.168.0.145:8091:39423456063782912 to null
2020-08-19 18:54:44.446 INFO 32738 --- [h_RMROLE_1_2_24] i.s.c.r.p.c.RmBranchRollbackProcessor : rm handle branch rollback process:xid=192.168.0.145:8091:39423456063782912,branchId=39423456269303809,branchType=AT,resourceId=jdbc:mysql://localhost:3306/seata_account,applicationData=null
2020-08-19 18:54:44.447 INFO 32738 --- [h_RMROLE_1_2_24] io.seata.rm.AbstractRMHandler : Branch Rollbacking: 192.168.0.145:8091:39423456063782912 39423456269303809 jdbc:mysql://localhost:3306/seata_account
2020-08-19 18:54:44.491 INFO 32738 --- [h_RMROLE_1_2_24] i.s.r.d.undo.AbstractUndoLogManager : xid 192.168.0.145:8091:39423456063782912 branch 39423456269303809, undo_log deleted with GlobalFinished
2020-08-19 18:54:44.492 INFO 32738 --- [h_RMROLE_1_2_24] io.seata.rm.AbstractRMHandler : Branch Rollbacked result: PhaseTwo_Rollbacked
4.6.8 Other approaches
All database operations can additionally be annotated with @Transactional for local transaction protection; this likewise triggers a rollback when the method throws.
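A minimal sketch of the annotation placement described above (class and method names are hypothetical; this is a framework fragment, not a standalone program — it assumes the Spring and Seata dependencies used in the demo):

```java
import io.seata.spring.annotation.GlobalTransactional;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

// Hypothetical service; only the annotation placement matters here.
@Service
public class OrderService {

    // Entry point of the global (AT-mode) transaction: Seata coordinates
    // the branch transactions of every participating service.
    @GlobalTransactional
    public void createOrder() {
        // ... insert the order row, then call the account service ...
    }

    // A plain local transaction still protects single-database operations:
    // any exception thrown inside this method rolls back its local changes.
    @Transactional(rollbackFor = Exception.class)
    public void updateOrderStatus(Long orderId) {
        // ... update the order row ...
    }
}
```

Note that @Transactional alone only covers the local database; cross-service rollback still requires the global transaction started by @GlobalTransactional.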
5. Using local transactions
5.1 Desired behavior: insert a row in service A, then insert a row in service B; both should succeed or both should fail. This approach is not very reliable. It is mainly for teams unwilling to adopt a distributed transaction framework, where a small amount of data inconsistency is acceptable.
Step 1: Open transaction 1 and perform the database changes in service A.
Step 2: Inside that still-open transactional method, call service B.
Step 3: Service B opens its own transaction and operates on B's database.
Step 4: If service B fails, an exception propagates back and everything in A rolls back; if it succeeds, A continues and commits.
The hole in this approach: B's database write may succeed while service A rolls back for some other reason (e.g. a timeout after the call returns), so the data ends up inconsistent. It is only suitable for an initial version of a system.
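The flaw in the steps above can be sketched with a small, self-contained simulation (all names are hypothetical; a toy in-memory "database" stands in for the real ones): B commits its own local transaction during A's call, so when A subsequently fails and rolls back, B's row survives.

```java
import java.util.ArrayList;
import java.util.List;

// Simulates the chained local-transaction pattern: service A buffers its
// writes until commit, while service B commits immediately when called.
public class LocalTxChainingDemo {

    // A toy "database" holding only committed rows.
    static class Db {
        final List<String> rows = new ArrayList<>();
    }

    static final Db dbA = new Db();
    static final Db dbB = new Db();

    // Service B: runs in its own local transaction and commits at once.
    static void serviceB() {
        dbB.rows.add("B-row"); // committed in B's database immediately
    }

    // Service A: local transaction that only commits at the end of the method.
    static boolean serviceA(boolean failAfterCallingB) {
        List<String> pending = new ArrayList<>(); // A's uncommitted writes
        pending.add("A-row");                     // Step 1: modify A's data
        serviceB();                               // Steps 2-3: remote call; B commits
        if (failAfterCallingB) {
            return false;                         // Step 4 gone wrong: A rolls back,
        }                                         // but B's commit cannot be undone
        dbA.rows.addAll(pending);                 // A commits
        return true;
    }

    public static void main(String[] args) {
        boolean committed = serviceA(true); // simulate A failing after B committed
        System.out.println("A committed: " + committed);
        System.out.println("rows in A: " + dbA.rows.size()
                + ", rows in B: " + dbB.rows.size());
    }
}
```

Running it with a failure injected after the call to B leaves 0 rows in A and 1 row in B, which is exactly the inconsistency the note warns about: in the real pattern the call to B happens over HTTP/RPC while A's transaction is still open, so A's rollback has no way to reach B's already-committed write.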