I. Background
If you have built a large distributed system, you have most likely used ZooKeeper. ZooKeeper implements the ZAB algorithm and is typically used for scenarios such as fail-over leader election and configuration centers for upstream/downstream servers in microservice architectures. However, ZAB and Paxos share a drawback: they are hard to understand. Their papers are quite complex, so very few developers truly understand them.【Zhihu: What advantages does the Raft algorithm have over Paxos, and how do their use cases differ?】
II. How Raft Works
【raft homepage】
【raft paper】
【live demo】
1. Leader Election
Like ZooKeeper, a Raft cluster typically has 3 or 5 nodes, which makes it easy to determine a majority during elections.
A node is in one of three states: follower, candidate, or leader.
State transitions
Raft has two timeout settings:
1) The election timeout controls when a follower becomes a candidate. It is a random value between 150ms and 300ms. When a node's election timeout fires, it starts a new election term (terms increase monotonically; a larger term is newer), votes for itself, and sends vote requests to the other nodes. If it receives votes from a majority, it becomes the leader.
2) After a node becomes leader, it sends Append Entries messages to the other nodes at an interval called the heartbeat timeout.
If the leader goes down, the remaining nodes rerun the election process described in 1) and 2), which keeps the cluster highly available.
Special case
If an even number of nodes remain in the cluster and two candidates receive the same number of votes, a new election with a higher term begins, and this repeats until one node wins a majority (the randomized election timeout makes this converge).
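A minimal sketch of the election flow above, using hypothetical types and names rather than the paper's or any real library's implementation: each node waits a random 150–300ms election timeout, then increments its term, votes for itself, asks its peers for votes, and becomes leader only with a strict majority.

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

type State int

const (
	Follower State = iota
	Candidate
	Leader
)

// electionTimeout returns a random timeout in [150ms, 300ms); the randomness
// is what breaks ties between competing candidates.
func electionTimeout() time.Duration {
	return time.Duration(150+rand.Intn(150)) * time.Millisecond
}

type Node struct {
	id          int
	state       State
	currentTerm int
	votedFor    int // -1 means "has not voted in this term"
}

// requestVote is a stand-in for the RequestVote RPC: a node grants its vote
// if the candidate's term is at least as new as its own and it has not
// already voted for someone else in that term.
func (n *Node) requestVote(candidateID, term int) bool {
	if term < n.currentTerm {
		return false
	}
	if term > n.currentTerm {
		n.currentTerm = term
		n.votedFor = -1
	}
	if n.votedFor == -1 || n.votedFor == candidateID {
		n.votedFor = candidateID
		return true
	}
	return false
}

// startElection runs one election round for candidate c against its peers.
func startElection(c *Node, peers []*Node) {
	c.state = Candidate
	c.currentTerm++   // start a new, higher term
	c.votedFor = c.id // vote for itself
	votes := 1
	for _, p := range peers {
		if p.requestVote(c.id, c.currentTerm) {
			votes++
		}
	}
	if votes > (len(peers)+1)/2 { // strict majority of the whole cluster
		c.state = Leader
	}
}

func main() {
	nodes := []*Node{{id: 0, votedFor: -1}, {id: 1, votedFor: -1}, {id: 2, votedFor: -1}}
	fmt.Println("node 0 waits", electionTimeout(), "before becoming a candidate")
	startElection(nodes[0], nodes[1:])
	fmt.Println("node 0 is leader:", nodes[0].state == Leader, "term:", nodes[0].currentTerm)
}
```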
2. Log Replication: keeping data consistent and available across the cluster
A client sends a data-modification request to the leader.
The leader propagates the change to the follower nodes through the Append Entries messages carried by its heartbeats.
Once a majority of the followers have acknowledged the change, the leader confirms it to the client.
In the next heartbeat the leader tells the followers to actually apply the change; from then on the data is consistent across the cluster.
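A sketch of the commit rule behind these steps, with hypothetical names (a real implementation also tracks per-follower progress and does prevIndex/prevTerm consistency checks): the leader appends the entry to its own log, replicates it via Append Entries, and treats it as committed once a majority of the cluster holds it, only then answering the client.

```go
package main

import "fmt"

type Entry struct {
	Term    int
	Command string
}

type Replica struct {
	log []Entry
}

// appendEntries is a stand-in for the AppendEntries RPC carried by the
// heartbeat; here it simply accepts the new entries.
func (r *Replica) appendEntries(entries []Entry) bool {
	r.log = append(r.log, entries...)
	return true
}

// replicate appends the client's command on the leader, pushes it to the
// followers, and reports whether the entry is committed, i.e. stored on a
// majority of the cluster (leader included).
func replicate(leader *Replica, followers []*Replica, term int, cmd string) bool {
	entry := Entry{Term: term, Command: cmd}
	leader.log = append(leader.log, entry) // 1. leader writes locally
	acks := 1                              // the leader counts toward the majority
	for _, f := range followers {
		if f.appendEntries([]Entry{entry}) { // 2. heartbeat carries the entry
			acks++
		}
	}
	clusterSize := len(followers) + 1
	return acks > clusterSize/2 // 3. committed once a majority has it
}

func main() {
	leader := &Replica{}
	followers := []*Replica{{}, {}}
	if replicate(leader, followers, 1, "SET x = 5") {
		// 4. only now does the leader confirm to the client; the next
		//    heartbeat tells followers the entry is committed and can be applied.
		fmt.Println("committed; reply to client")
	}
}
```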
Special case
If the network between the nodes is partitioned, there can be multiple leaders at the same time, each with a different term.
Because committing a change requires approval from a majority, only the partition that contains a majority of the nodes can commit changes successfully.
When the network recovers, the nodes on the minority side synchronize with the majority partition, so the data across the whole cluster becomes consistent again.
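The reconciliation after the partition heals rests on Raft's term rule: any node that sees a message with a higher term adopts that term and, if it was a leader, steps down to follower, after which its uncommitted entries are overwritten by the majority side's log. A minimal sketch of that rule, with hypothetical names:

```go
package main

import "fmt"

type Peer struct {
	id          int
	currentTerm int
	isLeader    bool
}

// observeTerm applies the term rule: seeing a higher term in any RPC makes
// the node adopt it and step down to follower if it was a leader.
func (p *Peer) observeTerm(remoteTerm int) {
	if remoteTerm > p.currentTerm {
		p.currentTerm = remoteTerm
		p.isLeader = false // a stale leader from the minority partition steps down
	}
}

func main() {
	staleLeader := &Peer{id: 1, currentTerm: 2, isLeader: true} // elected before the partition
	// After the partition heals it receives a heartbeat from the majority
	// partition's leader, which carries a newer term.
	staleLeader.observeTerm(3)
	fmt.Printf("term=%d leader=%v\n", staleLeader.currentTerm, staleLeader.isLeader)
}
```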
3. Membership Changes: resizing a Raft cluster
This topic is not covered in the live demo, but it is described in the paper.
In practice you may need to replace existing machines with new ones, or enlarge the Raft cluster to improve resilience. The authors propose joint consensus as the solution, which makes the configuration switch seamless.
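The key idea of joint consensus in the paper is that, while the membership change is in flight, the cluster operates under a combined configuration C_old,new, and decisions only succeed with separate majorities from both the old and the new member sets. A rough sketch of just that counting rule, with hypothetical names and ignoring how the configuration entries themselves are replicated:

```go
package main

import "fmt"

// majorityOf reports whether the acknowledging nodes contain a strict
// majority of the given configuration.
func majorityOf(config []int, acks map[int]bool) bool {
	count := 0
	for _, id := range config {
		if acks[id] {
			count++
		}
	}
	return count > len(config)/2
}

// jointCommit is the joint-consensus rule: during a membership change an
// entry is committed only if it reaches majorities in BOTH C_old and C_new.
func jointCommit(cOld, cNew []int, acks map[int]bool) bool {
	return majorityOf(cOld, acks) && majorityOf(cNew, acks)
}

func main() {
	cOld := []int{1, 2, 3}       // current members
	cNew := []int{1, 2, 3, 4, 5} // cluster being enlarged to 5 nodes
	acks := map[int]bool{1: true, 2: true, 4: true}
	// 2 of 3 old members and 3 of 5 new members acknowledged -> committed.
	fmt.Println("committed under joint consensus:", jointCommit(cOld, cNew, acks))
}
```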
III. Use in Industrial Systems
- 【MySQL 三节点企业版】
  The MySQL three-node enterprise edition uses the distributed consensus protocol Raft to guarantee the reliability and atomicity of state switching across its nodes.
- 【RethinkDB: pushes JSON to your apps in realtime】
How is cluster configuration propagated?
Updating the state of a cluster is a surprisingly difficult problem in distributed systems. At any given point different (and potentially conflicting) configurations can be selected on different sides of a netsplit, different configurations can reach different nodes in the cluster at unpredictable times, etc.
RethinkDB uses the Raft algorithm to store and propagate cluster configuration in most cases, although in some situations it uses semilattices, versioned with internal timestamps. This architecture turns out to have sufficient mathematical properties to address all the issues mentioned above (this result has been known in distributed systems research for quite a while).
- etcd
  What is failure tolerance?
An etcd cluster operates so long as a member quorum can be established. If quorum is lost through transient network failures (e.g., partitions), etcd automatically and safely resumes once the network recovers and restores quorum; Raft enforces cluster consistency. For power loss, etcd persists the Raft log to disk; etcd replays the log to the point of failure and resumes cluster participation. For permanent hardware failure, the node may be removed from the cluster through runtime reconfiguration.
It is recommended to have an odd number of members in a cluster. An odd-size cluster tolerates the same number of failures as an even-size cluster but with fewer nodes. The difference can be seen by comparing even and odd sized clusters:
(Table from the etcd docs: cluster size vs. failure tolerance — an odd number of nodes has the advantage.)
Adding a member to bring the size of cluster up to an even number doesn't buy additional fault tolerance. Likewise, during a network partition, an odd number of members guarantees that there will always be a majority partition that can continue to operate and be the source of truth when the partition ends.
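The odd-vs-even point follows directly from the quorum size ⌊n/2⌋ + 1: going from 3 to 4 nodes raises the quorum from 2 to 3, so the cluster still tolerates only one failure. A quick calculation of the figures referenced above (plain arithmetic, not etcd code):

```go
package main

import "fmt"

// quorum is the smallest strict majority of an n-node cluster.
func quorum(n int) int { return n/2 + 1 }

// faultTolerance is how many members can fail while a quorum survives.
func faultTolerance(n int) int { return n - quorum(n) }

func main() {
	fmt.Println("size  quorum  tolerated failures")
	for n := 1; n <= 9; n++ {
		fmt.Printf("%4d  %6d  %18d\n", n, quorum(n), faultTolerance(n))
	}
	// Sizes 3 and 4 both tolerate 1 failure, 5 and 6 both tolerate 2, and so
	// on, which is why adding a node to reach an even size buys nothing.
}
```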