In the previous chapters we focused on Spark's internals: the initialization of SparkContext, master/standby failover, registration with the Master, and so on. While walking through that source code we saw that, whether it is a driver or an application registering with the Master, the handler always ends with the same method call:
...
schedule()
This method is the resource scheduling algorithm itself (it is extremely important, the core of the core).
Let's analyze it in depth.
private def schedule() {
  // If the master is not ALIVE, return immediately.
  // In other words, a STANDBY master does not take part in resource scheduling.
  if (state != RecoveryState.ALIVE) { return }
  // Random.shuffle randomly reorders a collection.
  // Take all previously registered workers, keep only those whose state is ALIVE,
  // and shuffle them randomly.
  val shuffledAliveWorkers = Random.shuffle(workers.toSeq.filter(
    _.state == WorkerState.ALIVE))
  val numWorkersAlive = shuffledAliveWorkers.size

  // Why does the Master schedule drivers at all? Think about when a driver is registered
  // with the Master and therefore needs to be scheduled here: only when the application is
  // submitted in cluster deploy mode. In client mode the driver is started locally and is
  // never registered with the Master, so it can never be scheduled by it. The for loop
  // below therefore only does real work for cluster-mode submissions.
  for (driver <- waitingDrivers.toList) { // iterate over a copy of waitingDrivers
    // We assign workers to each waiting driver in a round-robin fashion. For each driver, we
    // start from the last worker that was assigned a driver, and continue onwards until we have
    // explored all alive workers.
    var launched = false
    var numWorkersVisited = 0
    // Keep looping while there are still alive workers we have not visited yet
    // and this driver has not been launched (launched == false).
    while (numWorkersVisited < numWorkersAlive && !launched) {
      val worker = shuffledAliveWorkers(curPos)
      numWorkersVisited += 1
      // If this worker's free memory is at least the memory the driver needs,
      // and its free CPU cores are at least the cores the driver needs...
      if (worker.memoryFree >= driver.desc.mem && worker.coresFree >= driver.desc.cores) {
        // launch the driver on this worker
        launchDriver(worker, driver)
        // remove the driver from waitingDrivers
        waitingDrivers -= driver
        launched = true
      }
      // move the pointer to the next worker
      curPos = (curPos + 1) % numWorkersAlive
    }
  }
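To make the round-robin placement concrete, here is a small self-contained sketch (WorkerSlot and DriverReq are made-up stand-ins for Spark's WorkerInfo and DriverInfo; this is not Spark code) that shuffles a few fake workers and places drivers on them the same way the loop above does:

import scala.util.Random

object DriverRoundRobinDemo {
  // Simplified, hypothetical stand-ins for Spark's WorkerInfo and DriverInfo.
  case class WorkerSlot(id: String, var memoryFree: Int, var coresFree: Int)
  case class DriverReq(id: String, mem: Int, cores: Int)

  def main(args: Array[String]): Unit = {
    val workers = Seq(
      WorkerSlot("worker-1", memoryFree = 4096, coresFree = 2),
      WorkerSlot("worker-2", memoryFree = 1024, coresFree = 1),
      WorkerSlot("worker-3", memoryFree = 8192, coresFree = 4))
    var waitingDrivers = List(
      DriverReq("driver-A", mem = 2048, cores = 1),
      DriverReq("driver-B", mem = 2048, cores = 2))

    // Shuffle the alive workers, just like Random.shuffle in schedule().
    val shuffledAliveWorkers = Random.shuffle(workers)
    val numWorkersAlive = shuffledAliveWorkers.size
    var curPos = 0

    for (driver <- waitingDrivers) { // iterate over a snapshot, like waitingDrivers.toList
      var launched = false
      var numWorkersVisited = 0
      while (numWorkersVisited < numWorkersAlive && !launched) {
        val worker = shuffledAliveWorkers(curPos)
        numWorkersVisited += 1
        // Same check as the Master: enough free memory and enough free cores.
        if (worker.memoryFree >= driver.mem && worker.coresFree >= driver.cores) {
          worker.memoryFree -= driver.mem
          worker.coresFree -= driver.cores
          waitingDrivers = waitingDrivers.filterNot(_ == driver)
          launched = true
          println(s"${driver.id} launched on ${worker.id}")
        }
        curPos = (curPos + 1) % numWorkersAlive
      }
    }
  }
}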
That is how drivers are scheduled.
Next, let's look at how resources are scheduled for applications.
  /**
   * Application scheduling (the core of the core).
   * There are two algorithms: spreadOutApps (the default) and non-spreadOutApps.
   *
   * With spreadOutApps, the executors each application needs are spread as evenly as
   * possible across the workers. For example, with 20 CPU cores to allocate and 10
   * workers, the loop passes over the workers twice, giving each worker one core per
   * pass, so every worker ends up with two cores.
   */
  if (spreadOutApps) {
    for (app <- waitingApps if app.coresLeft > 0) {
      // Filter the workers, keeping only those whose state is ALIVE and that this
      // application can use, sorted by free cores in descending order.
      val usableWorkers = workers.toArray.filter(_.state == WorkerState.ALIVE)
        .filter(canUse(app, _)).sortBy(_.coresFree).reverse
      val numUsable = usableWorkers.length
      // An array recording how many cores will be assigned to each usable worker.
      val assigned = new Array[Int](numUsable) // Number of cores to give on each node
      // How many cores can actually be assigned: the minimum of the cores the application
      // still needs and the total free cores across all usable workers.
      var toAssign = math.min(app.coresLeft, usableWorkers.map(_.coresFree).sum)
      var pos = 0
      // Keep looping while there are still cores left to assign.
      while (toAssign > 0) {
        // If this worker still has free cores beyond what we have already assigned to it...
        if (usableWorkers(pos).coresFree - assigned(pos) > 0) {
          // one fewer core left to assign
          toAssign -= 1
          // one more core assigned to this worker
          assigned(pos) += 1
        }
        // move the pointer to the next worker
        pos = (pos + 1) % numUsable
      }
      // Now that we've decided how many cores to give on each node, let's actually give them
      for (pos <- 0 until numUsable) {
        // only for workers that were assigned at least one core
        if (assigned(pos) > 0) {
          // Create an ExecutorDesc describing the executor and add it to the
          // application's executor cache.
          val exec = app.addExecutor(usableWorkers(pos), assigned(pos))
          // launch the executor on that worker
          launchExecutor(usableWorkers(pos), exec)
          // mark the application as RUNNING
          app.state = ApplicationState.RUNNING
        }
      }
    }
  }
A summary of this scheduling algorithm:
We said earlier that spark-submit lets you specify how many executors you want and how many CPU cores each executor should get. Under this mechanism, the actual number of executors, and the cores each one gets, may differ from that configuration, because the allocation here is based on the total number of cores. Say we configure the application to use three executors with three cores each, nine cores in total. With this algorithm, if there are nine workers, each worker is assigned one core and one executor is started on each of them, so we actually end up with nine executors with one core each.
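To check those numbers, here is a minimal standalone sketch (the object and method names are made up for illustration; this is not Spark code) that runs only the core-spreading loop shown above:

object SpreadOutDemo {
  // Reproduces just the core-spreading loop from the spreadOutApps branch.
  // Returns how many cores each worker would be assigned.
  def spreadCores(coresNeeded: Int, workerFreeCores: Array[Int]): Array[Int] = {
    val assigned = new Array[Int](workerFreeCores.length)
    var toAssign = math.min(coresNeeded, workerFreeCores.sum)
    var pos = 0
    while (toAssign > 0) {
      if (workerFreeCores(pos) - assigned(pos) > 0) {
        toAssign -= 1
        assigned(pos) += 1
      }
      pos = (pos + 1) % workerFreeCores.length
    }
    assigned
  }

  def main(args: Array[String]): Unit = {
    // The example from the summary: 9 cores requested, 9 workers with free cores.
    println(spreadCores(9, Array.fill(9)(8)).mkString(", "))   // 1, 1, 1, 1, 1, 1, 1, 1, 1
    // The example from the comment block: 20 cores requested, 10 workers.
    println(spreadCores(20, Array.fill(10)(8)).mkString(", ")) // 2, 2, 2, 2, 2, 2, 2, 2, 2, 2
  }
}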
The second scheduling algorithm:
This one is the exact opposite: each application is packed onto as few workers as possible. For example, with 10 workers of 10 cores each and an application that needs 20 cores in total, only two workers are used, and each of them has all 10 of its cores taken, so any other application can only be placed on the remaining workers. So if spark-submit is configured for 10 executors with 2 cores each (20 cores in total), this algorithm actually starts only two executors, each with 10 cores.
  // The non-spreadOut branch (the else of if (spreadOutApps) above).
  // Iterate over the workers that are ALIVE and still have free cores.
  for (worker <- workers if worker.coresFree > 0 && worker.state == WorkerState.ALIVE) {
    // Iterate over the applications that still have cores left to allocate.
    for (app <- waitingApps if app.coresLeft > 0) {
      // Check whether this worker can be used by the application.
      if (canUse(app, worker)) {
        // Take the minimum of the worker's free cores and the cores the application still needs.
        val coresToUse = math.min(worker.coresFree, app.coresLeft)
        // Only if there is at least one core to give...
        if (coresToUse > 0) {
          val exec = app.addExecutor(worker, coresToUse)
          // launch the executor on this worker
          launchExecutor(worker, exec)
          // mark the application as RUNNING
          app.state = ApplicationState.RUNNING
        }
      }
    }
  }
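To see the packing behaviour with concrete numbers, here is a minimal simulation of this branch (the names are made up for illustration; it is not Spark code), using the 10-workers / 20-cores example from above:

object NonSpreadOutDemo {
  def main(args: Array[String]): Unit = {
    // 10 workers with 10 free cores each; one application that still needs 20 cores.
    val workerFreeCores = Array.fill(10)(10)
    var appCoresLeft = 20

    for (pos <- workerFreeCores.indices if workerFreeCores(pos) > 0 && appCoresLeft > 0) {
      // Same idea as math.min(worker.coresFree, app.coresLeft) above.
      val coresToUse = math.min(workerFreeCores(pos), appCoresLeft)
      workerFreeCores(pos) -= coresToUse
      appCoresLeft -= coresToUse
      println(s"worker-$pos gets an executor with $coresToUse cores")
    }
    // Prints exactly two lines: worker-0 and worker-1 each get an executor with 10 cores.
  }
}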
Both branches end up calling one very important method:
launchExecutor(worker, exec)
def launchExecutor(worker: WorkerInfo, exec: ExecutorDesc) {
  logInfo("Launching executor " + exec.fullId + " on worker " + worker.id)
  // add the executor to the worker's internal cache
  worker.addExecutor(exec)
  // send a LaunchExecutor message to the worker's actor so that the executor is started on that worker
  worker.actor ! LaunchExecutor(masterUrl,
    exec.application.id, exec.id, exec.application.desc, exec.cores, exec.memory)
  // tell the application's driver that an executor has been added
  exec.application.driver ! ExecutorAdded(
    exec.id, worker.id, worker.hostPort, exec.cores, exec.memory)
}
The key line in this method is
worker.actor ! LaunchExecutor(masterUrl,
  exec.application.id, exec.id, exec.application.desc, exec.cores, exec.memory)
which sends the LaunchExecutor message to the worker (the "!" operator is Akka message passing); the worker's handler for this message is what actually starts the Executor. (How the worker launches the Executor and the application is analyzed in detail in the next section.)
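As a side note, the "!" pattern itself is easy to try out in isolation. Below is a tiny sketch (assuming a recent classic Akka actors dependency on the classpath; LaunchExecutorMsg and WorkerStub are made-up names, not Spark's real classes) showing a fire-and-forget message send like the one above:

import akka.actor.{Actor, ActorSystem, Props}

// A made-up message with a shape similar to Spark's LaunchExecutor (illustration only).
case class LaunchExecutorMsg(masterUrl: String, appId: String, execId: Int,
                             cores: Int, memory: Int)

// A stub standing in for the real Worker actor; the real handler is what actually
// starts the executor process (analyzed in the next section).
class WorkerStub extends Actor {
  def receive: Receive = {
    case LaunchExecutorMsg(masterUrl, appId, execId, cores, memory) =>
      println(s"Asked to launch executor $appId/$execId with $cores cores and $memory MB")
  }
}

object AkkaTellDemo {
  def main(args: Array[String]): Unit = {
    val system = ActorSystem("demo")
    val workerRef = system.actorOf(Props(new WorkerStub), "worker")
    // "!" (tell) is fire-and-forget, just like worker.actor ! LaunchExecutor(...) above.
    workerRef ! LaunchExecutorMsg("spark://master:7077", "app-0001", 0, 2, 1024)
    Thread.sleep(500)
    system.terminate()
  }
}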
Every word in this chapter (including the annotations in the source code) was typed out by the author. If you found it useful, please click 'Like'.