Submitting a Spark job on YARN with the following command:
spark-submit --master yarn --deploy-mode client --num-executors 1 --executor-memory 512mb --executor-cores 1 --driver-memory 1g --driver-cores 1 --jars xxx.jar --class xxx xxx.jar
In theory, the requested resources should be 1.5 GB of memory and 2 vcores (driver: 1 GB / 1 core, executor: 512 MB / 1 core).
However, after submitting to YARN, the Web UI showed that these settings had not taken effect: the application was actually using 9 vcores and 9 GB of memory.
Why is that?
After some research, it turned out that Spark had dynamic resource allocation enabled (spark.dynamicAllocation.enabled=true), so the executor count was managed dynamically rather than fixed by the submit command.
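If dynamic allocation was switched on cluster-wide (typically in spark-defaults.conf), one way to make the explicit resource flags take effect is to turn it off for this one job on the command line. A sketch, keeping the original flags (the xxx.jar placeholders are from the original command):

```shell
# Disable dynamic allocation for this job only;
# --num-executors then fixes the executor count at 1.
spark-submit --master yarn --deploy-mode client \
  --conf spark.dynamicAllocation.enabled=false \
  --num-executors 1 --executor-memory 512mb --executor-cores 1 \
  --driver-memory 1g --driver-cores 1 \
  --jars xxx.jar --class xxx xxx.jar
```

A per-job `--conf` overrides the value in spark-defaults.conf, so the cluster-wide default stays untouched for other jobs.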
The related parameters are:
spark.dynamicAllocation.minExecutors: the minimum number of executors under dynamic allocation, requested up front at startup; default 0.
spark.dynamicAllocation.maxExecutors: the maximum number of executors under dynamic allocation; default infinity.
spark.dynamicAllocation.initialExecutors: the initial number of executors; default = spark.dynamicAllocation.minExecutors.
spark.dynamicAllocation.executorIdleTimeout: an executor that has been idle longer than this is killed; default 60s.
spark.dynamicAllocation.cachedExecutorIdleTimeout: an executor holding cached data that has been idle longer than this is killed; default infinity.
spark.dynamicAllocation.schedulerBacklogTimeout: when tasks are pending and resources are insufficient, how long the backlog must persist before new executors are requested; default 1s.
spark.dynamicAllocation.sustainedSchedulerBacklogTimeout: same as schedulerBacklogTimeout, but the interval for subsequent executor requests after the first one; default = schedulerBacklogTimeout.
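Alternatively, dynamic allocation can stay on and the parameters above can be used to bound how many executors YARN hands out. A sketch of a spark-defaults.conf fragment (the numeric values are illustrative assumptions, not recommendations):

```properties
# Keep dynamic allocation, but bound it between 1 and 4 executors
spark.dynamicAllocation.enabled              true
spark.dynamicAllocation.minExecutors         1
spark.dynamicAllocation.initialExecutors     1
spark.dynamicAllocation.maxExecutors         4
# Reclaim idle executors faster than the 60s default
spark.dynamicAllocation.executorIdleTimeout  30s
# Dynamic allocation on YARN needs the external shuffle service
spark.shuffle.service.enabled                true
```

With maxExecutors capped, the job can no longer ramp up to the 9 executors observed in the Web UI, while still releasing idle executors back to the cluster.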