> A job submitted with spark-submit holds a fixed amount of resources. Is there a way to release those resources while the job is idle so that other jobs can use them? Newer versions of YARN already support this by default; we are running HDP.
## Versions
![](https://upload-images.jianshu.io/upload_images/9028759-35c1bf0606261dc5.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240)
## Configuration
1. HDP already supports the Spark dynamic resource release configuration by default, so no extra cluster-side setup is needed.
2. Configure it in code:
```
import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession

val sparkConf = new SparkConf()
  .set("spark.shuffle.service.enabled", "true")
  .set("spark.dynamicAllocation.enabled", "true")
  .set("spark.dynamicAllocation.minExecutors", "1") // keep at least 1 executor
  .set("spark.dynamicAllocation.initialExecutors", "1") // start with 1 executor
  .set("spark.dynamicAllocation.maxExecutors", "6") // use at most 6 executors
  .set("spark.dynamicAllocation.executorIdleTimeout", "60") // release an executor after 60s idle
  .set("spark.dynamicAllocation.cachedExecutorIdleTimeout", "60") // release executors holding cached data after 60s idle
  .set("spark.executor.cores", "3") // vcores per executor
  // .setMaster("local[12]")
  .setAppName("Spark DynamicRelease")

val spark: SparkSession = SparkSession
  .builder
  .config(sparkConf)
  .getOrCreate()
```
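A simple way to check that executors really are released is to run a small job, wait longer than `spark.dynamicAllocation.executorIdleTimeout`, and compare the executor counts reported by the status tracker. The sketch below assumes the `spark` session configured above and a YARN cluster with the external shuffle service running; the job and sleep duration are just illustrative.

```
// Sketch only: verify that idle executors are released.
// Trigger some work so extra executors get requested.
spark.range(0, 100000000L).selectExpr("sum(id)").collect()

// Executors currently registered (the list also contains the driver entry).
val before = spark.sparkContext.statusTracker.getExecutorInfos.length
println(s"Executors right after the job: $before")

// Wait longer than spark.dynamicAllocation.executorIdleTimeout (60s above).
Thread.sleep(90 * 1000L)

val after = spark.sparkContext.statusTracker.getExecutorInfos.length
println(s"Executors after the idle timeout: $after") // should drop towards minExecutors
```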
## Notes
If the job uses rdd.cache, dynamic allocation will not release the executors holding cached data unless the following setting is added:
```
.set("spark.dynamicAllocation.cachedExecutorIdleTimeout", "60")
```
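An alternative, shown in the sketch below, is to drop the cache explicitly once it is no longer needed: after `unpersist`, the executors no longer hold cached blocks and can be reclaimed under the ordinary `spark.dynamicAllocation.executorIdleTimeout`. The RDD and sizes here are only illustrative.

```
// Sketch only: explicitly dropping the cache instead of relying on
// spark.dynamicAllocation.cachedExecutorIdleTimeout.
val cached = spark.sparkContext.parallelize(1 to 1000000).cache()
println(cached.count()) // materialize the cache

// ... use `cached` for further computation ...

// Once the cached data is no longer needed, unpersist it so the executors
// holding the blocks become plain idle executors and can be released.
cached.unpersist(blocking = true)
```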
---
![](https://upload-images.jianshu.io/upload_images/9028759-07315bb8dadcd082.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240)