I.
Error: Job failed with org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 6, hadoop104, executor 1): UnknownReason
FAILED: Execution Error, return code 3 from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Spark job failed during runtime. Please check stacktrace for the root cause.
Solution:
① Check the custom UDTF and make sure its code is written correctly (a minimal sketch of a well-formed UDTF follows this list).
② Check that the SQL being executed is written correctly.
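For reference, here is a minimal sketch of a well-formed Hive UDTF; the class name SplitToRows and the comma-splitting behavior are hypothetical, purely for illustration. The things worth verifying in your own UDTF are the ones shown here: initialize() declares the output struct, process() guards against NULL input and forwards rows that match that struct, and close() releases any state.

import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.hive.ql.exec.UDFArgumentException;
import org.apache.hadoop.hive.ql.metadata.HiveException;
import org.apache.hadoop.hive.ql.udf.generic.GenericUDTF;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.ObjectInspectorFactory;
import org.apache.hadoop.hive.serde2.objectinspector.StructObjectInspector;
import org.apache.hadoop.hive.serde2.objectinspector.primitive.PrimitiveObjectInspectorFactory;

public class SplitToRows extends GenericUDTF {

    @Override
    public StructObjectInspector initialize(ObjectInspector[] argOIs)
            throws UDFArgumentException {
        // Validate the argument count up front; a clear error here is far
        // easier to debug than a task failure at runtime.
        if (argOIs.length != 1) {
            throw new UDFArgumentException("SplitToRows takes exactly one argument");
        }
        // Declare exactly one output column named "token" of type string.
        List<String> fieldNames = new ArrayList<>();
        List<ObjectInspector> fieldOIs = new ArrayList<>();
        fieldNames.add("token");
        fieldOIs.add(PrimitiveObjectInspectorFactory.javaStringObjectInspector);
        return ObjectInspectorFactory.getStandardStructObjectInspector(fieldNames, fieldOIs);
    }

    @Override
    public void process(Object[] args) throws HiveException {
        // Guard against NULL input -- an unhandled NPE here is a common cause
        // of the "UnknownReason" task failure shown above.
        if (args[0] == null) {
            return;
        }
        for (String token : args[0].toString().split(",")) {
            // forward() must emit an array matching the struct declared in initialize().
            forward(new Object[] { token });
        }
    }

    @Override
    public void close() throws HiveException {
        // No state to clean up in this sketch.
    }
}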
II.
1) JVM heap out-of-memory: when heap memory is insufficient, one of the following exceptions is typically thrown:
First: java.lang.OutOfMemoryError: GC overhead limit exceeded;
Second: java.lang.OutOfMemoryError: Java heap space;
Third: running beyond physical memory limits. Current usage: 4.3 GB of 4.3 GB physical memory used; 7.4 GB of 13.2 GB virtual memory used. Killing container.
Solution: raise the Hive client JVM heap by setting the following (the value is in MB) in hive-env.sh:
export HADOOP_HEAPSIZE=4096
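The first two exceptions differ in mechanism: "GC overhead limit exceeded" is thrown when the GC runs almost constantly but reclaims very little, while "Java heap space" is thrown when a single allocation cannot be satisfied. As a minimal, hypothetical standalone demo (not part of Hive), the following program retains every allocation so GC can reclaim nothing; run with a small heap, e.g. java -Xmx32m HeapOOM, it terminates with java.lang.OutOfMemoryError: Java heap space.

import java.util.ArrayList;
import java.util.List;

public class HeapOOM {
    public static void main(String[] args) {
        List<byte[]> retained = new ArrayList<>();
        while (true) {
            // Each 1 MB chunk stays reachable, so the GC cannot free anything,
            // and the heap eventually fills up.
            retained.add(new byte[1024 * 1024]);
        }
    }
}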
2) Stack overflow: the exception thrown is java.lang.StackOverflowError.
This usually happens in SQL (too many condition combinations in one statement, which the parser turns into deeply nested recursive calls) or in MR code that recurses. The deep recursion makes the method call chain on the stack too long. This error generally indicates a problem in the program itself; see the minimal illustration below.
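As a minimal, hypothetical standalone illustration, unbounded recursion exhausts the thread stack in exactly the same way an over-deep query plan or a runaway recursive MR method does: every call pushes another frame until the JVM throws java.lang.StackOverflowError.

public class StackOverflowDemo {
    private static int depth = 0;

    private static void recurse() {
        depth++;
        recurse(); // no termination condition: each call adds a stack frame
    }

    public static void main(String[] args) {
        try {
            recurse();
        } catch (StackOverflowError e) {
            // Typically fails after tens of thousands of frames with the
            // default thread stack size.
            System.out.println("StackOverflowError at depth " + depth);
        }
    }
}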