Submitting Spark SQL Jobs from Java, and Problem Analysis

The Spark application itself: the driver runs a SQL statement and writes the result to an HDFS directory as CSV.

SparkSession spark = SparkSession.builder()
                .enableHiveSupport()  // required: this is what wires the session to the Hive metastore
                .getOrCreate();
spark.sql("use " + args[0]);  // args[0] is the database, args[1] the SQL statement, args[2] the HDFS output directory, e.g. "/task/data"
spark.sql(args[1]).write().format("csv").save(args[2]);
spark.close();
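
For reference, a minimal self-contained sketch of such a driver (the class name SqlToCsvJob and the try/finally shape are assumptions, not the article's actual class):

import org.apache.spark.sql.SparkSession;

public class SqlToCsvJob {
    public static void main(String[] args) {
        // args[0] = database, args[1] = SQL statement, args[2] = HDFS output directory
        SparkSession spark = SparkSession.builder()
                .enableHiveSupport()   // connect to the Hive metastore
                .getOrCreate();
        try {
            spark.sql("use " + args[0]);
            spark.sql(args[1]).write().format("csv").save(args[2]);
        } finally {
            spark.close();
        }
    }
}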
Submitting the job from a Java service with SparkLauncher, in yarn-cluster mode:

final CountDownLatch countDownLatch = new CountDownLatch(1);

            HashMap<String, String> env = new HashMap<>();
            //env.put("JAVA_HOME","/usr/java8/jdk");
            env.put("HADOOP_CONF_DIR","/etc/hadoop/conf");
            SparkAppHandle handler = new SparkLauncher(env)
                    .setSparkHome("/opt/ZDH/parcels/lib/spark")
                    .setMaster("yarn")
                    .setDeployMode("cluster")
                    .setConf(SparkLauncher.EXECUTOR_MEMORY, "4g")
                    .setConf(SparkLauncher.EXECUTOR_CORES, "2")
                    .setConf("spark.cores.max","22")
                    .setConf("spark.executorEnv.JAVA_HOME","/usr/java8/jdk")
                    .setConf("spark.driverEnv.JAVA_HOME", "/usr/java8/jdk")
                    .setConf("spark.yarn.appMasterEnv.JAVA_HOME", "/usr/java8/jdk")       // needed in yarn mode
                    .setConf("spark.sql.shuffle.partitions", "800")
                    .setConf("spark.driver.extraClassPath", "/opt/ZDH/parcels/lib/spark/libext/*")   // some jars cannot be loaded automatically and must be added to the classpath via conf
                    .setConf("spark.executor.extraClassPath", "/opt/ZDH/parcels/lib/spark/libext/*")
                    .addSparkArg("--files", "/etc/spark/conf/hive-site.xml")
                    .setAppResource("hdfs://nameservice/task/algorithm/jar/KLTAnonymous-1.0-SNAPSHOT.jar") //TODO
                    .setMainClass("anonymous.main.KLMainSpark")
                    .addAppArgs("hdfs://nameservice" + algCfgPath)
                    .setVerbose(true)   // --verbose for debug output
                    .startApplication(new SparkAppHandle.Listener() {
                        // Watches the job state; once the job ends (for whatever reason),
                        // isFinal() returns true, otherwise false.
                        @Override
                        public void stateChanged(SparkAppHandle sparkAppHandle) {
                            if (sparkAppHandle.getState().isFinal()) {
                                countDownLatch.countDown();
                                logger.info("SparkJob finished! state =>" + sparkAppHandle.getState().toString());
                                if (SparkAppHandle.State.FINISHED.equals(sparkAppHandle.getState())) {
                                    updateTaskStatus(jobId, 2);
                                } else {
                                    updateTaskStatus(jobId, 3);
                                }
                            }
                        }

                        @Override
                        public void infoChanged(SparkAppHandle handler) {
                            logger.info("infoChanged=>" + handler.getState().toString());
                        }
                    });

            // Poll until the job reaches any final state (FINISHED, FAILED, KILLED, ...);
            // checking isFinal() is safer than comparing against FINISHED/FAILED only.
            while (!handler.getState().isFinal()) {
                logger.info("SparkJob executing, applicationId=>" + handler.getAppId() + ", state=>" + handler.getState());
                try {
                    Thread.sleep(10000);
                } catch (InterruptedException e) {
                    logger.error("Thread sleep error: " + e.getMessage());
                }
            }
        }
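
Since the listener already counts the latch down once the job reaches a final state, a simpler wait than the polling loop could block on the latch (a sketch, not from the original code):

try {
    // Block until stateChanged() observes a final state and counts the latch down.
    countDownLatch.await();
} catch (InterruptedException e) {
    Thread.currentThread().interrupt();
}
logger.info("SparkJob done, final state => " + handler.getState());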
An alternative submission path drives Spark's YARN client directly (note that org.apache.spark.deploy.yarn.Client is an internal API and its arguments have changed across Spark versions):

    String localPath = new File("").getAbsolutePath();
    String sparkJarsPath = localPath + File.separator + "jars";
    String appJarPath = localPath + File.separator + "target";

    System.setProperty("HADOOP_USER_NAME", "mr");
    System.setProperty("SPARK_YARN_MODE", "true");

    SparkConf sparkConf = new SparkConf();
    sparkConf.setMaster("yarn");
    sparkConf.setAppName("SparkOnYarnTest");
    //sparkConf.set("spark.yarn.jars", sparkJarsPath+ File.separator + "*");
    sparkConf.set("spark.yarn.jars", "local:///opt/ZDH/parcels/lib/spark/jars/*");
    sparkConf.set("spark.submit.deployMode", "cluster");

    List<String> argsList = new ArrayList<String>();

    argsList.add("--jar");
    argsList.add(appJarPath + File.separator + "SparkApplication-1.0-SNAPSHOT.jar");
    argsList.add("--class");
    argsList.add("com.zte.spark.sql.debug.SqlDebugger");

    String[] arr = argsList.toArray(new String[argsList.size()]);
    ClientArguments clientArgs = new ClientArguments(arr);
    Client client = new Client(clientArgs, sparkConf);
    client.run();

Problem analysis:

Problem 1

org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category WRITE is not supported in state standby
at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87)
at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1800)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1327)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.create(NameNodeRpcServer.java:616)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.create(ClientNamenodeProtocolServerSideTranslatorPB.java:401)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:616)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:982)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2075)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2071)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2069)

This error occurs because the program is deployed on machine 141, and 141 is the standby NameNode. Write operations must go to the active NameNode; with HDFS HA it is better to use the logical nameservice URI (hdfs://nameservice) rather than a fixed host, so the client fails over automatically.

Problem 2

fs = FileSystem.get(URI.create("hdfs://10.43.159.142:9000"), conf); — no user identity was supplied.

Call From dap141-159/10.43.159.141 to dap142-159:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused

This error occurs because the program is deployed on machine 141 and the code connects to dap142-159:9000, which refuses the connection.

When creating HDFS directories and files from Java with fs = FileSystem.get(URI.create("hdfs://10.43.159.142:9000"), conf), the URI must carry the correct NameNode information.
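
A sketch of creating the FileSystem with an explicit user and the HA nameservice URI (the nameservice name and the mr user are taken from elsewhere in this article; treat the exact values as assumptions):

import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsMkdirExample {
    public static void main(String[] args) throws Exception {
        // The HA properties (dfs.nameservices etc.) normally come from hdfs-site.xml on the classpath.
        Configuration conf = new Configuration();
        // The three-argument overload performs the operations as the given user (simple auth);
        // the logical nameservice URI lets the client fail over to the active NameNode.
        FileSystem fs = FileSystem.get(URI.create("hdfs://nameservice"), conf, "mr");
        fs.mkdirs(new Path("/task/data"));
        fs.close();
    }
}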

Problem 3

SEVERE: Servlet.service() for servlet [REST Service] in context with path [] threw exception [java.lang.UnsupportedClassVersionError: org/apache/spark/launcher/SparkLauncher : Unsupported major.minor version 52.0 (unable to load class org.apache.spark.launcher.SparkLauncher)] with root cause
java.lang.UnsupportedClassVersionError: org/apache/spark/launcher/SparkLauncher : Unsupported major.minor version 52.0 (unable to load class org.apache.spark.launcher.SparkLauncher)
at org.apache.catalina.loader.WebappClassLoaderBase.findClassInternal(WebappClassLoaderBase.java:3209)
at org.apache.catalina.loader.WebappClassLoaderBase.findClass(WebappClassLoaderBase.java:1373)
at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1861)
at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1735)
at com.zte.rest.TestRest.createMaskingPlan(TestRest.java:63)
at com.zte.rest.TestRest$$FastClassBySpringCGLIB$$19ca2510.invoke(<generated>)
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:700)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:150)
at org.springframework.transaction.interceptor.TransactionInterceptor$1.proceedWithInvocation(TransactionInterceptor.java:96)
at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:260)
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:94)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172)
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:633)
at com.zte.rest.TestRest$$EnhancerBySpringCGLIB$$e60e9ed0.createMaskingPlan()
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.sun.jersey.spi.container.JavaMethodInvokerFactory$1.invoke(JavaMethodInvokerFactory.java:60)
at com.sun.jersey.server.impl.model.method.dispatch.AbstractResourceMethodDispatchProvider$VoidOutInvoker._dispatch(AbstractResourceMethodDispatchProvider.java:167)
at com.sun.jersey.server.impl.model.method.dispatch.ResourceJavaMethodDispatcher.dispatch(ResourceJavaMethodDispatcher.java:75)
at com.sun.jersey.server.impl.uri.rules.HttpMethodRule.accept(HttpMethodRule.java:302)
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at com.sun.jersey.server.impl.uri.rules.ResourceClassRule.accept(ResourceClassRule.java:108)
at com.sun.jersey.server.impl.uri.rules.RightHandPathRule.accept(RightHandPathRule.java:147)
at com.sun.jersey.server.impl.uri.rules.RootResourceClassesRule.accept(RootResourceClassesRule.java:84)
at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1542)
at com.sun.jersey.server.impl.application.WebApplicationImpl._handleRequest(WebApplicationImpl.java:1473)
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1419)
at com.sun.jersey.server.impl.application.WebApplicationImpl.handleRequest(WebApplicationImpl.java:1409)
at com.sun.jersey.spi.container.servlet.WebComponent.service(WebComponent.java:409)
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:558)
at com.sun.jersey.spi.container.servlet.ServletContainer.service(ServletContainer.java:733)
The JDK version is wrong: "Unsupported major.minor version 52.0" means a class compiled for Java 8 (class-file major version 52) is being loaded by an older JVM, while the local build here is based on Java 1.7.

The root cause is that the mask-web project is launched with JDK 1.7; it needs to be changed to 1.8.
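
To verify which bytecode version a suspect class was compiled for (52 = Java 8, 51 = Java 7), one can read the class-file header directly; a small hypothetical checker:

import java.io.DataInputStream;
import java.io.FileInputStream;

public class ClassVersionCheck {
    public static void main(String[] args) throws Exception {
        // args[0] = path to a .class file, e.g. one extracted from the spark-launcher jar
        try (DataInputStream in = new DataInputStream(new FileInputStream(args[0]))) {
            int magic = in.readInt();              // always 0xCAFEBABE
            int minor = in.readUnsignedShort();
            int major = in.readUnsignedShort();    // 52 means compiled for Java 8
            System.out.printf("magic=%x, major.minor=%d.%d%n", magic, major, minor);
        }
    }
}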

Command-line parameters:
spark.executorEnv.JAVA_HOME=/usr/java8/jdk
spark.yarn.appMasterEnv.JAVA_HOME=/usr/java8/jdk
spark.yarn.queue=mr
spark.master=spark://xinhuo113:7077,xinhuo111:7077   (only needed in standalone mode)
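
In the SparkLauncher-based submission these map onto setConf/setMaster calls; a sketch (the launcher variable stands for the builder chain shown earlier):

launcher.setConf("spark.executorEnv.JAVA_HOME", "/usr/java8/jdk")
        .setConf("spark.yarn.appMasterEnv.JAVA_HOME", "/usr/java8/jdk")
        .setConf("spark.yarn.queue", "mr");
// Only for standalone mode:
// launcher.setMaster("spark://xinhuo113:7077,xinhuo111:7077");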

Problem 4

Jun 12, 2018 1:47:53 PM org.apache.spark.launcher.OutputRedirector redirect
INFO: Error: Cluster deploy mode is not compatible with master "local"
Jun 12, 2018 1:47:53 PM org.apache.spark.launcher.OutputRedirector redirect
INFO: Run with --help for usage help or --verbose for debug output

If the following parameters are not set, the job runs in local mode at submission time and fails with the error above. First set it to run in YARN mode:
.setMaster("yarn")
.setDeployMode("cluster")
The next error is about user permissions: the submitting user must be granted access to the queue, so for now the project is started as the mr user by default.
Jun 12, 2018 2:01:42 PM org.apache.spark.launcher.OutputRedirector redirect
INFO: Exception in thread "main" org.apache.hadoop.yarn.exceptions.YarnException: Failed to submit application_1528709704391_0001 to YARN : User root cannot submit applications to queue root.root
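
On a cluster with simple authentication, an alternative to starting the whole project as mr is to set HADOOP_USER_NAME in the launcher's environment (a sketch; this assumes no Kerberos is in play):

import java.util.HashMap;
import org.apache.spark.launcher.SparkLauncher;

HashMap<String, String> env = new HashMap<>();
env.put("HADOOP_CONF_DIR", "/etc/hadoop/conf");
env.put("HADOOP_USER_NAME", "mr");   // submit to YARN as user mr (simple auth only)
SparkLauncher launcher = new SparkLauncher(env);  // then configure and start as shown earlier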

Log file:
/home/xiehh/mask/mask-1.0.0-zdh8.2.1-SNAPSHOT-web/target/unit-tests.log

spark-submit --master yarn --deploy-mode cluster --executor-cores 2 --executor-memory 4g --conf spark.sql.shuffle.partitions=800 --class anonymous.main.KLMainSpark Ka-Ld-1.0-SNAPSHOT.jar KAn_DoubleNum.xml

spark-submit --master yarn --deploy-mode client --executor-cores 1 --executor-memory 1g --num-executors 2 --conf spark.sql.shuffle.partitions=800 --class anonymous.main.KLMainSpark Ka-Ld-1.0-SNAPSHOT.jar KAn_DoubleNum.xml

Difference between client mode and cluster mode: in client mode the driver runs in the local submitting JVM, while in cluster mode the driver runs inside the YARN ApplicationMaster on the cluster, so files that exist only locally (such as hive-site.xml) must be shipped explicitly, e.g. with --files.

With both the jar and the config file in an HDFS directory. Note: in cluster mode the job may fail to read /etc/spark/conf/hive-site.xml, so it has to be specified manually:
spark-submit --master yarn --deploy-mode cluster --executor-cores 2 --executor-memory 4g --conf spark.sql.shuffle.partitions=800 --class anonymous.main.KLMainSpark hdfs://nameservice/job/KLTAnonymous-1.0-SNAPSHOT.jar hdfs://nameservice/job/KAn_DoubleNum.xml
This command fails to pick up hive-site.xml, so Spark falls back to an embedded Derby metastore:
2018-06-13 09:43:42,785 INFO org.apache.hadoop.hive.metastore.MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
2018-06-13 09:43:42,789 INFO org.apache.hadoop.hive.metastore.ObjectStore: Initialized ObjectStore
2018-06-13 09:43:43,092 WARN org.apache.hadoop.hive.metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
2018-06-13 09:43:43,421 WARN org.apache.hadoop.hive.metastore.ObjectStore: Failed to get database default, returning NoSuchObjectException

Specify the hive-site.xml file explicitly:
spark-submit --master yarn --deploy-mode cluster --files /etc/spark/conf/hive-site.xml --executor-cores 2 --executor-memory 4g --conf spark.sql.shuffle.partitions=800 --class anonymous.main.KLMainSpark hdfs://nameservice/job/KLTAnonymous-1.0-SNAPSHOT.jar hdfs://nameservice/job/KAn_DoubleNum.xml

Error when running in yarn-cluster mode: execution is distributed, so the file KAn_DoubleNum.xml must be read from an HDFS directory (or be distributed to every machine); a path that exists only on the submitting host will not be found in the driver's container:
mr@dap132-183:/home/xiehh> yarn logs -applicationId application_1528709704391_0017
2018-06-13 09:19:40,881 INFO org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider: Failing over to rm2

Container: container_1528709704391_0017_02_000001 on dap134-183_8041

LogType:stderr
Log Upload Time: Wed Jun 13 09:17:28 +0800 2018
LogLength:9816
Log Contents:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/data2/zdh/yarn/local/usercache/mr/filecache/45/__spark_libs__558832802002154562.zip/slf4j-log4j12-1.7.16.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/ZDH/parcels/lib/hadoop/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2018-06-13 09:17:22,186 INFO org.apache.spark.util.SignalUtils: Registered signal handler for TERM
2018-06-13 09:17:22,188 INFO org.apache.spark.util.SignalUtils: Registered signal handler for HUP
2018-06-13 09:17:22,188 INFO org.apache.spark.util.SignalUtils: Registered signal handler for INT
2018-06-13 09:17:23,835 INFO org.apache.spark.deploy.yarn.ApplicationMaster: Preparing Local resources
2018-06-13 09:17:24,691 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2018-06-13 09:17:24,699 WARN org.apache.hadoop.hdfs.shortcircuit.DomainSocketFactory: The short-circuit local reads feature cannot be used because libhadoop cannot be loaded.
2018-06-13 09:17:25,162 INFO org.apache.spark.deploy.yarn.ApplicationMaster: ApplicationAttemptId: appattempt_1528709704391_0017_000002
2018-06-13 09:17:25,173 INFO org.apache.spark.SecurityManager: Changing view acls to: mr
2018-06-13 09:17:25,173 INFO org.apache.spark.SecurityManager: Changing modify acls to: mr
2018-06-13 09:17:25,297 INFO org.apache.spark.SecurityManager: Changing view acls groups to:
2018-06-13 09:17:25,298 INFO org.apache.spark.SecurityManager: Changing modify acls groups to:
2018-06-13 09:17:25,298 INFO org.apache.spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(mr); groups with view permissions: Set(); users with modify permissions: Set(mr); groups with modify permissions: Set()
2018-06-13 09:17:25,430 INFO org.apache.spark.deploy.yarn.ApplicationMaster: Starting the user application in a separate Thread
2018-06-13 09:17:25,439 INFO org.apache.spark.deploy.yarn.ApplicationMaster: Waiting for spark context initialization...
2018-06-13 09:17:25,449 ERROR config.DOMParser: /data2/zdh/yarn/local/usercache/mr/appcache/application_1528709704391_0017/container_1528709704391_0017_02_000001/KAn_DoubleNum.xml (No such file or directory)
java.io.FileNotFoundException: /data2/zdh/yarn/local/usercache/mr/appcache/application_1528709704391_0017/container_1528709704391_0017_02_000001/KAn_DoubleNum.xml (No such file or directory)
at java.io.FileInputStream.open0(Native Method)
at java.io.FileInputStream.open(FileInputStream.java:195)
at java.io.FileInputStream.<init>(FileInputStream.java:138)
at java.io.FileInputStream.<init>(FileInputStream.java:93)
at sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:90)
at sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:188)
at org.apache.xerces.impl.XMLEntityManager.setupCurrentEntity(Unknown Source)
at org.apache.xerces.impl.XMLVersionDetector.determineDocVersion(Unknown Source)
at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
at org.apache.xerces.parsers.DOMParser.parse(Unknown Source)
at org.apache.xerces.jaxp.DocumentBuilderImpl.parse(Unknown Source)
at javax.xml.parsers.DocumentBuilder.parse(DocumentBuilder.java:205)
at config.DOMParser.parse(DOMParser.java:29)
at config.DOMParser.parseConfig(DOMParser.java:46)
at anonymous.main.KLMainSpark.function(KLMainSpark.java:65)
at anonymous.main.KLMainSpark.init(KLMainSpark.java:57)
at anonymous.main.KLMainSpark.main(KLMainSpark.java:53)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:637)
2018-06-13 09:17:25,485 ERROR org.apache.spark.deploy.yarn.ApplicationMaster: User class threw exception: java.lang.Exception: /data2/zdh/yarn/local/usercache/mr/appcache/application_1528709704391_0017/container_1528709704391_0017_02_000001/KAn_DoubleNum.xml (No such file or directory)
java.lang.Exception: /data2/zdh/yarn/local/usercache/mr/appcache/application_1528709704391_0017/container_1528709704391_0017_02_000001/KAn_DoubleNum.xml (No such file or directory)
at config.DOMParser.parse(DOMParser.java:32)
at config.DOMParser.parseConfig(DOMParser.java:46)
at anonymous.main.KLMainSpark.function(KLMainSpark.java:65)
at anonymous.main.KLMainSpark.init(KLMainSpark.java:57)
at anonymous.main.KLMainSpark.main(KLMainSpark.java:53)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:637)
2018-06-13 09:17:25,488 INFO org.apache.spark.deploy.yarn.ApplicationMaster: Final app status: FAILED, exitCode: 15, (reason: User class threw exception: java.lang.Exception: /data2/zdh/yarn/local/usercache/mr/appcache/application_1528709704391_0017/container_1528709704391_0017_02_000001/KAn_DoubleNum.xml (No such file or directory))
2018-06-13 09:17:25,503 ERROR org.apache.spark.deploy.yarn.ApplicationMaster: Uncaught exception: org.apache.spark.SparkException: Exception thrown in awaitResult:
at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:205)
at org.apache.spark.deploy.yarn.ApplicationMaster.runDriver(ApplicationMaster.scala:403)
at org.apache.spark.deploy.yarn.ApplicationMaster.run(ApplicationMaster.scala:256)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anonfun$main$1.apply$mcV$sp(ApplicationMaster.scala:766)
at org.apache.spark.deploy.SparkHadoopUtil$$anon$2.run(SparkHadoopUtil.scala:67)
at org.apache.spark.deploy.SparkHadoopUtil$$anon$2.run(SparkHadoopUtil.scala:66)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
at org.apache.spark.deploy.SparkHadoopUtil.runAsSparkUser(SparkHadoopUtil.scala:66)
at org.apache.spark.deploy.yarn.ApplicationMaster$.main(ApplicationMaster.scala:764)
at org.apache.spark.deploy.yarn.ApplicationMaster.main(ApplicationMaster.scala)
Caused by: java.lang.Exception: /data2/zdh/yarn/local/usercache/mr/appcache/application_1528709704391_0017/container_1528709704391_0017_02_000001/KAn_DoubleNum.xml (No such file or directory)
at config.DOMParser.parse(DOMParser.java:32)
at config.DOMParser.parseConfig(DOMParser.java:46)
at anonymous.main.KLMainSpark.function(KLMainSpark.java:65)
at anonymous.main.KLMainSpark.init(KLMainSpark.java:57)
at anonymous.main.KLMainSpark.main(KLMainSpark.java:53)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:637)
2018-06-13 09:17:25,512 INFO org.apache.spark.deploy.yarn.ApplicationMaster: Unregistering ApplicationMaster with FAILED (diag message: User class threw exception: java.lang.Exception: /data2/zdh/yarn/local/usercache/mr/appcache/application_1528709704391_0017/container_1528709704391_0017_02_000001/KAn_DoubleNum.xml (沒有那個文件或目錄))
2018-06-13 09:17:25,513 INFO org.apache.spark.deploy.yarn.ApplicationMaster: Deleting staging directory hdfs://nameservice/user/mr/.sparkStaging/application_1528709704391_0017
2018-06-13 09:17:25,652 ERROR org.apache.spark.util.Utils: Uncaught exception in thread Thread-3
java.lang.NullPointerException
at org.apache.spark.deploy.yarn.ApplicationMaster$$anonfun$run$1.apply$mcV$sp(ApplicationMaster.scala:237)
at org.apache.spark.util.SparkShutdownHook.run(ShutdownHookManager.scala:216)
at org.apache.spark.util.SparkSh
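
One way to avoid the container-local file lookup in cluster mode is to either ship the XML with --files or have the user code read it straight from HDFS; a sketch of the latter (the loadConfigFromHdfs helper is illustrative, not the article's actual DOMParser):

import java.io.InputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.w3c.dom.Document;

public class HdfsXmlLoader {
    // Parse an XML config directly from HDFS, e.g. "hdfs://nameservice/job/KAn_DoubleNum.xml".
    static Document loadConfigFromHdfs(String uri) throws Exception {
        Path cfg = new Path(uri);
        FileSystem fs = cfg.getFileSystem(new Configuration());
        try (InputStream in = fs.open(cfg)) {
            return DocumentBuilderFactory.newInstance().newDocumentBuilder().parse(in);
        }
    }
}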


Problem 5

2018-06-08 14:51:07,506 WARN org.apache.spark.scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

Cause: the cluster does not have enough free resources for the job; lower the requested executor cores/memory or make sure workers are registered.
