1. Livy Installation and Deployment
Download
[hadoop@hadoop001 software]$ wget http://mirrors.hust.edu.cn/apache/incubator/livy/0.5.0-incubating/livy-0.5.0-incubating-bin.zip
[hadoop@hadoop001 software]$ unzip livy-0.5.0-incubating-bin.zip
[hadoop@hadoop001 software]$ mv livy-0.5.0-incubating-bin/ ../app/
[hadoop@hadoop001 software]$ cd ../app/livy-0.5.0-incubating-bin/
[hadoop@hadoop001 livy-0.5.0-incubating-bin]$ cd conf/
[hadoop@hadoop001 conf]$ cp livy-env.sh.template livy-env.sh
[hadoop@hadoop001 conf]$ vi livy-env.sh
JAVA_HOME=/opt/app/jdk1.8.0_45
HADOOP_CONF_DIR=/opt/app/hadoop-2.6.0-cdh5.7.0/conf
SPARK_HOME=/opt/app/spark-2.2.0-bin-2.6.0-cdh5.7.0
- Modify the logging configuration so that messages are printed to the console
[hadoop@hadoop001 conf]$ vim log4j.properties
log4j.rootCategory=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n
log4j.logger.org.eclipse.jetty=WARN
Start Livy
[hadoop@hadoop001 livy-0.5.0-incubating-bin]$ ./bin/livy-server
This fails with the following error:
Exception in thread "main" java.io.IOException: Cannot write log directory /opt/app/livy-0.5.0-incubating-bin/logs
at org.eclipse.jetty.util.RolloverFileOutputStream.setFile(RolloverFileOutputStream.java:219)
at org.eclipse.jetty.util.RolloverFileOutputStream.<init>(RolloverFileOutputStream.java:166)
at org.eclipse.jetty.server.NCSARequestLog.doStart(NCSARequestLog.java:228)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:132)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:114)
at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)
at org.eclipse.jetty.server.handler.RequestLogHandler.doStart(RequestLogHandler.java:140)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:132)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:114)
at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:132)
at org.eclipse.jetty.server.Server.start(Server.java:387)
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:114)
at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)
at org.eclipse.jetty.server.Server.doStart(Server.java:354)
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)
at org.apache.livy.server.WebServer.start(WebServer.scala:92)
at org.apache.livy.server.LivyServer.start(LivyServer.scala:259)
at org.apache.livy.server.LivyServer$.main(LivyServer.scala:339)
at org.apache.livy.server.LivyServer.main(LivyServer.scala)
- Solution:
The server cannot write to the logs directory because it does not exist; create it manually:
[hadoop@hadoop001 livy-0.5.0-incubating-bin]$ mkdir logs
- After a successful start, the console prints the following; then open the web UI:
19/08/29 22:26:20 INFO LineBufferedStream: stdout: Welcome to
19/08/29 22:26:20 INFO LineBufferedStream: stdout: ____ __
19/08/29 22:26:20 INFO LineBufferedStream: stdout: / __/__ ___ _____/ /__
19/08/29 22:26:20 INFO LineBufferedStream: stdout: _\ \/ _ \/ _ `/ __/ '_/
19/08/29 22:26:20 INFO LineBufferedStream: stdout: /___/ .__/\_,_/_/ /_/\_\ version 2.4.2
19/08/29 22:26:20 INFO LineBufferedStream: stdout: /_/
19/08/29 22:26:20 INFO LineBufferedStream: stdout:
19/08/29 22:26:20 INFO LineBufferedStream: stdout: Using Scala version 2.11.12, Java HotSpot(TM) 64-Bit Server VM, 1.8.0_201
19/08/29 22:26:20 INFO LineBufferedStream: stdout: Branch
19/08/29 22:26:20 INFO LineBufferedStream: stdout: Compiled by user hadoop on 2019-05-01T03:17:40Z
19/08/29 22:26:20 INFO LineBufferedStream: stdout: Revision
19/08/29 22:26:20 INFO LineBufferedStream: stdout: Url
19/08/29 22:26:20 INFO LineBufferedStream: stdout: Type --help for more information.
19/08/29 22:26:20 WARN LivySparkUtils$: Current Spark (2,4) is not verified in Livy, please use it carefully
19/08/29 22:26:20 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
19/08/29 22:26:21 INFO StateStore$: Using BlackholeStateStore for recovery.
19/08/29 22:26:21 INFO BatchSessionManager: Recovered 0 batch sessions. Next session id: 0
19/08/29 22:26:21 INFO InteractiveSessionManager: Recovered 0 interactive sessions. Next session id: 0
19/08/29 22:26:21 INFO InteractiveSessionManager: Heartbeat watchdog thread started.
19/08/29 22:26:21 INFO WebServer: Starting server on http://hadoop000:8998
---------------------------------------------------------------------------------------------
# Replace with your own hostname or IP address
http://hadoop000:8998
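You can also confirm the REST endpoint is up from the command line (a minimal sanity check; the empty session list is what a fresh install returns):
[hadoop@hadoop001 ~]$ curl http://hadoop000:8998/sessions
{"from":0,"total":0,"sessions":[]}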
Livy Configuration Files Explained
livy.conf: server-side settings
spark-blacklist.conf: lists the Spark configuration entries that users are not allowed to override. Some settings handed to users must stay fixed, e.g. memory sizes and executor settings; letting users change these is not safe, so they must not be exposed.
log4j.properties: logging configuration
Configure livy.conf as follows:
[hadoop@hadoop001 conf]$ cp livy.conf.template livy.conf
[hadoop@hadoop001 conf]$ vi livy.conf
livy.server.host = 0.0.0.0
livy.server.port = 8998
livy.spark.master = local[2]
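For a real cluster you would typically point Livy at YARN instead of local mode. A hedged sketch (livy.spark.master and livy.spark.deploy-mode are standard keys from livy.conf.template, but the values below are assumptions for this environment):
livy.spark.master = yarn
livy.spark.deploy-mode = cluster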
2. Architecture
1. There is a client, a Livy server in the middle, and behind it Spark interactive sessions and Spark batch sessions (each of which is backed by a SparkContext underneath).
2. The client sends a request (HTTP or REST) to the Livy server, which then creates the interactive or batch sessions against the Spark cluster. A session can be created over HTTP or RPC; RPC is the more commonly used mechanism today.
3抑进、livy server就是一個(gè)rest的服務(wù),收到客戶(hù)端的請(qǐng)求之后睡陪,與spark集群進(jìn)行連接寺渗;客戶(hù)端只需要把請(qǐng)求發(fā)到server上就可以了這樣的話,就分為了3層:
最左邊:其實(shí)就是一個(gè)客戶(hù)單兰迫,只需要向livy server發(fā)送請(qǐng)求
到livy server之后就會(huì)去spark集群創(chuàng)建我們的session
session創(chuàng)建好之后信殊,客戶(hù)端就可以把作業(yè)以代碼片段的方式提交上來(lái)就OK了,其實(shí)就是以請(qǐng)求的方式發(fā)到server上就行
這樣能帶來(lái)一個(gè)優(yōu)點(diǎn)汁果,對(duì)于原來(lái)提交作業(yè)機(jī)器的壓力可以減少很多涡拘,我們只要保障Livy Server的HA就OK了
對(duì)于這個(gè)是可以保證的
Compared with spark-submit: in yarn-client mode the job must be submitted from the client machine, and the driver runs there; if that machine goes down, the driver goes with it and no job can complete. That is a single point of failure.
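For reference, the kind of yarn-client submission being compared looks roughly like this (a sketch with placeholder names; com.example.MyApp and the JAR path are hypothetical):
spark-submit \
  --master yarn \
  --deploy-mode client \
  --class com.example.MyApp \
  /path/to/app.jar
# the driver runs on the machine that executes this command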
Architecture overview:
1. The client sends a request to the Livy server.
2. The Livy server sends a request to the Spark cluster to create a session.
3. Once the session is created, a response comes back to the Livy server, so the server knows the state of the session creation.
4. Further client operations, e.g. the client sends another request to inspect session information (this can be done with the GET API), as shown below.
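A minimal sketch of step 4, assuming a session with id 0 already exists on hadoop000:
# list all sessions
curl http://hadoop000:8998/sessions
# check the state of one session
curl http://hadoop000:8998/sessions/0/state
{"id":0,"state":"idle"}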
Multi-user support:
The above is a single user's workflow. When a second or third user comes along, they can work like this:
when submitting, they can share a session.
A session is essentially a SparkContext.
For example, the blue clients in the diagram share one session and the black clients share another; with some identifier, each group can recognize its own session. One way to separate users is shown below.
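One concrete handle is the proxyUser field when creating a session (a hedged sketch; userA is a placeholder, and impersonation must be enabled on the server via livy.impersonation.enabled):
curl -X POST --data '{"kind":"spark","proxyUser":"userA"}' -H "Content-Type:application/json" hadoop000:8998/sessions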
3. Submitting Spark Jobs: Examples
1. Create an interactive session
An interactive session must be created before it can be used. Current Livy supports three interpreter kinds within one session, spark, pyspark, and sparkr, to cover different language needs.
[hadoop@hadoop000 livy-0.5.0-incubating-bin]$ curl -X POST --data '{"kind":"spark"}' -H "Content-Type:application/json" hadoop000:8998/sessions
------------------ Response returned when the session is created ------------------
{
  "id": 1,
  "appId": null,
  "owner": null,
  "proxyUser": null,
  "state": "starting",
  "kind": "spark",
  "appInfo": {
    "driverLogUrl": null,
    "sparkUiUrl": null
  },
  "log": ["stdout: ", "\nstderr: "]
}
The field to watch is the session id: it identifies this session, and every operation on the session must reference it. The session starts in the "starting" state and only accepts statements once it reaches "idle"; see the check below.
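A quick way to wait until the session is usable (session id 1 comes from the response above):
curl http://hadoop000:8998/sessions/1/state
{"id":1,"state":"idle"}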
2跛梗、提交一個(gè)Spark的代碼片段
sc.parallelize(1 to 10).count()
Submit it through Livy's REST API:
curl hadoop000:8998/sessions/1/statements -X POST -H 'Content-Type: application/json' -d '{"code":"sc.parallelize(1 to 10).count()", "kind": "spark"}'
--------- Response ---------
{
  "id": 1,
  "code": "sc.parallelize(1 to 10).count()",
  "state": "waiting",
  "output": null,
  "progress": 0.0
}
Note that this snippet was submitted to the session with id 1, so click session 1 in the web UI. The result can also be fetched over REST, as shown below.
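A sketch of polling the statement result (statement id 1 comes from the response above; the exact output payload shown is an assumption of the typical shape):
curl http://hadoop000:8998/sessions/1/statements/1
{"id":1,"code":"sc.parallelize(1 to 10).count()","state":"available","output":{"status":"ok","execution_count":1,"data":{"text/plain":"res0: Long = 10"}},"progress":1.0}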
3. Submit a packaged JAR as a batch session
package com.soul.bigdata.spark.core01

import org.apache.spark.{SparkConf, SparkContext}

object SparkWCApp {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("SparkWCApp").setMaster("local[2]")
    val sc = new SparkContext(conf)
    val lineRDD = sc.parallelize(Seq("hadoop", "hadoop", "Spark", "Flink"))
    val rsRDD = lineRDD.flatMap(x => x.split("\t")).map(x => (x, 1)).reduceByKey(_ + _)
    rsRDD.collect().foreach(println)
    sc.stop()
  }
}
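To produce the JAR, a typical Maven packaging step works (a sketch, assuming the code lives in a standard Maven project whose artifact is spark-train, version 1.0):
[hadoop@hadoop000 spark-train]$ mvn clean package -DskipTests
# produces target/spark-train-1.0.jar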
Upload the packaged JAR to:
[hadoop@hadoop000 lib]$ pwd
/home/hadoop/soul/lib
[hadoop@hadoop000 lib]$ ll
total 228
-rw-r--r-- 1 hadoop hadoop 231035 Aug 29 23:09 spark-train-1.0.jar
Submit it with Livy:
curl -H "Content-Type: application/json" -X POST -d '{ "file":"/home/hadoop/soul/libspark-train-1.0.jar", "className":"com.soul.bigdata.spark.core01.SparkWCApp" }' hadoop000:8998/batches
The Livy web UI shows an error:
19/08/29 23:19:01 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Exception in thread "main" java.io.FileNotFoundException: File hdfs://hadoop000:8020/home/hadoop/soul/lib/spark-train-1.0.jar does not exist.
at org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:705)
at org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:106)
at org.apache.hadoop.hdfs.DistributedFileSystem$15.doCall(DistributedFileSystem.java:763)
at org.apache.hadoop.hdfs.DistributedFileSystem$15.doCall(DistributedFileSystem.java:759)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:759)
at org.apache.spark.util.Utils$.fetchHcfsFile(Utils.scala:755)
at org.apache.spark.util.Utils$.doFetchFile(Utils.scala:723)
at org.apache.spark.deploy.DependencyUtils$.downloadFile(DependencyUtils.scala:137)
at org.apache.spark.deploy.SparkSubmit$$anonfun$prepareSubmitEnvironment$7.apply(SparkSubmit.scala:367)
at org.apache.spark.deploy.SparkSubmit$$anonfun$prepareSubmitEnvironment$7.apply(SparkSubmit.scala:367)
at scala.Option.map(Option.scala:146)
at org.apache.spark.deploy.SparkSubmit.prepareSubmitEnvironment(SparkSubmit.scala:366)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:143)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:924)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:933)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
So the path given in "file" must be an HDFS path, not a local one. Upload the packaged JAR to HDFS:
[hadoop@hadoop000 lib]$ hadoop fs -ls /lib
Found 1 items
-rw-r--r-- 1 hadoop supergroup 231035 2019-08-29 23:20 /lib/spark-train-1.0.jar
Submit again:
curl -H "Content-Type: application/json" -X POST -d '{ "file":"/lib/spark-train-1.0.jar", "className":"com.soul.bigdata.spark.core01.SparkWCApp" }' hadoop000:8998/batches
Check the web UI: the job succeeded and returned the result we wanted. The same information is available over REST, as shown below.
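A sketch of inspecting the batch from the command line instead of the web UI (the batch id 0 is an assumption; the real id comes back in the POST response):
# state of the batch
curl http://hadoop000:8998/batches/0/state
# driver log, where the (word,count) pairs printed by SparkWCApp show up
curl http://hadoop000:8998/batches/0/log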