1. Introduction
Spark is a MapReduce-like computation framework developed by UC Berkeley's AMPLab. The MapReduce framework is well suited to batch jobs, but it carries two limitations of its own design: first, job scheduling is driven by pull-based heartbeats; second, all shuffle intermediate results are written to disk. Both lead to high latency and a large startup overhead.
Spark, by contrast, was built for iterative and interactive computation. First, it adopts Akka, an actor-model framework, for communication. Second, it uses RDDs as a distributed in-memory abstraction: data passed between operations does not have to be dumped to disk, but lives as RDD partitions spread across the memory of the cluster nodes, which greatly speeds up data movement between stages. RDDs also maintain lineage information, so if an RDD is lost it can be rebuilt automatically from its parent RDDs, which guarantees fault tolerance.
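To make the RDD model concrete, here is a minimal sketch that could be typed into spark-shell (the input path is just the sample file used later in this post; this only uses the public RDD API and is illustrative, not Spark internals): each transformation is lazy and merely extends the lineage, and an action finally runs the job.

// Runs in spark-shell, where `sc` (a SparkContext) is already defined.
val lines  = sc.textFile("hdfs://master:9000/user/hadoop/test/Temperature.txt")
val fields = lines.map(_.split(","))        // lazy: nothing has been computed yet
val temps  = fields.map(f => f(2).toInt)    // still lazy; the lineage keeps growing
println(temps.toDebugString)                // print the lineage (the chain of parent RDDs)
println(temps.max())                        // an action triggers the job; after a failure,
                                            // lost partitions are recomputed from this lineage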
There is also a rich set of applications on top of Spark, such as Shark, Spark Streaming, and MLBase. We already use Shark in production as a complement to Hive: it shares Hive's metastore and SerDes, and it is used in almost exactly the same way as Hive. When the input data size is not too large, the same query really is much faster than in Hive.
二某饰、安裝部署
- Download, install, and configure Scala
[root@master ~]# wget https://downloads.lightbend.com/scala/2.12.2/scala-2.12.2.tgz
[root@master spark]# tar xvf scala-2.12.2.tgz -C /usr/local/program/scala/
# Add the SCALA_HOME environment variable to /etc/profile and make it take effect:
vim /etc/profile
export SCALA_HOME=/usr/local/program/scala/scala-2.12.2
export PATH=$PATH:$SCALA_HOME/bin
[root@master spark]# . /etc/profile
- Download, install, and configure Spark
# My existing Hadoop is version 2.7.1, so...
[root@master spark]# wget https://d3kbcqa49mib13.cloudfront.net/spark-2.1.1-bin-hadoop2.7.tgz
[root@master spark]# tar xvf spark-2.1.1-bin-hadoop2.7.tgz -C /usr/local/program/spark/
# Add the SPARK_HOME environment variable to /etc/profile and make it take effect:
export SPARK_HOME=/usr/local/program/spark/spark-2.1.1-bin-hadoop2.7
export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin
[root@master spark]# . /etc/profile
# Configure Spark on master by editing the spark-env.sh configuration file.
# Enter Spark's conf directory:
[root@master spark]# cd /usr/local/program/spark/spark-2.1.1-bin-hadoop2.7/conf/
[root@master conf]# cp spark-env.sh.template spark-env.sh
[root@master conf]# cat spark-env.sh
export SCALA_HOME=/usr/local/program/scala/scala-2.12.2
export HADOOP_HOME=/home/hadoop/hadoop-2.7.3
export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.79.x86_64
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
#export SPARK_JAR=/usr/local/program/spark/
export SPARK_MASTER_IP=master
# Edit conf/slaves and add the hostname of each worker node, one per line.
# This should also include master if you want master to double as a worker node.
[root@master conf]# cat slaves
slave01
slave02
slave03
slave04
- Configure passwordless SSH login from master to all the slave nodes (start-slaves.sh launches the workers over SSH)
- Copy the installation to the cluster nodes
# Repeat for each node listed in conf/slaves:
[root@master conf]# scp /etc/profile slave01:/etc/
[root@master conf]# scp -r /usr/local/program/spark/spark-2.1.1-bin-hadoop2.7 slave02:/usr/local/program/spark/
[root@master conf]# scp -r /usr/local/program/scala/scala-2.12.2/ slave02:/usr/local/program/scala/
- Start the master and the slaves
[root@master conf]# start-master.sh
[root@master conf]# start-slaves.sh
- Access Spark through the web UI
http://master:8080
三剂跟、 運行簡單的example
- Running on a single machine
# Compute Pi
[root@master spark-2.1.1-bin-hadoop2.7]# ./bin/run-example SparkPi 10
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
17/06/05 19:19:00 INFO SparkContext: Running Spark version 2.1.1
17/06/05 19:19:00 WARN SparkContext: Support for Java 7 is deprecated as of Spark 2.0.0
17/06/05 19:19:00 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/06/05 19:19:00 INFO SecurityManager: Changing view acls to: root
17/06/05 19:19:00 INFO SecurityManager: Changing modify acls to: root
17/06/05 19:19:00 INFO SecurityManager: Changing view acls groups to:
17/06/05 19:19:00 INFO SecurityManager: Changing modify acls groups to:
17/06/05 19:19:00 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); groups with view permissions: Set(); users with modify permissions: Set(root); groups with modify permissions: Set()
...
17/06/05 19:19:02 INFO DAGScheduler: Job 0 finished: reduce at SparkPi.scala:38, took 0.761265 s
Pi is roughly 3.143967143967144
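The estimate above comes from a Monte Carlo method: sample random points in the unit square and count how many fall inside the unit circle. A minimal spark-shell sketch of the same idea (an illustration, not the exact SparkPi source) looks like this:

// Monte Carlo estimate of Pi; runs in spark-shell where `sc` is already defined.
val n = 100000 * 10                         // 10 "slices", mirroring run-example SparkPi 10
val inside = sc.parallelize(1 to n).map { _ =>
  val x = math.random * 2 - 1               // random point in the square [-1, 1] x [-1, 1]
  val y = math.random * 2 - 1
  if (x * x + y * y <= 1) 1 else 0          // 1 if the point falls inside the unit circle
}.reduce(_ + _)
println(s"Pi is roughly ${4.0 * inside / n}")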
- Basic use of spark-shell
[root@master spark-2.1.1-bin-hadoop2.7]# spark-shell

scala> val s=sc.textFile("hdfs://master:9000/user/hadoop/test/Temperature.txt")
s: org.apache.spark.rdd.RDD[String] = hdfs://master:9000/user/hadoop/test/Temperature.txt MapPartitionsRDD[3] at textFile at <console>:24

scala> s.count
res1: Long = 11

[hadoop@slave02 ~]$ hdfs dfs -cat test/Temperature.txt
2015,1,24
2015,3,56
2015,1,3
2015,2,-43
2015,4,5
2015,3,46
2014,2,64
2015,1,4
2015,1,21
2015,2,35
2015,2,0
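Continuing in the same spark-shell session, here is a small sketch of a possible next step with this data, assuming each line is "year,month,temperature": key by year and keep the maximum temperature with reduceByKey.

// Continuing from the RDD `s` defined above.
val maxPerYear = s.map(_.split(","))
                  .map(f => (f(0), f(2).toInt))   // key by year, value = temperature
                  .reduceByKey(math.max(_, _))    // keep the highest temperature per year
maxPerYear.collect().foreach(println)             // for the data above: (2014,64) and (2015,56)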
- Submitting a job to the cluster
[hadoop@master ~]$ spark-submit --class org.apache.spark.examples.SparkPi --master yarn --deploy-mode cluster --driver-memory 4g --executor-memory 2g --executor-cores 1 /usr/local/program/spark/spark-2.1.1-bin-hadoop2.7/examples/jars/spark-examples_2.11-2.1.1.jar 100
17/06/06 16:03:21 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/06/06 16:03:22 INFO client.RMProxy: Connecting to ResourceManager at master/10.10.18.229:8032
17/06/06 16:03:22 INFO yarn.Client: Requesting a new application from cluster with 4 NodeManagers
17/06/06 16:03:22 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
...
17/06/06 16:04:33 INFO yarn.Client: Application report for application_1494595290830_0061 (state: RUNNING)
17/06/06 16:04:34 INFO yarn.Client: Application report for application_1494595290830_0061 (state: RUNNING)
17/06/06 16:04:35 INFO yarn.Client: Application report for application_1494595290830_0061 (state: RUNNING)
17/06/06 16:04:36 INFO yarn.Client: Application report for application_1494595290830_0061 (state: FINISHED)
17/06/06 16:04:36 INFO yarn.Client:
     client token: N/A
     diagnostics: N/A
     ApplicationMaster host: 10.10.19.232
     ApplicationMaster RPC port: 0
     queue: default
     start time: 1496736259866
     final status: SUCCEEDED
     tracking URL: http://master:8088/proxy/application_1494595290830_0061/
     user: hadoop
17/06/06 16:04:36 INFO util.ShutdownHookManager: Shutdown hook called
17/06/06 16:04:36 INFO util.ShutdownHookManager: Deleting directory /tmp/spark-6039cb14-8084-404e-b970-633dff4dd086
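spark-submit works the same way for your own code: package a class with a main method into a jar and point --class at it. Below is a minimal sketch of such an application (the package, class name, and word-count logic are illustrative placeholders, not the contents of the example jar above); it could be built with sbt or Maven against Spark 2.1.1 and submitted exactly like SparkPi.

package example

import org.apache.spark.{SparkConf, SparkContext}

object WordCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("WordCount")   // the master URL is supplied by spark-submit
    val sc   = new SparkContext(conf)
    val counts = sc.textFile(args(0))                    // input path passed on the command line
      .flatMap(_.split("\\s+"))                          // split each line into words
      .map(word => (word, 1))
      .reduceByKey(_ + _)                                // count occurrences of each word
    counts.take(10).foreach(println)                     // print a small sample to the driver log
    sc.stop()
  }
}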