I wanted to play with Spark locally. The rest of my Hadoop environment runs CDH 5.7.0, so my plan was to keep using the Cloudera-provided build, but the officially compiled version threw all kinds of errors at runtime. So I compiled it myself, and in the spirit of sharing, here is the write-up. Let's go >>>
Down the rabbit hole (straight to the build steps):
1. Environment:
CentOS6.8-64位
JDK7
Spark1.6.0
Scala2.10.5
Hadoop2.6.0-CDH5.7.0
Maven 3.3.9 (anything 3.3.3+ will do)
2. Setting up the build environment
(1) Install the JDK — in the spirit of craftsmanship, every command is spelled out:
- tar -zxvf jdk1.7.0_65.tar.gz -C /opt/cluster/
- vim /etc/profile (add the environment variables)
# Java Home
export JAVA_HOME=/opt/cluster/jdk1.7.0_65
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin
- source /etc/profile
- Verify: java -version, or echo $JAVA_HOME
(2) Install Maven
- tar -zxvf apache-maven-3.3.9.tar.gz -C /opt/cluster/
- vim /etc/profile (add the environment variables)
# Maven Home
export MAVEN_HOME=/opt/cluster/apache-maven-3.3.9
export PATH=$PATH:$MAVEN_HOME/bin
- source /etc/profile
- Verify: mvn -version, or echo $MAVEN_HOME
(3) Install Scala
- tar -zxvf scala-2.10.5.tgz -C /opt/cluster/
- vim /etc/profile (add the environment variables)
# Scala Home
export SCALA_HOME=/opt/cluster/scala-2.10.5
export PATH=$PATH:$SCALA_HOME/bin
- source /etc/profile
- Verify: scala
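The three /etc/profile edits above can also be scripted so that re-running them never duplicates entries. A minimal sketch, writing to a scratch file instead of /etc/profile for safety (the scratch filename and the append_once helper are my own, not from the original):

```shell
#!/bin/sh
# Sketch: apply the three profile additions above idempotently.
# PROFILE points at a scratch file here; swap in /etc/profile on a real box.
PROFILE=./profile.test
rm -f "$PROFILE"

append_once() {
  marker="$1"; block="$2"
  # Only append if the marker is not already present in the file.
  if ! grep -q "$marker" "$PROFILE" 2>/dev/null; then
    printf '%s\n' "$block" >> "$PROFILE"
  fi
}

append_once 'JAVA_HOME='  'export JAVA_HOME=/opt/cluster/jdk1.7.0_65
export PATH=$PATH:$JAVA_HOME/bin'
append_once 'MAVEN_HOME=' 'export MAVEN_HOME=/opt/cluster/apache-maven-3.3.9
export PATH=$PATH:$MAVEN_HOME/bin'
append_once 'SCALA_HOME=' 'export SCALA_HOME=/opt/cluster/scala-2.10.5
export PATH=$PATH:$SCALA_HOME/bin'

# A second run is a no-op, so the count stays at one:
append_once 'JAVA_HOME='  'export JAVA_HOME=duplicate-would-go-here'
grep -c 'JAVA_HOME=' "$PROFILE"   # prints 1
```

After running the script, `source` the profile file as in the steps above.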
(4) Unpack and configure the Spark source
- tar -zxvf spark-1.6.0.tgz -C /opt/cluster/
- To make the build faster, edit make-distribution.sh:
Hard-code the versions (so the script does not have to resolve them via Maven):
VERSION=1.6.0
SCALA_VERSION=2.10
SPARK_HADOOP_VERSION=2.6.0-cdh5.7.0
SPARK_HIVE=1
Then comment out the version-resolution code around line 130:
#VERSION=$("$MVN" help:evaluate -Dexpression=project.version $@ 2>/dev/null | grep -v "INFO" | tail -n 1)
#SCALA_VERSION=$("$MVN" help:evaluate -Dexpression=scala.binary.version $@ 2>/dev/null\
# | grep -v "INFO"\
# | tail -n 1)
#SPARK_HADOOP_VERSION=$("$MVN" help:evaluate -Dexpression=hadoop.version $@ 2>/dev/null\
# | grep -v "INFO"\
# | tail -n 1)
#SPARK_HIVE=$("$MVN" help:evaluate -Dexpression=project.activeProfiles -pl sql/hive $@ 2>/dev/null\
# | grep -v "INFO"\
# | fgrep --count "<id>hive</id>";\
# # Reset exit status to 0, otherwise the script stops here if the last grep finds nothing\
# # because we use "set -o pipefail"
# echo -n)
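The same edit can be applied with sed instead of by hand. A rough sketch against a stand-in file (the real make-distribution.sh wraps these commands across continuation lines, so double-check the result if you script it):

```shell
#!/bin/sh
# Stand-in for the version-resolution lines of make-distribution.sh
# (abridged to single lines; the real file uses continuation lines).
cat > ./make-distribution.test <<'EOF'
VERSION=$("$MVN" help:evaluate -Dexpression=project.version $@ 2>/dev/null | grep -v "INFO" | tail -n 1)
SPARK_HADOOP_VERSION=$("$MVN" help:evaluate -Dexpression=hadoop.version $@ 2>/dev/null | tail -n 1)
EOF

# Comment out every line that shells out to Maven to resolve a version...
sed -i '/help:evaluate/s/^/#/' ./make-distribution.test

# ...and prepend the hard-coded values in their place.
sed -i '1i\
VERSION=1.6.0\
SCALA_VERSION=2.10\
SPARK_HADOOP_VERSION=2.6.0-cdh5.7.0\
SPARK_HIVE=1' ./make-distribution.test

head -n 5 ./make-distribution.test
```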
3. Building
- I recommend building with JDK 7; JDK 8 is not covered here.
step1: cd into the Spark source directory
cd /opt/cluster/spark-1.6.0
step2: set MAVEN_OPTS
export MAVEN_OPTS="-Xmx2g -XX:MaxPermSize=512M -XX:ReservedCodeCacheSize=512m"
step3: kick off the build
./make-distribution.sh --tgz -Phadoop-2.6 -Dhadoop.version=2.6.0-cdh5.7.0 -Pyarn -Phive -Phive-thriftserver
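Since the build runs for a long time, it can pay to fail fast when the wrong JDK is on the PATH. A minimal sketch of such a guard (the version string is hard-coded here so the example is self-contained; on a real box feed it `java -version 2>&1 | head -n 1`):

```shell
#!/bin/sh
# Fail-fast sketch: check the JDK major version before starting the build.
version_line='java version "1.7.0_65"'
major=$(printf '%s\n' "$version_line" | sed 's/.*"1\.\([0-9]*\)\..*/\1/')
if [ "$major" = "7" ]; then
  echo "JDK $major detected: OK to build"
else
  echo "JDK $major detected: this guide assumes JDK 7" >&2
fi
```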
4. Build process
- The build runs module by module in the order below; watching the log output is a good way to follow what is happening:
[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Spark Project Parent POM ........................... SUCCESS [ 5.850 s]
[INFO] Spark Project Test Tags ............................ SUCCESS [ 4.403 s]
[INFO] Spark Project Launcher ............................. SUCCESS [ 15.255 s]
[INFO] Spark Project Networking ........................... SUCCESS [ 11.419 s]
[INFO] Spark Project Shuffle Streaming Service ............ SUCCESS [ 7.578 s]
[INFO] Spark Project Unsafe ............................... SUCCESS [ 20.146 s]
[INFO] Spark Project Core ................................. SUCCESS [05:10 min]
[INFO] Spark Project Bagel ................................ SUCCESS [ 17.821 s]
[INFO] Spark Project GraphX ............................... SUCCESS [ 45.020 s]
[INFO] Spark Project Streaming ............................ SUCCESS [01:12 min]
[INFO] Spark Project Catalyst ............................. SUCCESS [01:39 min]
[INFO] Spark Project SQL .................................. SUCCESS [02:23 min]
[INFO] Spark Project ML Library ........................... SUCCESS [02:24 min]
[INFO] Spark Project Tools ................................ SUCCESS [ 19.271 s]
[INFO] Spark Project Hive ................................. SUCCESS [01:53 min]
[INFO] Spark Project Docker Integration Tests ............. SUCCESS [ 8.271 s]
[INFO] Spark Project REPL ................................. SUCCESS [ 46.352 s]
[INFO] Spark Project YARN Shuffle Service ................. SUCCESS [ 18.256 s]
[INFO] Spark Project YARN ................................. SUCCESS [01:37 min]
[INFO] Spark Project Hive Thrift Server ................... SUCCESS [02:47 min]
[INFO] Spark Project Assembly ............................. SUCCESS [02:55 min]
[INFO] Spark Project External Twitter ..................... SUCCESS [ 56.260 s]
[INFO] Spark Project External Flume Sink .................. SUCCESS [02:39 min]
[INFO] Spark Project External Flume ....................... SUCCESS [ 27.604 s]
[INFO] Spark Project External Flume Assembly .............. SUCCESS [ 16.969 s]
[INFO] Spark Project External MQTT ........................ SUCCESS [03:33 min]
[INFO] Spark Project External MQTT Assembly ............... SUCCESS [ 20.258 s]
[INFO] Spark Project External ZeroMQ ...................... SUCCESS [01:17 min]
[INFO] Spark Project External Kafka ....................... SUCCESS [01:24 min]
[INFO] Spark Project Examples ............................. SUCCESS [07:13 min]
[INFO] Spark Project External Kafka Assembly .............. SUCCESS [ 11.575 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 44:07 min
[INFO] Final Memory: 101M/1243M
[INFO] ------------------------------------------------------------------------
5. Afterwards
After a successful build, a tarball is generated in the current directory: spark-1.6.0-bin-2.6.0-cdh5.7.0.tgz
Extract it wherever you like:
tar -zxvf spark-1.6.0-bin-2.6.0-cdh5.7.0.tgz -C /opt/cluster/
- Run a WordCount to try it out
step1: from the extracted directory, run
bin/spark-shell
step2: put a file on HDFS with a few space-separated words in it, then run this Scala in the shell:
sc.textFile("hdfs://hadoop-master:8020/user/master/mapreduce/wordcount/input/hello.txt").flatMap(_.split(" ")).map((_,1)).reduceByKey(_ + _)
step3: view the result (spark-shell binds the first expression's result to res0)
res0.collect
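As a cross-check, the same count can be reproduced outside Spark with a plain shell pipeline over a local file (the sample words are my own): split on spaces, then count occurrences of each word, just like the flatMap/map/reduceByKey chain above.

```shell
#!/bin/sh
# Local equivalent of the WordCount job, without Spark or HDFS.
printf 'hello world hello spark\n' > ./hello.txt
tr ' ' '\n' < ./hello.txt | sort | uniq -c | sort -rn
```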