Spark
Overview
To get oriented, I recommend the Zhihu question 如何看待 Spark 技术 ("How should we view Spark technology?"), which compares Spark with Hadoop.
It has many excellent answers; the quote below, from the answerer 用心阁, explains what Spark is for.
Apache Spark is an emerging big-data processing engine. Its key feature is a cluster-wide distributed in-memory abstraction that supports applications needing working sets.
That abstraction is the RDD (Resilient Distributed Dataset): an immutable, partitioned collection of records, which is also Spark's programming model. Spark provides two kinds of operations on RDDs: transformations and actions. Transformations define a new RDD from an existing one and include map, flatMap, filter, union, sample, join, groupByKey, cogroup, reduceByKey, cross, sortByKey, mapValues, and so on. Actions return a result to the driver and include collect, reduce, count, save, and lookup.
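To make the two kinds of operations concrete, here is a minimal sketch, assuming a SparkContext named sc is available (as it is inside spark-shell):
```
val nums    = sc.parallelize(Seq(1, 2, 3, 4))  // build a source RDD
val doubled = nums.map(_ * 2)                  // transformation: lazy, only defines a new RDD
val evens   = doubled.filter(_ % 4 == 0)       // another lazy transformation
val total   = evens.reduce(_ + _)              // action: runs the job, returns 12 (4 + 8)
```
Nothing is computed until the action (reduce) runs; the transformations merely record the lineage of the result.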
In short, for big-data processing Spark is fast, easy to pick up, highly scalable, and performs well across platforms.
Installation
I chose the fairly recent Spark 2.0.1 release, spark-2.0.1-bin-hadoop2.7.tgz, downloaded it from the official site, and uploaded it to the server.
- 環(huán)境要求
- Hadoop環(huán)境桥胞,可參考我上一篇博文CentOS7下搭建Hadoop2.7.3集群
- scala安裝配置恳守。直接下載解壓即可,順手配置一下環(huán)境變量埠戳。
$ tar -zxvf scala-2.11.2.tgz
$ vi /etc/profile
#Scala Env
export SCALA_HOME=/data/soft/scala-2.11.2
export PATH=${SCALA_HOME}/bin:${PATH}
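To apply the new variables to the current shell and verify the Scala install (paths as above):
$ source /etc/profile
$ scala -version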
- Extract to install: I unpacked both scala and spark under the /data directory.
$ tar -zxvf spark-2.0.1-bin-hadoop2.7.tgz
- Configure. Spark reads its runtime environment from conf/spark-env.sh (the shipped .template file is ignored), so copy the template first and then edit the copy:
$ cp /data/spark-2.0.1-bin-hadoop2.7/conf/spark-env.sh.template /data/spark-2.0.1-bin-hadoop2.7/conf/spark-env.sh
$ vi /data/spark-2.0.1-bin-hadoop2.7/conf/spark-env.sh
export HADOOP_CONF_DIR=/data/hadoop-2.7.3/etc/hadoop
export JAVA_HOME=/usr/java/jdk1.7.0_79
export SCALA_HOME=/data/soft/scala-2.11.2
export SPARK_HOME=/data/spark-2.0.1-bin-hadoop2.7
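After saving, a quick sanity check that Spark starts with this layout is to print its version (spark-submit's standard --version flag):
$ /data/spark-2.0.1-bin-hadoop2.7/bin/spark-submit --version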
Testing
1. Start spark-shell from the bin directory:
bin/spark-shell
[Screenshot: spark-shell startup]
If you see the screen above, the installation and startup succeeded.
2. Implement WordCount:
```
// Read README.md from the current directory into an RDD of lines
scala> val textFile = sc.textFile("README.md")
textFile: org.apache.spark.rdd.RDD[String] = README.md MapPartitionsRDD[5] at textFile at <console>:24
// Split each line into words, map each word to (word, 1), then sum the counts per key
scala> val wordCounts = textFile.flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey((a, b) => a + b)
wordCounts: org.apache.spark.rdd.RDD[(String, Int)] = ShuffledRDD[8] at reduceByKey at <console>:26
// Run the job and collect the word counts back to the driver
scala> wordCounts.collect()
res0: Array[(String, Int)] = Array((package,1), (For,3), (Programs,1), (processing.,1), (Because,1), (The,1), (cluster.,1), (its,1), ([run,1), (than,1), (APIs,1), (have,1), (Try,1), (computation,1), (through,1), (several,1), (This,2), (graph,1), (Hive,2), (storage,1), (["Specifying,1), (To,2), (page](http://spark.apache.org/documentation.html),1), (Once,1), ("yarn",1), (prefer,1), (SparkPi,2), (engine,1), (version,1), (file,1), (documentation,,1), (processing,,1), (the,22), (are,1), (systems.,1), (params,1), (not,1), (different,1), (refer,2), (Interactive,2), (R,,1), (given.,1), (if,4), (build,4), (when,1), (be,2), (Tests,1), (Apache,1), (thread,1), (programs,,1), (including,3), (./bin/run-example,2), (Spark.,1), (package.,1), (1000).count(),1), (Versions,1), (HDFS,1), (Data.,1), (>>>,1...
```
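As a small follow-up beyond the original example, the counts can be ordered to show the most frequent words; sortBy and take are standard RDD operations:
```
scala> wordCounts.sortBy(_._2, ascending = false).take(10)
```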
3. Standalone applications: use the Spark API from the language of your choice.
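As an illustration (not from the original walkthrough), a minimal standalone Scala application for Spark 2.x might look like the sketch below; the object name, input path, and build setup are my own assumptions, with spark-core/spark-sql 2.0.1 on the classpath:
```
// SimpleApp.scala -- hypothetical minimal standalone Spark 2.x application
import org.apache.spark.sql.SparkSession

object SimpleApp {
  def main(args: Array[String]): Unit = {
    // SparkSession is the 2.x entry point; master/deploy settings come from spark-submit
    val spark = SparkSession.builder().appName("SimpleApp").getOrCreate()
    val lines = spark.sparkContext.textFile("README.md")  // input path is illustrative
    val counts = lines.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _)
    counts.take(10).foreach(println)  // print a small sample of the counts
    spark.stop()
  }
}
```
Package it (for example with sbt) and run it with bin/spark-submit --class SimpleApp path/to/your.jar.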