Spark provides complete interfaces for Scala, Python, Java, and R.
Start the Scala shell:
./bin/spark-shell
Start the Python shell:
./bin/pyspark
After launching pyspark, seeing the Spark version number and the >>> prompt means the startup succeeded.
[hadoop@localhost Desktop]$ pyspark
Python 3.5.2 |Anaconda 4.1.1 (64-bit)| (default, Jul 2 2016, 17:53:06)
[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux
Type "help", "copyright", "credits" or "license" for more information.
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/spark-1.6.2-bin-hadoop2.6/lib/spark-assembly-1.6.2-hadoop2.6.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop-2.6.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
16/10/18 06:16:18 INFO spark.SparkContext: Running Spark version 1.6.2
16/10/18 06:16:19 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/10/18 06:16:19 WARN util.Utils: Your hostname, localhost.localdomain resolves to a loopback address: 127.0.0.1; using 192.168.163.129 instead (on interface eth0)
16/10/18 06:16:19 WARN util.Utils: Set SPARK_LOCAL_IP if you need to bind to another address
16/10/18 06:16:19 INFO spark.SecurityManager: Changing view acls to: hadoop
16/10/18 06:16:19 INFO spark.SecurityManager: Changing modify acls to: hadoop
16/10/18 06:16:19 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop); users with modify permissions: Set(hadoop)
16/10/18 06:16:20 INFO util.Utils: Successfully started service 'sparkDriver' on port 55502.
16/10/18 06:16:21 INFO slf4j.Slf4jLogger: Slf4jLogger started
16/10/18 06:16:21 INFO Remoting: Starting remoting
16/10/18 06:16:21 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@192.168.163.129:33962]
16/10/18 06:16:21 INFO util.Utils: Successfully started service 'sparkDriverActorSystem' on port 33962.
16/10/18 06:16:21 INFO spark.SparkEnv: Registering MapOutputTracker
16/10/18 06:16:21 INFO spark.SparkEnv: Registering BlockManagerMaster
16/10/18 06:16:21 INFO storage.DiskBlockManager: Created local directory at /tmp/blockmgr-db7f7a2d-17be-4b7d-92ea-df8621a1d4be
16/10/18 06:16:21 INFO storage.MemoryStore: MemoryStore started with capacity 517.4 MB
16/10/18 06:16:21 INFO spark.SparkEnv: Registering OutputCommitCoordinator
16/10/18 06:16:22 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/10/18 06:16:22 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:4040
16/10/18 06:16:22 INFO util.Utils: Successfully started service 'SparkUI' on port 4040.
16/10/18 06:16:22 INFO ui.SparkUI: Started SparkUI at http://192.168.163.129:4040
16/10/18 06:16:22 INFO executor.Executor: Starting executor ID driver on host localhost
16/10/18 06:16:22 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 43486.
16/10/18 06:16:22 INFO netty.NettyBlockTransferService: Server created on 43486
16/10/18 06:16:22 INFO storage.BlockManagerMaster: Trying to register BlockManager
16/10/18 06:16:22 INFO storage.BlockManagerMasterEndpoint: Registering block manager localhost:43486 with 517.4 MB RAM, BlockManagerId(driver, localhost, 43486)
16/10/18 06:16:22 INFO storage.BlockManagerMaster: Registered BlockManager
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/ '_/
   /__ / .__/\_,_/_/ /_/\_\   version 1.6.2
      /_/
Using Python version 3.5.2 (default, Jul 2 2016 17:53:06)
SparkContext available as sc, HiveContext available as sqlContext.
>>>
>>> textFile = sc.textFile("file:///opt/spark-1.6.2-bin-hadoop2.6/README.md")
>>> textFile.count()
Note that the Spark shell reads data from HDFS by default. Use the "file://" prefix to read a local file; otherwise you will get an error like the one below, telling you the file does not exist on HDFS.
py4j.protocol.Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.collectAndServe.
: org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://localhost:9000/opt/spark-1.6.2-bin-hadoop2.6/README.md
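For contrast, here is a minimal sketch of the two path forms in the pyspark shell. The HDFS path below (/user/hadoop/README.md) is only a hypothetical example and assumes the file has already been uploaded there:
>>> # local file: the file:// scheme bypasses the default HDFS filesystem
>>> localFile = sc.textFile("file:///opt/spark-1.6.2-bin-hadoop2.6/README.md")
>>> localFile.count()
>>> # HDFS file: a bare path or hdfs:// URI is resolved against the default filesystem
>>> # (hypothetical path; upload it first with: hdfs dfs -put README.md /user/hadoop/)
>>> hdfsFile = sc.textFile("hdfs://localhost:9000/user/hadoop/README.md")
>>> hdfsFile.count()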
When running a Spark example program, the output is very verbose; you can use "2>/dev/null" to filter out the stderr messages.
[hadoop@localhost spark-1.6.2-bin-hadoop2.6]$ run-example SparkPi 2>/dev/null
## result
Pi is roughly 3.13928
Alternatively, redirect stderr to stdout and then use the pipe "|" to grep out the result:
[hadoop@localhost spark-1.6.2-bin-hadoop2.6]$ run-example SparkPi 2>&1 | grep "Pi is "
## result
Pi is roughly 3.1439
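SparkPi simply estimates Pi by Monte Carlo sampling of points in the unit square. As a rough sketch of the same idea in the pyspark shell (the sample size 100000 and the helper name inside are arbitrary choices for illustration):
>>> import random
>>> n = 100000
>>> def inside(_):
...     # pick a random point in the unit square and test whether it falls in the quarter circle
...     x, y = random.random(), random.random()
...     return x * x + y * y < 1
...
>>> count = sc.parallelize(range(n)).filter(inside).count()
>>> print("Pi is roughly %f" % (4.0 * count / n))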
The RDD (Resilient Distributed Dataset) is Spark's core abstraction. RDDs support two kinds of operations: actions and transformations. An action returns a computed value to the driver; a transformation returns a new RDD derived from the original.
Question: does a transformation return a pointer or a new RDD? The two descriptions amount to the same thing: a transformation returns a new RDD object, but that object only records the lineage and is evaluated lazily; nothing is computed until an action runs.
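A small pyspark-shell sketch of the distinction, reusing the README file from above (filtering on the word "Spark" is just an arbitrary example):
>>> lines = sc.textFile("file:///opt/spark-1.6.2-bin-hadoop2.6/README.md")
>>> # transformation: returns a new RDD right away; the file has not been read yet
>>> sparkLines = lines.filter(lambda line: "Spark" in line)
>>> # action: triggers the actual computation and returns a value to the driver
>>> sparkLines.count()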
For a summary of RDD operations, see: http://blog.csdn.net/eric_sunah/article/details/51037837