Component: Spark Core

Spark can create distributed datasets from any file stored in the Hadoop distributed filesystem (HDFS) or other storage systems supported by the Hadoop APIs (including your local filesystem, Amazon S3, Cassandra, Hive, HBase, etc.).

Spark supports text files, SequenceFiles, Avro, Parquet, and any other Hadoop InputFormat.
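
As a minimal sketch (assuming an existing SparkContext named sc; the paths are hypothetical), the same textFile() call works across storage backends, and only the URI scheme changes:

    // Hypothetical paths; `sc` is an existing SparkContext.
    val localLines = sc.textFile("file:///tmp/readme.txt")              // local filesystem
    val hdfsLines  = sc.textFile("hdfs://namenode:8020/data/logs.txt")  // HDFS
    val s3Lines    = sc.textFile("s3a://my-bucket/data/logs.txt")       // Amazon S3 via the S3A connector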

  • Spark Core
    Resilient distributed datasets (RDDs): a fault-tolerant abstraction for in-memory cluster computing. RDDs represent a collection of items distributed across many compute nodes that can be manipulated in parallel.
    In Spark all work is expressed as either creating new RDDs, transforming existing RDDs, or calling operations on RDDs to compute a result. Under the hood, Spark automatically distributes the data contained in RDDs.
    Each RDD is split into multiple partitions, which may be computed on different nodes of the cluster.
    We create RDDs in two ways: by loading an external dataset, or by distributing a collection of objects (e.g., a list or set) in the driver program, e.g., loading a text file as an RDD of strings using SparkContext.textFile() (see the sketch below).
    We often do some initial ETL (extract, transform, and load) to get our data into a key/value format. Key/value RDDs expose new operations (e.g., counting up reviews for each product, grouping together data with the same key, and grouping together two different RDDs).
    Advanced feature: Spark lets users control the layout of pair RDDs across nodes through partitioning (useful for algorithms such as PageRank). This can yield significant speedups.
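    A minimal, self-contained sketch of both creation paths (the app name and file path are hypothetical):

      import org.apache.spark.{SparkConf, SparkContext}

      object RDDBasics {
        def main(args: Array[String]): Unit = {
          val sc = new SparkContext(new SparkConf().setAppName("rdd-basics").setMaster("local[*]"))

          // Way 1: load an external dataset as an RDD of strings.
          val lines = sc.textFile("file:///tmp/readme.txt")

          // Way 2: distribute a collection of objects from the driver program.
          val numbers = sc.parallelize(Seq(1, 2, 3, 4, 5))

          // Transformations build new RDDs lazily; actions compute a result.
          val squares = numbers.map(n => n * n) // transformation
          println(squares.reduce(_ + _))        // action: prints 55

          sc.stop()
        }
      }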

    • Pair RDDs
      Spark provides special operations on RDDs containing key/value pairs. Pair RDDs are a useful building block in many programs, as they expose operations that allow you to act on each key in parallel or regroup data across the network. For example, pair RDDs have a reduceByKey() method that can aggregate data separately for each key, and a join() method that can merge two RDDs together by grouping elements with the same key.
      There are a number of ways to get pair RDDs in Spark.
      Pair RDDs are also still RDDs (of Tuple2 objects in Java/Scala or of Python tuples).
      To access only the value part of a pair RDD, Spark provides the mapValues(func) function, which is equivalent to map { case (x, y) => (x, func(y)) }.
      Spark's "distributed reduce" transformations operate on RDDs of key-value pairs.

      • Aggregations
        Basic RDDs offer the fold(), combine(), and reduce() actions; pair RDDs have per-key counterparts. These per-key operations return RDDs and are therefore transformations rather than actions.
        reduceByKey() is quite similar to reduce(): both take a function and use it to combine values. reduceByKey() runs several parallel reduce operations, one for each key in the dataset.
        foldByKey() is quite similar to fold(): both use a zero value of the same type as the data in our RDD and a combination function.
        reduceByKey() and foldByKey() automatically perform combining locally on each machine before computing global totals for each key.
        combineByKey() is the most general of the per-key aggregation functions; most of the other per-key combiners are implemented using it. It takes three functions: createCombiner() (invoked when a key is seen for the first time in a partition), mergeValue() (invoked when the key already has an accumulator in that partition), and mergeCombiners() (invoked to merge accumulators from different partitions); see the sketch below.
        In any case, using one of the specialized aggregation functions in Spark can be much faster than the naive approach of grouping our data and then reducing it.
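        A sketch of per-key averaging with combineByKey(), assuming an existing SparkContext sc (the data is made up):

          val scores = sc.parallelize(Seq(("a", 3.0), ("a", 5.0), ("b", 4.0)))

          val sumCount = scores.combineByKey(
            (v: Double) => (v, 1),                                              // createCombiner: first value seen for a key in a partition
            (acc: (Double, Int), v: Double) => (acc._1 + v, acc._2 + 1),        // mergeValue: fold another value into the per-partition accumulator
            (a: (Double, Int), b: (Double, Int)) => (a._1 + b._1, a._2 + b._2)  // mergeCombiners: merge accumulators across partitions
          )

          val averages = sumCount.mapValues { case (sum, count) => sum / count }
          // ("a", 4.0), ("b", 4.0)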

      • Tuning the level of parallelism
        Every RDD has a fixed number of partitions that determine the degree of parallelism to use when executing operations on the RDD.
        When performing aggregations or grouping operations, we can ask Spark to use a specific number of partitions. Spark will always try to infer a sensible default value based on the size of your cluster, but in some cases you will want to tune the level of parallelism for better performance.
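        A sketch of specifying partition counts, assuming an existing SparkContext sc:

          val pairs = sc.parallelize(Seq(("a", 1), ("b", 1), ("a", 1)))

          // Aggregation and grouping operations accept an explicit partition count.
          val counts = pairs.reduceByKey(_ + _, 10)  // run the shuffle with 10 partitions
          println(counts.getNumPartitions)           // 10

          // Outside of aggregations, repartition()/coalesce() change an RDD's partitioning.
          val fewer = counts.coalesce(2)  // shrink the partition count, avoiding a full shuffle
          println(fewer.getNumPartitions) // 2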
