[Learning Big Data from the Official Docs] MapReduce

The official MapReduce tutorial: http://hadoop.apache.org/docs/stable/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html

Overview

Hadoop MapReduce is a software framework for easily writing applications which process vast amounts of data (multi-terabyte data-sets) in-parallel on large clusters (thousands of nodes) of commodity hardware in a reliable, fault-tolerant manner.

MapReduce processes vast amounts of data in parallel, in a reliable and fault-tolerant manner, on large clusters of commodity hardware.

A MapReduce job usually splits the input data-set into independent chunks which are processed by the map tasks in a completely parallel manner. The framework sorts the outputs of the maps, which are then input to the reduce tasks. Typically both the input and the output of the job are stored in a file-system. The framework takes care of scheduling tasks, monitoring them and re-executes the failed tasks.

A MapReduce job usually splits the input data-set into independent chunks, which the map tasks process in a completely parallel manner. The framework sorts the map outputs and feeds them to the reduce tasks. Typically both the input and the output of a job are stored in a file system. The framework takes care of scheduling tasks, monitoring them, and re-executing failed tasks.
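The split → map → sort/shuffle → reduce flow described above can be sketched in plain Java with no Hadoop dependency. This is only an illustrative word-count simulation; the input lines and the counting logic are assumptions for the sketch, not part of the tutorial:

```java
import java.util.*;

public class MapReduceFlow {
    public static void main(String[] args) {
        // Input "split": each element stands in for one input record/chunk.
        List<String> lines = Arrays.asList("hello world", "hello mapreduce");

        // Map phase: emit a <word, 1> pair for every word, fully independently per line.
        List<Map.Entry<String, Integer>> mapOutput = new ArrayList<>();
        for (String line : lines)
            for (String word : line.split("\\s+"))
                mapOutput.add(new AbstractMap.SimpleEntry<>(word, 1));

        // Sort/shuffle phase: group values by key. TreeMap keeps keys sorted,
        // mirroring the framework sorting the map outputs before reduce.
        TreeMap<String, List<Integer>> grouped = new TreeMap<>();
        for (Map.Entry<String, Integer> e : mapOutput)
            grouped.computeIfAbsent(e.getKey(), k -> new ArrayList<>()).add(e.getValue());

        // Reduce phase: sum the grouped values for each key.
        Map<String, Integer> result = new LinkedHashMap<>();
        for (Map.Entry<String, List<Integer>> e : grouped.entrySet())
            result.put(e.getKey(), e.getValue().stream().mapToInt(Integer::intValue).sum());

        System.out.println(result); // {hello=2, mapreduce=1, world=1}
    }
}
```

In real Hadoop the three phases run on different tasks across the cluster; here they are sequential only to make the data flow visible.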

Typically the compute nodes and the storage nodes are the same, that is, the MapReduce framework and the Hadoop Distributed File System (see HDFS Architecture Guide) are running on the same set of nodes. This configuration allows the framework to effectively schedule tasks on the nodes where data is already present, resulting in very high aggregate bandwidth across the cluster.

Typically the compute nodes and the storage nodes are the same, i.e., MapReduce and HDFS run on the same set of nodes. This configuration lets the framework schedule tasks on the nodes where the data already resides, yielding very high aggregate bandwidth across the cluster.

The MapReduce framework consists of a single master ResourceManager, one slave NodeManager per cluster-node, and MRAppMaster per application (see YARN Architecture Guide).

The MapReduce framework consists of a single master ResourceManager (RM), one NodeManager (NM) slave per cluster node, and one MRAppMaster (AM) per application.

Minimally, applications specify the input/output locations and supply map and reduce functions via implementations of appropriate interfaces and/or abstract-classes. These, and other job parameters, comprise the job configuration.

At a minimum, an application specifies the input/output locations and supplies its map and reduce functions by implementing the appropriate interfaces and/or abstract classes; these, together with other job parameters, make up the job configuration.

The Hadoop job client then submits the job (jar/executable etc.) and configuration to the ResourceManager which then assumes the responsibility of distributing the software/configuration to the slaves, scheduling tasks and monitoring them, providing status and diagnostic information to the job-client.

The Hadoop job client submits the job (jar/executable etc.) and its configuration to the ResourceManager, which then distributes the software/configuration to the slaves, schedules and monitors the tasks, and provides status and diagnostic information to the job client.

Inputs and Outputs

The MapReduce framework operates exclusively on <key, value> pairs, that is, the framework views the input to the job as a set of <key, value> pairs and produces a set of <key, value> pairs as the output of the job, conceivably of different types.

The MapReduce framework operates only on key-value pairs: a job's input is viewed as a set of key-value pairs, and a set of key-value pairs, possibly of different types, is produced as the job's output.

The key and value classes have to be serializable by the framework and hence need to implement the Writable interface. Additionally, the key classes have to implement the WritableComparable interface to facilitate sorting by the framework.

The key and value classes must be serializable by the framework and hence need to implement the Writable interface. Key classes must additionally implement the WritableComparable interface so the framework can sort them.

  • Writable interface source
package org.apache.hadoop.io;

import java.io.DataOutput;
import java.io.DataInput;
import java.io.IOException;

import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;

/**
 * A serializable object which implements a simple, efficient, serialization 
 * protocol, based on {@link DataInput} and {@link DataOutput}.
 *
 * <p>Any <code>key</code> or <code>value</code> type in the Hadoop Map-Reduce
 * framework implements this interface.</p>
 * 
 * <p>Implementations typically implement a static <code>read(DataInput)</code>
 * method which constructs a new instance, calls {@link #readFields(DataInput)} 
 * and returns the instance.</p>
 * 
 * <p>Example:</p>
 * <p><blockquote><pre>
 *     public class MyWritable implements Writable {
 *       // Some data     
 *       private int counter;
 *       private long timestamp;
 *       
 *       public void write(DataOutput out) throws IOException {
 *         out.writeInt(counter);
 *         out.writeLong(timestamp);
 *       }
 *       
 *       public void readFields(DataInput in) throws IOException {
 *         counter = in.readInt();
 *         timestamp = in.readLong();
 *       }
 *       
 *       public static MyWritable read(DataInput in) throws IOException {
 *         MyWritable w = new MyWritable();
 *         w.readFields(in);
 *         return w;
 *       }
 *     }
 * </pre></blockquote></p>
 */
@InterfaceAudience.Public
@InterfaceStability.Stable
public interface Writable {
  /** 
   * Serialize the fields of this object to <code>out</code>.
   * 
   * @param out <code>DataOuput</code> to serialize this object into.
   * @throws IOException
   */
  void write(DataOutput out) throws IOException;

  /** 
   * Deserialize the fields of this object from <code>in</code>.  
   * 
   * <p>For efficiency, implementations should attempt to re-use storage in the 
   * existing object where possible.</p>
   * 
   * @param in <code>DataInput</code> to deserialize this object from.
   * @throws IOException
   */
  void readFields(DataInput in) throws IOException;
}
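The write/readFields contract above can be exercised with the JDK's DataOutputStream and DataInputStream alone. A minimal round-trip sketch, assuming the MyWritable class and fields from the javadoc example; only the Hadoop `implements Writable` clause is omitted so the snippet compiles without Hadoop on the classpath:

```java
import java.io.*;

public class WritableRoundTrip {
    // Mirrors the MyWritable javadoc example above, minus `implements Writable`.
    static class MyWritable {
        int counter;
        long timestamp;

        public void write(DataOutput out) throws IOException {
            out.writeInt(counter);
            out.writeLong(timestamp);
        }

        public void readFields(DataInput in) throws IOException {
            counter = in.readInt();
            timestamp = in.readLong();
        }

        // The static factory the javadoc describes: construct, readFields, return.
        public static MyWritable read(DataInput in) throws IOException {
            MyWritable w = new MyWritable();
            w.readFields(in);
            return w;
        }
    }

    public static void main(String[] args) throws IOException {
        MyWritable original = new MyWritable();
        original.counter = 42;
        original.timestamp = 1700000000L;

        // Serialize the fields to an in-memory byte buffer ...
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        original.write(new DataOutputStream(buf));

        // ... then deserialize them back into a fresh instance.
        MyWritable copy = MyWritable.read(
                new DataInputStream(new ByteArrayInputStream(buf.toByteArray())));

        System.out.println(copy.counter + " " + copy.timestamp); // 42 1700000000
    }
}
```

This round trip is exactly what the framework does when it moves keys and values between map and reduce tasks, just over the network and local disk instead of a byte array.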
  • WritableComparable interface source
package org.apache.hadoop.io;

import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;

/**
 * A {@link Writable} which is also {@link Comparable}. 
 *
 * <p><code>WritableComparable</code>s can be compared to each other, typically 
 * via <code>Comparator</code>s. Any type which is to be used as a 
 * <code>key</code> in the Hadoop Map-Reduce framework should implement this
 * interface.</p>
 *
 * <p>Note that <code>hashCode()</code> is frequently used in Hadoop to partition
 * keys. It's important that your implementation of hashCode() returns the same 
 * result across different instances of the JVM. Note also that the default 
 * <code>hashCode()</code> implementation in <code>Object</code> does <b>not</b>
 * satisfy this property.</p>
 *  
 * <p>Example:</p>
 * <p><blockquote><pre>
 *     public class MyWritableComparable implements WritableComparable<MyWritableComparable> {
 *       // Some data
 *       private int counter;
 *       private long timestamp;
 *       
 *       public void write(DataOutput out) throws IOException {
 *         out.writeInt(counter);
 *         out.writeLong(timestamp);
 *       }
 *       
 *       public void readFields(DataInput in) throws IOException {
 *         counter = in.readInt();
 *         timestamp = in.readLong();
 *       }
 *       
 *       public int compareTo(MyWritableComparable o) {
 *         int thisValue = this.counter;
 *         int thatValue = o.counter;
 *         return (thisValue &lt; thatValue ? -1 : (thisValue==thatValue ? 0 : 1));
 *       }
 *
 *       public int hashCode() {
 *         final int prime = 31;
 *         int result = 1;
 *         result = prime * result + counter;
 *         result = prime * result + (int) (timestamp ^ (timestamp &gt;&gt;&gt; 32));
 *         return result;
 *       }
 *     }
 * </pre></blockquote></p>
 */
@InterfaceAudience.Public
@InterfaceStability.Stable
public interface WritableComparable<T> extends Writable, Comparable<T> {
}
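The compareTo/hashCode pattern from the javadoc example can be checked without Hadoop. A sketch under the assumption that the key compares on its counter field, as in the example above (write/readFields are omitted here for brevity):

```java
public class ComparableKeyDemo {
    // Mirrors the MyWritableComparable javadoc example, minus serialization.
    static class MyKey {
        private final int counter;
        private final long timestamp;

        MyKey(int counter, long timestamp) {
            this.counter = counter;
            this.timestamp = timestamp;
        }

        // Natural ordering on counter, used by the framework's sort phase.
        public int compareTo(MyKey o) {
            int thisValue = this.counter;
            int thatValue = o.counter;
            return (thisValue < thatValue ? -1 : (thisValue == thatValue ? 0 : 1));
        }

        // Value-based hashCode: equal field values hash identically in any JVM
        // instance, which Hadoop relies on when partitioning keys. The default
        // Object.hashCode() does NOT have this property.
        public int hashCode() {
            final int prime = 31;
            int result = 1;
            result = prime * result + counter;
            result = prime * result + (int) (timestamp ^ (timestamp >>> 32));
            return result;
        }
    }

    public static void main(String[] args) {
        MyKey a  = new MyKey(1, 100L);
        MyKey b  = new MyKey(2, 100L);
        MyKey a2 = new MyKey(1, 100L); // same field values as a

        System.out.println(a.compareTo(b) < 0);            // true
        System.out.println(a.hashCode() == a2.hashCode()); // true: value-based
    }
}
```

The second check is the important one for partitioning: two key objects built from the same data must land in the same partition, on any node.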