In pioneer days they used oxen for heavy pulling, and when one ox couldn't budge a log, they didn't try to grow a larger ox. Likewise, we shouldn't be trying for bigger computers, but for more systems of computers. -- Grace Hopper
I. Introduction to Hadoop
Hadoop implements a distributed file system, the Hadoop Distributed File System (HDFS for short). The core of the Hadoop framework consists of two designs: HDFS and MapReduce.
HDFS is essentially a big hard drive that is split, redundant, and expandable:
- Split: data is stored in blocks
- Redundant: data is replicated across nodes
- Expandable: capacity can be scaled dynamically
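The "split plus redundant" idea above can be sketched in a few lines of Python. This is a toy model only, with a made-up round-robin placement; real HDFS uses a rack-aware placement policy:

```python
# Toy model of HDFS-style storage: split a file into fixed-size blocks,
# then place each block on several different nodes (replication).
def place_blocks(file_size, block_size, nodes, replicas):
    num_blocks = -(-file_size // block_size)  # ceiling division
    placement = {}
    for b in range(num_blocks):
        # Round-robin placement purely for illustration; real HDFS
        # chooses replica locations with rack awareness.
        placement[b] = [nodes[(b + r) % len(nodes)] for r in range(replicas)]
    return placement

layout = place_blocks(file_size=300, block_size=128,
                      nodes=["slave1", "slave2", "slave3"], replicas=3)
# A 300-byte file with 128-byte blocks -> 3 blocks, each kept on 3 nodes,
# so the loss of any single node loses no data.
```

Losing `slave2` here still leaves every block available on two other nodes, which is the point of the redundancy.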
II. Advantages of Hadoop
- Convenient: Hadoop runs on large clusters built from ordinary commodity machines
- Highly reliable: Hadoop's ability to store and process data bit by bit is dependable
- Highly scalable: by adding cluster nodes, Hadoop scales linearly to handle larger data sets
- Highly efficient: Hadoop moves data dynamically between nodes and keeps the nodes dynamically balanced
- Highly fault-tolerant: Hadoop automatically keeps multiple copies of data and automatically reassigns failed tasks
III. Core Components of Hadoop
1. Hadoop Common
Common provides the common utilities used by the other Hadoop projects, chiefly the configuration tool Configuration, the remote procedure call mechanism RPC, the serialization mechanism, and the Hadoop abstract file system FileSystem.
2. Avro
Avro is a data serialization system that converts data structures or objects into a format convenient for storage and transmission.
3. ZooKeeper
ZooKeeper is a distributed service framework that solves the consistency problem in distributed computing. It handles the data-management problems distributed applications commonly face: unified naming services, state synchronization, cluster management, and management of distributed application configuration.
4. HDFS
HDFS (Hadoop Distributed File System) is Hadoop's distributed file system.
5. MapReduce
MapReduce is a computing model for processing large volumes of data. MapReduce divides an application into two steps, Map and Reduce. Map performs a specified operation on each independent element of the data set and produces intermediate results in key-value form; Reduce then combines all the "values" sharing the same "key" in the intermediate results to obtain the final result.
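The Map / shuffle / Reduce split just described can be illustrated with a tiny single-process word count in Python. This only mimics the programming model; a real job is compiled against the Hadoop API and runs distributed across the cluster on YARN:

```python
from collections import defaultdict

def map_phase(lines):
    # Map: emit a (word, 1) pair for every word, independently per line.
    for line in lines:
        for word in line.split():
            yield (word, 1)

def shuffle(pairs):
    # Shuffle: group all emitted values by their key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: combine all values belonging to the same key.
    return {key: sum(values) for key, values in groups.items()}

counts = reduce_phase(shuffle(map_phase(["hello hadoop", "hello hdfs"])))
# counts == {"hello": 2, "hadoop": 1, "hdfs": 1}
```

Because each Map call touches only its own input element and each Reduce call only its own key group, both phases parallelize naturally across machines.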
6. HBase
HBase is a scalable, highly reliable, high-performance, distributed, column-oriented, dynamic-schema database for structured data. It uses an enhanced sparse, sorted map (Key/Value).
7. Pig
Pig runs on Hadoop and is a platform for analyzing and evaluating large data sets.
8. Mahout
Mahout's main goal is to create scalable implementations of classic machine-learning algorithms.
IV. Setting Up a Hadoop Distributed Environment
1. Install a master server in a virtual machine
(1) Change the hostname
[root@localhost ~]# vim /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=master
(2) Configure the NIC IP address
[root@localhost ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
HWADDR=00:0C:29:A8:49:61
TYPE=Ethernet
UUID=8e0bff9b-d7b2-4429-8408-f2f122e323bf
ONBOOT=yes ## change to yes
NM_CONTROLLED=yes
BOOTPROTO=static ## change to static
IPADDR=192.168.137.100 ## this machine's IP
GATEWAY=192.168.137.1 ## gateway
NETMASK=255.255.255.0 ## netmask
DNS1=1.1.1.1 ## DNS server
(3) Create the data directory
[root@localhost ~]# mkdir -p /var/data/hadoop
(4) Configure hosts so the cluster nodes can reach each other by hostname
[root@localhost ~]# vim /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.137.100 master
192.168.137.101 slave1
192.168.137.102 slave2
192.168.137.103 slave3
(5) Disable the firewall
service iptables stop
chkconfig iptables off
2. Configure the Java environment
1) Download the JDK: jdk-8u171-linux-x64.rpm
2) Install it with rpm -i jdk-8u171-linux-x64.rpm; the default install path is /usr/java/jdk1.8.0_171
3) Configure the Java environment variables by adding the following to /etc/profile:
JAVA_HOME=/usr/java/jdk1.8.0_171
PATH=$JAVA_HOME/bin:$PATH
export JAVA_HOME PATH
After exiting vim, run source /etc/profile, then java -version, to check that the Java environment is configured correctly:
[root@localhost ~]# java -version
java version "1.8.0_171"
Java(TM) SE Runtime Environment (build 1.8.0_171-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.171-b11, mixed mode)
3. Install Hadoop
1) Download the desired release; this guide uses hadoop-2.7.6.tar.gz as an example.
2) Extract the Hadoop archive: tar -xzvf hadoop-2.7.6.tar.gz -C /usr/local/ (the paths below assume the extracted directory is then linked to /usr/local/hadoop, e.g. ln -s /usr/local/hadoop-2.7.6 /usr/local/hadoop)
3) Configure the Hadoop runtime environment in hadoop-env.sh:
[root@localhost ~]# cd /usr/local/hadoop/etc/hadoop/
[root@localhost hadoop]# vim hadoop-env.sh
Change the value of JAVA_HOME:
# The java implementation to use.
export JAVA_HOME=/usr/java/jdk1.8.0_171
4) Configure the Hadoop core file, core-site.xml, adding the following properties inside <configuration></configuration>:
[root@localhost hadoop]# vim core-site.xml
<!-- configure the NameNode address -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://master:9000</value>
</property>
<!-- change the default HDFS data path -->
<property>
<name>hadoop.tmp.dir</name>
<value>/var/data/hadoop</value>
</property>
5) Configure the HDFS properties in hdfs-site.xml
[root@localhost hadoop]# vim hdfs-site.xml
<!-- number of replicas HDFS keeps for each data block -->
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
6) Configure the YARN properties in yarn-site.xml
[root@localhost hadoop]# vim yarn-site.xml
<property>
<name>yarn.resourcemanager.hostname</name>
<value>master</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
7) Configure the MapReduce properties in mapred-site.xml
[root@localhost hadoop]# cp mapred-site.xml.template mapred-site.xml
[root@localhost hadoop]# vim mapred-site.xml
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
8) Configure the DataNode nodes
[root@localhost hadoop]# vim slaves
slave1
slave2
slave3
Note: if localhost is listed in this file, a DataNode will also be created on the master.
9) Configure the environment variables
[root@master hadoop]# vim /etc/profile
JAVA_HOME=/usr/java/jdk1.8.0_171
HADOOP_INSTALL=/usr/local/hadoop
PATH=$JAVA_HOME/bin:$HADOOP_INSTALL/bin:$HADOOP_INSTALL/sbin:$PATH
export JAVA_HOME HADOOP_INSTALL PATH
4. Clone the master machine three times, as slave1, slave2, and slave3
1) On each clone, change the IP address and the hostname
[root@localhost ~]# vim /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=slave1
2) The cloned machine cannot use NIC eth0. To fix this:
First, open /etc/udev/rules.d/70-persistent-net.rules
# This file was automatically generated by the /lib/udev/write_net_rules
# program, run by the persistent-net-generator.rules rules file.
#
# You can modify it, as long as you keep each rule on a single
# line, and change only the value of the NAME= key.
# PCI device 0x8086:0x100f (e1000)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:0c:29:a8:49:61",ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
# PCI device 0x8086:0x100f (e1000)
## comment out this line
#SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:0c:29:71:ec:e2", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1"
Comment out this line, and note down the MAC address 00:0c:29:71:ec:e2
3) Configure the NIC settings
DEVICE=eth1 ## change the original eth0 to eth1
HWADDR=00:0C:29:71:ec:e2 ## change to the MAC address recorded in the previous step
TYPE=Ethernet
UUID=8e0bff9b-d7b2-4429-8408-f2f122e323bf
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static
IPADDR=192.168.137.101 ## change the IP address
GATEWAY=192.168.137.2
NETMASK=255.255.255.0
DNS1=1.1.1.1
5. Passwordless SSH login
Hadoop's start/stop scripts use SSH to launch the daemons on each node. To avoid entering a password every time Hadoop is started or stopped, set up passwordless login. Remember to reboot the three slave machines first.
1) Generate an SSH key pair on the master server: ssh-keygen -t rsa
[root@master .ssh]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): ## press Enter
Enter passphrase (empty for no passphrase): ## press Enter
Enter same passphrase again: ## press Enter
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
fd:d8:c4:6c:51:8a:77:35:12:e2:00:54:34:02:09:e3 root@master
The key's randomart image is:
+--[ RSA 2048]----+
| o..+++= . oo..|
| . .. . +..o...|
| E ..+ . |
| . + o |
| S . = |
| * |
| . o |
| |
| |
+-----------------+
2) Copy the generated public key to the three slave servers and to master itself (repeat ssh-copy-id for each host):
[root@master .ssh]# ssh-copy-id slave1
6. Format the NameNode on the master machine
1) Run: hadoop namenode -format
2) Verify that formatting succeeded:
[root@master ~]# cd /var/data/hadoop/
[root@master hadoop]# ll
total 4
drwxr-xr-x. 3 root root 4096 May 6 20:43 dfs
7. Start the Hadoop cluster
1) Run the start script, which launches both HDFS and YARN
[root@master ~]# start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [master]
master: starting namenode, logging to /usr/local/hadoop-2.7.6/logs/hadoop-root-namenode-master.out
slave2: starting datanode, logging to /usr/local/hadoop-2.7.6/logs/hadoop-root-datanode-slave2.out
slave3: starting datanode, logging to /usr/local/hadoop-2.7.6/logs/hadoop-root-datanode-slave3.out
slave1: starting datanode, logging to /usr/local/hadoop-2.7.6/logs/hadoop-root-datanode-slave1.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop-2.7.6/logs/hadoop-root-secondarynamenode-master.out
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop-2.7.6/logs/yarn-root-resourcemanager-master.out
slave2: starting nodemanager, logging to /usr/local/hadoop-2.7.6/logs/yarn-root-nodemanager-slave2.out
slave3: starting nodemanager, logging to /usr/local/hadoop-2.7.6/logs/yarn-root-nodemanager-slave3.out
slave1: starting nodemanager, logging to /usr/local/hadoop-2.7.6/logs/yarn-root-nodemanager-slave1.out
2) Run the jps command on each cluster node to check that the daemons started successfully
[root@master ~]# jps
4197 Jps
3783 SecondaryNameNode
3592 NameNode
3934 ResourceManager
[root@slave1 ~]# jps
2033 DataNode
2124 NodeManager
2223 Jps
3) Access the web UIs Hadoop provides:
192.168.137.100:50070 (HDFS NameNode UI)
192.168.137.100:8088 (YARN ResourceManager UI)