1 Creating a Hadoop User
1.1 Create a new user
Create a user named hadoop, with /bin/bash as its login shell:
$ sudo useradd -m hadoop -s /bin/bash
1.2 Set the password
$ sudo passwd hadoop
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
1.3 Grant the hadoop user administrator privileges
$ sudo adduser hadoop sudo
Adding user `hadoop' to group `sudo' ...
Adding user hadoop to group sudo
Done.
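If adduser is unavailable, usermod offers an equivalent (an alternative sketch, not part of the original steps):
$ sudo usermod -aG sudo hadoop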
2 Installing the Java Environment
2.1 Install
$ sudo apt-get install default-jre default-jdk
2.2 Configure environment variables
$ vim ~/.bashrc
Append the following line at the end: export JAVA_HOME=/usr/lib/jvm/default-java
Then reload the file so the variable takes effect:
$ source ~/.bashrc
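Alternatively, the same line can be appended without opening an editor (a convenience sketch using the same path):
$ echo 'export JAVA_HOME=/usr/lib/jvm/default-java' >> ~/.bashrc
$ source ~/.bashrc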
2.3 Verify the Java installation
$ echo $JAVA_HOME
/usr/lib/jvm/default-java
$ java -version
openjdk version "1.8.0_191"
OpenJDK Runtime Environment (build 1.8.0_191-8u191-b12-0ubuntu0.16.04.1-b12)
OpenJDK 64-Bit Server VM (build 25.191-b12, mixed mode)
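Since default-jdk was installed as well, the Java compiler can be verified too; the reported version should match java -version:
$ javac -version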
3 Setting Up SSH
SSH is short for Secure Shell. It consists of a client and a server: the server is a daemon that runs in the background and responds to requests from clients, while the client side includes programs such as scp (remote copy), sftp (secure file transfer), and slogin (remote login).
Ubuntu installs the SSH client by default; the SSH server still needs to be installed.
[Note]: Hadoop provides no way to supply an SSH password, so all machines must be configured for passwordless login.
3.1 Install the SSH server
$ sudo apt-get install openssh-server
3.2 Log in to localhost
$ ssh localhost
The authenticity of host 'localhost (127.0.0.1)' can't be established.
ECDSA key fingerprint is SHA256:MCT7ubGt3sPlkvS9v//KhAoa7vBO+EVPJN/JXenC8XM.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.
hadoop@localhost's password:
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.10.0-42-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
243 packages can be updated.
11 updates are security updates.
Afterwards, a .ssh directory appears under ~/.
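This login opened a nested shell on localhost; log out before continuing:
$ exit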
3.3 Configure passwordless login
$ cd ~/.ssh/
$ ssh-keygen -t rsa # press Enter at each prompt
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:FaavA0T6j8XH0clbVu0pq5hkad7kADUBibL/76I2U00 hadoop@ubuntu
The key's randomart image is:
+---[RSA 2048]----+
| o.o.+ o|
| . + . = + . ..|
| + . o + + o..|
| . o o E . = ..|
| . o S = . o |
| . * X . . |
| + O B . |
| + o = + |
| ..+ +o |
+----[SHA256]-----+
$ cat ./id_rsa.pub >> ./authorized_keys # authorize the key
Now $ ssh localhost logs in without a password.
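If ssh localhost still prompts for a password, the usual cause is over-permissive modes on the key files, which sshd rejects; a quick fix under that assumption:
$ chmod 700 ~/.ssh
$ chmod 600 ~/.ssh/authorized_keys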
4 Installing Hadoop
Hadoop can be installed in three modes:
(1) Standalone mode: runs on a single machine, with storage on the local file system; the distributed file system HDFS is not used.
(2) Pseudo-distributed mode: storage uses the distributed file system HDFS, but the name node and the data node run on the same machine.
(3) Distributed mode: storage uses the distributed file system HDFS, and the name node and data nodes sit on different machines.
Hadoop download: http://mirrors.cnnic.cn/apache/hadoop/common
4.1 Standalone mode configuration
After downloading the package, extract it and it is ready to use:
$ sudo tar -zxvf hadoop-2.7.1.tar.gz -C /usr/local
$ cd /usr/local/
$ sudo mv ./hadoop-2.7.1/ ./hadoop # rename the folder to hadoop
$ sudo chown -R hadoop ./hadoop # change ownership to the hadoop user
Check the Hadoop version:
$ cd /usr/local/hadoop/bin
$ ./hadoop version
Hadoop 2.7.1
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r 15ecc87ccf4a0228f35af08fc56de536e6ce657a
Compiled by jenkins on 2015-06-29T06:04Z
Compiled with protoc 2.5.0
From source with checksum fc0a1a23fc1868e4d5ee7fa2b28a58a
This command was run using /usr/local/hadoop/share/hadoop/common/hadoop-common-2.7.1.jar
Hadoop ships with many example programs; run the following command to list them:
$ ./hadoop jar ../share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.1.jar
An example program must be given as the first argument.
Valid program names are:
aggregatewordcount: An Aggregate based map/reduce program that counts the words in the input files.
aggregatewordhist: An Aggregate based map/reduce program that computes the histogram of the words in the input files.
bbp: A map/reduce program that uses Bailey-Borwein-Plouffe to compute exact digits of Pi.
dbcount: An example job that count the pageview counts from a database.
distbbp: A map/reduce program that uses a BBP-type formula to compute exact bits of Pi.
grep: A map/reduce program that counts the matches of a regex in the input.
join: A job that effects a join over sorted, equally partitioned datasets
multifilewc: A job that counts words from several files.
pentomino: A map/reduce tile laying program to find solutions to pentomino problems.
pi: A map/reduce program that estimates Pi using a quasi-Monte Carlo method.
randomtextwriter: A map/reduce program that writes 10GB of random textual data per node.
randomwriter: A map/reduce program that writes 10GB of random data per node.
secondarysort: An example defining a secondary sort to the reduce.
sort: A map/reduce program that sorts the data written by the random writer.
sudoku: A sudoku solver.
teragen: Generate data for the terasort
terasort: Run the terasort
teravalidate: Checking results of terasort
wordcount: A map/reduce program that counts the words in the input files.
wordmean: A map/reduce program that counts the average length of the words in the input files.
wordmedian: A map/reduce program that counts the median length of the words in the input files.
wordstandarddeviation: A map/reduce program that counts the standard deviation of the length of the words in the input files.
Now run the grep example:
$ cd /usr/local/hadoop
$ mkdir input
$ cp ./etc/hadoop/*.xml ./input # copy the config files into the input directory
$ ./bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar grep ./input ./output 'dfs[a-z.]+'
$ cat ./output/* # view the results
1 dfsadmin
After a successful run, you can see that the grep program took the input folder as input, selected all words matching the regular expression dfs[a-z.]+, and wrote the per-word occurrence counts to the /usr/local/hadoop/output folder.
[Note]: Running the command again will fail, because Hadoop by default refuses to overwrite the output folder; delete the output folder before rerunning.
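For example, clear it before rerunning:
$ rm -r ./output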
4.2 Pseudo-distributed mode configuration
Run Hadoop in pseudo-distributed mode on a single node (one machine).
4.2.1 Edit the configuration files
Edit the core-site.xml and hdfs-site.xml files in the /usr/local/hadoop/etc/hadoop/ folder.
In core-site.xml, change
<configuration>
</configuration>
to:
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>file:/usr/local/hadoop/tmp</value>
<description>Abase for other temporary directories.</description>
</property>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
- hadoop.tmp.dir stores temporary files; if this parameter is not set, the default temporary directory /tmp/hadoop-hadoop is used, and the system may wipe that directory on a reboot.
- fs.defaultFS specifies the HDFS access address.
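Once core-site.xml is edited, the effective value can be sanity-checked with the getconf subcommand:
$ ./bin/hdfs getconf -confKey fs.defaultFS # run from /usr/local/hadoop; should print hdfs://localhost:9000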
Modify hdfs-site.xml as follows:
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/usr/local/hadoop/tmp/dfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/usr/local/hadoop/tmp/dfs/data</value>
</property>
</configuration>
- dfs.replication: the number of replicas. In a distributed file system, data is normally stored redundantly in multiple copies for reliability and safety; in pseudo-distributed mode there is only one node, so there is only one replica.
- dfs.namenode.name.dir: the directory where the name node keeps its metadata.
- dfs.datanode.data.dir: the directory where the data node keeps its data.
Here the name-node and data-node directories must both be set.
[Note]: Hadoop's run mode is determined by its configuration files. To switch from pseudo-distributed mode back to standalone mode, simply remove the configuration items from core-site.xml.
4.2.2 Format the name node
Run the following commands:
$ cd /usr/local/hadoop
$ ./bin/hdfs namenode -format
[Error]: Exiting with status 1 indicates the format failed:
19/01/11 18:38:02 ERROR namenode.NameNode: Failed to start namenode.
java.lang.IllegalArgumentException: URI has an authority component
at java.io.File.<init>(File.java:423)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.getStorageDirectory(NNStorage.java:329)
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.initJournals(FSEditLog.java:276)
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.initJournalsForWrite(FSEditLog.java:247)
at org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:985)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1429)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1554)
19/01/11 18:38:02 INFO util.ExitUtil: Exiting with status 1
19/01/11 18:38:02 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ubuntu/127.0.1.1
************************************************************/
[Fix]: Check the configuration in hdfs-site.xml. The URI has an authority component exception typically means a directory value was written with two slashes (file://usr/local/...), which makes the path parse as a URI authority; use a single slash, as in file:/usr/local/hadoop/tmp/dfs/name above.
If /usr/local/hadoop/tmp/dfs/name has been successfully formatted. and Exiting with status 0 appear, the format succeeded:
19/01/11 18:46:35 INFO common.Storage: Storage directory /usr/local/hadoop/tmp/dfs/name has been successfully formatted.
19/01/11 18:46:36 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
19/01/11 18:46:36 INFO util.ExitUtil: Exiting with status 0
4.2.3 Start Hadoop
$ cd /usr/local/hadoop
$ ./sbin/start-dfs.sh
[Error]:
Starting namenodes on [localhost]
localhost: Error: JAVA_HOME is not set and could not be found.
localhost: Error: JAVA_HOME is not set and could not be found.
Starting secondary namenodes [0.0.0.0]
[Fix]:
$ echo $JAVA_HOME
/usr/lib/jvm/default-java
The JAVA_HOME path is already set, so the remaining fix is to hard-code an absolute path in /usr/local/hadoop/etc/hadoop/hadoop-env.sh: change export JAVA_HOME=$JAVA_HOME to
export JAVA_HOME=/usr/lib/jvm/default-java
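The same edit can be scripted (a sketch, assuming the export JAVA_HOME line already exists in hadoop-env.sh):
$ sed -i 's|^export JAVA_HOME=.*|export JAVA_HOME=/usr/lib/jvm/default-java|' /usr/local/hadoop/etc/hadoop/hadoop-env.sh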
Use the jps command to check whether Hadoop started successfully; if the DataNode, NameNode, and SecondaryNameNode processes all appear, the startup succeeded.
$ jps
4821 Jps
4459 DataNode
4348 NameNode
4622 SecondaryNameNode
If problems persist, repeat the following commands:
$ ./sbin/stop-dfs.sh # stop Hadoop
$ rm -r ./tmp # delete the tmp directory; note this erases all existing data in HDFS
$ ./bin/hdfs namenode -format # reformat the name node
$ ./sbin/start-dfs.sh # restart
4.2.4 View HDFS information in a browser
Open http://localhost:50070/dfshealth.html#tab-overview in a browser to view the HDFS overview page.
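The same status information is also available from the command line:
$ ./bin/hdfs dfsadmin -report # run from /usr/local/hadoop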
4.2.5 Run a pseudo-distributed Hadoop example
$ cd /usr/local/hadoop
$ ./bin/hdfs dfs -mkdir -p /user/hadoop # create the user directory in HDFS
$ ./bin/hdfs dfs -mkdir input # create the hadoop user's input directory in HDFS
$ ./bin/hdfs dfs -put ./etc/hadoop/*.xml input # copy local files into HDFS
$ ./bin/hdfs dfs -ls input # list the files
Found 8 items
-rw-r--r-- 1 hadoop supergroup 4436 2019-01-11 19:35 input/capacity-scheduler.xml
-rw-r--r-- 1 hadoop supergroup 1075 2019-01-11 19:35 input/core-site.xml
-rw-r--r-- 1 hadoop supergroup 9683 2019-01-11 19:35 input/hadoop-policy.xml
-rw-r--r-- 1 hadoop supergroup 1130 2019-01-11 19:35 input/hdfs-site.xml
-rw-r--r-- 1 hadoop supergroup 620 2019-01-11 19:35 input/httpfs-site.xml
-rw-r--r-- 1 hadoop supergroup 3518 2019-01-11 19:35 input/kms-acls.xml
-rw-r--r-- 1 hadoop supergroup 5511 2019-01-11 19:35 input/kms-site.xml
-rw-r--r-- 1 hadoop supergroup 690 2019-01-11 19:35 input/yarn-site.xml
$ ./bin/hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar grep input output 'dfs[a-z.]+'
....
$ ./bin/hdfs dfs -cat output/* # view the results
1 dfsadmin
1 dfs.replication
1 dfs.namenode.name.dir
1 dfs.datanode.data.dir
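The results can also be copied from HDFS back to the local file system (using the standard -get subcommand; the local directory name ./output-local is illustrative):
$ ./bin/hdfs dfs -get output ./output-local
$ cat ./output-local/*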
To run the job again, delete the output folder in HDFS first:
$ ./bin/hdfs dfs -rm -r output # delete the output folder
4.2.6 Stop Hadoop
Run:
$ ./sbin/stop-dfs.sh
The next time you start Hadoop, do not run the name-node format command again (doing so causes errors); just run the start-dfs.sh command directly.
5 Summary
The Hadoop installation steps:
1 Create a Hadoop user
2 Install the Java environment
3 Set up SSH
4 Edit the core-site.xml and hdfs-site.xml files in the /usr/local/hadoop/etc/hadoop/ folder
5 Related commands
$ cd /usr/local/hadoop
$ ./bin/hdfs namenode -format # format the name node (only needed once)
$ ./sbin/start-dfs.sh # start Hadoop
$ jps # check whether Hadoop started successfully
$ ./sbin/stop-dfs.sh # stop Hadoop
$ rm -r ./tmp # delete the tmp directory; note this erases all existing data in HDFS
$ ./sbin/start-dfs.sh # restart
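For convenience, the start-and-check steps can be wrapped in a small script (a hypothetical helper, not part of Hadoop, assuming the /usr/local/hadoop install path used throughout this guide):
#!/usr/bin/env bash
# start-hadoop.sh: start HDFS and verify the expected daemons are running
set -e
HADOOP_HOME=/usr/local/hadoop # assumed install path from this guide
"$HADOOP_HOME/sbin/start-dfs.sh"
sleep 5 # give the daemons a moment to start
for proc in NameNode DataNode SecondaryNameNode; do
  if jps | grep -qw "$proc"; then
    echo "$proc: running"
  else
    echo "$proc: NOT running" >&2
  fi
done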