## 1. Purpose
This document is a basic installation guide for Apache Hadoop, intended for quick-start practice. For a production deployment, the key steps and important parameter settings need far more careful treatment.
## 2. References
- *Hadoop: The Definitive Guide*, 3rd Edition
- Hadoop Reference Documentation
## 3. System Preparation
Prepare three CentOS 6.5 servers, each with at least 2 GB of memory and outbound internet access.

Server info:

| hostname | ip |
| --- | --- |
| master | 192.168.0.180 |
| slave1 | 192.168.0.181 |
| slave2 | 192.168.0.182 |

Download the Oracle JDK:
http://www.oracle.com/technetwork/java/javase/downloads/index-jsp-138363.html

Download Apache Hadoop (choose the pre-built binary release):
http://hadoop.apache.org/releases.html
## 4. Base Environment Setup
The base environment work consists of user creation, the JDK, and passwordless SSH login. In a virtual-machine environment you would normally configure the JDK and passwordless SSH on one VM first, clone or copy it twice, and then just change the IP and hostname on the copies. This walkthrough instead sets up all three VMs from scratch, choosing the "basic server" option at install time.
### 4.1 Network Configuration
Perform these steps on master first, then repeat them on slave1 and slave2 (adjusting the hostname and IP for each machine).

Hostname configuration:
```bash
vi /etc/sysconfig/network
#set HOSTNAME (use slave1 / slave2 on the other nodes)
HOSTNAME=master
```
hosts configuration:
```bash
vi /etc/hosts
#append the following
192.168.0.180 master
192.168.0.181 slave1
192.168.0.182 slave2
```
NIC configuration:
```bash
vi /etc/sysconfig/network-scripts/ifcfg-eth0
#edit the following (IPADDR is this machine's own address)
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.0.180
GATEWAY=192.168.0.1
NETMASK=255.255.255.0
#restart the network service
service network restart
```
DNS configuration:
```bash
vi /etc/resolv.conf
#append the following
nameserver 114.114.114.114
```
Disable the firewall:
```bash
service iptables stop
chkconfig iptables off
```
Disable SELinux:
```bash
vi /etc/selinux/config
#change SELINUX
SELINUX=disabled
```
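Once all three machines are configured, a quick reachability loop confirms the static addresses took effect. This is just a sketch using the IPs from the table in section 3:

```shell
# Ping each cluster node once; count how many answer
up=0
for ip in 192.168.0.180 192.168.0.181 192.168.0.182; do
  if ping -c 1 -W 1 "$ip" >/dev/null 2>&1; then
    echo "$ip reachable"; up=$((up+1))
  else
    echo "$ip UNREACHABLE"
  fi
done
echo "$up/3 nodes reachable"
```

If any node shows UNREACHABLE, recheck its ifcfg-eth0 file and restart the network service before going further.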
### 4.2 JDK Installation
```bash
#check for an existing JDK:
rpm -qa | grep java
#remove the bundled OpenJDK
rpm -e java-1.7.0-openjdk-1.7.0.45-2.4.3.3.el6.x86_64
rpm -e java-1.6.0-openjdk-1.6.0.0-1.66.1.13.0.el6.x86_64
#install the Oracle JDK
scp jdk-8u111-linux-x64.tar.gz root@master:~/
mkdir -p /usr/java
tar -zxvf jdk-8u111-linux-x64.tar.gz -C /usr/java
#edit the environment variables
vi /etc/profile
#append at the end of the file
export JAVA_HOME=/usr/java/jdk1.8.0_111
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin
#apply the changes
source /etc/profile
#verify
java -version
javac -version
```
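Before trusting `java -version`, it is worth checking that the PATH edit above actually reached the current shell. A minimal sketch of that check, using the JAVA_HOME value set in /etc/profile above:

```shell
# Verify that $JAVA_HOME/bin is present in PATH
JAVA_HOME=/usr/java/jdk1.8.0_111   # value from /etc/profile above
PATH="$PATH:$JAVA_HOME/bin"
case ":$PATH:" in
  *":$JAVA_HOME/bin:"*) path_ok=yes ;;
  *)                    path_ok=no  ;;
esac
echo "PATH contains JAVA_HOME/bin: $path_ok"   # prints: PATH contains JAVA_HOME/bin: yes
```

If this prints `no` in your login shell, `java` is probably still resolving to a leftover OpenJDK binary.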
### 4.3 Users and Directories
```bash
#create the group and user
groupadd hadoop
useradd -g hadoop -s /bin/bash -d /home/hadoop -m hadoop
passwd hadoop
#create the hadoop media directory
mkdir -p /opt/hadoop/
#create the hadoop storage directories
mkdir -p /hadoop/hdfs/name
mkdir -p /hadoop/hdfs/data
mkdir -p /hadoop/tmp
#directory ownership
chown -R hadoop:hadoop /hadoop
chown -R hadoop:hadoop /opt/hadoop
```
### 4.4 Passwordless SSH Login
Run on master and on every slave:
```bash
#generate the key pair
su - hadoop
ssh-keygen -t rsa -P ''
chmod 700 ~/.ssh
```
Run on master:
```bash
#copy the public key to each node (master included: start-dfs.sh also ssh-es into master itself)
ssh-copy-id hadoop@master
ssh-copy-id hadoop@slave1
ssh-copy-id hadoop@slave2
#test the logins
ssh slave1
ssh slave2
```
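Interactive `ssh slave1` can silently fall back to a password prompt, which hides a broken key setup. `ssh -o BatchMode=yes` fails fast instead of prompting, so all slaves can be checked in one pass. A sketch, using this guide's hostnames:

```shell
# BatchMode=yes makes ssh fail instead of asking for a password
ok=0
for h in slave1 slave2; do
  if ssh -o BatchMode=yes -o ConnectTimeout=5 "hadoop@$h" true 2>/dev/null; then
    echo "$h: passwordless login OK"; ok=$((ok+1))
  else
    echo "$h: passwordless login FAILED"
  fi
done
echo "$ok/2 slaves reachable over ssh"
```

Any FAILED line means `ssh-copy-id` did not take effect on that host; check the `~/.ssh` permissions there.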
## 5. Hadoop Installation and Configuration
### 5.1 Deploying the Hadoop Media
Upload the Hadoop tarball (to the hadoop user's home, so the following steps can read it):
```bash
scp hadoop-2.7.3.tar.gz hadoop@master:~/
```
Unpack the media and set the environment variables:
```bash
su - hadoop
#unpack the media
tar -zxvf hadoop-2.7.3.tar.gz -C /opt/hadoop/
#edit the environment variables; the slaves should get the same Hadoop variables as well
vi ~/.bash_profile
#add the following
export HADOOP_HOME=/opt/hadoop/hadoop-2.7.3
#add hadoop's bin directory to PATH
PATH=$PATH:$HOME/bin:$HADOOP_HOME/bin
#apply the changes
source ~/.bash_profile
```
### 5.2 Configure core-site.xml
```bash
cd $HADOOP_HOME
vi etc/hadoop/core-site.xml
```
Edit the following:
```xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/hadoop/tmp</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
</configuration>
```
### 5.3 Configure hdfs-site.xml
```bash
cd $HADOOP_HOME
vi etc/hadoop/hdfs-site.xml
```
Add the following:
```xml
<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/hadoop/hdfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/hadoop/hdfs/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>master:9001</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
</configuration>
```
### 5.4 Configure mapred-site.xml
```bash
cd $HADOOP_HOME
#mapred-site.xml does not exist by default; copy the template
cp etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml
vi etc/hadoop/mapred-site.xml
```
Add the following:
```xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>master:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>master:19888</value>
    </property>
</configuration>
```
### 5.5 Configure yarn-site.xml
```bash
cd $HADOOP_HOME
vi etc/hadoop/yarn-site.xml
```
Add the following:
```xml
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>master:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>master:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>master:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>master:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>master:8088</value>
    </property>
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>1600</value>
    </property>
</configuration>
```
### 5.6 Configure the env.sh Files
Configure hadoop-env.sh:
```bash
cd $HADOOP_HOME
vi etc/hadoop/hadoop-env.sh
#set JAVA_HOME explicitly, e.g. export JAVA_HOME=/usr/java/jdk1.8.0_111
```
Configure yarn-env.sh:
```bash
cd $HADOOP_HOME
vi etc/hadoop/yarn-env.sh
#set JAVA_HOME
#optionally adjust JAVA_HEAP_MAX=-Xmx3072m
```
### 5.7 Configure the slaves File
Add the slave machines:
```bash
vi etc/hadoop/slaves
#add the following
slave1
slave2
```
### 5.8 Distribute the Hadoop Media
```bash
cd /opt/hadoop
scp -r hadoop-2.7.3 slave1:/opt/hadoop
scp -r hadoop-2.7.3 slave2:/opt/hadoop
```
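After the initial full copy, later edits to any file under etc/hadoop on master only require pushing the configuration directory again. A sketch, assuming the paths and hostnames used above:

```shell
# Re-sync just the config directory to every slave after an edit on master
HADOOP_HOME=${HADOOP_HOME:-/opt/hadoop/hadoop-2.7.3}
synced=0
for h in slave1 slave2; do
  if scp -r -o BatchMode=yes -o ConnectTimeout=5 \
       "$HADOOP_HOME/etc/hadoop" "hadoop@$h:$HADOOP_HOME/etc/" 2>/dev/null; then
    echo "$h: configs synced"; synced=$((synced+1))
  else
    echo "$h: sync FAILED"
  fi
done
echo "$synced/2 slaves updated"
```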
## 6. Verifying the Hadoop Environment
### 6.1 Managing the Hadoop Processes
```bash
cd $HADOOP_HOME
#format HDFS once on master before the very first start
bin/hdfs namenode -format
#start the hadoop daemons
sbin/start-dfs.sh
sbin/start-yarn.sh
#stop the hadoop daemons
sbin/stop-dfs.sh
sbin/stop-yarn.sh
#start-all.sh and stop-all.sh still work, but they are marked as deprecated
```
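After start-up, a small loop over `jps` output can assert that the expected daemons are present on a node. A sketch; the daemon list here is what this guide's master node should be running:

```shell
# Check the expected master-side daemons against jps output
expected="NameNode SecondaryNameNode ResourceManager"
running="$(jps 2>/dev/null)"
missing=0
for d in $expected; do
  if echo "$running" | grep -qw "$d"; then
    echo "$d: up"
  else
    echo "$d: DOWN"; missing=$((missing+1))
  fi
done
echo "daemons missing: $missing"
```

On a slave, swap the `expected` list for `DataNode NodeManager`.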
### 6.2 Checking Hadoop's Running State
Verify the processes on each of the three machines.
master status:
```bash
#HDFS status report
hdfs dfsadmin -report
#process check
[hadoop@master hadoop-2.7.3]$ jps
31616 Jps
31355 ResourceManager
31071 SecondaryNameNode
```
slave status:
```bash
[hadoop@slave1 ~]$ jps
30885 NodeManager
30987 Jps
30669 DataNode
```
Open the web management UIs:
```
http://master:8088/cluster
```
![Paste_Image.png](http://upload-images.jianshu.io/upload_images/68803-ea481d5c8dc91eec.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240)
```
http://master:50070
```
![Paste_Image.png](http://upload-images.jianshu.io/upload_images/68803-d0a4caf99ea5b853.png?imageMogr2/auto-orient/strip%7CimageView2/2/w/1240)
### 6.3 Running the wordcount Example
```bash
#create an HDFS directory
hdfs dfs -mkdir /test
#upload a text file containing a few arbitrary words
hdfs dfs -put test.txt /test/
hdfs dfs -ls /test/
#run wordcount
cd $HADOOP_HOME
hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar wordcount /test/test.txt /test/out
#verify the result
[hadoop@master hadoop-2.7.3]$ hdfs dfs -ls /test/out
Found 2 items
-rw-r--r--   2 hadoop supergroup          0 2016-11-20 20:56 /test/out/_SUCCESS
-rw-r--r--   2 hadoop supergroup         41 2016-11-20 20:56 /test/out/part-r-00000
[hadoop@master hadoop-2.7.3]$ hdfs dfs -cat /test/out/part-r-00000
china 1
hadoop 1
hello 1
people 1
word 1
```
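The same counting logic can be reproduced locally with coreutils, which is a handy cross-check that the MapReduce output is plausible. A sketch; the sample text below is hypothetical, not the original test.txt:

```shell
# Local word count: split on whitespace, sort, count -- same shape as wordcount's output
printf 'hello china\nhadoop hello people word\n' > /tmp/wc-sample.txt
counts="$(tr -s ' \t' '\n' < /tmp/wc-sample.txt | sort | uniq -c | awk '{print $2"\t"$1}')"
printf '%s\n' "$counts"
```

Each output line is `word<TAB>count`, matching the format of part-r-00000 above.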
## 7. Hadoop Troubleshooting
### 7.1 Pre-start Checks
- Check that the firewall is disabled
- Check that /etc/hosts is configured correctly
- Check the host network configuration
- Check that SELinux is disabled
- Check that the JDK is installed correctly
- Check that passwordless SSH login works
- Check the permissions on the Hadoop storage and media directories
- Check that the four key Hadoop configuration files are correct

Check yarn-site.xml. This parameter is really a performance-tuning knob and can be left alone for a first install, but keep it above 1536, otherwise the wordcount job may fail to run:
```xml
<property>
<name>yarn.nodemanager.resource.memory-mb</name>
<value>1600</value>
</property>
```
Check yarn-env.sh. This setting is also a performance-tuning knob; a first install can leave it at the default:
```bash
JAVA_HEAP_MAX=-Xmx3072m
```
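The checklist above can be turned into a small pre-flight script. A sketch; the paths and service names assume the CentOS 6 setup from this guide:

```shell
# Pre-flight checks: SELinux, iptables, and hostname resolution
failures=0
check_fail() { echo "CHECK FAILED: $1"; failures=$((failures+1)); }

# SELinux should report Disabled
[ "$(getenforce 2>/dev/null)" = "Disabled" ] || check_fail "SELinux is not disabled"

# iptables should not be running
service iptables status >/dev/null 2>&1 && check_fail "iptables is still running"

# every cluster hostname should resolve
for h in master slave1 slave2; do
  getent hosts "$h" >/dev/null 2>&1 || check_fail "cannot resolve $h"
done

echo "failed checks: $failures"
```

Run it as root on each node before the first `start-dfs.sh`; a non-zero count points at exactly which prerequisite to revisit.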
### 7.2 Runtime Logs
The runtime logs are under $HADOOP_HOME/logs:
```bash
cd $HADOOP_HOME/logs
[hadoop@master logs]$ ls
hadoop-hadoop-namenode-master.log hadoop-hadoop-secondarynamenode-master.out.2
hadoop-hadoop-namenode-master.out hadoop-hadoop-secondarynamenode-master.out.3
hadoop-hadoop-namenode-master.out.1 SecurityAuth-hadoop.audit
hadoop-hadoop-namenode-master.out.2 yarn-hadoop-resourcemanager-master.log
hadoop-hadoop-namenode-master.out.3 yarn-hadoop-resourcemanager-master.out
hadoop-hadoop-secondarynamenode-master.log yarn-hadoop-resourcemanager-master.out.1
hadoop-hadoop-secondarynamenode-master.out yarn-hadoop-resourcemanager-master.out.2
hadoop-hadoop-secondarynamenode-master.out.1 yarn-hadoop-resourcemanager-master.out.3
#inspect a log file; do the same on the slave machines
vi hadoop-hadoop-namenode-master.log
```
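When a daemon refuses to start, grepping the .log files for ERROR/FATAL lines is usually the fastest route to the cause. Shown here on a synthetic sample line; against the real cluster, point it at $HADOOP_HOME/logs/*.log:

```shell
# Pull only ERROR/FATAL lines out of a daemon log (synthetic sample for illustration)
printf '2016-11-20 20:56:01,000 INFO namenode.NameNode: started\n2016-11-20 20:56:02,000 ERROR datanode.DataNode: disk full\n' > /tmp/sample-daemon.log
errors="$(grep -cE 'ERROR|FATAL' /tmp/sample-daemon.log)"
grep -E 'ERROR|FATAL' /tmp/sample-daemon.log
# real cluster: grep -E 'ERROR|FATAL' $HADOOP_HOME/logs/*.log
```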