Environment
Three virtual machines; this walkthrough uses three CentOS 6.9 VMs (named bigdata1, bigdata2, bigdata3, with the IP mappings for all three already added to each machine's hosts file)
http://mirrors.aliyun.com/centos/6.9/isos/x86_64/CentOS-6.9-x86_64-minimal.iso
Note: all operations in this article are performed as root; create a dedicated user in production
Basic configuration
Firewall
Stop the firewall
service iptables stop
Start the firewall
service iptables start
Disable the firewall on boot
chkconfig iptables off
Re-enable the firewall on boot
chkconfig iptables on
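To confirm the firewall state after these changes, a quick check with the standard CentOS 6 commands:
service iptables status
chkconfig --list iptables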
Hosts file configuration
Edit the /etc/hosts file
vi /etc/hosts
Append at the end of the file
192.168.128.129 bigdata1
192.168.128.130 bigdata2
192.168.128.131 bigdata3
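A quick way to confirm the mappings resolve on each host:
ping -c 1 bigdata2
ping -c 1 bigdata3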
Passwordless SSH login
On every VM, generate a key pair and copy the public key to all three hosts (including itself)
ssh-keygen
ssh-copy-id bigdata1
ssh-copy-id bigdata2
ssh-copy-id bigdata3
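To confirm passwordless login works, the following loop run on each VM should print all three hostnames without ever prompting for a password:
for h in bigdata1 bigdata2 bigdata3; do ssh $h hostname; done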
Software preparation
JDK
Hadoop
http://archive.cloudera.com/cdh5/cdh/5/hadoop-2.6.0-cdh5.13.0.tar.gz
Installation steps
JDK (install on one node only; distribute to the other nodes once configured)
Installation is not covered here; see "Installing the JDK on Linux"
Hadoop (install on one node only; distribute to the other nodes once configured)
Extract the archive
Create the /opt/software directory
mkdir /opt/software
Move the Hadoop tarball into this directory and extract it
tar -zxvf hadoop-2.6.0-cdh5.13.0.tar.gz
Environment variables
Edit the system environment variables
vi /etc/profile
Append at the end of the file
export HADOOP_HOME=/opt/software/hadoop-2.6.0-cdh5.13.0
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
Apply the environment variables
source /etc/profile
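If the variables took effect, the hadoop binary should now resolve from the new PATH (assuming the tarball was extracted to the path above):
which hadoop
hadoop version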
Hadoop configuration
masters
Edit etc/hadoop/masters under the Hadoop directory
vi etc/hadoop/masters
bigdata1
slaves
Edit etc/hadoop/slaves under the Hadoop directory
vi etc/hadoop/slaves
bigdata2
bigdata3
hadoop-env.sh
Edit etc/hadoop/hadoop-env.sh under the Hadoop directory
vi etc/hadoop/hadoop-env.sh
export JAVA_HOME="/opt/software/jdk1.8.0_144"
export HADOOP_PREFIX="/opt/software/hadoop-2.6.0-cdh5.13.0"
yarn-env.sh
Edit etc/hadoop/yarn-env.sh under the Hadoop directory
vi etc/hadoop/yarn-env.sh
export YARN_LOG_DIR="/opt/software/hadoop-2.6.0-cdh5.13.0/yarn/logs"
core-site.xml
Edit etc/hadoop/core-site.xml under the Hadoop directory
vi etc/hadoop/core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://bigdata1:9000</value>
<description>URI and port of the HDFS NameNode (the cluster master)</description>
</property>
<property>
<name>io.file.buffer.size</name>
<value>131072</value>
<description>Read/write buffer size used when processing sequence files</description>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/opt/software/hadoop-2.6.0-cdh5.13.0/hdfs/tmp</value>
<description>Base directory for temporary data (a plain local path; other defaults interpolate it into file:// URIs, so it should not carry a scheme)</description>
</property>
</configuration>
hdfs-site.xml
Edit etc/hadoop/hdfs-site.xml under the Hadoop directory
vi etc/hadoop/hdfs-site.xml
<configuration>
<property>
<name>dfs.namenode.http-address</name>
<value>bigdata1:50070</value>
<description>HTTP address of the NameNode web UI</description>
</property>
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>bigdata2:50090</value>
<description>HTTP address of the SecondaryNameNode web UI</description>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
<description>Number of HDFS replicas</description>
</property>
<property>
<name>dfs.blocksize</name>
<value>1048576</value>
<description>Block size of 1 MB, deliberately small for testing (the default is 128 MB)</description>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
<description>Whether to enforce permission checks on DFS files (usually false for testing)</description>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/opt/software/hadoop-2.6.0-cdh5.13.0/hdfs/name</value>
<description>Local directory where the NameNode stores the fsimage</description>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/opt/software/hadoop-2.6.0-cdh5.13.0/hdfs/data</value>
<description>Local directory where the DataNode stores blocks</description>
</property>
</configuration>
mapred-site.xml
Edit etc/hadoop/mapred-site.xml under the Hadoop directory (if the tarball only ships mapred-site.xml.template, copy it to mapred-site.xml first)
vi etc/hadoop/mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
yarn-site.xml
Edit etc/hadoop/yarn-site.xml under the Hadoop directory
vi etc/hadoop/yarn-site.xml
<configuration>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>bigdata1</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
</configuration>
Distributing to the other nodes
scp command template
scp -r source root@hostname:/target
Distribute the software directory
scp -r /opt/software root@bigdata2:/opt
scp -r /opt/software root@bigdata3:/opt
Distribute the hosts file
scp -r /etc/hosts root@bigdata2:/etc
scp -r /etc/hosts root@bigdata3:/etc
Distribute the /etc/profile file (environment variables)
scp -r /etc/profile root@bigdata2:/etc
scp -r /etc/profile root@bigdata3:/etc
Note: after distributing the environment variable file, run the following on every machine:
source /etc/profile
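The distribution steps above can also be wrapped in a single loop; a minimal sketch, assuming root SSH access to the other two nodes as set up earlier:
for h in bigdata2 bigdata3; do
  scp -r /opt/software root@$h:/opt      # Hadoop and JDK installs
  scp /etc/hosts /etc/profile root@$h:/etc   # hostname mappings and environment variables
done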
Startup and verification
Format the NameNode (run on bigdata1 only)
hdfs namenode -format
Note: if the cluster has been started after a format, formatting again can leave the DataNodes with a cluster ID that no longer matches the NameNode's, so the DataNodes fail to start. There are two fixes:
- Open the current/VERSION file under the NameNode and DataNode directories configured in hdfs-site.xml; the clusterID values will indeed differ, just as the DataNode logs report. Change the clusterID in each DataNode's VERSION file to match the NameNode's, restart DFS (start-dfs.sh), and jps should then show the DataNode running (see the sketch after this list)
- Delete the hdfs directory on every host in the cluster and re-format the NameNode
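A minimal sketch of the first fix, using the name/data directories configured in hdfs-site.xml above:
# On bigdata1 (NameNode), note the clusterID:
grep clusterID /opt/software/hadoop-2.6.0-cdh5.13.0/hdfs/name/current/VERSION
# On each DataNode, compare, and if it differs edit VERSION to match the NameNode's value:
grep clusterID /opt/software/hadoop-2.6.0-cdh5.13.0/hdfs/data/current/VERSION
vi /opt/software/hadoop-2.6.0-cdh5.13.0/hdfs/data/current/VERSION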
Start HDFS (run on bigdata1 only)
start-dfs.sh
Start YARN (run on bigdata1 only)
start-yarn.sh
Verification
http://192.168.128.129:50070
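If startup succeeded, running jps on each node should show roughly the following daemons (given the slaves file and SecondaryNameNode address configured above):
jps
# bigdata1: NameNode, ResourceManager
# bigdata2: DataNode, SecondaryNameNode, NodeManager
# bigdata3: DataNode, NodeManager
The YARN ResourceManager web UI listens on port 8088 by default:
http://192.168.128.129:8088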
Miscellaneous
Basic operations
hadoop fs -ls /
hadoop fs -put file /
hadoop fs -mkdir /dirname
hadoop fs -text /filename
hadoop fs -rm /filename
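A quick end-to-end smoke test built from the commands above (the file name here is only an example):
echo "hello hdfs" > /tmp/hello.txt
hadoop fs -put /tmp/hello.txt /     # upload
hadoop fs -ls /                     # the file should be listed
hadoop fs -text /hello.txt          # should print "hello hdfs"
hadoop fs -rm /hello.txt            # clean up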
Scaling out
- Provision a new virtual machine
- Update the hosts file
- Add the new hostname to etc/hadoop/slaves under the Hadoop directory on the NameNode host
- Distribute hosts, profile, and software in the same way as before
- On the new host, run:
hadoop-daemon.sh start datanode
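Back on the NameNode, the new host should then appear in the live DataNode list; if the new node should also run YARN containers, starting a NodeManager on it (yarn-daemon.sh start nodemanager) is a reasonable extra step, though the flow above only adds HDFS capacity:
hdfs dfsadmin -report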