This series currently includes:
Building a Big Data Platform (0) - Machine preparation
Building a Big Data Platform (1) - Hadoop environment setup [hdfs, yarn, mapreduce]
Building a Big Data Platform (2) - ZooKeeper environment setup
Building a Big Data Platform (3) - HBase environment setup
Building a Big Data Platform (4) - Hive environment setup
1. Passwordless SSH login setup
[hadoop@master ~]$ ssh -V
OpenSSH_5.3p1, OpenSSL 1.0.1e-fips 11 Feb 2013
After checking the SSH version: if SSH turns out not to be installed, install it with the following command:
[hadoop@master ~]$ sudo yum install openssh-server
Run the following command once on every machine:
$ ssh-keygen -t rsa #keep pressing Enter and accept the default for every prompt
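If you prefer not to answer the prompts at all, the same RSA key pair can also be generated non-interactively (empty passphrase, default location); a small variant of the command above:
$ ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa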
After that finishes, every machine has a ~/.ssh directory
$ ll ~/.ssh #list the files under .ssh
-rw-------. 1 hadoop hadoop 1580 Apr 18 16:53 authorized_keys
-rw-------. 1 hadoop hadoop 1675 Apr 15 16:01 id_rsa
-rw-r--r--. 1 hadoop hadoop 395 Apr 15 16:01 id_rsa.pub
Send the public keys (id_rsa.pub) generated on slave1, slave2 and slave3 to the master machine:
On slave1:
[hadoop@slave1 ~]$ scp ~/.ssh/id_rsa.pub hadoop@master:~/.ssh/id_rsa.pub.slave1
On slave2:
[hadoop@slave2 ~]$ scp ~/.ssh/id_rsa.pub hadoop@master:~/.ssh/id_rsa.pub.slave2
On slave3:
[hadoop@slave3 ~]$ scp ~/.ssh/id_rsa.pub hadoop@master:~/.ssh/id_rsa.pub.slave3
On the master machine, append all of the public keys to the authorized_keys file that is used for authentication:
[hadoop@master ~]$ cat ~/.ssh/id_rsa.pub* >> ~/.ssh/authorized_keys
The permissions of authorized_keys must then be tightened (getting the permissions right is important: insecure permissions will stop RSA key authentication from working):
[hadoop@master ~]$ chmod 600 ~/.ssh/authorized_keys #if passwordless login does not work, this step is probably the one missing
Distribute the authorized_keys file to every slave:
[hadoop@master ~]$ scp ~/.ssh/authorized_keys hadoop@slave1:~/.ssh/
[hadoop@master ~]$ scp ~/.ssh/authorized_keys hadoop@slave2:~/.ssh/
[hadoop@master ~]$ scp ~/.ssh/authorized_keys hadoop@slave3:~/.ssh/
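Once authorized_keys is in place everywhere, the passwordless setup can be checked from master; a quick sanity test (each command should print the slave's hostname without prompting for a password):
for node in slave1 slave2 slave3; do ssh hadoop@${node} hostname; done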
2. Installing the Java environment
After downloading the jdk-8u60-linux-x64.tar.gz package (place it under ~/bigdataspace):
[hadoop@master ~]$ cd ~/bigdataspace
[hadoop@master bigdataspace]$ tar -zxvf jdk-8u60-linux-x64.tar.gz
Edit the environment variable configuration file:
[hadoop@master bigdataspace]$ sudo vi /etc/profile
(append the following at the end of the file)
export JAVA_HOME=/home/hadoop/bigdataspace/jdk1.8.0_60
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
Make the environment variable settings take effect:
[hadoop@master bigdataspace]$ source /etc/profile
Verify that Java was installed successfully:
[hadoop@master bigdataspace]$ java -version
java version "1.8.0_60"
Java(TM) SE Runtime Environment (build 1.8.0_60-b27)
Java HotSpot(TM) 64-Bit Server VM (build 25.60-b23, mixed mode)
(Java must be installed on every machine by repeating the steps above; one way to push the JDK out to the slaves is sketched below)
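A hypothetical loop for copying and unpacking the JDK on the slaves instead of downloading it three more times, assuming ~/bigdataspace already exists on each of them (the /etc/profile edit still has to be repeated by hand on every machine):
for node in slave1 slave2 slave3; do
  scp ~/bigdataspace/jdk-8u60-linux-x64.tar.gz hadoop@${node}:~/bigdataspace/
  ssh hadoop@${node} "cd ~/bigdataspace && tar -zxf jdk-8u60-linux-x64.tar.gz"
done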
Run the following on every machine:
[hadoop@master ~]$ sudo chmod 777 /data/ #let every user read and write under /data
3. Synchronizing time across the cluster machines
Check whether the time service is installed:
[hadoop@master ~]$ rpm -q ntp
ntp-4.2.6p5-1.el6.centos.x86_64 #this output means ntp is already installed; if it is not installed, the output is empty
If it is not installed, run the following installation command:
[hadoop@master ~]$ sudo yum install ntp
The NTP service needs to be configured to start at boot:
[hadoop@master ~]$ sudo chkconfig ntpd on
[hadoop@master ~]$ chkconfig --list ntpd
ntpd 0:off 1:off 2:on 3:on 4:on 5:on 6:off
(UDP port 123 has to be opened on the master machine so that the other nodes can synchronize master's time through that port with ntpdate)
[hadoop@master ~]$ sudo vi /etc/sysconfig/iptables
(the newly added port rule)
-A INPUT -m state --state NEW -m udp -p udp --dport 123 -j ACCEPT
[hadoop@master ~]$ sudo service iptables restart
Before doing the configuration, synchronize the clock once manually with ntpdate, so that the offset between this machine and the external time servers is not so large that ntpd cannot synchronize properly.
[hadoop@master ~]$ sudo ntpdate pool.ntp.org
26 Apr 17:12:15 ntpdate[7376]: step time server 202.112.29.82 offset 13.827386 sec
Edit the relevant configuration file on the master machine:
[hadoop@master ~]$ sudo vim /etc/ntp.conf
(only the entries that need to be modified are shown below)
# Hosts on local network are less restricted.
restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
#allow hosts in the same LAN segment to synchronize time:
restrict 10.3.19.0 mask 255.255.255.0 nomodify notrap
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
#external time servers
server pool.ntp.org iburst
server 0.asia.pool.ntp.org iburst
server 1.asia.pool.ntp.org iburst
server 2.asia.pool.ntp.org iburst
server 3.asia.pool.ntp.org iburst
#broadcast 192.168.1.255 autokey # broadcast server
#broadcastclient # broadcast client
#broadcast 224.0.1.1 autokey # multicast server
#multicastclient 224.0.1.1 # multicast client
#manycastserver 239.255.254.254 # manycast server
#manycastclient 239.255.254.254 autokey # manycast client
# allow update time by the upper server
# Undisciplined Local Clock. This is a fake driver intended for backup
# and when no outside source of synchronized time is available.
# when the external time servers are unreachable, serve the local clock instead
server 127.127.1.0
fudge 127.127.1.0 stratum 10
#############################################################
Configuration of /etc/ntp.conf on the other nodes (slave1, slave2, slave3):
……..
#server 3.centos.pool.ntp.org iburst
#external time server: synchronize against master
server master iburst
……..
[hadoop@master ~]$ sudo service ntpd start
(On every machine, set ntpd to start at boot and start ntpd manually the first time.) The commands are:
$ sudo chkconfig ntpd on #start ntpd at boot
$ sudo service ntpd start #start ntpd now
Time synchronization reference: http://cn.soulmachine.me/blog/20140124/
Summary of the time synchronization setup:
Install ntpd on every node and set it to start at boot (it has to be started manually the first time). Through the /etc/ntp.conf configuration, master acts as the time synchronization server for the cluster; master's own clock is synchronized against public NTP servers over the Internet, and the other nodes use master's address as their synchronization source.
After the configuration is done, the other nodes may not be in sync right away; it can take roughly 30 minutes, after which every node keeps its time in step with master. A quick way to check the status is shown below.
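To see whether a node has actually locked on to its time source, ntpq (and ntpstat, if it is installed) can be run on that node; in the ntpq output the peer marked with '*' is the source currently being followed:
$ ntpq -p     #on a slave the '*' line should point at master
$ ntpstat     #reports "synchronised to NTP server ..." once the clock is locked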
4. Installing and configuring Hadoop
After downloading hadoop-2.6.0-cdh5.5.0.tar.gz (place it under ~/bigdataspace on the master machine):
[hadoop@master ~]$ cd ~/bigdataspace
[hadoop@master bigdataspace]$ tar -zxvf hadoop-2.6.0-cdh5.5.0.tar.gz
Enter the Hadoop configuration file directory:
[hadoop@master ~]$ cd ~/bigdataspace/hadoop-2.6.0-cdh5.5.0/etc/hadoop
1> Set JAVA_HOME in hadoop-env.sh:
[hadoop@master hadoop]$ vi hadoop-env.sh
# set JAVA_HOME in this file, so that it is correctly defined on
# The java implementation to use.
export JAVA_HOME=/home/hadoop/bigdataspace/jdk1.8.0_60
2> Set JAVA_HOME in yarn-env.sh:
[hadoop@master hadoop]$ vi yarn-env.sh
# some Java parameters
export JAVA_HOME=/home/hadoop/bigdataspace/jdk1.8.0_60
3> List the slave nodes' IPs or hostnames in the slaves file
[hadoop@master hadoop]$ vi slaves
slave1
slave2
slave3
4> Edit core-site.xml
[hadoop@master hadoop]$ vi core-site.xml
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:8020</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/data/hadoop-2.6.0-cdh5.5.0/tmp</value>
  </property>
</configuration>
5> Edit hdfs-site.xml
[hadoop@master hadoop]$ vi hdfs-site.xml
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>master:50090</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/data/hadoop-2.6.0-cdh5.5.0/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/data/hadoop-2.6.0-cdh5.5.0/dfs/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>
6> Edit mapred-site.xml
[hadoop@master hadoop]$ vi mapred-site.xml
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>master:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>master:19888</value>
  </property>
</configuration>
7> Edit yarn-site.xml
[hadoop@master hadoop]$ vi yarn-site.xml
<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>master:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>master:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>master:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>master:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>master:8088</value>
  </property>
</configuration>
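The configuration files above point hadoop.tmp.dir and the HDFS name/data directories at paths under /data/hadoop-2.6.0-cdh5.5.0. Hadoop will normally create these on first start because /data was made world-writable earlier, but pre-creating them on every node is an optional step that makes permission problems easier to spot; a sketch:
$ mkdir -p /data/hadoop-2.6.0-cdh5.5.0/tmp /data/hadoop-2.6.0-cdh5.5.0/dfs/name /data/hadoop-2.6.0-cdh5.5.0/dfs/data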
Because the CDH build of the tarball is missing Hadoop's native libraries, they have to be brought in separately, otherwise errors will be reported. Possible solutions are described here:
http://www.cnblogs.com/huaxiaoyao/p/5046374.html
The specific approach used for this installation:
[hadoop@master ~]$ cd ~/bigdataspace
[hadoop@master bigdataspace]$ wget http://archive.cloudera.com/cdh5/redhat/6/x86_64/cdh/5.5.0/RPMS/x86_64/hadoop-2.6.0+cdh5.5.0+921-1.cdh5.5.0.p0.15.el6.x86_64.rpm
[hadoop@master bigdataspace]$ rpm2cpio *.rpm | cpio -div
Inside the bigdataspace directory:
$ cp -r ./usr/lib/hadoop/lib/native/ ~/bigdataspace/hadoop-2.6.0-cdh5.5.0/lib/native/
Delete the files extracted from the RPM:
[hadoop@master bigdataspace]$ rm -r ~/bigdataspace/etc/
[hadoop@master bigdataspace]$ rm -r ~/bigdataspace/usr/
[hadoop@master bigdataspace]$ rm -r ~/bigdataspace/var/
$ rm ~/bigdataspace/hadoop-2.6.0+cdh5.5.0+921-1.cdh5.5.0.p0.15.el6.x86_64.rpm
5. Distributing the configured Hadoop to the slave nodes with scp
$ scp -r ~/bigdataspace/hadoop-2.6.0-cdh5.5.0/ hadoop@slave1:~/bigdataspace/
$ scp -r ~/bigdataspace/hadoop-2.6.0-cdh5.5.0/ hadoop@slave2:~/bigdataspace/
$ scp -r ~/bigdataspace/hadoop-2.6.0-cdh5.5.0/ hadoop@slave3:~/bigdataspace/
(On every machine) edit the environment variable configuration file:
[hadoop@master bigdataspace]$ sudo vi /etc/profile
(append the following at the end of the file)
export HADOOP_HOME=/home/hadoop/bigdataspace/hadoop-2.6.0-cdh5.5.0
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$PATH
Make the environment variable settings take effect:
[hadoop@master bigdataspace]$ source /etc/profile
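With /etc/profile sourced, the installation can be sanity-checked from any directory; hadoop version prints the build, and hadoop checknative -a shows whether the native libraries copied in section 4 are actually being picked up (both are standard Hadoop commands):
$ hadoop version
$ hadoop checknative -a    #the hadoop and zlib entries should report true when the native library is found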
6. Starting and verifying Hadoop
[hadoop@master ~]$ cd ~/bigdataspace/hadoop-2.6.0-cdh5.5.0 #enter the hadoop directory
[hadoop@master hadoop-2.6.0-cdh5.5.0]$ ./bin/hdfs namenode -format #format the namenode
[hadoop@master hadoop-2.6.0-cdh5.5.0]$ ./sbin/start-dfs.sh #start dfs
[hadoop@master hadoop-2.6.0-cdh5.5.0]$ ./sbin/start-yarn.sh #start yarn
The jps command can be used to check whether the processes on each node started properly. On master the following processes should be present:
[hadoop@master hadoop-2.6.0-cdh5.5.0]$ jps
3407 SecondaryNameNode
3218 NameNode
3552 ResourceManager
3910 Jps
On slave1 the following processes should be present:
[hadoop@slave1 ~]$ jps
2072 NodeManager
2213 Jps
1962 DataNode
Alternatively, open http://master:8088 in a browser; the Hadoop management UI should appear, and http://master:8088/cluster/nodes should show the slave1, slave2 and slave3 nodes.
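The same information is also available from the command line: hdfs dfsadmin -report should list slave1, slave2 and slave3 as live datanodes once HDFS is up:
[hadoop@master hadoop-2.6.0-cdh5.5.0]$ ./bin/hdfs dfsadmin -report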
7. Starting Hadoop's bundled JobHistory server
[hadoop@master hadoop-2.6.0-cdh5.5.0]$ ./sbin/mr-jobhistory-daemon.sh start historyserver
(the jobhistory-related settings are already in mapred-site.xml)
[hadoop@master hadoop-2.6.0-cdh5.5.0]$ jps
5314 Jps
19994 JobHistoryServer
19068 NameNode
19422 ResourceManager
19263 SecondaryNameNode
Reference:
http://blog.csdn.net/liubei_whut/article/details/42397985
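To confirm that YARN and the history server work end to end, the example job that ships with Hadoop can be run (assuming the examples jar sits in its usual place under share/hadoop/mapreduce in this tarball); after it finishes, the job should show up at http://master:19888:
[hadoop@master hadoop-2.6.0-cdh5.5.0]$ ./bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar pi 2 10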
8. Problems when stopping the Hadoop cluster
After Linux has been running for a while, some files under /tmp get cleaned up automatically. Hadoop's stop script stop-all.sh shuts processes down based on the pid files under /tmp, so once those files have been cleaned away you may run into errors like the following:
$ ./sbin/stop-all.sh
Stopping namenodes on [master]
master: no namenode to stop
slave1: no datanode to stop
slave2: no datanode to stop
slave3: no datanode to stop
Stopping secondary namenodes [master]
master: no secondarynamenode to stop
……
Method 1: manually recreate these pid files under /tmp (a small recovery sketch follows the lists below)
On the master node (each file holds the corresponding process id):
hadoop-hadoop-namenode.pid
hadoop-hadoop-secondarynamenode.pid
yarn-hadoop-resourcemanager.pid
On the slave nodes (each file holds the corresponding process id):
hadoop-hadoop-datanode.pid
yarn-hadoop-nodemanager.pid
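A hypothetical recovery sketch for method 1 on the master node: take each daemon's current PID from jps and write it into the file name that stop-all.sh expects under /tmp (the slave files are recreated the same way using the DataNode and NodeManager PIDs):
jps | awk '$2 == "NameNode" {print $1}' > /tmp/hadoop-hadoop-namenode.pid
jps | awk '$2 == "SecondaryNameNode" {print $1}' > /tmp/hadoop-hadoop-secondarynamenode.pid
jps | awk '$2 == "ResourceManager" {print $1}' > /tmp/yarn-hadoop-resourcemanager.pid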
Method 2: use kill -9 to shut down the corresponding process ids one by one (a one-line sketch follows)
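Method 2 can be written as a single loop on a node (this kills every Hadoop/YARN daemon that jps reports on that machine, so use it only when you really want them all down):
for pid in $(jps | egrep 'NameNode|DataNode|ResourceManager|NodeManager|JobHistoryServer' | awk '{print $1}'); do kill -9 $pid; done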
To fix the problem at the root:
(after the cluster has first been shut down with method 1 or method 2)
1. Edit the hadoop-env.sh configuration file:
#export HADOOP_PID_DIR=${HADOOP_PID_DIR}
export HADOOP_PID_DIR=/data/hadoop-2.6.0-cdh5.5.0/pids
#export HADOOP_SECURE_DN_PID_DIR=${HADOOP_PID_DIR}
export HADOOP_SECURE_DN_PID_DIR=/data/hadoop-2.6.0-cdh5.5.0/pids
2. Edit the yarn-env.sh configuration file:
export YARN_PID_DIR=/data/hadoop-2.6.0-cdh5.5.0/pids
3. Create the pids directory:
$ mkdir /data/hadoop-2.6.0-cdh5.5.0/pids (in practice the pids directory turns out to be created automatically, so this step is not needed)
Steps 1 and 2 have to be carried out on every node.