In /usr/local/hadoop-ha/etc/hadoop
Edit hdfs-site.xml:
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
</property>
<property>
  <name>dfs.ha.namenodes.mycluster</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn1</name>
  <value>ha01:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.nn2</name>
  <value>ha02:8020</value>
</property>
<property>
  <name>dfs.namenode.http-address.mycluster.nn1</name>
  <value>ha01:50070</value>
</property>
<property>
  <name>dfs.namenode.http-address.mycluster.nn2</name>
  <value>ha02:50070</value>
</property>
<property>
  <name>dfs.namenode.shared.edits.dir</name>
  <value>qjournal://ha01:8485;ha02:8485;ha03:8485/mycluster</value>
</property>
<property>
  <name>dfs.journalnode.edits.dir</name>
  <value>/var/tmp/hadoop/ha/jn</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.mycluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
  <name>dfs.ha.fencing.methods</name>
  <value>sshfence</value>
</property>
<property>
  <name>dfs.ha.fencing.ssh.private-key-files</name>
  <value>/root/.ssh/id_dsa</value>
</property>
<property>
  <name>dfs.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>
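sshfence only works if the ZKFC on each NameNode can SSH to the peer NameNode without a password, using the key configured above. A minimal sketch, assuming both NNs run as root and the /root/.ssh/id_dsa path from the config (run on each NN):
ssh-keygen -t dsa -P '' -f /root/.ssh/id_dsa      # fencing key, no passphrase
ssh-copy-id -i /root/.ssh/id_dsa.pub root@ha01    # authorize the key on both NNs
ssh-copy-id -i /root/.ssh/id_dsa.pub root@ha02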
Edit core-site.xml:
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://mycluster</value>
</property>
<property>
  <name>ha.zookeeper.quorum</name>
  <value>ha02:2181,ha03:2181,ha04:2181</value>
</property>
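A quick sanity check after editing both files (hdfs getconf prints the values Hadoop actually resolves; run it on any node carrying this config):
hdfs getconf -confKey fs.defaultFS   # should print hdfs://mycluster
hdfs getconf -namenodes              # should list ha01 ha02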
Edit the slaves file (it lists the DataNode hosts that start-dfs.sh launches):
ha02
ha03
ha04
Sync the installation to the other three nodes:
scp -r hadoop-ha root@ha02:/usr/local
scp -r hadoop-ha root@ha03:/usr/local
scp -r hadoop-ha root@ha04:/usr/local
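Equivalently, the three copies can be scripted as one loop (a convenience sketch, assuming the tree sits at /usr/local/hadoop-ha on ha01):
for h in ha02 ha03 ha04; do scp -r /usr/local/hadoop-ha root@$h:/usr/local; done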
Under /usr/local/hadoop-ha/bin:
hdfs namenode -format
Under /usr/local/hadoop-ha/sbin:
start-dfs.sh
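To confirm the NameNode is up, its web UI should answer on the HTTP port configured in hdfs-site.xml (a quick check, assuming curl is installed):
curl -s -o /dev/null -w '%{http_code}\n' http://ha01:50070   # expect 200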
Config files must be kept in sync across the whole cluster!
ZooKeeper configuration
Start the ZooKeeper cluster (on each ZooKeeper node):
zkServer.sh start
zkServer.sh status
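Once a quorum forms, zkServer.sh status should report Mode: leader on exactly one node and Mode: follower on the others; an "Error contacting service" message means the quorum is not up yet.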
hadoop-daemon.sh start journalnode (on each JournalNode host listed in dfs.namenode.shared.edits.dir: ha01, ha02, ha03)
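jps should then show a JournalNode process on each of those hosts; the format step below fails if it cannot reach a quorum of JournalNodes:
jps   # expect a JournalNode entry on ha01, ha02 and ha03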
On the first NN:
hdfs namenode -format
hadoop-daemon.sh start namenode
On the other NN:
hdfs namenode -bootstrapStandby
start-dfs.sh
$ZOOKEEPER/bin/zkCli.sh
ls /
hdfs zkfc -formatZK
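Re-running ls / in zkCli.sh afterwards should show the znode that -formatZK creates:
ls /   # expect [hadoop-ha, zookeeper]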
stop-dfs.sh && start-dfs.sh (restarts HDFS so the ZKFCs start), or start the ZKFC by hand on each NN: hadoop-daemon.sh start zkfc
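Finally, the HA state of the two NameNodes can be verified with hdfs haadmin, using the service IDs nn1 and nn2 from hdfs-site.xml:
hdfs haadmin -getServiceState nn1   # prints active or standby
hdfs haadmin -getServiceState nn2   # should print the other state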