Preface
I needed to verify some Flink CDC functionality for work. Flink can store its checkpoint data on HDFS, so I wanted to deploy a Hadoop cluster for testing. Since I never documented my previous deployments, I had to dig up the installation steps all over again this time, so I wrote them down to make any future reinstall quick. Experts may want to skip this one~
一扶镀、環(huán)境準備
1.集群規(guī)劃
VMware Workstation 15
CentOS 7.9
192.168.10.21(hadoop01)
192.168.10.22(hadoop02)
192.168.10.23(hadoop03)
2. VM installation and network configuration
Omitted...
3. JDK installation
Version: JDK 1.8
Steps: omitted
4. Create a hadoop user and grant it sudo privileges
1. Create the user
useradd hadoop
passwd hadoop
2. Grant sudo privileges
vi /etc/sudoers
Add a hadoop entry below the root line:
## Allow root to run any commands anywhere
root ALL=(ALL) ALL
hadoop ALL=(ALL) ALL
II. ZooKeeper Installation
1. Set up passwordless SSH for the hadoop user
1. Generate an RSA key pair with ssh-keygen on each node (192.168.10.21~23)
ssh-keygen -q -t rsa -N "" -f ~/.ssh/id_rsa
2. Aggregate all public keys into a single authorized_keys file; run on 192.168.10.21:
ssh 192.168.10.21 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh 192.168.10.22 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh 192.168.10.23 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
3. For security, restrict the file to 600 permissions:
chmod 600 ~/.ssh/authorized_keys
4. Distribute this file, which now holds the trusted keys of all machines, to every node:
scp ~/.ssh/authorized_keys 192.168.10.22:~/.ssh/
scp ~/.ssh/authorized_keys 192.168.10.23:~/.ssh/
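To sanity-check the aggregation step without touching the real cluster, the sketch below reproduces it locally. The key strings and the temp file are placeholders, not real cluster keys; on the actual nodes each line would come from `ssh <host> cat ~/.ssh/id_rsa.pub`:

```shell
# Local dry run of the key-aggregation step above.
# The key contents are fake placeholders for illustration only.
hosts="192.168.10.21 192.168.10.22 192.168.10.23"
auth=$(mktemp)
for h in $hosts; do
  echo "ssh-rsa AAAA-placeholder-key hadoop@$h" >> "$auth"
done
chmod 600 "$auth"                 # same hardening as on the cluster
echo "keys collected: $(wc -l < "$auth")"
```

After running this you should see one key line per host in the file, with 600 permissions.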
2. Edit /etc/hosts
vi /etc/hosts
#Hadoop Cloud
192.168.10.21 hadoop01
192.168.10.22 hadoop02
192.168.10.23 hadoop03
3. Upload the installation package to hadoop01
Download: http://archive.apache.org/dist/zookeeper/
Version: apache-zookeeper-3.8.0-bin.tar.gz
Note: it seems that starting with version 3.5, ZooKeeper changed its package naming. An archive named like apache-zookeeper-3.5.5.tar.gz is the uncompiled source, while one named apache-zookeeper-3.5.5-bin.tar.gz is the compiled binary distribution.
I like uploading with rz, so install the tool first:
sudo yum -y install lrzsz
1. Create the directory
mkdir -p /home/hadoop/plat/zookeeper
2. Upload the package to that directory
3. Extract
cd /home/hadoop/plat/zookeeper
tar -zxvf apache-zookeeper-3.8.0-bin.tar.gz
4. Rename
mv apache-zookeeper-3.8.0-bin zookeeper-3.8.0
5. Create the data and log directories
mkdir -p /home/hadoop/plat/zookeeper/zookeeper-3.8.0/data
mkdir -p /home/hadoop/plat/zookeeper/zookeeper-3.8.0/logs
4. Edit the configuration files
1. Modify zoo.cfg
cd /home/hadoop/plat/zookeeper/zookeeper-3.8.0/conf
cp zoo_sample.cfg zoo.cfg
vi zoo.cfg
Delete the existing contents and replace them with:
tickTime=2000
dataDir=/home/hadoop/plat/zookeeper/zookeeper-3.8.0/data
dataLogDir=/home/hadoop/plat/zookeeper/zookeeper-3.8.0/logs
clientPort=21001
initLimit=5
syncLimit=2
server.1=192.168.10.21:21002:21003
server.2=192.168.10.22:21002:21003
server.3=192.168.10.23:21002:21003
autopurge.purgeInterval=24
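For reference, each server.X entry follows ZooKeeper's standard `server.<myid>=<host>:<quorumPort>:<electionPort>` format (the stock ports are 2888:3888; this cluster uses a custom range):

```
# server.<myid> = <host> : <quorum port> : <leader-election port>
# 21002 carries follower-to-leader synchronization traffic,
# 21003 is used for leader election.
# <myid> must match the contents of each node's data/myid file.
server.1=192.168.10.21:21002:21003
```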
2. Set myid
cd /home/hadoop/plat/zookeeper/zookeeper-3.8.0/data
echo 1 > myid
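Since each node's myid must match its server.X entry, a small helper can derive the id from the hostname instead of editing each file by hand. This is a convenience sketch, assuming the hadoopNN naming used in this cluster (the `myid_for` name is made up for illustration):

```shell
# Derive a ZooKeeper myid from a hostname like hadoop01 -> 1.
# Assumes the hadoopNN naming convention used in this cluster.
myid_for() {
    local host=$1
    local n=${host#hadoop}   # strip the "hadoop" prefix, leaving "01".."03"
    echo $((10#$n))          # force base 10 so a leading zero is not octal
}

myid_for hadoop01   # prints 1
myid_for hadoop03   # prints 3
```

On each node you could then run `myid_for $(hostname) > /home/hadoop/plat/zookeeper/zookeeper-3.8.0/data/myid`.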
5. Distribute to the other hosts
1. Set up a put helper
cd /home/hadoop
vi .bashrc
Append the following:
export hadoop01=192.168.10.21
export hadoop02=192.168.10.22
export hadoop03=192.168.10.23
put()
{
    # Copy a file or directory to the same target dir on hadoop02 and hadoop03
    if [ $# != 2 ]
    then
        echo " put filename remotedir -- eg: put a.txt /home"
    else
        FileName=$1
        DirName=$2
        echo "${FileName} ${DirName}"
        # $hadoop0{2..3} brace-expands to $hadoop02 $hadoop03 before parameter
        # expansion, so these resolve to the two target IPs exported above
        echo $hadoop0{2..3} | xargs -n1 | awk '{print $0}'   # list the targets
        echo $hadoop0{2..3} | xargs -n1 | xargs -i scp -r ${FileName} {}:${DirName}
    fi
}
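To see what the helper expands to without actually copying anything, a dry-run variant can echo the scp commands instead of executing them (`put_dry` is a hypothetical name, not part of the original setup):

```shell
# Hypothetical dry-run version of put(): print the scp commands
# instead of executing them.
hadoop02=192.168.10.22
hadoop03=192.168.10.23

put_dry() {
    if [ $# != 2 ]; then
        echo " put_dry filename remotedir -- eg: put_dry a.txt /home"
    else
        echo "$hadoop02" "$hadoop03" | xargs -n1 | xargs -i echo scp -r "$1" {}:"$2"
    fi
}

put_dry zookeeper-3.8.0 /home/hadoop/plat/zookeeper
# prints:
#   scp -r zookeeper-3.8.0 192.168.10.22:/home/hadoop/plat/zookeeper
#   scp -r zookeeper-3.8.0 192.168.10.23:/home/hadoop/plat/zookeeper
```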
2. Distribute to the other two hosts
On the target hosts, create the directory:
mkdir -p /home/hadoop/plat/zookeeper
On hadoop01:
cd /home/hadoop/plat/zookeeper
put zookeeper-3.8.0 /home/hadoop/plat/zookeeper
3. Update myid on hadoop02 and hadoop03
On hadoop02, change it to 2
On hadoop03, change it to 3
6. Start the cluster
1. Configure environment variables (in ~/.bashrc on every node):
#Zookeeper
export ZOOKEEPER_HOME=/home/hadoop/plat/zookeeper/zookeeper-3.8.0
export PATH=${ZOOKEEPER_HOME}/bin:${PATH}
2. Start on all three machines at the same time
cd /home/hadoop
zkServer.sh start
7. Check the cluster status
zkServer.sh status
One node should report Mode: leader and the other two Mode: follower.
III. Hadoop Installation
1. Install the JDK
2. Configure the hostname
3. Configure hosts
4. Choose a version
Download: http://archive.apache.org/dist/hadoop/core/
Version: hadoop-3.3.1.tar.gz
5. Upload the package
1. Create the directory on hadoop01
mkdir -p /home/hadoop/plat/hadoop
2. Upload the package to that directory and extract it
tar -zxvf hadoop-3.3.1.tar.gz
6. Configure environment variables
vi ~/.bashrc
#Hadoop
export HADOOP_HOME=/home/hadoop/plat/hadoop/hadoop-3.3.1
export PATH=${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin:${PATH}
7. Modify the configuration files
7.1 Modify hadoop-env.sh
cd ${HADOOP_HOME}/etc/hadoop
vi hadoop-env.sh
# The java implementation to use. By default, this environment
# variable is REQUIRED on ALL platforms except OS X!
# export JAVA_HOME=
export JAVA_HOME=/usr/local/src/jdk1.8.0_321
7.2 Modify core-site.xml
cd ${HADOOP_HOME}
mkdir data
cd ${HADOOP_HOME}/etc/hadoop
vi core-site.xml
<configuration>
<!-- The default file system URI, i.e. the address of the HDFS NameNode -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop01:9000</value>
</property>
<!-- Base directory for files Hadoop generates at runtime -->
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hadoop/plat/hadoop/hadoop-3.3.1/data</value>
</property>
<!-- User name used when accessing HDFS from the web UI. -->
<property>
<name>hadoop.http.staticuser.user</name>
<value>hadoop</value>
</property>
</configuration>
7.3 Modify hdfs-site.xml
<configuration>
<!-- Host and port of the SecondaryNameNode. -->
<property>
<name>dfs.namenode.secondary.http-address</name>
<value>hadoop02:9868</value>
</property>
<!-- Number of HDFS replicas -->
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
</configuration>
7.4 Modify mapred-site.xml
<configuration>
<!-- Run MapReduce on YARN -->
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>yarn.app.mapreduce.am.env</name>
<value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
</property>
<property>
<name>mapreduce.map.env</name>
<value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
</property>
<property>
<name>mapreduce.reduce.env</name>
<value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
</property>
</configuration>
7.5 Modify yarn-site.xml
<configuration>
<!-- Hostname of the YARN ResourceManager -->
<property>
<name>yarn.resourcemanager.hostname</name>
<value>hadoop01</value>
</property>
<!-- Shuffle service that reducers use to fetch map output -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<!-- Disable YARN physical/virtual memory checks -->
<property>
<name>yarn.nodemanager.pmem-check-enabled</name>
<value>false</value>
</property>
<property>
<name>yarn.nodemanager.vmem-check-enabled</name>
<value>false</value>
</property>
<!-- Enable log aggregation -->
<property>
<name>yarn.log-aggregation-enable</name>
<value>true</value>
</property>
<!-- HDFS path where aggregated logs are stored -->
<property>
<name>yarn.nodemanager.remote-app-log-dir</name>
<value>/home/hadoop/plat/hadoop/hadoop-3.3.1/logs/nodemanager-remote-app-logs</value>
</property>
<!-- Retention time (seconds) for aggregated logs on HDFS; 604800 = 7 days -->
<property>
<name>yarn.log-aggregation.retain-seconds</name>
<value>604800</value>
</property>
<!-- Local directories for container logs while an application runs -->
<property>
<name>yarn.nodemanager.log-dirs</name>
<value>file:///home/hadoop/plat/hadoop/hadoop-3.3.1/logs/nodemanager-logs</value>
</property>
<property>
<!-- How long (seconds) to keep local logs after an application finishes; the default 0 deletes them immediately -->
<name>yarn.nodemanager.delete.debug-delay-sec</name>
<value>604800</value>
</property>
</configuration>
7.6 Modify workers
hadoop01
hadoop02
hadoop03
7.7 Distribute to the other hosts
cd /home/hadoop/plat
put hadoop /home/hadoop/plat
7.8 Format the NameNode
hdfs namenode -format
Look for the success message:
/home/hadoop/plat/hadoop/hadoop-3.3.1/data/dfs/name has been successfully formatted.
7.9 Start Hadoop
start-all.sh
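With this configuration (NameNode and ResourceManager on hadoop01 per fs.defaultFS and yarn.resourcemanager.hostname, SecondaryNameNode on hadoop02 per hdfs-site.xml, and all three hosts listed in workers), running jps on each node should show roughly the following daemons:

```
hadoop01: NameNode, DataNode, ResourceManager, NodeManager
hadoop02: SecondaryNameNode, DataNode, NodeManager
hadoop03: DataNode, NodeManager
```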