1. Apply for three servers on DiDi Cloud (64-bit CentOS 7.3)
Master | Worker1 | Worker2 |
---|---|---|
Public 116.85.9.118 | Public 116.85.9.117 | Public 116.85.9.119 |
Private 10.254.0.58 | Private 10.254.0.94 | Private 10.254.0.88 |
1 core, 2 GB RAM | 1 core, 1 GB RAM | 1 core, 1 GB RAM |
2. Edit the hosts file
Edit the hosts file on all three servers with vim /etc/hosts (use sudo vim /etc/hosts if you lack permission) and append the following to the end of the file:
10.254.0.58 Master
10.254.0.94 Worker1
10.254.0.88 Worker2
Save the file after editing. Changes to /etc/hosts take effect immediately; the file is read on every lookup, so no reload command is needed.
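A quick way to confirm the mappings resolve, run on any of the three hosts (getent consults /etc/hosts directly):

```shell
# Each line of output should show the private IP and the hostname.
getent hosts Master Worker1 Worker2
```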
3. Passwordless SSH setup
See any guide on passwordless SSH login. So that the machines can log in to one another without passwords, install the same public/private key pair on all three servers (sharing one key pair is convenient; you can also generate a fresh pair on each host instead).
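The key distribution can be sketched as follows. ssh-copy-id is one convenient way to install the public key (an assumption here, not the only method), and dc2-user is the login account used throughout this guide:

```shell
# On Master: generate a key pair if one does not exist yet.
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

# Install the public key on every node (including Master itself)
# so that all hosts accept the same key.
for host in Master Worker1 Worker2; do
  ssh-copy-id dc2-user@"$host"
done

# Verify: this should print the remote hostname without a password prompt.
ssh dc2-user@Worker1 hostname
```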
4. Install the base environment (Java and Scala)
4.1 Install Java
Download jdk-8u171-linux-x64.tar.gz, extract it to /usr/local, and configure the environment variables by adding the following to /etc/profile:
export JAVA_HOME=/usr/local/jdk1.8.0_171
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/rt.jar
4.2 Install Scala
Download the Scala package scala-2.11.8.rpm and install it with rpm -ivh scala-2.11.8.rpm
Add the Scala environment variables to /etc/profile:
export SCALA_HOME=/usr/share/scala
export PATH=$SCALA_HOME/bin:$PATH
5. Hadoop 2.7.4 fully distributed setup
First download hadoop-2.7.4.tar.gz locally, then upload it to Master with:
scp -r Documents/hadoop-2.7.4.tar.gz dc2-user@116.85.9.118:
tar -zxvf hadoop-2.7.4.tar.gz
mv hadoop-2.7.4 /opt
Edit /etc/profile and add the following:
export HADOOP_HOME=/opt/hadoop-2.7.4/
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_ROOT_LOGGER=INFO,console
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
After editing, run: source /etc/profile
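A quick sanity check that the PATH changes took effect:

```shell
hadoop version    # the first line should read: Hadoop 2.7.4
```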
Edit $HADOOP_HOME/etc/hadoop/hadoop-env.sh and set JAVA_HOME as follows:
export JAVA_HOME=/usr/local/jdk1.8.0_171
Edit $HADOOP_HOME/etc/hadoop/slaves, remove the default localhost entry, and replace it with:
Worker1
Worker2
Edit $HADOOP_HOME/etc/hadoop/core-site.xml:
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://Master:9000</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/hadoop-2.7.4/tmp</value>
    </property>
</configuration>
Edit $HADOOP_HOME/etc/hadoop/hdfs-site.xml:
<configuration>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>Master:50090</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/opt/hadoop-2.7.4/hdfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/opt/hadoop-2.7.4/hdfs/data</value>
    </property>
</configuration>
Edit $HADOOP_HOME/etc/hadoop/mapred-site.xml (this file ships only as a template, so create it first with cp mapred-site.xml.template mapred-site.xml):
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>Master:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>Master:19888</value>
    </property>
</configuration>
Edit $HADOOP_HOME/etc/hadoop/yarn-site.xml:
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>Master:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>Master:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>Master:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>Master:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>Master:8088</value>
    </property>
</configuration>
Copy the hadoop directory on the Master node to Worker1 and Worker2:
scp -r /opt/hadoop-2.7.4 dc2-user@Worker1:
scp -r /opt/hadoop-2.7.4 dc2-user@Worker2:
Then, on each worker, move it to /opt.
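The move can be done from Master without logging in to each worker, assuming dc2-user is allowed to run sudo (the -t flag allocates a terminal in case sudo needs to prompt):

```shell
ssh -t dc2-user@Worker1 "sudo mv ~/hadoop-2.7.4 /opt"
ssh -t dc2-user@Worker2 "sudo mv ~/hadoop-2.7.4 /opt"
```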
On Worker1 and Worker2, edit /etc/profile in the same way as on Master.
Start the cluster from the Master node
Before the first start, format the NameNode (this is only needed once; reformatting later wipes HDFS metadata):
hdfs namenode -format
(The older hadoop namenode -format still works in 2.x but prints a deprecation warning.)
Start the daemons:
/opt/hadoop-2.7.4/sbin/start-all.sh
At this point the fully distributed Hadoop environment is in place.
Check whether the cluster started successfully:
jps
Master shows:
SecondaryNameNode
ResourceManager
NameNode
The slaves show:
NodeManager
DataNode
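Beyond jps, you can confirm HDFS is actually healthy from Master; the file and directory names below are just examples:

```shell
# Both workers should appear as live DataNodes in the report.
hdfs dfsadmin -report

# Round-trip a small file through HDFS.
echo "hello hdfs" > /tmp/probe.txt
hdfs dfs -mkdir -p /test
hdfs dfs -put /tmp/probe.txt /test/
hdfs dfs -cat /test/probe.txt    # should print: hello hdfs
```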
Note: Master was provisioned here with 2 GB of RAM. If you provision only 1 GB, the Spark setup below will fail to start for lack of memory, since Spark needs at least 1 GB.
6. Spark 2.2.0 fully distributed setup
Upload spark-2.2.0-bin-hadoop2.7 to Master and, as before, place it under /opt.
Edit /etc/profile and add the following:
export SPARK_HOME=/opt/spark-2.2.0-bin-hadoop2.7/
export PATH=$PATH:$SPARK_HOME/bin
In $SPARK_HOME/conf, create spark-env.sh from its template:
cp spark-env.sh.template spark-env.sh
Then edit $SPARK_HOME/conf/spark-env.sh and add the following:
export JAVA_HOME=/usr/local/jdk1.8.0_171
export SCALA_HOME=/usr/share/scala
export HADOOP_HOME=/opt/hadoop-2.7.4
export HADOOP_CONF_DIR=/opt/hadoop-2.7.4/etc/hadoop
export SPARK_MASTER_IP=10.254.0.58
export SPARK_MASTER_HOST=10.254.0.58
export SPARK_LOCAL_IP=10.254.0.58
export SPARK_WORKER_MEMORY=1g
export SPARK_WORKER_CORES=2
export SPARK_HOME=/opt/spark-2.2.0-bin-hadoop2.7
export SPARK_DIST_CLASSPATH=$(/opt/hadoop-2.7.4/bin/hadoop classpath)
Likewise create $SPARK_HOME/conf/slaves from its template:
cp slaves.template slaves
Then edit $SPARK_HOME/conf/slaves and add the following:
Worker1
Worker2
Note that if you also add Master here, the Master node will act as both the master and a worker.
Copy the configured Spark directory to the Worker1 and Worker2 nodes (note the -r flag, needed because this is a directory):
scp -r /opt/spark-2.2.0-bin-hadoop2.7 dc2-user@Worker1:
scp -r /opt/spark-2.2.0-bin-hadoop2.7 dc2-user@Worker2:
Then, as with Hadoop, move it to /opt on each worker.
Update the configuration on Worker1 and Worker2: edit /etc/profile on each, adding the Spark entries just as on Master.
On Worker1 and Worker2, edit $SPARK_HOME/conf/spark-env.sh and change export SPARK_LOCAL_IP to each node's own IP.
Start the cluster from the Master node.
/opt/spark-2.2.0-bin-hadoop2.7/sbin/start-all.sh
Check whether the cluster started successfully:
jps
On Master, in addition to the Hadoop daemons, jps now shows:
Master
On the slaves, jps additionally shows:
Worker
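As a final check, you can submit the SparkPi example that ships with Spark to the standalone master. The URL spark://10.254.0.58:7077 assumes Spark's default master port 7077 on the master's private IP:

```shell
/opt/spark-2.2.0-bin-hadoop2.7/bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master spark://10.254.0.58:7077 \
  /opt/spark-2.2.0-bin-hadoop2.7/examples/jars/spark-examples_2.11-2.2.0.jar 100
# the driver output should include a line like "Pi is roughly 3.14..."
```

You can also open the master's web UI at http://116.85.9.118:8080 and confirm both workers are listed as ALIVE.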