grep -A 5 'UPDATE ddt_frequency_car' pub.log    # show each match plus the 5 lines after it
ip addr    # show IP addresses
service network restart    # or: /etc/init.d/network restart — restart the network service
Passwordless SSH login
Generate a key pair with "ssh-keygen -t rsa".
This creates a ".ssh" directory in the user's home directory.
Copy the public key to the remote host with ssh-copy-id:
ssh-copy-id -i ~/.ssh/id_rsa.pub <remote_ip>
Example:
[root@test .ssh]# ssh-copy-id -i ~/.ssh/id_rsa.pub 192.168.91.135
root@192.168.91.135's password:
Now try logging into the machine, with "ssh '192.168.91.135'", and check in:
.ssh/authorized_keys
to make sure we haven't added extra keys that you weren't expecting.
[root@test .ssh]# ssh root@192.168.91.135
Last login: Mon Oct 10 01:25:49 2016 from 192.168.91.133
[root@localhost ~]#
Common error:
[root@test ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub 192.168.91.135
-bash: ssh-copy-id: command not found    # the command is not installed
Fix: yum -y install openssh-clients
cat id_rsa.pub >> authorized_keys    # append the public key to authorized_keys; authorized_keys lists the public keys allowed to log in
When A sends its public key to B, it is not B that gains access to A; it is A that can now log in to B without a password.
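With the hostnames defined in /etc/hosts below, the whole key setup can be scripted from the Master. A minimal sketch, assuming root logins and that you type each slave's password once:
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa    # generate a key pair non-interactively if none exists yet
for host in Slave1 Slave2; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub root@"$host"    # prompts once per node
done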
vi /etc/hosts
192.168.20.75 Master
192.168.20.76 Slave1
192.168.20.77 Slave2
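A quick check that name resolution and the keys work (run on the Master; each line should print a hostname without a password prompt):
for host in Master Slave1 Slave2; do
    ssh root@"$host" hostname
done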
Set a static IP
Edit the interface config file (on CentOS, /etc/sysconfig/network-scripts/ifcfg-<interface>): comment out BOOTPROTO=dhcp with #, then add one block per node:
# node 1
IPADDR=192.168.60.101    # static IP
GATEWAY=192.168.20.1     # default gateway
NETMASK=255.255.255.0    # netmask
DNS1=192.168.1.10        # DNS server

# node 2
IPADDR=192.168.60.102    # static IP
GATEWAY=192.168.20.1     # default gateway
NETMASK=255.255.255.0    # netmask
DNS1=192.168.1.10        # DNS server

# node 3
IPADDR=192.168.60.103    # static IP
GATEWAY=192.168.20.1     # default gateway
NETMASK=255.255.255.0    # netmask
DNS1=192.168.1.10        # DNS server
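Then restart networking and confirm the address took effect:
service network restart
ip addr    # the new static address should be listed on the interface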
Install the JDK and Scala, then set environment variables
rpm -ivh jdk-8u144-linux-x64.rpm    # install the JDK
rpm -ivh scala-2.11.8.rpm           # install Scala
vi /etc/profile
export JAVA_HOME=/usr/java/jdk1.8.0_144
export PATH=$PATH:${JAVA_HOME}/bin
export SCALA_HOME=/usr/share/scala
export PATH=$SCALA_HOME/bin:$PATH
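Reload the profile and verify both installs:
source /etc/profile
java -version     # should report 1.8.0_144
scala -version    # should report 2.11.8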
Unpack Hadoop into /opt and set environment variables
mv hadoop-2.7.4.tar.gz /opt
cd /opt
tar -zxvf hadoop-2.7.4.tar.gz
Append to /etc/profile:
export HADOOP_HOME=/opt/hadoop-2.7.4/
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_ROOT_LOGGER=INFO,console
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
source /etc/profile
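Verify that Hadoop is on the PATH:
hadoop version    # should print Hadoop 2.7.4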
Edit $HADOOP_HOME/etc/hadoop/hadoop-env.sh and set JAVA_HOME:
export JAVA_HOME=/usr/java/jdk1.8.0_144
Edit $HADOOP_HOME/etc/hadoop/slaves, delete the existing localhost line, and list the workers:
Slave1
Slave2
Edit $HADOOP_HOME/etc/hadoop/core-site.xml:
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://Master:9000</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/hadoop-2.7.4/tmp</value>
    </property>
</configuration>
Edit $HADOOP_HOME/etc/hadoop/hdfs-site.xml:
<configuration>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>Master:50090</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/opt/hadoop-2.7.4/hdfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/opt/hadoop-2.7.4/hdfs/data</value>
    </property>
</configuration>
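Hadoop generally creates these directories itself, but creating the tmp, name, and data directories up front on every node avoids permission surprises. A minimal sketch, assuming passwordless SSH and the same layout everywhere:
for host in Master Slave1 Slave2; do
    ssh root@"$host" "mkdir -p /opt/hadoop-2.7.4/tmp /opt/hadoop-2.7.4/hdfs/name /opt/hadoop-2.7.4/hdfs/data"
done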
Copy the template to create mapred-site.xml:
cp mapred-site.xml.template mapred-site.xml
Edit $HADOOP_HOME/etc/hadoop/mapred-site.xml:
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>Master:10020</value>
    </property>
    <!-- port 19888 is the JobHistory web UI, which has its own property name -->
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>Master:19888</value>
    </property>
</configuration>
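Note that start-all.sh does not launch the JobHistory server; to use the web UI on port 19888, start it separately on the Master:
$HADOOP_HOME/sbin/mr-jobhistory-daemon.sh start historyserver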
Edit $HADOOP_HOME/etc/hadoop/yarn-site.xml:
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>Master:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>Master:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>Master:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>Master:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>Master:8088</value>
    </property>
</configuration>
scp -r /opt/hadoop-2.7.4/etc/hadoop root@Slave1:/opt/hadoop-2.7.4/etc
scp -r /opt/hadoop-2.7.4/etc/hadoop root@Slave2:/opt/hadoop-2.7.4/etc
On the Master node, format the NameNode before the first start (only once; reformatting destroys HDFS metadata):
hdfs namenode -format    # "hadoop namenode -format" still works but is deprecated in 2.x
Start:
/opt/hadoop-2.7.4/sbin/start-all.sh
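Confirm the daemons on each node:
jps    # Master: NameNode, SecondaryNameNode, ResourceManager; slaves: DataNode, NodeManager
hdfs dfsadmin -report    # should list two live DataNodes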
Web UIs:
http://192.168.20.75:8088    # YARN ResourceManager
http://master:50070          # HDFS NameNode
Install Spark 2.1.0
mv spark-2.1.0-bin-hadoop2.7.tgz /opt
tar -zxvf spark-2.1.0-bin-hadoop2.7.tgz
Edit /etc/profile and add:
export SPARK_HOME=/opt/spark-2.1.0-bin-hadoop2.7/
export PATH=$PATH:$SPARK_HOME/bin
cd /opt/spark-2.1.0-bin-hadoop2.7/conf
Copy spark-env.sh.template to spark-env.sh:
cp spark-env.sh.template spark-env.sh
Edit $SPARK_HOME/conf/spark-env.sh and add:
export JAVA_HOME=/usr/java/jdk1.8.0_144
export SCALA_HOME=/usr/share/scala
export HADOOP_HOME=/opt/hadoop-2.7.4
export HADOOP_CONF_DIR=/opt/hadoop-2.7.4/etc/hadoop
export SPARK_MASTER_IP=192.168.20.75
export SPARK_MASTER_HOST=192.168.20.75
export SPARK_LOCAL_IP=192.168.20.75
export SPARK_WORKER_MEMORY=1g
export SPARK_WORKER_CORES=2
export SPARK_HOME=/opt/spark-2.1.0-bin-hadoop2.7
export SPARK_DIST_CLASSPATH=$(/opt/hadoop-2.7.4/bin/hadoop classpath)
Copy slaves.template to slaves:
cp slaves.template slaves
Edit $SPARK_HOME/conf/slaves and add:
Master
Slave1
Slave2
Copy the configured Spark directory to Slave1 and Slave2:
scp -r /opt/spark-2.1.0-bin-hadoop2.7 root@Slave1:/opt
scp -r /opt/spark-2.1.0-bin-hadoop2.7 root@Slave2:/opt
On Slave1 and Slave2, edit /etc/profile and add the same Spark variables.
On Slave1 and Slave2, edit $SPARK_HOME/conf/spark-env.sh and change SPARK_LOCAL_IP=192.168.20.75 to each node's own IP.
Start the cluster from the Master node.
/opt/spark-2.1.0-bin-hadoop2.7/sbin/start-all.sh
Check whether the cluster started successfully:
jps
On the Master, jps shows a new process on top of the Hadoop daemons (plus a Worker, since Master is also listed in conf/slaves):
Master
On the slaves, jps shows a new process on top of the Hadoop daemons:
Worker
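A quick end-to-end smoke test is to submit the bundled SparkPi example to the standalone master (the jar path below matches the Spark 2.1.0 / Scala 2.11 distribution):
spark-submit --master spark://Master:7077 \
    --class org.apache.spark.examples.SparkPi \
    /opt/spark-2.1.0-bin-hadoop2.7/examples/jars/spark-examples_2.11-2.1.0.jar 100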
firewalld management:
systemctl start firewalld      # start
systemctl status firewalld     # check status
systemctl stop firewalld       # stop
systemctl disable firewalld    # disable at boot
Install ZooKeeper
Edit conf/zoo.cfg (copy it from conf/zoo_sample.cfg if it does not exist) and add the ensemble:
server.0=Master:2288:3388
server.1=Slave1:2288:3388
server.2=Slave2:2288:3388
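For reference, a minimal complete zoo.cfg around those server lines (dataDir here is an assumption — use whatever directory you configured):
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/opt/zookeeper-3.4.10/data
clientPort=2181
server.0=Master:2288:3388
server.1=Slave1:2288:3388
server.2=Slave2:2288:3388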
Create a myid file under dataDir on each node; touch alone is not enough — the file must contain that node's id from the server.N lines above (0 on Master, 1 on Slave1, 2 on Slave2).
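For example, with the assumed dataDir above:
echo 0 > /opt/zookeeper-3.4.10/data/myid    # on Master
echo 1 > /opt/zookeeper-3.4.10/data/myid    # on Slave1
echo 2 > /opt/zookeeper-3.4.10/data/myid    # on Slave2
Then add ZooKeeper to /etc/profile: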
export ZOOKEEPER_HOME=/opt/zookeeper-3.4.10
export PATH=$PATH:$ZOOKEEPER_HOME/bin
/opt/zookeeper-3.4.10/bin/zkServer.sh start     # run on every node
/opt/zookeeper-3.4.10/bin/zkServer.sh status    # one node should report "leader", the others "follower"