Plan
192.168.211.129 elastic (zookeeper, kafka, hadoop namenode, yarn resourcemanager, hbase hmaster, spark master, es master)
192.168.211.130 hbase (zookeeper, kafka, hadoop namenode, hadoop datanode, yarn resourcemanager, yarn nodemanager, spark worker, es data)
192.168.211.131 mongodb (zookeeper, kafka, hadoop datanode, yarn nodemanager, spark worker, es data)
Install the JDK (on every node)
rpm -ivh jdk-7u80-linux-x64.rpm
Configure SSH (on every node)
vi /etc/hosts and add:
192.168.211.129 elastic
192.168.211.130 hbase
192.168.211.131 mongodb
useradd spark
passwd spark
Switch to the spark user:
ssh-keygen -t rsa
ssh-copy-id -i /home/spark/.ssh/id_rsa.pub elastic
ssh-copy-id -i /home/spark/.ssh/id_rsa.pub hbase
ssh-copy-id -i /home/spark/.ssh/id_rsa.pub mongodb
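The three ssh-copy-id calls above can also be driven by one loop. A minimal sketch, assuming the host list from the plan; `distribute_keys` is a hypothetical helper name, and by default it only prints the commands (dry run) so nothing is executed by accident:

```shell
#!/usr/bin/env bash
# Hypothetical helper: run (or just print) ssh-copy-id for each cluster host.
HOSTS=(elastic hbase mongodb)
PUBKEY=/home/spark/.ssh/id_rsa.pub

distribute_keys() {
  local mode="${1:-dry-run}"
  local h
  for h in "${HOSTS[@]}"; do
    if [ "$mode" = "--run" ]; then
      ssh-copy-id -i "$PUBKEY" "$h"   # real key distribution
    else
      echo "ssh-copy-id -i $PUBKEY $h"  # dry run: show what would run
    fi
  done
}

distribute_keys   # prints the three commands; pass --run to execute
```

Run it once with `--run` as the spark user after `ssh-keygen -t rsa`.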
elastic機(jī)器上:
cd
mkdir nosql
Copy the tarballs to be installed into the nosql directory
tar -zxf hadoop-2.6.2.tar.gz
tar -zxf zookeeper-3.4.6.tar.gz
tar -zxf spark-2.0.2-bin-hadoop2.6.tgz
tar -zxf hbase-1.2.4-bin.tar.gz
tar -zxf kafka_2.10-0.10.1.0.tgz
tar -zxf elasticsearch-5.0.1.tar.gz
tar -zxf mongodb-linux-x86_64-rhel62-3.2.11.tgz
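Before writing the `.bashrc` below it is worth confirming that every directory it will point at actually exists. A small sketch; `check_dirs` is an assumed helper, and the demo runs against a scratch directory so it is safe anywhere (on a real node the base would be /home/spark/nosql):

```shell
#!/usr/bin/env bash
# Report which expected installation directories are missing under a base
# directory. Prints each missing name; returns 0 only if all are present.
check_dirs() {
  local base="$1"; shift
  local missing=0
  local d
  for d in "$@"; do
    if [ ! -d "$base/$d" ]; then
      echo "missing: $d"
      missing=1
    fi
  done
  return "$missing"
}

# Demo against a scratch directory (real use: base=/home/spark/nosql
# with all seven extracted directory names):
tmp="$(mktemp -d)"
mkdir -p "$tmp/hadoop-2.6.2" "$tmp/zookeeper-3.4.6"
check_dirs "$tmp" hadoop-2.6.2 zookeeper-3.4.6 && echo "all present"
```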
vi .bashrc
JAVA_HOME=/usr/java/default
HADOOP_HOME=/home/spark/nosql/hadoop-2.6.2
SPARK_HOME=/home/spark/nosql/spark-2.0.2-bin-hadoop2.6
ZOOKEEPER_HOME=/home/spark/nosql/zookeeper-3.4.6
HBASE_HOME=/home/spark/nosql/hbase-1.2.4
ELASTICSEARCH_HOME=/home/spark/nosql/elasticsearch-5.0.1
MONGODB_HOME=/home/spark/nosql/mongodb-linux-x86_64-rhel62-3.2.11
export JAVA_HOME HADOOP_HOME SPARK_HOME ZOOKEEPER_HOME HBASE_HOME ELASTICSEARCH_HOME MONGODB_HOME
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$SPARK_HOME/bin:$SPARK_HOME/sbin:$ZOOKEEPER_HOME/bin:$HBASE_HOME/bin:$ELASTICSEARCH_HOME/bin:$MONGODB_HOME/bin:$PATH
source .bashrc
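After `source .bashrc`, a quick way to catch a typo in the export list is to verify each `*_HOME` variable is set and its bin directory is on PATH. A hedged sketch; `check_home_on_path` is a name invented here:

```shell
#!/usr/bin/env bash
# Check that the variable named by $1 is set and that "<value>/bin"
# appears as a PATH component.
check_home_on_path() {
  local var="$1" home
  home="${!var:-}"                     # bash indirect expansion
  [ -n "$home" ] || { echo "$var is unset"; return 1; }
  case ":$PATH:" in
    *":$home/bin:"*) echo "$var ok" ;;
    *) echo "$var/bin not on PATH"; return 1 ;;
  esac
}

# On a configured node, loop over all the homes exported above:
for v in JAVA_HOME HADOOP_HOME SPARK_HOME ZOOKEEPER_HOME HBASE_HOME; do
  check_home_on_path "$v" || true
done
```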
Hadoop configuration (copy to the other nodes when done)
- vi /home/spark/nosql/hadoop-2.6.2/etc/hadoop/slaves
hbase
mongodb
- vi /home/spark/nosql/hadoop-2.6.2/etc/hadoop/core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://mycluster</value>
<description>mycluster is the logical name of the HA cluster; it must match dfs.nameservices in hdfs-site.xml</description>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/spark/nosql/data</value>
<description>Default parent directory where NameNode, DataNode, JournalNode, etc. store their data. Each data type may also be given its own directory. Create this directory beforehand.</description>
</property>
<property>
<name>ha.zookeeper.quorum</name>
<value>elastic:2181,hbase:2181,mongodb:2181</value>
<description>Addresses and ports of the ZooKeeper ensemble nodes. Note: the node count must be odd and must match zoo.cfg.</description>
</property>
<property>
<name>io.file.buffer.size</name>
<value>131072</value>
<description>Size of read/write buffer used in SequenceFiles.</description>
</property>
</configuration>
- vi /home/spark/nosql/hadoop-2.6.2/etc/hadoop/hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>2</value>
<description>Number of block replicas</description>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>/home/spark/nosql/dfs/name</value>
<description>NameNode metadata directory</description>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>/home/spark/nosql/dfs/data</value>
<description>DataNode data directory</description>
</property>
<property>
<name>dfs.nameservices</name>
<value>mycluster</value>
<description>Logical name of the HA nameservice; fs.defaultFS in core-site.xml must refer to it</description>
</property>
<property>
<name>dfs.namenode.rpc-address.mycluster.nn1</name>
<value>elastic:9000</value>
</property>
<property>
<name>dfs.namenode.rpc-address.mycluster.nn2</name>
<value>hbase:9000</value>
</property>
<property>
<name>dfs.namenode.http-address.mycluster.nn1</name>
<value>elastic:50070</value>
</property>
<property>
<name>dfs.namenode.http-address.mycluster.nn2</name>
<value>hbase:50070</value>
</property>
<property>
<name>dfs.namenode.servicerpc-address.mycluster.nn1</name>
<value>elastic:53310</value>
</property>
<property>
<name>dfs.namenode.servicerpc-address.mycluster.nn2</name>
<value>hbase:53310</value>
</property>
<property>
<name>dfs.ha.automatic-failover.enabled.mycluster</name>
<value>true</value>
<description>Whether to fail over automatically on failure</description>
</property>
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://elastic:8485;hbase:8485;mongodb:8485/hadoop-journal</value>
<description>JournalNode configuration, in three parts:
1. the qjournal prefix names the protocol;
2. the host:port of the three machines running JournalNode, separated by semicolons;
3. the trailing hadoop-journal is the JournalNode namespace and can be any name.
</description>
</property>
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/home/spark/nosql/dfs/HAjournal</value>
<description>Local directory where the JournalNode stores its data</description>
</property>
<property>
<name>dfs.client.failover.proxy.provider.mycluster</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
<description>Class the HDFS client uses to locate the active NameNode of mycluster and fail over</description>
</property>
<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence</value>
<description>Use ssh to fence the failed node during failover</description>
</property>
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/home/spark/.ssh/id_rsa</value>
<description>Location of the private key used for ssh when fencing during failover</description>
</property>
<property>
<name>dfs.ha.fencing.ssh.connect-timeout</name>
<value>1000</value>
</property>
<property>
<name>dfs.namenode.handler.count</name>
<value>10</value>
</property>
</configuration>
- vi /home/spark/nosql/hadoop-2.6.2/etc/hadoop/yarn-site.xml
<configuration>
<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.resourcemanager.ha.enabled</name>
<value>true</value>
</property>
<property>
<name>yarn.resourcemanager.cluster-id</name>
<value>clusterrm</value>
</property>
<property>
<name>yarn.resourcemanager.ha.rm-ids</name>
<value>rm1,rm2</value>
</property>
<property>
<name>yarn.resourcemanager.hostname.rm1</name>
<value>elastic</value>
</property>
<property>
<name>yarn.resourcemanager.hostname.rm2</name>
<value>hbase</value>
</property>
<property>
<name>yarn.resourcemanager.recovery.enabled</name>
<value>true</value>
</property>
<property>
<name>yarn.resourcemanager.store.class</name>
<value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
</property>
<property>
<name>yarn.resourcemanager.zk-address</name>
<value>elastic:2181,hbase:2181,mongodb:2181</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<!-- set the proxy server -->
<!-- set history server -->
<property>
<name>yarn.log-aggregation-enable</name>
<value>true</value>
</property>
<!-- set the timeline server -->
<property>
<description>The hostname of the Timeline service web application.</description>
<name>yarn.timeline-service.hostname</name>
<value>elastic</value>
</property>
<property>
<description>Address for the Timeline server to start the RPC server.</description>
<name>yarn.timeline-service.address</name>
<value>elastic:10200</value>
</property>
<property>
<description>The http address of the Timeline service web application.</description>
<name>yarn.timeline-service.webapp.address</name>
<value>elastic:8188</value>
</property>
<property>
<description>The https address of the Timeline service web application.</description>
<name>yarn.timeline-service.webapp.https.address</name>
<value>elastic:8190</value>
</property>
<property>
<description>Handler thread count to serve the client RPC requests.</description>
<name>yarn.timeline-service.handler-thread-count</name>
<value>10</value>
</property>
<property>
<name>yarn.timeline-service.http-cross-origin.enabled</name>
<value>false</value>
</property>
<property>
<description>Comma separated list of origins that are allowed for web services needing cross-origin (CORS) support. Wildcards (*) and patterns allowed</description>
<name>yarn.timeline-service.http-cross-origin.allowed-origins</name>
<value>*</value>
</property>
<property>
<description>Comma separated list of methods that are allowed for web services needing cross-origin (CORS) support.</description>
<name>yarn.timeline-service.http-cross-origin.allowed-methods</name>
<value>GET,POST,HEAD</value>
</property>
<property>
<description>Comma separated list of headers that are allowed for web services needing cross-origin (CORS) support.</description>
<name>yarn.timeline-service.http-cross-origin.allowed-headers</name>
<value>X-Requested-With,Content-Type,Accept,Origin</value>
</property>
<property>
<description>The number of seconds a pre-flighted request can be cached for web services needing cross-origin (CORS) support.</description>
<name>yarn.timeline-service.http-cross-origin.max-age</name>
<value>1800</value>
</property>
<property>
<description>Indicate to clients whether Timeline service is enabled or not.
If enabled, the TimelineClient library used by end-users will post entities and events to the Timeline server.</description>
<name>yarn.timeline-service.enabled</name>
<value>true</value>
</property>
<property>
<description>Store class name for timeline store.</description>
<name>yarn.timeline-service.store-class</name>
<value>org.apache.hadoop.yarn.server.timeline.LeveldbTimelineStore</value>
</property>
<property>
<description>Enable age off of timeline store data.</description>
<name>yarn.timeline-service.ttl-enable</name>
<value>true</value>
</property>
<property>
<description>Time to live for timeline store data in milliseconds.</description>
<name>yarn.timeline-service.ttl-ms</name>
<value>604800000</value>
</property>
</configuration>
- vi /home/spark/nosql/hadoop-2.6.2/etc/hadoop/mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<!-- set the history -->
<property>
<name>mapreduce.jobhistory.address</name>
<value>elastic:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>elastic:19888</value>
</property>
<property>
<name>mapreduce.jobhistory.intermediate-done-dir</name>
<value>/home/spark/nosql/dfs/mr_history/HAmap</value>
<description>Directory where history files are written by MapReduce jobs.</description>
</property>
<property>
<name>mapreduce.jobhistory.done-dir</name>
<value>/home/spark/nosql/dfs/mr_history/HAdone</value>
<description>Directory where history files are managed by the MR JobHistory Server.</description>
</property>
</configuration>
scp -r nosql/hadoop-2.6.2 spark@mongodb:/home/spark/nosql/
scp -r nosql/hadoop-2.6.2 spark@hbase:/home/spark/nosql/
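After copying, a cheap sanity check on each node is to grep the config files for the property names they must declare; a mismatched or missing property is the most common source of an HA cluster that silently falls back to defaults. A sketch with an assumed helper name `has_property`:

```shell
#!/usr/bin/env bash
# Return 0 if the XML config file ($1) declares the property name ($2).
has_property() {
  grep -q "<name>$2</name>" "$1"
}

# On a real node one would check, for example:
#   conf=/home/spark/nosql/hadoop-2.6.2/etc/hadoop
#   has_property "$conf/core-site.xml" fs.defaultFS       && echo "core-site ok"
#   has_property "$conf/hdfs-site.xml" dfs.nameservices   && echo "hdfs-site ok"
```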
ZooKeeper configuration
cd /home/spark/nosql/zookeeper-3.4.6/conf
cp zoo_sample.cfg zoo.cfg
vi zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/home/spark/nosql/zookeeper-3.4.6/data
dataLogDir=/home/spark/nosql/zookeeper-3.4.6/logs
clientPort=2181
server.1=elastic:2888:3888
server.2=hbase:2888:3888
server.3=mongodb:2888:3888
cd /home/spark/nosql/zookeeper-3.4.6
mkdir data
scp -r nosql/zookeeper-3.4.6 spark@mongodb:/home/spark/nosql/
scp -r nosql/zookeeper-3.4.6 spark@hbase:/home/spark/nosql/
在elastic節(jié)點(diǎn):
cd /home/spark/nosql/zookeeper-3.4.6
echo 1 > data/myid
在hbase節(jié)點(diǎn):
cd /home/spark/nosql/zookeeper-3.4.6
echo 2 > data/myid
在mongodb節(jié)點(diǎn):
cd /home/spark/nosql/zookeeper-3.4.6
echo 3 > data/myid
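The three per-node myid writes must agree with the server.N lines in zoo.cfg; deriving the id from the hostname avoids writing the wrong number on the wrong box. A sketch; `write_myid` is an invented helper, and the host-to-id mapping is taken from the zoo.cfg above:

```shell
#!/usr/bin/env bash
# Write data/myid for this node, mapping hostname -> ZooKeeper server id.
# The mapping must match the server.N=host:2888:3888 lines in zoo.cfg.
write_myid() {
  local host="$1" datadir="$2" id
  case "$host" in
    elastic) id=1 ;;
    hbase)   id=2 ;;
    mongodb) id=3 ;;
    *) echo "unknown host: $host" >&2; return 1 ;;
  esac
  mkdir -p "$datadir"
  echo "$id" > "$datadir/myid"
}

# On each node: write_myid "$(hostname)" /home/spark/nosql/zookeeper-3.4.6/data
```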
Spark configuration
cd ~/nosql/spark-2.0.2-bin-hadoop2.6/conf
cp spark-env.sh.template spark-env.sh
vi spark-env.sh
export JAVA_HOME=/usr/java/default
export HADOOP_CONF_DIR=/home/spark/nosql/hadoop-2.6.2/etc/hadoop
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=elastic:2181,hbase:2181,mongodb:2181 -Dspark.deploy.zookeeper.dir=/home/spark/nosql/spark-2.0.2-bin-hadoop2.6/meta"
cp slaves.template slaves
vi slaves
hbase
mongodb
scp -r nosql/spark-2.0.2-bin-hadoop2.6 spark@mongodb:/home/spark/nosql/
scp -r nosql/spark-2.0.2-bin-hadoop2.6 spark@hbase:/home/spark/nosql/
啟動(dòng)zookeeper、hadoop吁讨、spark
cd /home/spark/nosql/zookeeper-3.4.6(每臺(tái))
zkServer.sh start
zkServer.sh status
Format the ZK failover state (on any one node): hdfs zkfc -formatZK
Start zkfc (on the active/standby nodes elastic and hbase): hadoop-daemon.sh start zkfc
Start the JournalNodes (on every node): hadoop-daemon.sh start journalnode
Format HDFS (on any one node; do not repeat): hdfs namenode -format
Active node (elastic): hadoop-daemon.sh start namenode
Standby node (hbase):
hdfs namenode -bootstrapStandby
hadoop-daemon.sh start namenode
查看節(jié)點(diǎn)狀態(tài):
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
啟動(dòng)數(shù)據(jù)節(jié)點(diǎn):hadoop-daemons.sh start datanode
啟動(dòng)resourcemanager(主備) yarn-daemon.sh start resourcemanager
啟動(dòng)nodemanager:yarn-daemons.sh start nodemanager
查看yarn狀態(tài):
yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2
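In a healthy HA pair, exactly one of the two states printed above should be "active". A small hedged helper (name invented here) that encodes that check:

```shell
#!/usr/bin/env bash
# Return 0 iff exactly one of the given state strings is "active".
# States are the words printed by `hdfs haadmin`/`yarn rmadmin -getServiceState`.
exactly_one_active() {
  local n=0 s
  for s in "$@"; do
    [ "$s" = "active" ] && n=$((n+1))
  done
  [ "$n" -eq 1 ]
}

# Example on a live cluster:
#   exactly_one_active "$(yarn rmadmin -getServiceState rm1)" \
#                      "$(yarn rmadmin -getServiceState rm2)" && echo "RM HA ok"
```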
啟動(dòng)mrjobhistoryserver:mr-jobhistory-daemon.sh start historyserver
啟動(dòng)timelineserver:yarn-daemon.sh start timelineserver
啟動(dòng)spark master(主備):sbin/start-master.sh
Final result (jps on each node)
elastic:
11910 Jps
11385 JobHistoryServer
11715 Master
10518 NameNode
11521 ApplicationHistoryServer
10281 JournalNode
10098 QuorumPeerMain
10945 ResourceManager
10216 DFSZKFailoverController
hbase:
5813 NodeManager
5250 NameNode
5606 ResourceManager
5486 DataNode
5071 DFSZKFailoverController
4984 QuorumPeerMain
6153 Worker
5136 JournalNode
5987 Master
6252 Jps
mongodb:
3748 JournalNode
4179 Jps
4092 Worker
3701 QuorumPeerMain
3836 DataNode
3958 NodeManager