Hadoop/Spark HA (high-availability) cluster setup

Plan

192.168.211.129   elastic    (zookeeper, kafka, hadoop namenode, yarn resourcemanager, hbase hmaster, spark master, es master)
192.168.211.130   hbase      (zookeeper, kafka, hadoop namenode, hadoop datanode, yarn resourcemanager, yarn nodemanager, spark worker, es data)
192.168.211.131   mongodb    (zookeeper, kafka, hadoop datanode, yarn nodemanager, spark worker, es data)

Install the JDK (every node)

rpm -ivh jdk-7u80-linux-x64.rpm
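
A quick check that the JDK is on the PATH (the exact build string may differ):
    java -version    # expect something like: java version "1.7.0_80"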

Configure SSH (every node)

vi /etc/hosts and add:
    192.168.211.129   elastic
    192.168.211.130   hbase
    192.168.211.131   mongodb

useradd spark
passwd spark

Switch to the spark user:
ssh-keygen -t rsa
ssh-copy-id -i /home/spark/.ssh/id_rsa.pub elastic
ssh-copy-id -i /home/spark/.ssh/id_rsa.pub hbase
ssh-copy-id -i /home/spark/.ssh/id_rsa.pub mongodb
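
Passwordless login can then be verified from any node (hostnames as mapped in /etc/hosts above); each command should print the remote date without asking for a password:
    ssh elastic date
    ssh hbase date
    ssh mongodb date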

On the elastic machine:
cd
mkdir nosql
Copy the tarballs to be installed into the nosql directory
tar -zxf hadoop-2.6.2.tar.gz
tar -zxf zookeeper-3.4.6.tar.gz
tar -zxf spark-2.0.2-bin-hadoop2.6.tgz
tar -zxf hbase-1.2.4-bin.tar.gz
tar -zxf kafka_2.10-0.10.1.0.tgz
tar -zxf elasticsearch-5.0.1.tar.gz
tar -zxf mongodb-linux-x86_64-rhel62-3.2.11.tgz
vi .bashrc
    JAVA_HOME=/usr/java/default
    HADOOP_HOME=/home/spark/nosql/hadoop-2.6.2
    SPARK_HOME=/home/spark/nosql/spark-2.0.2-bin-hadoop2.6
    ZOOKEEPER_HOME=/home/spark/nosql/zookeeper-3.4.6
    HBASE_HOME=/home/spark/nosql/hbase-1.2.4
    ELASTICSEARCH_HOME=/home/spark/nosql/elasticsearch-5.0.1
    MONGODB_HOME=/home/spark/nosql/mongodb-linux-x86_64-rhel62-3.2.11
    export JAVA_HOME HADOOP_HOME SPARK_HOME ZOOKEEPER_HOME HBASE_HOME ELASTICSEARCH_HOME MONGODB_HOME
    export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$SPARK_HOME/bin:$SPARK_HOME/sbin:$ZOOKEEPER_HOME/bin:$HBASE_HOME/bin:$ELASTICSEARCH_HOME/bin:$MONGODB_HOME/bin:$PATH
source .bashrc
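
A quick check that the new environment is active (paths as set above):
    echo $HADOOP_HOME
    hadoop version
    spark-submit --version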

Hadoop configuration (copy to the other nodes when done)

  • vi /home/spark/nosql/hadoop-2.6.2/etc/hadoop/slaves
    hbase
    mongodb
  • vi /home/spark/nosql/hadoop-2.6.2/etc/hadoop/core-site.xml
    <configuration>
    <property>
           <name>fs.defaultFS</name>
           <value>hdfs://mycluster</value>
           <description>mycluster is the logical name of the HA cluster; it must match dfs.nameservices in hdfs-site.xml</description>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/spark/nosql/data</value>
        <description>Default parent directory under which NameNode, DataNode, JournalNode, etc. store their data; each type can also be given its own directory. This directory tree must be created beforehand.</description>
    </property>
    <property>
          <name>ha.zookeeper.quorum</name>
          <value>elastic:2181,hbase:2181,mongodb:2181</value>
          <description>Address and port of each node in the ZooKeeper ensemble. Note: the count must be odd and must match zoo.cfg.</description>
    </property>
    <property>
           <name>io.file.buffer.size</name>
           <value>131072</value>
           <description>Size of read/write buffer used in SequenceFiles.</description>
    </property>
    </configuration>
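Per the note on hadoop.tmp.dir above, that directory tree is expected to exist already, so create it on every node (same path as configured):
    mkdir -p /home/spark/nosql/data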
  • vi /home/spark/nosql/hadoop-2.6.2/etc/hadoop/hdfs-site.xml
<configuration>
<property>
    <name>dfs.replication</name>
    <value>2</value>
    <description>Number of block replicas</description>
</property>
<property>
    <name>dfs.namenode.name.dir</name>
    <value>/home/spark/nosql/dfs/name</value>
    <description>NameNode metadata directory</description>
</property>
<property>
    <name>dfs.datanode.data.dir</name>
    <value>/home/spark/nosql/dfs/data</value>
    <description>DataNode data directory</description>
</property>

<property>
    <name>dfs.nameservices</name>
    <value>mycluster</value>
    <description>The HA nameservice ID; fs.defaultFS in core-site.xml references it</description>
 </property>

<property>
    <name>dfs.namenode.rpc-address.mycluster.nn1</name>
    <value>elastic:9000</value>
</property>
<property>
    <name>dfs.namenode.rpc-address.mycluster.nn2</name>
    <value>hbase:9000</value>
</property>

<property>
    <name>dfs.namenode.http-address.mycluster.nn1</name>
    <value>elastic:50070</value>
</property>
<property>
    <name>dfs.namenode.http-address.mycluster.nn2</name>
    <value>hbase:50070</value>
</property>

<property>
    <name>dfs.namenode.servicerpc-address.mycluster.nn1</name>
    <value>elastic:53310</value>
</property>

<property>
    <name>dfs.namenode.servicerpc-address.mycluster.nn2</name>
    <value>hbase:53310</value>
</property>
<property>
    <name>dfs.ha.automatic-failover.enabled.mycluster</name>  
    <value>true</value>
    <description>Whether to fail over automatically when the active NameNode fails</description>
</property>

<property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://elastic:8485;hbase:8485;mongodb:8485/hadoop-journal</value>
    <description>JournalNode configuration, in three parts:
        1. the qjournal prefix names the protocol;
        2. then the host:port of the three machines running JournalNodes, separated by semicolons;
        3. the trailing hadoop-journal is the journal's namespace and can be any name.
    </description>
</property>

<property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/home/spark/nosql/dfs/HAjournal</value>
    <description>Local directory where the JournalNode stores its data</description>
</property>

<property>
    <name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    <description>Proxy class HDFS clients use to find the active NameNode of mycluster and fail over when it changes</description>
</property>
<property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
    <description>Fence the old active NameNode over SSH during failover</description>
</property>

<property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/spark/.ssh/id_rsa</value>
    <description>Location of the private key used for SSH communication when fencing over ssh</description>
</property>

<property>
    <name>dfs.ha.fencing.ssh.connect-timeout</name>
    <value>1000</value>
</property>
<property>
    <name>dfs.namenode.handler.count</name>
    <value>10</value>
</property>
</configuration>
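The local directories above can be created up front on the relevant nodes (name dirs on elastic/hbase, data dirs on hbase/mongodb, the journal dir on all three); mkdir -p is harmless even where a daemon would create them itself:
    mkdir -p /home/spark/nosql/dfs/name /home/spark/nosql/dfs/data /home/spark/nosql/dfs/HAjournal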
  • vi /home/spark/nosql/hadoop-2.6.2/etc/hadoop/yarn-site.xml
<configuration>

<!-- Site specific YARN configuration properties -->
<property>
    <name>yarn.resourcemanager.ha.enabled</name>
    <value>true</value>
</property>

<property>
    <name>yarn.resourcemanager.cluster-id</name>
    <value>clusterrm</value>
</property>

<property>
    <name>yarn.resourcemanager.ha.rm-ids</name>
    <value>rm1,rm2</value>
</property>

<property>
    <name>yarn.resourcemanager.hostname.rm1</name>
    <value>elastic</value>
</property>

<property>
    <name>yarn.resourcemanager.hostname.rm2</name>
    <value>hbase</value>
</property>

<property>
    <name>yarn.resourcemanager.recovery.enabled</name>
    <value>true</value>
</property>

<property>
    <name>yarn.resourcemanager.store.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
</property>

<property>
    <name>yarn.resourcemanager.zk-address</name>
    <value>elastic:2181,hbase:2181,mongodb:2181</value>
</property>

<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
<property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>

<!-- set the proxy server -->

<!-- set history server -->
<property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
</property>

<!-- set the timeline server -->
<property>
    <description>The hostname of the Timeline service web application.</description>
    <name>yarn.timeline-service.hostname</name>
    <value>elastic</value>
</property>

<property>
    <description>Address for the Timeline server to start the RPC server.</description>
    <name>yarn.timeline-service.address</name>
    <value>elastic:10200</value>
</property>

<property>
    <description>The http address of the Timeline service web application.</description>
    <name>yarn.timeline-service.webapp.address</name>
    <value>elastic:8188</value>
</property>

<property>
    <description>The https address of the Timeline service web application.</description>
    <name>yarn.timeline-service.webapp.https.address</name>
    <value>elastic:8190</value>
</property>

<property>
    <description>Handler thread count to serve the client RPC requests.</description>
    <name>yarn.timeline-service.handler-thread-count</name>
    <value>10</value>
</property>
<property>
    <name>yarn.timeline-service.http-cross-origin.enabled</name>
    <value>false</value>
</property>

<property>
    <description>Comma separated list of origins that are allowed for web services needing cross-origin (CORS) support. Wildcards (*) and patterns allowed</description>
    <name>yarn.timeline-service.http-cross-origin.allowed-origins</name>
    <value>*</value>
</property>

<property>
    <description>Comma separated list of methods that are allowed for web services needing cross-origin (CORS) support.</description>
    <name>yarn.timeline-service.http-cross-origin.allowed-methods</name>
    <value>GET,POST,HEAD</value>
</property>

<property>
    <description>Comma separated list of headers that are allowed for web services needing cross-origin (CORS) support.</description>
    <name>yarn.timeline-service.http-cross-origin.allowed-headers</name>
    <value>X-Requested-With,Content-Type,Accept,Origin</value>
</property>

<property>
    <description>The number of seconds a pre-flighted request can be cached for web services needing cross-origin (CORS) support.</description>
    <name>yarn.timeline-service.http-cross-origin.max-age</name>
    <value>1800</value>
</property>

<property>
    <description>Indicate to clients whether Timeline service is enabled or not.
            If enabled, the TimelineClient library used by end-users will post entities and events to the Timeline server.</description>
    <name>yarn.timeline-service.enabled</name>
    <value>true</value>
</property>

<property>
    <description>Store class name for timeline store.</description>
    <name>yarn.timeline-service.store-class</name>
    <value>org.apache.hadoop.yarn.server.timeline.LeveldbTimelineStore</value>
</property>

<property>
    <description>Enable age off of timeline store data.</description>
    <name>yarn.timeline-service.ttl-enable</name>
    <value>true</value>
</property>
<property>
    <description>Time to live for timeline store data in milliseconds.</description>
    <name>yarn.timeline-service.ttl-ms</name>
    <value>604800000</value>
</property>
</configuration>
  • vi /home/spark/nosql/hadoop-2.6.2/etc/hadoop/mapred-site.xml
<configuration>
<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>

<!-- set the history -->

<property>
    <name>mapreduce.jobhistory.address</name>
    <value>elastic:10020</value>
</property>

<property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>elastic:19888</value>
</property>

<property>
    <name>mapreduce.jobhistory.intermediate-done-dir</name>
    <value>/home/spark/nosql/dfs/mr_history/HAmap</value>
    <description>Directory where history files are written by MapReduce jobs.</description>
</property>

<property>
    <name>mapreduce.jobhistory.done-dir</name>
    <value>/home/spark/nosql/dfs/mr_history/HAdone</value>
    <description>Directory where history files are managed by the MR JobHistory Server.</description>
</property>
</configuration>
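Note that the two jobhistory directories are resolved against fs.defaultFS, so they live in HDFS rather than on local disk; if the history server does not create them itself, they can be created once HDFS is up (paths as configured above):
    hadoop fs -mkdir -p /home/spark/nosql/dfs/mr_history/HAmap /home/spark/nosql/dfs/mr_history/HAdone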
scp -r nosql/hadoop-2.6.2 spark@mongodb:/home/spark/nosql/
scp -r nosql/hadoop-2.6.2 spark@hbase:/home/spark/nosql/

ZooKeeper configuration

cd /home/spark/nosql/zookeeper-3.4.6/conf
cp zoo_sample.cfg zoo.cfg
vi zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/home/spark/nosql/zookeeper-3.4.6/data
dataLogDir=/home/spark/nosql/zookeeper-3.4.6/logs
clientPort=2181
server.1=elastic:2888:3888
server.2=hbase:2888:3888
server.3=mongodb:2888:3888
cd /home/spark/nosql/zookeeper-3.4.6
mkdir data logs    (matching dataDir and dataLogDir in zoo.cfg)
scp -r nosql/zookeeper-3.4.6 spark@mongodb:/home/spark/nosql/
scp -r nosql/zookeeper-3.4.6 spark@hbase:/home/spark/nosql/
On the elastic node:
    cd /home/spark/nosql/zookeeper-3.4.6
    echo 1 > data/myid
On the hbase node:
    cd /home/spark/nosql/zookeeper-3.4.6
    echo 2 > data/myid
On the mongodb node:
    cd /home/spark/nosql/zookeeper-3.4.6
    echo 3 > data/myid
(each myid value must match the server.N entry for that host in zoo.cfg)

Spark configuration

cd ~/nosql/spark-2.0.2-bin-hadoop2.6/conf
cp spark-env.sh.template spark-env.sh
vi spark-env.sh
    export JAVA_HOME=/usr/java/default
    export HADOOP_CONF_DIR=/home/spark/nosql/hadoop-2.6.2/etc/hadoop
    export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=elastic:2181,hbase:2181,mongodb:2181 -Dspark.deploy.zookeeper.dir=/home/spark/nosql/spark-2.0.2-bin-hadoop2.6/meta"
cp slaves.template slaves
vi slaves
    hbase
    mongodb

scp -r nosql/spark-2.0.2-bin-hadoop2.6 spark@mongodb:/home/spark/nosql/
scp -r nosql/spark-2.0.2-bin-hadoop2.6 spark@hbase:/home/spark/nosql/

啟動(dòng)zookeeper、hadoop吁讨、spark

cd /home/spark/nosql/zookeeper-3.4.6 (every node)
zkServer.sh start
zkServer.sh status    (one node should report Mode: leader, the other two Mode: follower)
Format the failover state in ZK (any one node):    hdfs zkfc -formatZK
Start zkfc (on the NameNode hosts elastic/hbase):    hadoop-daemon.sh start zkfc
Start the JournalNodes (every node):    hadoop-daemon.sh start journalnode
Format HDFS (one node only; never repeat it):    hdfs namenode -format
Active NameNode (elastic):    hadoop-daemon.sh start namenode
Standby NameNode (hbase):
    hdfs namenode -bootstrapStandby
    hadoop-daemon.sh start namenode
Check NameNode states:
    hdfs haadmin -getServiceState nn1
    hdfs haadmin -getServiceState nn2
Start the DataNodes:    hadoop-daemons.sh start datanode
Start the ResourceManagers (elastic and hbase):    yarn-daemon.sh start resourcemanager
Start the NodeManagers:    yarn-daemons.sh start nodemanager
Check ResourceManager states:
    yarn rmadmin -getServiceState rm1
    yarn rmadmin -getServiceState rm2
Start the MR JobHistory server:    mr-jobhistory-daemon.sh start historyserver
Start the Timeline server:    yarn-daemon.sh start timelineserver
Start the Spark masters (elastic and hbase):    sbin/start-master.sh
Start the Spark workers (from elastic; reads conf/slaves):    sbin/start-slaves.sh

Final result

elastic:
    11910 Jps
    11385 JobHistoryServer
    11715 Master
    10518 NameNode
    11521 ApplicationHistoryServer
    10281 JournalNode
    10098 QuorumPeerMain
    10945 ResourceManager
    10216 DFSZKFailoverController
hbase:
    5813 NodeManager
    5250 NameNode
    5606 ResourceManager
    5486 DataNode
    5071 DFSZKFailoverController
    4984 QuorumPeerMain
    6153 Worker
    5136 JournalNode
    5987 Master
    6252 Jps
mongodb:
    3748 JournalNode
    4179 Jps
    4092 Worker
    3701 QuorumPeerMain
    3836 DataNode
    3958 NodeManager
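
To exercise automatic NameNode failover (a sketch, for a test run only): kill the active NameNode, confirm the standby takes over, then restart the killed one, which rejoins as standby.
    jps                                   # on elastic, note the NameNode pid
    kill -9 <NameNode pid>
    hdfs haadmin -getServiceState nn2     # should report active shortly afterwards
    hadoop-daemon.sh start namenode       # back on elastic; it comes up as standby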