Kylo Cluster Setup Tutorial
The Clustering Kylo section of the Kylo documentation is fairly vague, but it does cover the main steps.
The purpose of a Kylo cluster is high availability (HA), so the two nodes share the same database. In this setup, both nodes share the MySQL, ActiveMQ, Elasticsearch, and NiFi instances on 122.
| 121 | 122 | Service |
|-----|-----|---------------|
| N   | Y   | mysql         |
| N   | Y   | activeMQ      |
| N   | Y   | elasticsearch |
| Y   | Y   | nifi          |
| Y   | Y   | kylo          |
ModeShape Configuration
Edit the metadata-repository.json file:
vim /opt/kylo/kylo-services/conf/metadata-repository.json
Append the following after the last entry (note the leading comma):
,"clustering": {
"clusterName":"kylo-modeshape-cluster",
"configuration":"modeshape-jgroups-config.xml",
"locking":"db"
}
Preview after the change:
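The original preview screenshot is not reproduced here. As a stand-in, the sketch below builds a minimal file with the clustering block appended and checks that the result still parses as JSON (the `"name"` key is a placeholder, not from the real metadata-repository.json):

```shell
# Minimal stand-in for metadata-repository.json with the clustering block;
# python3 -m json.tool verifies the merged file is still valid JSON
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
{
  "name": "metadata",
  "clustering": {
    "clusterName": "kylo-modeshape-cluster",
    "configuration": "modeshape-jgroups-config.xml",
    "locking": "db"
  }
}
EOF
python3 -m json.tool "$tmp" > /dev/null && echo "valid JSON"
```

A trailing comma before the appended block (as in the snippet above, where the comma leads the addition) is the usual way this edit goes wrong, so a JSON validity check before restarting kylo-services is cheap insurance.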
Kylo Configuration
Run the following in the /opt/kylo/kylo-services/conf/ directory:
<!-- be sure to run the empty echo statements first so the appended keys start on their own lines -->
echo " " >> application.properties
echo " " >> application.properties
echo "kylo.cluster.nodeCount=2" >> application.properties
echo "kylo.cluster.jgroupsConfigFile=kylo-cluster-jgroups-config.xml" >> application.properties
sed -i 's|jms.activemq.broker.url=.*|jms.activemq.broker.url=tcp://10.88.88.122:61616|' application.properties
sed -i 's|config.elasticsearch.jms.url=.*|config.elasticsearch.jms.url=tcp://10.88.88.122:61616|' application.properties
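As a self-contained sanity check, the same edits can be rehearsed on a scratch copy first (the starting values below are placeholders, not the real file contents):

```shell
# Rehearse the property edits on a scratch copy before touching the real file
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
jms.activemq.broker.url=tcp://localhost:61616
config.elasticsearch.jms.url=tcp://localhost:61616
EOF
echo " " >> "$tmp"
echo "kylo.cluster.nodeCount=2" >> "$tmp"
echo "kylo.cluster.jgroupsConfigFile=kylo-cluster-jgroups-config.xml" >> "$tmp"
sed -i 's|jms.activemq.broker.url=.*|jms.activemq.broker.url=tcp://10.88.88.122:61616|' "$tmp"
sed -i 's|config.elasticsearch.jms.url=.*|config.elasticsearch.jms.url=tcp://10.88.88.122:61616|' "$tmp"
grep -c '10.88.88.122:61616' "$tmp"   # → 2
```

Both broker URLs end up pointing at the single ActiveMQ instance on 122, matching the service table at the top of this tutorial.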
Point nifi.rest.host at the same NiFi node on both machines:
On 121:
vim /opt/kylo/kylo-services/conf/application.properties
nifi.rest.host=10.88.88.122
nifi.rest.port=8079
On 122:
vim /opt/kylo/kylo-services/conf/application.properties
nifi.rest.host=10.88.88.122
nifi.rest.port=8079
Edit elasticsearch-rest.properties on both 121 and 122:
search.rest.host=10.88.88.122
search.rest.port=9200
Quartz Scheduler Configuration
Look at the configuration files in the /opt/kylo/setup/config/kylo-cluster directory:
On both 121 and 122:
Copy quartz-cluster-example.properties to /opt/kylo/kylo-services/conf/ and rename it quartz.properties; its contents need no changes.
Copy kylo-cluster-jgroups-config-example.xml to /opt/kylo/kylo-services/conf/ and rename it kylo-cluster-jgroups-config.xml.
Then edit its parameters as follows:
kylo-cluster-jgroups-config.xml on 122:
<!-- bind_port stays 7900 -->
<!-- bind_addr: the current node's IP -->
<TCP bind_port="7900"
bind_addr="10.88.88.122"
....
<!-- initial_hosts: the IPs of the Kylo nodes; every port stays 7900 -->
<TCPPING timeout="3000" async_discovery="true" num_initial_members="2"
initial_hosts="10.88.88.122[7900],10.88.88.121[7900]"
kylo-cluster-jgroups-config.xml on 121:
<!-- bind_port stays 7900 -->
<!-- bind_addr: the current node's IP -->
<TCP bind_port="7900"
bind_addr="10.88.88.121"
....
<!-- initial_hosts: the IPs of the Kylo nodes; every port stays 7900 -->
<TCPPING timeout="3000" async_discovery="true" num_initial_members="2"
initial_hosts="10.88.88.121[7900],10.88.88.122[7900]"
Preview after the change:
Copy modeshape-local-test-jgroups-config.xml to /opt/kylo/kylo-services/conf/ and rename it modeshape-jgroups-config.xml.
modeshape-jgroups-config.xml on 121:
<TCP bind_port="7800"
bind_addr="10.88.88.121"
...
<TCPPING timeout="3000" async_discovery="true" num_initial_members="2"
initial_hosts="10.88.88.121[7800],10.88.88.122[7801]"
modeshape-jgroups-config.xml on 122:
<TCP bind_port="7801"
bind_addr="10.88.88.122"
...
<TCPPING timeout="3000" async_discovery="true" num_initial_members="2"
initial_hosts="10.88.88.122[7801],10.88.88.121[7800]"
Preview after the change:
Test (optional):
Run on 121:
java -Djava.net.preferIPv4Stack=true -cp /opt/kylo/kylo-services/conf:/opt/kylo/kylo-services/lib/*:/opt/kylo/kylo-services/plugin/* org.jgroups.tests.McastReceiverTest -bind_addr 10.88.88.121 -port 7900
Run on 122:
java -Djava.net.preferIPv4Stack=true -cp /opt/kylo/kylo-services/conf:/opt/kylo/kylo-services/lib/*:/opt/kylo/kylo-services/plugin/* org.jgroups.tests.McastSenderTest -bind_addr 10.88.88.122 -port 7900
Modify /opt/kylo/kylo-services/bin/run-kylo-services.sh and add -Djava.net.preferIPv4Stack=true to the java command:
java $KYLO_SERVICES_OPTS -Djava.net.preferIPv4Stack=true -cp /opt/kylo/kylo-services/conf ....
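The edit can also be scripted; here it is rehearsed against a scratch copy (the abbreviated launch line and `com.example.Main` are placeholders, not the real script contents):

```shell
# Insert -Djava.net.preferIPv4Stack=true right after "java $KYLO_SERVICES_OPTS"
tmp=$(mktemp)
echo 'java $KYLO_SERVICES_OPTS -cp /opt/kylo/kylo-services/conf com.example.Main' > "$tmp"
sed -i 's|java \$KYLO_SERVICES_OPTS|java $KYLO_SERVICES_OPTS -Djava.net.preferIPv4Stack=true|' "$tmp"
cat "$tmp"
```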
Reason: without this flag you may hit the following "Network is unreachable" error when JGroups sends its discovery requests:
SEVERE: JGRP000200: failed sending discovery request
java.io.IOException: Network is unreachable
at java.net.PlainDatagramSocketImpl.send(Native Method)
at java.net.DatagramSocket.send(DatagramSocket.java:693)
at org.jgroups.protocols.MPING.sendMcastDiscoveryRequest(MPING.java:295)
at org.jgroups.protocols.PING.sendDiscoveryRequest(PING.java:62)
at org.jgroups.protocols.PING.findMembers(PING.java:32)
at org.jgroups.protocols.Discovery.findMembers(Discovery.java:244)
Recreate the kylo database
Log in to MySQL:
drop database kylo; create database kylo;
Then:
cd /opt/kylo/setup/sql/mysql
sh setup-mysql.sh 10.88.88.122 kylo kylo
cd /opt/kylo/setup/sql
sh generate-update-sql.sh
<!-- this generates two files in the current directory -->
Log in to MySQL:
use kylo;
source /opt/kylo/setup/sql/mysql/kylo-db-update-script.sql;
Download Quartz
Download and extract the Quartz distribution to a machine: http://d2zwv9pap9ylyd.cloudfront.net/quartz-2.2.3-distribution.tar.gz. You only need it for the database scripts.
Run the Quartz database scripts for your database, found in docs/dbTables.
Log in to MySQL again:
use kylo;
source ~/quartz-2.2.3/docs/dbTables/tables_mysql.sql;
Start Kylo
Visit: http://10.88.88.122:8400/index.html#!/admin/cluster
NiFi cluster setup
This is covered in detail in the Kylo documentation; just follow the official guide.
Here, NiFi is installed on both 121 and 122.
Then run the following commands.
On 121:
sed -i "s|nifi.web.http.host=.*|nifi.web.http.host=10.88.88.121|" /opt/nifi/current/conf/nifi.properties
sed -i 's|nifi.cluster.is.node=.*|nifi.cluster.is.node=true|' /opt/nifi/current/conf/nifi.properties
sed -i 's|nifi.cluster.node.address=.*|nifi.cluster.node.address=10.88.88.121|' /opt/nifi/current/conf/nifi.properties
sed -i 's|nifi.cluster.node.protocol.port=.*|nifi.cluster.node.protocol.port=8078|' /opt/nifi/current/conf/nifi.properties
sed -i 's|nifi.zookeeper.connect.string=.*|nifi.zookeeper.connect.string=10.88.88.121:2181|' /opt/nifi/current/conf/nifi.properties
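The tutorial only lists the commands for 121; presumably 122 gets the mirrored edits with its own address (an assumption, not shown in the original). Rehearsed here on a scratch copy rather than the real /opt/nifi/current/conf/nifi.properties:

```shell
# Assumed mirror of the 121 edits for node 122, applied to a scratch copy
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
nifi.web.http.host=
nifi.cluster.is.node=false
nifi.cluster.node.address=
nifi.cluster.node.protocol.port=
nifi.zookeeper.connect.string=
EOF
sed -i 's|nifi.web.http.host=.*|nifi.web.http.host=10.88.88.122|' "$tmp"
sed -i 's|nifi.cluster.is.node=.*|nifi.cluster.is.node=true|' "$tmp"
sed -i 's|nifi.cluster.node.address=.*|nifi.cluster.node.address=10.88.88.122|' "$tmp"
sed -i 's|nifi.cluster.node.protocol.port=.*|nifi.cluster.node.protocol.port=8078|' "$tmp"
sed -i 's|nifi.zookeeper.connect.string=.*|nifi.zookeeper.connect.string=10.88.88.121:2181|' "$tmp"
cat "$tmp"
```

Note the ZooKeeper connect string stays pointed at 121, matching the 121 commands above; only the node's own host/address values change.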
NiFi's cluster mode is now configured; next, set up the interaction between Kylo and NiFi.
Edit NiFi's ActiveMQ configuration
vi /opt/nifi/ext-config/config.properties
# this must point at the node running ActiveMQ
jms.activemq.broker.url=tcp://10.88.88.122:61616
Or:
sed -i "s|jms.activemq.broker.url=.*|jms.activemq.broker.url=tcp://10.88.88.122:61616|" /opt/nifi/ext-config/config.properties
Edit the nifi.properties file
<!-- raise the timeouts to avoid a 500 error when importing templates -->
sed -i "s|nifi.cluster.node.connection.timeout=.*|nifi.cluster.node.connection.timeout=25 sec|" /opt/nifi/nifi-1.6.0/conf/nifi.properties
sed -i "s|nifi.cluster.node.read.timeout=.*|nifi.cluster.node.read.timeout=25 sec|" /opt/nifi/nifi-1.6.0/conf/nifi.properties
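The timeout edits follow the same pattern and can be rehearsed on a scratch copy (the 5-second starting values are placeholders):

```shell
# Rehearse the timeout changes on a scratch copy
tmp=$(mktemp)
printf 'nifi.cluster.node.connection.timeout=5 sec\nnifi.cluster.node.read.timeout=5 sec\n' > "$tmp"
sed -i 's|nifi.cluster.node.connection.timeout=.*|nifi.cluster.node.connection.timeout=25 sec|' "$tmp"
sed -i 's|nifi.cluster.node.read.timeout=.*|nifi.cluster.node.read.timeout=25 sec|' "$tmp"
grep -c '25 sec' "$tmp"   # → 2
```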
Add the monitoring plugin
Copy kylo-service-monitor-kylo-cluster-0.9.1.jar from /opt/kylo/setup/plugins to /opt/kylo/kylo-services/plugin/.
After restarting NiFi and Kylo, importing the test template (data_ingest.zip) initially failed; the cause was that the ActiveMQ configuration needed to be corrected.
Once it is fixed, flows run successfully in Kylo.