Overview
- The cluster consists of two nodes, dpnode05 and dpnode08
>>> The following steps are performed on dpnode05
Create the kylo installation directory
- mkdir /opt/kylo
Extract the installation package into the installation directory
- tar xvf /home/wcm/sources/kylo-0.10.0.tar -C /opt/kylo
Run the post-install setup script; here we install as the root user
- /opt/kylo/setup/install/post-install.sh /opt/kylo root root
PS: the three arguments are, in order: the kylo installation directory, the user that will run the services, and that user's group
Edit the configuration file
- vim /opt/kylo/kylo-services/conf/application.properties
spring.datasource.url=jdbc:mysql://dpnode05:3306/kylo
spring.datasource.username=root
spring.datasource.password=123456
spring.datasource.driverClassName=com.mysql.jdbc.Driver
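PS: before going further it is worth confirming that kylo-services will actually be able to reach MySQL with these credentials; a quick check from dpnode05 (assuming the mysql command-line client is installed):
mysql -h dpnode05 -P 3306 -uroot -p123456 -e 'SELECT VERSION();'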
Copy the MySQL driver
- cp /home/wcm/mysql-connector-java-8.0.15.jar /opt/kylo/kylo-services/lib/
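PS: Connector/J 8.x actually registers com.mysql.cj.jdbc.Driver; the legacy com.mysql.jdbc.Driver name configured above still works through a compatibility shim, but it logs a deprecation warning at startup.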
Generate the SQL scripts
- cd /opt/kylo/setup/sql/mysql
- ./setup-mysql.sh dpnode05 root 123456  # arguments: MySQL host, user, password
- cd /opt/kylo/setup/sql
- ./generate-update-sql.sh
# This generates two script files: kylo-db-update-script.sql and kylo-db-update-script.sql.bac
Connect with the MySQL client and run the Kylo SQL script
use kylo;
source kylo-db-update-script.sql;
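PS: equivalently, the script can be run non-interactively (assuming it was generated under /opt/kylo/setup/sql as above):
mysql -h dpnode05 -uroot -p123456 kylo < /opt/kylo/setup/sql/kylo-db-update-script.sql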
Download Quartz
- Download: http://d2zwv9pap9ylyd.cloudfront.net/quartz-2.2.3-distribution.tar.gz
- Extract it: tar -zxvf quartz-2.2.3-distribution.tar.gz
- Enter the extracted directory: cd quartz-2.2.3/docs/dbTables/
- In the MySQL client, run the Quartz SQL script
use kylo; source tables_mysql.sql;
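PS: to confirm the Quartz tables were created (they all share the QRTZ_ prefix):
mysql -h dpnode05 -uroot -p123456 kylo -e "SHOW TABLES LIKE 'QRTZ_%';"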
Cluster configuration
- ModeShape configuration
# vim /opt/kylo/kylo-services/conf/metadata-repository.json
# Append the following after the last entry:
,"clustering": {
    "clusterName": "kylo-modeshape-cluster",
    "configuration": "modeshape-jgroups-config.xml",
    "locking": "db"
}
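PS: a misplaced comma here will stop kylo-services from parsing the file; one way to sanity-check the edited JSON (assuming python is available):
python -m json.tool /opt/kylo/kylo-services/conf/metadata-repository.json > /dev/null && echo OK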
- Kylo configuration
# vim /opt/kylo/kylo-services/conf/application.properties
# Append the following at the end:
kylo.cluster.jgroupsConfigFile=kylo-cluster-jgroups-config.xml
kylo.cluster.nodeCount=2
# Point the ActiveMQ JMS connection at dpnode05
jms.activemq.broker.url=tcp://dpnode05:61616
# Point the Elasticsearch JMS connection at dpnode05
config.elasticsearch.jms.url=tcp://dpnode05:61616
# Update the NiFi connection
nifi.rest.host=dpnode05
nifi.rest.port=8079
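PS: both JMS settings above point at the ActiveMQ broker on dpnode05, so that port must be reachable from every node; a dependency-free check to run on each node:
timeout 3 bash -c '</dev/tcp/dpnode05/61616' && echo "ActiveMQ reachable"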
- Elasticsearch configuration
# vim /opt/kylo/kylo-services/conf/elasticsearch-rest.properties
# Update the connection settings
search.rest.host=dpnode05
search.rest.port=9200
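PS: a healthy Elasticsearch node answers this with a JSON banner:
curl -s http://dpnode05:9200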
- Quartz configuration
# cp /opt/kylo/setup/config/kylo-cluster/quartz-cluster-example.properties /opt/kylo/kylo-services/conf/quartz.properties
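PS: the copied example enables Quartz's JDBC job store in clustered mode; after copying, verify that quartz.properties contains entries along these lines (the exact contents of the example file may differ) and that its datasource matches the MySQL settings above:
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.isClustered = true
org.quartz.jobStore.clusterCheckinInterval = 20000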
- kylo-cluster-jgroups-config
# cp /opt/kylo/setup/config/kylo-cluster/kylo-cluster-jgroups-config-example.xml /opt/kylo/kylo-services/conf/kylo-cluster-jgroups-config.xml
# vim /opt/kylo/kylo-services/conf/kylo-cluster-jgroups-config.xml
<TCP bind_port="7900"
     bind_addr="dpnode05"
     ....
<TCPPING timeout="3000"
         async_discovery="true"
         num_initial_members="2"
         initial_hosts="dpnode05[7900],dpnode08[7900]"
         ....
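PS: both JGroups ports used here (7900 for Kylo, 7800 for ModeShape below) must be open between the two nodes for discovery to work; on a system running firewalld, for example:
firewall-cmd --permanent --add-port=7900/tcp --add-port=7800/tcp
firewall-cmd --reload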
- modeshape-jgroups-config
# cp /opt/kylo/setup/config/kylo-cluster/modeshape-local-test-jgroups-config.xml /opt/kylo/kylo-services/conf/modeshape-jgroups-config.xml
# vim /opt/kylo/kylo-services/conf/modeshape-jgroups-config.xml
<TCP bind_port="7800"
     bind_addr="dpnode05"
     ....
<TCPPING timeout="3000"
         async_discovery="true"
         num_initial_members="2"
         initial_hosts="dpnode05[7800],dpnode08[7800]"
         ....
- Startup script configuration
# vim /opt/kylo/kylo-services/bin/run-kylo-services.sh
# Add the startup flag -Djava.net.preferIPv4Stack=true (without it, Java may bind JGroups to an IPv6 address and the nodes will not discover each other):
java -Djava.net.preferIPv4Stack=true \
     -Dorg.springframework.boot.logging.LoggingSystem=none \
     $KYLO_SERVICES_OPTS $KYLO_SPRING_PROFILES_OPTS \
     -cp /opt/kylo/kylo-services/conf:$HADOOP_CONF_DIR:/opt/kylo/kylo-services/lib/*:/opt/kylo/kylo-services/lib/${KYLO_NIFI_PROFILE}/*:/opt/kylo/kylo-services/plugin/* \
     com.thinkbiganalytics.server.KyloServerApplication --pgrep-marker=kylo-services-pgrep-marker \
     > /var/log/kylo-services/std.out 2>/var/log/kylo-services/std.err &
Add the cluster service monitor plugin
- cp /opt/kylo/setup/plugins/kylo-service-monitor-kylo-cluster-0.10.0.jar /opt/kylo/kylo-services/plugin/
Communication test (optional)
- Run on dpnode05; blocks and receives messages
java -Djava.net.preferIPv4Stack=true -cp /opt/kylo/kylo-services/conf:/opt/kylo/kylo-services/lib/*:/opt/kylo/kylo-services/plugin/* org.jgroups.tests.McastReceiverTest -bind_addr dpnode05 -port 7900
- Run on dpnode08; sends messages
java -Djava.net.preferIPv4Stack=true -cp /opt/kylo/kylo-services/conf:/opt/kylo/kylo-services/lib/*:/opt/kylo/kylo-services/plugin/* org.jgroups.tests.McastSenderTest -bind_addr dpnode08 -port 7900
PS: if dpnode05 receives the messages, the channel is working
Copy the installation to the second node
- cd /opt
- scp -r kylo dpnode08:$PWD
>>> The following steps are performed on dpnode08
Run the post-install setup script; again installing as the root user
- /opt/kylo/setup/install/post-install.sh /opt/kylo root root
Cluster configuration
- Startup script configuration
# vim /opt/kylo/kylo-services/bin/run-kylo-services.sh
# Add the startup flag -Djava.net.preferIPv4Stack=true, exactly as on dpnode05:
java -Djava.net.preferIPv4Stack=true \
     -Dorg.springframework.boot.logging.LoggingSystem=none \
     $KYLO_SERVICES_OPTS $KYLO_SPRING_PROFILES_OPTS \
     -cp /opt/kylo/kylo-services/conf:$HADOOP_CONF_DIR:/opt/kylo/kylo-services/lib/*:/opt/kylo/kylo-services/lib/${KYLO_NIFI_PROFILE}/*:/opt/kylo/kylo-services/plugin/* \
     com.thinkbiganalytics.server.KyloServerApplication --pgrep-marker=kylo-services-pgrep-marker \
     > /var/log/kylo-services/std.out 2>/var/log/kylo-services/std.err &
- kylo-cluster-jgroups-config
# cp /opt/kylo/setup/config/kylo-cluster/kylo-cluster-jgroups-config-example.xml /opt/kylo/kylo-services/conf/kylo-cluster-jgroups-config.xml
# vim /opt/kylo/kylo-services/conf/kylo-cluster-jgroups-config.xml
<TCP bind_port="7900"
     bind_addr="dpnode08"
     ....
<TCPPING timeout="3000"
         async_discovery="true"
         num_initial_members="2"
         initial_hosts="dpnode05[7900],dpnode08[7900]"
         ....
- modeshape-jgroups-config
# cp /opt/kylo/setup/config/kylo-cluster/modeshape-local-test-jgroups-config.xml /opt/kylo/kylo-services/conf/modeshape-jgroups-config.xml
# vim /opt/kylo/kylo-services/conf/modeshape-jgroups-config.xml
<TCP bind_port="7800"
     bind_addr="dpnode08"
     ....
<TCPPING timeout="3000"
         async_discovery="true"
         num_initial_members="2"
         initial_hosts="dpnode05[7800],dpnode08[7800]"
         ....
>>> The following steps are performed on both nodes
Start the services
- kylo-services start
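PS: watch the service log to confirm the node starts and joins the cluster (log location assuming the default Kylo layout):
tail -f /var/log/kylo-services/kylo-services.log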
Open the kylo-ui in a browser
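PS: with the default ports the UI is served at http://dpnode05:8400 (equivalently dpnode08); the stock login is dladmin / thinkbig unless authentication has been reconfigured.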
Troubleshooting
- Admin -> Connectors -> Hive/JDBC/... cannot be accessed
- Cause: Kylo access control has two layers, service-level (kylo-wide) and entity-level. Entity-level access control is disabled by default, and this UI option is governed by entity-level permissions, so it cannot be accessed.
- Fix
- Edit /opt/kylo/kylo-services/conf/application.properties
- Set the property security.entity.access.controlled=true
- Restart the services
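PS: since this is a two-node cluster, the change must be made on both nodes; assuming the property is already present in the file, a one-liner per node:
sed -i 's/^security\.entity\.access\.controlled=.*/security.entity.access.controlled=true/' /opt/kylo/kylo-services/conf/application.properties
kylo-services stop && kylo-services start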
How the cluster works
ModeShape uses JGroups internally for Kylo cluster management.
ModeShape runs in the "replicated with a shared database" model, the only clustering model supported by ModeShape 5.
A cluster in this model can have any number of members, each with its own in-memory cache, but all using a shared database for persisting and reading content. Binary stores and indexes can be configured to be either local to each member or shared across all members, depending on the chosen implementation.
Updates in the cluster are sent to each of the members in the form of JGroups messages representing the various events that caused that data to mutate. Each cluster member will update their own local state in response to these events.
This works great for small- to medium-sized repositories, even when the available memory on each process is not large enough to hold all of the nodes and binary values at one time.