JanusGraph 0.2.0: Issues and Solutions
- JanusGraph 0.2.0 ships without hadoop-hdfs-2.7.2.jar in its lib folder, so the missing jar has to be copied into lib manually.
- No FileSystem for scheme: hdfs
  To fix this error, add the following property to Hadoop's core-site.xml (a verification sketch follows):
<property>
    <name>fs.hdfs.impl</name>
    <value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
</property>
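To confirm the scheme now resolves, you can probe it from the Gremlin Console. A minimal sketch, assuming the Hadoop client jars and HADOOP_CONF_DIR (containing the core-site.xml above) are on the classpath; the NameNode address is a placeholder, not part of this setup:
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.FileSystem

conf = new Configuration()                                    // picks up core-site.xml from the classpath
conf.set('fs.defaultFS', 'hdfs://namenode.example.com:8020')  // placeholder; use your NameNode address
fs = FileSystem.get(conf)
// Should print org.apache.hadoop.hdfs.DistributedFileSystem; without the fix
// (or with the jar still missing) this fails with "No FileSystem for scheme: hdfs"
println fs.getClass().getName()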
Environment variable configuration
# Path to the Gremlin Console. This setting is optional; it works around JanusGraph's missing jars.
export GREMLIN_HOME=/opt/apache-tinkerpop-gremlin-console-3.2.6
# Location of the Hadoop configuration files
export HADOOP_CONF_DIR=/etc/hadoop/conf
# Location of the plugin libs downloaded by the Gremlin Console. This setting is optional; it works around JanusGraph's missing jars.
export HADOOP_GREMLIN_LIBS=$GREMLIN_HOME/ext/hadoop-gremlin/plugin:$GREMLIN_HOME/ext/spark-gremlin/plugin
export HBASE_CONF_DIR=/etc/hbase/conf
export CLASSPATH=$HADOOP_CONF_DIR:$HADOOP_GREMLIN_LIBS:$HBASE_CONF_DIR
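To make sure the console session actually sees these variables, you can read them back from Gremlin; the output values here are illustrative:
gremlin> System.getenv('HADOOP_CONF_DIR')
==>/etc/hadoop/conf
gremlin> System.getenv('HBASE_CONF_DIR')
==>/etc/hbase/conf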
If the required jars were added manually, the Gremlin Console settings above can be skipped. Steps to install the Gremlin Console plugins (a quick verification listing follows the steps):
- hadoop plugin
  - :install org.apache.tinkerpop hadoop-gremlin 3.2.6
  - :plugin use tinkerpop.hadoop
- giraph-gremlin plugin
  - :install org.apache.tinkerpop giraph-gremlin 3.2.6
  - :plugin use tinkerpop.giraph
- spark-gremlin plugin
  - :install org.apache.tinkerpop spark-gremlin 3.2.6
  - :plugin use tinkerpop.spark
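After installing, restart the console and confirm the plugins registered. The listing below is illustrative; the exact plugin set and active flags depend on your installation:
gremlin> :plugin list
==>tinkerpop.server[active]
==>tinkerpop.utilities[active]
==>janusgraph.imports[active]
==>tinkerpop.hadoop[active]
==>tinkerpop.giraph
==>tinkerpop.spark[active]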
Importing data and querying
bin/gremlin.sh
\,,,/
(o o)
-----oOOo-(3)-oOOo-----
plugin activated: janusgraph.imports
gremlin> :plugin use tinkerpop.hadoop
==>tinkerpop.hadoop activated
gremlin> :plugin use tinkerpop.spark
==>tinkerpop.spark activated
gremlin> :load data/grateful-dead-janusgraph-schema.groovy
==>true
==>true
gremlin> graph = JanusGraphFactory.open('conf/janusgraph-hbase.properties')
==>standardjanusgraph[hbase:[kg-server-96.kg.com, kg-agent-95.kg.com, kg-agent-97.kg.com]]
gremlin> defineGratefulDeadSchema(graph)
==>null
gremlin> graph.close()
==>null
gremlin> if (!hdfs.exists('data/grateful-dead.kryo')) hdfs.copyFromLocal('data/grateful-dead.kryo','data/grateful-dead.kryo')
==>null
gremlin> graph = GraphFactory.open('conf/hadoop-graph/hadoop-load.properties')
==>hadoopgraph[gryoinputformat->nulloutputformat]
gremlin> blvp = BulkLoaderVertexProgram.build().writeGraph('conf/janusgraph-hbase.properties').create(graph)
==>BulkLoaderVertexProgram[bulkLoader=IncrementalBulkLoader,vertexIdProperty=bulkLoader.vertex.id,userSuppliedIds=false,keepOriginalIds=true,batchSize=0]
gremlin> graph.compute(SparkGraphComputer).program(blvp).submit().get()
...
==>result[hadoopgraph[gryoinputformat->nulloutputformat],memory[size:0]]
gremlin> graph.close()
==>null
gremlin> graph = GraphFactory.open('conf/hadoop-graph/read-hbase.properties')
==>hadoopgraph[hbaseinputformat->gryooutputformat]
gremlin> g = graph.traversal().withComputer(SparkGraphComputer)
==>graphtraversalsource[hadoopgraph[hbaseinputformat->gryooutputformat], sparkgraphcomputer]
gremlin> g.V().count()
...
==>808
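As a cross-check after the bulk load, the same data can also be queried over a plain OLTP connection, which exercises the artistsByName composite index defined in the schema. A sketch; 'Garcia' is an artist from the Grateful Dead sample data:
gremlin> graph = JanusGraphFactory.open('conf/janusgraph-hbase.properties')
gremlin> g = graph.traversal()
gremlin> g.V().has('artist', 'name', 'Garcia').in('sungBy').count()  // songs sung by Garcia
gremlin> graph.close()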
Related configuration files
janusgraph-hbase.properties
gremlin.graph=org.janusgraph.core.JanusGraphFactory
storage.backend=hbase
storage.hostname=kg-server-96.kg.com,kg-agent-95.kg.com,kg-agent-97.kg.com
cache.db-cache=true
cache.db-cache-clean-wait=20
cache.db-cache-time=180000
cache.db-cache-size=0.5
index.search.backend=elasticsearch
index.search.hostname=10.110.18.52
storage.hbase.ext.zookeeper.znode.parent=/hbase-unsecure
storage.hbase.table=Medical-POC
index.search.index-name=Medical-POC
grateful-dead-janusgraph-schema.groovy
def defineGratefulDeadSchema(janusGraph) {
    m = janusGraph.openManagement()
    // vertex labels
    artist = m.makeVertexLabel("artist").make()
    song = m.makeVertexLabel("song").make()
    // edge labels
    sungBy = m.makeEdgeLabel("sungBy").make()
    writtenBy = m.makeEdgeLabel("writtenBy").make()
    followedBy = m.makeEdgeLabel("followedBy").make()
    // vertex and edge properties
    blid = m.makePropertyKey("bulkLoader.vertex.id").dataType(Long.class).make()
    name = m.makePropertyKey("name").dataType(String.class).make()
    songType = m.makePropertyKey("songType").dataType(String.class).make()
    performances = m.makePropertyKey("performances").dataType(Integer.class).make()
    weight = m.makePropertyKey("weight").dataType(Integer.class).make()
    // global indices
    m.buildIndex("byBulkLoaderVertexId", Vertex.class).addKey(blid).buildCompositeIndex()
    m.buildIndex("artistsByName", Vertex.class).addKey(name).indexOnly(artist).buildCompositeIndex()
    m.buildIndex("songsByName", Vertex.class).addKey(name).indexOnly(song).buildCompositeIndex()
    // vertex centric indices
    m.buildEdgeIndex(followedBy, "followedByWeight", Direction.BOTH, Order.decr, weight)
    m.commit()
}
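To confirm what the script created, the JanusGraphManagement API can enumerate the labels and indexes; a minimal read-only sketch against an open graph:
m = graph.openManagement()
m.getVertexLabels().each { println it.name() }               // artist, song
m.getGraphIndexes(Vertex.class).each { println it.name() }   // byBulkLoaderVertexId, artistsByName, songsByName
m.rollback()  // inspection only, nothing to commit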
hadoop-load.properties
#
# Hadoop Graph Configuration
#
gremlin.graph=org.apache.tinkerpop.gremlin.hadoop.structure.HadoopGraph
gremlin.hadoop.graphInputFormat=org.apache.tinkerpop.gremlin.hadoop.structure.io.gryo.GryoInputFormat
gremlin.hadoop.graphOutputFormat=org.apache.hadoop.mapreduce.lib.output.NullOutputFormat
gremlin.hadoop.inputLocation=./data/grateful-dead.kryo
gremlin.hadoop.outputLocation=output
gremlin.hadoop.jarsInDistributedCache=true
#
# GiraphGraphComputer Configuration
#
giraph.minWorkers=2
giraph.maxWorkers=2
giraph.useOutOfCoreGraph=true
giraph.useOutOfCoreMessages=true
mapred.map.child.java.opts=-Xmx1024m
mapred.reduce.child.java.opts=-Xmx1024m
giraph.numInputThreads=4
giraph.numComputeThreads=4
giraph.maxMessagesInMemory=100000
#
# SparkGraphComputer Configuration
#
spark.master=local[*]
spark.executor.memory=1g
spark.serializer=org.apache.spark.serializer.KryoSerializer
read-hbase.properties
#
# Hadoop Graph Configuration
#
gremlin.graph=org.apache.tinkerpop.gremlin.hadoop.structure.HadoopGraph
gremlin.hadoop.graphInputFormat=org.janusgraph.hadoop.formats.hbase.HBaseInputFormat
gremlin.hadoop.graphOutputFormat=org.apache.tinkerpop.gremlin.hadoop.structure.io.gryo.GryoOutputFormat
gremlin.hadoop.jarsInDistributedCache=true
gremlin.hadoop.inputLocation=none
gremlin.hadoop.outputLocation=output
#
# JanusGraph HBase InputFormat configuration
#
janusgraphmr.ioformat.conf.storage.backend=hbase
# Only one HBase node's address needs to be configured here
janusgraphmr.ioformat.conf.storage.hostname=127.0.0.1
janusgraphmr.ioformat.conf.storage.hbase.table=Medical-POC
# Without this setting you will get org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't get the locations
zookeeper.znode.parent=/hbase-unsecure
#
# SparkGraphComputer Configuration
#
spark.master=local[4]
spark.serializer=org.apache.spark.serializer.KryoSerializer
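The same read-hbase.properties graph works for any other OLAP traversal, e.g. counting edges instead of vertices; 8049 is the edge count of the standard Grateful Dead dataset, assuming a single clean load:
gremlin> graph = GraphFactory.open('conf/hadoop-graph/read-hbase.properties')
gremlin> g = graph.traversal().withComputer(SparkGraphComputer)
gremlin> g.E().count()
==>8049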