JanusGraph 0.2.0 gremlin-hadoop Data Import Configuration

JanusGraph 0.2.0: Known Issues and Workarounds

The lib directory of JanusGraph 0.2.0 is missing hadoop-hdfs-2.7.2.jar, so you need to manually copy the relevant jar files into the lib directory.
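A quick way to confirm the jar is visible on the console's classpath is to resolve one of its classes from within gremlin.sh (a hypothetical sanity check, not part of the original setup; a ClassNotFoundException here means the jar is still missing):

gremlin> Class.forName('org.apache.hadoop.hdfs.DistributedFileSystem')
==>class org.apache.hadoop.hdfs.DistributedFileSystem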

The "No FileSystem for scheme: hdfs" error requires adding the following property to Hadoop's core-site.xml:

<property>
  <name>fs.hdfs.impl</name>
  <value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
</property>

Environment Variable Configuration

# Path to the Gremlin Console. Optional; used to work around the jars missing from JanusGraph.

export GREMLIN_HOME=/opt/apache-tinkerpop-gremlin-console-3.2.6

# Location of the Hadoop configuration files

export HADOOP_CONF_DIR=/etc/hadoop/conf

# Location of the lib directories of plugins downloaded by the Gremlin Console. Optional; used to work around the jars missing from JanusGraph.

export HADOOP_GREMLIN_LIBS=$GREMLIN_HOME/ext/hadoop-gremlin/plugin:$GREMLIN_HOME/ext/spark-gremlin/plugin

export HBASE_CONF_DIR=/etc/hbase/conf

export CLASSPATH=$HADOOP_CONF_DIR:$HADOOP_GREMLIN_LIBS:$HBASE_CONF_DIR

If you added the jars manually, the Gremlin Console settings above are not needed. Steps to install the Gremlin Console plugins:

hadoop-gremlin plugin

:install org.apache.tinkerpop hadoop-gremlin 3.2.6

:plugin use tinkerpop.hadoop

giraph-gremlin plugin

:install org.apache.tinkerpop giraph-gremlin 3.2.6

:plugin use tinkerpop.giraph

spark-gremlin plugin

:install org.apache.tinkerpop spark-gremlin 3.2.6

:plugin use tinkerpop.spark
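After installing the plugins, restart the console; you can confirm they are active with :plugin list (a quick check; the exact listing varies by installation and is abridged here):

gremlin> :plugin list
==>tinkerpop.hadoop[active]
==>tinkerpop.giraph[active]
==>tinkerpop.spark[active]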

Importing and Querying Data

bin/gremlin.sh

\,,,/

(o o)

-----oOOo-(3)-oOOo-----

plugin activated: janusgraph.imports

gremlin> :plugin use tinkerpop.hadoop

==>tinkerpop.hadoop activated

gremlin> :plugin use tinkerpop.spark

==>tinkerpop.spark activated

gremlin> :load data/grateful-dead-janusgraph-schema.groovy

==>true

==>true

gremlin> graph = JanusGraphFactory.open('conf/janusgraph-hbase.properties')

==>standardjanusgraph[hbase:[kg-server-96.kg.com, kg-agent-95.kg.com, kg-agent-97.kg.com]]

gremlin> defineGratefulDeadSchema(graph)

==>null

gremlin> graph.close()

==>null

gremlin> if (!hdfs.exists('data/grateful-dead.kryo')) hdfs.copyFromLocal('data/grateful-dead.kryo','data/grateful-dead.kryo')

==>null

gremlin> graph = GraphFactory.open('conf/hadoop-graph/hadoop-load.properties')

==>hadoopgraph[gryoinputformat->nulloutputformat]

gremlin> blvp = BulkLoaderVertexProgram.build().writeGraph('conf/janusgraph-hbase.properties').create(graph)

==>BulkLoaderVertexProgram[bulkLoader=IncrementalBulkLoader,vertexIdProperty=bulkLoader.vertex.id,userSuppliedIds=false,keepOriginalIds=true,batchSize=0]

gremlin> graph.compute(SparkGraphComputer).program(blvp).submit().get()

...

==>result[hadoopgraph[gryoinputformat->nulloutputformat],memory[size:0]]

gremlin> graph.close()

==>null

gremlin> graph = GraphFactory.open('conf/hadoop-graph/read-hbase.properties')

==>hadoopgraph[hbaseinputformat->gryooutputformat]

gremlin> g = graph.traversal().withComputer(SparkGraphComputer)

==>graphtraversalsource[hadoopgraph[hbaseinputformat->gryooutputformat], sparkgraphcomputer]

gremlin> g.V().count()

...

==>808
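The count above runs over the OLAP read path (Spark on the HBase input format). To confirm the loaded data is also reachable over the normal OLTP path, a quick spot check directly against HBase (a sketch reusing the schema and properties file from this walkthrough):

gremlin> graph = JanusGraphFactory.open('conf/janusgraph-hbase.properties')
gremlin> g = graph.traversal()
gremlin> g.V().hasLabel('song').limit(3).values('name')
gremlin> graph.close()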

Configuration Files

janusgraph-hbase.properties

gremlin.graph=org.janusgraph.core.JanusGraphFactory

storage.backend=hbase

storage.hostname=kg-server-96.kg.com,kg-agent-95.kg.com,kg-agent-97.kg.com

cache.db-cache=true

cache.db-cache-clean-wait=20

cache.db-cache-time=180000

cache.db-cache-size=0.5

index.search.backend=elasticsearch

index.search.hostname=10.110.18.52

storage.hbase.ext.zookeeper.znode.parent=/hbase-unsecure

storage.hbase.table=Medical-POC

index.search.index-name=Medical-POC

grateful-dead-janusgraph-schema.groovy

def defineGratefulDeadSchema(janusGraph) {

m = janusGraph.openManagement()

// vertex labels

artist = m.makeVertexLabel("artist").make()

song = m.makeVertexLabel("song").make()

// edge labels

sungBy = m.makeEdgeLabel("sungBy").make()

writtenBy = m.makeEdgeLabel("writtenBy").make()

followedBy = m.makeEdgeLabel("followedBy").make()

// vertex and edge properties

blid = m.makePropertyKey("bulkLoader.vertex.id").dataType(Long.class).make()

name = m.makePropertyKey("name").dataType(String.class).make()

songType = m.makePropertyKey("songType").dataType(String.class).make()

performances = m.makePropertyKey("performances").dataType(Integer.class).make()

weight = m.makePropertyKey("weight").dataType(Integer.class).make()

// global indices

m.buildIndex("byBulkLoaderVertexId", Vertex.class).addKey(blid).buildCompositeIndex()

m.buildIndex("artistsByName", Vertex.class).addKey(name).indexOnly(artist).buildCompositeIndex()

m.buildIndex("songsByName", Vertex.class).addKey(name).indexOnly(song).buildCompositeIndex()

// vertex centric indices

m.buildEdgeIndex(followedBy, "followedByWeight", Direction.BOTH, Order.decr, weight)

m.commit()

}
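The bulkLoader.vertex.id property and its byBulkLoaderVertexId index are what IncrementalBulkLoader uses to match incoming vertices against ones already written, so re-running the load updates rather than duplicates. After executing the script, the schema can be inspected through the standard management API (a sketch; the transaction is rolled back since nothing is modified):

gremlin> m = graph.openManagement()
gremlin> m.getVertexLabels()
gremlin> m.getGraphIndex('byBulkLoaderVertexId')
gremlin> m.rollback()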

hadoop-load.properties

#

# Hadoop Graph Configuration

#

gremlin.graph=org.apache.tinkerpop.gremlin.hadoop.structure.HadoopGraph

gremlin.hadoop.graphInputFormat=org.apache.tinkerpop.gremlin.hadoop.structure.io.gryo.GryoInputFormat

gremlin.hadoop.graphOutputFormat=org.apache.hadoop.mapreduce.lib.output.NullOutputFormat

gremlin.hadoop.inputLocation=./data/grateful-dead.kryo

gremlin.hadoop.outputLocation=output

gremlin.hadoop.jarsInDistributedCache=true

#

# GiraphGraphComputer Configuration

#

giraph.minWorkers=2

giraph.maxWorkers=2

giraph.useOutOfCoreGraph=true

giraph.useOutOfCoreMessages=true

mapred.map.child.java.opts=-Xmx1024m

mapred.reduce.child.java.opts=-Xmx1024m

giraph.numInputThreads=4

giraph.numComputeThreads=4

giraph.maxMessagesInMemory=100000

#

# SparkGraphComputer Configuration

#

spark.master=local[*]

spark.executor.memory=1g

spark.serializer=org.apache.spark.serializer.KryoSerializer
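Since this file also carries GiraphGraphComputer settings, the same bulk load can be run on Giraph instead of Spark by swapping the computer in the submit step (an untested sketch mirroring the Spark invocation above):

gremlin> graph = GraphFactory.open('conf/hadoop-graph/hadoop-load.properties')
gremlin> blvp = BulkLoaderVertexProgram.build().writeGraph('conf/janusgraph-hbase.properties').create(graph)
gremlin> graph.compute(GiraphGraphComputer).program(blvp).submit().get()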

read-hbase.properties

#

# Hadoop Graph Configuration

#

gremlin.graph=org.apache.tinkerpop.gremlin.hadoop.structure.HadoopGraph

gremlin.hadoop.graphInputFormat=org.janusgraph.hadoop.formats.hbase.HBaseInputFormat

gremlin.hadoop.graphOutputFormat=org.apache.tinkerpop.gremlin.hadoop.structure.io.gryo.GryoOutputFormat

gremlin.hadoop.jarsInDistributedCache=true

gremlin.hadoop.inputLocation=none

gremlin.hadoop.outputLocation=output

#

# JanusGraph HBase InputFormat configuration

#

janusgraphmr.ioformat.conf.storage.backend=hbase

# Only one HBase node's IP needs to be configured here

janusgraphmr.ioformat.conf.storage.hostname=127.0.0.1

janusgraphmr.ioformat.conf.storage.hbase.table=Medical-POC

# Without this setting you will get: org.apache.hadoop.hbase.client.RetriesExhaustedException: Can't get the locations

zookeeper.znode.parent=/hbase-unsecure

#

# SparkGraphComputer Configuration

#

spark.master=local[4]

spark.serializer=org.apache.spark.serializer.KryoSerializer
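With the HBase input format configured, any OLAP traversal can be run over the loaded graph, not just the vertex count shown earlier. For example, grouping vertices by label (a sketch; label here is the statically imported T.label token available in the console):

gremlin> graph = GraphFactory.open('conf/hadoop-graph/read-hbase.properties')
gremlin> g = graph.traversal().withComputer(SparkGraphComputer)
gremlin> g.V().groupCount().by(label)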
