Speed up MapReduceIndexManagement with Spark/Hadoop #4823
VertigoStr asked this question in Q&A. Unanswered, 0 replies.
Hello!
I have an Apache Spark and Apache Hadoop cluster, and I would like to use it to speed up the indexing process. Reindexing our data (20 billion vertices and 50 billion edges) takes up to 14 days when running the following code:
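The original snippet did not survive here. The standard JanusGraph pattern for reindexing via MapReduce, which this likely resembles, is sketched below; the index name and properties path are placeholders, and running it requires a JanusGraph installation with the Hadoop module on the classpath:

```java
import org.janusgraph.core.JanusGraph;
import org.janusgraph.core.JanusGraphFactory;
import org.janusgraph.core.schema.JanusGraphManagement;
import org.janusgraph.core.schema.SchemaAction;
import org.janusgraph.hadoop.MapReduceIndexManagement;

public class Reindex {
    public static void main(String[] args) throws Exception {
        // Open the graph from its properties file (path is a placeholder)
        JanusGraph graph = JanusGraphFactory.open("conf/janusgraph.properties");
        JanusGraphManagement mgmt = graph.openManagement();

        // Submit the reindex as a MapReduce job; "byName" is a placeholder index name
        MapReduceIndexManagement mr = new MapReduceIndexManagement(graph);
        mr.updateIndex(mgmt.getGraphIndex("byName"), SchemaAction.REINDEX).get();

        mgmt.commit();
        graph.close();
    }
}
```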
While studying the source code of MapReduceIndexManagement, I noticed that it is possible to pass a configuration for the connection to Hadoop:
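The referenced configuration did not survive extraction. As an assumption: MapReduce jobs typically pick up the Hadoop client settings (core-site.xml, mapred-site.xml, yarn-site.xml) from the classpath, so pointing the reindex job at a YARN cluster would involve keys along these lines; all host names and ports below are placeholders:

```
# Hypothetical Hadoop client settings pointing the job at a YARN cluster
mapreduce.framework.name=yarn
yarn.resourcemanager.address=rm-host:8032
fs.defaultFS=hdfs://namenode:8020
```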
Can you help me: is it possible to run this reindexing job on our existing Spark/Hadoop cluster to speed it up, and if so, how should it be configured?
Our JanusGraph configuration for the graph:
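The configuration itself did not survive here. For context, a JanusGraph graph properties file generally has this shape; the backend choices and hosts below are generic placeholders, not the asker's actual settings:

```
# Generic example of a janusgraph.properties layout (placeholders only)
storage.backend=hbase
storage.hostname=zk1,zk2,zk3
index.search.backend=elasticsearch
index.search.hostname=es1,es2
```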