Replies: 6 comments 3 replies
-
It probably does not make a difference, but did you also try the idiomatic approach?
More promising: did you try setting the storage.buffer-size config property to a higher value than its default of 1024? See the JanusGraph configuration reference.
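A minimal sketch of what that change could look like in the JanusGraph properties file; the file name and backend settings below are placeholders for your deployment:

```properties
# janusgraph.properties -- hypothetical excerpt, backend values are placeholders
storage.backend=cql
storage.hostname=cassandra
# Default is 1024 mutations per batch; raise it to buffer larger writes
storage.buffer-size=10240
```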
-
Hi @vtslab, regarding the second option you've shared: actually no, I did not try anything; I think everything is at the default settings of Apache Atlas and JanusGraph itself.
-
It was just a guess, because the stacktrace mentions "buffer" and concerns the storage backend. If you have direct access to the storage backend from the Gremlin Console, you can just connect and "mask" the value for storage.buffer-size in the config properties file referenced in the JanusGraphFactory statement (so, entirely skipping Atlas). Otherwise, the Atlas admin has to change the property in the Gremlin Server config file and restart Gremlin Server.
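For the direct route, a minimal sketch from the Gremlin Console; the backend values are placeholders for your cluster:

```groovy
// Hypothetical sketch: open the graph directly with JanusGraphFactory,
// overriding storage.buffer-size in code instead of editing the file
graph = JanusGraphFactory.build().
    set('storage.backend', 'cql').
    set('storage.hostname', 'cassandra').
    set('storage.buffer-size', 10240).
    open()
g = graph.traversal()
```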
-
We surely have descended to the bug level. Last idea: set query.fast-property=false. Hopefully, with this setting, g.V(433803264).drop() fetches the vertex without fetching any property values and then pushes the delete to the storage backend right away.
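A minimal sketch of that last idea, again with placeholder backend values:

```groovy
// Hypothetical sketch: with query.fast-property=false the vertex is loaded
// without prefetching its property values, so drop() may avoid deserializing
// the corrupted 'hive_db.parameters' value altogether
graph = JanusGraphFactory.build().
    set('storage.backend', 'cql').
    set('storage.hostname', 'cassandra').
    set('query.fast-property', false).
    open()
g = graph.traversal()
g.V(433803264).drop().iterate()
g.tx().commit()
```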
-
If you need more details, just tell me what I could do.
-
Regarding the details: do you have any idea about the size of the JSON string in the "hive_db.parameters" property? Just in case, I want to try to reproduce it. As it is now, the issue has little chance of being resolved.
-
Hello everyone!
I have a problem related to updating, and subsequently dropping, an entity.
At first I had a property value on a vertex that was causing a lot of pain, as it was a very heavy load. I tried to update the value of 'hive_db.parameters', which contained EXTRA_SUPER_HUGE_JSON, which was... really huge.
The property schema is:
__type.hive_db.parameters | SINGLE | class java.lang.String
So, as a beginner graph DB explorer, I tried to connect with Gremlin and clear up the value.
What I've tried:
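It was roughly something like this; I don't have the exact statement anymore, so the replacement value below is only a reconstruction:

```groovy
// Reconstructed from memory, not the exact statement: overwrite the huge
// JSON value with a short marker string (something starting with "DO_N...")
g.V(433803264).property('hive_db.parameters', 'DO_NOT_USE')
g.tx().commit()
```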
But then the entity was unable to be read, due to this error:
invalid stream header: 444F5F4E
or something like that (I lost the stack trace). Translated from ASCII, 444F5F4E means "DO_N", similar to the start of the value I had written. Then I tried to remove the parameter value:
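Again roughly, reconstructed:

```groovy
// Drop the (now corrupted) property entirely instead of overwriting it
g.V(433803264).properties('hive_db.parameters').drop()
g.tx().commit()
```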
which now gives me:
Required size [1] exceeds actual remaining size [0]
Stacktrace:
```
gremlin> g.V(433803264).valueMap()
Required size [1] exceeds actual remaining size [0]
Type ':help' or ':h' for help.
Display stack trace? [yN]y
java.lang.ArrayIndexOutOfBoundsException: Required size [1] exceeds actual remaining size [0]
        at org.janusgraph.diskstorage.util.StaticArrayBuffer.require(StaticArrayBuffer.java:94)
        at org.janusgraph.diskstorage.util.StaticArrayBuffer.getByte(StaticArrayBuffer.java:170)
        at org.janusgraph.diskstorage.util.StaticArrayBuffer.getBytes(StaticArrayBuffer.java:253)
        at org.janusgraph.diskstorage.util.ReadArrayBuffer.getBytes(ReadArrayBuffer.java:120)
        at org.janusgraph.graphdb.database.serialize.attribute.ByteArraySerializer.read(ByteArraySerializer.java:46)
        at org.janusgraph.graphdb.database.serialize.attribute.ByteArraySerializer.read(ByteArraySerializer.java:23)
        at org.janusgraph.graphdb.database.serialize.StandardSerializer.readObjectNotNullInternal(StandardSerializer.java:268)
        at org.janusgraph.graphdb.database.serialize.StandardSerializer.readObjectNotNull(StandardSerializer.java:243)
        at org.janusgraph.graphdb.database.serialize.attribute.SerializableSerializer.read(SerializableSerializer.java:37)
        at org.janusgraph.graphdb.database.serialize.attribute.SerializableSerializer.read(SerializableSerializer.java:31)
        at org.janusgraph.graphdb.database.serialize.StandardSerializer.readObjectNotNullInternal(StandardSerializer.java:268)
        at org.janusgraph.graphdb.database.serialize.StandardSerializer.readObjectInternal(StandardSerializer.java:258)
        at org.janusgraph.graphdb.database.serialize.StandardSerializer.readObject(StandardSerializer.java:238)
        at org.janusgraph.graphdb.database.EdgeSerializer.readPropertyValue(EdgeSerializer.java:208)
        at org.janusgraph.graphdb.database.EdgeSerializer.readPropertyValue(EdgeSerializer.java:198)
        at org.janusgraph.graphdb.database.EdgeSerializer.parseRelation(EdgeSerializer.java:128)
        at org.janusgraph.graphdb.database.EdgeSerializer.readRelation(EdgeSerializer.java:73)
        at org.janusgraph.graphdb.transaction.RelationConstructor.readRelation(RelationConstructor.java:70)
        at org.janusgraph.graphdb.transaction.RelationConstructor$1.next(RelationConstructor.java:57)
        at org.janusgraph.graphdb.transaction.RelationConstructor$1.next(RelationConstructor.java:45)
        at org.janusgraph.graphdb.query.LimitAdjustingIterator.next(LimitAdjustingIterator.java:94)
        at com.google.common.collect.Iterators$5.computeNext(Iterators.java:638)
        at com.google.common.collect.AbstractIterator.tryToComputeNext(AbstractIterator.java:141)
        at com.google.common.collect.AbstractIterator.hasNext(AbstractIterator.java:136)
        at org.janusgraph.graphdb.query.ResultMergeSortIterator.nextInternal(ResultMergeSortIterator.java:73)
        at org.janusgraph.graphdb.query.ResultMergeSortIterator.next(ResultMergeSortIterator.java:63)
        at org.janusgraph.graphdb.query.ResultSetIterator.nextInternal(ResultSetIterator.java:55)
        at org.janusgraph.graphdb.query.ResultSetIterator.<init>(ResultSetIterator.java:44)
        at org.janusgraph.graphdb.query.QueryProcessor.iterator(QueryProcessor.java:66)
        at org.janusgraph.graphdb.vertices.AbstractVertex.properties(AbstractVertex.java:177)
        at org.apache.tinkerpop.gremlin.process.traversal.step.map.PropertyMapStep.map(PropertyMapStep.java:96)
        at org.apache.tinkerpop.gremlin.process.traversal.step.map.PropertyMapStep.map(PropertyMapStep.java:52)
        at org.apache.tinkerpop.gremlin.process.traversal.step.map.MapStep.processNextStart(MapStep.java:37)
        at org.apache.tinkerpop.gremlin.process.traversal.step.util.AbstractStep.hasNext(AbstractStep.java:143)
        at org.apache.tinkerpop.gremlin.process.traversal.util.DefaultTraversal.hasNext(DefaultTraversal.java:197)
        at org.apache.tinkerpop.gremlin.server.op.AbstractOpProcessor.handleIterator(AbstractOpProcessor.java:93)
        at org.apache.tinkerpop.gremlin.server.op.AbstractEvalOpProcessor.lambda$evalOpInternal$5(AbstractEvalOpProcessor.java:264)
        at org.apache.tinkerpop.gremlin.groovy.engine.GremlinExecutor.lambda$eval$0(GremlinExecutor.java:278)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
        at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
        at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
        at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
        at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
        at java.base/java.lang.Thread.run(Thread.java:829)
```

Dropping this entity also does not work, due to this same size exception. As a temporary workaround I've dropped the relations to this entity, as my graph represents database, table, and column relations (Apache Atlas).
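The workaround was roughly this; it only detaches the vertex from the rest of the graph, the vertex and its corrupted property stay behind:

```groovy
// Hypothetical sketch of the workaround: drop only the edges touching the
// broken vertex, without ever reading its property values
g.V(433803264).bothE().drop().iterate()
g.tx().commit()
```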
Do you have any ideas how I can achieve this?
My environment:
Solr v8.8.2 as the search engine, Cassandra v3.11.10 as the storage backend, JanusGraph libraries v0.5.2, Gremlin Server v3.4.6, in a k8s setup.
Thank you for your time!