Hi, Janus community!

I am setting up a Janus deployment.
My setup is going to have Scylla as the backend.
Janus itself is deployed as a set of janusgraph-docker pods, which are exposed via the JanusGraphWsAndHttpChannelizer.
There will be 2 consumers of Janus:

- an HTTP server that allows querying nodes by id
- a Spark cluster doing some traversals
I cannot connect to the Janus Gremlin Server with this code:
```scala
import org.apache.tinkerpop.gremlin.driver.Cluster
import org.apache.tinkerpop.gremlin.driver.remote.DriverRemoteConnection
import org.apache.tinkerpop.gremlin.driver.ser.Serializers
import org.apache.tinkerpop.gremlin.process.traversal.AnonymousTraversalSource

val cluster = Cluster.build()
  .addContactPoint("my-endpoint")
  .port(8182)
  .serializer(Serializers.GRAPHBINARY_V1D0.simpleInstance())
  .create()
// "my_traversal" must match a traversal source bound in the server's config (often "g")
val remoteConnection = DriverRemoteConnection.using(cluster, "my_traversal")
val g = AnonymousTraversalSource.traversal().withRemote(remoteConnection)
g.V().label().next(20)
```
Separately, what's not really clear to me is where the traversal is actually going to happen.
My assumption is that the code above would perform the whole traversal on the remote Janus instance running in a pod, and that the complete result is returned to the client.
Am I correct?
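To make the distinction concrete, here is the same query submitted as an explicit script (a sketch reusing the `cluster` from above): with script submission it is unambiguous that evaluation happens on the server and only results are streamed back, and my understanding is that the bytecode-based `withRemote` variant behaves the same way.

```scala
import org.apache.tinkerpop.gremlin.driver.Client

// The script is evaluated entirely by the Gremlin Server;
// the client only receives the serialized results.
val client: Client = cluster.connect()
val results = client.submit("g.V().limit(20).label().toList()").all().get()
results.forEach(r => println(r.getString))
client.close()
```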
If the previous statement is true, it would be nice to tweak this behaviour for Spark. What I'd like to achieve is pulling the data from Scylla via the remote Janus docker instances but doing the traversal in Spark. Will it suffice to add `.withComputer(classOf[SparkGraphComputer])` to `AnonymousTraversalSource.traversal().withRemote(remoteConnection)`?
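For reference, the OLAP examples I've seen in the TinkerPop and JanusGraph docs don't go through `withRemote` at all; instead the Spark workers read the storage backend directly via a HadoopGraph configuration. A sketch of what I understand that to look like (the properties path is a placeholder; JanusGraph ships similar examples under `conf/hadoop-graph/`):

```scala
import org.apache.tinkerpop.gremlin.spark.process.computer.SparkGraphComputer
import org.apache.tinkerpop.gremlin.structure.util.GraphFactory

// Placeholder path: a HadoopGraph config whose input format reads
// the graph straight from the CQL-compatible backend (Scylla).
val graph = GraphFactory.open("conf/hadoop-graph/read-cql.properties")

// The traversal is compiled into a VertexProgram and executed by the
// Spark workers, not by the Gremlin Server pods.
val g = graph.traversal().withComputer(classOf[SparkGraphComputer])
g.V().count().next()
```

So my question is really whether the `withRemote` + `withComputer` combination can replace this, or whether OLAP traversals always need the HadoopGraph route.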
Thank you in advance for your answers!