Python networkx and persistence (perhaps in neo4j)


I have an application that creates many thousands of graphs in memory per second. I wish to find a way to persist these for subsequent querying. They aren't particularly large (perhaps ~1k nodes max).

I need to be able to store the entire graph object, including node attributes and edge attributes. I also need to be able to search for graphs within specific time windows, based on a time attribute stored on the nodes.
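For reference, a minimal sketch of the kind of graph I mean (the attribute names here are purely illustrative):

import time
import networkx as nx

# One of the many small graphs the application builds per second.
G = nx.Graph()
G.add_node("a", created_at=time.time(), kind="sensor")
G.add_node("b", created_at=time.time(), kind="sensor")
G.add_edge("a", "b", weight=0.5)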

Is there a simple way to coerce this data into Neo4j? I've yet to find any examples of this, though I have found several Python libs, including an embedded Neo4j and a REST client.

Is the common approach to manually traverse the graph and store it that way?

Are there better persistence alternatives?

NetworkX has several serialization methods.

In your case, I would choose the GraphML serialization:

http://networkx.github.io/documentation/latest/reference/readwrite.graphml.html

It's quite simple to use:

import networkx as nx
nx.write_graphml(G, '/path/to/file')
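A slightly fuller sketch (the attribute names and file path are made up) showing that simple scalar node and edge attributes survive the round trip:

import networkx as nx

G = nx.Graph()
G.add_node("a", created_at=1700000000, kind="sensor")
G.add_node("b", created_at=1700000005, kind="sensor")
G.add_edge("a", "b", weight=0.5)

# GraphML handles scalar attributes (str, int, float, bool).
nx.write_graphml(G, "/tmp/graph_0001.graphml")

# Read it back; the attributes are restored on the loaded graph.
H = nx.read_graphml("/tmp/graph_0001.graphml")
print(H.nodes(data=True))
print(H.edges(data=True))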

To load it into Neo4j, provided you have Neo4j < 2.0, you can use TinkerPop Gremlin to load the GraphML dump into Neo4j:

g.loadGraphML('/path/to/file')

The TinkerPop stack is quite useful, and not only for serialization/deserialization.

It allows you to use different graph databases through a common "dialect" (provided they have a "Blueprints" driver, which most of them do).
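As for the time-window part of your question: this isn't a Neo4j query, but if you do end up keeping the GraphML dumps around, you can filter the reloaded graphs in plain Python on the node attribute. A rough sketch, assuming each graph carries a created_at node attribute as in the question (both the attribute name and the file pattern are assumptions):

import glob
import networkx as nx

def graphs_in_window(pattern, start, end):
    # Yield graphs whose node timestamps all fall inside [start, end].
    # 'created_at' is an assumed attribute name; adjust to your schema.
    for path in glob.glob(pattern):
        g = nx.read_graphml(path)
        times = [d["created_at"] for _, d in g.nodes(data=True) if "created_at" in d]
        if times and start <= min(times) and max(times) <= end:
            yield g

# Usage: iterate over all graphs fully contained in a given window.
# for g in graphs_in_window("/tmp/*.graphml", 1700000000, 1700000100):
#     print(g.number_of_nodes())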

