Hi,
I'm using the solrdf-1.0 branch with JRE 1.7.0 and have successfully loaded approximately 125,000,000 documents into the store (and optimized it). Plain Solr queries work fine, but even simple SPARQL queries always fail with a "java heap space out of memory" exception (even with LIMIT 1).
I'm running a 4-CPU, 64-bit Debian machine with 16 GB of RAM. I'm not very familiar with Maven, but I tried to give SolRDF a bit more RAM with
$ mvn -DargLine="-Xmx8g" cargo:run
... but I still get the same exception.
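A side note on that command (an assumption about the build setup, not something taken from the SolRDF docs): -DargLine is normally consumed by the Surefire test plugin, so the heap setting may never reach the JVM that cargo:run starts. Depending on whether the Cargo plugin forks the container or runs it inside the Maven process, one of these might work instead (cargo.jvmargs is the standard Cargo property name, assumed here to be overridable from the command line):

$ mvn -Dcargo.jvmargs="-Xmx8g" cargo:run    # forked container JVM
$ MAVEN_OPTS="-Xmx8g" mvn cargo:run         # container embedded in the Maven JVM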
Hi @cKlee,
unfortunately this is part of #96, which is a huge piece of work, still pending. The current implementation of SolRDF uses the general-purpose SPARQL algebra implementation bundled with Jena, which is fine for small or in-memory datasets.
More sophisticated logic is needed here in order to take advantage of the underlying inverted index and, most importantly, to avoid what you're seeing.
Specifically, even small queries containing joins or count() with simple graph patterns end up scanning the entire index; I guess that's the underlying reason for your out-of-memory issue.
The bad news is that I don't have a precise idea of when this will be done, as it is complicated and is taking me a lot of time.
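To make that concrete, here is the kind of request that hits this code path (a sketch only: the /solr/store/sparql path and the q parameter are assumptions based on a default SolRDF setup, so adjust them to your installation). A count() has to touch every matching triple, and the general-purpose engine applies LIMIT only after the graph pattern has been evaluated, so even a LIMIT 1 over a join can end up streaming through the whole index:

$ curl "http://127.0.0.1:8080/solr/store/sparql" \
      --data-urlencode "q=SELECT (COUNT(*) AS ?n) WHERE { ?s ?p ?o }" \
      -H "Accept: application/sparql-results+json"

$ curl "http://127.0.0.1:8080/solr/store/sparql" \
      --data-urlencode "q=SELECT * WHERE { ?s ?p ?o . ?s a ?type } LIMIT 1" \
      -H "Accept: application/sparql-results+json"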