Add a compositional neural instance retriever to handle incompleteness in the data #123
We have two different neural OWL reasoners that can retrieve instances by using embeddings/neural networks. Todos for @LckyLke:
Once these steps are done, @Jean-KOUAGOU can start the integration of the compositional neural instance retriever. Please thumbs up if you like the plan.
We have to see if the interface, i.e. the abstract neural reasoner, needs adjustment depending on which methods the neural reasoners should really have to implement. Also, we now have dicee (for KGE) and ontolearn (for the example script) as dependencies. Feature branch: here
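The interface question could be sketched as an abstract base class. This is a hypothetical sketch only: the class name `AbstractNeuralReasoner`, the `instances` method, and the `confidence_threshold` parameter are assumptions for illustration, not the actual owlapy API.

```python
from abc import ABC, abstractmethod
from typing import Iterable


class AbstractNeuralReasoner(ABC):
    """Minimal contract a neural reasoner might be required to implement (assumed names)."""

    @abstractmethod
    def instances(self, class_expression: str,
                  confidence_threshold: float = 0.5) -> Iterable[str]:
        """Retrieve individuals predicted to belong to the class expression."""


class DummyNeuralReasoner(AbstractNeuralReasoner):
    """Toy implementation backed by a fixed score table instead of embeddings."""

    def __init__(self, scores):
        # scores maps (individual, class_expression) -> membership probability
        self.scores = scores

    def instances(self, class_expression, confidence_threshold=0.5):
        return [ind for (ind, ce), p in self.scores.items()
                if ce == class_expression and p >= confidence_threshold]
```

Pinning the signatures down in one abstract class would make it easy to check which methods each concrete neural reasoner really needs to implement.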
I have been thinking about following our design:

```python
from owlapy.owl_ontology_manager import SyncOntologyManager, OntologyManager
from owlapy.owl_reasoner import StructuralReasoner, SyncReasoner

cwr_reasoner = StructuralReasoner(ontology=OntologyManager().load_ontology(path="KGs/Family/father.owl"))
owr_reasoner = SyncReasoner(ontology=SyncOntologyManager().load_ontology(path="KGs/Family/father.owl"), reasoner="HermiT")
neural_reasoner_A = NeuralReasonerA(ontology=NeuralReasonerOntologyManager(), ...)
neural_reasoner_B = NeuralReasonerB(ontology=NeuralReasonerOntologyManager(), ...)
```
Why do you think that we need ontolearn here?
Yes, I guess I should refactor and use StructuralReasoner directly for the reference results.
Now we don't use ontolearn anymore :)
But why should we want some form of ontology parameter? Usually we don't want to load an ontology into memory for a KGE-based model, do we?
You are right that we do not want to load an ontology into memory for KGE-based models if the path of a pretrained model is given. The purpose of …
I will think of a sensible implementation for this class as well, then 👍🏼
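The constructor logic discussed above could look roughly like the following. This is a hedged sketch under stated assumptions: the class name `NeuralReasonerBase` and the parameters `path_of_pretrained_model` and `path_of_kb` are illustrative, not the actual implementation.

```python
class NeuralReasonerBase:
    """Sketch: only touch the knowledge base when no pretrained model is given."""

    def __init__(self, path_of_pretrained_model=None, path_of_kb=None):
        if path_of_pretrained_model is not None:
            # KGE case: embeddings are loaded from disk; the ontology itself
            # never needs to enter memory.
            self.model_path = path_of_pretrained_model
            self.ontology = None
        elif path_of_kb is not None:
            # No pretrained model: remember the KB path so a model can be
            # trained later (training itself stays approach-specific).
            self.model_path = None
            self.ontology = path_of_kb
        else:
            raise ValueError("Provide either a pretrained model path or a KB path.")
```

This keeps the ontology parameter optional, so the common KGE workflow (pretrained model, no ontology in memory) stays cheap.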
For the regression test to work with the GitHub Action, we should provide a pretrained model for Family somewhere to fetch during testing.
Why don't we train the model from scratch and evaluate it, as we have been doing in dice-embeddings (see for example https://github.com/dice-group/dice-embeddings/blob/develop/tests/test_regression_conex.py)?
What hardware are we running this action on? I thought this might take too long, but if there is a GPU available this would work ofc 👍🏻
It doesn't take much time even without a GPU, provided that the number of epochs and embedding dims are set accordingly 😀
Ok I will test how long it takes :) |
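To illustrate the shape such a from-scratch regression test could take, here is a toy stand-in: the trainer below is not dicee (the real test would call dicee's training entry point as in the linked dice-embeddings test); it just nudges head and tail embeddings of observed triples together so the test has a deterministic quantity to assert on.

```python
import random


def train_toy_embeddings(triples, dim=4, epochs=25, seed=1):
    """Toy stand-in trainer: averages head/tail embeddings of observed triples."""
    random.seed(seed)
    entities = {e for h, _, t in triples for e in (h, t)}
    emb = {e: [random.uniform(-1, 1) for _ in range(dim)] for e in entities}
    for _ in range(epochs):
        for h, _, t in triples:
            for i in range(dim):
                mid = (emb[h][i] + emb[t][i]) / 2
                emb[h][i], emb[t][i] = mid, mid
    return emb


def test_regression_family_toy():
    # Tiny Family-style KB; with few epochs and small dims this runs in
    # milliseconds on CPU, which is the point being made above.
    triples = [("anna", "hasChild", "heinz"), ("markus", "hasChild", "heinz")]
    emb = train_toy_embeddings(triples)
    # Regression assertion: connected entities ended up close together.
    dist = sum((a - b) ** 2 for a, b in zip(emb["anna"], emb["heinz"]))
    assert dist < 1e-6
```

The real test would replace `train_toy_embeddings` with a dicee training run on the Family KB and assert on a retrieval metric (e.g. a minimum MRR) instead of an embedding distance.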
Should preprocessing involve training a model if a kb path is provided? |
I would say no because the training process might differ between approaches. |
Because of issue #124, I removed the neural ontology manager for the time being.
A compositional neural instance retriever can parse and encode a class expression into a continuous vector, which is then used together with the embedding of an individual to produce a probability that the individual is an instance of the class expression.
We plan to implement three compositional neural instance retrievers based on: 1) a transformer architecture, 2) an RNN architecture, 3) the NAND operator and direct interpretations of DL concepts as sets.
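Variant 3 can be sketched without any neural machinery: given per-atomic-concept membership scores (which an embedding model would normally supply), complex class expressions are scored compositionally with fuzzy set operations, deriving disjunction from conjunction and negation in NAND style. The tuple-based expression syntax and the `atomic_scores` table below are illustrative assumptions, not the planned implementation.

```python
def score(individual, expr, atomic_scores):
    """Probability-like membership of `individual` in class expression `expr`.

    expr is a nested tuple: ("atom", name), ("not", e),
    ("and", e1, e2, ...), or ("or", e1, e2, ...).
    """
    op = expr[0]
    if op == "atom":
        # In the real retriever this score would come from embeddings.
        return atomic_scores.get((individual, expr[1]), 0.0)
    if op == "not":
        return 1.0 - score(individual, expr[1], atomic_scores)
    if op == "and":
        # Conjunction as fuzzy intersection (minimum).
        return min(score(individual, e, atomic_scores) for e in expr[1:])
    if op == "or":
        # a OR b == NOT (NOT a AND NOT b): built from AND and NOT alone.
        return 1.0 - min(1.0 - score(individual, e, atomic_scores) for e in expr[1:])
    raise ValueError(f"Unknown operator: {op}")
```

Because the composition is purely structural, the same recursion would work whether the atomic scores come from a transformer, an RNN, or a KGE model.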
An initial implementation with a transformer architecture is available at https://github.com/dice-group/CoNeuralReasoner
Integration should start in the coming weeks.