
Demo Issues #1

Open
freedomtrain opened this issue Jan 3, 2019 · 1 comment

Comments

@freedomtrain

I have set up the Docker demo following the instructions on this page:
https://dev.singularitynet.io/workshops/docker-opencog/#running-the-vqa-demo

When I open the notebook and go to interface-images-demo, there are no instructions.
Is there documentation that explains what the next steps are?

When I step through the code on the page and arrive at the block below, I receive error messages.

import os

models = os.path.expanduser('~/projects/data/visual_genome/')
network_runner.runner = SplitMultidnnRunner(models)
vqa = PatternMatcherVqaPipeline(extractor, question_converter, atomspace, None)

The error message is as follows:

loading dictionary from /home/relex/projects/data/visual_genome/dictionary.pkl
no threshold for 25, using mean value 0.8608333333333336
no threshold for 90, using mean value 0.8608333333333336
no threshold for 107, using mean value 0.8608333333333336
no threshold for 114, using mean value 0.8608333333333336
no threshold for 180, using mean value 0.8608333333333336
no threshold for 220, using mean value 0.8608333333333336
no threshold for 225, using mean value 0.8608333333333336
no threshold for 285, using mean value 0.8608333333333336
no threshold for 392, using mean value 0.8608333333333336
no threshold for 613, using mean value 0.8608333333333336
no threshold for 630, using mean value 0.8608333333333336
no threshold for 631, using mean value 0.8608333333333336

@noskill
Owner

noskill commented Jan 9, 2019

Hi @freedomtrain!
It's not an error message, it's a warning. Since it surprises users, I am going to hide it.

The message says that some models don't have a threshold.
This demo uses a model trained on a balanced dataset. The COCO VQA dataset we used for this demo (https://visualqa.org/download.html) is not properly balanced, so better performance may be achieved by fine-tuning the model for this particular dataset. For a balanced dataset the threshold is 0.5, and here we shift it a bit.
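The fallback behavior the warning describes can be sketched roughly as follows. This is a minimal illustration, not the demo's actual code: the class ids and threshold values are made up, and the real implementation in the repository may differ.

```python
# Hypothetical sketch: per-class thresholds with a mean-value fallback,
# mirroring the "no threshold for X, using mean value Y" warning.
# Class ids and values are illustrative only.
thresholds = {25: None, 42: 0.91, 90: None, 17: 0.83}

# Mean over the thresholds that are actually defined.
known = [v for v in thresholds.values() if v is not None]
mean_threshold = sum(known) / len(known)

def threshold_for(class_id):
    """Return the class's threshold, falling back to the mean when missing."""
    t = thresholds.get(class_id)
    if t is None:
        print(f"no threshold for {class_id}, using mean value {mean_threshold}")
        return mean_threshold
    return t
```

So a class with no tuned threshold simply inherits the mean of the tuned ones, which is why the same mean value repeats across all the warning lines.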
