
Test out how well the ML model abstains from applying labels to abstracts that don't belong within the biomimicry taxonomy. #19

Open
bruffridge opened this issue Apr 22, 2021 · 2 comments


@bruffridge
Member

Summary of Issue

We know the labeller is 53% accurate at labelling abstracts from our training dataset, which only contains abstracts that belong within the biomimicry taxonomy. I wonder how well it abstains when given an abstract that does not belong?
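One way to run this check would be to collect a set of out-of-taxonomy abstracts, record which ones the model abstains on, and report the abstention rate. A minimal sketch, assuming abstentions are represented as `None` in the model's output (the function and label names here are hypothetical, not part of the project's pipeline):

```python
# Sketch of the proposed evaluation: given the model's predictions for a
# batch of out-of-taxonomy abstracts (None meaning "abstained"), compute
# the fraction on which the model correctly abstained.

def abstention_rate(predictions):
    """Fraction of predictions where the model abstained (returned None)."""
    if not predictions:
        return 0.0
    return sum(p is None for p in predictions) / len(predictions)

# Hypothetical example: ideally, on out-of-taxonomy abstracts the rate
# approaches 1.0; here the model abstained on 3 of 4.
print(abstention_rate([None, "attach", None, None]))  # 0.75
```

This would give a single number to track alongside the 53% labelling accuracy, so improving abstention doesn't silently hurt accuracy on in-taxonomy abstracts.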

@shrutix
Member

shrutix commented Apr 22, 2021

Starting by identifying a confidence/relevancy score for each abstract that receives a predicted label.

@pjuangph says: "Would you be able to have your model export a confidence score, or let us see how it predicts on a test article? It could even be a copy-pasted string, or read from a text file and sent to the model for prediction. We want to see the output that it gives."
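If the classifier exposes per-label probabilities (as scikit-learn's `predict_proba` does), the confidence export plus an abstention rule could look like the sketch below. The threshold and label names are illustrative assumptions, not values from this project:

```python
# Sketch of confidence-based abstention: take the top-class probability as
# the confidence score, and abstain (label=None) when it falls below a
# threshold. Assumes probabilities are already computed per abstract.

def predict_or_abstain(probs_per_text, labels, threshold=0.5):
    """For each row of per-label probabilities, return (label, confidence),
    with label=None when the top probability is below the threshold."""
    results = []
    for row in probs_per_text:
        conf = max(row)                      # top-class probability
        label = labels[row.index(conf)]      # label with that probability
        results.append((label if conf >= threshold else None, conf))
    return results

# Hypothetical example: two abstracts, three taxonomy labels.
labels = ["attach", "move", "protect"]
probs = [
    [0.80, 0.15, 0.05],  # confident -> keep the label
    [0.40, 0.35, 0.25],  # low confidence -> abstain
]
print(predict_or_abstain(probs, labels))
```

The threshold would need tuning against held-out in- and out-of-taxonomy abstracts; a fixed 0.5 is just a starting point.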

@bruffridge
Member Author

Related to #38.
