Summary of Issue

We know the labeller is 53% accurate at correctly labelling abstracts from our training dataset, which only contains abstracts that belong within the biomimicry taxonomy. I wonder how well it does at abstaining when given an abstract that does not.
Starting by identifying a confidence/relevancy score for each abstract alongside its predicted label.
@pjuangph says: "would you be able to have your model export a confidence or let us see how it predicts with a test article? It could even be a copy paste string or read from a text file then send to the model for prediction. We want to see the output that it gives. "
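As a starting point, something like the sketch below could export a confidence alongside each prediction and abstain when that confidence falls below a cut-off. The file names, the threshold value, and the assumption that the model is a scikit-learn pipeline exposing predict_proba are placeholders, not the project's actual setup.

```python
# Minimal sketch: export a per-abstract confidence and abstain below a threshold.
# The model path, label set, and threshold are assumptions; any scikit-learn
# classifier that exposes predict_proba would work the same way.
import joblib
import numpy as np

ABSTAIN_THRESHOLD = 0.6  # assumed cut-off; tune on held-out data

# Hypothetical path to a saved classifier (e.g. TF-IDF + logistic regression pipeline)
model = joblib.load("biomimicry_labeller.joblib")

def predict_with_confidence(abstract: str):
    """Return (label, confidence), or ("ABSTAIN", confidence) if the model is unsure."""
    probs = model.predict_proba([abstract])[0]  # class probabilities for one abstract
    best = int(np.argmax(probs))
    confidence = float(probs[best])
    if confidence < ABSTAIN_THRESHOLD:
        return "ABSTAIN", confidence
    return model.classes_[best], confidence

if __name__ == "__main__":
    # Paste a test abstract here, or read one from a text file as suggested above.
    with open("test_abstract.txt") as f:
        abstract = f.read()
    label, confidence = predict_with_confidence(abstract)
    print(f"predicted label: {label}  (confidence: {confidence:.2f})")
```

Running this on an out-of-taxonomy abstract would also give a rough sense of how often the model abstains versus forcing a label, which speaks to the question in the summary above.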