
Use a large ensemble of single-label classifiers (so, treat all of the labels independently, ignore the hierarchy, and we have a hundred separate yes/no tasks) and see if this works better than MATCH #74

Open
bruffridge opened this issue Jul 29, 2021 · 0 comments
bruffridge commented Jul 29, 2021

We are considering ensemble methods to improve the precision and recall of our machine learning component. The impetus behind ensemble methods is drawing on the wisdom of crowds, in this case crowds of classifiers. Many different classifiers are trained to detect different signals in the same data, and their verdicts are then aggregated into an ensemble prediction through various voting schemes and policies.
One advantage of ensemble methods is that they do not require each of their component classifiers to be accurate predictors. In fact, an ensemble can learn which of its component classifiers are more reliable and assign them more weight. These weights would be learned in a similar manner to any other parameters in a machine learning model.
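The weighted-voting idea above can be sketched in a few lines. This is an illustrative toy, not PeTaL code: the three classifiers, the multiplicative-weights update rule, and the penalty value are all assumptions standing in for trained models and a real learning procedure.

```python
# Toy sketch of a weighted ensemble vote: unreliable classifiers lose
# weight, so the ensemble does not need every member to be accurate.

def weighted_vote(classifiers, weights, x):
    """Aggregate yes/no verdicts into a single ensemble prediction."""
    score = sum(w * (1 if clf(x) else -1) for clf, w in zip(classifiers, weights))
    return score > 0

def update_weights(classifiers, weights, x, truth, penalty=0.5):
    """Down-weight classifiers that voted wrong (multiplicative-weights style)."""
    return [w * (1.0 if clf(x) == truth else penalty)
            for clf, w in zip(classifiers, weights)]

# Three toy classifiers over integer inputs (stand-ins for trained models).
clfs = [lambda x: x > 0, lambda x: x % 2 == 0, lambda x: x < 100]
ws = [1.0, 1.0, 1.0]

# One labeled example: x=3 is truly positive; the even-number
# classifier votes wrong and loses half its weight.
ws = update_weights(clfs, ws, 3, True)
print(ws)                        # [1.0, 0.5, 1.0]
print(weighted_vote(clfs, ws, 3))  # True
```

In a real system these weights would be fit jointly with the rest of the model (e.g., as the coefficients of a logistic layer over classifier outputs) rather than by this hand-rolled update.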
For PeTaL, it may be fruitful to explore using an ensemble of single-label classifiers. Each classifier would specialize in predicting a certain biomimicry function, although each biomimicry function may have multiple classifiers assigned to it.
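The proposed setup, treating each label as an independent yes/no task, might look like the following sketch. The label names and keyword rules are hypothetical placeholders for trained single-label classifiers, not actual PeTaL labels or code.

```python
# One-vs-rest sketch: ignore the label hierarchy and run one independent
# binary classifier per biomimicry function.

LABELS = ["attach", "move", "protect"]  # hypothetical function labels

# One binary "classifier" per label; a real system would train a model
# (or several, per the comment above) for each function instead.
keyword_rules = {
    "attach": lambda text: "adhesive" in text,
    "move": lambda text: "locomotion" in text,
    "protect": lambda text: "shell" in text,
}

def predict_labels(text):
    """Run every single-label classifier independently; a paper can
    receive zero, one, or many function labels."""
    return [label for label in LABELS if keyword_rules[label](text)]

doc = "the beetle's shell secretes an adhesive layer"
print(predict_labels(doc))  # ['attach', 'protect']
```

Because each task is independent, the hundred-odd classifiers can be trained, evaluated, and swapped out separately, which is what makes this a clean baseline to compare against MATCH.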
