Pareto front #33
Sounds interesting @ianhbell ... Got a citation in mind?
This should be a good point to start reading: https://www.iitk.ac.in/kangal/Deb_NSGA-II.pdf
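For readers new to the paper, the core operations NSGA-II builds on are the Pareto-dominance test and the extraction of the non-dominated set. A minimal sketch, assuming both objectives are minimized (e.g. prediction error and tree size in symbolic regression); the example points are hypothetical:

```python
def dominates(a, b):
    """True if solution a is at least as good as b in every objective
    and strictly better in at least one (both objectives minimized)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Return the subset of points not dominated by any other point."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (error, tree_size) pairs for five evolved programs.
models = [(0.10, 12), (0.08, 20), (0.10, 30), (0.25, 3), (0.08, 25)]
print(pareto_front(models))
# (0.10, 30) and (0.08, 25) drop out: each is dominated by a model
# that is no worse in error and strictly smaller.
```

NSGA-II layers repeated non-dominated sorting and a crowding-distance tiebreak on top of this test, but the dominance relation above is the whole foundation.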
Hi @ianhbell, just for curiosity: if I define a custom fitness like

```python
from sklearn.metrics import r2_score

def my_custom_fitness(expr, X, y_true):
    y_pred = make_prediction(expr, X)
    # note: sklearn's signature is r2_score(y_true, y_pred)
    return r2_score(y_true, y_pred) - complexity(expr) / 1000
```

then, given these properties, wouldn't the expression found at the end of the run already balance accuracy against complexity? Am I missing something?
Answering to myself with a reference: *Pareto-Front Exploitation in Symbolic Regression*, page 294.

So yes, I was missing something big.
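The distinction can be seen numerically: a single penalized score, like the `r2_score - complexity/1000` fitness above, collapses the whole accuracy/size tradeoff curve into one winner, whereas a Pareto front would retain every non-dominated model. A small illustration with hypothetical numbers:

```python
# Hypothetical (error, tree_size) tuples for three evolved expressions,
# both objectives minimized. None of the three dominates another:
# each trades accuracy against size differently.
models = {"small": (0.30, 3), "medium": (0.12, 10), "large": (0.05, 40)}

def penalized(err, size, weight=0.001):
    """A scalarized fitness (lower is better): one fixed weight
    decides the accuracy/complexity exchange rate up front."""
    return err + weight * size

best = min(models, key=lambda name: penalized(*models[name]))
print(best)  # exactly one winner; the other tradeoffs are discarded
```

Change the `weight` and a different model wins; a Pareto-front approach sidesteps that choice by keeping all three mutually non-dominated expressions.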
Has there been any thought given to Pareto-front optimization? There is always a tradeoff between tree size and model fidelity, which I gather you handle with parsimony. The alternative is to keep any model that is non-dominated, i.e. on the Pareto front. I couldn't see any clear way of hacking that into gplearn.
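The "keep any model that is non-dominated" idea can be sketched as an archive maintained alongside evolution. This is a minimal illustration, not gplearn API: `(error, size)` tuples stand in for evolved programs, both minimized.

```python
def dominates(a, b):
    """a dominates b when it is no worse in both objectives and
    strictly better in at least one (both minimized)."""
    return (a[0] <= b[0] and a[1] <= b[1]) and (a[0] < b[0] or a[1] < b[1])

def update_archive(archive, candidate):
    """Reject the candidate if anything kept dominates it; otherwise
    add it and evict everything it now dominates."""
    if any(dominates(kept, candidate) for kept in archive):
        return archive
    return [kept for kept in archive if not dominates(candidate, kept)] + [candidate]

# Feed in programs as they appear over the generations.
archive = []
for point in [(0.5, 2), (0.2, 8), (0.2, 5), (0.6, 1), (0.1, 20)]:
    archive = update_archive(archive, point)
print(archive)  # (0.2, 8) was evicted by the equally accurate (0.2, 5)
```

Wiring this into gplearn would mean calling something like `update_archive` on each evaluated program, which is the part with no obvious hook in the current code.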