
Effect of evaluate!(mach, ...) is unpredictable. Retrain on all data and re-evaluate? #1027

Open
ablaom opened this issue Jun 15, 2023 · 6 comments

Comments

@ablaom
Member

ablaom commented Jun 15, 2023

Currently, evaluate!(mach, ...) will mutate mach because mach is repeatedly retrained on different views of the data; mach is never trained on all supplied data (or all specified rows=...).

But there is really no way for the user to predict the final state of mach, as she is unlikely to know what the last train/test fold is without looking at the code, especially in the case of acceleration=CPUProcesses (where I think it is impossible) or acceleration=CPUThreads.

So, after evaluate!(mach, ...), mach is in an ambiguous state, and hence more or less useless unless the user explicitly retrains.
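
For concreteness, here is a minimal sketch of the current behaviour (the dataset, model, and measure are illustrative choices, not prescribed by this issue):

```julia
using MLJ

Tree = @load DecisionTreeClassifier pkg=DecisionTree verbosity=0
X, y = @load_iris

mach = machine(Tree(), X, y)

# mach is retrained once per fold; afterwards it holds the learned parameters
# from whichever train/test pair happened to be processed last:
evaluate!(mach, resampling=CV(nfolds=6), measure=log_loss)

# to get a machine trained on *all* the data, the user must retrain explicitly:
fit!(mach)
```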

I wonder what others think of the following suggestion:

  • New default behavior: After evaluate!(mach, ...) performs its current function, mach is trained one last time on all specified rows (all rows by default) and the metrics are evaluated one last time to get training scores, which are added to the returned PerformanceEvaluation object.
  • Add a new keyword argument retrain=true which can be set to false to suppress the final retrain.

Probably we would add the keyword option to evaluate(model, data...) as well, but maybe with a retrain=false default?
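
Under this proposal, usage might look something like the following (the retrain keyword is only the suggestion above, not an existing API):

```julia
# proposed default: after resampling, retrain mach on all specified rows
# and append training scores to the returned PerformanceEvaluation
e = evaluate!(mach, resampling=CV(nfolds=6), measure=log_loss)

# opt out of the final retrain
e = evaluate!(mach, resampling=CV(nfolds=6), measure=log_loss, retrain=false)

# and possibly for the non-mutating form, with the opposite default:
e = evaluate(model, X, y, resampling=CV(nfolds=6), measure=log_loss, retrain=true)
```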

@ExpandingMan @OkonSamuel

@ablaom
Member Author

ablaom commented Jun 15, 2023

@tylerjthomas9
Contributor

I think that this is a fantastic change. As you said, the current mach returned by evaluate! is not useful. It may add some complication that evaluate and evaluate! have different default values for retrain, but I think it makes logical sense not to retrain by default with evaluate(model, data...).

@CameronBieganek

If we set aside learning networks, it seems like we don't really need evaluate!. The non-mutating evaluate is sufficient and more intuitive.

Is there a way to write a non-mutating evaluate that works on learning networks? Or is there already a way to use evaluate on learning networks that is not immediately obvious to me? I'm rusty on MLJ, since I unfortunately have to use Python at work. :(

@OkonSamuel
Member

> If we set aside learning networks, it seems like we don't really need evaluate!. The non-mutating evaluate is sufficient and more intuitive.
>
> Is there a way to write a non-mutating evaluate that works on learning networks? Or is there already a way to use evaluate on learning networks that is not immediately obvious to me? I'm rusty on MLJ, since I unfortunately have to use Python at work. :(

I agree with @CameronBieganek that it would have been best if evaluate just returned a performance evaluation object and did not mutate the machine.
We could still keep the evaluate! method as an internal convenience.
For learning networks, now that the data is no longer attached to the network, we can make calling evaluate on a machine pass a deepcopy of the machine to the internal evaluate! method.
I don't know how this will affect models that wrap external C libraries.
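
A rough sketch of that idea (the dispatch and method names here are illustrative only, not a worked-out design):

```julia
# Non-mutating evaluate for machines: operate on a deep copy, so the caller's
# machine is never retrained as a side effect. Whether deepcopy is safe for
# models wrapping external C libraries is exactly the open question above.
function evaluate(mach::Machine; kwargs...)
    mach_copy = deepcopy(mach)
    return evaluate!(mach_copy; kwargs...)
end
```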

@ablaom
Member Author

ablaom commented Jun 17, 2023

This is great feedback, thanks.

I'm torn between dropping mutation altogether (advantage: simplicity) and proceeding as I propose above (convenience). Some context: in a summer project we are working on automatic logging of workflows using MLflow, and in this context it seemed natural to log a training score, and a serialised set of learned parameters, each time a model is "evaluated". This saves having to specify metrics a second time when the training score is wanted.

> Is there a way to write a non-mutating evaluate that works on learning networks? Or is there already a way to use evaluate on learning networks that is not immediately obvious to me?

@CameronBieganek Not sure what you mean here. Perhaps you mean calling evaluate! on "learning network machines" (a special kind of machine), now deprecated? Generally the idea with learning networks is that, in serious use, they should be "exported" as stand-alone composite models. There was a simplification to this exporting process you may have missed: learning network machines, once an intermediate step in the export process, have been deprecated, as they were seen as an unnecessary complication.

@ablaom
Member Author

ablaom commented Jun 21, 2023

Okay, I now remember the reason for the existing behaviour. The use case is evaluating models that support "warm restart". If you are using Holdout() (or another single train/test pair of row indices) and you re-evaluate the machine after changing a model hyperparameter, then, for some hyperparameters such as n_iterations, the re-evaluation will not require retraining the machine from scratch.
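
For example (the model here is hypothetical; any iterative model implementing MLJ's update/warm-restart logic would do):

```julia
model = SomeIterativeModel(n_iterations=100)   # hypothetical iterative model
mach = machine(model, X, y)

evaluate!(mach, resampling=Holdout(fraction_train=0.7), measure=rms)

# Increase the iteration count. Because `mach` retains the state from the
# previous call, re-evaluation can continue training ("warm restart")
# rather than starting from scratch:
model.n_iterations = 150
evaluate!(mach, resampling=Holdout(fraction_train=0.7), measure=rms)
```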

On another matter, perhaps a better way to get training scores is to add a resampling strategy InSample() whose single train/test pair of row indices is train=rows and test=rows.
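
A minimal sketch of what that strategy might look like, assuming the usual pattern of subtyping ResamplingStrategy and overloading train_test_pairs (the names here are only a proposal):

```julia
import MLJBase

# Proposed strategy: a single "fold" that trains and tests on the same rows,
# so the reported measures are training scores.
struct InSample <: MLJBase.ResamplingStrategy end

MLJBase.train_test_pairs(::InSample, rows) = [(rows, rows)]

# illustrative usage:
# evaluate!(mach, resampling=InSample(), measure=log_loss)
```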
