# Crucially in our testing procedure here, we do *not* fine-tune
# the model during testing for simplicity.
# Most research papers using MAML for this task do an extra
# stage of fine-tuning here that should be added if you are
# adapting this code for research.
Since MAML already has its fine-tuning step (its adaptation step), I'm unsure what this means.
Does it mean that people update the meta-learned weights (the slow weights) again on the validation set? Isn't that effectively just making the training data set larger? Or is it something else?
https://github.com/facebookresearch/higher/blob/main/examples/maml-omniglot.py
Cross-posted: https://stats.stackexchange.com/questions/550990/what-does-it-mean-to-fine-tune-s-maml-model-for-testing
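For reference, here is a minimal sketch (plain PyTorch, with hypothetical names like `evaluate_task`, `inner_steps`, `inner_lr`) of what I currently understand the test-time adaptation step to be: copy the meta-learned weights, take a few gradient steps on the support set of an unseen task, then evaluate on that task's query set. My question is whether "fine-tuning during testing" refers to this, or to something beyond it.

```python
import copy
import torch
import torch.nn.functional as F

def evaluate_task(meta_model, support_x, support_y, query_x, query_y,
                  inner_steps=5, inner_lr=0.1):
    # Work on a copy so the meta-learned (slow) weights are never modified.
    adapted = copy.deepcopy(meta_model)
    opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)

    # My understanding of test-time "fine-tuning": the usual MAML inner loop,
    # run on the support set of a held-out task (no meta-gradient needed).
    adapted.train()
    for _ in range(inner_steps):
        opt.zero_grad()
        loss = F.cross_entropy(adapted(support_x), support_y)
        loss.backward()
        opt.step()

    # Evaluate the adapted model on the query set of the same task.
    adapted.eval()
    with torch.no_grad():
        preds = adapted(query_x).argmax(dim=1)
        return (preds == query_y).float().mean().item()
```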