This repository has been archived by the owner on Dec 16, 2022. It is now read-only.
Allennlp is hard to reproduce? #5627
Unanswered
Boltzmachine asked this question in Q&A
Replies: 1 comment 2 replies
-
Hard to say without knowing what your training setup looks like. The first thing I would do is make sure the data is ordered in the exact same way across different runs with the same seed. To do that, you could add an ID field to your data instances and write out the IDs during training so you can check the order afterwards. If it turns out that your data is ordered in the exact same way for every run given the same seed, the next thing I would check is weight initialization. Save your model's state dict as soon as it is initialized to make sure you're starting off with the same weights for each run.
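A minimal sketch of the data-order check described above, assuming each instance can be tagged with an ID. The function names (`epoch_order`, `order_fingerprint`) and the seed value are illustrative, not part of the AllenNLP API; the idea is just that two runs with the same seed should produce identical fingerprints:

```python
import hashlib
import json
import random

def epoch_order(instance_ids, seed):
    # Shuffle a copy of the instance IDs with a fixed seed,
    # mimicking what a seeded data loader should do each epoch.
    rng = random.Random(seed)
    order = list(instance_ids)
    rng.shuffle(order)
    return order

def order_fingerprint(order):
    # Hash the ordered IDs so two runs can be compared with one string.
    return hashlib.sha256(json.dumps(order).encode()).hexdigest()

# Simulate two runs with the same seed and compare their orderings.
ids = [f"inst-{i}" for i in range(1000)]
run1 = order_fingerprint(epoch_order(ids, seed=13370))
run2 = order_fingerprint(epoch_order(ids, seed=13370))
assert run1 == run2  # same seed should mean the same data order
```

If the fingerprints differ between real runs despite a fixed seed, the data pipeline (not the model) is the source of nondeterminism; the same log-and-hash trick applies to the initial state dict.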
-
I use allennlp to train a model, and I found that the metrics differ by up to 4 percent between two runs with the seed unchanged. Is this a general issue with allennlp? How can I avoid it?