Hi, I noticed that the data.vocab stored with the baseline model has a different vocabulary length than the language embedding stored in the pretrained model.
For the baseline model "et_plus_h", the data.vocab file has Vocab(2554) for words, but if I load the pretrained model from baseline_models/et_plus_h/latest.pth, the embedding layer model.embs_ann.lmdb_simbot_edh_vocab_none.weight has torch.Size([2788, 768]).
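For reference, here is roughly how I checked both sizes. The paths, the "word" key, and the checkpoint layout are assumptions based on my local setup, so adjust as needed:

```python
import torch

# Assumed locations, based on the report above; your checkout may differ.
vocab_path = "baseline_models/et_plus_h/data.vocab"
ckpt_path = "baseline_models/et_plus_h/latest.pth"

# data.vocab is a pickled dict of Vocab objects (loading it requires the
# repo's Vocab class to be importable); "word" key assumed here.
vocabs = torch.load(vocab_path)
print(len(vocabs["word"]))  # -> 2554

# The checkpoint may store the weights directly or nested under a "model" key.
ckpt = torch.load(ckpt_path, map_location="cpu")
state_dict = ckpt.get("model", ckpt)
key = "embs_ann.lmdb_simbot_edh_vocab_none.weight"
emb = state_dict.get(key, state_dict.get("model." + key))
print(emb.shape)  # -> torch.Size([2788, 768])
```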
Did I miss something?
I haven't previously examined the saved models in enough detail to notice a discrepancy like this, so I'm not sure offhand whether your intuition that they should be the same is correct, although it is plausible. I'll take a deeper look at the ET code and get back to you on this.
I am not training a new model but rather using the pretrained model from baseline_models downloaded via this repo.
The intuition is that model.embs_ann.lmdb_simbot_edh_vocab_none.weight is the weight of the word embedding layer and data.vocab stores the word vocabulary. So Vocab(2554) should be the word vocabulary size according to data.vocab, but if we check the word embedding layer in the pretrained model, it seems the pretrained model accepts a larger vocabulary size (2788) rather than 2554.
I think the pretrained model should have a corresponding data.vocab of size Vocab(2788) rather than 2554?