Thank you for your excellent work!
In Table 6, there are three different fine-tuning settings: single-task (ST), multi-task (MT), and UniIR. According to your explanation in Section 4.1, ST is fine-tuned on each specific dataset only, while MT and UniIR are fine-tuned on all M-BEIR training data. In addition, Table 6 reports evaluation results on M-BEIR_local, the task-specific pool provided by each original dataset.
While the purpose of Table 6 is to highlight the advantages of UniIR fine-tuning, could the observed performance gap (e.g., UniIR vs. ST, or MT vs. ST) be attributed to the inconsistency in training partitions? Specifically, could the improvement simply come from fine-tuning on more training instances? Or is this concern unnecessary?