Clarification for model scores #494
-
I have checked that the performance is neither trivial (both models gain close to perfect scores) nor random (both models gain close to random scores). Can someone please elaborate on this? I am not sure I fully understand it. The accuracy on a dataset I am trying to add is ~0.52 for
Replies: 1 comment
-
Your results are okay; they reflect the performance of these two multilingual models on the language and task.
We added this check to make sure the task is neither too easy nor impossible for models, which helps us ensure the dataset is suitable for a benchmark. Thus, the scores should not be nearly perfect (~100% accuracy) or impossible (~0 accuracy) for the tasks.
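The check described above could be sketched roughly as follows. This is a minimal illustration, not the project's actual implementation; the function name and the threshold values are assumptions chosen for the example.

```python
# Hypothetical sketch of the sanity check described above: flag a dataset
# whose model accuracies are near-perfect (~1.0) or near-zero (~0.0).
# The thresholds below are illustrative assumptions, not the real values.

def scores_are_informative(accuracies, lower=0.05, upper=0.95):
    """Return True if every model's accuracy is neither ~0 nor ~1."""
    return all(lower < acc < upper for acc in accuracies)

# An accuracy of ~0.52 for both models passes the check:
print(scores_are_informative([0.52, 0.52]))  # True

# Near-perfect scores would fail it:
print(scores_are_informative([0.99, 1.00]))  # False
```

Note that for a balanced binary task, ~0.5 accuracy would be close to random chance, so a stricter check might also compare scores against the task's chance level rather than just against 0 and 1.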