Commit: Update nhl_positivity_index.md

Jacob-Winch authored Mar 12, 2024
1 parent d634723 · commit 80d241c
Showing 1 changed file with 1 addition and 1 deletion.

src/data/nhl_positivity_index.md: 2 changes (1 addition & 1 deletion)
@@ -28,7 +28,7 @@ With the hopes of improving the accuracy of [**cardiffnlp/twitter-roberta-base-s

## Fine-tuning the Model

-With the help of Hugging Face's PEFT: Parameter-Efficient Fine-Tuning library we were able to effectively fine-tune [**cardiffnlp/twitter-roberta-base-sentiment-latest**](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment-latest). We further fine-tuned the RoBERTa-bas model that was trained on ~124M tweets from January 2018 to December 2021 and was fine-tuned on sentiment analysis, in order to better fit our specific task of classifying hockey related comments. After fine-tuning, our Adapter model for [**cardiffnlp/twitter-roberta-base-sentiment-latest**](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment-latest), [Chelberta](https://huggingface.co/UAlbertaUAIS/Chelberta) achieved an accuracy score of 81.2% improving from the base model of 79.2% on our testing dataset mentioned above. The confusion matrix for our model, [Chelberta](https://huggingface.co/UAlbertaUAIS/Chelberta), can be found below.
+With the help of Hugging Face's PEFT: Parameter-Efficient Fine-Tuning library we were able to effectively fine-tune [**cardiffnlp/twitter-roberta-base-sentiment-latest**](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment-latest). We further fine-tuned the RoBERTa-base model that was trained on ~124M tweets from January 2018 to December 2021 and was fine-tuned on sentiment analysis, in order to better fit our specific task of classifying hockey related comments. After fine-tuning, our Adapter model for [**cardiffnlp/twitter-roberta-base-sentiment-latest**](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment-latest), [Chelberta](https://huggingface.co/UAlbertaUAIS/Chelberta) achieved an accuracy score of 81.2% improving from the base model of 79.2% on our testing dataset mentioned above. The confusion matrix for our model, [Chelberta](https://huggingface.co/UAlbertaUAIS/Chelberta), can be found below.

## Data Labelling Process

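For context on the paragraph this commit touches: it describes attaching a PEFT adapter to cardiffnlp/twitter-roberta-base-sentiment-latest. Below is a minimal sketch of that setup, assuming a LoRA adapter; the hyperparameters, target modules, and example comment are illustrative placeholders, not the configuration actually used to train Chelberta.

```python
# Sketch only: wrap the base sentiment model with a PEFT LoRA adapter.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

base_id = "cardiffnlp/twitter-roberta-base-sentiment-latest"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForSequenceClassification.from_pretrained(base_id)

# LoRA settings here are assumptions for illustration, not Chelberta's values.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["query", "value"],  # RoBERTa attention projections
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable

# Classify a (hypothetical) hockey comment with the wrapped model.
inputs = tokenizer("What a game by the Oilers tonight!", return_tensors="pt")
probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # base model's label order: negative, neutral, positive
```

Training the wrapped model on the labelled hockey comments would then proceed with a standard `transformers` training loop; because only the adapter parameters update, the fine-tuning stays lightweight, which is the point of the PEFT approach the paragraph describes.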
