This project fine-tunes Google's Gemma-7B large language model. The training data is the "Alpaca cleaned" dataset, a cleaned-up version of the instruction-following dataset originally released by Stanford. Due to resource limitations, only 300 rows of the dataset were used for fine-tuning.
After fine-tuning, the model was pushed to the Hugging Face Hub, where it can be viewed here.
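For reference, below is a minimal sketch of how such a fine-tuning run can be set up with the Hugging Face stack. The dataset id (`yahma/alpaca-cleaned`), the LoRA settings, the output/repo name, and the use of `trl`'s `SFTTrainer` are assumptions for illustration, not confirmed details of this project; exact argument names vary across `trl` versions.

```python
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Take only the first 300 rows of the cleaned Alpaca data, as described above.
# "yahma/alpaca-cleaned" is an assumed dataset id.
dataset = load_dataset("yahma/alpaca-cleaned", split="train[:300]")

def to_text(example):
    # Fold the Alpaca fields (instruction / input / output) into one training string.
    prompt = example["instruction"]
    if example["input"]:
        prompt += "\n\n" + example["input"]
    return {"text": prompt + "\n\n" + example["output"]}

dataset = dataset.map(to_text)

trainer = SFTTrainer(
    model="google/gemma-7b",  # base model to fine-tune
    train_dataset=dataset,
    # LoRA keeps the memory footprint small; these values are illustrative.
    peft_config=LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"),
    args=SFTConfig(
        output_dir="gemma-7b-alpaca-300",  # hypothetical output / hub repo name
        dataset_text_field="text",
        num_train_epochs=1,
        per_device_train_batch_size=1,
        gradient_accumulation_steps=4,
    ),
)
trainer.train()

# Push the resulting model/adapter to the Hugging Face Hub.
trainer.push_to_hub()
```

Parameter-efficient fine-tuning (LoRA) is shown here because it is a common way to fit a 7B-parameter model into limited GPU memory; full fine-tuning would follow the same overall flow with the `peft_config` removed.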