Instruct Fine-tuning Gemma using QLoRA and Supervised Fine-tuning
This is a comprehensive notebook and tutorial on how to fine-tune the gemma-2b-it model.
All the code will be available on my GitHub. Do drop by and give a follow and a star.
Before delving into the fine-tuning process, ensure that you have the following prerequisites in place:
- GPU: gemma-2b can be fine-tuned on a T4 (free Google Colab), while gemma-7b requires an A100 GPU.
- Python Packages: Ensure that you have the necessary Python packages installed. You can use the following commands to install them:
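The original install commands are not shown here; as a sketch, a QLoRA fine-tuning workflow with Hugging Face tooling typically relies on `transformers`, `peft`, `bitsandbytes`, `trl`, `accelerate`, and `datasets` (the exact versions below are an assumption, not pinned by this tutorial):

```shell
# Assumed package set for QLoRA + supervised fine-tuning with Hugging Face libraries;
# adjust versions to match your environment.
pip install -q -U transformers peft bitsandbytes trl accelerate datasets
```

On Colab, run this in a notebook cell prefixed with `!`, then restart the runtime if prompted so the newly installed packages are picked up.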
Hugging Face Link: gemma-2b-mt-German-to-English