A quick setup guide for fine-tuning models with InstructLab.

Prerequisites:

- micromamba or conda
- CUDA-capable GPU (recommended)
- At least 16 GB of RAM
- Create and activate the environment (with micromamba or conda):

  ```shell
  micromamba create -f environment_cuda.yaml
  micromamba activate instructlab

  # Verify the installation
  which ilab
  ```
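If you don't already have the `environment_cuda.yaml` referenced above, a minimal environment file might look like the sketch below. The channel, Python version, and the `instructlab` pip dependency are assumptions; adjust them to your setup:

```yaml
# Illustrative environment file -- names and versions are assumptions.
name: instructlab
channels:
  - conda-forge
dependencies:
  - python=3.11
  - pip
  - pip:
      - instructlab
```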
- Initialize the InstructLab configuration:

  ```shell
  ilab config init
  ```

  When running `ilab config init`:

  - Set the taxonomy path to `./instructlab/taxonomy` within your project directory
  - Choose the appropriate system profile based on your hardware (NVIDIA, AMD, etc.)
  - Specify the path to your model file
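`ilab config init` writes its answers into a `config.yaml`. As rough orientation, the entries set by the prompts above look something like the excerpt below; the key names vary across InstructLab versions, so treat this as an illustrative sketch and defer to your generated file:

```yaml
# Illustrative excerpt -- key names may differ in your InstructLab version.
generate:
  taxonomy_path: ./instructlab/taxonomy
serve:
  model_path: ./models/granite-7b-lab.Q4_K_M.gguf
```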
You'll need two models for fine-tuning:

- Base model (the one to be fine-tuned): Granite 7B, placed in the `./models/` directory
- Training assistant model: Merlinite 7B, placed in the `./models/` directory
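Before moving on, it can help to confirm that both GGUF files actually landed in `./models/`. A small stdlib-only check; the exact filenames are assumptions based on the models above, so adjust them to whatever you downloaded:

```python
from pathlib import Path

# Filenames are assumptions -- match them to the files you actually placed
# in ./models/.
REQUIRED = [
    "granite-7b-lab.Q4_K_M.gguf",    # base model to fine-tune
    "merlinite-7b-lab.Q4_K_M.gguf",  # training assistant model
]

def missing_models(models_dir: str = "./models") -> list:
    """Return the names of required model files not found in models_dir."""
    root = Path(models_dir)
    return [name for name in REQUIRED if not (root / name).is_file()]

if __name__ == "__main__":
    gaps = missing_models()
    if gaps:
        print("Missing model files:", ", ".join(gaps))
    else:
        print("All model files present.")
```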
- Test model serving:

  ```shell
  ilab model serve --model-path ./models/granite-7b-lab.Q4_K_M.gguf
  ```
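Once the server is up, you can talk to it over its OpenAI-compatible API. A minimal stdlib-only sketch; the port (8000), endpoint path, and model name are assumptions, so check the output of `ilab model serve` for the actual address:

```python
import json
import urllib.request

# Assumed address -- confirm against the ilab model serve logs.
URL = "http://localhost:8000/v1/chat/completions"

def build_request(prompt: str, model: str = "granite-7b-lab") -> dict:
    """Build an OpenAI-style chat completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

def ask(prompt: str) -> str:
    """POST the prompt to the local server and return the reply text."""
    data = json.dumps(build_request(prompt)).encode()
    req = urllib.request.Request(
        URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Example (requires the server from the step above to be running):
# print(ask("What is InstructLab?"))
```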
- Validate taxonomy data:

  ```shell
  ilab taxonomy diff
  ```
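`ilab taxonomy diff` checks new `qna.yaml` files under your taxonomy path. For orientation, a skill contribution looks roughly like the sketch below; the `version` value and the required number of seed examples depend on your InstructLab release, so check the taxonomy repository's schema before submitting:

```yaml
# Illustrative qna.yaml for a skill -- all field values are examples.
version: 2
task_description: Answer questions about traffic signs concisely.
created_by: your-github-username
seed_examples:
  - question: What does a red octagonal sign mean?
    answer: It is a stop sign; come to a complete stop.
  - question: What does a yellow diamond-shaped sign indicate?
    answer: A warning about road conditions ahead.
```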
For a graphical interface, follow the Playground Chat setup instructions.

To set it up locally:

- Install pnpm, npm, or deno
- Clone and run the UI:

  ```shell
  git clone https://github.com/instructlab/ui
  cd ui
  npm install
  npm run dev
  ```