# Evolutionary-Model-Merge

Unofficial Implementation of Evolutionary Model Merging ⭐

- Inspired by work from Sakana.AI: [Evolutionary Optimization of Model Merging Recipes](https://arxiv.org/abs/2403.13187)
- Built with reference to Maxime Labonne's code in AutoMerger, MergeKit, and, of course, CLAUDE-3
- Computation done on @Modal
- In collaboration with @aarongrainer
*(Demo: ParamSpace.mp4)*

## To run your own evolutionary model merge optimizer ⭐

```
python evolve.py
```
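Conceptually, the optimizer treats the merge recipe (e.g. SLERP interpolation weights per layer group) as a real-valued genome and evolves it against the fitness score described below. Here is a minimal sketch of such a loop using the `cma` package; `merge_with_weights` and `evaluate_perplexity` are hypothetical placeholders, not this repo's actual API:

```python
import cma  # pip install cma -- CMA-ES evolutionary optimizer


def fitness(genome):
    """Hypothetical: build a merged model from these weights, return its avg perplexity."""
    merged = merge_with_weights(genome)   # illustrative helper, not in this repo
    return evaluate_perplexity(merged)    # lower perplexity = fitter candidate


# Genome: one SLERP interpolation weight per layer group, initialized at 0.5.
es = cma.CMAEvolutionStrategy([0.5] * 4, 0.2, {"bounds": [0.0, 1.0]})
while not es.stop():
    genomes = es.ask()                                   # sample candidate recipes
    es.tell(genomes, [fitness(g) for g in genomes])      # rank them by perplexity
print("best merge recipe:", es.result.xbest)
```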

The fitness score of an LLM is evaluated by computing its average perplexity on an instruction-following dataset. I use an experimental one, `Ksgk-fy/alignment-sft-test01`. Feel free to replace it with your own ;> The following command evaluates a model's performance:

```
modal run eval.py --model-id mistralai/Mistral-7B-Instruct-v0.2
```
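For reference, an average-perplexity evaluation with `transformers` could look roughly like the sketch below. The dataset split, the `text` field name, and the subsample size are assumptions; the repo's eval.py may differ:

```python
import math

import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mistral-7B-Instruct-v0.2"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)
model.eval()

# Assumed split name and text field; adjust to the actual dataset schema.
ds = load_dataset("Ksgk-fy/alignment-sft-test01", split="train")

nlls = []
with torch.no_grad():
    for row in ds.select(range(min(100, len(ds)))):  # subsample for speed
        enc = tok(
            row["text"], return_tensors="pt", truncation=True, max_length=1024
        ).to(model.device)
        # labels = input_ids -> model returns mean token cross-entropy as .loss
        out = model(**enc, labels=enc["input_ids"])
        nlls.append(out.loss.item())

print("avg perplexity:", math.exp(sum(nlls) / len(nlls)))
```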

Model merging with a given config is done through:

```
modal run merge.py --unique_id
```

As an experimental run, I've only scraped the top two performing 7B LLMs from the Open LLM Leaderboard, and only SLERP merging is carried out.
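For intuition, SLERP (spherical linear interpolation) moves along the great-circle arc between two weight vectors rather than the straight line, which preserves their norm better than plain averaging. A minimal per-tensor sketch (the repo delegates the real merge to MergeKit):

```python
import torch


def slerp(w0: torch.Tensor, w1: torch.Tensor, t: float, eps: float = 1e-8) -> torch.Tensor:
    """Spherically interpolate between two weight tensors at fraction t in [0, 1]."""
    v0, v1 = w0.flatten().float(), w1.flatten().float()
    # Angle between the two weight vectors.
    cos = torch.clamp(torch.dot(v0, v1) / (v0.norm() * v1.norm() + eps), -1.0, 1.0)
    theta = torch.acos(cos)
    if theta.abs() < 1e-4:
        # Nearly parallel vectors: fall back to plain linear interpolation.
        out = (1 - t) * v0 + t * v1
    else:
        s = torch.sin(theta)
        out = (torch.sin((1 - t) * theta) / s) * v0 + (torch.sin(t * theta) / s) * v1
    return out.reshape(w0.shape).to(w0.dtype)


# e.g. merged = slerp(weight_from_model_a, weight_from_model_b, t=0.5)
```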

Computation is done in the cloud using Modal.

Next to try:

https://towardsdatascience.com/create-mixtures-of-experts-with-mergekit-11b318c99562