XTuner Release V0.1.20
What's Changed
- [Enhancement] Optimize Memory Usage during ZeRO Checkpoint Conversion by @pppppM in #582
- [Fix] ZeRO2 Checkpoint Convert Bug by @pppppM in #684
- [Feature] support auto saving tokenizer by @HIT-cwh in #696
- [Bug] fix internlm2 flash attn by @HIT-cwh in #693
- [Bug] Fix `meta-tensor` appearing in the LoRA model during the `pth_to_hf` phase by @pppppM in #697
- [Bug] fix cfg check by @HIT-cwh in #729
- [Bugs] Fix bugs caused by sequence parallel when DeepSpeed is not used by @HIT-cwh in #752
- [Fix] Avoid incorrect `torchrun` invocation with `--launcher slurm` by @LZHgrla in #728
- [Fix] Fix failure to save eval results in multi-node pretraining by @HoBeedzc in #678
- [Improve] Support the export of various LLaVA formats with `pth_to_hf` by @LZHgrla in #708
- [Refactor] refactor dispatch_modules by @HIT-cwh in #731
- [Docs] Readthedocs ZH by @pppppM in #553
- [Feature] Support finetuning DeepSeek V2 by @HIT-cwh in #663
- bump version to 0.1.20 by @HIT-cwh in #766
New Contributors
Full Changelog: v0.1.19...v0.1.20