
how to improve the memory ability of lora fine tuning? #153

Open
K-Alex13 opened this issue Jan 22, 2024 · 2 comments

Comments

@K-Alex13

How can I improve the memory ability of LoRA fine-tuning?

@Az0nik

Az0nik commented Jan 28, 2024

Make it bigger and faster! 😂😂😂

@MH-Python

This is more related to the underlying model that you are applying LoRA to. The activations of its many layers (even without gradients) already take up a huge amount of memory, so LoRA cannot reduce memory consumption much even if you are only fine-tuning a few thousand parameters.
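
For illustration, here is a minimal sketch of the usual levers people pull when activation and base-model memory dominate: a quantized frozen base model plus gradient checkpointing, on top of the LoRA adapter. It assumes a Hugging Face transformers + peft + bitsandbytes setup, which is not necessarily what this repo uses, and the model name is just a placeholder.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_model_name = "meta-llama/Llama-2-7b-hf"  # hypothetical base model

# Load the frozen base weights in 4-bit so the weight memory of the
# underlying model (which LoRA leaves untouched) is reduced.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    base_model_name,
    quantization_config=bnb_config,
    device_map="auto",
)

# Enables gradient checkpointing, so layer activations are recomputed in the
# backward pass instead of all being kept in memory; this is where most of
# the memory goes during LoRA fine-tuning.
model = prepare_model_for_kbit_training(model)

# The LoRA adapter itself adds only a small number of trainable parameters;
# it does not by itself shrink the activations of the base model.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

Shorter sequence lengths and smaller micro-batches (with gradient accumulation) also cut activation memory directly.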
