About the training modules in the paper #18
Hello, it's great work. I read the paper and am confused about the training modules. Do you train all the modules (attention and FFN) of the MM-DiT blocks of CogVideoX? Maybe I missed some details. Hoping for a reply, thanks.

Thanks for your interest. Yes, we train all the modules (attention and FFN) of the MM-DiT blocks in CogVideoX, but in practice it may work to train just a LoRA.

Hi, can you give a rough estimate of the amount of training data needed if training just the cross-attn adapter?

The specific amount of training data should be determined from experimental results; in general, more training data is better, as long as its quality is ensured.
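
For anyone who wants to try the LoRA route mentioned above, here is a minimal sketch of attaching a LoRA adapter to the attention projections of the CogVideoX transformer using diffusers and peft. The rank, alpha, and target-module names below are illustrative assumptions, not the authors' training configuration.

```python
# Minimal sketch: LoRA on the attention projections of the CogVideoX MM-DiT blocks.
# Assumptions: diffusers with CogVideoX support (>= 0.30) and peft are installed;
# rank/alpha and target modules are illustrative, not the paper's actual setup.
import torch
from diffusers import CogVideoXTransformer3DModel
from peft import LoraConfig

transformer = CogVideoXTransformer3DModel.from_pretrained(
    "THUDM/CogVideoX-2b", subfolder="transformer", torch_dtype=torch.bfloat16
)

# Freeze the base weights; only the injected LoRA parameters will be trainable.
transformer.requires_grad_(False)

# Target the q/k/v/out projections inside each block's attention.
# To get closer to the full training described above (attention + FFN), one
# could also extend target_modules with the feed-forward projections, e.g.
# "ff.net.0.proj" and "ff.net.2" (names assumed from the diffusers implementation).
lora_config = LoraConfig(
    r=64,
    lora_alpha=64,
    init_lora_weights=True,
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],
)
transformer.add_adapter(lora_config)

# Sanity check: count the trainable (LoRA-only) parameters.
trainable = sum(p.numel() for p in transformer.parameters() if p.requires_grad)
print(f"Trainable LoRA parameters: {trainable:,}")
```

Full fine-tuning of all attention and FFN weights, as in the reply above, would instead unfreeze those modules directly; the LoRA route trades some capacity for a much smaller trainable footprint.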