🚀 The feature, motivation and pitch
The motivation is to allow ReFT (Representation Finetuning) interventions to be applied on the fly during inference, which can be done in a batchwise manner. This is much faster than applying LoRAs.
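A minimal sketch of what "batchwise, on the fly" could look like, assuming the LoReFT formulation h' = h + Rᵀ(Wh + b − Rh) from the ReFT paper. The function names, shapes, and dispatch logic here are illustrative assumptions, not pyreft's actual API:

```python
import numpy as np

def loreft_intervention(h, R, W, b):
    """Apply a LoReFT-style edit to hidden states h of shape (batch, d).

    h' = h + R^T (W h + b - R h), where R is a low-rank (r, d) projection.
    Illustrative sketch only; real implementations operate on framework
    tensors inside the model's forward pass.
    """
    proj = h @ R.T             # (batch, r): current activations in the subspace
    edit = h @ W.T + b - proj  # (batch, r): learned target minus current value
    return h + edit @ R        # (batch, d): write the edit back into h

def batched_intervention(h, adapters, idx):
    """Apply a possibly different intervention to each row of the batch.

    adapters: list of (R, W, b) triples; idx[i] selects the adapter for row i.
    This per-row dispatch is the batchwise behaviour the issue asks for.
    """
    out = h.copy()
    for i, a in enumerate(idx):
        R, W, b = adapters[a]
        out[i] = loreft_intervention(h[i:i + 1], R, W, b)[0]
    return out
```

Because the edit is a small additive term computed from the current activations, it can be toggled per request without touching the base model's weights, unlike merging a LoRA delta.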
Alternatives
LoRA is too slow for this use case: its adapter weights must either be merged into the base weights or evaluated as extra low-rank matrix multiplications at every targeted layer, which increases the number of operations per forward pass.
Additional context
See stanfordnlp/pyreft#63