Hello. I have a question about how to bake the mean subtraction from LayerNorm into a Linear layer.

I have worked out by hand that the mean subtraction from the LayerNorm can be merged into the linear layer by subtracting the mean of each column of the weight matrix. However, because `nn.Linear` stores its weights transposed for memory contiguity, I think one should do `W_ - W_.mean(dim=-1, keepdim=True)` instead of `W_ - W_.mean(dim=-2, keepdim=True)` (as in QuaRot/fake_quant/rotation_utils.py, line 36 at commit 5008669) to subtract from the columns of the weights.

To summarize, since `nn.Linear` computes `x @ W.T`, I think the dimensions should be flipped.

Please correct me if I am wrong.
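For what it's worth, here is a minimal numeric sketch of the folding I have in mind (my own toy example, not the repo's code): centring the *input* of a Linear layer is equivalent to subtracting the per-row mean of the stored weight, i.e. `dim=-1` of `nn.Linear.weight`, precisely because the forward pass is `x @ W.T + b`.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
d_in, d_out = 16, 8

# Reference: the LayerNorm-style centring step followed by a Linear layer.
linear = nn.Linear(d_in, d_out, bias=True)
x = torch.randn(4, d_in)
reference = linear(x - x.mean(dim=-1, keepdim=True))

# Folded version: nn.Linear stores W with shape (out_features, in_features)
# and computes x @ W.T + b, so centring the *input* corresponds to removing
# the mean over the in_features axis of the stored weight, i.e. dim=-1.
folded = nn.Linear(d_in, d_out, bias=True)
folded.load_state_dict(linear.state_dict())
with torch.no_grad():
    W_ = folded.weight.data
    folded.weight.data = W_ - W_.mean(dim=-1, keepdim=True)

print(torch.allclose(folded(x), reference, atol=1e-5))  # True
```

For comparison, `W_ - W_.mean(dim=-2, keepdim=True)` would instead make the layer's *output* zero-mean, which corresponds to baking the centring into a linear layer that feeds the normalization rather than one that consumes its output.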
One more related question.
Am I correct in believing that subtracting the mean from the embeddings is, strictly speaking, incorrect for Llama?
The question was raised in #7, and I would like to know whether subtracting the mean from the embeddings is incorrect in principle for Llama models, which use only RMSNorm.
I am aware that the results are very similar (I have also checked this myself), but I am curious what the exact solution should be.
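To illustrate the "strictly speaking" part, here is the kind of quick check I mean (a toy `rms_norm` of my own, purely for illustration, not the model's module): unlike LayerNorm, RMSNorm does not subtract the mean, so it is not invariant to centring its input, and pre-subtracting the embedding mean is therefore not an exact no-op for an RMSNorm-only model, even though the outputs end up very close in practice.

```python
import torch

def rms_norm(x, eps=1e-6):
    # Plain RMSNorm (no mean subtraction), as used in Llama-style models;
    # a toy re-implementation just for this check.
    return x * torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + eps)

torch.manual_seed(0)
x = torch.randn(4, 16)
x_centred = x - x.mean(dim=-1, keepdim=True)

# RMSNorm is not invariant to removing the mean: the RMS denominator changes,
# so rms_norm(x) != rms_norm(x_centred) in general (though often close).
print(torch.allclose(rms_norm(x), rms_norm(x_centred)))        # False in general
print((rms_norm(x) - rms_norm(x_centred)).abs().max().item())  # nonzero
```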
Many thanks in advance; this is a great piece of work and I am learning a lot.