Added Support for Rotary Positional Embeddings #99
Motivation
Original Paper: RoFormer: Enhanced Transformer with Rotary Position Embedding
Rotary Positional Embeddings (RoPEs) are a common positional embedding type used in many transformer models today.
RoPEs work by applying a unique rotation transformation to the vectors that represent each token within our q and k tensors, based on each token's respective position $$m$$ in the sequence.
To compute attention, we must first compute $$QK^T$$. This effectively takes the dot product between the vector embeddings of the tokens in $$Q$$ and $$K$$. Given two tokens at positions $$i$$ and $$j$$: the closer $$i$$ and $$j$$ are to each other, the more similar the rotations applied to their vector embeddings, so the dot product between the two embedding vectors is largely unchanged. The further apart the two tokens are, the more the transformations applied to their embeddings diverge, which causes the dot product to decay. As the dot product decays, so does the attention weight between the two tokens, so the model effectively learns that, for a given token, the tokens near it should be attended to more strongly than tokens much further away.
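This relative-position behaviour is easy to see on a single 2-D chunk of q and k. Below is a toy sketch (not code from this PR; the `rotate` helper and the fixed `theta` are purely illustrative): rotating q and k by angles proportional to their positions makes their dot product depend only on the offset between the positions, not on the absolute positions themselves.

```python
import math
import torch

def rotate(vec: torch.Tensor, pos: int, theta: float) -> torch.Tensor:
    """Rotate a 2-D vector by pos * theta radians (illustrative helper)."""
    c, s = math.cos(pos * theta), math.sin(pos * theta)
    return torch.tensor([[c, -s], [s, c]]) @ vec

q = torch.tensor([1.0, 0.5])   # a single 2-D "chunk" of a query embedding
k = torch.tensor([0.3, 0.9])   # a single 2-D "chunk" of a key embedding
theta = 0.1

# Same relative offset (5 - 3 == 12 - 10 == 2) -> identical dot products,
# regardless of absolute position; a different offset gives a different score.
print(torch.dot(rotate(q, 5, theta), rotate(k, 3, theta)))
print(torch.dot(rotate(q, 12, theta), rotate(k, 10, theta)))
print(torch.dot(rotate(q, 40, theta), rotate(k, 3, theta)))
```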
A more detailed explanation
Fundamentally, RoPE works by dividing the embedding space of our q and k vectors (the $$\text{head\_dim}$$) into many chunks of two. Each 2-dimensional chunk can be thought of as a vector subcomponent of q and k projected onto a 2-dimensional plane within the higher-dimensional space of the q and k embedding. RoPE "rotates" these planar chunks of our q and k vectors uniquely based on the index of the token in the sequence: each chunk is rotated by some unique amount $$\theta_{m,\,d/2}$$ determined by the index $$m$$ of the token in the sequence and the dimension $$d$$ of the subcomponents of q and k being rotated. A minimal sketch of this pairwise rotation is shown below.
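For concreteness, here is a minimal, self-contained sketch of that per-pair rotation (illustrative only; `apply_rope`, the `(seq_len, head_dim)` layout, and the `base=10000.0` frequency schedule are assumptions, not necessarily the interface added in this PR):

```python
import torch

def apply_rope(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    """Apply rotary embeddings to x of shape (seq_len, head_dim) for one head."""
    seq_len, head_dim = x.shape
    half = head_dim // 2

    # Per-pair rotation frequencies, and per-position angles: (seq_len, half).
    inv_freq = 1.0 / (base ** (torch.arange(0, half, dtype=torch.float32) * 2.0 / head_dim))
    angles = torch.arange(seq_len, dtype=torch.float32)[:, None] * inv_freq[None, :]
    cos, sin = angles.cos(), angles.sin()

    # Treat (x1, x2) as the two coordinates of each 2-D chunk and rotate each
    # chunk by its position-dependent angle.
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = torch.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

q = torch.randn(16, 64)                    # (seq_len, head_dim)
k = torch.randn(16, 64)
scores = apply_rope(q) @ apply_rope(k).T   # attention logits with RoPE applied
```

Note that this sketch pairs adjacent dimensions (0, 1), (2, 3), ..., as in the original RoFormer formulation; many implementations instead pair dimension $$i$$ with $$i + d/2$$ (the "rotate half" convention), which applies the same rotations to a different pairing of dimensions.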