The Rotary Position Embedding (RoPE) is widely used in the attention heads of many large language models (LLMs). It rotates dimensions of the query and key vectors by different angles according to their positions in the input sequence. In long-context modeling, the range of positions can be very large, so RoPE rotates some dimensions through a wide range of angles. We hypothesize that this wide range of rotation angles may prevent LLMs from utilizing those dimensions. To validate this hypothesis, we present a controlled experiment showing that applying RoPE causes low utility of certain dimensions. Our analyses of three LLMs further indicate that these dimensions do not help LLMs with long-context question answering.
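For reference, below is a minimal sketch of the RoPE mechanism the abstract describes; it is not the authors' code. It pairs each dimension of a query/key vector with another and rotates the pair by a position-dependent angle, using the standard frequency schedule with base 10000 from the original RoPE formulation. The function name, the half-split pairing convention, and the example dimensions are illustrative assumptions.

```python
# Illustrative RoPE sketch (assumption: half-split pairing convention, base 10000).
import numpy as np

def rope(x: np.ndarray, position: int, base: float = 10000.0) -> np.ndarray:
    """Apply rotary position embedding to a single query/key vector x of even dimension d."""
    d = x.shape[-1]
    half = d // 2
    # Dimension pair i gets frequency base^(-2i/d); the rotation angle is position * frequency.
    freqs = base ** (-np.arange(half) / half)   # shape (d/2,)
    angles = position * freqs                   # shape (d/2,)
    cos, sin = np.cos(angles), np.sin(angles)
    # Split into two halves and apply a 2-D rotation to each (x1_i, x2_i) pair.
    x1, x2 = x[..., :half], x[..., half:]
    return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=-1)

# High-frequency pairs (small i) sweep through many full rotations as the position
# grows, while low-frequency pairs barely move -- the wide range of rotation angles
# over long contexts that the abstract refers to.
q = np.random.randn(64)
print(rope(q, position=0)[:4])
print(rope(q, position=4096)[:4])
```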
@article{chiang2025_2502.11276,
  title   = {The Rotary Position Embedding May Cause Dimension Inefficiency in Attention Heads for Long-Distance Retrieval},
  author  = {Ting-Rui Chiang and Dani Yogatama},
  journal = {arXiv preprint arXiv:2502.11276},
  year    = {2025}
}