Induction Head Toxicity Mechanistically Explains Repetition Curse in Large Language Models

The repetition curse is a phenomenon in which Large Language Models (LLMs) generate repetitive or cyclic sequences of tokens. While the repetition curse has been widely observed, its underlying mechanisms remain poorly understood. In this work, we investigate the role of induction heads, a specific type of attention head known for its ability to perform in-context learning, in driving this repetitive behavior. Specifically, we focus on the "toxicity" of induction heads, which we define as their tendency to dominate the model's output logits during repetition, effectively excluding other attention heads from contributing to the generation process. Our findings have important implications for the design and training of LLMs. By identifying induction heads as a key driver of the repetition curse, we provide a mechanistic explanation for this phenomenon and suggest potential avenues for mitigation. We also propose an attention-head regularization technique that could be employed to reduce the dominance of induction heads during generation, thereby promoting more diverse and coherent outputs.
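The "toxicity" described above, a few heads dominating the output logits, can be illustrated with a minimal sketch. The code below is not the paper's method; it assumes a hypothetical `head_logits` array of per-head contributions to the output logits and scores how unevenly the total logit mass is distributed across heads, a quantity one could penalize as a regularizer.

```python
import numpy as np

def head_dominance_penalty(head_logits):
    """Score how much a few heads dominate the output logits.

    head_logits: array of shape (H, V) -- each row is one attention
    head's (hypothetical) additive contribution to the vocab logits.
    Returns a value in [0, 1]: 0 when all heads contribute equally,
    approaching 1 when a single head dominates.
    """
    # Each head's share of the total contribution magnitude.
    norms = np.linalg.norm(head_logits, axis=1)
    shares = norms / norms.sum()
    # Normalized entropy of the share distribution: 1 = balanced.
    entropy = -np.sum(shares * np.log(shares + 1e-12))
    max_entropy = np.log(len(shares))
    return 1.0 - entropy / max_entropy

# Balanced heads -> penalty near 0.
balanced = np.ones((4, 10))
# One dominant ("toxic") head -> penalty near 1.
dominated = np.zeros((4, 10))
dominated[0] = 1.0
```

In a training setup, such a penalty could be added to the loss to discourage any single head (e.g., an induction head) from monopolizing the output distribution; the actual regularizer proposed in the paper may differ.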
@article{wang2025_2505.13514,
  title={Induction Head Toxicity Mechanistically Explains Repetition Curse in Large Language Models},
  author={Shuxun Wang and Qingyu Yin and Chak Tou Leong and Qiang Zhang and Linyi Yang},
  journal={arXiv preprint arXiv:2505.13514},
  year={2025}
}