Amplify Adjacent Token Differences: Enhancing Long Chain-of-Thought Reasoning with Shift-FFN

Recently, models such as OpenAI-o1 and DeepSeek-R1 have demonstrated remarkable performance on complex reasoning tasks through Long Chain-of-Thought (Long-CoT) reasoning. Although distilling this capability into student models significantly enhances their performance, this paper finds that fine-tuning LLMs with full parameters or low-rank LoRA on long CoT data often leads to Cyclical Reasoning, where models repeatedly reiterate previous inference steps until reaching the maximum length limit. Further analysis reveals that smaller differences between the representations of adjacent tokens correlate with a higher tendency toward Cyclical Reasoning. To mitigate this issue, this paper proposes Shift Feedforward Networks (Shift-FFN), a novel approach that edits the current token's representation with the previous one before feeding it into the FFN. This architecture dynamically amplifies the representation differences between adjacent tokens. Extensive experiments on multiple mathematical reasoning tasks demonstrate that LoRA combined with Shift-FFN achieves higher accuracy and a lower rate of Cyclical Reasoning across various data sizes compared to full fine-tuning and standard LoRA. Our data and code are available at this https URL
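To make the idea concrete, below is a minimal sketch of what a Shift-FFN-style block could look like. It assumes a simple gated rule that adds a scaled adjacent-token difference, h_t + alpha * (h_t - h_{t-1}), before the standard FFN; the class name `ShiftFFN`, the learned gate, and this particular mixing formula are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class ShiftFFN(nn.Module):
    """Hypothetical sketch: edit the current token's representation with the
    previous token's before the FFN, amplifying adjacent-token differences."""

    def __init__(self, hidden_size: int, ffn_size: int):
        super().__init__()
        self.gate = nn.Linear(hidden_size, 1)   # assumed per-token mixing gate
        self.up = nn.Linear(hidden_size, ffn_size)
        self.down = nn.Linear(ffn_size, hidden_size)
        self.act = nn.GELU()

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, seq_len, hidden_size)
        # Shift right by one position so each token sees its predecessor;
        # the first token has no predecessor, so its shift is zero.
        h_prev = torch.cat([torch.zeros_like(h[:, :1]), h[:, :-1]], dim=1)
        alpha = torch.sigmoid(self.gate(h))      # (batch, seq_len, 1)
        # Amplify the difference to the previous token before the FFN.
        h_shifted = h + alpha * (h - h_prev)
        return self.down(self.act(self.up(h_shifted)))

# Usage sketch: drop-in replacement for the FFN sublayer of a decoder block.
block = ShiftFFN(hidden_size=768, ffn_size=3072)
out = block(torch.randn(2, 16, 768))  # (batch=2, seq_len=16, hidden=768)
```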
@article{xu2025_2505.17153,
  title={Amplify Adjacent Token Differences: Enhancing Long Chain-of-Thought Reasoning with Shift-FFN},
  author={Yao Xu and Mingyu Xu and Fangyu Lei and Wangtao Sun and Xiangrong Zeng and Bingning Wang and Guang Liu and Shizhu He and Jun Zhao and Kang Liu},
  journal={arXiv preprint arXiv:2505.17153},
  year={2025}
}