Revisiting Chain-of-Thought Prompting: Zero-shot Can Be Stronger than Few-shot

17 June 2025
Xiang Cheng, Chengyan Pan, Minjun Zhao, Deyang Li, Fangchao Liu, Xinyu Zhang, Xiao Zhang, Yong Liu
Communities: ReLM, LRM
Main: 7 pages · 23 figures · Bibliography: 3 pages · Appendix: 9 pages
Abstract

In-Context Learning (ICL) is an essential emergent ability of Large Language Models (LLMs), and recent studies have introduced Chain-of-Thought (CoT) reasoning into the exemplars of ICL to enhance reasoning capability, especially on mathematical tasks. However, given the continuous advancement of model capabilities, it remains unclear whether CoT exemplars still benefit recent, stronger models on such tasks. Through systematic experiments, we find that for recent strong models such as the Qwen2.5 series, adding traditional CoT exemplars does not improve reasoning performance compared to Zero-Shot CoT. Instead, their primary function is to align the output format with human expectations. We further investigate the effectiveness of enhanced CoT exemplars, constructed using answers from advanced models such as Qwen2.5-Max and DeepSeek-R1. Experimental results indicate that these enhanced exemplars still fail to improve the model's reasoning performance. Further analysis reveals that models tend to ignore the exemplars and focus primarily on the instructions, leading to no observable gain in reasoning ability. Overall, our findings highlight the limitations of the current ICL+CoT framework in mathematical reasoning, calling for a re-examination of the ICL paradigm and the definition of exemplars.
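
The abstract contrasts Zero-Shot CoT (an instruction-only prompt with a reasoning trigger and no worked examples) against few-shot CoT prompting, where worked reasoning exemplars precede the question. The sketch below is a minimal illustration of how the two prompt variants are typically assembled; the exemplar text, the instruction wording, and the query_model stub are illustrative assumptions, not the paper's actual prompts or evaluation code.

    # Minimal sketch of Zero-Shot CoT vs. few-shot CoT prompt construction.
    # Exemplar text, instruction wording, and the query_model stub are
    # illustrative assumptions; the paper's exact prompts may differ.

    ZERO_SHOT_COT_INSTRUCTION = "Let's think step by step."

    # A single hand-written CoT exemplar (few-shot setups typically use 4-8).
    FEW_SHOT_EXEMPLAR = (
        "Q: A shelf holds 3 boxes with 12 apples each. How many apples in total?\n"
        "A: Each box holds 12 apples and there are 3 boxes, so 3 * 12 = 36. "
        "The answer is 36.\n\n"
    )

    def build_zero_shot_cot_prompt(question: str) -> str:
        """Instruction-only prompt: no worked examples, just the trigger phrase."""
        return f"Q: {question}\nA: {ZERO_SHOT_COT_INSTRUCTION}"

    def build_few_shot_cot_prompt(question: str) -> str:
        """Exemplar-based prompt: worked reasoning traces precede the new question."""
        return f"{FEW_SHOT_EXEMPLAR}Q: {question}\nA:"

    def query_model(prompt: str) -> str:
        """Placeholder for an LLM call (e.g. to a Qwen2.5 endpoint); stubbed here."""
        return "<model completion>"

    if __name__ == "__main__":
        question = "A train travels 60 km/h for 2.5 hours. How far does it go?"
        for name, builder in [
            ("zero-shot CoT", build_zero_shot_cot_prompt),
            ("few-shot CoT", build_few_shot_cot_prompt),
        ]:
            prompt = builder(question)
            print(f"--- {name} prompt ---\n{prompt}\n")
            print(f"completion: {query_model(prompt)}\n")

The paper's finding is that, for strong recent models, the exemplar block in the second variant mainly shapes output format rather than improving the reasoning itself.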

arXiv: https://arxiv.org/abs/2506.14641
@article{cheng2025_2506.14641,
  title={Revisiting Chain-of-Thought Prompting: Zero-shot Can Be Stronger than Few-shot},
  author={Xiang Cheng and Chengyan Pan and Minjun Zhao and Deyang Li and Fangchao Liu and Xinyu Zhang and Xiao Zhang and Yong Liu},
  journal={arXiv preprint arXiv:2506.14641},
  year={2025}
}