Short-Path Prompting in LLMs: Analyzing Reasoning Instability and Solutions for Robust Performance

13 April 2025
Zuoli Tang
Junjie Ou
Kaiqin Hu
Chunwei Wu
Zhaoxin Huan
Chilin Fu
Xiaolu Zhang
Jun Zhou
Chenliang Li
Abstract

Recent years have witnessed significant progress in the reasoning abilities of large language models (LLMs), largely driven by chain-of-thought (CoT) approaches, which allow models to generate intermediate reasoning steps before reaching the final answer. Building on these advances, state-of-the-art LLMs are instruction-tuned to provide long and detailed CoT pathways when responding to reasoning-related questions. However, human beings are naturally cognitive misers and often prompt language models for rather short responses, creating a significant conflict with CoT reasoning. In this paper, we delve into how LLMs' reasoning performance changes when users provide short-path prompts. The results and analysis reveal that language models can reason effectively and robustly without explicit CoT prompts, whereas under short-path prompting their reasoning ability drops significantly and becomes unstable, even on grade-school problems. To address this issue, we propose two approaches, an instruction-guided approach and a fine-tuning approach, both designed to effectively manage the conflict. Experimental results show that both methods achieve high accuracy, providing insights into the trade-off between instruction adherence and reasoning accuracy in current models.
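To make the conflict described in the abstract concrete, the following minimal Python sketch (not from the paper; the question text and all prompt wording are hypothetical) contrasts a default prompt, a length-constraining short-path prompt, and an instruction-guided variant in the spirit of the first proposed approach.

# Illustrative sketch only: the prompts below are assumptions, not the authors' templates.
question = "A farmer has 17 sheep. All but 9 run away. How many are left?"

# Default prompt: the model is free to produce intermediate reasoning steps (CoT).
default_prompt = question

# Short-path prompt: the user constrains the response length,
# which conflicts with chain-of-thought style answers.
short_path_prompt = f"{question}\nAnswer with only the final number, no explanation."

# Instruction-guided variant (wording is illustrative): ask the model to reason
# internally, then emit only the short final answer the user requested.
instruction_guided_prompt = (
    "Think through the problem step by step privately, then output only the "
    f"final answer.\n\n{short_path_prompt}"
)

for name, prompt in [("default", default_prompt),
                     ("short-path", short_path_prompt),
                     ("instruction-guided", instruction_guided_prompt)]:
    print(f"--- {name} ---\n{prompt}\n")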

@article{tang2025_2504.09586,
  title={Short-Path Prompting in LLMs: Analyzing Reasoning Instability and Solutions for Robust Performance},
  author={Zuoli Tang and Junjie Ou and Kaiqin Hu and Chunwei Wu and Zhaoxin Huan and Chilin Fu and Xiaolu Zhang and Jun Zhou and Chenliang Li},
  journal={arXiv preprint arXiv:2504.09586},
  year={2025}
}