Entropy-Aware Branching for Improved Mathematical Reasoning

27 March 2025
Xianzhi Li, Ethan Callanan, Xiaodan Zhu, Mathieu Sibue, Antony Papadimitriou, Mahmoud Mahfouz, Zhiqiang Ma, Xiaomo Liu
Abstract

While Large Language Models (LLMs) are effectively aligned through extensive pre-training and fine-tuning, they still struggle with varying levels of uncertainty during token generation. In our investigation of mathematical reasoning, we observe that errors are more likely to arise at tokens exhibiting high entropy and high variance of entropy in the model's output distribution. Based on this observation, we propose a novel approach that dynamically branches the generation process on demand, instead of defaulting to the single most probable token. By exploring multiple branches in parallel, each stemming from a high-probability token at a critical decision point, the model can discover diverse reasoning paths that might otherwise be missed. We further harness external feedback from larger models to rank and select the most coherent and accurate reasoning branch. Our experimental results on mathematical word problems and calculation questions show that this branching strategy boosts the reasoning capabilities of small LLMs by up to 4.6% compared to conventional argmax decoding.
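The branching trigger described in the abstract lends itself to a short sketch. Below is a minimal, illustrative Python implementation assuming access to the model's next-token logits at each decoding step; the thresholds, window size, top-k width, and helper names (should_branch, branch_tokens) are hypothetical choices for exposition, not the paper's exact configuration.

import torch
import torch.nn.functional as F

def entropy(logits: torch.Tensor) -> torch.Tensor:
    # Shannon entropy (in nats) of the softmax next-token distribution.
    probs = F.softmax(logits, dim=-1)
    return -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)

def should_branch(logits, entropy_history, h_thresh=2.0, var_thresh=0.5, window=8):
    # Flag a critical decision point when both the current entropy and the
    # variance of entropy over a recent sliding window are high.
    # Threshold and window values here are illustrative assumptions.
    h = entropy(logits).item()
    entropy_history.append(h)
    recent = entropy_history[-window:]
    var = torch.tensor(recent).var().item() if len(recent) > 1 else 0.0
    return h > h_thresh and var > var_thresh

def branch_tokens(logits, k=3):
    # Candidate tokens to spawn parallel branches from: the top-k most
    # probable next tokens rather than the single argmax token.
    return torch.topk(logits, k).indices.tolist()

# Toy usage: random logits stand in for a real model's output at one step.
history = []
logits = torch.randn(32_000)
if should_branch(logits, history):
    for tok in branch_tokens(logits):
        pass  # decode each branch in parallel, then rank branches with a larger model

In the approach the abstract outlines, the completed branches are then scored by a larger external model, which selects the most coherent and accurate one; conventional argmax decoding corresponds to the degenerate case in which the branching condition never fires.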

@article{li2025_2503.21961,
  title={Entropy-Aware Branching for Improved Mathematical Reasoning},
  author={Xianzhi Li and Ethan Callanan and Xiaodan Zhu and Mathieu Sibue and Antony Papadimitriou and Mahmoud Mahfouz and Zhiqiang Ma and Xiaomo Liu},
  journal={arXiv preprint arXiv:2503.21961},
  year={2025}
}