Refining Answer Distributions for Improved Large Language Model Reasoning

17 December 2024
Soumyasundar Pal
Didier Chételat
Yingxue Zhang
Mark Coates
Communities: ReLM, LRM
Abstract

Large Language Models (LLMs) have exhibited an impressive capability to perform reasoning tasks, especially when they are encouraged to generate a sequence of intermediate steps. Reasoning performance can be improved by suitably combining multiple LLM responses, generated either in parallel in a single query, or via sequential interactions with LLMs throughout the reasoning process. Existing strategies for combination, such as self-consistency and progressive-hint-prompting, make inefficient use of the LLM responses. We present Refined Answer Distributions, a novel and principled algorithmic framework to enhance the reasoning capabilities of LLMs. Our approach can be viewed as an iterative sampling strategy for forming a Monte Carlo approximation of an underlying distribution of answers, with the goal of identifying the mode -- the most likely answer. Empirical evaluation on several reasoning benchmarks demonstrates the superiority of the proposed approach.
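
The abstract describes the approach only at a high level: repeatedly sample answers, maintain a Monte Carlo approximation of the answer distribution, and return its mode. The Python sketch below illustrates that general idea under stated assumptions; the sample_answer stub and the hint-passing refinement step are illustrative placeholders, not the paper's actual Refined Answer Distributions algorithm.

# Minimal sketch of the idea in the abstract: iterative sampling builds a
# Monte Carlo estimate of the answer distribution, and the mode is returned.
# The stub below and the hint-feedback loop are assumptions for illustration.
import random
from collections import Counter

def sample_answer(question, hint=None):
    """Hypothetical stand-in for one LLM call that returns a final answer.
    A real implementation would prompt an LLM (optionally conditioned on
    the current answer distribution) and parse the answer from its output."""
    # Toy model: the "LLM" answers "42" 60% of the time, otherwise noise.
    return "42" if random.random() < 0.6 else random.choice(["41", "43"])

def refined_answer(question, rounds=3, samples_per_round=8):
    counts = Counter()
    hint = None
    for _ in range(rounds):
        for _ in range(samples_per_round):
            counts[sample_answer(question, hint)] += 1
        # Feed the current empirical distribution back as a hint so that
        # later samples can reuse earlier responses rather than discard
        # them (an assumption about how refinement might be wired up).
        hint = "Current answer counts: " + str(dict(counts))
    # The mode of the Monte Carlo approximation is the returned answer.
    return counts.most_common(1)[0][0]

if __name__ == "__main__":
    print(refined_answer("What is 6 * 7?"))

Taking the mode via counts.most_common reduces to majority voting (as in self-consistency) when no hint is used; the abstract's claim is that iteratively refining the distribution makes more efficient use of the sampled responses than such one-shot voting.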

@article{pal2025_2412.13292,
  title={Refining Answer Distributions for Improved Large Language Model Reasoning},
  author={Soumyasundar Pal and Didier Chételat and Yingxue Zhang and Mark Coates},
  journal={arXiv preprint arXiv:2412.13292},
  year={2025}
}