Learning to Reason Across Parallel Samples for LLM Reasoning

10 June 2025
Jianing Qi, Xi Ye, Hao Tang, Zhigang Zhu, Eunsol Choi
Topics: ReLM, LRM
arXiv 2506.09014: abs · PDF · HTML
Abstract

Scaling test-time compute brings substantial performance gains for large language models (LLMs). By sampling multiple answers and heuristically aggregating them (e.g., through majority voting or by using verifiers to rank the answers), one can achieve consistent performance gains in math domains. In this paper, we propose a new way to leverage such sets of multiple samples. We train a compact LLM, called the Sample Set Aggregator (SSA), that takes a concatenated sequence of multiple samples and outputs the final answer, optimizing it for answer accuracy with reinforcement learning. Experiments on multiple reasoning datasets show that SSA outperforms other test-time scaling methods such as reward-model-based re-ranking. Our approach also shows promising generalization across sample set sizes, base model families and scales, and tasks. By separating the LLMs that generate answers from the LLM that analyzes and aggregates the sampled answers, our approach can work easily and efficiently with the outputs of premier black-box models.
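As a concrete illustration of the two aggregation strategies the abstract contrasts, the Python sketch below shows majority voting over sampled answers and the concatenation step that would feed a compact aggregator such as SSA. The prompt template and function names are hypothetical; the abstract specifies only that SSA reads a concatenated sequence of samples and outputs the final answer.

from collections import Counter

def majority_vote(answers: list[str]) -> str:
    # Heuristic baseline from the abstract: pick the most frequent answer.
    return Counter(answers).most_common(1)[0][0]

def build_ssa_input(question: str, samples: list[str]) -> str:
    # Concatenate parallel samples into one sequence for the aggregator.
    # The exact template is an assumption made for illustration; the paper
    # only states that SSA consumes a concatenated sequence of samples.
    parts = [f"Question: {question}"]
    parts += [f"Sample {i}: {s}" for i, s in enumerate(samples, start=1)]
    parts.append("Final answer:")
    return "\n\n".join(parts)

samples = ["42", "42", "41"]   # answers sampled from any generator model,
                               # including a black-box API
print(majority_vote(samples))  # -> 42
print(build_ssa_input("What is 6 * 7?", samples))

Because the generator and the aggregator are decoupled, the samples can come from any model whose outputs you can read, which is what makes the approach compatible with black-box models.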

@article{qi2025_2506.09014,
  title={Learning to Reason Across Parallel Samples for LLM Reasoning},
  author={Jianing Qi and Xi Ye and Hao Tang and Zhigang Zhu and Eunsol Choi},
  journal={arXiv preprint arXiv:2506.09014},
  year={2025}
}