First Finish Search: Efficient Test-Time Scaling in Large Language Models

23 May 2025
Aradhye Agarwal
Ayan Sengupta
Tanmoy Chakraborty
Abstract

Test-time scaling (TTS), which involves dynamic allocation of compute during inference, offers a promising way to improve reasoning in large language models. While existing TTS methods work well, they often rely on long decoding paths or require a large number of samples to be generated, increasing token usage and inference latency. We observe the surprising fact that for reasoning tasks, shorter traces are much more likely to be correct than longer ones. Motivated by this, we introduce First Finish Search (FFS), a training-free parallel decoding strategy that launches n independent samples and returns as soon as any one completes. We evaluate FFS alongside simple decoding, beam search, majority voting, and budget forcing on four reasoning models (DeepSeek-R1, R1-Distill-Qwen-32B, QwQ-32B and Phi-4-Reasoning-Plus) and across four datasets (AIME24, AIME25-I, AIME25-II and GPQA Diamond). With DeepSeek-R1, FFS achieves 82.23% accuracy on the AIME datasets, a 15% improvement over DeepSeek-R1's standalone accuracy, nearly matching OpenAI's o4-mini performance. Our theoretical analysis explains why stopping at the shortest trace is likely to yield a correct answer and identifies the conditions under which early stopping may be suboptimal. The elegance and simplicity of FFS demonstrate that straightforward TTS strategies can perform remarkably well, revealing the untapped potential of simple approaches at inference time.
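The core mechanism described in the abstract, launching n parallel decodes and returning the first trace that finishes, can be illustrated with a minimal sketch. The snippet below is not the authors' implementation; `generate_trace` is a hypothetical stand-in for a call to a reasoning model's sampling endpoint, and the simulated trace lengths are assumptions for illustration only.

```python
# Minimal sketch of the First Finish Search (FFS) idea: launch n independent
# decoding runs in parallel and return whichever finishes first.
# `generate_trace` is a hypothetical placeholder for an async model call.

import asyncio
import random


async def generate_trace(prompt: str, sample_id: int) -> str:
    """Hypothetical async decode of one sampled reasoning trace."""
    # Simulate variable trace lengths; shorter traces finish sooner.
    trace_len = random.randint(1, 10)
    await asyncio.sleep(trace_len * 0.1)
    return f"[sample {sample_id}] answer after {trace_len} steps"


async def first_finish_search(prompt: str, n: int = 8) -> str:
    """Return the first completed trace among n parallel samples."""
    tasks = [asyncio.create_task(generate_trace(prompt, i)) for i in range(n)]
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    for task in pending:
        task.cancel()  # stop the remaining decodes to save tokens
    return next(iter(done)).result()


if __name__ == "__main__":
    print(asyncio.run(first_finish_search("Solve: 2 + 2 = ?", n=8)))
```

In a real serving setup, cancelling the pending samples is what keeps FFS's token usage and latency close to that of the single shortest trace rather than the sum of all n decodes.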

@article{agarwal2025_2505.18149,
  title={First Finish Search: Efficient Test-Time Scaling in Large Language Models},
  author={Aradhye Agarwal and Ayan Sengupta and Tanmoy Chakraborty},
  journal={arXiv preprint arXiv:2505.18149},
  year={2025}
}