Revisiting Test-Time Scaling: A Survey and a Diversity-Aware Method for Efficient Reasoning

5 June 2025
Ho-Lam Chung, Teng-Yun Hsiao, Hsiao-Ying Huang, Chunerh Cho, Jian-Ren Lin, Zhang Ziwei, Yun-Nung Chen
Abstract

Test-Time Scaling (TTS) improves the reasoning performance of Large Language Models (LLMs) by allocating additional compute during inference. We conduct a structured survey of TTS methods and categorize them into sampling-based, search-based, and trajectory-optimization strategies. We observe that reasoning-optimized models often produce less diverse outputs, which limits TTS effectiveness. To address this, we propose ADAPT (A Diversity-Aware Prefix fine-Tuning), a lightweight method that applies prefix tuning with a diversity-focused data strategy. Experiments on mathematical reasoning tasks show that ADAPT reaches 80% accuracy using one-eighth the compute of strong baselines. Our findings highlight the essential role of generative diversity in maximizing TTS effectiveness.
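For context (not taken from the paper itself), the simplest sampling-based TTS strategy in this family is self-consistency: sample several reasoning traces and majority-vote over the extracted final answers, so that spending more inference compute buys more candidate solutions. The sketch below is a minimal Python illustration under that assumption; sample_answer is a hypothetical stand-in for one stochastic LLM generation plus answer extraction, and the toy generator in the usage example is invented for demonstration only. It is not ADAPT, which additionally prefix-tunes the model on diversity-focused data.

# Minimal sketch of sampling-based test-time scaling via self-consistency voting.
# `sample_answer` is a hypothetical callable standing in for one stochastic LLM
# generation followed by final-answer extraction; it is NOT the paper's ADAPT method.
from collections import Counter
from typing import Callable

def self_consistency(sample_answer: Callable[[str], str],
                     question: str,
                     n_samples: int = 8) -> str:
    """Draw n_samples reasoning traces and return the most frequent final answer.

    More samples means more test-time compute; the extra compute only pays off
    when the samples are diverse enough to disagree in useful ways.
    """
    answers = [sample_answer(question) for _ in range(n_samples)]
    most_common_answer, _count = Counter(answers).most_common(1)[0]
    return most_common_answer

if __name__ == "__main__":
    import random
    # Toy stand-in for a sampled LLM: usually right, occasionally wrong.
    toy = lambda q: random.choice(["42", "42", "42", "41"])
    print(self_consistency(toy, "What is 6 * 7?"))

The sketch also makes the paper's diagnosis concrete: if a reasoning-optimized model returns near-identical samples, the vote adds little over a single generation, which is the loss of generative diversity that ADAPT's diversity-focused prefix tuning aims to restore.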

@article{chung2025_2506.04611,
  title={Revisiting Test-Time Scaling: A Survey and a Diversity-Aware Method for Efficient Reasoning},
  author={Ho-Lam Chung and Teng-Yun Hsiao and Hsiao-Ying Huang and Chunerh Cho and Jian-Ren Lin and Zhang Ziwei and Yun-Nung Chen},
  journal={arXiv preprint arXiv:2506.04611},
  year={2025}
}
Main: 8 pages · 6 figures · 2 tables · Bibliography: 7 pages · Appendix: 1 page