Fractional Reasoning via Latent Steering Vectors Improves Inference Time Compute

18 June 2025
Sheng Liu, Tianlang Chen, Pan Lu, Haotian Ye, Yizheng Chen, Lei Xing, James Zou
Main: 9 pages · Appendix: 6 pages · Bibliography: 3 pages · 6 figures · 5 tables
Abstract

Test-time compute has emerged as a powerful paradigm for improving the performance of large language models (LLMs), where generating multiple outputs or refining individual chains can significantly boost answer accuracy. However, existing methods like Best-of-N, majority voting, and self-reflection typically apply reasoning in a uniform way across inputs, overlooking the fact that different problems may require different levels of reasoning depth. In this work, we propose Fractional Reasoning, a training-free and model-agnostic framework that enables continuous control over reasoning intensity at inference time, going beyond the limitations of fixed instructional prompts. Our method operates by extracting the latent steering vector associated with deeper reasoning and reapplying it with a tunable scaling factor, allowing the model to tailor its reasoning process to the complexity of each input. This supports two key modes of test-time scaling: (1) improving output quality in breadth-based strategies (e.g., Best-of-N, majority voting), and (2) enhancing the correctness of individual reasoning chains in depth-based strategies (e.g., self-reflection). Experiments on GSM8K, MATH500, and GPQA demonstrate that Fractional Reasoning consistently improves performance across diverse reasoning tasks and models.
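
The core mechanism described in the abstract, extracting a latent steering vector associated with deeper reasoning and re-applying it with a tunable scaling factor, can be illustrated with a minimal sketch. The code below is an assumption-laden illustration, not the paper's exact recipe: the model name, steered layer index, contrast prompts, and scaling factor alpha are all hypothetical choices made for the example.

# Minimal sketch of latent steering with a fractional scale, assuming a
# HuggingFace causal LM. MODEL, LAYER, the contrast prompts, and alpha are
# illustrative assumptions, not the method's published configuration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "Qwen/Qwen2.5-0.5B-Instruct"  # hypothetical choice of base model
LAYER = 12                            # hypothetical choice of steered layer

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()

def mean_hidden(prompt: str) -> torch.Tensor:
    # Mean hidden state at LAYER over the prompt tokens.
    ids = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**ids, output_hidden_states=True)
    return out.hidden_states[LAYER].mean(dim=1).squeeze(0)

# Step 1: extract a "deeper reasoning" direction as the difference between
# a reasoning-prompted and a plain encoding of the same question.
question = "A train travels 60 km in 45 minutes. What is its speed in km/h?"
steer = (mean_hidden("Think step by step and reason carefully. " + question)
         - mean_hidden(question))

# Step 2: re-apply the vector with a tunable scale alpha via a forward hook.
# hidden_states[LAYER] is the output of decoder layer LAYER - 1, since
# hidden_states[0] is the embedding output, so we hook layers[LAYER - 1].
def make_hook(alpha: float):
    def hook(module, args, output):
        hidden = output[0] if isinstance(output, tuple) else output
        hidden = hidden + alpha * steer.to(hidden.dtype)
        return (hidden,) + output[1:] if isinstance(output, tuple) else hidden
    return hook

alpha = 1.5  # reasoning intensity; alpha = 0 recovers the unmodified model
handle = model.model.layers[LAYER - 1].register_forward_hook(make_hook(alpha))
ids = tok(question, return_tensors="pt")
with torch.no_grad():
    gen = model.generate(**ids, max_new_tokens=128)
handle.remove()
print(tok.decode(gen[0][ids["input_ids"].shape[1]:], skip_special_tokens=True))

Because alpha is continuous, reasoning intensity can be tuned per input rather than fixed by a prompt: under a breadth-based strategy such as Best-of-N or majority voting, one would sample candidates across several alpha values and select or vote over the resulting answers.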

@article{liu2025_2506.15882,
  title={Fractional Reasoning via Latent Steering Vectors Improves Inference Time Compute},
  author={Sheng Liu and Tianlang Chen and Pan Lu and Haotian Ye and Yizheng Chen and Lei Xing and James Zou},
  journal={arXiv preprint arXiv:2506.15882},
  year={2025}
}