ResearchTrend.AI
Adaptive Inference-Time Compute: LLMs Can Predict if They Can Do Better, Even Mid-Generation

3 October 2024
Rohin Manvi, Anikait Singh, Stefano Ermon
    SyDa

Papers citing "Adaptive Inference-Time Compute: LLMs Can Predict if They Can Do Better, Even Mid-Generation"

5 / 5 papers shown
Scalable LLM Math Reasoning Acceleration with Low-rank Distillation
Harry Dong, Bilge Acun, Beidi Chen, Yuejie Chi
LRM
34 · 0 · 0
08 May 2025
Between Underthinking and Overthinking: An Empirical Study of Reasoning Length and Correctness in LLMs
Jinyan Su, Jennifer Healey, Preslav Nakov, Claire Cardie
LRM
165 · 1 · 0
30 Apr 2025
Dynamic Early Exit in Reasoning Models
Chenxu Yang, Qingyi Si, Yongjie Duan, Zheliang Zhu, Chenyu Zhu, Zheng Lin, Li Cao, Weiping Wang
ReLM, LRM
48 · 0 · 0
22 Apr 2025
DISC: Dynamic Decomposition Improves LLM Inference Scaling
Jonathan Light, Wei Cheng, Wu Yue, Masafumi Oyamada, Mengdi Wang, Santiago Paternain, Haifeng Chen
ReLM, LRM
64 · 2 · 0
23 Feb 2025
Make Every Penny Count: Difficulty-Adaptive Self-Consistency for Cost-Efficient Reasoning
Xinglin Wang, Shaoxiong Feng, Yiwei Li, Peiwen Yuan, Y. Zhang, Boyuan Pan, Heda Wang, Yao Hu, Kan Li
LRM
40 · 17 · 0
24 Aug 2024