T²: An Adaptive Test-Time Scaling Strategy for Contextual Question Answering

Recent advances in Large Language Models (LLMs) have demonstrated remarkable performance in Contextual Question Answering (CQA). However, prior approaches typically employ elaborate reasoning strategies regardless of question complexity, leading to low adaptability. Recent efficient test-time scaling methods introduce budget constraints or early-stop mechanisms to avoid overthinking on straightforward questions, but they inject human bias into the reasoning process and fail to leverage the model's inherent reasoning capabilities. To address these limitations, we present T² (Think-to-Think), a novel framework that dynamically adapts reasoning depth to question complexity. T² builds on the insight that if an LLM can effectively solve similar questions using a specific reasoning strategy, it can apply the same strategy to the original question. This enables concise reasoning for straightforward questions while preserving detailed analysis for complex problems. T² works through four key steps: decomposing the question into structural elements, generating similar example questions with candidate reasoning strategies, evaluating these strategies against multiple criteria, and applying the most appropriate strategy to the original question. Experiments across seven diverse CQA benchmarks show that T² not only achieves higher accuracy than baseline methods but also reduces computational overhead by up to 25.2%.
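To make the four-step loop concrete, below is a minimal Python sketch of how T² might be orchestrated around a single LLM call. The llm placeholder, the prompts, and the strategy names are illustrative assumptions, not the authors' actual implementation.

# A minimal sketch of the T^2 pipeline described in the abstract.
# All names here (llm, t2_answer, the prompt wording) are hypothetical.

def llm(prompt: str) -> str:
    """Placeholder for a call to any instruction-following LLM."""
    raise NotImplementedError

def t2_answer(question: str, context: str) -> str:
    # Step 1: decompose the question into structural elements
    # (e.g., entities, constraints, the quantity being asked for).
    elements = llm(
        f"List the structural elements of this question:\n{question}"
    )

    # Step 2: generate similar example questions, each paired with a
    # candidate reasoning strategy (e.g., concise vs. detailed analysis).
    candidates = llm(
        "Write two questions structurally similar to the elements below, "
        "and solve each with a different reasoning strategy "
        "(concise answer vs. detailed step-by-step reasoning):\n" + elements
    )

    # Step 3: evaluate the candidate strategies against multiple criteria,
    # such as correctness, consistency, and token cost.
    best_strategy = llm(
        "Given these worked examples, name the cheapest strategy that "
        "still solves them correctly:\n" + candidates
    )

    # Step 4: apply the selected strategy to the original question.
    return llm(
        f"Using the '{best_strategy}' strategy, answer the question "
        f"from the context.\nContext: {context}\nQuestion: {question}"
    )

The point of the sketch is that the model itself, via the similar examples in step 2, decides how much reasoning the original question warrants, rather than a fixed budget or early-stop rule imposed from outside.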
@article{zhao2025_2505.17427,
  title={T$^2$: An Adaptive Test-Time Scaling Strategy for Contextual Question Answering},
  author={Zhengyi Zhao and Shubo Zhang and Zezhong Wang and Huimin Wang and Yutian Zhao and Bin Liang and Yefeng Zheng and Binyang Li and Kam-Fai Wong and Xian Wu},
  journal={arXiv preprint arXiv:2505.17427},
  year={2025}
}