
Efficient Latent Semantic Clustering for Scaling Test-Time Computation of LLMs

Main: 8 pages, 13 figures, 7 tables; Bibliography: 3 pages; Appendix: 7 pages
Abstract

Scaling test-time computation (generating and analyzing multiple or sequential outputs for a single input) has become a promising strategy for improving the reliability and quality of large language models (LLMs), as evidenced by advances in uncertainty quantification and multi-step reasoning. A key shared component is semantic clustering, which groups outputs that differ in form but convey the same meaning. Semantic clustering enables estimation of the distribution over the semantics of outputs and helps avoid redundant exploration of reasoning paths. However, existing approaches typically rely on external models, which introduce substantial computational overhead and often fail to capture context-aware semantics. We propose Latent Semantic Clustering (LSC), a lightweight and context-sensitive method that leverages the generator LLM's internal hidden states for clustering, eliminating the need for external models. Our extensive experiments across various LLMs and datasets show that LSC significantly improves the computational efficiency of test-time scaling while maintaining or exceeding the performance of existing methods.
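To make the core idea concrete, the following is a minimal sketch of clustering sampled outputs by their hidden-state representations. It assumes each generated output has already been reduced to a single embedding vector (e.g., a hidden state from the generator LLM); the greedy cosine-similarity grouping and the `threshold` parameter are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def cluster_by_hidden_states(embeddings, threshold=0.9):
    """Greedy semantic clustering sketch: assign each output's hidden-state
    embedding to the most similar existing cluster centroid if its cosine
    similarity exceeds `threshold`; otherwise start a new cluster.
    Illustrative only -- LSC's actual clustering criterion may differ."""
    centroids, labels = [], []
    for e in embeddings:
        e = np.asarray(e, dtype=float)
        e = e / np.linalg.norm(e)  # unit-normalize so dot product = cosine
        best, best_sim = None, threshold
        for i, c in enumerate(centroids):
            sim = float(e @ c / np.linalg.norm(c))
            if sim >= best_sim:
                best, best_sim = i, sim
        if best is None:
            centroids.append(e.copy())          # open a new semantic cluster
            labels.append(len(centroids) - 1)
        else:
            centroids[best] = centroids[best] + e  # running (unnormalized) centroid
            labels.append(best)
    return labels
```

With embeddings like `[1, 0]`, `[0.99, 0.1]`, and `[0, 1]`, the first two outputs land in one cluster and the orthogonal third in another, approximating how paraphrases would be grouped while semantically distinct outputs stay separate.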

@article{lee2025_2506.00344,
  title={Efficient Latent Semantic Clustering for Scaling Test-Time Computation of LLMs},
  author={Sungjae Lee and Hoyoung Kim and Jeongyeon Hwang and Eunhyeok Park and Jungseul Ok},
  journal={arXiv preprint arXiv:2506.00344},
  year={2025}
}