Self-ensemble: Mitigating Confidence Distortion for Large Language Models

Although Large Language Models (LLMs) perform well across general domains, they exhibit a confidence distortion problem on multi-choice question answering (MCQA), particularly as the number of answer choices increases. Specifically, on MCQA with many choices, LLMs suffer from under-confidence in correct predictions and over-confidence in incorrect ones, leading to substantially degraded performance. To address this problem, we propose Self-ensemble. Our method splits the choices into several groups and ensembles LLM predictions across these groups to reach a final decision. The advantage of Self-ensemble is its plug-and-play nature: it can be integrated into existing LLM architectures via a designed attention mask and positional encoding, without requiring labeled datasets for parameter tuning. Experimental results on three LLMs and three datasets demonstrate that Self-ensemble comprehensively mitigates the confidence distortion problem, outperforming standard inference as well as baseline methods.
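A rough sketch of the grouping-and-ensembling idea described above (this is not the authors' implementation, which operates inside the model via attention masks and positional encodings; the group size and the toy scoring function here are illustrative assumptions):

```python
def split_into_groups(choices, group_size):
    """Partition the answer choices into smaller groups."""
    return [choices[i:i + group_size] for i in range(0, len(choices), group_size)]

def self_ensemble(choices, score_fn, group_size=3):
    """Score choices within each small group, then ensemble across groups.

    score_fn(group) -> list of per-choice scores for that group (standing in
    for LLM confidence over a reduced choice set). The final answer is the
    argmax over all choices of their within-group scores.
    """
    best_choice, best_score = None, float("-inf")
    for group in split_into_groups(choices, group_size):
        scores = score_fn(group)
        for choice, score in zip(group, scores):
            if score > best_score:
                best_choice, best_score = choice, score
    return best_choice

# Toy scorer standing in for an LLM: longer strings get higher scores.
toy_score = lambda group: [len(c) for c in group]
print(self_ensemble(["a", "bb", "cccc", "d", "ee"], toy_score, group_size=2))
# prints "cccc"
```

The intuition is that each call presents the model with fewer choices, the regime where (per the abstract) its confidence is better calibrated, and the group-level predictions are then combined into one decision.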
@article{xu2025_2506.01951,
  title={Self-ensemble: Mitigating Confidence Distortion for Large Language Models},
  author={Zicheng Xu and Guanchu Wang and Guangyao Zheng and Yu-Neng Chuang and Alexander Szalay and Xia Hu and Vladimir Braverman},
  journal={arXiv preprint arXiv:2506.01951},
  year={2025}
}