Improving the Calibration of Confidence Scores in Text Generation Using the Output Distribution's Characteristics

Well-calibrated model confidence scores can improve the usefulness of text generation models. For example, users can be prompted to review low-confidence predictions, which helps prevent models from returning poor or potentially dangerous outputs. However, confidence metrics are not always well calibrated in text generation. One reason is that in generation there can be many valid answers, which previous methods do not always account for. Hence, even a confident model may distribute its output probability among multiple sequences because they are all valid. We propose task-agnostic confidence metrics suited to generation, which rely solely on the probabilities associated with the model's outputs and require no further fine-tuning or heuristics. Using these metrics, we improve the calibration of BART and Flan-T5 on summarization, translation, and QA datasets.
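To make the intuition concrete, below is a minimal sketch (not the paper's exact metric) of how confidence signals can be derived from characteristics of the output distribution rather than from the probability of the single generated sequence. It scores a greedy generation by the per-step probability mass on the top-k candidate tokens and the per-step entropy; the checkpoint name, the choice of k, and the mean aggregation are illustrative assumptions.

```python
# Illustrative sketch: distribution-based confidence signals for generation.
# Assumptions (not from the paper): google/flan-t5-small checkpoint, k = 5,
# mean aggregation over decoding steps.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "google/flan-t5-small"  # assumed checkpoint for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

def distribution_confidence(prompt: str, k: int = 5) -> dict:
    """Return distribution-based confidence signals for a greedy generation."""
    inputs = tokenizer(prompt, return_tensors="pt")
    out = model.generate(
        **inputs,
        max_new_tokens=32,
        output_scores=True,          # keep per-step logits over the vocabulary
        return_dict_in_generate=True,
    )
    top_k_mass, entropies = [], []
    for step_logits in out.scores:   # one (1, vocab_size) tensor per step
        probs = torch.softmax(step_logits[0], dim=-1)
        top_k_mass.append(torch.topk(probs, k).values.sum().item())
        entropies.append(-(probs * probs.clamp_min(1e-12).log()).sum().item())
    text = tokenizer.decode(out.sequences[0], skip_special_tokens=True)
    return {
        "text": text,
        # High top-k mass / low entropy means probability is concentrated on a
        # few candidates, which can signal confidence even when no single
        # output sequence dominates.
        "mean_top_k_mass": sum(top_k_mass) / len(top_k_mass),
        "mean_entropy": sum(entropies) / len(entropies),
    }

print(distribution_confidence("Translate English to German: The house is small."))
```

The point of the sketch is the design choice it illustrates: when several outputs are valid, the probability of the top sequence alone can understate the model's confidence, whereas statistics of the full per-step distribution remain informative.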
@article{flores2025_2506.00637,
  title   = {Improving the Calibration of Confidence Scores in Text Generation Using the Output Distribution's Characteristics},
  author  = {Lorenzo Jaime Yu Flores and Ori Ernst and Jackie Chi Kit Cheung},
  journal = {arXiv preprint arXiv:2506.00637},
  year    = {2025}
}