AdUE: Improving uncertainty estimation head for LoRA adapters in LLMs

Uncertainty estimation remains a critical challenge in adapting pre-trained language models to classification tasks, particularly under parameter-efficient fine-tuning approaches such as adapters. We introduce AdUE1, an efficient post-hoc uncertainty estimation (UE) method that enhances softmax-based estimates. Our approach (1) uses a differentiable approximation of the maximum function and (2) applies L2-SP regularization, anchoring the fine-tuned head weights to their starting point. Evaluations on five NLP classification datasets across four language models (RoBERTa, ELECTRA, LLaMA-2, Qwen) demonstrate that our method consistently outperforms established baselines such as Mahalanobis distance and softmax response. Our approach is lightweight (it requires no changes to the base model) and produces better-calibrated confidence estimates.
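As a rough illustration of the two ingredients named above, the sketch below combines a temperature-controlled soft maximum (a logsumexp over class probabilities) with an L2-SP penalty that anchors the head weights to their initial values. The head architecture, temperature, penalty weight, and training objective are assumptions for illustration, not the paper's exact specification.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SoftMaxConfidenceHead(nn.Module):
    """Illustrative UE head: a linear classifier whose confidence score is a
    differentiable (soft) maximum over the predicted class probabilities."""

    def __init__(self, hidden_dim: int, num_classes: int, temperature: float = 10.0):
        super().__init__()
        self.classifier = nn.Linear(hidden_dim, num_classes)
        self.temperature = temperature
        # Snapshot of the initial ("starting point") parameters for L2-SP.
        self.register_buffer("w0", self.classifier.weight.detach().clone())
        self.register_buffer("b0", self.classifier.bias.detach().clone())

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        probs = F.softmax(self.classifier(features), dim=-1)
        # Smooth approximation of max_k p_k: (1/T) * logsumexp(T * p_k),
        # which approaches the hard maximum as T grows.
        return torch.logsumexp(self.temperature * probs, dim=-1) / self.temperature

    def l2_sp_penalty(self) -> torch.Tensor:
        # L2-SP: penalize drift of the fine-tuned head from its starting point.
        return ((self.classifier.weight - self.w0) ** 2).sum() + \
               ((self.classifier.bias - self.b0) ** 2).sum()


# Hypothetical usage on features from a frozen adapter-tuned backbone.
if __name__ == "__main__":
    head = SoftMaxConfidenceHead(hidden_dim=768, num_classes=5)
    feats = torch.randn(4, 768)        # placeholder pooled features
    confidence = head(feats)           # differentiable soft-max confidence scores
    reg = 1e-3 * head.l2_sp_penalty()  # illustrative penalty weight
```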
@article{zabolotnyi2025_2505.15443,
  title   = {AdUE: Improving uncertainty estimation head for LoRA adapters in LLMs},
  author  = {Artem Zabolotnyi and Roman Makarov and Mile Mitrovic and Polina Proskura and Oleg Travkin and Roman Alferov and Alexey Zaytsev},
  journal = {arXiv preprint arXiv:2505.15443},
  year    = {2025}
}