AdUE: Improving uncertainty estimation head for LoRA adapters in LLMs

Main: 4 pages · 2 figures · Bibliography: 2 pages · 8 tables · Appendix: 3 pages
Abstract

Uncertainty estimation remains a critical challenge in adapting pre-trained language models to classification tasks, particularly under parameter-efficient fine-tuning approaches such as adapters. We introduce AdUE, an efficient post-hoc uncertainty estimation (UE) method that enhances softmax-based estimates. Our approach (1) uses a differentiable approximation of the maximum function and (2) applies additional L2-SP regularization, anchoring the fine-tuned head weights to their pre-trained values. Evaluations on five NLP classification datasets across four language models (RoBERTa, ELECTRA, LLaMA-2, Qwen) demonstrate that our method consistently outperforms established baselines such as Mahalanobis distance and softmax response. Our approach is lightweight (it requires no changes to the base model) and produces better-calibrated confidence.
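The two ingredients named in the abstract can be sketched in a few lines. This is an illustrative reconstruction, not the paper's exact formulation: the smooth-max surrogate is assumed here to be a temperature-scaled log-sum-exp, and `l2_sp_penalty` is a generic L2-SP term anchoring head weights to their pre-fine-tuning values.

```python
import math

def smooth_max(probs, tau=0.1):
    """Differentiable surrogate for max(probs) via temperature-scaled
    log-sum-exp. As tau -> 0 it approaches the hard maximum (softmax
    response). Assumed form for illustration; the paper may use a
    different smooth approximation."""
    m = max(probs)  # subtract the max for numerical stability
    return m + tau * math.log(sum(math.exp((p - m) / tau) for p in probs))

def l2_sp_penalty(weights, anchor, lam=1e-3):
    """Generic L2-SP regularizer: squared distance of the fine-tuned
    head weights from their anchor (starting-point) values."""
    return lam * sum((w - a) ** 2 for w, a in zip(weights, anchor))
```

With a small temperature the surrogate tracks the hard maximum closely while remaining differentiable everywhere, which is what makes it usable as a trainable confidence head on top of a frozen LoRA-adapted model.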

@article{zabolotnyi2025_2505.15443,
  title={AdUE: Improving uncertainty estimation head for LoRA adapters in LLMs},
  author={Artem Zabolotnyi and Roman Makarov and Mile Mitrovic and Polina Proskura and Oleg Travkin and Roman Alferov and Alexey Zaytsev},
  journal={arXiv preprint arXiv:2505.15443},
  year={2025}
}
