ResearchTrend.AI
Beyond Overconfidence: Foundation Models Redefine Calibration in Deep Neural Networks

11 June 2025
Achim Hekler
Lukas Kuhn
Florian Buettner
    UQCV
arXiv (abs) · PDF · HTML
Abstract

Reliable uncertainty calibration is essential for safely deploying deep neural networks in high-stakes applications. Deep neural networks are known to exhibit systematic overconfidence, especially under distribution shifts. Although foundation models such as ConvNeXt, EVA and BEiT have demonstrated significant improvements in predictive performance, their calibration properties remain underexplored. This paper presents a comprehensive investigation into the calibration behavior of foundation models, revealing insights that challenge established paradigms. Our empirical analysis shows that these models tend to be underconfident in in-distribution predictions, resulting in higher calibration errors, while demonstrating improved calibration under distribution shifts. Furthermore, we demonstrate that foundation models are highly responsive to post-hoc calibration techniques in the in-distribution setting, enabling practitioners to effectively mitigate underconfidence bias. However, these methods become progressively less reliable under severe distribution shifts and can occasionally produce counterproductive results. Our findings highlight the complex, non-monotonic effects of architectural and training innovations on calibration, challenging established narratives of continuous improvement.
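The abstract refers to calibration error and to post-hoc calibration techniques. As a minimal illustration of those concepts (not the paper's code, and the function names are our own), here is a sketch of the standard expected calibration error (ECE) metric together with temperature scaling, the most common post-hoc calibration method, using NumPy:

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=15):
    """Standard ECE: bin predictions by confidence, then average the
    per-bin gap between accuracy and mean confidence, weighted by bin size."""
    confidences = probs.max(axis=1)
    predictions = probs.argmax(axis=1)
    accuracies = (predictions == labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            ece += mask.mean() * abs(accuracies[mask].mean() - confidences[mask].mean())
    return ece

def fit_temperature(logits, labels):
    """Temperature scaling: fit a single scalar T > 0 that minimizes the
    negative log-likelihood of held-out logits (grid search for simplicity)."""
    def nll(t):
        z = logits / t
        z = z - z.max(axis=1, keepdims=True)  # numerical stability
        log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(labels)), labels].mean()
    return min(np.linspace(0.05, 10.0, 400), key=nll)
```

An underconfident model (as the abstract reports for foundation models in-distribution) yields a fitted temperature below 1, sharpening the softmax; an overconfident one yields a temperature above 1, flattening it.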

@article{hekler2025_2506.09593,
  title={Beyond Overconfidence: Foundation Models Redefine Calibration in Deep Neural Networks},
  author={Achim Hekler and Lukas Kuhn and Florian Buettner},
  journal={arXiv preprint arXiv:2506.09593},
  year={2025}
}