ResearchTrend.AI
Bayesian generative models can flag performance loss, bias, and out-of-distribution image content

21 March 2025
Miguel López-Pérez
Marco Miani
Valery Naranjo
Søren Hauberg
Aasa Feragen
Topics: OOD, MedIm
Abstract

Generative models are popular for medical imaging tasks such as anomaly detection, feature extraction, data visualization, and image generation. Since they are parameterized by deep learning models, they are often sensitive to distribution shifts and unreliable when applied to out-of-distribution data, creating a risk of, e.g., underrepresentation bias. This behavior can be flagged using uncertainty quantification (UQ) methods for generative models, but their availability remains limited. We propose SLUG: a new UQ method for VAEs that combines recent advances in Laplace approximations with stochastic trace estimators to scale gracefully with image dimensionality. We show that our UQ score -- unlike the VAE's encoder variances -- correlates strongly with reconstruction error and racial underrepresentation bias for dermatological images. We also show how pixel-wise uncertainty can detect out-of-distribution image content such as ink, rulers, and patches, which is known to induce learning shortcuts in predictive models.
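The abstract's scalability claim rests on stochastic trace estimation: rather than materializing a matrix whose size grows with image dimensionality, the trace is estimated from a few matrix-vector products. Below is a minimal sketch of the classic Hutchinson estimator, a common choice for this purpose. This is an illustrative stand-in, not the paper's SLUG implementation; the function name and NumPy setup are assumptions for the example.

```python
import numpy as np

def hutchinson_trace(matvec, dim, n_samples=100, rng=None):
    """Estimate tr(A) via Hutchinson's stochastic trace estimator.

    `matvec` computes A @ v without materializing A, so each sample
    costs one matrix-vector product. E[v^T A v] = tr(A) when the
    entries of v are i.i.d. Rademacher (+/-1) variables.
    """
    rng = np.random.default_rng(rng)
    total = 0.0
    for _ in range(n_samples):
        v = rng.choice([-1.0, 1.0], size=dim)  # Rademacher probe vector
        total += v @ matvec(v)                 # one quadratic form v^T A v
    return total / n_samples
```

For a diagonal matrix the estimator is exact (the cross terms vanish and each v_i^2 = 1), which makes a convenient sanity check; for general matrices the variance shrinks as the number of probe vectors grows.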

@article{lópez-pérez2025_2503.17477,
  title={Bayesian generative models can flag performance loss, bias, and out-of-distribution image content},
  author={Miguel López-Pérez and Marco Miani and Valery Naranjo and Søren Hauberg and Aasa Feragen},
  journal={arXiv preprint arXiv:2503.17477},
  year={2025}
}