Uncertainty Estimation by Human Perception versus Neural Models

18 June 2025
Pedro Mendes
Paolo Romano
David Garlan
arXiv: 2506.15850 (abs · PDF · HTML)
Main: 9 pages · 1 figure · Bibliography: 3 pages · 4 tables
Abstract

Modern neural networks (NNs) often achieve high predictive accuracy but remain poorly calibrated, producing overconfident predictions even when wrong. This miscalibration poses serious challenges in applications where reliable uncertainty estimates are critical. In this work, we investigate how human perceptual uncertainty compares to uncertainty estimated by NNs. Using three vision benchmarks annotated with both human disagreement and crowdsourced confidence, we assess the correlation between model-predicted uncertainty and human-perceived uncertainty. Our results show that current methods only weakly align with human intuition, with correlations varying significantly across tasks and uncertainty metrics. Notably, we find that incorporating human-derived soft labels into the training process can improve calibration without compromising accuracy. These findings reveal a persistent gap between model and human uncertainty and highlight the potential of leveraging human insights to guide the development of more trustworthy AI systems.
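The abstract describes two computational steps: measuring how well model-predicted uncertainty tracks human-perceived uncertainty, and training on human-derived soft labels to improve calibration. The sketch below illustrates both on toy data; it is an assumption-laden illustration, not the authors' pipeline. The dataset, architecture, annotator counts, and the specific choices of predictive entropy as the model uncertainty score and Spearman correlation as the alignment measure are placeholders.

# Minimal sketch (not the paper's exact method): train a classifier on
# human-derived soft labels and compare model uncertainty (predictive entropy)
# with human-perceived uncertainty (entropy of the annotator label distribution).
# All data, model sizes, and hyperparameters below are illustrative placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F
from scipy.stats import spearmanr

NUM_CLASSES = 10

def soft_label_loss(logits, soft_targets):
    # Cross-entropy against a human-derived soft label distribution.
    log_probs = F.log_softmax(logits, dim=-1)
    return -(soft_targets * log_probs).sum(dim=-1).mean()

def predictive_entropy(logits):
    # Entropy of the model's softmax output, used here as its uncertainty score.
    probs = F.softmax(logits, dim=-1)
    return -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)

# Toy stand-ins: random "images" and soft labels built from simulated annotator votes.
x = torch.randn(256, 3 * 32 * 32)
votes = torch.randint(0, NUM_CLASSES, (256, 5))                  # 5 annotators per image
soft_labels = F.one_hot(votes, NUM_CLASSES).float().mean(dim=1)  # empirical label distribution

model = nn.Sequential(nn.Linear(3 * 32 * 32, 128), nn.ReLU(), nn.Linear(128, NUM_CLASSES))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for _ in range(50):  # short training loop, for illustration only
    opt.zero_grad()
    loss = soft_label_loss(model(x), soft_labels)
    loss.backward()
    opt.step()

with torch.no_grad():
    model_unc = predictive_entropy(model(x))
    human_unc = -(soft_labels * soft_labels.clamp_min(1e-12).log()).sum(dim=-1)

rho, _ = spearmanr(model_unc.numpy(), human_unc.numpy())
print(f"Spearman correlation between model and human uncertainty: {rho:.3f}")

Note that the soft-label cross-entropy reduces to standard cross-entropy when every annotator agrees (one-hot targets), so training this way does not have to sacrifice accuracy, which is consistent with the abstract's claim that calibration improves without compromising accuracy.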

@article{mendes2025_2506.15850,
  title={Uncertainty Estimation by Human Perception versus Neural Models},
  author={Pedro Mendes and Paolo Romano and David Garlan},
  journal={arXiv preprint arXiv:2506.15850},
  year={2025}
}