
Unified Multimodal Uncertain Inference

Dengjia Zhang
Alexander Martin
William Jurayj
Kenton Murray
Benjamin Van Durme
Reno Kriz
Main: 5 pages · 13 figures · 8 tables · Bibliography: 1 page · Appendix: 10 pages
Abstract

We introduce Unified Multimodal Uncertain Inference (UMUI), a multimodal inference task spanning text, audio, and video, where models must produce calibrated probability estimates of hypotheses conditioned on a premise in any modality or combination. While uncertain inference has been explored in text, extension to other modalities has been limited to single-modality binary entailment judgments, leaving no framework for fine-grained probabilistic reasoning in or across other modalities. To address this, we curate a human-annotated evaluation set with scalar probability judgments across audio, visual, and audiovisual settings, and additionally evaluate on existing text and audio benchmarks. We introduce CLUE (Calibrated Latent Uncertainty Estimation), which combines self-consistent teacher calibration and distribution-based confidence probing to produce calibrated predictions. We demonstrate that our 3B-parameter model achieves equivalent or stronger performance than baselines up to 32B parameters across all modalities.
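The abstract describes evaluating models on scalar probability judgments rather than binary entailment labels. A minimal sketch of how such calibrated predictions might be scored against human annotations follows; the metric choices (Brier score and a binned calibration error) are illustrative assumptions and not the paper's actual evaluation protocol.

```python
# Sketch (assumed, not from the paper): scoring probability predictions
# against human scalar probability judgments, as a UMUI-style evaluation
# might do.

def brier_score(preds, targets):
    """Mean squared error between predicted and annotated probabilities."""
    return sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds)

def expected_calibration_error(preds, targets, n_bins=5):
    """Average gap |mean prediction - mean annotation| over equal-width bins."""
    bins = [[] for _ in range(n_bins)]
    for p, t in zip(preds, targets):
        idx = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into last bin
        bins[idx].append((p, t))
    total, err = 0, 0.0
    for b in bins:
        if not b:
            continue
        mean_pred = sum(p for p, _ in b) / len(b)
        mean_targ = sum(t for _, t in b) / len(b)
        err += len(b) * abs(mean_pred - mean_targ)
        total += len(b)
    return err / total

# Hypothetical model outputs vs. human scalar probability judgments
preds = [0.9, 0.2, 0.7, 0.5, 0.1]
targets = [0.8, 0.3, 0.6, 0.5, 0.0]
print(brier_score(preds, targets))               # -> 0.008
print(expected_calibration_error(preds, targets))  # -> 0.08
```

Lower is better for both metrics: a perfectly calibrated model whose predictions match the annotated probabilities scores zero.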
