
Contra4: Evaluating Contrastive Cross-Modal Reasoning in Audio, Video, Image, and 3D

Main: 4 Pages
Bibliography: 3 Pages
Appendix: 8 Pages
10 Figures
7 Tables
Abstract

Real-world decision-making often begins with identifying which modality contains the most relevant information for a given query. While recent multimodal models have made impressive progress in processing diverse inputs, it remains unclear whether they can reason contrastively across multiple modalities to select the one that best satisfies a natural language prompt. We argue this capability is foundational, especially in retrieval-augmented and decision-time contexts, where systems must evaluate multiple signals and identify which one conveys the relevant information. To evaluate this skill, we introduce Contra4, a dataset for contrastive cross-modal reasoning across four modalities: image, audio, video, and 3D. Each example presents a natural language question alongside multiple candidate modality instances, and the model must select the one that semantically aligns with the prompt. Contra4 combines human-annotated captions with a mixture-of-models round-trip-consistency filter to ensure high-quality supervision, resulting in 174k training examples and a manually verified test set of 2.3k samples. While task-specific fine-tuning improves performance by 56% relative to baseline, state-of-the-art models still achieve only 56% accuracy overall and 42% in four-modality settings, underscoring a significant limitation in current multimodal models.
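As a rough sketch of the round-trip-consistency idea described in the abstract: an example survives filtering only if an independent model, given the question and the candidates' human-annotated captions, recovers the intended modality. This is a minimal single-checker illustration in Python; the field names, the answer_model callable, and the toy keyword matcher are assumptions for clarity, since the paper's actual pipeline mixes several models.

import re

# Minimal round-trip-consistency filter, assuming each example is a
# dict with "question", "captions" (modality -> human-annotated
# caption), and "answer" (the target modality). The answer_model
# callable is hypothetical; the paper uses a mixture of models
# rather than a single checker.
def round_trip_filter(examples, answer_model):
    kept = []
    for ex in examples:
        # Ask an independent model to pick the modality whose caption
        # satisfies the question; keep the example only on agreement.
        prediction = answer_model(ex["question"], ex["captions"])
        if prediction == ex["answer"]:
            kept.append(ex)
    return kept

# Illustrative usage with a trivial word-overlap "model".
def keyword_model(question, captions):
    q_words = set(re.findall(r"[a-z]+", question.lower()))
    def overlap(modality):
        c_words = set(re.findall(r"[a-z]+", captions[modality].lower()))
        return len(q_words & c_words)
    return max(captions, key=overlap)

examples = [{
    "question": "Which input contains the sound of breaking glass?",
    "captions": {
        "image": "a photo of an intact window",
        "audio": "glass breaking followed by footsteps",
        "video": "a person walking down a hallway",
        "3d": "a 3d model of a ceramic vase",
    },
    "answer": "audio",
}]
print(len(round_trip_filter(examples, keyword_model)))  # 1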

@article{panagopoulou2025_2506.01275,
  title={Contra4: Evaluating Contrastive Cross-Modal Reasoning in Audio, Video, Image, and 3D},
  author={Artemis Panagopoulou and Le Xue and Honglu Zhou and Silvio Savarese and Ran Xu and Caiming Xiong and Chris Callison-Burch and Mark Yatskar and Juan Carlos Niebles},
  journal={arXiv preprint arXiv:2506.01275},
  year={2025}
}