Daily-Omni: Towards Audio-Visual Reasoning with Temporal Alignment across Modalities

Abstract

Recent Multimodal Large Language Models (MLLMs) achieve promising performance on visual and audio benchmarks independently. However, the ability of these models to process cross-modal information synchronously remains largely unexplored. In this paper, we introduce: 1) Daily-Omni, an Audio-Visual Question Answering benchmark comprising 684 videos of daily-life scenarios from diverse sources, rich in both audio and visual information, and featuring 1197 multiple-choice QA pairs across 6 major tasks; 2) the Daily-Omni QA Generation Pipeline, which includes automatic annotation, QA generation, and QA optimization, and which significantly improves the efficiency of human evaluation and the scalability of the benchmark; 3) Daily-Omni-Agent, a training-free agent that uses an open-source Visual Language Model (VLM), an Audio Language Model (ALM), and an Automatic Speech Recognition (ASR) model to establish a baseline for this benchmark. The results show that current MLLMs still struggle significantly with tasks requiring audio-visual integration, but that combining VLMs and ALMs with simple temporal alignment techniques achieves substantially better performance. Code and the benchmark are available at \href{this https URL}{this https URL}.
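
The abstract describes combining a VLM and an ALM via simple temporal alignment. The sketch below illustrates one plausible form of that idea: per-segment visual, audio, and speech descriptions sharing timestamps are interleaved into a single context for a text-only LLM to answer a multiple-choice question. This is a minimal sketch under assumed interfaces, not the released Daily-Omni-Agent code; the names Segment, build_aligned_context, answer_mcq, and the llm callable are hypothetical placeholders.

# Minimal sketch (not the released Daily-Omni-Agent code) of the temporal-alignment
# idea from the abstract: split the video into time segments, let a VLM, an ALM,
# and an ASR model describe each segment, then interleave the time-stamped
# descriptions into one context that a text-only LLM uses to answer a
# multiple-choice question. All names here are hypothetical placeholders.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Segment:
    start: float   # segment start time (seconds)
    end: float     # segment end time (seconds)
    visual: str    # VLM caption for the frames in this span
    audio: str     # ALM description of non-speech audio in this span
    speech: str    # ASR transcript for this span (may be empty)

def build_aligned_context(segments: List[Segment]) -> str:
    """Interleave per-segment visual/audio/speech descriptions by timestamp."""
    lines = []
    for seg in sorted(segments, key=lambda s: s.start):
        lines.append(f"[{seg.start:.1f}s-{seg.end:.1f}s]")
        lines.append(f"  Visual: {seg.visual}")
        lines.append(f"  Audio:  {seg.audio}")
        if seg.speech:
            lines.append(f"  Speech: {seg.speech}")
    return "\n".join(lines)

def answer_mcq(
    segments: List[Segment],
    question: str,
    choices: List[str],
    llm: Callable[[str], str],   # any text LLM call, e.g. a local model wrapper
) -> str:
    """Answer a multiple-choice question from the time-aligned audio-visual context."""
    prompt = (
        "You are given time-aligned visual and audio descriptions of a video.\n"
        + build_aligned_context(segments)
        + f"\n\nQuestion: {question}\n"
        + "\n".join(f"{chr(65 + i)}. {c}" for i, c in enumerate(choices))
        + "\nAnswer with a single letter."
    )
    return llm(prompt)

A caller would populate the Segment objects from whichever open-source VLM, ALM, and ASR models are available and pass any text LLM as llm; the point is only that shared timestamps let a text-only reasoner relate audio and visual events to each other.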

@article{zhou2025_2505.17862,
  title={Daily-Omni: Towards Audio-Visual Reasoning with Temporal Alignment across Modalities},
  author={Ziwei Zhou and Rui Wang and Zuxuan Wu},
  journal={arXiv preprint arXiv:2505.17862},
  year={2025}
}