Hearing from Silence: Reasoning Audio Descriptions from Silent Videos via Vision-Language Model

Humans can intuitively infer sounds from silent videos, but whether multimodal large language models can perform similar modal-mismatch reasoning without access to the target modality remains largely unexplored. Current text-assisted video-to-audio (VT2A) methods excel at video Foley tasks but struggle to acquire audio descriptions at inference time. To address this challenge, we introduce the task of Reasoning Audio Descriptions from Silent Videos (SVAD) and investigate the capabilities of vision-language models (VLMs) on it. To further enhance VLMs' reasoning capacity for SVAD, we construct the CoT-AudioCaps dataset and propose a Chain-of-Thought-based supervised fine-tuning strategy. Experiments on SVAD and downstream VT2A tasks demonstrate our method's effectiveness in two key respects: it significantly improves VLMs' modal-mismatch reasoning for SVAD and effectively addresses the challenge of acquiring audio descriptions during VT2A inference.
@article{ren2025_2505.13062,
  title={Hearing from Silence: Reasoning Audio Descriptions from Silent Videos via Vision-Language Model},
  author={Yong Ren and Chenxing Li and Le Xu and Hao Gu and Duzhen Zhang and Yujie Chen and Manjie Xu and Ruibo Fu and Shan Yang and Dong Yu},
  journal={arXiv preprint arXiv:2505.13062},
  year={2025}
}