
MM-MovieDubber: Towards Multi-Modal Learning for Multi-Modal Movie Dubbing

Main: 4 Pages
3 Figures
Bibliography: 1 Page
6 Tables
Abstract

Current movie dubbing technology can produce the desired speech using a reference voice and input video, maintaining perfect synchronization with the visuals while effectively conveying the intended emotions. However, crucial aspects of movie dubbing, including adaptation to various dubbing styles, effective handling of dialogue, narration, and monologues, as well as consideration of subtle details such as speaker age and gender, remain insufficiently explored. To tackle these challenges, we introduce a multi-modal generative framework. First, it utilizes a multi-modal large vision-language model (VLM) to analyze visual inputs, enabling the recognition of dubbing types and fine-grained attributes. Second, it produces high-quality dubbing using large speech generation models, guided by multi-modal inputs. Additionally, a movie dubbing dataset with annotations for dubbing types and subtle details is constructed to enhance movie understanding and improve dubbing quality for the proposed multi-modal framework. Experimental results across multiple benchmark datasets show superior performance compared to state-of-the-art (SOTA) methods. Specifically, LSE-D, SPK-SIM, EMO-SIM, and MCD improve by up to 1.09%, 8.80%, 19.08%, and 18.74%, respectively.
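
The abstract describes a two-stage pipeline: a vision-language model first infers the dubbing type (dialogue, narration, or monologue) and fine-grained speaker attributes from the video, and a speech generation model then synthesizes the dubbed audio conditioned on those predictions together with the video, script, and reference voice. The sketch below illustrates that control flow only; all class and function names are hypothetical placeholders under these assumptions, not the authors' implementation.

from dataclasses import dataclass

# Hypothetical structured output of the stage-1 VLM analysis.
# Field names are illustrative; the paper only states that dubbing type
# and subtle attributes such as speaker age and gender are recognized.
@dataclass
class DubbingAttributes:
    dubbing_type: str    # "dialogue" | "narration" | "monologue"
    speaker_age: str     # e.g. "child", "adult", "elderly"
    speaker_gender: str  # e.g. "female", "male"

def analyze_video(video_path: str) -> DubbingAttributes:
    """Stage 1 (placeholder): a multi-modal VLM would infer the dubbing
    type and fine-grained speaker attributes from the input video."""
    return DubbingAttributes("dialogue", "adult", "female")

def generate_dubbing(video_path: str, script: str, reference_voice: str,
                     attrs: DubbingAttributes) -> bytes:
    """Stage 2 (placeholder): a large speech generation model would
    synthesize speech conditioned on the video, script, reference voice,
    and the stage-1 attributes."""
    return b""  # dummy waveform bytes

if __name__ == "__main__":
    attrs = analyze_video("movie_clip.mp4")
    audio = generate_dubbing("movie_clip.mp4", "It's quiet tonight.",
                             "reference_speaker.wav", attrs)
    print(attrs, len(audio))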

@article{zheng2025_2505.16279,
  title={MM-MovieDubber: Towards Multi-Modal Learning for Multi-Modal Movie Dubbing},
  author={Junjie Zheng and Zihao Chen and Chaofan Ding and Yunming Liang and Yihan Fan and Huan Yang and Lei Xie and Xinhan Di},
  journal={arXiv preprint arXiv:2505.16279},
  year={2025}
}
