Synchronized Video-to-Audio Generation via Mel Quantization-Continuum Decomposition

10 March 2025
Juncheng Wang
Chao Xu
Cheng Yu
Lei Shang
Zhe Hu
Shujun Wang
Liefeng Bo
DiffM · VGen
Abstract

Video-to-audio generation is essential for synthesizing realistic audio tracks that synchronize effectively with silent videos. Following the perspective of extracting essential signals from videos that can precisely control mature text-to-audio generative diffusion models, this paper presents how to balance the representation of mel-spectrograms in terms of completeness and complexity through a new approach called Mel Quantization-Continuum Decomposition (Mel-QCD). We decompose the mel-spectrogram into three distinct types of signals and, by applying quantization or continuity to each, we can effectively predict them from video with a devised video-to-all (V2X) predictor. The predicted signals are then recomposed and fed into a ControlNet, along with a textual inversion design, to control the audio generation process. Our proposed Mel-QCD method demonstrates state-of-the-art performance across eight metrics covering quality, synchronization, and semantic consistency. Our code and demos will be released at \href{Website}{this https URL}.
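The abstract does not spell out the three signal types, the V2X predictor, or the ControlNet conditioning, so the following is only a rough, self-contained sketch of the quantization-continuum idea itself: splitting a mel-spectrogram into a coarsely quantized component and a continuous residual, then recomposing them losslessly. The helper names, the 16-level grid, and the two-way split are hypothetical illustration choices, not the authors' implementation.

# Minimal sketch of a quantization-continuum split of a mel-spectrogram.
# Hypothetical helpers (quantize/decompose/recompose) and level count; NumPy assumed.
import numpy as np

def quantize(mel: np.ndarray, n_levels: int = 16) -> np.ndarray:
    """Snap each mel value onto a small discrete grid of levels."""
    lo, hi = mel.min(), mel.max()
    step = (hi - lo) / (n_levels - 1)
    return lo + np.round((mel - lo) / step) * step

def decompose(mel: np.ndarray, n_levels: int = 16):
    """Split a mel-spectrogram into a quantized part and a continuous residual."""
    quantized = quantize(mel, n_levels)
    residual = mel - quantized          # continuous fine detail left over
    return quantized, residual

def recompose(quantized: np.ndarray, residual: np.ndarray) -> np.ndarray:
    """Recombine the components into the full mel-spectrogram."""
    return quantized + residual

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    mel = rng.normal(size=(80, 256))            # 80 mel bins x 256 frames (dummy data)
    q, r = decompose(mel)
    assert np.allclose(recompose(q, r), mel)    # the decomposition is lossless
    print("distinct quantized levels:", len(np.unique(np.round(q, 6))))

In the paper's pipeline, components like these would be predicted from video by the V2X predictor and then recomposed to condition the text-to-audio diffusion model; that stage is not reproduced here.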

@article{wang2025_2503.06984,
  title={Synchronized Video-to-Audio Generation via Mel Quantization-Continuum Decomposition},
  author={Juncheng Wang and Chao Xu and Cheng Yu and Lei Shang and Zhe Hu and Shujun Wang and Liefeng Bo},
  journal={arXiv preprint arXiv:2503.06984},
  year={2025}
}