ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.


DiffGAP: A Lightweight Diffusion Module in Contrastive Space for Bridging Cross-Modal Gap

15 March 2025
Shentong Mo
Zehua Chen
Fan Bao
Jun Zhu
Abstract

Recent works in cross-modal understanding and generation, notably models such as CLAP (Contrastive Language-Audio Pretraining) and CAVP (Contrastive Audio-Visual Pretraining), have significantly enhanced the alignment of text, video, and audio embeddings via a single contrastive loss. However, these methods often overlook the bidirectional interactions and the inherent noise present in each modality, both of which can critically affect the quality and efficacy of cross-modal integration. To address this limitation, we introduce DiffGAP, a novel approach that incorporates a lightweight generative module within the contrastive space. Specifically, DiffGAP employs a bidirectional diffusion process tailored to bridge the cross-modal gap more effectively: a denoising process on text and video embeddings conditioned on audio embeddings, and vice versa, facilitating a more nuanced and robust cross-modal interaction. Experimental results on the VGGSound and AudioCaps datasets demonstrate that DiffGAP significantly improves performance on video/text-to-audio generation and retrieval tasks, confirming its effectiveness in enhancing cross-modal understanding and generation capabilities.
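The conditioned denoising described above can be sketched as a standard DDPM-style reverse process operating on embedding vectors rather than raw signals. The sketch below is a minimal illustration under assumed details: the embedding dimension, noise schedule, and the placeholder `denoiser` function are all hypothetical stand-ins, not the paper's actual architecture, which would use a small learned noise-prediction network taking the noisy embedding, the conditioning embedding from the other modality, and the timestep.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative hyperparameters (not from the paper).
DIM, STEPS = 8, 10
betas = np.linspace(1e-4, 0.02, STEPS)   # linear noise schedule
alphas_bar = np.cumprod(1.0 - betas)     # cumulative signal-retention factors

def add_noise(x0, t):
    """Forward diffusion: corrupt a clean embedding x0 to noise level t."""
    eps = rng.normal(size=x0.shape)
    xt = np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps
    return xt, eps

def denoiser(xt, cond, t):
    """Hypothetical noise predictor conditioned on the other modality.
    In DiffGAP this would be the lightweight learned module; here it is
    a trivial placeholder so the sketch runs end to end."""
    return xt - cond

def denoise_step(xt, cond, t):
    """One reverse step x_t -> x_{t-1}, conditioned on `cond`."""
    eps_hat = denoiser(xt, cond, t)
    alpha_t = 1.0 - betas[t]
    mean = (xt - betas[t] / np.sqrt(1.0 - alphas_bar[t]) * eps_hat) / np.sqrt(alpha_t)
    if t > 0:  # inject sampling noise except at the final step
        mean = mean + np.sqrt(betas[t]) * rng.normal(size=xt.shape)
    return mean

# Bidirectional use: here, an audio embedding is denoised conditioned on a
# text embedding; the symmetric direction conditions text/video on audio.
audio, text = rng.normal(size=DIM), rng.normal(size=DIM)
noisy_audio, _ = add_noise(audio, STEPS - 1)
for t in reversed(range(STEPS)):
    noisy_audio = denoise_step(noisy_audio, cond=text, t=t)

print(noisy_audio.shape)  # refined embedding, same shape as the input
```

Running both directions (audio conditioned on text/video, and text/video conditioned on audio) is what makes the interaction bidirectional; each modality's embedding is refined using information from the other.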

@article{mo2025_2503.12131,
  title={ DiffGAP: A Lightweight Diffusion Module in Contrastive Space for Bridging Cross-Modal Gap },
  author={ Shentong Mo and Zehua Chen and Fan Bao and Jun Zhu },
  journal={arXiv preprint arXiv:2503.12131},
  year={ 2025 }
}