Weakly-supervised Audio Temporal Forgery Localization via Progressive Audio-language Co-learning Network

3 May 2025
Junyan Wu
Wenbo Xu
Wei Lu
Xiangyang Luo
Rui Yang
Shize Guo
Abstract

Audio temporal forgery localization (ATFL) aims to find the precise forged regions of partially spoofed audio that has been deliberately modified. Existing ATFL methods rely on training efficient networks with fine-grained annotations, which are costly and challenging to obtain in real-world scenarios. To address this challenge, we propose a progressive audio-language co-learning network (LOCO) that adopts co-learning and self-supervision strategies to improve localization performance under weak supervision. Specifically, an audio-language co-learning module is first designed to capture forgery consensus features by aligning semantics from temporal and global perspectives. In this module, forgery-aware prompts are constructed from utterance-level annotations together with learnable prompts, dynamically incorporating semantic priors into temporal content features. In addition, a forgery localization module is applied to produce forgery proposals based on fused forgery-class activation sequences. Finally, a progressive refinement strategy is introduced to generate pseudo frame-level labels and leverage supervised semantic contrastive learning to amplify the semantic distinction between real and fake content, thereby continuously optimizing forgery-aware features. Extensive experiments show that the proposed LOCO achieves state-of-the-art performance on three public benchmarks.
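
The refinement step can be illustrated with a minimal sketch. The snippet below shows one plausible form of a supervised contrastive loss applied to pseudo-labeled frame embeddings (real vs. fake); it is not the authors' implementation, and the embedding dimension, temperature, and tensor shapes are illustrative assumptions.

# Minimal sketch (not the authors' code): supervised contrastive loss over
# frame embeddings with pseudo real/fake labels, one plausible reading of the
# "supervised semantic contrastive learning" step described in the abstract.
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(features, pseudo_labels, temperature=0.1):
    """features: (N, D) frame embeddings; pseudo_labels: (N,) 0 = real, 1 = fake."""
    z = F.normalize(features, dim=1)                      # unit-norm embeddings
    sim = torch.matmul(z, z.T) / temperature              # pairwise similarities
    # Exclude self-similarity on the diagonal.
    self_mask = torch.eye(len(z), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float('-inf'))
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # Positives: frames sharing the same pseudo label (excluding self).
    pos_mask = (pseudo_labels.unsqueeze(0) == pseudo_labels.unsqueeze(1)) & ~self_mask
    pos_count = pos_mask.sum(dim=1).clamp(min=1)
    loss = -(log_prob.masked_fill(~pos_mask, 0.0)).sum(dim=1) / pos_count
    return loss.mean()

# Example: 8 frame embeddings of dimension 128 with pseudo frame-level labels.
frames = torch.randn(8, 128)
labels = torch.tensor([0, 0, 1, 1, 0, 1, 0, 1])
print(supervised_contrastive_loss(frames, labels).item())

Minimizing this loss pulls embeddings with the same pseudo label together and pushes real and fake frames apart, which is the intended effect of amplifying the semantic distinction between real and fake content.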

@article{wu2025_2505.01880,
  title={Weakly-supervised Audio Temporal Forgery Localization via Progressive Audio-language Co-learning Network},
  author={Junyan Wu and Wenbo Xu and Wei Lu and Xiangyang Luo and Rui Yang and Shize Guo},
  journal={arXiv preprint arXiv:2505.01880},
  year={2025}
}