Towards More General Video-based Deepfake Detection through Facial Component Guided Adaptation for Foundation Model

8 April 2024
Yue-Hua Han
Tai-Ming Huang
Shu-Tzu Lo
Po-Han Huang
Abstract

Generative models have enabled the creation of highly realistic facial-synthetic images, raising significant concerns due to their potential for misuse. Despite rapid advancements in the field of deepfake detection, developing efficient approaches to leverage foundation models for improved generalizability to unseen forgery samples remains challenging. To address this challenge, we propose a novel side-network-based decoder that extracts spatial and temporal cues using the CLIP image encoder for generalized video-based deepfake detection. Additionally, we introduce Facial Component Guidance (FCG) to enhance spatial learning generalizability by encouraging the model to focus on key facial regions. By leveraging the generic features of a vision-language foundation model, our approach demonstrates promising generalizability on challenging deepfake datasets while also exhibiting superiority in training data efficiency, parameter efficiency, and model robustness.
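The core idea of the abstract, a lightweight trainable side network reading features from a frozen foundation-model encoder and aggregating them across frames, can be sketched as follows. This is a minimal illustration only, not the authors' implementation: the `FrozenEncoder` is a stand-in for the CLIP image encoder, and all dimensions (`768`, `512`, `128`) and module choices (a GRU for temporal aggregation) are assumptions.

```python
import torch
import torch.nn as nn

class FrozenEncoder(nn.Module):
    """Stand-in for the frozen CLIP image encoder (a single linear layer here)."""
    def __init__(self, in_dim=768, out_dim=512):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim)
        for p in self.parameters():           # foundation model stays frozen;
            p.requires_grad = False           # only the side network is trained

    def forward(self, x):                     # x: (B*T, in_dim) flattened frames
        return self.proj(x)

class SideNetworkDetector(nn.Module):
    """Trainable side-network decoder over frozen per-frame features."""
    def __init__(self, in_dim=768, feat_dim=512, hidden=128):
        super().__init__()
        self.encoder = FrozenEncoder(in_dim, feat_dim)
        self.spatial = nn.Sequential(nn.Linear(feat_dim, hidden), nn.GELU())
        self.temporal = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)      # real/fake logit

    def forward(self, frames):                # frames: (B, T, in_dim)
        b, t, d = frames.shape
        feats = self.encoder(frames.reshape(b * t, d)).reshape(b, t, -1)
        h = self.spatial(feats)               # per-frame spatial cues
        _, last = self.temporal(h)            # temporal aggregation across frames
        return self.head(last[-1])            # (B, 1) clip-level prediction

model = SideNetworkDetector()
logits = model(torch.randn(2, 8, 768))       # batch of 2 clips, 8 frames each
print(logits.shape)                          # torch.Size([2, 1])
```

Only the side network's parameters receive gradients, which is what makes this family of approaches parameter-efficient: the frozen encoder's generic features are reused rather than fine-tuned. The paper's Facial Component Guidance would additionally steer the spatial branch toward key facial regions, which is not modeled in this sketch.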

@article{han2025_2404.05583,
  title={Towards More General Video-based Deepfake Detection through Facial Component Guided Adaptation for Foundation Model},
  author={Yue-Hua Han and Tai-Ming Huang and Kai-Lung Hua and Jun-Cheng Chen},
  journal={arXiv preprint arXiv:2404.05583},
  year={2025}
}