Benchmarking Unified Face Attack Detection via Hierarchical Prompt Tuning

Abstract

Presentation Attack Detection and Face Forgery Detection are designed to protect face data from physical media-based Presentation Attacks and digital editing-based DeepFakes, respectively. However, training these two models separately makes them vulnerable to unknown attacks and increases the deployment burden. The absence of a Unified Face Attack Detection (UAD) model that handles both types of attacks stems mainly from two factors. First, adequate benchmarks are lacking: existing UAD datasets cover only a limited range of attack types and samples, restricting a model's ability to address advanced threats. To address this, we propose UniAttackDataPlus (UniAttackData+), the most extensive and sophisticated collection of forgery techniques to date, comprising 2,875 identities and their 54 kinds of falsified samples, totaling 697,347 videos. Second, a reliable classification criterion is lacking: current methods search for an arbitrary criterion within a single semantic space, which fails when confronted with diverse attacks. We therefore present HiPTune, a novel Visual-Language Model-based Hierarchical Prompt Tuning framework that adaptively explores multiple classification criteria from different semantic spaces. We build a Visual Prompt Tree to explore classification rules hierarchically; then, by adaptively pruning the prompts, the model selects the most suitable prompts to guide the encoder in extracting discriminative features at different levels in a coarse-to-fine manner. Finally, to help the model understand the classification criteria in visual space, we propose a Dynamically Prompt Integration module that projects the visual prompts into the text encoder for more accurate semantics. Experiments on 12 datasets show the potential of this approach to inspire further innovation in the UAD field.
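
The abstract describes the coarse-to-fine prompt tree and adaptive pruning only at a high level. As a rough illustration, the sketch below shows one plausible way such a hierarchy of visual prompts with soft, image-conditioned pruning could be organized in PyTorch. All names (PromptTree, banks, levels, prompt_len) and design choices are assumptions made for illustration, not the authors' implementation.

# Hypothetical sketch of a coarse-to-fine visual prompt tree with adaptive
# (soft) pruning; an illustrative assumption, not the paper's actual code.
import torch
import torch.nn as nn

class PromptTree(nn.Module):
    """Learnable visual prompts organized by level: few coarse criteria at the
    root, more fine-grained criteria at deeper levels."""
    def __init__(self, levels=(1, 2, 4), prompt_len=4, dim=768):
        super().__init__()
        # one prompt bank per level; level i holds levels[i] candidate criteria
        self.banks = nn.ParameterList(
            [nn.Parameter(torch.randn(n, prompt_len, dim) * 0.02) for n in levels]
        )

    def forward(self, image_feat):
        """image_feat: (B, dim) global feature from a frozen visual encoder.
        Returns one soft-selected prompt set per level, coarse to fine."""
        selected = []
        for bank in self.banks:
            n, L, d = bank.shape
            pooled = bank.mean(dim=1)                    # (n, d) one vector per criterion
            logits = image_feat @ pooled.t()             # (B, n) image-conditioned scores
            weights = torch.softmax(logits, dim=-1)      # adaptive "pruning" weights
            prompts = weights @ bank.reshape(n, L * d)   # (B, L*d) weighted combination
            selected.append(prompts.view(-1, L, d))      # (B, prompt_len, dim)
        return selected

# usage (stand-in features in place of a frozen CLIP-like visual encoder):
# feats = torch.randn(8, 768)
# level_prompts = PromptTree()(feats)   # list of (8, 4, 768) tensors, coarse to fine

In this sketch, "pruning" is realized as a softmax-weighted selection over candidate prompts rather than a hard discard, so the choice of criterion remains differentiable; the selected prompts at each level would then be prepended to the encoder's input tokens and, per the abstract, projected into the text encoder by the Dynamically Prompt Integration module.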

@article{liu2025_2505.13327,
  title={Benchmarking Unified Face Attack Detection via Hierarchical Prompt Tuning},
  author={Ajian Liu and Haocheng Yuan and Xiao Guo and Hui Ma and Wanyi Zhuang and Changtao Miao and Yan Hong and Chuanbiao Song and Jun Lan and Qi Chu and Tao Gong and Yanyan Liang and Weiqiang Wang and Jun Wan and Xiaoming Liu and Zhen Lei},
  journal={arXiv preprint arXiv:2505.13327},
  year={2025}
}