
Unsourced Adversarial CAPTCHA: A Bi-Phase Adversarial CAPTCHA Framework

12 June 2025
Xia Du
Xiaoyuan Liu
Jizhe Zhou
Zheng Lin
Chi-Man Pun
Cong Wu
Tao Li
Zhe Chen
Wei Ni
Jun Luo
    AAML
Main: 10 pages · 6 figures · Bibliography: 2 pages · Appendix: 1 page
Abstract

With the rapid advancement of deep learning, traditional CAPTCHA schemes are increasingly vulnerable to automated attacks powered by deep neural networks (DNNs). Existing adversarial attack methods often rely on the characteristics of an original image, which introduces distortions that hinder human interpretation and limits their applicability in scenarios that lack initial input images. To address these challenges, we propose the Unsourced Adversarial CAPTCHA (UAC), a novel framework that generates high-fidelity adversarial examples guided by attacker-specified text prompts. Leveraging a Large Language Model (LLM), UAC enhances CAPTCHA diversity and supports both targeted and untargeted attacks. For targeted attacks, the EDICT method optimizes dual latent variables in a diffusion model to achieve superior image quality. For untargeted attacks, especially in black-box scenarios, we introduce the bi-path unsourced adversarial CAPTCHA (BP-UAC), a two-step optimization strategy that employs multimodal gradients and bi-path optimization for efficient misclassification. Experiments show that BP-UAC achieves high attack success rates across diverse systems, generating natural CAPTCHAs that are indistinguishable to both humans and DNNs.
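
The "bi-path, multimodal gradient" idea in the abstract can be pictured with a short sketch. The code below is a minimal, hypothetical illustration, assuming a white-box surrogate classifier and a CLIP-style image encoder stand in for the two gradient paths; the names surrogate_clf and clip_image_encoder, the weighting alpha, and all hyperparameters are assumptions for illustration, not the authors' implementation.

# Minimal, hypothetical sketch of a bi-path adversarial update: gradients from a
# surrogate classifier (misclassification path) and a CLIP-style image-text model
# (semantic/naturalness path) are combined into one signed-gradient ascent step.
# Model handles and hyperparameters below are illustrative assumptions only.
import torch
import torch.nn.functional as F

def bi_path_attack(image, prompt_embedding, surrogate_clf, clip_image_encoder,
                   true_label, steps=50, step_size=1/255, eps=8/255, alpha=0.5):
    """Untargeted attack: push `image` away from `true_label` on the surrogate
    classifier while keeping its image embedding close to the generating prompt."""
    x_adv = image.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        # Path 1: classification loss on the surrogate model (ascend -> misclassify).
        cls_loss = F.cross_entropy(surrogate_clf(x_adv), true_label)
        # Path 2: cosine similarity between image and prompt embeddings
        # (ascend -> stay semantically faithful to the text prompt).
        img_emb = F.normalize(clip_image_encoder(x_adv), dim=-1)
        txt_emb = F.normalize(prompt_embedding, dim=-1)
        sim = (img_emb * txt_emb).sum(dim=-1).mean()
        # Combined objective: both terms are maximized by gradient ascent.
        grad = torch.autograd.grad(cls_loss + alpha * sim, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + step_size * grad.sign()
            x_adv = image + torch.clamp(x_adv - image, -eps, eps)  # L_inf budget
            x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()

In the paper's black-box setting, the white-box surrogate gradients above would have to be replaced by transfer- or query-based estimates; the sketch only conveys how the two gradient paths could be combined into a single update.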

View on arXiv
@article{du2025_2506.10685,
  title={Defensive Adversarial CAPTCHA: A Semantics-Driven Framework for Natural Adversarial Example Generation},
  author={Xia Du and Xiaoyuan Liu and Jizhe Zhou and Zheng Lin and Chi-Man Pun and Cong Wu and Tao Li and Zhe Chen and Wei Ni and Jun Luo},
  journal={arXiv preprint arXiv:2506.10685},
  year={2025}
}