
Enhancing Diffusion-based Unrestricted Adversarial Attacks via Adversary Preferences Alignment

Main: 1 page, 6 figures, 4 tables; Appendix: 10 pages
Abstract

Preference alignment in diffusion models has primarily focused on benign human preferences (e.g., aesthetics). In this paper, we propose a novel perspective: framing unrestricted adversarial example generation as a problem of aligning with adversary preferences. Unlike benign alignment, adversarial alignment involves two inherently conflicting preferences: visual consistency and attack effectiveness, which often lead to unstable optimization and reward hacking (e.g., reducing visual quality to improve attack success). To address this, we propose APA (Adversary Preferences Alignment), a two-stage framework that decouples the conflicting preferences and optimizes each with differentiable rewards. In the first stage, APA fine-tunes a LoRA to improve visual consistency using a rule-based similarity reward. In the second stage, APA updates either the image latent or the prompt embedding based on feedback from a substitute classifier, guided by trajectory-level and step-wise rewards. To enhance black-box transferability, we further incorporate a diffusion augmentation strategy. Experiments demonstrate that APA achieves significantly better attack transferability while maintaining high visual consistency, inspiring further research on adversarial attacks from an alignment perspective. Code will be available at this https URL.
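The abstract only outlines the two-stage procedure, so the following is a minimal, self-contained PyTorch sketch of the decoupling idea, not the paper's actual implementation. Every concrete choice here is an assumption introduced for illustration: the toy linear "decoder" standing in for the diffusion denoiser, the cosine-similarity stand-in for the rule-based consistency reward, the targeted cross-entropy stand-in for the trajectory/step-wise attack rewards, and the trade-off weight.

# Hypothetical sketch of APA's two-stage loop. All modules, rewards, and
# hyperparameters below are placeholder assumptions; the paper's actual
# diffusion backbone, LoRA setup, and reward design are not in the abstract.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank (LoRA) update."""
    def __init__(self, base: nn.Linear, rank: int = 4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # freeze the pretrained weights
        # Product B @ A is zero at init, so the adapter starts as a no-op.
        self.A = nn.Parameter(torch.zeros(rank, base.in_features))
        self.B = nn.Parameter(torch.randn(base.out_features, rank) * 0.01)

    def forward(self, x):
        return self.base(x) + F.linear(F.linear(x, self.A), self.B)

# Toy stand-ins: a linear "denoiser" mapping latent -> image, and a small
# substitute classifier used as the black-box attack proxy.
decoder = LoRALinear(nn.Linear(64, 64))
classifier = nn.Sequential(nn.Linear(64, 10))

def similarity_reward(img, ref):
    """Stage-1 visual-consistency reward (here: cosine similarity)."""
    return F.cosine_similarity(img, ref, dim=-1).mean()

ref_image = torch.randn(1, 64)  # flattened toy reference image
latent = torch.randn(1, 64, requires_grad=True)

# Stage 1: fine-tune only the LoRA parameters for visual consistency.
opt1 = torch.optim.Adam([decoder.A, decoder.B], lr=1e-3)
for _ in range(100):
    opt1.zero_grad()
    loss = -similarity_reward(decoder(latent.detach()), ref_image)
    loss.backward()
    opt1.step()

# Stage 2: freeze the LoRA and update the image latent with an attack
# reward from the substitute classifier (a step-wise reward analogue).
decoder.A.requires_grad_(False)
decoder.B.requires_grad_(False)
target_label = torch.tensor([3])  # hypothetical attack target class
opt2 = torch.optim.Adam([latent], lr=1e-2)
for _ in range(100):
    opt2.zero_grad()
    img = decoder(latent)
    attack_loss = F.cross_entropy(classifier(img), target_label)
    consistency = similarity_reward(img, ref_image)
    (attack_loss - 0.5 * consistency).backward()  # trade-off weight assumed
    opt2.step()

The point of the sketch is the decoupling: stage 1 touches only the low-rank adapter under a consistency reward, while stage 2 touches only the latent under an attack reward, which mirrors the abstract's claim that separating the two conflicting preferences avoids optimizing one at the expense of the other.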

@article{jiang2025_2506.01511,
  title={Enhancing Diffusion-based Unrestricted Adversarial Attacks via Adversary Preferences Alignment},
  author={Kaixun Jiang and Zhaoyu Chen and Haijing Guo and Jinglun Li and Jiyuan Fu and Pinxue Guo and Hao Tang and Bo Li and Wenqiang Zhang},
  journal={arXiv preprint arXiv:2506.01511},
  year={2025}
}