
BadReward: Clean-Label Poisoning of Reward Models in Text-to-Image RLHF

Main: 3 pages · 10 figures · 3 tables · Appendix: 21 pages
Abstract

Reinforcement Learning from Human Feedback (RLHF) is crucial for aligning text-to-image (T2I) models with human preferences. However, RLHF's feedback mechanism also opens new pathways for adversaries. This paper demonstrates the feasibility of hijacking T2I models by poisoning a small fraction of preference data with natural-appearing examples. Specifically, we propose BadReward, a stealthy clean-label poisoning attack targeting the reward model in multi-modal RLHF. BadReward operates by inducing feature collisions between visually contradictory preference instances, thereby corrupting the reward model and indirectly compromising the T2I model's integrity. Unlike existing alignment-poisoning techniques that target a single (text) modality, BadReward is independent of the preference annotation process, which enhances its stealth and practical threat. Extensive experiments on popular T2I models show that BadReward consistently steers generation toward improper outputs, such as biased or violent imagery, for targeted concepts. Our findings underscore the amplified threat landscape for RLHF in multi-modal systems and highlight the urgent need for robust defenses. Disclaimer: this paper contains uncensored toxic content that may be offensive or disturbing to readers.
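
To make the feature-collision mechanism concrete, the minimal sketch below follows the standard clean-label poisoning recipe (in the spirit of Shafahi et al.'s feature-collision objective): a poison image is optimized to remain visually close to a benign base image while colliding with a target image in the reward model's feature space. The function names, feature_extractor interface, and hyperparameters are illustrative assumptions, not the paper's actual implementation.

# Sketch: clean-label poisoning via feature collision (PyTorch).
# `feature_extractor` is assumed to be the reward model's image encoder
# (a torch.nn.Module); images are float tensors in [0, 1].
import torch

def craft_poison(base_img, target_img, feature_extractor,
                 beta=0.1, steps=200, lr=0.01):
    """Optimize a poison that looks like `base_img` but matches
    `target_img` in feature space."""
    poison = base_img.clone().requires_grad_(True)
    target_feat = feature_extractor(target_img).detach()
    opt = torch.optim.Adam([poison], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Pull the poison toward the target in feature space ...
        collision = (feature_extractor(poison) - target_feat).pow(2).sum()
        # ... while keeping it visually close to the benign base image.
        fidelity = (poison - base_img).pow(2).sum()
        loss = collision + beta * fidelity
        loss.backward()
        opt.step()
        with torch.no_grad():
            poison.clamp_(0.0, 1.0)  # keep a valid image
    return poison.detach()

Because the poison keeps the benign image's appearance (and hence its original label), annotators see nothing suspicious, while the reward model learns a corrupted association for the targeted concept.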

@article{duan2025_2506.03234,
  title={BadReward: Clean-Label Poisoning of Reward Models in Text-to-Image RLHF},
  author={Kaiwen Duan and Hongwei Yao and Yufei Chen and Ziyun Li and Tong Qiao and Zhan Qin and Cong Wang},
  journal={arXiv preprint arXiv:2506.03234},
  year={2025}
}