48
19

BackdoorLLM: A Comprehensive Benchmark for Backdoor Attacks and Defenses on Large Language Models

Abstract

Generative large language models (LLMs) have achieved state-of-the-art results on a wide range of tasks, yet they remain susceptible to backdoor attacks: carefully crafted triggers in the input can manipulate the model to produce adversary-specified outputs. While prior research has predominantly focused on backdoor risks in vision and classification settings, the vulnerability of LLMs in open-ended text generation remains underexplored. To fill this gap, we introduce BackdoorLLM (Our BackdoorLLM benchmark was awarded First Prize in the SafetyBench competition,this https URL, organized by the Center for AI Safety,this https URL.), the first comprehensive benchmark for systematically evaluating backdoor threats in text-generation LLMs. BackdoorLLM provides: (i) a unified repository of benchmarks with a standardized training and evaluation pipeline; (ii) a diverse suite of attack modalities, including data poisoning, weight poisoning, hidden-state manipulation, and chain-of-thought hijacking; (iii) over 200 experiments spanning 8 distinct attack strategies, 7 real-world scenarios, and 6 model architectures; (iv) key insights into the factors that govern backdoor effectiveness and failure modes in LLMs; and (v) a defense toolkit encompassing 7 representative mitigation techniques. Our code and datasets are available atthis https URL. We will continuously incorporate emerging attack and defense methodologies to support the research in advancing the safety and reliability of LLMs.

View on arXiv
@article{li2025_2408.12798,
  title={ BackdoorLLM: A Comprehensive Benchmark for Backdoor Attacks and Defenses on Large Language Models },
  author={ Yige Li and Hanxun Huang and Yunhan Zhao and Xingjun Ma and Jun Sun },
  journal={arXiv preprint arXiv:2408.12798},
  year={ 2025 }
}
Comments on this paper