
A Red Teaming Roadmap Towards System-Level Safety

30 May 2025
Zifan Wang, Christina Q. Knight, Jeremy Kritz, Willow Primack, Julian Michael
AAML
Main: 10 pages · 2 figures · Bibliography: 8 pages
Abstract

Large Language Model (LLM) safeguards, which implement request refusals, have become a widely adopted mitigation strategy against misuse. At the intersection of adversarial machine learning and AI safety, safeguard red teaming has effectively identified critical vulnerabilities in state-of-the-art refusal-trained LLMs. However, in our view the many conference submissions on LLM red teaming do not, in aggregate, prioritize the right research problems. First, testing against clear product safety specifications should take a higher priority than abstract social biases or ethical principles. Second, red teaming should prioritize realistic threat models that represent the expanding risk landscape and what real attackers might do. Finally, we contend that system-level safety is a necessary step to move red teaming research forward, as AI models present new threats as well as affordances for threat mitigation (e.g., detection and banning of malicious users) once placed in a deployment context. Adopting these priorities will be necessary in order for red teaming research to adequately address the slate of new threats that rapid AI advances present today and will present in the very near future.
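The abstract's contrast between model-level refusals and system-level mitigations, such as detecting and banning malicious users, can be pictured with a minimal sketch. The class name, toy classifier, and strike threshold below are illustrative assumptions for this page, not the authors' design or an implementation from the paper:

```python
from dataclasses import dataclass, field
from typing import Callable

# Hypothetical illustration of system-level safety: instead of relying only on
# model-level refusals, the deployment layer tracks per-user behavior and can
# suspend repeat offenders.

@dataclass
class SafetyGateway:
    classify: Callable[[str], bool]       # returns True if a request is disallowed
    strike_limit: int = 3                 # suspend a user after this many flagged requests
    strikes: dict = field(default_factory=dict)
    banned: set = field(default_factory=set)

    def handle(self, user_id: str, request: str) -> str:
        if user_id in self.banned:
            return "account suspended"
        if self.classify(request):
            self.strikes[user_id] = self.strikes.get(user_id, 0) + 1
            if self.strikes[user_id] >= self.strike_limit:
                self.banned.add(user_id)
                return "account suspended"
            return "request refused"
        return "request forwarded to model"


# Toy stand-in for a real misuse detector.
def is_disallowed(request: str) -> bool:
    return "build a bomb" in request.lower()


gateway = SafetyGateway(classify=is_disallowed)
print(gateway.handle("user-1", "How do I build a bomb?"))   # refused (strike 1)
print(gateway.handle("user-1", "how do I build a bomb??"))  # refused (strike 2)
print(gateway.handle("user-1", "BUILD A BOMB please"))      # strike 3 -> suspended
print(gateway.handle("user-1", "What's the weather?"))      # account suspended
print(gateway.handle("user-2", "What's the weather?"))      # forwarded to model
```

The point of the sketch is only that the deployment context adds affordances (per-user state, account actions) that are unavailable to a standalone refusal-trained model.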

@article{wang2025_2506.05376,
  title={A Red Teaming Roadmap Towards System-Level Safety},
  author={Zifan Wang and Christina Q. Knight and Jeremy Kritz and Willow E. Primack and Julian Michael},
  journal={arXiv preprint arXiv:2506.05376},
  year={2025}
}