
An Example Safety Case for Safeguards Against Misuse

18 pages (main) + 6 pages (appendix) + 2 pages (bibliography), 14 figures, 3 tables
Abstract

Existing evaluations of AI misuse safeguards provide a patchwork of evidence that is often difficult to connect to real-world decisions. To bridge this gap, we describe an end-to-end argument (a "safety case") that misuse safeguards reduce the risk posed by an AI assistant to low levels. We first describe how a hypothetical developer red teams safeguards, estimating the effort required to evade them. The developer then plugs this estimate into a quantitative "uplift model" to determine how much the barriers introduced by safeguards dissuade misuse (this https URL). This procedure provides a continuous signal of risk during deployment that helps the developer respond rapidly to emerging threats. Finally, we describe how to tie these components together into a simple safety case. Our work provides one concrete path -- though not the only path -- to rigorously justifying that AI misuse risks are low.
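To make the "uplift model" step more concrete, the following is a minimal Python sketch of how an effort estimate produced by red teaming might be converted into a residual-risk number. The exponential dissuasion curve, the parameter names (median_tolerated_effort_hours, baseline_risk), and the example values are illustrative assumptions for this sketch, not the model described in the paper.

import math


def dissuasion_probability(evasion_effort_hours: float,
                           median_tolerated_effort_hours: float = 40.0) -> float:
    """Probability that a prospective misuser gives up, assuming (hypothetically)
    that the effort actors are willing to spend follows an exponential
    distribution with the given median."""
    rate = math.log(2) / median_tolerated_effort_hours
    return 1.0 - math.exp(-rate * evasion_effort_hours)


def residual_risk(baseline_risk: float, evasion_effort_hours: float) -> float:
    """Scale a baseline misuse-risk estimate by the fraction of actors
    who are *not* dissuaded by the safeguards."""
    return baseline_risk * (1.0 - dissuasion_probability(evasion_effort_hours))


if __name__ == "__main__":
    # Example: red teaming estimates roughly 80 hours of effort to evade safeguards.
    print(f"Residual risk: {residual_risk(baseline_risk=0.05, evasion_effort_hours=80):.4f}")

Under these assumed numbers, an 80-hour evasion estimate dissuades about 75% of prospective misusers, cutting the assumed 5% baseline risk to roughly 1.25%; the point of the sketch is only that a red-team effort estimate can feed a quantitative model whose output updates continuously as that estimate changes during deployment.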

@article{clymer2025_2505.18003,
  title={An Example Safety Case for Safeguards Against Misuse},
  author={Joshua Clymer and Jonah Weinbaum and Robert Kirk and Kimberly Mai and Selena Zhang and Xander Davies},
  journal={arXiv preprint arXiv:2505.18003},
  year={2025}
}