Learning Safety Constraints for Large Language Models

30 May 2025
Xin Chen, Yarden As, Andreas Krause
Main: 8 pages · Appendix: 10 pages · Bibliography: 4 pages · 8 figures · 19 tables
Abstract

Large language models (LLMs) have emerged as powerful tools but pose significant safety risks through harmful outputs and vulnerability to adversarial attacks. We propose SaP, short for Safety Polytope, a geometric approach to LLM safety that learns and enforces multiple safety constraints directly in the model's representation space. We develop a framework that identifies safe and unsafe regions via the polytope's facets, enabling both detection and correction of unsafe outputs through geometric steering. Unlike existing approaches that modify model weights, SaP operates post-hoc in the representation space, preserving model capabilities while enforcing safety constraints. Experiments across multiple LLMs demonstrate that our method can effectively detect unethical inputs and reduce adversarial attack success rates while maintaining performance on standard tasks, highlighting the importance of having an explicit geometric model for safety. Analysis of the learned polytope facets reveals the emergence of specialization in detecting different semantic notions of safety, providing interpretable insights into how safety is captured in LLMs' representation space.
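The polytope idea in the abstract can be illustrated with a minimal sketch: a set of facets (w_i, b_i) carves out a safe region {h : W h <= b} in the representation space, an off-polytope hidden state is flagged as unsafe, and it can be nudged back by stepping against the normals of the violated facets. The class name, the random facet parameters, and the steering rule below are illustrative assumptions for exposition, not the authors' implementation.

import torch

class SafetyPolytopeSketch:
    """Toy polytope over hidden states: the safe set is {h : W @ h <= b}.
    The facet normals W and offsets b stand in for learned parameters."""

    def __init__(self, num_facets: int, hidden_dim: int):
        self.W = torch.randn(num_facets, hidden_dim)  # facet normals (learned in SaP)
        self.b = torch.zeros(num_facets)              # facet offsets

    def violations(self, h: torch.Tensor) -> torch.Tensor:
        # Positive entries mark facets whose constraint w_i . h <= b_i is broken.
        return self.W @ h - self.b

    def is_safe(self, h: torch.Tensor, tol: float = 0.0) -> bool:
        return bool((self.violations(h) <= tol).all())

    def steer(self, h: torch.Tensor, step: float = 1.0) -> torch.Tensor:
        # Projection-like correction: step against the normals of the violated
        # facets, each scaled as in an orthogonal projection onto its hyperplane.
        v = self.violations(h).clamp(min=0.0)
        scale = v / (self.W ** 2).sum(dim=1)
        correction = (scale.unsqueeze(1) * self.W).sum(dim=0)
        return h - step * correction

# Usage: check a hidden state and correct it post-hoc if it leaves the polytope.
polytope = SafetyPolytopeSketch(num_facets=8, hidden_dim=16)
h = torch.randn(16)
if not polytope.is_safe(h):
    h = polytope.steer(h)

Because the correction acts only on representations at inference time, the model weights are untouched, which is the post-hoc property the abstract emphasizes.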

View on arXiv: https://arxiv.org/abs/2505.24445
@article{chen2025_2505.24445,
  title={Learning Safety Constraints for Large Language Models},
  author={Xin Chen and Yarden As and Andreas Krause},
  journal={arXiv preprint arXiv:2505.24445},
  year={2025}
}