sudoLLM : On Multi-role Alignment of Language Models

20 May 2025
Soumadeep Saha
Akshay Chaturvedi
Joy Mahapatra
Utpal Garain
Abstract

User authorization-based access privileges are a key feature in many safety-critical systems, but have thus far been absent from the large language model (LLM) realm. In this work, drawing inspiration from such access control systems, we introduce sudoLLM, a novel framework that results in multi-role aligned LLMs, i.e., LLMs that account for, and behave in accordance with, user access rights. sudoLLM injects subtle user-based biases into queries and trains an LLM to utilize this bias signal in order to produce sensitive information if and only if the user is authorized. We present empirical results demonstrating that this approach shows substantially improved alignment, generalization, and resistance to prompt-based jailbreaking attacks. The persistent tension between the language modeling objective and safety alignment, which is often exploited to jailbreak LLMs, is somewhat resolved with the aid of the injected bias signal. Our framework is meant as an additional security layer, and complements existing guardrail mechanisms for enhanced end-to-end safety with LLMs.

View on arXiv: https://arxiv.org/abs/2505.14607
@article{saha2025_2505.14607,
  title={sudoLLM : On Multi-role Alignment of Language Models},
  author={Soumadeep Saha and Akshay Chaturvedi and Joy Mahapatra and Utpal Garain},
  journal={arXiv preprint arXiv:2505.14607},
  year={2025}
}