ResearchTrend.AI


LAMP: Extracting Locally Linear Decision Surfaces from LLM World Models

17 May 2025
Ryan Chen
Youngmin Ko
Zeyu Zhang
Catherine Cho
Sunny Chung
Mauro Giuffré
Dennis L. Shung
Bradly C. Stadie
Abstract

We introduce LAMP (Linear Attribution Mapping Probe), a method that shines light onto a black-box language model's decision surface and studies how reliably the model maps its stated reasons to its predictions. LAMP treats the model's own self-reported explanations as a coordinate system and fits a locally linear surrogate that links those explanation factors to the model's output. In doing so, it reveals which stated factors steer the model's decisions, and by how much. We apply LAMP to three tasks: sentiment analysis, controversial-topic detection, and safety-prompt auditing. Across these tasks, LAMP reveals that many LLMs exhibit locally linear decision landscapes. These surfaces also correlate with human judgments of explanation quality and, on a clinical case-file dataset, align with expert assessments. Since LAMP operates without access to model gradients, logits, or internal activations, it serves as a practical, lightweight framework for auditing proprietary language models and for assessing whether a model behaves consistently with the explanations it provides.
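The abstract's core idea, fitting a locally linear surrogate that maps stated explanation factors to a black-box score, can be sketched as follows. This is a minimal illustration, not the authors' implementation: `black_box_score` is a hypothetical stand-in for querying the LLM, the factor names and the perturb-and-refit procedure are assumptions, and the paper's actual elicitation of self-reported explanations is not reproduced here.

```python
import numpy as np

# Hypothetical stand-in for a black-box LLM scorer: it maps a vector of
# self-reported factor ratings (e.g., tone, polarity) to a prediction score.
# In LAMP, these ratings would come from the model's own explanations.
def black_box_score(factors: np.ndarray) -> float:
    # Mildly nonlinear ground truth, so the linear fit is only *locally* valid.
    return 0.7 * factors[0] - 0.3 * factors[1] + 0.1 * factors[0] * factors[1]

def fit_local_linear_surrogate(anchor, score_fn, n_samples=200, radius=0.1, seed=0):
    """Fit weights w and intercept b so that score_fn(x) ~ w @ x + b near anchor."""
    rng = np.random.default_rng(seed)
    # Sample perturbed factor vectors in a small box around the anchor point.
    X = anchor + rng.uniform(-radius, radius, size=(n_samples, anchor.size))
    y = np.array([score_fn(x) for x in X])
    # Append a bias column and solve the least-squares problem.
    A = np.hstack([X, np.ones((n_samples, 1))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[:-1], coef[-1]  # per-factor attribution weights, intercept

anchor = np.array([0.5, 0.2])  # the model's stated factor ratings for one input
w, b = fit_local_linear_surrogate(anchor, black_box_score)
# w now approximates the local gradient of the scorer at the anchor,
# i.e., how much each stated factor steers the prediction.
```

The recovered weights approximate the scorer's local slope at the anchor (here roughly 0.72 and -0.25), which is what makes the surrogate readable as an attribution: each coefficient says how much the output moves per unit change in that stated factor.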

View on arXiv
@article{chen2025_2505.11772,
  title={LAMP: Extracting Locally Linear Decision Surfaces from LLM World Models},
  author={Ryan Chen and Youngmin Ko and Zeyu Zhang and Catherine Cho and Sunny Chung and Mauro Giuffré and Dennis L. Shung and Bradly C. Stadie},
  journal={arXiv preprint arXiv:2505.11772},
  year={2025}
}