Exploring the generalization of LLM truth directions on conversational formats

14 May 2025
Timour Ichmoukhamedov
David Martens
ArXiv | PDF | HTML
Abstract

Several recent works argue that LLMs have a universal truth direction along which true and false statements are linearly separable in the activation space of the model. It has been demonstrated that linear probes trained on a single hidden state of the model already generalize across a range of topics and might even be used for lie detection in LLM conversations. In this work, we explore how this truth direction generalizes between various conversational formats. We find good generalization between short conversations that end with a lie, but poor generalization to longer formats where the lie appears earlier in the input prompt. We propose a solution that significantly improves this type of generalization by adding a fixed key phrase at the end of each conversation. Our results highlight the challenges in building reliable LLM lie detectors that generalize to new settings.
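
As a rough illustration of the probing setup the abstract describes, the sketch below trains a linear probe (logistic regression) on the hidden state of a statement's final token, with a fixed key phrase appended to each input. The model name, layer index, key phrase, and toy data are illustrative assumptions, not details taken from the paper.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.linear_model import LogisticRegression

MODEL = "gpt2"        # small stand-in; the paper studies larger LLMs
LAYER = 6             # assumed middle layer to probe
KEY_PHRASE = " Is the above statement true?"  # hypothetical fixed suffix

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL, output_hidden_states=True)
model.eval()

def last_token_state(text: str) -> torch.Tensor:
    # Hidden state of the final token at the chosen layer,
    # after appending the fixed key phrase to the input.
    ids = tok(text + KEY_PHRASE, return_tensors="pt")
    with torch.no_grad():
        out = model(**ids)
    return out.hidden_states[LAYER][0, -1]

# Toy labeled statements (1 = true, 0 = false); replace with a real dataset.
statements = ["Paris is the capital of France.", "The Sun orbits the Earth."]
labels = [1, 0]

X = torch.stack([last_token_state(s) for s in statements]).numpy()
probe = LogisticRegression(max_iter=1000).fit(X, labels)

The probe's weight vector then plays the role of a candidate truth direction; the generalization question the paper raises amounts to evaluating such a probe on conversations in a different format from the one it was trained on.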

View on arXiv
@article{ichmoukhamedov2025_2505.09807,
  title={Exploring the generalization of LLM truth directions on conversational formats},
  author={Timour Ichmoukhamedov and David Martens},
  journal={arXiv preprint arXiv:2505.09807},
  year={2025}
}