ResearchTrend.AI
Tug-of-war between idiom's figurative and literal meanings in LLMs

2 June 2025
Soyoung Oh
Xinting Huang
Mathis Pink
Michael Hahn
Vera Demberg
Abstract

Idioms present a unique challenge for language models due to their non-compositional figurative meanings, which often diverge strongly from their literal interpretation. This duality requires a model to represent both meanings and decide between them in order to interpret an idiom figuratively or literally. In this paper, we employ tools from mechanistic interpretability to trace how a large pretrained causal transformer (LLama3.2-1B-base) resolves this ambiguity. We localize three steps of idiom processing: first, the idiom's figurative meaning is retrieved in early attention and MLP sublayers. We identify specific attention heads which boost the figurative meaning of the idiom while suppressing its literal interpretation. The model then propagates the figurative representation through an intermediate pathway. Meanwhile, a parallel bypass route forwards the literal interpretation, ensuring that both readings remain available. Overall, our findings provide mechanistic evidence for idiom comprehension in an autoregressive transformer.
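The head-level claim in the abstract (heads that boost the figurative reading while suppressing the literal one) is the kind of effect commonly measured with direct-logit attribution: projecting a head's residual-stream write onto the unembedding direction that separates a figurative-reading token from a literal-reading token. The sketch below is a toy illustration with random stand-in weights, not the paper's actual model or code; the function name and dimensions are illustrative assumptions.

```python
import numpy as np

def head_logit_attribution(head_out, W_U, fig_id, lit_id):
    """Direct-logit attribution of one attention head's output:
    how much this head's write to the residual stream pushes the
    figurative-reading token's logit above the literal-reading one.
    head_out: (d_model,) vector the head adds to the residual stream
    W_U:      (d_model, vocab) unembedding matrix
    """
    direction = W_U[:, fig_id] - W_U[:, lit_id]
    return float(head_out @ direction)

# Toy example with random weights standing in for a real model.
rng = np.random.default_rng(0)
d_model, vocab = 64, 100
W_U = rng.normal(size=(d_model, vocab))

fig_id, lit_id = 3, 7
# A head whose output is aligned with the figurative-minus-literal
# direction scores positive: it promotes the figurative reading.
head_out = W_U[:, fig_id] - W_U[:, lit_id]
score = head_logit_attribution(head_out, W_U, fig_id, lit_id)
print(score > 0)
```

A positive score marks a figurative-boosting head and a negative score a literal-boosting one; in practice this is computed per head and per layer over a dataset of idiom prompts to localize where in the network each reading is promoted.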

View on arXiv
@article{oh2025_2506.01723,
  title={Tug-of-war between idiom's figurative and literal meanings in LLMs},
  author={Soyoung Oh and Xinting Huang and Mathis Pink and Michael Hahn and Vera Demberg},
  journal={arXiv preprint arXiv:2506.01723},
  year={2025}
}