
When Does Meaning Backfire? Investigating the Role of AMRs in NLI

Main: 4 pages · Bibliography: 3 pages · Appendix: 2 pages · 2 figures · 6 tables
Abstract

Natural Language Inference (NLI) relies heavily on adequately parsing the semantic content of the premise and hypothesis. In this work, we investigate whether adding semantic information in the form of an Abstract Meaning Representation (AMR) helps pretrained language models better generalize in NLI. Our experiments integrating AMR into NLI in both fine-tuning and prompting settings show that the presence of AMR in fine-tuning hinders model generalization, while prompting with AMR leads to slight gains in GPT-4o. However, an ablation study reveals that the improvement comes from amplifying surface-level differences rather than aiding semantic reasoning. This amplification can mislead models to predict non-entailment even when the core meaning is preserved.
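
Concretely, the prompting setting pairs each sentence with its AMR graph in the model input. The following is a minimal Python sketch of what that could look like; the prompt template, the example sentence pair, and the PropBank sense labels (want-01, go-01) are illustrative assumptions, not the authors' exact setup. Note how AMR drops determiners, so the premise and hypothesis here share one graph: exactly the kind of surface-level difference the ablation probes.

# Minimal sketch of AMR-augmented NLI prompting (assumed template, not
# the authors' exact one). Requires the openai package and OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

# Illustrative NLI pair. AMR abstracts away articles, so both sentences
# map to the same graph here (sense labels are assumptions).
PREMISE = "The boy wants to go."
HYPOTHESIS = "A boy wants to go."
AMR = """\
(w / want-01
   :ARG0 (b / boy)
   :ARG1 (g / go-01
            :ARG0 b))"""

# Hypothetical prompt: each sentence followed by its AMR, then a
# binary entailment decision.
prompt = (
    "Decide whether the premise entails the hypothesis. "
    "Answer with exactly one word: entailment or non-entailment.\n\n"
    f"Premise: {PREMISE}\nPremise AMR:\n{AMR}\n\n"
    f"Hypothesis: {HYPOTHESIS}\nHypothesis AMR:\n{AMR}\n\n"
    "Answer:"
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
    temperature=0,
)
print(response.choices[0].message.content)

In practice the AMR graphs would come from an automatic parser (e.g., amrlib's sentence-to-graph models) rather than being written by hand.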

@article{min2025_2506.14613,
  title={When Does Meaning Backfire? Investigating the Role of AMRs in NLI},
  author={Junghyun Min and Xiulin Yang and Shira Wein},
  journal={arXiv preprint arXiv:2506.14613},
  year={2025}
}