
Limited Generalizability in Argument Mining: State-Of-The-Art Models Learn Datasets, Not Arguments

Abstract

Identifying arguments is a necessary prerequisite for various tasks in automated discourse analysis, particularly within contexts such as political debates, online discussions, and scientific reasoning. In addition to theoretical advances in understanding what constitutes an argument, a significant body of research has emerged around practical argument mining, supported by a growing number of publicly available datasets. On these benchmarks, BERT-like transformers have consistently performed best, reinforcing the belief that such models are broadly applicable across diverse contexts of debate. This study offers the first large-scale re-evaluation of such state-of-the-art models, with a specific focus on their ability to generalize in identifying arguments. We evaluate four transformers, three standard and one enhanced with contrastive pre-training for better generalization, on 17 English sentence-level datasets, the granularity most relevant to the task. Our findings show that, to varying degrees, these models tend to rely on lexical shortcuts tied to content words, suggesting that apparent progress may often be driven by dataset-specific cues rather than genuine task understanding. While the models achieve strong results on familiar benchmarks, their performance drops markedly when applied to unseen datasets. Nonetheless, combining task-specific pre-training with joint training across benchmarks proves effective in improving both robustness and generalization.
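A minimal sketch, not the authors' code, of the leave-one-dataset-out protocol implied by the abstract: train on all benchmarks but one, then test on the held-out one to measure cross-dataset generalization. A simple TF-IDF classifier stands in here for the transformers (and, incidentally, illustrates the kind of content-word lexical cues the paper identifies as shortcuts); the corpus names and sentences are toy placeholders, not the 17 real datasets.

```python
# Hypothetical leave-one-dataset-out evaluation sketch.
# Each entry maps a placeholder corpus name to (sentences, labels),
# where label 1 = argument, 0 = non-argument.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline

datasets = {
    "debates": (["We must cut taxes because growth follows.",
                 "The session opened at nine."], [1, 0]),
    "forums":  (["Vaccines are safe since trials show no harm.",
                 "Thanks for sharing the link."], [1, 0]),
    "reviews": (["The plot fails because it ignores its own rules.",
                 "I watched it on Friday."], [1, 0]),
}

for held_out in datasets:
    # Pool every corpus except the held-out one for training.
    train_x, train_y = [], []
    for name, (xs, ys) in datasets.items():
        if name != held_out:
            train_x += xs
            train_y += ys
    test_x, test_y = datasets[held_out]

    # Lexical baseline; a transformer fine-tuned per split would
    # slot in here in the actual study.
    clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    clf.fit(train_x, train_y)
    pred = clf.predict(test_x)
    print(f"held-out={held_out}: "
          f"macro-F1={f1_score(test_y, pred, average='macro'):.2f}")
```

Comparing these held-out scores against in-dataset scores is what exposes the generalization gap the paper reports.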

@article{feger2025_2505.22137,
  title={Limited Generalizability in Argument Mining: State-Of-The-Art Models Learn Datasets, Not Arguments},
  author={Marc Feger and Katarina Boland and Stefan Dietze},
  journal={arXiv preprint arXiv:2505.22137},
  year={2025}
}