Relation Extraction or Pattern Matching? Unravelling the Generalisation Limits of Language Models for Biographical RE

Analysing the generalisation capabilities of relation extraction (RE) models is crucial for assessing whether they learn robust relational patterns or rely on spurious correlations. Our cross-dataset experiments find that RE models struggle with unseen data, even within similar domains. Notably, higher intra-dataset performance does not indicate better transferability; instead, it often signals overfitting to dataset-specific artefacts. Our results also show that data quality, rather than lexical similarity, is key to robust transfer, and that the optimal adaptation strategy depends on the quality of the available data: while fine-tuning yields the best cross-dataset performance with high-quality data, few-shot in-context learning (ICL) is more effective with noisier data. However, even in these cases, zero-shot baselines occasionally outperform all cross-dataset results. Structural issues in RE benchmarks, such as single-relation-per-sample constraints and non-standardised negative class definitions, further hinder model transferability.
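To make the few-shot ICL adaptation strategy concrete, below is a minimal sketch of how a few-shot prompt for biographical RE might be assembled. The prompt wording, demonstration sentences, and relation labels are illustrative assumptions, not the authors' actual prompts or data.

```python
# Minimal sketch of few-shot in-context learning (ICL) for relation
# extraction. All demonstrations and labels here are hypothetical.

def build_icl_prompt(demonstrations, query_sentence, head, tail):
    """Assemble a few-shot RE prompt: each demonstration pairs a
    sentence and an entity pair with its gold relation label; the
    query is appended last so the model completes the final label."""
    lines = ["Classify the relation between the two entities."]
    for sent, h, t, rel in demonstrations:
        lines.append(f"Sentence: {sent}")
        lines.append(f"Entities: {h} ; {t}")
        lines.append(f"Relation: {rel}")
    lines.append(f"Sentence: {query_sentence}")
    lines.append(f"Entities: {head} ; {tail}")
    lines.append("Relation:")
    return "\n".join(lines)

# Hypothetical demonstrations (sentence, head entity, tail entity, relation)
demos = [
    ("Marie Curie was born in Warsaw.",
     "Marie Curie", "Warsaw", "place_of_birth"),
    ("Alan Turing studied at King's College, Cambridge.",
     "Alan Turing", "King's College", "educated_at"),
]

prompt = build_icl_prompt(
    demos, "Ada Lovelace was born in London.", "Ada Lovelace", "London")
print(prompt)
```

The resulting string would be sent to a language model, which is expected to continue the final `Relation:` line with a label; with noisy training data, the paper finds this kind of ICL setup can transfer better than fine-tuning.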
@article{arzt2025_2505.12533,
  title={Relation Extraction or Pattern Matching? Unravelling the Generalisation Limits of Language Models for Biographical RE},
  author={Varvara Arzt and Allan Hanbury and Michael Wiegand and Gábor Recski and Terra Blevins},
  journal={arXiv preprint arXiv:2505.12533},
  year={2025}
}