Cross-Document Cross-Lingual NLI via RST-Enhanced Graph Fusion and Interpretability Prediction

Natural Language Inference (NLI) is a fundamental task in natural language processing. While NLI has branched into many sub-directions such as sentence-level NLI, document-level NLI, and cross-lingual NLI, Cross-Document Cross-Lingual NLI (CDCL-NLI) remains largely unexplored. In this paper, we propose a novel paradigm, CDCL-NLI, which extends traditional NLI to multi-document, multilingual scenarios. To support this task, we construct a high-quality CDCL-NLI dataset comprising 25,410 instances spanning 26 languages. To address the limitations of previous methods on the CDCL-NLI task, we further propose an innovative method that integrates RST-enhanced graph fusion with interpretability-aware prediction. Our approach leverages Rhetorical Structure Theory (RST) within heterogeneous graph neural networks for cross-document context modeling, and employs structure-aware semantic alignment based on lexical chains for cross-lingual understanding. For NLI interpretability, we develop an Elementary Discourse Unit (EDU)-level attribution framework that produces extractive explanations. Extensive experiments demonstrate our approach's superior performance, with significant improvements over both conventional NLI models and large language models. Our work sheds light on NLI and aims to spur research interest in cross-document cross-lingual context understanding, hallucination elimination, and interpretable inference. Our code and datasets are available at \href{this https URL}{CDCL-NLI-link} for peer review.
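
The abstract does not give implementation details, but the graph-fusion idea can be made concrete with a minimal sketch: EDUs from the premise documents and the hypothesis become nodes, while RST relations, lexical-chain alignments, and cross-document links become typed edges fused by a relation-aware message-passing layer. Everything below (the class name EDUGraphFusion, the relation names, and the toy tensor shapes) is an illustrative assumption in PyTorch, not the authors' released code.

    # Hypothetical sketch of relation-aware message passing over EDU nodes
    # (assumed structure; not the paper's implementation).
    import torch
    import torch.nn as nn

    class EDUGraphFusion(nn.Module):
        """One layer of typed-edge message passing over EDU node embeddings."""
        def __init__(self, dim, relations=("rst", "lexical_chain", "cross_doc")):
            super().__init__()
            # One linear transform per edge type (relation-aware fusion).
            self.rel_proj = nn.ModuleDict({r: nn.Linear(dim, dim) for r in relations})
            self.update = nn.GRUCell(dim, dim)  # gated update of node states

        def forward(self, edu_states, edges):
            # edu_states: (num_edus, dim) embeddings of all EDUs across documents,
            #             e.g. produced by a multilingual encoder.
            # edges: dict mapping relation name -> LongTensor of shape (2, num_edges)
            #        holding (source EDU index, target EDU index) pairs.
            messages = torch.zeros_like(edu_states)
            for rel, edge_index in edges.items():
                src, dst = edge_index
                msg = self.rel_proj[rel](edu_states[src])  # transform by relation type
                messages.index_add_(0, dst, msg)           # aggregate at target nodes
            return self.update(messages, edu_states)       # fused EDU representations

    if __name__ == "__main__":
        dim, num_edus = 64, 6                      # toy sizes for illustration
        edu_states = torch.randn(num_edus, dim)
        edges = {
            "rst": torch.tensor([[0, 1, 3], [1, 2, 4]]),      # intra-document RST links
            "lexical_chain": torch.tensor([[0, 2], [3, 5]]),  # cross-lingual alignments
            "cross_doc": torch.tensor([[2], [3]]),            # cross-document link
        }
        fused = EDUGraphFusion(dim)(edu_states, edges)
        print(fused.shape)  # torch.Size([6, 64])

In such a setup, EDU-level attribution for extractive explanations could then score the fused EDU representations against the predicted label, but the scoring scheme itself is described only at a high level in the abstract.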
@article{yuan2025_2504.12324,
  title={Cross-Document Cross-Lingual NLI via RST-Enhanced Graph Fusion and Interpretability Prediction},
  author={Mengying Yuan and Wenhao Wang and Zixuan Wang and Yujie Huang and Kangli Wei and Fei Li and Chong Teng and Donghong Ji},
  journal={arXiv preprint arXiv:2504.12324},
  year={2025}
}