Preserving Knowledge Invariance: Rethinking Robustness Evaluation of Open Information Extraction

23 May 2023
Ji Qi
Chuchun Zhang
Xiaozhi Wang
Kaisheng Zeng
Jifan Yu
Jinxin Liu
Jiuding Sun
Yuxiang Chen
Lei Hou
Juanzi Li
Bin Xu
Abstract

Robustness to distribution shift ensures that NLP models can be successfully applied in the real world, especially for information extraction tasks. However, most prior evaluation benchmarks have been devoted to validating pairwise matching correctness, ignoring the crucial measurement of robustness. In this paper, we present the first benchmark that simulates the evaluation of open information extraction models in the real world, where the syntactic and expressive distributions under the same knowledge meaning may drift in various ways. We design and annotate a large-scale testbed in which each example is a knowledge-invariant clique consisting of sentences that convey structured knowledge of the same meaning but in different syntactic and expressive forms. By further elaborating the robustness metric, a model is judged to be robust only if its performance is consistently accurate across each entire clique. We perform experiments on typical models published in the last decade as well as a popular large language model; the results show that existing successful models exhibit a frustrating degradation, with a maximum drop of 23.43 in F1 score. Our resources and code are available at this https URL.
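
The clique-based evaluation described above can be illustrated with a minimal sketch: assuming per-sentence F1 scores are available for each knowledge-invariant clique, one way to operationalize "consistently accurate on the overall clique" is to credit a model with its worst-case F1 inside each clique and average over cliques. The function and variable names below (robust_f1, clique_scores) are hypothetical illustrations, not the paper's actual code or exact metric.

# Minimal sketch of a clique-level robustness score, assuming the benchmark
# groups paraphrased sentences into knowledge-invariant cliques and that a
# per-sentence F1 against gold triples has already been computed.
# Names (robust_f1, clique_scores) are illustrative, not the paper's API.
from statistics import mean

def robust_f1(clique_scores: list[list[float]]) -> float:
    """Average over cliques of the worst-case F1 within each clique.

    clique_scores[i] holds the per-sentence F1 values for the i-th
    knowledge-invariant clique (same facts, different surface forms).
    A model is only rewarded when it performs well on every paraphrase.
    """
    return mean(min(scores) for scores in clique_scores if scores)

# Example: a model that is stable across paraphrases vs. one that is not.
stable  = [[0.82, 0.80, 0.81], [0.75, 0.74]]
brittle = [[0.90, 0.40, 0.85], [0.88, 0.30]]
print(f"stable model robust-F1:  {robust_f1(stable):.3f}")   # 0.770
print(f"brittle model robust-F1: {robust_f1(brittle):.3f}")  # 0.350

Under this kind of aggregation, a model whose pairwise-matching F1 looks high can still score poorly if its accuracy collapses on some paraphrases, which is exactly the degradation the abstract reports.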

@article{qi2023_2305.13981,
  title={Preserving Knowledge Invariance: Rethinking Robustness Evaluation of Open Information Extraction},
  author={Ji Qi and Chuchun Zhang and Xiaozhi Wang and Kaisheng Zeng and Jifan Yu and Jinxin Liu and Jiuding Sun and Yuxiang Chen and Lei Hou and Juanzi Li and Bin Xu},
  journal={arXiv preprint arXiv:2305.13981},
  year={2023}
}