ResearchTrend.AI

Towards Robust Universal Information Extraction: Benchmark, Evaluation, and Solution

5 March 2025
Jizhao Zhu
Akang Shi
Zixuan Li
Long Bai
Xiaolong Jin
Jiafeng Guo
Xueqi Cheng
Abstract

In this paper, we aim to enhance the robustness of Universal Information Extraction (UIE) by introducing a new benchmark dataset, a comprehensive evaluation, and a feasible solution. Existing robust benchmark datasets have two key limitations: 1) They generate only a limited range of perturbations for a single Information Extraction (IE) task, which fails to evaluate the robustness of UIE models effectively; 2) They rely on small models or handcrafted rules to generate perturbations, often resulting in unnatural adversarial examples. Considering the powerful generation capabilities of Large Language Models (LLMs), we introduce a new benchmark dataset for Robust UIE, called RUIE-Bench, which utilizes LLMs to generate more diverse and realistic perturbations across different IE tasks. Based on this dataset, we comprehensively evaluate existing UIE models and reveal that both LLM-based models and other models suffer from significant performance drops. To improve robustness and reduce training costs, we propose a data-augmentation solution that dynamically selects hard samples for iterative training based on the model's inference loss. Experimental results show that training with only 15% of the data leads to an average 7.5% relative performance improvement across three IE tasks.
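The data-augmentation solution described above ranks training samples by the model's inference loss and keeps only the hardest ones for the next training round. As a minimal sketch of that selection step (the function name, signature, and default 15% fraction here are illustrative assumptions, not the authors' implementation):

```python
def select_hard_samples(losses, fraction=0.15):
    """Return the indices of the hardest samples for the next training round.

    losses: per-sample inference losses computed with the current model.
    fraction: share of the dataset to keep (the paper reports using 15%).
    """
    # Rank sample indices from highest loss (hardest) to lowest.
    ranked = sorted(range(len(losses)), key=lambda i: losses[i], reverse=True)
    # Keep at least one sample even for very small datasets.
    k = max(1, int(len(losses) * fraction))
    return ranked[:k]
```

In an iterative setup, one would recompute the losses after each training round and reselect, so the "hard" subset tracks the model's current weaknesses rather than a fixed ordering.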

@article{zhu2025_2503.03201,
  title={Towards Robust Universal Information Extraction: Benchmark, Evaluation, and Solution},
  author={Jizhao Zhu and Akang Shi and Zixuan Li and Long Bai and Xiaolong Jin and Jiafeng Guo and Xueqi Cheng},
  journal={arXiv preprint arXiv:2503.03201},
  year={2025}
}