KnowCoder: Coding Structured Knowledge into LLMs for Universal Information Extraction

12 March 2024
Zixuan Li
Yutao Zeng
Yuxin Zuo
Weicheng Ren
Wenxuan Liu
Miao Su
Yucan Guo
Yantao Liu
Xiang Li
Zhilei Hu
Long Bai
Wei Li
Yidan Liu
Pan Yang
Xiaolong Jin
Jiafeng Guo
Xueqi Cheng
Abstract

In this paper, we propose KnowCoder, a Large Language Model (LLM) that conducts Universal Information Extraction (UIE) via code generation. KnowCoder aims to develop a unified schema representation that LLMs can easily understand, together with an effective learning framework that encourages LLMs to follow schemas and extract structured knowledge accurately. To achieve these goals, KnowCoder introduces a code-style schema representation method that uniformly transforms different schemas into Python classes, with which complex schema information, such as constraints among tasks in UIE, can be captured in an LLM-friendly manner. We further construct a code-style schema library covering over 30,000 types of knowledge, which is, to the best of our knowledge, the largest one for UIE. To ease the learning process of LLMs, KnowCoder uses a two-phase learning framework that enhances its schema understanding ability via code pretraining and its schema following ability via instruction tuning. After code pretraining on around 1.5B automatically constructed samples, KnowCoder already attains remarkable generalization ability, achieving a relative improvement of 49.8% F1 over LLaMA2 under the few-shot setting. After instruction tuning, KnowCoder further exhibits strong generalization ability on unseen schemas, with improvements of up to 12.5% and 21.9% over state-of-the-art baselines under the zero-shot and low-resource settings, respectively. Additionally, based on our unified schema representations, various human-annotated datasets can be utilized simultaneously to refine KnowCoder, which yields significant improvements of up to 7.5% under the supervised setting.
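To make the idea of a code-style schema representation concrete, below is a minimal Python sketch of how extraction schemas might be written as classes and how an extraction could be posed as code generation. The class names, docstrings, and fields here are illustrative assumptions, not the paper's exact schema library format.

```python
# Hypothetical sketch of a code-style UIE schema (illustrative only;
# names and fields are assumptions, not KnowCoder's actual library).

class Entity:
    """Base class for all entity types."""
    def __init__(self, name: str):
        self.name = name

class Person(Entity):
    """A person mentioned in the text, e.g. "Marie Curie"."""

class Organization(Entity):
    """An organization mentioned in the text, e.g. "CERN"."""

class WorkFor:
    """Relation: a Person works for an Organization.
    Argument-type constraints are expressed through the annotated
    parameter types, which an LLM can read directly from the code."""
    def __init__(self, head: Person, tail: Organization):
        self.head = head
        self.tail = tail

# Extraction as code generation: given a sentence, the model emits
# instantiations of the schema classes.
text = "Marie Curie joined the University of Paris."
extractions = [
    WorkFor(head=Person("Marie Curie"),
            tail=Organization("University of Paris")),
]
```

Representing schemas this way lets heterogeneous tasks (entities, relations, events) share one uniform, LLM-friendly surface form, which is what allows the different human-annotated datasets to be combined during training.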
