MPL: Multiple Programming Languages with Large Language Models for Information Extraction

Recent research in information extraction (IE) has focused on using code-style inputs to improve structured output generation. The intuition is that programming languages (PLs) are inherently more structurally organized than natural languages (NLs), which makes them particularly well suited to IE tasks. However, existing work relies almost exclusively on Python for code-style simulation, overlooking other widely used PLs (e.g., C++ and Java) during the supervised fine-tuning (SFT) phase. In this work, we propose Multiple Programming Languages with large language models for information extraction (abbreviated as MPL), a novel framework that explores the potential of incorporating different PLs in the SFT phase. We also introduce function-prompt with virtual running to simulate code-style inputs more effectively and efficiently. Experimental results on a wide range of datasets demonstrate the effectiveness of MPL, and extensive further experiments provide a comprehensive analysis. We have released our code for future research.
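The abstract does not spell out the function-prompt format, so the following is only a minimal sketch of the general idea: an IE instance (here, named entity recognition) is framed as a code-style function that the model is asked to "virtually run", i.e. to write out the return value rather than execute anything. The function name `extract_entities`, the helper `build_function_prompt`, the NER framing, and the prompt wording are illustrative assumptions, not the paper's exact format.

```python
def build_function_prompt(sentence: str, entity_types: list[str]) -> str:
    """Wrap a named-entity-recognition instance as a code-style prompt.

    The LLM is asked to 'virtually run' the function, i.e. to produce the
    return value itself; no code is ever actually executed.
    """
    types = ", ".join(f'"{t}"' for t in entity_types)
    header = "def extract_entities(text: str) -> dict[str, list[str]]:\n"
    docstring = (
        f'    """Return a mapping from each entity type in [{types}] to the\n'
        '    list of entity mentions found in `text`."""\n'
    )
    query = f'\n# Virtual run: what does extract_entities("{sentence}") return?\n'
    return header + docstring + query


prompt = build_function_prompt(
    "Steve Jobs founded Apple in Cupertino.",
    ["person", "organization", "location"],
)
print(prompt)
```

Under MPL's premise, the same instance could equally be rendered in C++ or Java function syntax during SFT, which is the multi-PL variation the framework explores.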
@article{li2025_2505.16107,
  title   = {MPL: Multiple Programming Languages with Large Language Models for Information Extraction},
  author  = {Bo Li and Gexiang Fang and Wei Ye and Zhenghua Xu and Jinglei Zhang and Hao Cheng and Shikun Zhang},
  journal = {arXiv preprint arXiv:2505.16107},
  year    = {2025}
}