
Generating Diverse Training Samples for Relation Extraction with Large Language Models

Abstract

Using Large Language Models (LLMs) to generate training data is a potentially preferable way to improve zero- or few-shot NLP tasks. However, many problems in this direction remain to be investigated. For the task of Relation Extraction (RE), we find that samples generated by directly prompting LLMs can easily have high structural similarity with each other: they tend to use a limited variety of phrasings when expressing the relation between a pair of entities. Therefore, in this paper, we study how to effectively improve the diversity of the training samples generated with LLMs for RE, while also maintaining their correctness. We first try to make the LLMs produce dissimilar samples by directly giving instructions in In-Context Learning (ICL) prompts. Then, we propose an approach to fine-tune LLMs for diverse training sample generation through Direct Preference Optimization (DPO). Our experiments on commonly used RE datasets show that both attempts can improve the quality of the generated training data. We also find that, compared with directly performing RE with an LLM, training a non-LLM RE model with its generated samples may lead to better performance.
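To make the two ideas in the abstract concrete, the sketch below illustrates (i) an ICL-style prompt that explicitly asks for dissimilar phrasings and (ii) one way preference pairs for DPO fine-tuning could be built by ranking generated samples by structural similarity. This is a minimal illustrative sketch, not the paper's implementation: `llm_generate` is a hypothetical stand-in for an LLM sampling call, and `difflib`'s sequence ratio is only an assumed proxy for the structural similarity the paper refers to.

```python
# Minimal sketch (not the paper's method): build DPO preference pairs that
# reward structurally diverse RE training samples.
import difflib
from typing import List, Dict


def llm_generate(prompt: str, n: int) -> List[str]:
    """Hypothetical stand-in for an LLM call that samples n candidate sentences."""
    raise NotImplementedError


def structural_similarity(a: str, b: str) -> float:
    # Assumed proxy for structural similarity: surface overlap between two sentences.
    return difflib.SequenceMatcher(None, a, b).ratio()


def diversity_score(sample: str, others: List[str]) -> float:
    # A sample is more "diverse" if it is less similar, on average, to the other candidates.
    sims = [structural_similarity(sample, o) for o in others if o is not sample]
    return 1.0 - sum(sims) / max(len(sims), 1)


def build_dpo_pairs(relation: str, head: str, tail: str, n: int = 8) -> List[Dict[str, str]]:
    # ICL-style instruction that explicitly asks the LLM to vary its phrasing.
    prompt = (
        f"Write one sentence expressing the relation '{relation}' "
        f"between '{head}' and '{tail}'. "
        "Use wording and sentence structure different from typical examples."
    )
    candidates = llm_generate(prompt, n)
    ranked = sorted(candidates, key=lambda s: diversity_score(s, candidates), reverse=True)
    # Pair the most diverse candidates (chosen) against the least diverse (rejected).
    return [
        {"prompt": prompt, "chosen": chosen, "rejected": rejected}
        for chosen, rejected in zip(ranked[: n // 2], ranked[n // 2:])
    ]
```

Such preference pairs could then be fed to a standard DPO training loop; the ranking criterion and pairing scheme here are assumptions made purely for illustration.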

@article{li2025_2505.23108,
  title={Generating Diverse Training Samples for Relation Extraction with Large Language Models},
  author={Zexuan Li and Hongliang Dai and Piji Li},
  journal={arXiv preprint arXiv:2505.23108},
  year={2025}
}