Review-Instruct: A Review-Driven Multi-Turn Conversations Generation Method for Large Language Models

Abstract

The effectiveness of large language models (LLMs) in conversational AI is hindered by their reliance on single-turn supervised fine-tuning (SFT) data, which limits contextual coherence in multi-turn dialogues. Existing methods for generating multi-turn dialogue data struggle to ensure both diversity and quality in instructions. To address this, we propose Review-Instruct, a novel framework that synthesizes multi-turn conversations through an iterative "Ask-Respond-Review" process involving three agent roles: a Candidate, multiple Reviewers, and a Chairman. The framework iteratively refines instructions by incorporating Reviewer feedback, enhancing dialogue diversity and difficulty. We construct a multi-turn dataset using the Alpaca dataset and fine-tune the LLaMA2-13B model. Evaluations on MT-Bench, MMLU-Pro, and Auto-Arena demonstrate significant improvements, achieving absolute gains of 2.9% on MMLU-Pro and 2% on MT-Bench compared to prior state-of-the-art models based on LLaMA2-13B. Ablation studies confirm the critical role of the Review stage and the use of multiple Reviewers in boosting instruction diversity and difficulty. Our work highlights the potential of review-driven, multi-agent frameworks for generating high-quality conversational data at scale.
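
The abstract describes the Ask-Respond-Review loop only at a high level, so the following is a minimal sketch of how one such iteration might be wired together. All names here (candidate_respond, reviewer_critique, chairman_next_instruction, review_instruct_dialogue) and the prompt wording are illustrative assumptions, not the paper's actual implementation; any text-in/text-out model endpoint can stand in for the agents.

# Minimal sketch of the iterative "Ask-Respond-Review" loop from the abstract.
# All function names, prompts, and the loop structure are assumptions for
# illustration; the paper's actual prompts and agent wiring are not given here.

from typing import Callable, List

LLM = Callable[[str], str]  # any text-in/text-out model endpoint

def candidate_respond(model: LLM, history: List[dict], instruction: str) -> str:
    """Candidate agent answers the current instruction given the dialogue history."""
    context = "\n".join(f"{t['role']}: {t['text']}" for t in history)
    return model(f"{context}\nuser: {instruction}\nassistant:")

def reviewer_critique(model: LLM, instruction: str, answer: str) -> str:
    """One Reviewer agent critiques the answer and suggests a harder follow-up."""
    return model(
        "Review the answer below. Point out weaknesses and propose a more "
        f"difficult, more diverse follow-up question.\nQ: {instruction}\nA: {answer}"
    )

def chairman_next_instruction(model: LLM, critiques: List[str]) -> str:
    """Chairman agent aggregates Reviewer feedback into the next instruction."""
    joined = "\n---\n".join(critiques)
    return model(f"Synthesize these reviews into one follow-up instruction:\n{joined}")

def review_instruct_dialogue(model: LLM, seed_instruction: str,
                             n_turns: int = 4, n_reviewers: int = 3) -> List[dict]:
    """Generate one multi-turn conversation from a single seed instruction."""
    history: List[dict] = []
    instruction = seed_instruction
    for _ in range(n_turns):
        # Ask/Respond: the Candidate answers the current instruction.
        answer = candidate_respond(model, history, instruction)
        history += [{"role": "user", "text": instruction},
                    {"role": "assistant", "text": answer}]
        # Review: multiple Reviewers critique independently.
        critiques = [reviewer_critique(model, instruction, answer)
                     for _ in range(n_reviewers)]
        # The Chairman turns the critiques into the next, harder instruction.
        instruction = chairman_next_instruction(model, critiques)
    return history

Under this reading, a seed instruction from Alpaca would be expanded into an n_turns-long conversation, and the resulting dialogues pooled into the SFT dataset used to fine-tune LLaMA2-13B.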

@article{wu2025_2505.11010,
  title={Review-Instruct: A Review-Driven Multi-Turn Conversations Generation Method for Large Language Models},
  author={Jiangxu Wu and Cong Wang and TianHuang Su and Jun Yang and Haozhi Lin and Chao Zhang and Ming Peng and Kai Shi and SongPan Yang and BinQing Pan and ZiXian Li and Ni Yang and ZhenYu Yang},
  journal={arXiv preprint arXiv:2505.11010},
  year={2025}
}