MASTER: Enhancing Large Language Model via Multi-Agent Simulated Teaching

3 June 2025
Liang Yue
Yihong Tang
Kehai Chen
Jie Liu
Min Zhang
Abstract

Instruction fine-tuning is crucial in NLP tasks, enhancing pretrained models' instruction-following capabilities and task-specific performance. However, obtaining high-quality fine-tuning data for large models is challenging due to data collection difficulties and high production costs. To address this, we propose MASTER, a novel data augmentation method that enriches original data through interactions among multiple agents with varying cognitive levels. We simulate three pedagogically grounded teaching scenarios, leveraging multi-agent conversations to generate high-quality teacher-student interaction data. Using MASTER, we construct BOOST-QA, a fine-tuning dataset augmented from existing datasets such as Orca-Math-200k, ProcQA, and OpenHermes2.5. Experiments show that models fine-tuned with BOOST-QA achieve excellent performance across multiple benchmarks and demonstrate strong multitask generalization. Notably, MASTER significantly improves models' reasoning abilities in complex tasks, providing valuable insights for future research.
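To make the augmentation idea concrete, here is a minimal sketch of multi-agent simulated teaching as the abstract describes it: a teacher agent and a student agent with different cognitive levels converse about a seed QA pair, and the transcript becomes new fine-tuning data. The agent personas, the single teaching scenario shown, and the `llm_generate` stub are all hypothetical stand-ins, not the paper's actual prompts or pipeline.

```python
# Hedged sketch of MASTER-style multi-agent simulated teaching.
# Assumptions: llm_generate() is a placeholder for a real chat-completion
# backend; the prompts and the two-agent setup are illustrative only.

from dataclasses import dataclass


def llm_generate(system_prompt: str, dialogue: list[str]) -> str:
    """Hypothetical LLM call; swap in a real model API here."""
    role = system_prompt.split(":")[0]
    return f"[{role} responds to: {dialogue[-1][:40]}...]"


@dataclass
class Agent:
    role: str           # e.g. "teacher" or "student"
    system_prompt: str  # persona encoding the agent's cognitive level

    def respond(self, dialogue: list[str]) -> str:
        return llm_generate(self.system_prompt, dialogue)


def simulate_teaching(question: str, answer: str, turns: int = 2) -> list[dict]:
    """Augment one seed QA pair into a teacher-student interaction transcript."""
    teacher = Agent("teacher",
                    f"Teacher: explain step by step. Reference answer: {answer}")
    student = Agent("student",
                    "Student: limited prior knowledge; ask about unclear steps.")
    dialogue = [question]
    transcript = []
    for _ in range(turns):
        explanation = teacher.respond(dialogue)   # teacher explains
        dialogue.append(explanation)
        follow_up = student.respond(dialogue)     # student probes gaps
        dialogue.append(follow_up)
        transcript.append({"teacher": explanation, "student": follow_up})
    return transcript


if __name__ == "__main__":
    # One Orca-Math-style seed pair (made up for illustration).
    for turn in simulate_teaching("Why does 7 * 8 = 56?", "56"):
        print(turn)
```

In this reading, each simulated dialogue is serialized back into instruction-response pairs for fine-tuning; the paper's three pedagogical scenarios would correspond to different persona and interaction templates than the single one sketched here.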

@article{yue2025_2506.02689,
  title={MASTER: Enhancing Large Language Model via Multi-Agent Simulated Teaching},
  author={Liang Yue and Yihong Tang and Kehai Chen and Jie Liu and Min Zhang},
  journal={arXiv preprint arXiv:2506.02689},
  year={2025}
}