Self-Challenging Language Model Agents

2 June 2025
Yifei Zhou
Sergey Levine
Jason Weston
Xian Li
Sainbayar Sukhbaatar
Main: 9 pages · Appendix: 12 pages · Bibliography: 4 pages · 23 figures · 7 tables
Abstract

Large language models are quickly becoming the foundation for intelligent agents that are capable of using tools. However, training such agents is challenging because it requires human creation and annotation of a diverse set of tasks, tools, and evaluation criteria. In this paper, we propose the Self-Challenging framework for training an agent on high-quality tasks that it generates itself. The agent first plays the role of challenger and generates a task after interacting with the given tools. Tasks take the form of a novel general class of problems termed Code-as-Task, each defined by an instruction, a verification function, and solution and failure cases that serve as tests, allowing only high-quality tasks to be retained. The agent then takes an executor role and trains on those tasks with reinforcement learning, using the evaluation feedback as a reward. Evaluation on two existing multi-turn tool-use agent benchmarks, M3ToolEval and TauBench, shows that the Self-Challenging framework yields more than a two-fold improvement for Llama-3.1-8B-Instruct, despite using only self-generated training data.
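The abstract's quality filter for Code-as-Task instances can be sketched in a few lines: a self-generated task is kept only if its verification function accepts the provided solution case and rejects every failure case. The class and function names below are hypothetical illustrations, not the paper's actual implementation.

```python
# Minimal sketch of the Code-as-Task quality filter described in the abstract.
# A task bundles an instruction, a verification function, one known-good
# solution, and known-bad failure cases; the filter keeps the task only if
# the verifier is discriminative (passes the solution, fails every failure).
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class CodeAsTask:
    instruction: str
    verify: Callable[[str], bool]       # verification function over an agent's answer
    solution: str                       # known-good case; must pass verify
    failures: List[str] = field(default_factory=list)  # known-bad cases; must fail


def is_high_quality(task: CodeAsTask) -> bool:
    """Keep a self-generated task only if its tests separate good from bad answers."""
    if not task.verify(task.solution):
        return False                                    # solution must pass
    return all(not task.verify(f) for f in task.failures)  # every failure must fail


# Toy example: the verifier accepts only the exact answer "42".
task = CodeAsTask(
    instruction="Compute 6 * 7 and report the result.",
    verify=lambda answer: answer.strip() == "42",
    solution="42",
    failures=["41", "forty-two"],
)
print(is_high_quality(task))  # True
```

In the framework as described, tasks passing such a filter become reinforcement-learning training data for the executor role, with the verification outcome serving as the reward signal.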

@article{zhou2025_2506.01716,
  title={Self-Challenging Language Model Agents},
  author={Yifei Zhou and Sergey Levine and Jason Weston and Xian Li and Sainbayar Sukhbaatar},
  journal={arXiv preprint arXiv:2506.01716},
  year={2025}
}