Teach2Eval: An Indirect Evaluation Method for LLM by Judging How It Teaches

18 May 2025
Yuhang Zhou
Xutian Chen
Yixin Cao
Yuchen Ni
Yu He
Siyu Tian
Xiang Liu
Jian Zhang
Chuanjun Ji
Guangnan Ye
Xipeng Qiu
Abstract

Recent progress in large language models (LLMs) has outpaced the development of effective evaluation methods. Traditional benchmarks rely on task-specific metrics and static datasets, which often suffer from fairness issues, limited scalability, and contamination risks. In this paper, we introduce Teach2Eval, an indirect evaluation framework inspired by the Feynman Technique. Instead of directly testing LLMs on predefined tasks, our method assesses a model across multiple abilities by judging how effectively it teaches weaker student models to perform those tasks. By converting open-ended tasks into standardized multiple-choice questions (MCQs) through teacher-generated feedback, Teach2Eval enables scalable, automated, and multi-dimensional assessment. Our approach not only avoids data leakage and memorization but also captures a broad range of cognitive abilities that are orthogonal to current benchmarks. Experimental results across 26 leading LLMs show strong alignment with existing human- and model-based dynamic rankings, while offering additional interpretability for training guidance.
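
The abstract describes the evaluation loop only at a high level. The sketch below illustrates one plausible reading of it, assuming hypothetical teacher and student callables (text in, text out) standing in for LLM APIs; the MCQ prompt format, the feedback prompt, and the gain-based scoring are illustrative assumptions, not the authors' released protocol.

# Minimal sketch of the Teach2Eval loop: score the student on MCQs
# before and after receiving teacher-generated feedback, and credit
# the teacher with the resulting accuracy gain.
from typing import Callable, List, Tuple

# An MCQ is (question, options, index of the correct option).
MCQ = Tuple[str, List[str], int]
Model = Callable[[str], str]  # hypothetical LLM interface: prompt -> reply

def ask(student: Model, question: str, options: List[str]) -> int:
    """Prompt the student with one MCQ and parse its reply to an option index."""
    letters = "ABCD"[: len(options)]
    prompt = (
        question
        + "\n"
        + "\n".join(f"{letter}. {opt}" for letter, opt in zip(letters, options))
        + "\nAnswer with a single letter."
    )
    reply = student(prompt).strip().upper()
    return letters.index(reply[0]) if reply and reply[0] in letters else -1

def accuracy(student: Model, mcqs: List[MCQ], feedback: str = "") -> float:
    """Fraction of MCQs answered correctly, with optional teacher feedback prepended."""
    correct = sum(
        ask(student, (feedback + "\n\n" + q) if feedback else q, opts) == gold
        for q, opts, gold in mcqs
    )
    return correct / len(mcqs)

def teach2eval_score(teacher: Model, student: Model, mcqs: List[MCQ]) -> float:
    """Score the teacher by how much its feedback improves the student's accuracy."""
    before = accuracy(student, mcqs)
    # The teacher sees the questions (not the answer key here) and writes guidance.
    lesson = teacher(
        "Explain how to solve these questions so a weaker model can answer them:\n"
        + "\n".join(q for q, _, _ in mcqs)
    )
    after = accuracy(student, mcqs, feedback=lesson)
    return after - before  # a positive gain indicates effective teaching

Under this reading, a stronger teacher earns a higher score only by producing feedback a weaker student can actually use, which is what makes the evaluation indirect and resistant to the teacher's own memorization of the benchmark.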

@article{zhou2025_2505.12259,
  title={Teach2Eval: An Indirect Evaluation Method for LLM by Judging How It Teaches},
  author={Yuhang Zhou and Xutian Chen and Yixin Cao and Yuchen Ni and Yu He and Siyu Tian and Xiang Liu and Jian Zhang and Chuanjun Ji and Guangnan Ye and Xipeng Qiu},
  journal={arXiv preprint arXiv:2505.12259},
  year={2025}
}