LLM-Powered Benchmark Factory: Reliable, Generic, and Efficient

2 February 2025
Peiwen Yuan
Shaoxiong Feng
Yiwei Li
Xinglin Wang
Yueqi Zhang
Jiayi Shi
Chuyi Tan
Boyuan Pan
Yao Hu
Kan Li
Abstract

The rapid advancement of large language models (LLMs) has led to a surge in both model supply and application demands. To facilitate effective matching between them, reliable, generic, and efficient benchmark generators are widely needed. However, human annotators are constrained by inefficiency, and current LLM benchmark generators not only lack generalizability but also struggle with limited reliability, as they lack a comprehensive evaluation framework for validation and optimization. To fill this gap, we first propose an automated and unbiased evaluation framework, structured around four dimensions and ten criteria. Under this framework, we carefully analyze the advantages and weaknesses of directly prompting LLMs as generic benchmark generators. To enhance reliability, we introduce a series of methods to address the identified weaknesses and integrate them as BenchMaker. Experiments across multiple LLMs and tasks confirm that BenchMaker achieves superior or comparable performance to human-annotated benchmarks on all metrics, highlighting its generalizability and reliability. More importantly, it delivers highly consistent evaluation results across 12 LLMs (0.967 Pearson correlation against MMLU-Pro), while taking only $0.005 and 0.38 minutes per sample.
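The consistency claim above can be made concrete: score a set of LLMs on the generated benchmark and on a reference benchmark (here, MMLU-Pro), then compute the Pearson correlation between the two score vectors. A minimal sketch follows; the accuracy values are hypothetical placeholders, not results from the paper — only the metric itself is taken from the abstract.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(vx * vy)

# Hypothetical per-model accuracies on a generated benchmark and on a
# reference benchmark. A correlation near 1 means the generated
# benchmark ranks and separates models consistently with the reference.
generated = [0.62, 0.71, 0.55, 0.80, 0.67]
reference = [0.60, 0.74, 0.52, 0.83, 0.65]
print(f"Pearson r = {pearson(generated, reference):.3f}")
```

In practice one would use `scipy.stats.pearsonr` over the full set of evaluated models; the hand-rolled version here just makes the computation explicit.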

@article{yuan2025_2502.01683,
  title={LLM-Powered Benchmark Factory: Reliable, Generic, and Efficient},
  author={Peiwen Yuan and Shaoxiong Feng and Yiwei Li and Xinglin Wang and Yueqi Zhang and Jiayi Shi and Chuyi Tan and Boyuan Pan and Yao Hu and Kan Li},
  journal={arXiv preprint arXiv:2502.01683},
  year={2025}
}