ProfBench: Multi-Domain Rubrics requiring Professional Knowledge to Answer and Judge

21 October 2025
Zhilin Wang
Jaehun Jung
Ximing Lu
Boyao Wang
Ellie Evans
Jiaqi Zeng
Pavlo Molchanov
Yejin Choi
Jan Kautz
Yi Dong
Links: arXiv (abs) · PDF · HTML · HuggingFace · GitHub
Main: 9 pages · 5 figures · 5 tables · Bibliography: 2 pages · Appendix: 12 pages
Abstract

Evaluating progress in large language models (LLMs) is often constrained by the challenge of verifying responses, limiting assessments to tasks like mathematics, programming, and short-form question-answering. However, many real-world applications require evaluating LLMs on processing professional documents, synthesizing information, and generating comprehensive reports in response to user queries. We introduce ProfBench: a set of over 7000 response-criterion pairs evaluated by human experts with professional knowledge across Physics PhD, Chemistry PhD, Finance MBA, and Consulting MBA domains. We build robust and affordable LLM-Judges to evaluate ProfBench rubrics, mitigating self-enhancement bias and reducing the cost of evaluation by 2-3 orders of magnitude, to make it fair and accessible to the broader community. Our findings reveal that ProfBench poses significant challenges even for state-of-the-art LLMs, with top-performing models like GPT-5-high achieving only 65.9% overall performance. Furthermore, we identify notable performance disparities between proprietary and open-weight models and provide insights into the role that extended thinking plays in addressing complex, professional-domain tasks. Data: this https URL and Code: this https URL
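The abstract describes ProfBench as a collection of response-criterion pairs, each scored against an expert-written rubric item by an LLM judge. As a rough illustration only, the sketch below shows what one such pair and a per-criterion judging loop could look like; the field names, judge prompt, and aggregation are assumptions for exposition, not the paper's actual schema or code (which is at the linked Code URL).

```python
# Illustrative sketch of rubric-based judging, as summarized in the abstract.
# All names and the prompt format are hypothetical, not the ProfBench implementation.
from dataclasses import dataclass

@dataclass
class ResponseCriterionPair:
    domain: str     # e.g. "Physics PhD", "Finance MBA"
    prompt: str     # the professional task given to the model
    response: str   # the model's long-form answer
    criterion: str  # one expert-written rubric item the response should satisfy

def judge(pair: ResponseCriterionPair, llm_judge) -> bool:
    """Ask an LLM judge (any callable: prompt str -> completion str)
    whether the response satisfies a single rubric criterion."""
    verdict = llm_judge(
        f"Task:\n{pair.prompt}\n\n"
        f"Response:\n{pair.response}\n\n"
        f"Criterion: {pair.criterion}\n"
        "Does the response satisfy this criterion? Answer YES or NO."
    )
    return verdict.strip().upper().startswith("YES")

def overall_score(pairs: list[ResponseCriterionPair], llm_judge) -> float:
    """One simple aggregate: the fraction of criteria judged as satisfied."""
    return sum(judge(p, llm_judge) for p in pairs) / len(pairs)
```

Scoring per criterion rather than per response is one way a rubric benchmark can stay verifiable for long-form outputs: each rubric item reduces to a binary judgment that a cheaper judge model can make.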
