
YESciEval: Robust LLM-as-a-Judge for Scientific Question Answering

Abstract

Large Language Models (LLMs) drive scientific question answering on modern search engines, yet their evaluation robustness remains underexplored. We introduce YESciEval, an open-source framework that combines fine-grained rubric-based assessment with reinforcement learning to mitigate optimism bias in LLM evaluators. We release multidisciplinary science Q&A datasets, including adversarial variants, with evaluation scores from multiple LLMs. Independent of proprietary models and human feedback, our approach enables scalable, cost-free evaluation. By advancing reliable LLM-as-a-judge models, this work supports AI alignment and fosters the robust, transparent evaluation essential for scientific inquiry and artificial general intelligence.
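As a rough illustration of what rubric-based LLM-as-a-judge evaluation can look like in practice, the sketch below builds a per-criterion scoring prompt and parses the judge's structured reply. The rubric criteria, `build_judge_prompt`, and the `call_llm` stub are hypothetical placeholders for illustration only; they are not the YESciEval API or its released rubric.

```python
# Minimal, hypothetical sketch of rubric-based LLM-as-a-judge scoring.
# The criteria and the call_llm stub are illustrative assumptions,
# not the actual YESciEval framework.
import json
from dataclasses import dataclass

# Hypothetical rubric: criterion name -> description shown to the judge model.
RUBRIC = {
    "correctness": "Is the answer factually consistent with the question's domain?",
    "completeness": "Does the answer cover the key aspects of the question?",
    "conciseness": "Is the answer free of redundant or irrelevant content?",
}

@dataclass
class Judgement:
    scores: dict       # criterion -> integer score on a 1-5 scale
    rationales: dict   # criterion -> short free-text justification

def build_judge_prompt(question: str, answer: str) -> str:
    """Assemble a single evaluation prompt asking for per-criterion scores."""
    criteria = "\n".join(f"- {name}: {desc}" for name, desc in RUBRIC.items())
    return (
        "You are a strict scientific evaluator.\n"
        f"Question:\n{question}\n\nAnswer:\n{answer}\n\n"
        f"Rate the answer on each criterion (1-5) and justify briefly:\n{criteria}\n"
        'Respond as JSON: {"<criterion>": {"score": <int>, "rationale": "<text>"}}'
    )

def parse_judgement(raw: str) -> Judgement:
    """Parse the judge model's JSON reply into structured scores."""
    data = json.loads(raw)
    return Judgement(
        scores={k: int(v["score"]) for k, v in data.items()},
        rationales={k: v["rationale"] for k, v in data.items()},
    )

def call_llm(prompt: str) -> str:
    """Placeholder judge model; replace with a real LLM call."""
    return json.dumps({k: {"score": 3, "rationale": "stub"} for k in RUBRIC})

if __name__ == "__main__":
    prompt = build_judge_prompt("What causes aurorae?", "Charged solar particles ...")
    print(parse_judgement(call_llm(prompt)))
```

Returning scores together with rationales is one common way to make judge outputs auditable; the paper's RL-based debiasing of the judge itself is beyond the scope of this sketch.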

@article{d'souza2025_2505.14279,
  title={YESciEval: Robust LLM-as-a-Judge for Scientific Question Answering},
  author={Jennifer D'Souza and Hamed Babaei Giglou and Quentin Münch},
  journal={arXiv preprint arXiv:2505.14279},
  year={2025}
}