
SciCUEval: A Comprehensive Dataset for Evaluating Scientific Context Understanding in Large Language Models

Abstract

Large Language Models (LLMs) have shown impressive capabilities in contextual understanding and reasoning. However, their evaluation across diverse scientific domains remains underexplored, as existing benchmarks focus primarily on general domains and fail to capture the complexity of scientific data. To bridge this gap, we construct SciCUEval, a comprehensive benchmark dataset tailored to assess the scientific context understanding capability of LLMs. It comprises ten domain-specific sub-datasets spanning biology, chemistry, physics, biomedicine, and materials science, and integrates diverse data modalities, including structured tables, knowledge graphs, and unstructured texts. Through a variety of question formats, SciCUEval systematically evaluates four core competencies: relevant-information identification, information-absence detection, multi-source information integration, and context-aware inference. We conduct extensive evaluations of state-of-the-art LLMs on SciCUEval, providing a fine-grained analysis of their strengths and limitations in scientific context understanding and offering valuable insights for the future development of scientific-domain LLMs.
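
For orientation, the sketch below shows one plausible way an evaluation item in such a benchmark could be represented, reflecting the domains, modalities, and four competencies the abstract lists. Every field name and value here is an illustrative assumption; the paper page does not specify the dataset's actual schema.

from dataclasses import dataclass
from typing import Literal, Optional

# Hypothetical item schema, assuming fields not given in the abstract.
Competency = Literal[
    "relevant_information_identification",
    "information_absence_detection",
    "multi_source_information_integration",
    "context_aware_inference",
]

Modality = Literal["table", "knowledge_graph", "text"]

@dataclass
class SciCUEvalItem:
    domain: str                    # e.g., "biology", "chemistry", "physics"
    modality: Modality             # structured table, knowledge graph, or unstructured text
    competency: Competency         # which of the four core competencies is tested
    context: str                   # the scientific context supplied to the model
    question: str                  # the question posed over that context
    choices: Optional[list[str]]   # present for multiple-choice question formats
    answer: str                    # gold answer used for scoring

Under this assumed schema, scoring a model would amount to prompting it with context plus question, then comparing its output against answer, with per-domain and per-competency aggregation enabling the fine-grained analysis the abstract describes.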

@article{yu2025_2505.15094,
  title={SciCUEval: A Comprehensive Dataset for Evaluating Scientific Context Understanding in Large Language Models},
  author={Jing Yu and Yuqi Tang and Kehua Feng and Mingyang Rao and Lei Liang and Zhiqiang Zhang and Mengshu Sun and Wen Zhang and Qiang Zhang and Keyan Ding and Huajun Chen},
  journal={arXiv preprint arXiv:2505.15094},
  year={2025}
}