When AI Co-Scientists Fail: SPOT-a Benchmark for Automated Verification of Scientific Research

17 May 2025
Guijin Son
Jiwoo Hong
Honglu Fan
Heejeong Nam
Hyunwoo Ko
Seungwon Lim
Jinyeop Song
Jinha Choi
Gonçalo Paulo
Youngjae Yu
Stella Biderman
Abstract

Recent advances in large language models (LLMs) have fueled the vision of automated scientific discovery, often called AI Co-Scientists. To date, prior work casts these systems as generative co-authors responsible for crafting hypotheses, synthesizing code, or drafting manuscripts. In this work, we explore a complementary application: using LLMs as verifiers to automate the academic verification of scientific manuscripts. To that end, we introduce SPOT, a dataset of 83 published papers paired with 91 errors significant enough to prompt errata or retraction, cross-validated with actual authors and human annotators. Evaluating state-of-the-art LLMs on SPOT, we find that none surpasses 21.1% recall or 6.1% precision (o3 achieves the best scores, with all others near zero). Furthermore, confidence estimates are uniformly low, and across eight independent runs, models rarely rediscover the same errors, undermining their reliability. Finally, qualitative analysis with domain experts reveals that even the strongest models make mistakes resembling student-level misconceptions derived from misunderstandings. These findings highlight the substantial gap between current LLM capabilities and the requirements for dependable AI-assisted academic verification.
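The recall and precision figures quoted above come from matching model-predicted errors against the benchmark's annotated errors. As a minimal sketch of that kind of scoring (the function name and error identifiers here are illustrative, not SPOT's actual evaluation code):

```python
def precision_recall(predicted: set, gold: set) -> tuple[float, float]:
    """Score a set of predicted errors against gold annotations.

    `predicted` and `gold` are sets of hashable error identifiers,
    e.g. (paper_id, error_label) pairs. Precision is the fraction of
    predictions that are real errors; recall is the fraction of real
    errors that were found.
    """
    hits = len(predicted & gold)
    precision = hits / len(predicted) if predicted else 0.0
    recall = hits / len(gold) if gold else 0.0
    return precision, recall

# Example: a model flags three issues, one of which matches
# the two annotated errors for a paper.
p, r = precision_recall({"e1", "e4", "e5"}, {"e1", "e2"})
# p == 1/3, r == 0.5
```

Under this kind of metric, a model that floods the reviewer with spurious flags scores low precision even if it occasionally hits a real error, which is why the paper reports both numbers.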

View on arXiv
@article{son2025_2505.11855,
  title={When AI Co-Scientists Fail: SPOT-a Benchmark for Automated Verification of Scientific Research},
  author={Guijin Son and Jiwoo Hong and Honglu Fan and Heejeong Nam and Hyunwoo Ko and Seungwon Lim and Jinyeop Song and Jinha Choi and Gonçalo Paulo and Youngjae Yu and Stella Biderman},
  journal={arXiv preprint arXiv:2505.11855},
  year={2025}
}