ResearchTrend.AI

ScienceBoard: Evaluating Multimodal Autonomous Agents in Realistic Scientific Workflows

26 May 2025
Qiushi Sun
Zhoumianze Liu
Chang Ma
Zichen Ding
Fangzhi Xu
Zhangyue Yin
Haiteng Zhao
Zhenyu Wu
Kanzhi Cheng
Zhaoyang Liu
Jianing Wang
Qintong Li
Xiangru Tang
Tianbao Xie
Xiachong Feng
Xiang Li
B. Kao
Wenhai Wang
Biqing Qi
Lingpeng Kong
Zhiyong Wu
Communities: LLMAG · LM&Ro
ArXiv (abs) · PDF · HTML
Main: 8 pages · Bibliography: 8 pages · Appendix: 19 pages · 14 figures · 8 tables
Abstract

Large Language Models (LLMs) have extended their impact beyond Natural Language Processing, substantially fostering the development of interdisciplinary research. Recently, various LLM-based agents have been developed to assist scientific discovery across multiple aspects and domains. Among these, computer-using agents, capable of interacting with operating systems as humans do, are paving the way to automated scientific problem-solving and addressing routines in researchers' workflows. Recognizing the transformative potential of these agents, we introduce ScienceBoard, which encompasses two complementary contributions: (i) a realistic, multi-domain environment featuring dynamic and visually rich scientific workflows with integrated professional software, where agents can autonomously interact via different interfaces to accelerate complex research tasks and experiments; and (ii) a challenging benchmark of 169 high-quality, rigorously validated real-world tasks curated by humans, spanning scientific-discovery workflows in domains such as biochemistry, astronomy, and geoinformatics. Extensive evaluations of agents with state-of-the-art backbones (e.g., GPT-4o, Claude 3.7, UI-TARS) show that, despite some promising results, they still fall short of reliably assisting scientists in complex workflows, achieving only a 15% overall success rate. In-depth analysis further provides valuable insights for addressing current agent limitations and more effective design principles, paving the way to build more capable agents for scientific discovery. Our code, environment, and benchmark are available at this https URL.

View on arXiv
@article{sun2025_2505.19897,
  title={ScienceBoard: Evaluating Multimodal Autonomous Agents in Realistic Scientific Workflows},
  author={Qiushi Sun and Zhoumianze Liu and Chang Ma and Zichen Ding and Fangzhi Xu and Zhangyue Yin and Haiteng Zhao and Zhenyu Wu and Kanzhi Cheng and Zhaoyang Liu and Jianing Wang and Qintong Li and Xiangru Tang and Tianbao Xie and Xiachong Feng and Xiang Li and Ben Kao and Wenhai Wang and Biqing Qi and Lingpeng Kong and Zhiyong Wu},
  journal={arXiv preprint arXiv:2505.19897},
  year={2025}
}