Large language models for automated scholarly paper review: A survey

17 January 2025
Zhenzhen Zhuang, Jiandong Chen, Hongfeng Xu, Yuwen Jiang, Jialiang Lin
Main: 36 pages · 1 figure · 5 tables · Bibliography: 2 pages · Appendix: 1 page
Abstract

Large language models (LLMs) have significantly impacted human society, influencing various domains. Academia is not merely a domain affected by LLMs; it is also a pivotal force in their development. In academic publishing, this influence is exemplified by the incorporation of LLMs into the peer review process for manuscripts. LLMs hold transformative potential for the full-scale implementation of automated scholarly paper review (ASPR), but they also pose new issues and challenges that need to be addressed. In this survey paper, we aim to provide a holistic view of ASPR in the era of LLMs. We begin by surveying which LLMs are used to conduct ASPR. We then review which ASPR-related technological bottlenecks have been solved through the incorporation of LLM technology. After that, we explore the new methods, datasets, source code, and online systems that LLMs have brought to ASPR. Furthermore, we summarize the performance and issues of LLMs in ASPR, and investigate the attitudes and reactions of publishers and academia toward ASPR. Lastly, we discuss the challenges and future directions for the development of LLMs for ASPR. This survey serves as a reference for researchers and can promote the progress of ASPR toward its actual implementation.

@article{zhuang2025_2501.10326,
  title={Large language models for automated scholarly paper review: A survey},
  author={Zhenzhen Zhuang and Jiandong Chen and Hongfeng Xu and Yuwen Jiang and Jialiang Lin},
  journal={arXiv preprint arXiv:2501.10326},
  year={2025}
}