Harnessing Large Language Models for Scientific Novelty Detection

Abstract

In an era of exponential scientific growth, identifying novel research ideas is crucial yet challenging in academia. Despite its potential, the lack of an appropriate benchmark dataset hinders research on novelty detection. More importantly, simply adopting existing NLP techniques, e.g., retrieving and then cross-checking, is not a one-size-fits-all solution due to the gap between textual similarity and idea conception. In this paper, we propose to harness large language models (LLMs) for scientific novelty detection (ND), together with two new datasets in the marketing and NLP domains. To construct suitable datasets for ND, we propose to extract closure sets of papers based on their relationships and then summarize their main ideas with LLMs. To capture idea conception, we propose to train a lightweight retriever by distilling idea-level knowledge from LLMs, aligning ideas with similar conception and enabling efficient, accurate idea retrieval for LLM-based novelty detection. Experiments show that our method consistently outperforms others on the proposed benchmark datasets for both idea retrieval and ND tasks. Code and data are available at this https URL.
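The retrieve-then-check pipeline the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's method: the toy `embed` below is a bag-of-words counter standing in for the distilled lightweight retriever, and the `NOVELTY_THRESHOLD` value is an invented placeholder, not a parameter from the paper.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; the paper instead trains a dense
    # retriever distilled from LLM idea-level knowledge (assumption:
    # any encoder mapping an idea summary to a vector could slot in here).
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query_idea, corpus_ideas, k=2):
    # Return the k corpus ideas most similar to the query idea.
    q = embed(query_idea)
    ranked = sorted(corpus_ideas, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def looks_novel(query_idea, corpus_ideas, threshold=0.5):
    # Hypothetical novelty check: an idea is flagged novel when even its
    # closest retrieved neighbor is below a similarity threshold. The paper
    # delegates the final judgment to an LLM over the retrieved ideas.
    top = retrieve(query_idea, corpus_ideas, k=1)
    return not top or cosine(embed(query_idea), embed(top[0])) < threshold
```

In the paper's pipeline the retrieval step narrows the corpus to conceptually close ideas, after which an LLM performs the actual novelty judgment; the threshold heuristic above merely stands in for that final step.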

@article{liu2025_2505.24615,
  title={Harnessing Large Language Models for Scientific Novelty Detection},
  author={Yan Liu and Zonglin Yang and Soujanya Poria and Thanh-Son Nguyen and Erik Cambria},
  journal={arXiv preprint arXiv:2505.24615},
  year={2025}
}