From Reasoning to Learning: A Survey on Hypothesis Discovery and Rule Learning with Large Language Models

28 May 2025
Kaiyu He
Zhiyu Chen
Communities: ReLM, LRM, ELM
Abstract

Since the advent of Large Language Models (LLMs), efforts have largely focused on improving their instruction-following and deductive reasoning abilities, leaving open the question of whether these models can truly discover new knowledge. In pursuit of artificial general intelligence (AGI), there is a growing need for models that not only execute commands or retrieve information but also learn, reason, and generate new knowledge by formulating novel hypotheses and theories that deepen our understanding of the world. Guided by Peirce's framework of abduction, deduction, and induction, this survey offers a structured lens to examine LLM-based hypothesis discovery. We synthesize existing work in hypothesis generation, application, and validation, identifying both key achievements and critical gaps. By unifying these threads, we illuminate how LLMs might evolve from mere "information executors" into engines of genuine innovation, potentially transforming research, science, and real-world problem solving.
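To make the Peircean lens concrete, the sketch below shows one hypothetical way an abduction-deduction-induction loop could be wired around an LLM: abduction proposes a candidate rule from observations, deduction applies it to held-out cases, and induction scores how well the predictions hold up. All names here (propose_hypothesis, the `llm` callable, the prompt wording, the threshold) are illustrative assumptions, not an implementation described in the paper.

# Minimal, illustrative sketch of an abduction -> deduction -> induction loop
# around a generic LLM callable. Function names and prompts are assumptions.
from typing import Callable, List, Tuple

def propose_hypothesis(llm: Callable[[str], str], observations: List[str]) -> str:
    """Abduction: ask the model for a candidate rule that would explain the observations."""
    prompt = ("Observations:\n" + "\n".join(f"- {o}" for o in observations)
              + "\nPropose one general rule that would explain them.")
    return llm(prompt)

def predict(llm: Callable[[str], str], hypothesis: str, case: str) -> str:
    """Deduction: apply the hypothesized rule to a new case and derive a prediction."""
    prompt = f"Rule: {hypothesis}\nCase: {case}\nWhat does the rule predict? Answer briefly."
    return llm(prompt)

def validate(predictions: List[str], outcomes: List[str]) -> float:
    """Induction: score how well the rule's predictions match the observed outcomes."""
    hits = sum(p.strip().lower() == o.strip().lower() for p, o in zip(predictions, outcomes))
    return hits / max(len(outcomes), 1)

def hypothesis_loop(llm: Callable[[str], str],
                    observations: List[str],
                    test_cases: List[str],
                    test_outcomes: List[str],
                    threshold: float = 0.8,
                    max_rounds: int = 3) -> Tuple[str, float]:
    """Iterate the three phases until the candidate rule generalizes well enough."""
    hypothesis, score = "", 0.0
    for _ in range(max_rounds):
        hypothesis = propose_hypothesis(llm, observations)
        predictions = [predict(llm, hypothesis, c) for c in test_cases]
        score = validate(predictions, test_outcomes)
        if score >= threshold:
            break
        # Feed failed predictions back as new observations for the next abductive round.
        observations = observations + [
            f"Rule '{hypothesis}' predicted '{p}' for '{c}' but the outcome was '{o}'"
            for c, p, o in zip(test_cases, predictions, test_outcomes)
            if p.strip().lower() != o.strip().lower()
        ]
    return hypothesis, score

Any text-completion function can stand in for `llm` here; the loop itself only captures the survey's organizing idea that generation, application, and validation of hypotheses feed back into one another.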

@article{he2025_2505.21935,
  title={From Reasoning to Learning: A Survey on Hypothesis Discovery and Rule Learning with Large Language Models},
  author={Kaiyu He and Zhiyu Chen},
  journal={arXiv preprint arXiv:2505.21935},
  year={2025}
}