LP-LM: No Hallucinations in Question Answering with Logic Programming

13 February 2025
Katherine Wu
Yanhong A. Liu
Abstract

Large language models (LLMs) are able to generate human-like responses to user queries. However, LLMs exhibit inherent limitations, especially because they hallucinate. This paper introduces LP-LM, a system that grounds answers to questions in known facts contained in a knowledge base (KB), facilitated through semantic parsing in Prolog, and always produces answers that are reliable. LP-LM generates a most probable constituency parse tree along with a corresponding Prolog term for an input question via Prolog definite clause grammar (DCG) parsing. The term is then executed against a KB of natural language sentences also represented as Prolog terms for question answering. By leveraging DCG and tabling, LP-LM runs in linear time in the size of input sentences for sufficiently many grammar rules. Performing experiments comparing LP-LM with current well-known LLMs in accuracy, we show that LLMs hallucinate on even simple questions, unlike LP-LM.
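A minimal SWI-Prolog sketch of the kind of pipeline the abstract describes; the grammar, the kb/1 facts, and the answer/2 predicate are illustrative assumptions, not the authors' implementation. A DCG nonterminal parses a tokenized question into a Prolog term, tabling is enabled on the parser, and the resulting term is executed against a KB of facts stored as terms.

% Minimal sketch, assuming SWI-Prolog; not the LP-LM grammar or KB.
:- table question/3.            % the DCG nonterminal question//1 expands to
                                % question/3; tabling avoids re-deriving sub-parses

% Hypothetical KB: each known fact is stored as a Prolog term.
kb(capital_of(paris, france)).
kb(capital_of(tokyo, japan)).

% Grammar: "what is the capital of france" parses to capital_of(_City, france).
question(capital_of(_City, Country)) -->
    [what, is, the, capital, of],
    country(Country).

country(france) --> [france].
country(japan)  --> [japan].

% Parse the question, execute the resulting term against the KB, and
% return the matching first argument as the answer.
answer(Tokens, Answer) :-
    phrase(question(Query), Tokens),
    kb(Query),
    arg(1, Query, Answer).

% Example query:
% ?- answer([what, is, the, capital, of, france], A).
% A = paris.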

@article{wu2025_2502.09212,
  title={LP-LM: No Hallucinations in Question Answering with Logic Programming},
  author={Katherine Wu and Yanhong A. Liu},
  journal={arXiv preprint arXiv:2502.09212},
  year={2025}
}