Improving the Reliability of LLMs: Combining CoT, RAG, Self-Consistency, and Self-Verification

13 May 2025
Adarsh Kumar, Hwiyoon Kim, Jawahar Sai Nathani, Neil Roy
HILM · LRM
Abstract

Hallucination, where large language models (LLMs) generate confident but incorrect or irrelevant information, remains a key limitation in their application to complex, open-ended tasks. Chain-of-thought (CoT) prompting has emerged as a promising method for improving multistep reasoning by guiding models through intermediate steps. However, CoT alone does not fully address the hallucination problem. In this work, we investigate how combining CoT with retrieval-augmented generation (RAG), as well as applying self-consistency and self-verification strategies, can reduce hallucinations and improve factual accuracy. By incorporating external knowledge sources during reasoning and enabling models to verify or revise their own outputs, we aim to generate more accurate and coherent responses. We present a comparative evaluation of baseline LLMs against CoT, CoT+RAG, self-consistency, and self-verification techniques. Our results highlight the effectiveness of each method and identify the most robust approach for minimizing hallucinations while preserving fluency and reasoning depth.
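
To make the combination concrete, below is a minimal Python sketch of how a CoT+RAG pipeline with self-consistency voting and a self-verification pass can be wired together. This is not the authors' implementation: llm_generate and retrieve_passages are hypothetical placeholders for whatever model API and retriever are actually used, and the prompts are illustrative only.

# Minimal sketch of the combined pipeline: retrieval-grounded CoT, self-consistency
# voting over sampled reasoning traces, and a final self-verification step.
# llm_generate and retrieve_passages are assumed placeholders, not real APIs.
from collections import Counter

def llm_generate(prompt: str, temperature: float = 0.7) -> str:
    """Placeholder for a call to an LLM; returns the model's text output."""
    raise NotImplementedError("plug in a model API here")

def retrieve_passages(question: str, k: int = 3) -> list[str]:
    """Placeholder for a retriever over an external knowledge source."""
    raise NotImplementedError("plug in a retriever here")

def answer_with_cot_rag(question: str, n_samples: int = 5) -> str:
    # RAG: ground the reasoning in retrieved evidence.
    context = "\n".join(retrieve_passages(question))
    cot_prompt = (
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
        "Think step by step, then give the final answer after 'Answer:'."
    )

    # Self-consistency: sample several CoT traces and majority-vote the answers.
    answers = []
    for _ in range(n_samples):
        completion = llm_generate(cot_prompt, temperature=0.7)
        answers.append(completion.split("Answer:")[-1].strip())
    candidate = Counter(answers).most_common(1)[0][0]

    # Self-verification: ask the model to check the candidate against the
    # retrieved context and revise it if it is unsupported or inconsistent.
    verify_prompt = (
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
        f"Proposed answer: {candidate}\n"
        "Is the proposed answer supported by the context? "
        "If yes, repeat it; if not, give a corrected answer."
    )
    return llm_generate(verify_prompt, temperature=0.0)

In this sketch, sampling with nonzero temperature produces diverse reasoning paths for the majority vote, while the verification call runs at temperature 0 so the final check is deterministic.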

@article{kumar2025_2505.09031,
  title={Improving the Reliability of LLMs: Combining CoT, RAG, Self-Consistency, and Self-Verification},
  author={Adarsh Kumar and Hwiyoon Kim and Jawahar Sai Nathani and Neil Roy},
  journal={arXiv preprint arXiv:2505.09031},
  year={2025}
}