Mitigating Hallucinations in Large Language Models via Self-Refinement-Enhanced Knowledge Retrieval

10 May 2024
Mengjia Niu, Hao Li, Jie Shi, Hamed Haddadi, Fan Mo
HILM

Papers citing "Mitigating Hallucinations in Large Language Models via Self-Refinement-Enhanced Knowledge Retrieval"

12 papers shown

Universal Collection of Euclidean Invariants between Pairs of Position-Orientations
Gijs Bellaard, B. Smets, R. Duits
04 Apr 2025

Safeguarding Mobile GUI Agent via Logic-based Action Verification
Jungjae Lee, Dongjae Lee, Chihun Choi, Youngmin Im, Jaeyoung Wi, Kihong Heo, Sangeun Oh, Sunjae Lee, Insik Shin
LLMAG
24 Mar 2025

RAG-KG-IL: A Multi-Agent Hybrid Framework for Reducing Hallucinations and Enhancing LLM Reasoning through RAG and Incremental Knowledge Graph Learning Integration
Hong Qing Yu, Frank McQuade
14 Mar 2025

TPU-Gen: LLM-Driven Custom Tensor Processing Unit Generator
Deepak Vungarala, Mohammed E. Elbtity, Sumiya Syed, Sakila Alam, Kartik Pandit, Arnob Ghosh, Ramtin Zand, Shaahin Angizi
07 Mar 2025

Position: Multimodal Large Language Models Can Significantly Advance Scientific Reasoning
Yibo Yan, Shen Wang, Jiahao Huo, Jingheng Ye, Zhendong Chu, Xuming Hu, Philip S. Yu, Carla P. Gomes, B. Selman, Qingsong Wen
LRM
05 Feb 2025

CoPrompter: User-Centric Evaluation of LLM Instruction Alignment for Improved Prompt Engineering
Ishika Joshi, Simra Shahid, Shreeya Venneti, Manushree Vasu, Yantao Zheng, Yunyao Li, Balaji Krishnamurthy, Gromit Yeuk-Yin Chan
09 Nov 2024

Towards Reliable Medical Question Answering: Techniques and Challenges in Mitigating Hallucinations in Language Models
Duy Khoa Pham, Bao Quoc Vo
LM&MA, HILM
25 Aug 2024

Halu-J: Critique-Based Hallucination Judge
Binjie Wang, Steffi Chern, Ethan Chern, Pengfei Liu
HILM
17 Jul 2024

In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation
Shiqi Chen, Miao Xiong, Junteng Liu, Zhengxuan Wu, Teng Xiao, Siyang Gao, Junxian He
HILM
03 Mar 2024

Enhancing Uncertainty-Based Hallucination Detection with Stronger Focus
Tianhang Zhang, Lin Qiu, Qipeng Guo, Cheng Deng, Yue Zhang, Zheng-Wei Zhang, Cheng Zhou, Xinbing Wang, Luoyi Fu
HILM
22 Nov 2023

Large Language Models Meet Knowledge Graphs to Answer Factoid Questions
Mikhail Salnikov, Hai Le, Prateek Rajput, Irina Nikishina, Pavel Braslavski, Valentin Malykh, Alexander Panchenko
KELM
03 Oct 2023

The Internal State of an LLM Knows When It's Lying
A. Azaria, Tom Michael Mitchell
HILM
26 Apr 2023