
Hallucination in Language Models (HILM)

Dedicated to studies primarily investigating the causes, implications, and mitigation strategies for the phenomenon in which language models generate plausible but incorrect or nonsensical outputs.

All papers

50 / 1,232 papers shown

Citation-Grounded Code Comprehension: Preventing LLM Hallucination Through Hybrid Retrieval and Graph-Augmented Context
Jahidul Arafat
HILM · 61 · 0 · 0 · 13 Dec 2025

Does Less Hallucination Mean Less Creativity? An Empirical Investigation in LLMs
Mohor Banerjee, Nadya Yuki Wangsajaya, Syed Ali Redha Alsagoff, Min Sen Tan, Zachary Choy Kit Chun, Alvin Chan Guo Wei
HILM · 37 · 0 · 0 · 12 Dec 2025

CLINIC: Evaluating Multilingual Trustworthiness in Language Models for Healthcare
Akash Ghosh, Srivarshinee Sridhar, Raghav Kaushik Ravi, Muhsin Muhsin, Sriparna Saha, Chirag Agarwal
HILM · 12 · 0 · 0 · 12 Dec 2025

The FACTS Leaderboard: A Comprehensive Benchmark for Large Language Model Factuality
Aileen Cheng, Alon Jacovi, Amir Globerson, Ben Golan, Charles Kwong, ..., Srinivasan Venkatachary, Tulsee Doshi, Yossi Matias, Sasha Goldshtein, Dipanjan Das
HILM, ALM, KELM · 128 · 0 · 0 · 11 Dec 2025
FIBER: A Multilingual Evaluation Resource for Factual Inference Bias
Evren Ayberk Munis, Deniz Yılmaz, Arianna Muti, Çağrı Toraman
HILM · 104 · 0 · 0 · 11 Dec 2025

Calibrated Trust in Dealing with LLM Hallucinations: A Qualitative Study
Adrian Ryser, Florian Allwein, Tim Schlippe
HILM · 8 · 0 · 0 · 09 Dec 2025

Training LLMs for Honesty via Confessions
Manas Joglekar, Jeremy Chen, Gabriel Wu, Jason Yosinski, Jasmine Wang, Boaz Barak, Amelia Glaese
HILM · 139 · 0 · 0 · 08 Dec 2025

FVA-RAG: Falsification-Verification Alignment for Mitigating Sycophantic Hallucinations
Mayank Ravishankara
HILM · 108 · 0 · 0 · 07 Dec 2025

Faithfulness metric fusion: Improving the evaluation of LLM trustworthiness across domains
Ben Malin, Tatiana Kalganova, Nikolaos Boulgouris
HILM · 128 · 0 · 0 · 05 Dec 2025
HalluGen: Synthesizing Realistic and Controllable Hallucinations for Evaluating Image Restoration
Seunghoi Kim, Henry F. J. Tregidgo, Chen Jin, Matteo Figini, Daniel C. Alexander
DiffM, HILM · 60 · 0 · 0 · 03 Dec 2025

AlignCheck: a Semantic Open-Domain Metric for Factual Consistency Assessment
Ahmad Aghaebrahimian
HILM · 20 · 0 · 0 · 03 Dec 2025

Towards Unification of Hallucination Detection and Fact Verification for Large Language Models
Weihang Su, Jianming Long, Changyue Wang, Shiyu Lin, Jingyan Xu, Ziyi Ye, Qingyao Ai, Yiqun Liu
HILM · 28 · 0 · 0 · 02 Dec 2025

Detecting AI Hallucinations in Finance: An Information-Theoretic Method Cuts Hallucination Rate by 92%
Mainak Singha
HILM · 132 · 0 · 0 · 02 Dec 2025

InEx: Hallucination Mitigation via Introspection and Cross-Modal Multi-Agent Collaboration
Zhongyu Yang, Yingfang Yuan, Xuanming Jiang, Baoyi An, Wei Pang
LLMAG, HILM, LRM · 52 · 0 · 0 · 02 Dec 2025
Fine-Tuned Large Language Models for Logical Translation: Reducing Hallucinations with Lang2Logic
International Symposium on Networks, Computers and Communications (ISNCC), 2025
Muyu Pan, Dheeraj Kodakandla, Mahfuza Farooque
HILM, LRM · 208 · 0 · 0 · 02 Dec 2025

A Concise Review of Hallucinations in LLMs and their Mitigation
Parth Pulkundwar, Vivek Dhanawade, Rohit Yadav, Minal Sonkar, Medha Asurlekar, Sarita Rathod
HILM · 16 · 0 · 0 · 02 Dec 2025

HalluGraph: Auditable Hallucination Detection for Legal RAG Systems via Knowledge Graph Alignment
Valentin Noël, Elimane Yassine Seidou, Charly Ken Capo-Chichi, Ghanem Amari
HILM · 40 · 0 · 0 · 01 Dec 2025

BHRAM-IL: A Benchmark for Hallucination Recognition and Assessment in Multiple Indian Languages
Hrishikesh Terdalkar, Kirtan Bhojani, Aryan Dongare, Omm Aditya Behera
HILM, VLM · 60 · 0 · 0 · 01 Dec 2025

H-Neurons: On the Existence, Impact, and Origin of Hallucination-Associated Neurons in LLMs
Cheng Gao, Huimin Chen, Chaojun Xiao, Zhiyi Chen, Zhiyuan Liu, Maosong Sun
HILM, LRM · 48 · 0 · 0 · 01 Dec 2025
Graphing the Truth: Structured Visualizations for Automated Hallucination Detection in LLMs
Tanmay Agrawal
HILM · 166 · 0 · 0 · 29 Nov 2025

RoParQ: Paraphrase-Aware Alignment of Large Language Models Towards Robustness to Paraphrased Questions
Minjoon Choi
HILM · 24 · 0 · 0 · 26 Nov 2025

REFLEX: Self-Refining Explainable Fact-Checking via Disentangling Truth into Style and Substance
Chuyi Kong, Gao Wei, Jing Ma, Hongzhan Lin, Yaxin Fan
KELM, HILM · 178 · 0 · 0 · 25 Nov 2025

Large Language Models Require Curated Context for Reliable Political Fact-Checking -- Even with Reasoning and Web Search
Matthew R. Deverna, Kai-Cheng Yang, Harry Yaojun Yan, Filippo Menczer
KELM, HILM, LRM, ELM · 146 · 0 · 0 · 24 Nov 2025

FISCAL: Financial Synthetic Claim-document Augmented Learning for Efficient Fact-Checking
Rishab Sharma, Iman Saberi, Elham Alipour, Jie JW Wu, Fatemeh H. Fard
HILM · 44 · 0 · 0 · 24 Nov 2025

Representational Stability of Truth in Large Language Models
Samantha Dies, Courtney Maynard, Germans Savcisens, Tina Eliassi-Rad
HILM · 177 · 0 · 0 · 24 Nov 2025
"AGI" team at SHROOM-CAP: Data-Centric Approach to Multilingual Hallucination Detection using XLM-RoBERTa
"AGI" team at SHROOM-CAP: Data-Centric Approach to Multilingual Hallucination Detection using XLM-RoBERTa
Harsh Rathva
Pruthwik Mishra
Shrikant Malviya
HILM
38
0
0
23 Nov 2025
Measuring the Impact of Lexical Training Data Coverage on Hallucination Detection in Large Language Models
Measuring the Impact of Lexical Training Data Coverage on Hallucination Detection in Large Language Models
Shuo Zhang
Fabrizio Gotti
Fengran Mo
J. Nie
HILM
182
0
0
22 Nov 2025
MUCH: A Multilingual Claim Hallucination Benchmark
MUCH: A Multilingual Claim Hallucination Benchmark
Jérémie Dentan
Alexi Canesse
Davide Buscaldi
A. Shabou
Sonia Vanier
HILM
114
0
0
21 Nov 2025
Liars' Bench: Evaluating Lie Detectors for Language Models
Kieron Kretschmar
Walter Laurito
Sharan Maiya
Samuel Marks
HILM
81
1
0
20 Nov 2025
Thinking, Faithful and Stable: Mitigating Hallucinations in LLMs
Chelsea Zou
Yiheng Yao
Basant Khalil
HILM
124
0
0
19 Nov 2025
Enhancing Reliability across Short and Long-Form QA via Reinforcement Learning
Yudong Wang, Zhe Yang, Wenhan Ma, Zhifang Sui, Liang Zhao
HILM, OffRL, LRM · 100 · 0 · 0 · 19 Nov 2025

AA-Omniscience: Evaluating Cross-Domain Knowledge Reliability in Large Language Models
Declan Jackson, William Keating, George Cameron, Micah Hill-Smith
HILM, RALM, ELM · 432 · 0 · 0 · 17 Nov 2025

Quantifying consistency and accuracy of Latent Dirichlet Allocation
Saranzaya Magsarjav, Melissa Humphries, J. Tuke, Lewis Mitchell
HILM · 116 · 0 · 0 · 17 Nov 2025

Consistency Is the Key: Detecting Hallucinations in LLM Generated Text By Checking Inconsistencies About Key Facts
Raavi Gupta, Pranav Hari Panicker, S. Bhatia, Ganesh Ramakrishnan
HILM · 72 · 0 · 0 · 15 Nov 2025

Honesty over Accuracy: Trustworthy Language Models through Reinforced Hesitation
Mohamad Amin Mohamadi, Tianhao Wang, Zhiyuan Li
HILM · 270 · 0 · 0 · 14 Nov 2025

Can LLMs Detect Their Own Hallucinations?
Sora Kadotani, Kosuke Nishida, Kyosuke Nishida
HILM, LRM · 189 · 0 · 0 · 14 Nov 2025
Faithful Summarization of Consumer Health Queries: A Cross-Lingual Framework with LLMs
Ajwad Abrar, Nafisa Tabassum Oeshy, Prianka Maheru, Farzana Tabassum, T. Chowdhury
HILM · 240 · 0 · 0 · 13 Nov 2025

The Map of Misbelief: Tracing Intrinsic and Extrinsic Hallucinations Through Attention Patterns
Elyes Hajji, Aymen Bouguerra, Fabio Arnez
HILM · 36 · 0 · 0 · 13 Nov 2025

Hallucinate or Memorize? The Two Sides of Probabilistic Learning in Large Language Models
Journal of Imaging (JI), 2025
Junichiro Niimi
HILM · 128 · 0 · 0 · 12 Nov 2025

Taming Object Hallucinations with Verified Atomic Confidence Estimation
Jiarui Liu, Weihao Xuan, Zhijing Jin, Mona T. Diab
MLLM, HILM · 152 · 0 · 0 · 12 Nov 2025

HalluClean: A Unified Framework to Combat Hallucinations in LLMs
Yaxin Zhao, Yu Zhang
HILM · 76 · 0 · 0 · 12 Nov 2025

Chain of Summaries: Summarization Through Iterative Questioning
William Brach, Lukas Galke Poech
HILM · 140 · 0 · 0 · 12 Nov 2025
When Bias Pretends to Be Truth: How Spurious Correlations Undermine Hallucination Detection in LLMs
Shaowen Wang, Yiqi Dong, Ruinian Chang, Tansheng Zhu, Yuebo Sun, Kaifeng Lyu, Jian Li
HILM · 177 · 0 · 0 · 10 Nov 2025

Stress Testing Factual Consistency Metrics for Long-Document Summarization
Zain Muhammad Mujahid, Dustin Wright, Isabelle Augenstein
HILM · 101 · 0 · 0 · 10 Nov 2025

When Evidence Contradicts: Toward Safer Retrieval-Augmented Generation in Healthcare
Saeedeh Javadi, Sara Mirabi, Manan Gangar, Bahadorreza Ofoghi
RALM, HILM · 199 · 0 · 0 · 10 Nov 2025

NOAH: Benchmarking Narrative Prior driven Hallucination and Omission in Video Large Language Models
Kyuho Lee, Euntae Kim, Jinwoo Choi, Buru Chang
HILM · 75 · 0 · 0 · 09 Nov 2025

Injecting Falsehoods: Adversarial Man-in-the-Middle Attacks Undermining Factual Recall in LLMs
Alina Fastowski, Bardh Prenkaj, Yuxiao Li, Gjergji Kasneci
AAML, KELM, HILM · 215 · 0 · 0 · 08 Nov 2025
Stemming Hallucination in Language Models Using a Licensing Oracle
Simeon Emanuilov, Richard Ackermann
HILM · 111 · 0 · 0 · 08 Nov 2025

REFLEX: Reference-Free Evaluation of Log Summarization via Large Language Model Judgment
Priyanka Mudgal
HILM · 204 · 0 · 0 · 06 Nov 2025

HaluMem: Evaluating Hallucinations in Memory Systems of Agents
Ding Chen, Simin Niu, Kehang Li, Peng Liu, Xiangping Zheng, Bo Tang, X. Li, Feiyu Xiong, Zhiyu Li
LLMAG, HILM, VLM · 294 · 0 · 0 · 05 Nov 2025