
The Factual Inconsistency Problem in Abstractive Text Summarization: A Survey

30 April 2021 · arXiv:2104.14839
Yichong Huang
Xiachong Feng
Xiaocheng Feng
Bing Qin
HILM

Papers citing "The Factual Inconsistency Problem in Abstractive Text Summarization: A Survey"

50 / 72 papers shown
HausaNLP at SemEval-2025 Task 3: Towards a Fine-Grained Model-Aware Hallucination Detection
Maryam Bala
Amina Imam Abubakar
Abdulhamid Abubakar
Abdulkadir Shehu Bichi
Hafsa Kabir Ahmad
Sani Abdullahi Sani
Idris Abdulmumin
Shamsuddeen Hassan Muhamad
I. Ahmad
HILM
44
1
0
25 Mar 2025
Treble Counterfactual VLMs: A Causal Approach to Hallucination
Li Li
Jiashu Qu
Yuxiao Zhou
Yuehan Qin
Tiankai Yang
Yue Zhao
95
2
0
08 Mar 2025
Valuable Hallucinations: Realizable Non-realistic Propositions
Qiucheng Chen
Bo Wang
LRM
59
0
0
16 Feb 2025
Can LVLMs and Automatic Metrics Capture Underlying Preferences of Blind and Low-Vision Individuals for Navigational Aid?
Na Min An
Eunki Kim
Wan Ju Kang
Sangryul Kim
Hyunjung Shim
James Thorne
41
0
0
15 Feb 2025
Summarization of Opinionated Political Documents with Varied Perspectives
Nicholas Deas
Kathleen McKeown
21
0
0
06 Nov 2024
Evaluating the Correctness of Inference Patterns Used by LLMs for Judgment
Lu Chen
Yuxuan Huang
Yixing Li
Dongrui Liu
Qihan Ren
Shuai Zhao
Kun Kuang
Zilong Zheng
Quanshi Zhang
31
1
0
06 Oct 2024
THaMES: An End-to-End Tool for Hallucination Mitigation and Evaluation in Large Language Models
Mengfei Liang
Archish Arun
Zekun Wu
Cristian Muñoz
Jonathan Lutch
Emre Kazim
Adriano Soares Koshiyama
Philip C. Treleaven
HILM
32
0
0
17 Sep 2024
GLARE: Guided LexRank for Advanced Retrieval in Legal Analysis
Fabio Gregório
Rafaela Castro
Kele Belloze
Rui Pedro Lopes
Eduardo Bezerra
AILaw
ELM
19
0
0
10 Sep 2024
CodeMirage: Hallucinations in Code Generated by Large Language Models
Vibhor Agarwal
Yulong Pei
Salwa Alamir
Xiaomo Liu
40
4
0
14 Aug 2024
Developing a Reliable, Fast, General-Purpose Hallucination Detection and Mitigation Service
Song Wang
Xun Wang
Jie Mei
Yujia Xie
Sean Muarray
Zhang Li
Lingfeng Wu
Sihan Chen
Wayne Xiong
HILM
61
0
0
22 Jul 2024
Hallucination Detection: Robustly Discerning Reliable Answers in Large Language Models
Yuyan Chen
Qiang Fu
Yichen Yuan
Zhihao Wen
Ge Fan
Dayiheng Liu
Dongmei Zhang
Zhixu Li
Yanghua Xiao
HILM
46
69
0
04 Jul 2024
VideoHallucer: Evaluating Intrinsic and Extrinsic Hallucinations in Large Video-Language Models
Yuxuan Wang
Yueqian Wang
Dongyan Zhao
Cihang Xie
Zilong Zheng
MLLM
VLM
52
26
0
24 Jun 2024
Factual Dialogue Summarization via Learning from Large Language Models
Rongxin Zhu
Jey Han Lau
Jianzhong Qi
HILM
52
1
0
20 Jun 2024
Towards Minimal Targeted Updates of Language Models with Targeted Negative Training
Lily H. Zhang
Rajesh Ranganath
Arya Tafvizi
33
1
0
19 Jun 2024
Mitigating Large Language Model Hallucination with Faithful Finetuning
Minda Hu
Bowei He
Yufei Wang
Liangyou Li
Chen-li Ma
Irwin King
HILM
46
7
0
17 Jun 2024
M-QALM: A Benchmark to Assess Clinical Reading Comprehension and Knowledge Recall in Large Language Models via Question Answering
Anand Subramanian
Viktor Schlegel
Abhinav Ramesh Kashyap
Thanh-Tung Nguyen
Vijay Prakash Dwivedi
Stefan Winkler
ELM
LM&MA
AI4MH
31
3
0
06 Jun 2024
Confidence Under the Hood: An Investigation into the Confidence-Probability Alignment in Large Language Models
Abhishek Kumar
Robert D Morabito
Sanzhar Umbet
Jad Kabbara
Ali Emami
53
5
0
25 May 2024
Can We Catch the Elephant? A Survey of the Evolvement of Hallucination Evaluation on Natural Language Generation
Siya Qi
Yulan He
Zheng Yuan
LRM
HILM
43
1
0
18 Apr 2024
KnowHalu: Hallucination Detection via Multi-Form Knowledge Based Factual Checking
Jiawei Zhang
Chejian Xu
Y. Gai
Freddy Lecue
Dawn Song
Bo-wen Li
HILM
29
10
0
03 Apr 2024
Multi-Modal Hallucination Control by Visual Information Grounding
Alessandro Favero
L. Zancato
Matthew Trager
Siddharth Choudhary
Pramuditha Perera
Alessandro Achille
Ashwin Swaminathan
Stefano Soatto
MLLM
87
62
0
20 Mar 2024
German also Hallucinates! Inconsistency Detection in News Summaries with the Absinth Dataset
Laura Mascarell
Ribin Chalumattu
Annette Rios
HILM
46
0
0
06 Mar 2024
A Data-Centric Approach To Generate Faithful and High Quality Patient Summaries with Large Language Models
S. Hegselmann
Zejiang Shen
Florian Gierse
Monica Agrawal
David Sontag
Xiaoyi Jiang
HILM
VLM
26
6
0
23 Feb 2024
Entity-level Factual Adaptiveness of Fine-tuning based Abstractive Summarization Models
Jongyoon Song
Nohil Park
Bongkyu Hwang
Jaewoong Yun
Seongho Joe
Youngjune Gwon
Sungroh Yoon
KELM
HILM
38
1
0
23 Feb 2024
Strong hallucinations from negation and how to fix them
Nicholas Asher
Swarnadeep Bhar
ReLM
LRM
40
4
0
16 Feb 2024
Hallucination is Inevitable: An Innate Limitation of Large Language Models
Ziwei Xu
Sanjay Jain
Mohan S. Kankanhalli
HILM
LRM
71
212
0
22 Jan 2024
Small Language Model Can Self-correct
Haixia Han
Jiaqing Liang
Jie Shi
Qi He
Yanghua Xiao
LRM
SyDa
ReLM
KELM
40
11
0
14 Jan 2024
AI Hallucinations: A Misnomer Worth Clarifying
Negar Maleki
Balaji Padmanabhan
Kaushik Dutta
28
34
0
09 Jan 2024
Mitigating Fine-Grained Hallucination by Fine-Tuning Large Vision-Language Models with Caption Rewrites
Lei Wang
Jiabang He
Shenshen Li
Ning Liu
Ee-Peng Lim
MLLM
27
39
0
04 Dec 2023
A Survey of the Evolution of Language Model-Based Dialogue Systems
Hongru Wang
Lingzhi Wang
Yiming Du
Liang Chen
Jing Zhou
Yufei Wang
Kam-Fai Wong
LRM
59
20
0
28 Nov 2023
Enhancing Uncertainty-Based Hallucination Detection with Stronger Focus
Tianhang Zhang
Lin Qiu
Qipeng Guo
Cheng Deng
Yue Zhang
Zheng-Wei Zhang
Cheng Zhou
Xinbing Wang
Luoyi Fu
HILM
77
48
0
22 Nov 2023
A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions
Lei Huang
Weijiang Yu
Weitao Ma
Weihong Zhong
Zhangyin Feng
...
Qianglong Chen
Weihua Peng
Xiaocheng Feng
Bing Qin
Ting Liu
LRM
HILM
39
722
0
09 Nov 2023
Evaluating Generative Ad Hoc Information Retrieval
Lukas Gienapp
Harrisen Scells
Niklas Deckers
Janek Bevendorff
Shuai Wang
...
Maik Fröbe
Guido Zuccon
Benno Stein
Matthias Hagen
Martin Potthast
RALM
37
11
0
08 Nov 2023
FaMeSumm: Investigating and Improving Faithfulness of Medical Summarization
Nan Zhang
Yusen Zhang
Wu Guo
P. Mitra
Rui Zhang
HILM
35
4
0
03 Nov 2023
Are Large Language Models Reliable Judges? A Study on the Factuality Evaluation Capabilities of LLMs
Xue-Yong Fu
Md Tahmid Rahman Laskar
Cheng-Hsiung Chen
TN ShashiBhushan
HILM
ELM
68
18
0
01 Nov 2023
Sequence-Level Certainty Reduces Hallucination In Knowledge-Grounded Dialogue Generation
Yixin Wan
Fanyou Wu
Weijie Xu
Srinivasan H. Sengamedu
HILM
24
5
0
28 Oct 2023
LUNA: A Model-Based Universal Analysis Framework for Large Language Models
Da Song
Xuan Xie
Jiayang Song
Derui Zhu
Yuheng Huang
Felix Juefei Xu
Lei Ma
ALM
35
3
0
22 Oct 2023
Factored Verification: Detecting and Reducing Hallucination in Summaries of Academic Papers
Charlie George
Andreas Stuhlmüller
HILM
20
5
0
16 Oct 2023
Metric Ensembles For Hallucination Detection
Grant C. Forbes
Parth Katlana
Zeydy Ortiz
HILM
38
4
0
16 Oct 2023
"Kelly is a Warm Person, Joseph is a Role Model": Gender Biases in LLM-Generated Reference Letters
Yixin Wan
George Pu
Jiao Sun
Aparna Garimella
Kai-Wei Chang
Nanyun Peng
34
160
0
13 Oct 2023
Teaching Language Models to Hallucinate Less with Synthetic Tasks
Erik Jones
Hamid Palangi
Clarisse Simoes
Varun Chandrasekaran
Subhabrata Mukherjee
Arindam Mitra
Ahmed Hassan Awadallah
Ece Kamar
HILM
21
24
0
10 Oct 2023
LongDocFACTScore: Evaluating the Factuality of Long Document Abstractive Summarisation
Jennifer A Bishop
Qianqian Xie
Sophia Ananiadou
HILM
17
9
0
21 Sep 2023
Aligning Large Language Models for Clinical Tasks
Supun Manathunga
Isuru Hettigoda
LM&MA
ELM
AI4MH
30
10
0
06 Sep 2023
Optimizing Factual Accuracy in Text Generation through Dynamic Knowledge Selection
Hongjin Qian
Zhicheng Dou
Jiejun Tan
Haonan Chen
Haoqi Gu
Ruofei Lai
Xinyu Zhang
Zhao Cao
Ji-Rong Wen
29
2
0
30 Aug 2023
Is Stack Overflow Obsolete? An Empirical Study of the Characteristics of ChatGPT Answers to Stack Overflow Questions
Samia Kabir
David N. Udo-Imeh
Bonan Kou
Tianyi Zhang
ELM
24
66
0
04 Aug 2023
Tackling Hallucinations in Neural Chart Summarization
Saad Obaid ul Islam
Iza vSkrjanec
Ondrej Dusek
Vera Demberg
HILM
34
7
0
01 Aug 2023
Generating Benchmarks for Factuality Evaluation of Language Models
Dor Muhlgay
Ori Ram
Inbal Magar
Yoav Levine
Nir Ratner
Yonatan Belinkov
Omri Abend
Kevin Leyton-Brown
Amnon Shashua
Y. Shoham
HILM
25
91
0
13 Jul 2023
A Stitch in Time Saves Nine: Detecting and Mitigating Hallucinations of LLMs by Validating Low-Confidence Generation
Neeraj Varshney
Wenlin Yao
Hongming Zhang
Jianshu Chen
Dong Yu
HILM
42
155
0
08 Jul 2023
Challenges in Domain-Specific Abstractive Summarization and How to Overcome them
Anum Afzal
Juraj Vladika
Daniel Braun
Florian Matthes
HILM
25
10
0
03 Jul 2023
Self-contradictory Hallucinations of Large Language Models: Evaluation, Detection and Mitigation
Niels Mündler
Jingxuan He
Slobodan Jenko
Martin Vechev
HILM
22
108
0
25 May 2023
CRITIC: Large Language Models Can Self-Correct with Tool-Interactive Critiquing
Zhibin Gou
Zhihong Shao
Yeyun Gong
Yelong Shen
Yujiu Yang
Nan Duan
Weizhu Chen
KELM
LRM
36
357
0
19 May 2023