ResearchTrend.AI
On Hallucination and Predictive Uncertainty in Conditional Language Generation
Yijun Xiao, Luu Anh Tuan · HILM · arXiv:2103.15025 · 28 March 2021

Papers citing "On Hallucination and Predictive Uncertainty in Conditional Language Generation" (50 of 126 shown)
Mitigating Hallucination in Abstractive Summarization with Domain-Conditional Mutual Information
Kyubyung Chae, Jaepill Choi, Yohan Jo, Taesup Kim · HILM · 15 Apr 2024

MetaCheckGPT -- A Multi-task Hallucination Detector Using LLM Uncertainty and Meta-models
Rahul Mehta, Andrew Hoblitzell, Jack O'Keefe, Hyeju Jang, Vasudeva Varma · HILM, KELM · 10 Apr 2024

PoLLMgraph: Unraveling Hallucinations in Large Language Models via State Transition Dynamics
Derui Zhu, Dingfan Chen, Qing Li, Zongxiong Chen, Lei Ma, Jens Grossklags, Mario Fritz · HILM · 06 Apr 2024

Multicalibration for Confidence Scoring in LLMs
Gianluca Detommaso, Martín Bertrán, Riccardo Fogliato, Aaron Roth · 06 Apr 2024

GENEVIC: GENetic data Exploration and Visualization via Intelligent interactive Console
Anindita Nath, Savannah Mwesigwa, Yulin Dai, Xiaoqian Jiang · MD Anderson Cancer Center · 04 Apr 2024

Uncertainty in Language Models: Assessment through Rank-Calibration
Xinmeng Huang, Shuo Li, Mengxin Yu, Matteo Sesia, Hamed Hassani, Insup Lee, Osbert Bastani, Yan Sun · 04 Apr 2024
Mechanistic Understanding and Mitigation of Language Model Non-Factual Hallucinations
Lei Yu, Meng Cao, Jackie Chi Kit Cheung, Yue Dong · HILM · 27 Mar 2024

TrustSQL: Benchmarking Text-to-SQL Reliability with Penalty-Based Scoring
Gyubok Lee, Woosog Chay, Seonhee Cho, Edward Choi · LMTD · 23 Mar 2024

SemEval-2024 Shared Task 6: SHROOM, a Shared-task on Hallucinations and Related Observable Overgeneration Mistakes
Timothee Mickus, Elaine Zosa, Raúl Vázquez, Teemu Vahtola, Jörg Tiedemann, Vincent Segonne, Alessandro Raganato, Marianna Apidianaki · HILM, LRM · 12 Mar 2024

Predict the Next Word: Humans exhibit uncertainty in this task and language models _____
Evgenia Ilia, Wilker Aziz · 27 Feb 2024

Enhanced Hallucination Detection in Neural Machine Translation through Simple Detector Aggregation
Anas Himmi, Guillaume Staerman, Marine Picot, Pierre Colombo, Nuno M. Guerreiro · 20 Feb 2024

Multi-Perspective Consistency Enhances Confidence Estimation in Large Language Models
Pei Wang, Yejie Wang, Muxi Diao, Keqing He, Guanting Dong, Weiran Xu · 17 Feb 2024

Uncertainty Quantification for In-Context Learning of Large Language Models
Chen Ling, Xujiang Zhao, Xuchao Zhang, Wei Cheng, Yanchi Liu, ..., Katsushi Matsuda, Jie Ji, Guangji Bai, Liang Zhao, Haifeng Chen · 15 Feb 2024
Benchmarking Large Language Models in Complex Question Answering Attribution using Knowledge Graphs
Nan Hu, Jiaoyan Chen, Yike Wu, Guilin Qi, Sheng Bi, Tongtong Wu, Jeff Z. Pan · HILM · 26 Jan 2024

Towards Uncertainty-Aware Language Agent
Jiuzhou Han, Wray L. Buntine, Ehsan Shareghi · LLMAG, AI4CE · 25 Jan 2024

AI Hallucinations: A Misnomer Worth Clarifying
Negar Maleki, Balaji Padmanabhan, Kaushik Dutta · 09 Jan 2024

RAGTruth: A Hallucination Corpus for Developing Trustworthy Retrieval-Augmented Language Models
Cheng Niu, Yuanhao Wu, Juno Zhu, Siliang Xu, Kashun Shum, Randy Zhong, Juntong Song, Tong Zhang · HILM · 31 Dec 2023

On Diversified Preferences of Large Language Model Alignment
Dun Zeng, Yong Dai, Pengyu Cheng, Longyue Wang, Tianhao Hu, Wanshun Chen, Nan Du, Zenglin Xu · ALM · 12 Dec 2023

Mitigating Fine-Grained Hallucination by Fine-Tuning Large Vision-Language Models with Caption Rewrites
Lei Wang, Jiabang He, Shenshen Li, Ning Liu, Ee-Peng Lim · MLLM · 04 Dec 2023

On the Calibration of Large Language Models and Alignment
Chiwei Zhu, Benfeng Xu, Quan Wang, Yongdong Zhang, Zhendong Mao · 22 Nov 2023
Enhancing Uncertainty-Based Hallucination Detection with Stronger Focus
Tianhang Zhang, Lin Qiu, Qipeng Guo, Cheng Deng, Yue Zhang, Zheng-Wei Zhang, Cheng Zhou, Xinbing Wang, Luoyi Fu · HILM · 22 Nov 2023

Trustworthy Large Models in Vision: A Survey
Ziyan Guo, Li Xu, Jun Liu · MU · 16 Nov 2023

Think While You Write: Hypothesis Verification Promotes Faithful Knowledge-to-Text Generation
Yifu Qiu, Varun R. Embar, Shay B. Cohen, Benjamin Han · 16 Nov 2023

Investigating Hallucinations in Pruned Large Language Models for Abstractive Summarization
G. Chrysostomou, Zhixue Zhao, Miles Williams, Nikolaos Aletras · HILM · 15 Nov 2023

A Survey of Confidence Estimation and Calibration in Large Language Models
Jiahui Geng, Fengyu Cai, Yuxia Wang, Heinz Koeppl, Preslav Nakov, Iryna Gurevych · UQCV · 14 Nov 2023

LM-Polygraph: Uncertainty Estimation for Language Models
Ekaterina Fadeeva, Roman Vashurin, Akim Tsvigun, Artem Vazhentsev, Sergey Petrakov, ..., Elizaveta Goncharova, Alexander Panchenko, Maxim Panov, Timothy Baldwin, Artem Shelmanov · 13 Nov 2023

A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions
Lei Huang, Weijiang Yu, Weitao Ma, Weihong Zhong, Zhangyin Feng, ..., Qianglong Chen, Weihua Peng, Xiaocheng Feng, Bing Qin, Ting Liu · LRM, HILM · 09 Nov 2023
SAC3: Reliable Hallucination Detection in Black-Box Language Models via Semantic-aware Cross-check Consistency
Jiaxin Zhang, Zhuohang Li, Kamalika Das, Bradley Malin, Kumar Sricharan · HILM, LRM · 03 Nov 2023

Sequence-Level Certainty Reduces Hallucination In Knowledge-Grounded Dialogue Generation
Yixin Wan, Fanyou Wu, Weijie Xu, Srinivasan H. Sengamedu · HILM · 28 Oct 2023

Why LLMs Hallucinate, and How to Get (Evidential) Closure: Perceptual, Intensional, and Extensional Learning for Faithful Natural Language Generation
Adam Bouyamourn · 23 Oct 2023

Hallucination Detection for Grounded Instruction Generation
Lingjun Zhao, Khanh Nguyen, Hal Daumé · HILM · 23 Oct 2023

Fidelity-Enriched Contrastive Search: Reconciling the Faithfulness-Diversity Trade-Off in Text Generation
Wei-Lin Chen, Cheng-Kuang Wu, Hsin-Hsi Chen, Chung-Chi Chen · HILM · 23 Oct 2023

LUNA: A Model-Based Universal Analysis Framework for Large Language Models
Da Song, Xuan Xie, Jiayang Song, Derui Zhu, Yuheng Huang, Felix Juefei Xu, Lei Ma · ALM · 22 Oct 2023

Negative Object Presence Evaluation (NOPE) to Measure Object Hallucination in Vision-Language Models
Holy Lovenia, Wenliang Dai, Samuel Cahyawijaya, Ziwei Ji, Pascale Fung · MLLM · 09 Oct 2023

Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, ..., Longyue Wang, A. Luu, Wei Bi, Freda Shi, Shuming Shi · RALM, LRM, HILM · 03 Sep 2023
Studying the impacts of pre-training using ChatGPT-generated text on downstream tasks
Sarthak Anand · 02 Sep 2023

Inducing Causal Structure for Abstractive Text Summarization
Luyao Chen, Ruqing Zhang, Wei Huang, Wei Chen, J. Guo, Xueqi Cheng · CML · 24 Aug 2023

Evaluation of Faithfulness Using the Longest Supported Subsequence
Anirudh Mittal, Timo Schick, Mikel Artetxe, Jane Dwivedi-Yu · ALM · 23 Aug 2023

Neural Conversation Models and How to Rein Them in: A Survey of Failures and Fixes
Fabian Galetzka, Anne Beyer, David Schlangen · AI4CE · 11 Aug 2023

Uncertainty in Natural Language Generation: From Theory to Applications
Joris Baan, Nico Daheim, Evgenia Ilia, Dennis Ulmer, Haau-Sing Li, Raquel Fernández, Barbara Plank, Rico Sennrich, Chrysoula Zerva, Wilker Aziz · UQLM · 28 Jul 2023

Is attention all you need in medical image analysis? A review
G. Papanastasiou, Nikolaos Dikaios, Jiahao Huang, Chengjia Wang, Guang Yang · ViT, MedIm · 24 Jul 2023

Look Before You Leap: An Exploratory Study of Uncertainty Measurement for Large Language Models
Yuheng Huang, Jiayang Song, Zhijie Wang, Shengming Zhao, Huaming Chen, Felix Juefei-Xu, Lei Ma · 16 Jul 2023

Knowledge Graph for NLG in the context of conversational agents
Hussam Ghanem, Massinissa Atmani, C. Cruz · 04 Jul 2023
Can LLMs Express Their Uncertainty? An Empirical Evaluation of Confidence Elicitation in LLMs
Miao Xiong, Zhiyuan Hu, Xinyang Lu, Yifei Li, Jie Fu, Junxian He, Bryan Hooi · 22 Jun 2023

Deceptive AI Ecosystems: The Case of ChatGPT
Xiao Zhan, Yifan Xu, Stefan Sarkadi · SILM · 18 Jun 2023

Can Large Language Models Capture Dissenting Human Voices?
Noah Lee, Na Min An, James Thorne · ALM · 23 May 2023

Knowledge of Knowledge: Exploring Known-Unknowns Uncertainty with Large Language Models
Alfonso Amayuelas, Kyle Wong, Liangming Pan, Wenhu Chen, Luu Anh Tuan · 23 May 2023

Has It All Been Solved? Open NLP Research Questions Not Solved by Large Language Models
Oana Ignat, Zhijing Jin, Artem Abzaliev, Laura Biester, Santiago Castro, ..., Verónica Pérez-Rosas, Siqi Shen, Zekun Wang, Winston Wu, Rada Mihalcea · LRM · 21 May 2023

Pointwise Mutual Information Based Metric and Decoding Strategy for Faithful Generation in Document Grounded Dialogs
Yatin Nandwani, Vineet Kumar, Dinesh Raghu, Sachindra Joshi, Luis A. Lastras · 20 May 2023

CRITIC: Large Language Models Can Self-Correct with Tool-Interactive Critiquing
Zhibin Gou, Zhihong Shao, Yeyun Gong, Yelong Shen, Yujiu Yang, Nan Duan, Weizhu Chen · KELM, LRM · 19 May 2023