The Troubling Emergence of Hallucination in Large Language Models -- An Extensive Definition, Quantification, and Prescriptive Remediations

8 October 2023
Vipula Rawte
Swagata Chakraborty
Agnibh Pathak
Anubhav Sarkar
S.M. Towhidul Islam Tonmoy
Aman Chadha
Mikel Artetxe
Punit Daniel Simig
    HILM
ArXiv · PDF · HTML

Papers citing "The Troubling Emergence of Hallucination in Large Language Models -- An Extensive Definition, Quantification, and Prescriptive Remediations"

27 / 77 papers shown
"Sorry, Come Again?" Prompting -- Enhancing Comprehension and
  Diminishing Hallucination with [PAUSE]-injected Optimal Paraphrasing
"Sorry, Come Again?" Prompting -- Enhancing Comprehension and Diminishing Hallucination with [PAUSE]-injected Optimal Paraphrasing
Vipula Rawte
Islam Tonmoy
M. M. Zaman
Prachi Priya
Marcin Kardas
Alan Schelten
Ruan Silva
LRM
33
1
0
27 Mar 2024
ILLUMINER: Instruction-tuned Large Language Models as Few-shot Intent Classifier and Slot Filler
Paramita Mirza
Viju Sudhi
S. Sahoo
Sinchana Ramakanth Bhat
25
4
0
26 Mar 2024
Visual Hallucination: Definition, Quantification, and Prescriptive Remediations
Anku Rani
Vipula Rawte
Harshad Sharma
Neeraj Anand
Krishnav Rajbangshi
Amit P. Sheth
Amitava Das
MLLM
70
6
0
26 Mar 2024
LLM-based agents for automating the enhancement of user story quality: An early report
Zheying Zhang
Maruf Rayhan
Tomas Herda
Manuel Goisauf
Pekka Abrahamsson
LLMAG
22
11
0
14 Mar 2024
Collaborative decoding of critical tokens for boosting factuality of large language models
Lifeng Jin
Baolin Peng
Linfeng Song
Haitao Mi
Ye Tian
Dong Yu
HILM
27
6
0
28 Feb 2024
HypoTermQA: Hypothetical Terms Dataset for Benchmarking Hallucination Tendency of LLMs
Cem Uluoglakci
T. Taşkaya-Temizel
HILM
35
2
0
25 Feb 2024
MoRAL: MoE Augmented LoRA for LLMs' Lifelong Learning
Shu Yang
Muhammad Asif Ali
Cheng-Long Wang
Lijie Hu
Di Wang
CLL
MoE
37
38
0
17 Feb 2024
Do LLMs Know about Hallucination? An Empirical Investigation of LLM's Hidden States
Hanyu Duan
Yi Yang
Kar Yan Tam
HILM
27
28
0
15 Feb 2024
Factuality of Large Language Models in the Year 2024
Yuxia Wang
Minghan Wang
Muhammad Arslan Manzoor
Fei Liu
Georgi Georgiev
Rocktim Jyoti Das
Preslav Nakov
LRM
HILM
38
22
0
04 Feb 2024
Hallucination is Inevitable: An Innate Limitation of Large Language Models
Ziwei Xu
Sanjay Jain
Mohan S. Kankanhalli
HILM
LRM
71
218
0
22 Jan 2024
Large Language Models are Null-Shot Learners
Pittawat Taveekitworachai
Febri Abdullah
R. Thawonmas
LRM
26
2
0
16 Jan 2024
Prompting open-source and commercial language models for grammatical error correction of English learner text
Christopher Davis
Andrew Caines
Oistein Andersen
Shiva Taslimipoor
H. Yannakoudakis
Zheng Yuan
Christopher Bryant
Marek Rei
P. Buttery
35
13
0
15 Jan 2024
Fine-grained Hallucination Detection and Editing for Language Models
Abhika Mishra
Akari Asai
Vidhisha Balachandran
Yizhong Wang
Graham Neubig
Yulia Tsvetkov
Hannaneh Hajishirzi
HILM
37
79
0
12 Jan 2024
A Comprehensive Survey of Hallucination Mitigation Techniques in Large Language Models
S.M. Towhidul Islam Tonmoy
S. M. M. Zaman
Vinija Jain
Anku Rani
Vipula Rawte
Aman Chadha
Amitava Das
HILM
43
184
0
02 Jan 2024
Evaluating and Enhancing Large Language Models for Conversational Reasoning on Knowledge Graphs
Yuxuan Huang
Lida Shi
Anqi Liu
Hao Xu
LLMAG
ELM
KELM
LRM
16
2
0
18 Dec 2023
Knowledge Trees: Gradient Boosting Decision Trees on Knowledge Neurons as Probing Classifier
Sergey A. Saltykov
30
0
0
17 Dec 2023
Context Matters: Data-Efficient Augmentation of Large Language Models for Scientific Applications
Xiang Li
Haoran Tang
Siyu Chen
Ziwei Wang
Anurag Maravi
Marcin Abram
21
0
0
12 Dec 2023
Steering Llama 2 via Contrastive Activation Addition
Nina Rimsky
Nick Gabrieli
Julian Schulz
Meg Tong
Evan Hubinger
Alexander Matt Turner
LLMSV
14
163
0
09 Dec 2023
Building Trustworthy NeuroSymbolic AI Systems: Consistency, Reliability, Explainability, and Safety
Manas Gaur
Amit P. Sheth
26
17
0
05 Dec 2023
FFT: Towards Harmlessness Evaluation and Analysis for LLMs with Factuality, Fairness, Toxicity
Shiyao Cui
Zhenyu Zhang
Yilong Chen
Wenyuan Zhang
Tianyun Liu
Siqi Wang
Tingwen Liu
41
13
0
30 Nov 2023
UHGEval: Benchmarking the Hallucination of Chinese Large Language Models via Unconstrained Generation
Xun Liang
Shichao Song
Simin Niu
Zhiyu Li
...
Zhaohui Wy
Dawei He
Peng Cheng
Zhonghao Wang
Haiying Deng
HILM
34
19
0
26 Nov 2023
Can Knowledge Graphs Reduce Hallucinations in LLMs? : A Survey
Garima Agrawal
Tharindu Kumarage
Zeyad Alghami
Huanmin Liu
37
81
0
14 Nov 2023
Is Your Code Generated by ChatGPT Really Correct? Rigorous Evaluation of Large Language Models for Code Generation
Jiawei Liu
Chun Xia
Yuyao Wang
Lingming Zhang
ELM
ALM
189
799
0
02 May 2023
Hallucinated but Factual! Inspecting the Factuality of Hallucinations in Abstractive Summarization
Mengyao Cao
Yue Dong
Jackie C.K. Cheung
HILM
178
146
0
30 Aug 2021
Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation
Ofir Press
Noah A. Smith
M. Lewis
253
698
0
27 Aug 2021
Zero-Shot Text-to-Image Generation
Aditya A. Ramesh
Mikhail Pavlov
Gabriel Goh
Scott Gray
Chelsea Voss
Alec Radford
Mark Chen
Ilya Sutskever
VLM
255
4,796
0
24 Feb 2021
Fine-Tuning Language Models from Human Preferences
Daniel M. Ziegler
Nisan Stiennon
Jeff Wu
Tom B. Brown
Alec Radford
Dario Amodei
Paul Christiano
G. Irving
ALM
301
1,610
0
18 Sep 2019