ResearchTrend.AI

arXiv:2405.19648 · Cited By
Detecting Hallucinations in Large Language Model Generation: A Token Probability Approach

30 May 2024
Ernesto Quevedo, Jorge Yero, Rachel Koerner, Pablo Rivas, Tomas Cerny
HILM

Papers citing "Detecting Hallucinations in Large Language Model Generation: A Token Probability Approach"

13 / 13 papers shown
Prompt-Guided Internal States for Hallucination Detection of Large Language Models
Fujie Zhang, Peiqi Yu, Biao Yi, Baolei Zhang, Tong Li, Zheli Liu
HILM, LRM
111 · 0 · 0 · 07 Nov 2024

Mini-DALLE3: Interactive Text to Image by Prompting Large Language Models
Zeqiang Lai, Xizhou Zhu, Jifeng Dai, Yu Qiao, Wenhai Wang
MLLM, DiffM
61 · 24 · 0 · 11 Oct 2023

Siren's Song in the AI Ocean: A Survey on Hallucination in Large Language Models
Yue Zhang, Yafu Li, Leyang Cui, Deng Cai, Lemao Liu, ..., Longyue Wang, Anh Tuan Luu, Wei Bi, Freda Shi, Shuming Shi
RALM, LRM, HILM
96 · 558 · 0 · 03 Sep 2023

Do Large Language Models Know What They Don't Know?
Zhangyue Yin, Qiushi Sun, Qipeng Guo, Jiawen Wu, Xipeng Qiu, Xuanjing Huang
ELM, AI4MH
71 · 161 · 0 · 29 May 2023

StructGPT: A General Framework for Large Language Model to Reason over Structured Data
Jinhao Jiang, Kun Zhou, Zican Dong, Keming Ye, Wayne Xin Zhao, Ji-Rong Wen
LRM, LMTD, RALM
88 · 294 · 0 · 16 May 2023

The Internal State of an LLM Knows When It's Lying
A. Azaria, Tom Michael Mitchell
HILM
264 · 340 · 0 · 26 Apr 2023

Diving Deep into Modes of Fact Hallucinations in Dialogue Systems
Souvik Das, Sougata Saha, Rohini Srihari
HILM
36 · 33 · 0 · 11 Jan 2023

A Survey of Controllable Text Generation using Transformer-based Pre-trained Language Models
Hanqing Zhang, Haolin Song, Shaoyu Li, Ming Zhou, Dawei Song
80 · 223 · 0 · 14 Jan 2022

Evaluating Large Language Models Trained on Code
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Pondé, ..., Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, Wojciech Zaremba
ELM, ALM
229 · 5,539 · 0 · 07 Jul 2021

BARTScore: Evaluating Generated Text as Text Generation
Weizhe Yuan, Graham Neubig, Pengfei Liu
101 · 843 · 0 · 22 Jun 2021

Retrieval Augmentation Reduces Hallucination in Conversation
Kurt Shuster, Spencer Poff, Moya Chen, Douwe Kiela, Jason Weston
HILM
95 · 734 · 0 · 15 Apr 2021

Longformer: The Long-Document Transformer
Iz Beltagy, Matthew E. Peters, Arman Cohan
RALM, VLM
168 · 4,071 · 0 · 10 Apr 2020

Pointer Sentinel Mixture Models
Stephen Merity, Caiming Xiong, James Bradbury, R. Socher
RALM
319 · 2,859 · 0 · 26 Sep 2016