ResearchTrend.AI
arXiv: 2306.06085
Trapping LLM Hallucinations Using Tagged Context Prompts


9 June 2023
Philip G. Feldman, James R. Foulds, Shimei Pan
Topics: HILM, LLMAG

Papers citing "Trapping LLM Hallucinations Using Tagged Context Prompts"

4 / 4 papers shown
Focus, Merge, Rank: Improved Question Answering Based on Semi-structured Knowledge Bases
Derian Boer, Stephen Roth, Stefan Kramer
KELM · 14 May 2025
Semantic Entropy Probes: Robust and Cheap Hallucination Detection in LLMs
Jannik Kossen, Jiatong Han, Muhammed Razzak, Lisa Schut, Shreshth A. Malik, Yarin Gal
HILM · 22 Jun 2024
Large Language Models are Efficient Learners of Noise-Robust Speech Recognition
Yuchen Hu, Chen Chen, Chao-Han Huck Yang, Ruizhe Li, Chao Zhang, Pin-Yu Chen, Eng Siong Chng
19 Jan 2024
SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models
Potsawee Manakul, Adian Liusie, Mark Gales
HILM, LRM · 15 Mar 2023