Hallucination Detection in LLMs: Fast and Memory-Efficient Finetuned Models

4 September 2024
Gabriel Y. Arteaga, Thomas B. Schön, Nicolas Pielawski
arXiv:2409.02976 · PDF · HTML

Papers citing "Hallucination Detection in LLMs: Fast and Memory-Efficient Finetuned Models"

3 of 3 citing papers shown.

| Title | Authors | Topics | Metrics | Date |
| --- | --- | --- | --- | --- |
| Uncertainty Distillation: Teaching Language Models to Express Semantic Confidence | Sophia Hager, David Mueller, Kevin Duh, Nicholas Andrews | | 76 · 0 · 0 | 18 Mar 2025 |
| Uncertainty quantification in fine-tuned LLMs using LoRA ensembles | Oleksandr Balabanov, Hampus Linander | UQCV | 60 · 15 · 0 | 19 Feb 2024 |
| Simple and Scalable Predictive Uncertainty Estimation using Deep Ensembles | Balaji Lakshminarayanan, Alexander Pritzel, Charles Blundell | UQCV, BDL | 295 · 5,726 · 0 | 05 Dec 2016 |