ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

DeCoRe: Decoding by Contrasting Retrieval Heads to Mitigate Hallucinations

24 October 2024
Aryo Pradipta Gema, Chen Jin, Ahmed Abdulaal, Tom Diethe, Philip Teare, Beatrice Alex, Pasquale Minervini, Amrutha Saseendran

Papers citing "DeCoRe: Decoding by Contrasting Retrieval Heads to Mitigate Hallucinations"

3 papers
Steering off Course: Reliability Challenges in Steering Language Models
Patrick Queiroz Da Silva, Hari Sethuraman, Dheeraj Rajagopal, Hannaneh Hajishirzi, Sachin Kumar
06 Apr 2025
An Analysis of Decoding Methods for LLM-based Agents for Faithful Multi-Hop Question Answering
Alexander Murphy, Mohd Sanad Zaki Rizvi, Aden Haussmann, Ping Nie, Guifu Liu, Aryo Pradipta Gema, Pasquale Minervini
30 Mar 2025
HICD: Hallucination-Inducing via Attention Dispersion for Contrastive Decoding to Mitigate Hallucinations in Large Language Models
Xinyan Jiang, Hang Ye, Yongxin Zhu, Xiaoying Zheng, Zikang Chen, Jun Gong
17 Mar 2025