Unfamiliar Finetuning Examples Control How Language Models Hallucinate

arXiv:2403.05612 · 8 March 2024

Katie Kang, Eric Wallace, Claire Tomlin, Aviral Kumar, Sergey Levine

HILM · LRM
Papers citing "Unfamiliar Finetuning Examples Control How Language Models Hallucinate"

45 / 45 papers shown
ToolACE-DEV: Self-Improving Tool Learning via Decomposition and EVolution
X. Huang, Weiwen Liu, Xingshan Zeng, Y. Huang, Xinlong Hao, ..., Yirong Zeng, Chuhan Wu, Yishuo Wang, R. Tang, Defu Lian
KELM · 36 · 0 · 0 · 12 May 2025

Memorization and Knowledge Injection in Gated LLMs
Xu Pan, Ely Hahami, Zechen Zhang, H. Sompolinsky
KELM, CLL, RALM · 106 · 1 · 0 · 30 Apr 2025

Think, Prune, Train, Improve: Scaling Reasoning without Scaling Models
Caia Costello, Simon Guo, Anna Goldie, Azalia Mirhoseini
ReLM, SyDa, LRM · 113 · 1 · 0 · 25 Apr 2025

ToolACE-R: Tool Learning with Adaptive Self-Refinement
Xingshan Zeng, Wei Liu, X. Huang, Zezhong Wang, Lingzhi Wang, ..., Yishuo Wang, Lifeng Shang, Xin Jiang, Ruiming Tang, Qiang Liu
CLL · 57 · 0 · 0 · 02 Apr 2025

Winning Big with Small Models: Knowledge Distillation vs. Self-Training for Reducing Hallucination in QA Agents
A. Lewis, Michael White, Jing Liu, T. Koike-Akino, K. Parsons, Yanjie Wang
HILM · 72 · 0 · 0 · 26 Feb 2025
Self-Memory Alignment: Mitigating Factual Hallucinations with Generalized Improvement
Siyuan Zhang, Y. Zhang, Yinpeng Dong, Hang Su
HILM, KELM · 221 · 0 · 0 · 26 Feb 2025

Navigating the Helpfulness-Truthfulness Trade-Off with Uncertainty-Aware Instruction Fine-Tuning
Tianyi Wu, Jingwei Ni, Bryan Hooi, Jiaheng Zhang, Elliott Ash, See-Kiong Ng, Mrinmaya Sachan, Markus Leippold
59 · 0 · 0 · 17 Feb 2025

Hallucination, Monofacts, and Miscalibration: An Empirical Investigation
Miranda Muqing Miao, Michael Kearns
67 · 0 · 0 · 11 Feb 2025

Think or Remember? Detecting and Directing LLMs Towards Memorization or Generalization
Yi-Fu Fu, Yu-Chieh Tu, Tzu-Ling Cheng, Cheng-Yu Lin, Yi-Ting Yang, Heng-Yi Liu, Keng-Te Liao, Da-Cheng Juan, Shou-de Lin
49 · 0 · 0 · 24 Dec 2024

NILE: Internal Consistency Alignment in Large Language Models
Minda Hu, Qiyuan Zhang, Yufei Wang, Bowei He, Hongru Wang, Jingyan Zhou, Liangyou Li, Yasheng Wang, Chen Ma, Irwin King
91 · 0 · 0 · 21 Dec 2024

Quantized Delta Weight Is Safety Keeper
Yule Liu, Zhen Sun, Xinlei He, Xinyi Huang
96 · 2 · 0 · 29 Nov 2024
Do Large Language Models Perform Latent Multi-Hop Reasoning without Exploiting Shortcuts?
Sohee Yang, Nora Kassner, E. Gribovskaya, Sebastian Riedel, Mor Geva
KELM, LRM, ReLM · 78 · 5 · 0 · 25 Nov 2024

Continual Memorization of Factoids in Language Models
Howard Chen, Jiayi Geng, Adithya Bhaskar, Dan Friedman, Danqi Chen
KELM · 56 · 0 · 0 · 11 Nov 2024

Exploring Knowledge Boundaries in Large Language Models for Retrieval Judgment
Zhen Zhang, Xinyu Wang, Yong Jiang, Zhuo Chen, Feiteng Mu, Mengting Hu, Pengjun Xie, Fei Huang
KELM · 59 · 2 · 0 · 09 Nov 2024

Gradient Localization Improves Lifelong Pretraining of Language Models
Jared Fernandez, Yonatan Bisk, Emma Strubell
KELM · 39 · 1 · 0 · 07 Nov 2024

Fictitious Synthetic Data Can Improve LLM Factuality via Prerequisite Learning
Yujian Liu, Shiyu Chang, Tommi Jaakkola, Yang Zhang
28 · 0 · 0 · 25 Oct 2024

Improving Model Factuality with Fine-grained Critique-based Evaluator
Yiqing Xie, Wenxuan Zhou, Pradyot Prakash, Di Jin, Yuning Mao, ..., Sinong Wang, Han Fang, Carolyn Rose, Daniel Fried, Hejia Zhang
HILM · 33 · 6 · 0 · 24 Oct 2024
Who's Who: Large Language Models Meet Knowledge Conflicts in Practice
Quang Hieu Pham, Hoang Ngo, Anh Tuan Luu, Dat Quoc Nguyen
RALM, HILM · 27 · 4 · 0 · 21 Oct 2024

LoGU: Long-form Generation with Uncertainty Expressions
Ruihan Yang, Caiqi Zhang, Zhisong Zhang, Xinting Huang, Sen Yang, Nigel Collier, Dong Yu, Deqing Yang
HILM · 32 · 4 · 0 · 18 Oct 2024

3DS: Decomposed Difficulty Data Selection's Case Study on LLM Medical Domain Adaptation
Hongxin Ding, Yue Fang, Runchuan Zhu, Xinke Jiang, Jinyang Zhang, Yongxin Xu, Xu Chu, Junfeng Zhao, Yasha Wang
33 · 0 · 0 · 13 Oct 2024

Automatic Curriculum Expert Iteration for Reliable LLM Reasoning
Zirui Zhao, Hanze Dong, Amrita Saha, Caiming Xiong, Doyen Sahoo
LRM · 35 · 3 · 0 · 10 Oct 2024

Utilize the Flow before Stepping into the Same River Twice: Certainty Represented Knowledge Flow for Refusal-Aware Instruction Tuning
Runchuan Zhu, Zhipeng Ma, Jiang Wu, Junyuan Gao, Jiaqi Wang, Dahua Lin, Conghui He
24 · 2 · 0 · 09 Oct 2024

Gradual Learning: Optimizing Fine-Tuning with Partially Mastered Knowledge in Large Language Models
Bozhou Li, Hao Liang, Yang Li, Fangcheng Fu, Hongzhi Yin, Conghui He, Wentao Zhang
KELM, CLL · 48 · 0 · 0 · 08 Oct 2024
FactAlign: Long-form Factuality Alignment of Large Language Models
Chao-Wei Huang, Yun-Nung Chen
HILM · 30 · 2 · 0 · 02 Oct 2024

A Survey on the Honesty of Large Language Models
Siheng Li, Cheng Yang, Taiqiang Wu, Chufan Shi, Yuji Zhang, ..., Jie Zhou, Yujiu Yang, Ngai Wong, Xixin Wu, Wai Lam
HILM · 35 · 5 · 0 · 27 Sep 2024

Personalizing Reinforcement Learning from Human Feedback with Variational Preference Learning
S. Poddar, Yanming Wan, Hamish Ivison, Abhishek Gupta, Natasha Jaques
40 · 35 · 0 · 19 Aug 2024

Trust or Escalate: LLM Judges with Provable Guarantees for Human Agreement
Jaehun Jung, Faeze Brahman, Yejin Choi
ALM · 44 · 12 · 0 · 25 Jul 2024

Knowledge Mechanisms in Large Language Models: A Survey and Perspective
Meng Wang, Yunzhi Yao, Ziwen Xu, Shuofei Qiao, Shumin Deng, ..., Yong-jia Jiang, Pengjun Xie, Fei Huang, Huajun Chen, Ningyu Zhang
55 · 28 · 0 · 22 Jul 2024

From Loops to Oops: Fallback Behaviors of Language Models Under Uncertainty
Maor Ivgi, Ori Yoran, Jonathan Berant, Mor Geva
HILM · 66 · 8 · 0 · 08 Jul 2024

Emergence of Hidden Capabilities: Exploring Learning Dynamics in Concept Space
Core Francisco Park, Maya Okawa, Andrew Lee, Ekdeep Singh Lubana, Hidenori Tanaka
62 · 7 · 0 · 27 Jun 2024
Understanding Finetuning for Factual Knowledge Extraction
Gaurav R. Ghosal, Tatsunori Hashimoto, Aditi Raghunathan
44 · 12 · 0 · 20 Jun 2024

Current state of LLM Risks and AI Guardrails
Suriya Ganesh Ayyamperumal, Limin Ge
59 · 22 · 0 · 16 Jun 2024

Teaching Large Language Models to Express Knowledge Boundary from Their Own Signals
Lida Chen, Zujie Liang, Xintao Wang, Jiaqing Liang, Yanghua Xiao, Feng Wei, Jinglei Chen, Zhenghong Hao, Bing Han, Wei Wang
55 · 10 · 0 · 16 Jun 2024

Kernel Language Entropy: Fine-grained Uncertainty Quantification for LLMs from Semantic Similarities
Alexander Nikitin, Jannik Kossen, Yarin Gal, Pekka Marttinen
UQCV · 53 · 25 · 0 · 30 May 2024

Can Large Language Models Faithfully Express Their Intrinsic Uncertainty in Words?
G. Yona, Roee Aharoni, Mor Geva
HILM · 49 · 17 · 0 · 27 May 2024

OLAPH: Improving Factuality in Biomedical Long-form Question Answering
Minbyul Jeong, Hyeon Hwang, Chanwoong Yoon, Taewhoo Lee, Jaewoo Kang
MedIm, HILM, LM&MA · 46 · 12 · 0 · 21 May 2024

Does Fine-Tuning LLMs on New Knowledge Encourage Hallucinations?
Zorik Gekhman, G. Yona, Roee Aharoni, Matan Eyal, Amir Feder, Roi Reichart, Jonathan Herzig
52 · 104 · 0 · 09 May 2024

FLAME: Factuality-Aware Alignment for Large Language Models
Sheng-Chieh Lin, Luyu Gao, Barlas Oğuz, Wenhan Xiong, Jimmy Lin, Wen-tau Yih, Xilun Chen
HILM · 41 · 16 · 0 · 02 May 2024
Cognitive Dissonance: Why Do Language Model Outputs Disagree with Internal Representations of Truthfulness?
Kevin Liu, Stephen Casper, Dylan Hadfield-Menell, Jacob Andreas
HILM · 64 · 36 · 0 · 27 Nov 2023

The Internal State of an LLM Knows When It's Lying
A. Azaria, Tom Michael Mitchell
HILM · 218 · 301 · 0 · 26 Apr 2023

Sparks of Artificial General Intelligence: Early experiments with GPT-4
Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, J. Gehrke, Eric Horvitz, ..., Scott M. Lundberg, Harsha Nori, Hamid Palangi, Marco Tulio Ribeiro, Yi Zhang
ELM, AI4MH, AI4CE, ALM · 328 · 2,232 · 0 · 22 Mar 2023

SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models
Potsawee Manakul, Adian Liusie, Mark J. F. Gales
HILM, LRM · 152 · 396 · 0 · 15 Mar 2023

Training language models to follow instructions with human feedback
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, ..., Amanda Askell, Peter Welinder, Paul Christiano, Jan Leike, Ryan J. Lowe
OSLM, ALM · 351 · 12,003 · 0 · 04 Mar 2022

The Pile: An 800GB Dataset of Diverse Text for Language Modeling
Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, ..., Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, Connor Leahy
AIMat · 282 · 1,996 · 0 · 31 Dec 2020

Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems
Sergey Levine, Aviral Kumar, George Tucker, Justin Fu
OffRL, GP · 340 · 1,960 · 0 · 04 May 2020