Inference-Time Intervention: Eliciting Truthful Answers from a Language Model
Kenneth Li, Oam Patel, Fernanda Viégas, Hanspeter Pfister, Martin Wattenberg
arXiv:2306.03341 · 6 June 2023 · KELM · HILM

Papers citing "Inference-Time Intervention: Eliciting Truthful Answers from a Language Model"

50 / 411 papers shown

The First to Know: How Token Distributions Reveal Hidden Knowledge in Large Vision-Language Models?
Qinyu Zhao, Ming Xu, Kartik Gupta, Akshay Asthana, Liang Zheng, Stephen Gould
14 Mar 2024

Knowledge Conflicts for LLMs: A Survey
Rongwu Xu, Zehan Qi, Zhijiang Guo, Cunxiang Wang, Hongru Wang, Yue Zhang, Wei Xu
13 Mar 2024

pyvene: A Library for Understanding and Improving PyTorch Models via Interventions
Zhengxuan Wu, Atticus Geiger, Aryaman Arora, Jing-ling Huang, Zheng Wang, Noah D. Goodman, Christopher D. Manning, Christopher Potts
12 Mar 2024 · MU

Truth-Aware Context Selection: Mitigating Hallucinations of Large Language Models Being Misled by Untruthful Contexts
Tian Yu, Shaolei Zhang, Yang Feng
12 Mar 2024 · HILM

Extending Activation Steering to Broad Skills and Multiple Behaviours
Teun van der Weij, Massimo Poesio, Nandi Schoots
09 Mar 2024 · LLMSV

Tuning-Free Accountable Intervention for LLM Deployment -- A Metacognitive Approach
Zhen Tan, Jie Peng, Tianlong Chen, Huan Liu
08 Mar 2024

Unfamiliar Finetuning Examples Control How Language Models Hallucinate
Katie Kang, Eric Wallace, Claire Tomlin, Aviral Kumar, Sergey Levine
08 Mar 2024 · HILM · LRM

Defending Against Unforeseen Failure Modes with Latent Adversarial Training
Stephen Casper, Lennart Schulze, Oam Patel, Dylan Hadfield-Menell
08 Mar 2024 · AAML

HaluEval-Wild: Evaluating Hallucinations of Language Models in the Wild
Zhiying Zhu, Yiming Yang, Zhiqing Sun
07 Mar 2024 · HILM · VLM

On the Origins of Linear Representations in Large Language Models
Yibo Jiang, Goutham Rajendran, Pradeep Ravikumar, Bryon Aragam, Victor Veitch
06 Mar 2024

In-Context Sharpness as Alerts: An Inner Representation Perspective for Hallucination Mitigation
Shiqi Chen, Miao Xiong, Junteng Liu, Zhengxuan Wu, Teng Xiao, Siyang Gao, Junxian He
03 Mar 2024 · HILM

AtP*: An efficient and scalable method for localizing LLM behaviour to components
János Kramár, Tom Lieberum, Rohin Shah, Neel Nanda
01 Mar 2024 · KELM

Towards Tracing Trustworthiness Dynamics: Revisiting Pre-training Period of Large Language Models
Chao Qian, Jie Zhang, Wei Yao, Dongrui Liu, Zhen-fei Yin, Yu Qiao, Yong Liu, Jing Shao
29 Feb 2024 · LLMSV · LRM

Whispers that Shake Foundations: Analyzing and Mitigating False Premise Hallucinations in Large Language Models
Hongbang Yuan, Pengfei Cao, Zhuoran Jin, Yubo Chen, Daojian Zeng, Kang Liu, Jun Zhao
29 Feb 2024 · HILM

How do Large Language Models Handle Multilingualism?
Yiran Zhao, Wenxuan Zhang, Guizhen Chen, Kenji Kawaguchi, Lidong Bing
29 Feb 2024 · LRM

Language Models Represent Beliefs of Self and Others
Wentao Zhu, Zhining Zhang, Yizhou Wang
28 Feb 2024 · MILM · LRM

Exploring Multilingual Concepts of Human Value in Large Language Models: Is Value Alignment Consistent, Transferable and Controllable across Languages?
Shaoyang Xu, Weilong Dong, Zishan Guo, Xinwei Wu, Deyi Xiong
28 Feb 2024

Characterizing Truthfulness in Large Language Model Generations with Local Intrinsic Dimension
Fan Yin, Jayanth Srinivasa, Kai-Wei Chang
28 Feb 2024 · HILM

Collaborative decoding of critical tokens for boosting factuality of large language models
Lifeng Jin, Baolin Peng, Linfeng Song, Haitao Mi, Ye Tian, Dong Yu
28 Feb 2024 · HILM

TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space
Shaolei Zhang, Tian Yu, Yang Feng
27 Feb 2024 · HILM · KELM

InstructEdit: Instruction-based Knowledge Editing for Large Language Models
Ningyu Zhang, Bo Tian, Siyuan Cheng, Xiaozhuan Liang, Yi Hu, Kouying Xue, Yanjie Gou, Xi Chen, Huajun Chen
25 Feb 2024 · KELM

Citation-Enhanced Generation for LLM-based Chatbots
Weitao Li, Junkai Li, Weizhi Ma, Yang Liu
25 Feb 2024

Fine-Grained Self-Endorsement Improves Factuality and Reasoning
Ante Wang, Linfeng Song, Baolin Peng, Ye Tian, Lifeng Jin, Haitao Mi, Jinsong Su, Dong Yu
23 Feb 2024 · HILM · LRM

A Language Model's Guide Through Latent Space
Dimitri von Rutte, Sotiris Anagnostidis, Gregor Bachmann, Thomas Hofmann
22 Feb 2024

Understanding and Patching Compositional Reasoning in LLMs
Zhaoyi Li, Gangwei Jiang, Hong Xie, Linqi Song, Defu Lian, Ying Wei
22 Feb 2024 · LRM

Distillation Contrastive Decoding: Improving LLMs Reasoning with Contrastive Decoding and Distillation
Phuc Phan, Hieu Tran, Long Phan
21 Feb 2024

How Easy is It to Fool Your Multimodal LLMs? An Empirical Analysis on Deceptive Prompts
Yusu Qian, Haotian Zhang, Yinfei Yang, Zhe Gan
20 Feb 2024

GenAudit: Fixing Factual Errors in Language Model Outputs with Evidence
Kundan Krishna, S. Ramprasad, Prakhar Gupta, Byron C. Wallace, Zachary Chase Lipton, Jeffrey P. Bigham
19 Feb 2024 · HILM · KELM · SyDa

CausalGym: Benchmarking causal interpretability methods on linguistic tasks
Aryaman Arora, Daniel Jurafsky, Christopher Potts
19 Feb 2024

Towards Uncovering How Large Language Model Works: An Explainability Perspective
Haiyan Zhao, Fan Yang, Bo Shen, Himabindu Lakkaraju, Mengnan Du
16 Feb 2024

Representation Surgery: Theory and Practice of Affine Steering
Shashwat Singh, Shauli Ravfogel, Jonathan Herzig, Roee Aharoni, Ryan Cotterell, Ponnurangam Kumaraguru
15 Feb 2024 · LLMSV

Self-Alignment for Factuality: Mitigating Hallucinations in LLMs via Self-Evaluation
Xiaoying Zhang, Baolin Peng, Ye Tian, Jingyan Zhou, Lifeng Jin, Linfeng Song, Haitao Mi, Helen Meng
14 Feb 2024 · HILM

Learning Interpretable Concepts: Unifying Causal Representation Learning and Foundation Models
Goutham Rajendran, Simon Buchholz, Bryon Aragam, Bernhard Schölkopf, Pradeep Ravikumar
14 Feb 2024 · AI4CE

Into the Unknown: Self-Learning Large Language Models
Teddy Ferdinan, Jan Kocoń, P. Kazienko
14 Feb 2024

InstructGraph: Boosting Large Language Models via Graph-centric Instruction Tuning and Preference Alignment
Jianing Wang, Junda Wu, Yupeng Hou, Yao Liu, Ming Gao, Julian McAuley
13 Feb 2024

Measuring and Controlling Instruction (In)Stability in Language Model Dialogs
Kenneth Li, Tianle Liu, Naomi Bashkansky, David Bau, Fernanda Viégas, Hanspeter Pfister, Martin Wattenberg
13 Feb 2024

EntGPT: Linking Generative Large Language Models with Knowledge Bases
Yifan Ding, Amrit Poudel, Qingkai Zeng, Tim Weninger, Balaji Veeramani, Sanmitra Bhattacharya
09 Feb 2024 · ReLM · KELM · LRM

Understanding the Effects of Iterative Prompting on Truthfulness
Satyapriya Krishna, Chirag Agarwal, Himabindu Lakkaraju
09 Feb 2024 · HILM

Assessing the Brittleness of Safety Alignment via Pruning and Low-Rank Modifications
Boyi Wei, Kaixuan Huang, Yangsibo Huang, Tinghao Xie, Xiangyu Qi, Mengzhou Xia, Prateek Mittal, Mengdi Wang, Peter Henderson
07 Feb 2024 · AAML

Challenges in Mechanistically Interpreting Model Representations
Satvik Golechha, James Dao
06 Feb 2024

INSIDE: LLMs' Internal States Retain the Power of Hallucination Detection
Chao Chen, Kai-Chun Liu, Ze Chen, Yi Gu, Yue-bo Wu, Mingyuan Tao, Zhihang Fu, Jieping Ye
06 Feb 2024 · HILM

Distinguishing the Knowable from the Unknowable with Language Models
Gustaf Ahdritz, Tian Qin, Nikhil Vyas, Boaz Barak, Benjamin L. Edelman
05 Feb 2024

Aligner: Efficient Alignment by Learning to Correct
Jiaming Ji, Boyuan Chen, Hantao Lou, Chongye Guo, Borong Zhang, Xuehai Pan, Juntao Dai, Tianyi Qiu, Yaodong Yang
04 Feb 2024

Vaccine: Perturbation-aware Alignment for Large Language Model
Tiansheng Huang, Sihao Hu, Ling Liu
02 Feb 2024

Tradeoffs Between Alignment and Helpfulness in Language Models with Representation Engineering
Yotam Wolf, Noam Wies, Dorin Shteyman, Binyamin Rothberg, Yoav Levine, Amnon Shashua
29 Jan 2024 · LLMSV

Learning to Trust Your Feelings: Leveraging Self-awareness in LLMs for Hallucination Mitigation
Yuxin Liang, Zhuoyang Song, Hao Wang, Jiaxing Zhang
27 Jan 2024 · HILM

Can AI Assistants Know What They Don't Know?
Qinyuan Cheng, Tianxiang Sun, Xiangyang Liu, Wenwei Zhang, Zhangyue Yin, Shimin Li, Linyang Li, Zhengfu He, Kai Chen, Xipeng Qiu
24 Jan 2024

From Understanding to Utilization: A Survey on Explainability for Large Language Models
Haoyan Luo, Lucia Specia
23 Jan 2024

GRATH: Gradual Self-Truthifying for Large Language Models
Weixin Chen, D. Song, Bo-wen Li
22 Jan 2024 · HILM · SyDa

InferAligner: Inference-Time Alignment for Harmlessness through Cross-Model Guidance
Pengyu Wang, Dong Zhang, Linyang Li, Chenkun Tan, Xinghao Wang, Ke Ren, Botian Jiang, Xipeng Qiu
20 Jan 2024 · LLMSV