ResearchTrend.AI

arXiv:2306.03341
Inference-Time Intervention: Eliciting Truthful Answers from a Language Model

6 June 2023
Kenneth Li
Oam Patel
Fernanda Viégas
Hanspeter Pfister
Martin Wattenberg
    KELM
    HILM
ArXiv · PDF · HTML

Papers citing "Inference-Time Intervention: Eliciting Truthful Answers from a Language Model"

50 / 411 papers shown
Knowledge Mechanisms in Large Language Models: A Survey and Perspective
Meng Wang
Yunzhi Yao
Ziwen Xu
Shuofei Qiao
Shumin Deng
...
Yong-jia Jiang
Pengjun Xie
Fei Huang
Huajun Chen
Ningyu Zhang
55
28
0
22 Jul 2024
MAVEN-Fact: A Large-scale Event Factuality Detection Dataset
Chunyang Li
Hao Peng
Xiaozhi Wang
Y. Qi
Lei Hou
Bin Xu
Juanzi Li
HILM
35
1
0
22 Jul 2024
Relational Composition in Neural Networks: A Survey and Call to Action
Martin Wattenberg
Fernanda Viégas
CoGe
48
9
0
19 Jul 2024
Internal Consistency and Self-Feedback in Large Language Models: A Survey
Xun Liang
Shichao Song
Zifan Zheng
Hanyu Wang
Qingchen Yu
...
Rong-Hua Li
Peng Cheng
Zhonghao Wang
Zhiyu Li
HILM
LRM
70
26
0
19 Jul 2024
Analyzing the Generalization and Reliability of Steering Vectors
Daniel Tan
David Chanin
Aengus Lynch
Dimitrios Kanoulas
Brooks Paige
Adrià Garriga-Alonso
Robert Kirk
LLMSV
84
17
0
17 Jul 2024
The Better Angels of Machine Personality: How Personality Relates to LLM Safety
Jie Zhang
Dongrui Liu
Chao Qian
Ziyue Gan
Yong-jin Liu
Yu Qiao
Jing Shao
LLMAG
PILM
53
12
0
17 Jul 2024
Lookback Lens: Detecting and Mitigating Contextual Hallucinations in Large Language Models Using Only Attention Maps
Yung-Sung Chuang
Linlu Qiu
Cheng-Yu Hsieh
Ranjay Krishna
Yoon Kim
James R. Glass
HILM
18
35
0
09 Jul 2024
A Factuality and Diversity Reconciled Decoding Method for Knowledge-Grounded Dialogue Generation
Chenxu Yang
Zheng Lin
Chong Tian
Liang Pang
Lanrui Wang
Zhengyang Tong
Qirong Ho
Yanan Cao
Weiping Wang
HILM
44
0
0
08 Jul 2024
From Loops to Oops: Fallback Behaviors of Language Models Under Uncertainty
Maor Ivgi
Ori Yoran
Jonathan Berant
Mor Geva
HILM
66
8
0
08 Jul 2024
ANAH-v2: Scaling Analytical Hallucination Annotation of Large Language Models
Yuzhe Gu
Ziwei Ji
Wenwei Zhang
Chengqi Lyu
Dahua Lin
Kai Chen
HILM
42
5
0
05 Jul 2024
Functional Faithfulness in the Wild: Circuit Discovery with Differentiable Computation Graph Pruning
Lei Yu
Jingcheng Niu
Zining Zhu
Gerald Penn
38
6
0
04 Jul 2024
Truth is Universal: Robust Detection of Lies in LLMs
Lennart Bürger
Fred Hamprecht
B. Nadler
HILM
43
10
0
03 Jul 2024
A Practical Review of Mechanistic Interpretability for Transformer-Based Language Models
Daking Rai
Yilun Zhou
Shi Feng
Abulhair Saparov
Ziyu Yao
82
22
0
02 Jul 2024
Multi-property Steering of Large Language Models with Dynamic Activation Composition
Daniel Scalena
Gabriele Sarti
Malvina Nissim
KELM
LLMSV
AI4CE
29
13
0
25 Jun 2024
Brittle Minds, Fixable Activations: Understanding Belief Representations in Language Models
Matteo Bortoletto
Constantin Ruhdorfer
Lei Shi
Andreas Bulling
AI4MH
LRM
48
4
0
25 Jun 2024
Semantic Entropy Probes: Robust and Cheap Hallucination Detection in LLMs
Jannik Kossen
Jiatong Han
Muhammed Razzak
Lisa Schut
Shreshth A. Malik
Yarin Gal
HILM
60
35
0
22 Jun 2024
Steering Without Side Effects: Improving Post-Deployment Control of Language Models
Asa Cooper Stickland
Alexander Lyzhov
Jacob Pfau
Salsabila Mahdi
Samuel R. Bowman
LLMSV
AAML
65
18
0
21 Jun 2024
Understanding Finetuning for Factual Knowledge Extraction
Gaurav R. Ghosal
Tatsunori Hashimoto
Aditi Raghunathan
44
12
0
20 Jun 2024
Insights into LLM Long-Context Failures: When Transformers Know but Don't Tell
Taiming Lu
Muhan Gao
Kuai Yu
Adam Byerly
Daniel Khashabi
51
12
0
20 Jun 2024
Synchronous Faithfulness Monitoring for Trustworthy Retrieval-Augmented Generation
Di Wu
Jia-Chen Gu
Fan Yin
Nanyun Peng
Kai-Wei Chang
HILM
58
1
0
19 Jun 2024
BeHonest: Benchmarking Honesty in Large Language Models
Steffi Chern
Zhulin Hu
Yuqing Yang
Ethan Chern
Yuan Guo
Jiahe Jin
Binjie Wang
Pengfei Liu
HILM
ALM
86
3
0
19 Jun 2024
Enhancing Language Model Factuality via Activation-Based Confidence Calibration and Guided Decoding
Xin Liu
Farima Fatahi Bayat
Lu Wang
31
4
0
19 Jun 2024
Locating and Extracting Relational Concepts in Large Language Models
Zijian Wang
Britney White
Chang Xu
KELM
43
0
0
19 Jun 2024
When Parts are Greater Than Sums: Individual LLM Components Can Outperform Full Models
Ting-Yun Chang
Jesse Thomason
Robin Jia
48
4
0
19 Jun 2024
Estimating Knowledge in Large Language Models Without Generating a Single Token
Daniela Gottesman
Mor Geva
43
11
0
18 Jun 2024
Beyond Under-Alignment: Atomic Preference Enhanced Factuality Tuning for Large Language Models
Hongbang Yuan
Yubo Chen
Pengfei Cao
Zhuoran Jin
Kang Liu
Jun Zhao
44
0
0
18 Jun 2024
Who's asking? User personas and the mechanics of latent misalignment
Asma Ghandeharioun
Ann Yuan
Marius Guerard
Emily Reif
Michael A. Lepori
Lucas Dixon
LLMSV
44
8
0
17 Jun 2024
InternalInspector $I^2$: Robust Confidence Estimation in LLMs through Internal States
Mohammad Beigi
Ying Shen
Runing Yang
Zihao Lin
Qifan Wang
Ankith Mohan
Jianfeng He
Ming Jin
Chang-Tien Lu
Lifu Huang
HILM
36
4
0
17 Jun 2024
Dialogue Action Tokens: Steering Language Models in Goal-Directed Dialogue with a Multi-Turn Planner
Kenneth Li
Yiming Wang
Fernanda Viégas
Martin Wattenberg
38
6
0
17 Jun 2024
Refusal in Language Models Is Mediated by a Single Direction
Andy Arditi
Oscar Obeso
Aaquib Syed
Daniel Paleka
Nina Panickssery
Wes Gurnee
Neel Nanda
50
136
0
17 Jun 2024
Mitigating Large Language Model Hallucination with Faithful Finetuning
Minda Hu
Bowei He
Yufei Wang
Liangyou Li
Chen Ma
Irwin King
HILM
46
7
0
17 Jun 2024
Teaching Large Language Models to Express Knowledge Boundary from Their Own Signals
Lida Chen
Zujie Liang
Xintao Wang
Jiaqing Liang
Yanghua Xiao
Feng Wei
Jinglei Chen
Zhenghong Hao
Bing Han
Wei Wang
55
10
0
16 Jun 2024
On the Hardness of Faithful Chain-of-Thought Reasoning in Large Language Models
Sree Harsha Tanneru
Dan Ley
Chirag Agarwal
Himabindu Lakkaraju
LRM
31
4
0
15 Jun 2024
Legend: Leveraging Representation Engineering to Annotate Safety Margin for Preference Datasets
Duanyu Feng
Bowen Qin
Chen Huang
Youcheng Huang
Zheng-Wei Zhang
Wenqiang Lei
44
2
0
12 Jun 2024
Designing a Dashboard for Transparency and Control of Conversational AI
Yida Chen
Aoyu Wu
Trevor DePodesta
Catherine Yeh
Kenneth Li
...
Jan Riecke
Shivam Raval
Olivia Seow
Martin Wattenberg
Fernanda Viégas
44
16
0
12 Jun 2024
We Have a Package for You! A Comprehensive Analysis of Package Hallucinations by Code Generating LLMs
Joseph Spracklen
Raveen Wijewickrama
A. H. M. N. Sakib
Anindya Maiti
Murtuza Jadliwala
48
10
0
12 Jun 2024
REAL Sampling: Boosting Factuality and Diversity of Open-Ended Generation via Asymptotic Entropy
Haw-Shiuan Chang
Nanyun Peng
Mohit Bansal
Anil Ramakrishna
Tagyoung Chung
HILM
42
2
0
11 Jun 2024
Estimating the Hallucination Rate of Generative AI
Andrew Jesson
Nicolas Beltran-Velez
Quentin Chu
Sweta Karlekar
Jannik Kossen
Yarin Gal
John P. Cunningham
David M. Blei
51
6
0
11 Jun 2024
Aligning Large Language Models with Representation Editing: A Control Perspective
Lingkai Kong
Haorui Wang
Wenhao Mu
Yuanqi Du
Yuchen Zhuang
Yifei Zhou
Yue Song
Rongzhi Zhang
Kai Wang
Chao Zhang
35
22
0
10 Jun 2024
More Victories, Less Cooperation: Assessing Cicero's Diplomacy Play
Wichayaporn Wongkamjan
Feng Gu
Yanze Wang
Ulf Hermjakob
Jonathan May
Brandon M. Stewart
Jonathan K. Kummerfeld
Denis Peskoff
Jordan L. Boyd-Graber
53
3
0
07 Jun 2024
Discovering Bias in Latent Space: An Unsupervised Debiasing Approach
Dyah Adila
Shuai Zhang
Boran Han
Yuyang Wang
AAML
LLMSV
34
6
0
05 Jun 2024
Towards Detecting LLMs Hallucination via Markov Chain-based Multi-agent Debate Framework
Xiaoxi Sun
Jinpeng Li
Yan Zhong
Dongyan Zhao
Rui Yan
LLMAG
HILM
29
5
0
05 Jun 2024
Dishonesty in Helpful and Harmless Alignment
Youcheng Huang
Jingkun Tang
Duanyu Feng
Zheng-Wei Zhang
Wenqiang Lei
Jiancheng Lv
Anthony G. Cohn
LLMSV
46
3
0
04 Jun 2024
LoFiT: Localized Fine-tuning on LLM Representations
Fangcong Yin
Xi Ye
Greg Durrett
38
13
0
03 Jun 2024
PrivacyRestore: Privacy-Preserving Inference in Large Language Models via Privacy Removal and Restoration
Ziqian Zeng
Jianwei Wang
Zhengdong Lu
Huiping Zhuang
Cen Chen
RALM
KELM
50
7
0
03 Jun 2024
The Geometry of Categorical and Hierarchical Concepts in Large Language Models
Kiho Park
Yo Joong Choe
Yibo Jiang
Victor Veitch
50
27
0
03 Jun 2024
BoNBoN Alignment for Large Language Models and the Sweetness of Best-of-n Sampling
Lin Gui
Cristina Garbacea
Victor Veitch
BDL
LM&MA
43
36
0
02 Jun 2024
Controlling Large Language Model Agents with Entropic Activation Steering
Nate Rahn
P. D'Oro
Marc G. Bellemare
LLMSV
32
6
0
01 Jun 2024
Standards for Belief Representations in LLMs
Daniel A. Herrmann
B. Levinstein
44
9
0
31 May 2024
ANAH: Analytical Annotation of Hallucinations in Large Language Models
Ziwei Ji
Yuzhe Gu
Wenwei Zhang
Chengqi Lyu
Dahua Lin
Kai-xiang Chen
HILM
56
2
0
30 May 2024