LLM Factoscope: Uncovering LLMs' Factual Discernment through Inner States Analysis
arXiv: 2312.16374
27 December 2023
Jinwen He, Yujia Gong, Kai-xiang Chen, Zijin Lin, Cheng'an Wei, Yue Zhao
Papers citing "LLM Factoscope: Uncovering LLMs' Factual Discernment through Inner States Analysis" (5 papers)
Interpreting and Editing Vision-Language Representations to Mitigate Hallucinations
Nick Jiang, Anish Kachinthaya, Suzie Petryk, Yossi Gandelsman (VLM), 03 Oct 2024
The Geometry of Truth: Emergent Linear Structure in Large Language Model Representations of True/False Datasets
Samuel Marks, Max Tegmark (HILM), 10 Oct 2023
The Internal State of an LLM Knows When It's Lying
Amos Azaria, Tom Mitchell (HILM), 26 Apr 2023
SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models
Potsawee Manakul, Adian Liusie, Mark Gales (HILM, LRM), 15 Mar 2023
Generalized Out-of-Distribution Detection: A Survey
Jingkang Yang, Kaiyang Zhou, Yixuan Li, Ziwei Liu, 21 Oct 2021