PoLLMgraph: Unraveling Hallucinations in Large Language Models via State Transition Dynamics
arXiv 2404.04722, 6 April 2024
Derui Zhu, Dingfan Chen, Qing Li, Zongxiong Chen, Lei Ma, Jens Grossklags, Mario Fritz
Tags: HILM
Papers citing "PoLLMgraph: Unraveling Hallucinations in Large Language Models via State Transition Dynamics" (6 of 6 papers shown)

Too Consistent to Detect: A Study of Self-Consistent Errors in LLMs
Hexiang Tan, Fei Sun, Sha Liu, Du Su, Qi Cao, ..., Jingang Wang, Xunliang Cai, Yuanzhuo Wang, Huawei Shen, Xueqi Cheng
Tags: HILM. 23 May 2025

Self-contradictory Hallucinations of Large Language Models: Evaluation, Detection and Mitigation
Niels Mündler, Jingxuan He, Slobodan Jenko, Martin Vechev
Tags: HILM. 25 May 2023

Constitutional AI: Harmlessness from AI Feedback
Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, John Kernion, ..., Dario Amodei, Nicholas Joseph, Sam McCandlish, Tom B. Brown, Jared Kaplan
Tags: SyDa, MoMe. 15 Dec 2022

Discovering Latent Knowledge in Language Models Without Supervision
Collin Burns, Haotian Ye, Dan Klein, Jacob Steinhardt
07 Dec 2022

Language Models as Knowledge Bases?
Fabio Petroni, Tim Rocktaschel, Patrick Lewis, A. Bakhtin, Yuxiang Wu, Alexander H. Miller, Sebastian Riedel
Tags: KELM, AI4MH. 03 Sep 2019

Handling Divergent Reference Texts when Evaluating Table-to-Text Generation
Bhuwan Dhingra, Manaal Faruqui, Ankur P. Parikh, Ming-Wei Chang, Dipanjan Das, William W. Cohen
03 Jun 2019