KNOW How to Make Up Your Mind! Adversarially Detecting and Alleviating Inconsistencies in Natural Language Explanations
arXiv: 2306.02980
5 June 2023
Authors: Myeongjun Jang, Bodhisattwa Prasad Majumder, Julian McAuley, Thomas Lukasiewicz, Oana-Maria Camburu
Tags: AAML
Papers citing "KNOW How to Make Up Your Mind! Adversarially Detecting and Alleviating Inconsistencies in Natural Language Explanations" (8 papers)
Do Large Language Models Latently Perform Multi-Hop Reasoning?
Sohee Yang, E. Gribovskaya, Nora Kassner, Mor Geva, Sebastian Riedel
Tags: ReLM, LRM. 26 Feb 2024

Improving Language Models Meaning Understanding and Consistency by Learning Conceptual Roles from Dictionary
Myeongjun Jang, Thomas Lukasiewicz
24 Oct 2023

The Impact of Imperfect XAI on Human-AI Decision-Making
Katelyn Morrison, Philipp Spitzer, Violet Turri, Michelle C. Feng, Niklas Kühl, Adam Perer
25 Jul 2023

Consistency Analysis of ChatGPT
Myeongjun Jang, Thomas Lukasiewicz
11 Mar 2023

Measuring and Improving Consistency in Pretrained Language Models
Yanai Elazar, Nora Kassner, Shauli Ravfogel, Abhilasha Ravichander, Eduard H. Hovy, Hinrich Schütze, Yoav Goldberg
Tags: HILM. 01 Feb 2021

Measuring Association Between Labels and Free-Text Rationales
Sarah Wiegreffe, Ana Marasović, Noah A. Smith
24 Oct 2020

Language Models as Knowledge Bases?
Fabio Petroni, Tim Rocktäschel, Patrick Lewis, A. Bakhtin, Yuxiang Wu, Alexander H. Miller, Sebastian Riedel
Tags: KELM, AI4MH. 03 Sep 2019

e-SNLI: Natural Language Inference with Natural Language Explanations
Oana-Maria Camburu, Tim Rocktäschel, Thomas Lukasiewicz, Phil Blunsom
Tags: LRM. 04 Dec 2018