Cited By

An Empirical Study on Explanations in Out-of-Domain Settings (arXiv: 2203.00056)
G. Chrysostomou, Nikolaos Aletras. [LRM] 28 February 2022.
Papers citing "An Empirical Study on Explanations in Out-of-Domain Settings" (32 of 32 papers shown):
1. Normalized AOPC: Fixing Misleading Faithfulness Metrics for Feature Attribution Explainability. Joakim Edin, Andreas Geert Motzfeldt, Casper L. Christensen, Tuukka Ruotsalo, Lars Maaløe, Maria Maistro. 15 Aug 2024.
2. Evaluating the Faithfulness of Importance Measures in NLP by Recursively Masking Allegedly Important Tokens and Retraining. Andreas Madsen, Nicholas Meade, Vaibhav Adlakha, Siva Reddy. 15 Oct 2021.
3. SPECTRA: Sparse Structured Text Rationalization. Nuno M. Guerreiro, André F. T. Martins. 09 Sep 2021.
4. Causal Inference in Natural Language Processing: Estimation, Prediction, Interpretation and Beyond. Amir Feder, Katherine A. Keith, Emaad A. Manzoor, Reid Pryzant, Dhanya Sridhar, ..., Roi Reichart, Margaret E. Roberts, Brandon M Stewart, Victor Veitch, Diyi Yang. [CML] 02 Sep 2021.
5. Paragraph-level Rationale Extraction through Regularization: A case study on European Court of Human Rights Cases. Ilias Chalkidis, Manos Fergadiotis, D. Tsarapatsanis, Nikolaos Aletras, Ion Androutsopoulos, Prodromos Malakasiotis. [AILaw] 24 Mar 2021.
6. Debugging Tests for Model Explanations. Julius Adebayo, M. Muelly, Ilaria Liccardi, Been Kim. [FAtt] 10 Nov 2020.
7. Measuring Association Between Labels and Free-Text Rationales. Sarah Wiegreffe, Ana Marasović, Noah A. Smith. 24 Oct 2020.
8. Evaluating and Characterizing Human Rationales. Samuel Carton, Anirudh Rathore, Chenhao Tan. 09 Oct 2020.
9. A Diagnostic Study of Explainability Techniques for Text Classification. Pepa Atanasova, J. Simonsen, Christina Lioma, Isabelle Augenstein. [XAI, FAtt] 25 Sep 2020.
10. NILE: Natural Language Inference with Faithful Natural Language Explanations. Sawan Kumar, Partha P. Talukdar. [XAI, LRM] 25 May 2020.
11. Contextualizing Hate Speech Classifiers with Post-hoc Explanation. Brendan Kennedy, Xisen Jin, Aida Mostafazadeh Davani, Morteza Dehghani, Xiang Ren. 05 May 2020.
12. Learning to Faithfully Rationalize by Construction. Sarthak Jain, Sarah Wiegreffe, Yuval Pinter, Byron C. Wallace. 30 Apr 2020.
13. Pretrained Transformers Improve Out-of-Distribution Robustness. Dan Hendrycks, Xiaoyuan Liu, Eric Wallace, Adam Dziedzic, R. Krishnan, Basel Alomair. [OOD] 13 Apr 2020.
14. Towards Faithfully Interpretable NLP Systems: How should we define and evaluate faithfulness? Alon Jacovi, Yoav Goldberg. [XAI] 07 Apr 2020.
15. Calibration of Pre-trained Transformers. Shrey Desai, Greg Durrett. [UQLM] 17 Mar 2020.
16. ERASER: A Benchmark to Evaluate Rationalized NLP Models. Jay DeYoung, Sarthak Jain, Nazneen Rajani, Eric P. Lehman, Caiming Xiong, R. Socher, Byron C. Wallace. 08 Nov 2019.
17. Is Attention Interpretable? Sofia Serrano, Noah A. Smith. 09 Jun 2019.
18. Can You Trust Your Model's Uncertainty? Evaluating Predictive Uncertainty Under Dataset Shift. Yaniv Ovadia, Emily Fertig, Jie Jessie Ren, Zachary Nado, D. Sculley, Sebastian Nowozin, Joshua V. Dillon, Balaji Lakshminarayanan, Jasper Snoek. [UQCV] 06 Jun 2019.
19. Explain Yourself! Leveraging Language Models for Commonsense Reasoning. Nazneen Rajani, Bryan McCann, Caiming Xiong, R. Socher. [ReLM, LRM] 06 Jun 2019.
20. Neural Legal Judgment Prediction in English. Ilias Chalkidis, Ion Androutsopoulos, Nikolaos Aletras. [AILaw, ELM] 05 Jun 2019.
21. Interpretable Neural Predictions with Differentiable Binary Variables. Jasmijn Bastings, Wilker Aziz, Ivan Titov. 20 May 2019.
22. Attention is not Explanation. Sarthak Jain, Byron C. Wallace. [FAtt] 26 Feb 2019.
23. e-SNLI: Natural Language Inference with Natural Language Explanations. Oana-Maria Camburu, Tim Rocktaschel, Thomas Lukasiewicz, Phil Blunsom. [LRM] 04 Dec 2018.
24. Context-Aware Attention for Understanding Twitter Abuse. Tuhin Chakrabarty, Kilol Gupta. 24 Sep 2018.
25. DeClarE: Debunking Fake News and False Claims using Evidence-Aware Deep Learning. Kashyap Popat, Subhabrata Mukherjee, Andrew Yates, Gerhard Weikum. [HILM] 17 Sep 2018.
26. Learning Important Features Through Propagating Activation Differences. Avanti Shrikumar, Peyton Greenside, A. Kundaje. [FAtt] 10 Apr 2017.
27. Axiomatic Attribution for Deep Networks. Mukund Sundararajan, Ankur Taly, Qiqi Yan. [OOD, FAtt] 04 Mar 2017.
28. Investigating the influence of noise and distractors on the interpretation of neural networks. Pieter-Jan Kindermans, Kristof T. Schütt, K. Müller, Sven Dähne. [FAtt] 22 Nov 2016.
29. Rationalizing Neural Predictions. Tao Lei, Regina Barzilay, Tommi Jaakkola. 13 Jun 2016.
30. "Why Should I Trust You?": Explaining the Predictions of Any Classifier. Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin. [FAtt, FaML] 16 Feb 2016.
31. Character-level Convolutional Networks for Text Classification. Xiang Zhang, Jiaqi Zhao, Yann LeCun. 04 Sep 2015.
32. Adam: A Method for Stochastic Optimization. Diederik P. Kingma, Jimmy Ba. [ODL] 22 Dec 2014.