An Empirical Study on Explanations in Out-of-Domain Settings

G. Chrysostomou, Nikolaos Aletras
28 February 2022 · arXiv:2203.00056 · LRM

Papers citing "An Empirical Study on Explanations in Out-of-Domain Settings" (32 papers shown)

Normalized AOPC: Fixing Misleading Faithfulness Metrics for Feature Attribution Explainability
Joakim Edin, Andreas Geert Motzfeldt, Casper L. Christensen, Tuukka Ruotsalo, Lars Maaløe, Maria Maistro
4 citations · 15 Aug 2024

Evaluating the Faithfulness of Importance Measures in NLP by Recursively Masking Allegedly Important Tokens and Retraining
Andreas Madsen, Nicholas Meade, Vaibhav Adlakha, Siva Reddy
36 citations · 15 Oct 2021

SPECTRA: Sparse Structured Text Rationalization
Nuno M. Guerreiro, André F. T. Martins
27 citations · 09 Sep 2021

Causal Inference in Natural Language Processing: Estimation, Prediction, Interpretation and Beyond
Amir Feder, Katherine A. Keith, Emaad A. Manzoor, Reid Pryzant, Dhanya Sridhar, ..., Roi Reichart, Margaret E. Roberts, Brandon M. Stewart, Victor Veitch, Diyi Yang
CML · 243 citations · 02 Sep 2021

Paragraph-level Rationale Extraction through Regularization: A case study on European Court of Human Rights Cases
Ilias Chalkidis, Manos Fergadiotis, D. Tsarapatsanis, Nikolaos Aletras, Ion Androutsopoulos, Prodromos Malakasiotis
AILaw · 109 citations · 24 Mar 2021

Debugging Tests for Model Explanations
Julius Adebayo, M. Muelly, Ilaria Liccardi, Been Kim
FAtt · 181 citations · 10 Nov 2020

Measuring Association Between Labels and Free-Text Rationales
Sarah Wiegreffe, Ana Marasović, Noah A. Smith
182 citations · 24 Oct 2020

Evaluating and Characterizing Human Rationales
Samuel Carton, Anirudh Rathore, Chenhao Tan
49 citations · 09 Oct 2020

A Diagnostic Study of Explainability Techniques for Text Classification
Pepa Atanasova, J. Simonsen, Christina Lioma, Isabelle Augenstein
XAI, FAtt · 224 citations · 25 Sep 2020

NILE : Natural Language Inference with Faithful Natural Language Explanations
Sawan Kumar, Partha P. Talukdar
XAI, LRM · 163 citations · 25 May 2020

Contextualizing Hate Speech Classifiers with Post-hoc Explanation
Brendan Kennedy, Xisen Jin, Aida Mostafazadeh Davani, Morteza Dehghani, Xiang Ren
141 citations · 05 May 2020

Learning to Faithfully Rationalize by Construction
Sarthak Jain, Sarah Wiegreffe, Yuval Pinter, Byron C. Wallace
164 citations · 30 Apr 2020

Pretrained Transformers Improve Out-of-Distribution Robustness
Dan Hendrycks, Xiaoyuan Liu, Eric Wallace, Adam Dziedzic, R. Krishnan, Basel Alomair
OOD · 434 citations · 13 Apr 2020

Towards Faithfully Interpretable NLP Systems: How should we define and evaluate faithfulness?
Alon Jacovi, Yoav Goldberg
XAI · 597 citations · 07 Apr 2020

Calibration of Pre-trained Transformers
Shrey Desai, Greg Durrett
UQLM · 300 citations · 17 Mar 2020

ERASER: A Benchmark to Evaluate Rationalized NLP Models
Jay DeYoung, Sarthak Jain, Nazneen Rajani, Eric P. Lehman, Caiming Xiong, R. Socher, Byron C. Wallace
637 citations · 08 Nov 2019

Is Attention Interpretable?
Sofia Serrano, Noah A. Smith
684 citations · 09 Jun 2019

Can You Trust Your Model's Uncertainty? Evaluating Predictive Uncertainty Under Dataset Shift
Yaniv Ovadia, Emily Fertig, Jie Jessie Ren, Zachary Nado, D. Sculley, Sebastian Nowozin, Joshua V. Dillon, Balaji Lakshminarayanan, Jasper Snoek
UQCV · 1,695 citations · 06 Jun 2019

Explain Yourself! Leveraging Language Models for Commonsense Reasoning
Nazneen Rajani, Bryan McCann, Caiming Xiong, R. Socher
ReLM, LRM · 565 citations · 06 Jun 2019

Neural Legal Judgment Prediction in English
Ilias Chalkidis, Ion Androutsopoulos, Nikolaos Aletras
AILaw, ELM · 338 citations · 05 Jun 2019

Interpretable Neural Predictions with Differentiable Binary Variables
Jasmijn Bastings, Wilker Aziz, Ivan Titov
214 citations · 20 May 2019

Attention is not Explanation
Sarthak Jain, Byron C. Wallace
FAtt · 1,324 citations · 26 Feb 2019

e-SNLI: Natural Language Inference with Natural Language Explanations
Oana-Maria Camburu, Tim Rocktäschel, Thomas Lukasiewicz, Phil Blunsom
LRM · 638 citations · 04 Dec 2018

Context-Aware Attention for Understanding Twitter Abuse
Tuhin Chakrabarty, Kilol Gupta
28 citations · 24 Sep 2018

DeClarE: Debunking Fake News and False Claims using Evidence-Aware Deep Learning
Kashyap Popat, Subhabrata Mukherjee, Andrew Yates, Gerhard Weikum
HILM · 305 citations · 17 Sep 2018

Learning Important Features Through Propagating Activation Differences
Avanti Shrikumar, Peyton Greenside, A. Kundaje
FAtt · 3,873 citations · 10 Apr 2017

Axiomatic Attribution for Deep Networks
Mukund Sundararajan, Ankur Taly, Qiqi Yan
OOD, FAtt · 5,989 citations · 04 Mar 2017

Investigating the influence of noise and distractors on the interpretation of neural networks
Pieter-Jan Kindermans, Kristof T. Schütt, K. Müller, Sven Dähne
FAtt · 125 citations · 22 Nov 2016

Rationalizing Neural Predictions
Tao Lei, Regina Barzilay, Tommi Jaakkola
812 citations · 13 Jun 2016
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro
Sameer Singh
Carlos Guestrin
FAttFaML
1.2K
16,990
0
16 Feb 2016

Character-level Convolutional Networks for Text Classification
Xiang Zhang, Jiaqi Zhao, Yann LeCun
6,113 citations · 04 Sep 2015

Adam: A Method for Stochastic Optimization
Diederik P. Kingma, Jimmy Ba
ODL · 150,115 citations · 22 Dec 2014