Towards Faithfully Interpretable NLP Systems: How should we define and evaluate faithfulness?
Alon Jacovi, Yoav Goldberg
7 April 2020 · arXiv:2004.03685 · XAI

Papers citing "Towards Faithfully Interpretable NLP Systems: How should we define and evaluate faithfulness?"

Showing 31 of 381 citing papers:

HateXplain: A Benchmark Dataset for Explainable Hate Speech Detection
Binny Mathew, Punyajoy Saha, Seid Muhie Yimam, Chris Biemann, Pawan Goyal, Animesh Mukherjee
18 Dec 2020

Predicting Events in MOBA Games: Prediction, Attribution, and Evaluation
Zelong Yang, Yan Wang, Piji Li, Shaobin Lin, Shuming Shi, Shao-Lun Huang, Wei Bi
17 Dec 2020

AIST: An Interpretable Attention-based Deep Learning Model for Crime Prediction
Yeasir Rayhan, T. Hashem
16 Dec 2020

Learning to Rationalize for Nonmonotonic Reasoning with Distant Supervision
Faeze Brahman, Vered Shwartz, Rachel Rudinger, Yejin Choi
14 Dec 2020 · LRM

Deep Argumentative Explanations
Emanuele Albini, Piyawat Lertvittayakumjorn, Antonio Rago, Francesca Toni
10 Dec 2020 · AAML

Efficient Estimation of Influence of a Training Instance
Sosuke Kobayashi, Sho Yokoi, Jun Suzuki, Kentaro Inui
08 Dec 2020 · TDI

Probing Multilingual BERT for Genetic and Typological Signals
Taraka Rama, Lisa Beinborn, Steffen Eger
04 Nov 2020

Measuring Association Between Labels and Free-Text Rationales
Sarah Wiegreffe, Ana Marasović, Noah A. Smith
24 Oct 2020

Natural Language Rationales with Full-Stack Visual Reasoning: From Pixels to Semantic Frames to Commonsense Graphs
Ana Marasović, Chandra Bhagavatula, J. S. Park, Ronan Le Bras, Noah A. Smith, Yejin Choi
15 Oct 2020 · ReLM, LRM

Formalizing Trust in Artificial Intelligence: Prerequisites, Causes and Goals of Human Trust in AI
Alon Jacovi, Ana Marasović, Tim Miller, Yoav Goldberg
15 Oct 2020

The elephant in the interpretability room: Why use attention as explanation when we have saliency methods?
Jasmijn Bastings, Katja Filippova
12 Oct 2020 · XAI, LRM

Explaining Neural Matrix Factorization with Gradient Rollback
Carolin Lawrence, T. Sztyler, Mathias Niepert
12 Oct 2020

Leakage-Adjusted Simulatability: Can Models Generate Non-Trivial Explanations of Their Behavior in Natural Language?
Peter Hase, Shiyue Zhang, Harry Xie, Mohit Bansal
08 Oct 2020

Why do you think that? Exploring Faithful Sentence-Level Rationales Without Supervision
Max Glockner, Ivan Habernal, Iryna Gurevych
07 Oct 2020 · LRM

Learning Variational Word Masks to Improve the Interpretability of Neural Text Classifiers
Hanjie Chen, Yangfeng Ji
01 Oct 2020 · AAML, VLM

Interpreting Graph Neural Networks for NLP With Differentiable Edge Masking
M. Schlichtkrull, Nicola De Cao, Ivan Titov
01 Oct 2020 · AI4CE

A Diagnostic Study of Explainability Techniques for Text Classification
Pepa Atanasova, J. Simonsen, Christina Lioma, Isabelle Augenstein
25 Sep 2020 · XAI, FAtt

Are Interpretations Fairly Evaluated? A Definition Driven Pipeline for Post-Hoc Interpretability
Ninghao Liu, Yunsong Meng, Xia Hu, Tie Wang, Bo Long
16 Sep 2020 · XAI, FAtt

QED: A Framework and Dataset for Explanations in Question Answering
Matthew Lamm, J. Palomaki, Chris Alberti, D. Andor, Eunsol Choi, Livio Baldini Soares, Michael Collins
08 Sep 2020

Text Modular Networks: Learning to Decompose Tasks in the Language of Existing Models
Tushar Khot, Daniel Khashabi, Kyle Richardson, Peter Clark, Ashish Sabharwal
01 Sep 2020 · ReLM

Influence Functions in Deep Learning Are Fragile
S. Basu, Phillip E. Pope, S. Feizi
25 Jun 2020 · TDI

Aligning Faithful Interpretations with their Social Attribution
Alon Jacovi, Yoav Goldberg
01 Jun 2020

Explaining Black Box Predictions and Unveiling Data Artifacts through Influence Functions
Xiaochuang Han, Byron C. Wallace, Yulia Tsvetkov
14 May 2020 · MILM, FAtt, AAML, TDI

Evaluating Explanation Methods for Neural Machine Translation
Jierui Li, Lemao Liu, Huayang Li, Guanlin Li, Guoping Huang, Shuming Shi
04 May 2020

Obtaining Faithful Interpretations from Compositional Neural Networks
Sanjay Subramanian, Ben Bogin, Nitish Gupta, Tomer Wolfson, Sameer Singh, Jonathan Berant, Matt Gardner
02 May 2020

How do Decisions Emerge across Layers in Neural Models? Interpretation with Differentiable Masking
Nicola De Cao, M. Schlichtkrull, Wilker Aziz, Ivan Titov
30 Apr 2020

Explainable Deep Learning: A Field Guide for the Uninitiated
Gabrielle Ras, Ning Xie, Marcel van Gerven, Derek Doran
30 Apr 2020 · AAML, XAI

Ground Truth Evaluation of Neural Network Explanations with CLEVR-XAI
L. Arras, Ahmed Osman, Wojciech Samek
16 Mar 2020 · XAI, AAML

Model Agnostic Multilevel Explanations
Karthikeyan N. Ramamurthy, B. Vinzamuri, Yunfeng Zhang, Amit Dhurandhar
12 Mar 2020

ERASER: A Benchmark to Evaluate Rationalized NLP Models
Jay DeYoung, Sarthak Jain, Nazneen Rajani, Eric P. Lehman, Caiming Xiong, R. Socher, Byron C. Wallace
08 Nov 2019

Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim
28 Feb 2017 · XAI, FaML