Are Interpretations Fairly Evaluated? A Definition Driven Pipeline for Post-Hoc Interpretability (arXiv:2009.07494)

16 September 2020
Ninghao Liu
Yunsong Meng
Xia Hu
Tie Wang
Bo Long
    XAI
    FAtt
ArXiv · PDF · HTML

Papers citing "Are Interpretations Fairly Evaluated? A Definition Driven Pipeline for Post-Hoc Interpretability"

33 / 33 papers shown
Towards Faithfully Interpretable NLP Systems: How should we define and evaluate faithfulness?
Alon Jacovi
Yoav Goldberg
XAI
119
597
0
07 Apr 2020
The POLAR Framework: Polar Opposites Enable Interpretability of Pre-Trained Word Embeddings
Binny Mathew
Sandipan Sikdar
Florian Lemmerich
M. Strohmaier
39
36
0
27 Jan 2020
AllenNLP Interpret: A Framework for Explaining Predictions of NLP Models
Eric Wallace
Jens Tuyls
Junlin Wang
Sanjay Subramanian
Matt Gardner
Sameer Singh
MILM
63
138
0
19 Sep 2019
Attention is not not Explanation
Sarah Wiegreffe
Yuval Pinter
XAI
AAML
FAtt
120
909
0
13 Aug 2019
Evaluating Explanation Without Ground Truth in Interpretable Machine Learning
Fan Yang
Mengnan Du
Xia Hu
XAI
ELM
57
67
0
16 Jul 2019
Explanations can be manipulated and geometry is to blame
Ann-Kathrin Dombrowski
Maximilian Alber
Christopher J. Anders
M. Ackermann
K. Müller
Pan Kessel
AAML
FAtt
81
332
0
19 Jun 2019
Is Attention Interpretable?
Sofia Serrano
Noah A. Smith
108
684
0
09 Jun 2019
Attention is not Explanation
Sarthak Jain
Byron C. Wallace
FAtt
145
1,324
0
26 Feb 2019
Human-Centered Artificial Intelligence and Machine Learning
Mark O. Riedl
SyDa
113
267
0
31 Jan 2019
Theoretically Principled Trade-off between Robustness and Accuracy
Hongyang R. Zhang
Yaodong Yu
Jiantao Jiao
Eric Xing
L. Ghaoui
Michael I. Jordan
134
2,551
0
24 Jan 2019
Applying Deep Learning To Airbnb Search
Malay Haldar
Mustafa Abdool
Prashant Ramanathan
Tao Xu
Shulin Yang
...
Qing Zhang
Nick Barrow-Williams
B. Turnbull
Brendan M. Collins
Thomas Legrand
DML
49
85
0
22 Oct 2018
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Jacob Devlin
Ming-Wei Chang
Kenton Lee
Kristina Toutanova
VLM
SSL
SSeg
1.8K
94,891
0
11 Oct 2018
Towards Explanation of DNN-based Prediction with Guided Feature Inversion
Mengnan Du
Ninghao Liu
Qingquan Song
Xia Hu
FAtt
68
127
0
19 Mar 2018
Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)
Been Kim
Martin Wattenberg
Justin Gilmer
Carrie J. Cai
James Wexler
F. Viégas
Rory Sayres
FAtt
214
1,842
0
30 Nov 2017
Graph Attention Networks
Petar Velickovic
Guillem Cucurull
Arantxa Casanova
Adriana Romero
Pietro Lio
Yoshua Bengio
GNN
479
20,164
0
30 Oct 2017
Interpretation of Neural Networks is Fragile
Amirata Ghorbani
Abubakar Abid
James Zou
FAtt
AAML
133
867
0
29 Oct 2017
Dynamic Routing Between Capsules
S. Sabour
Nicholas Frosst
Geoffrey E. Hinton
174
4,596
0
26 Oct 2017
SmoothGrad: removing noise by adding noise
D. Smilkov
Nikhil Thorat
Been Kim
F. Viégas
Martin Wattenberg
FAtt
ODL
201
2,226
0
12 Jun 2017
Attention Is All You Need
Ashish Vaswani
Noam M. Shazeer
Niki Parmar
Jakob Uszkoreit
Llion Jones
Aidan Gomez
Lukasz Kaiser
Illia Polosukhin
3DV
701
131,652
0
12 Jun 2017
Interpretable Explanations of Black Boxes by Meaningful Perturbation
Ruth C. Fong
Andrea Vedaldi
FAtt
AAML
74
1,520
0
11 Apr 2017
Understanding Black-box Predictions via Influence Functions
Pang Wei Koh
Percy Liang
TDI
210
2,894
0
14 Mar 2017
Axiomatic Attribution for Deep Networks
Mukund Sundararajan
Ankur Taly
Qiqi Yan
OOD
FAtt
188
5,989
0
04 Mar 2017
Understanding Neural Networks through Representation Erasure
Jiwei Li
Will Monroe
Dan Jurafsky
AAML
MILM
88
565
0
24 Dec 2016
Interpretation of Prediction Models Using the Input Gradient
Yotam Hechtlinger
FaML
AI4CE
FAtt
58
85
0
23 Nov 2016
Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization
Ramprasaath R. Selvaraju
Michael Cogswell
Abhishek Das
Ramakrishna Vedantam
Devi Parikh
Dhruv Batra
FAtt
303
20,023
0
07 Oct 2016
Semi-Supervised Classification with Graph Convolutional Networks
Thomas Kipf
Max Welling
GNN
SSL
635
29,076
0
09 Sep 2016
The Mythos of Model Interpretability
Zachary Chase Lipton
FaML
180
3,701
0
10 Jun 2016
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro
Sameer Singh
Carlos Guestrin
FAtt
FaML
1.2K
16,990
0
16 Feb 2016
Character-level Convolutional Networks for Text Classification
Xiang Zhang
Junbo Zhao
Yann LeCun
268
6,113
0
04 Sep 2015
Extraction of Salient Sentences from Labelled Documents
Misha Denil
Alban Demiraj
Nando de Freitas
73
137
0
21 Dec 2014
Explaining and Harnessing Adversarial Examples
Ian Goodfellow
Jonathon Shlens
Christian Szegedy
AAML
GAN
277
19,066
0
20 Dec 2014
Neural Machine Translation by Jointly Learning to Align and Translate
Dzmitry Bahdanau
Kyunghyun Cho
Yoshua Bengio
AIMat
558
27,311
0
01 Sep 2014
Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps
Karen Simonyan
Andrea Vedaldi
Andrew Zisserman
FAtt
312
7,295
0
20 Dec 2013