"Why Should I Trust You?": Explaining the Predictions of Any Classifier
arXiv:1602.04938 · 16 February 2016
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
Topics: FAtt, FaML
Papers citing "Why Should I Trust You?": Explaining the Predictions of Any Classifier (50 of 4,330 papers shown)
| Title | Authors | Topics | Counts | Date |
|---|---|---|---|---|
| Counterfactual Explanations for Neural Recommenders | Khanh Tran, Azin Ghazimatin, Rishiraj Saha Roy | AAML, CML | 60 / 65 / 0 | 11 May 2021 |
| Leveraging Sparse Linear Layers for Debuggable Deep Networks | Eric Wong, Shibani Santurkar, Aleksander Madry | FAtt | 22 / 88 / 0 | 11 May 2021 |
| Rationalization through Concepts | Diego Antognini, Boi Faltings | FAtt | 27 / 19 / 0 | 11 May 2021 |
| Towards Benchmarking the Utility of Explanations for Model Debugging | Maximilian Idahl, Lijun Lyu, U. Gadiraju, Avishek Anand | XAI | 21 / 18 / 0 | 10 May 2021 |
| Do Concept Bottleneck Models Learn as Intended? | Andrei Margeloiu, Matthew Ashman, Umang Bhatt, Yanzhi Chen, M. Jamnik, Adrian Weller | SLR | 25 / 92 / 0 | 10 May 2021 |
| On Guaranteed Optimal Robust Explanations for NLP Models | Emanuele La Malfa, A. Zbrzezny, Rhiannon Michelmore, Nicola Paoletti, Marta Z. Kwiatkowska | FAtt | 19 / 47 / 0 | 08 May 2021 |
| Two4Two: Evaluating Interpretable Machine Learning - A Synthetic Dataset For Controlled Experiments | M. Schuessler, Philipp Weiß, Leon Sixt | | 43 / 3 / 0 | 06 May 2021 |
| A Framework of Explanation Generation toward Reliable Autonomous Robots | Tatsuya Sakai, Kazuki Miyazawa, Takato Horii, Takayuki Nagai | | 27 / 8 / 0 | 06 May 2021 |
| Explainable Autonomous Robots: A Survey and Perspective | Tatsuya Sakai, Takayuki Nagai | | 25 / 68 / 0 | 06 May 2021 |
| Improving the Faithfulness of Attention-based Explanations with Task-specific Information for Text Classification | G. Chrysostomou, Nikolaos Aletras | | 32 / 37 / 0 | 06 May 2021 |
| Reliability Testing for Natural Language Processing Systems | Samson Tan, Chenyu You, K. Baxter, Araz Taeihagh, G. Bennett, Min-Yen Kan | | 22 / 39 / 0 | 06 May 2021 |
| Explainable Artificial Intelligence for Human Decision-Support System in Medical Domain | Samanta Knapic, A. Malhi, Rohit Saluja, Kary Främling | | 18 / 99 / 0 | 05 May 2021 |
| When Fair Ranking Meets Uncertain Inference | Avijit Ghosh, Ritam Dutt, Christo Wilson | | 41 / 44 / 0 | 05 May 2021 |
| Software Engineering for AI-Based Systems: A Survey | Silverio Martínez-Fernández, Justus Bogner, Xavier Franch, Marc Oriol, Julien Siebert, Adam Trendowicz, Anna Maria Vollmer, Stefan Wagner | | 29 / 211 / 0 | 05 May 2021 |
| XAI-KG: knowledge graph to support XAI and decision-making in manufacturing | Jože M. Rožanec, Patrik Zajec, K. Kenda, I. Novalija, B. Fortuna, Dunja Mladenić | | 13 / 10 / 0 | 05 May 2021 |
| Quality Assurance Challenges for Machine Learning Software Applications During Software Development Life Cycle Phases | Md. Abdullah Al Alamin, Gias Uddin | | 37 / 11 / 0 | 03 May 2021 |
| LFI-CAM: Learning Feature Importance for Better Visual Explanation | Kwang Hee Lee, Chaewon Park, J. Oh, Nojun Kwak | FAtt | 37 / 27 / 0 | 03 May 2021 |
| Combating small molecule aggregation with machine learning | Kuan-Ting Lee, An Yang, Yen-Chu Lin, D. Reker, G. Bernardes, T. Rodrigues | | 30 / 12 / 0 | 01 May 2021 |
| Explaining a Series of Models by Propagating Shapley Values | Hugh Chen, Scott M. Lundberg, Su-In Lee | TDI, FAtt | 32 / 125 / 0 | 30 Apr 2021 |
| Explanation-Based Human Debugging of NLP Models: A Survey | Piyawat Lertvittayakumjorn, Francesca Toni | LRM | 47 / 79 / 0 | 30 Apr 2021 |
| Twin Systems for DeepCBR: A Menagerie of Deep Learning and Case-Based Reasoning Pairings for Explanation and Data Augmentation | Mark T. Keane, Eoin M. Kenny, M. Temraz, Derek Greene, Barry Smyth | | 19 / 5 / 0 | 29 Apr 2021 |
| Inspect, Understand, Overcome: A Survey of Practical Methods for AI Safety | Sebastian Houben, Stephanie Abrecht, Maram Akila, Andreas Bär, Felix Brockherde, ..., Serin Varghese, Michael Weber, Sebastian J. Wirkert, Tim Wirtz, Matthias Woehrle | AAML | 13 / 58 / 0 | 29 Apr 2021 |
| A First Look: Towards Explainable TextVQA Models via Visual and Textual Explanations | Varun Nagaraj Rao, Xingjian Zhen, K. Hovsepian, Mingwei Shen | | 37 / 18 / 0 | 29 Apr 2021 |
| Do Feature Attribution Methods Correctly Attribute Features? | Yilun Zhou, Serena Booth, Marco Tulio Ribeiro, J. Shah | FAtt, XAI | 43 / 132 / 0 | 27 Apr 2021 |
| From Human Explanation to Model Interpretability: A Framework Based on Weight of Evidence | David Alvarez-Melis, Harmanpreet Kaur, Hal Daumé, Hanna M. Wallach, Jennifer Wortman Vaughan | FAtt | 56 / 28 / 0 | 27 Apr 2021 |
| Metamorphic Detection of Repackaged Malware | S. Singh, Gail E. Kaiser | | 24 / 8 / 0 | 27 Apr 2021 |
| LCS-DIVE: An Automated Rule-based Machine Learning Visualization Pipeline for Characterizing Complex Associations in Classification | Robert F. Zhang, Rachael Stolzenberg-Solomon, Shannon M. Lynch, Ryan J. Urbanowicz | | 26 / 10 / 0 | 26 Apr 2021 |
| TrustyAI Explainability Toolkit | Rob Geada, Tommaso Teofili, Rui Vieira, Rebecca Whitworth, Daniele Zonca | | 19 / 2 / 0 | 26 Apr 2021 |
| Bridging observation, theory and numerical simulation of the ocean using Machine Learning | Maike Sonnewald, Redouane Lguensat, Daniel C. Jones, P. Dueben, J. Brajard, Venkatramani Balaji | AI4Cl, AI4CE | 51 / 100 / 0 | 26 Apr 2021 |
| Weakly Supervised Multi-task Learning for Concept-based Explainability | Catarina Belém, Vladimir Balayan, Pedro Saleiro, P. Bizarro | | 86 / 10 / 0 | 26 Apr 2021 |
| Towards Rigorous Interpretations: a Formalisation of Feature Attribution | Darius Afchar, Romain Hennequin, Vincent Guigue | FAtt | 40 / 20 / 0 | 26 Apr 2021 |
| Attention vs non-attention for a Shapley-based explanation method | T. Kersten, Hugh Mee Wong, Jaap Jumelet, Dieuwke Hupkes | | 49 / 4 / 0 | 26 Apr 2021 |
| Explainable AI For COVID-19 CT Classifiers: An Initial Comparison Study | Qinghao Ye, Jun Xia, Guang Yang | | 31 / 57 / 0 | 25 Apr 2021 |
| EXplainable Neural-Symbolic Learning (X-NeSyL) methodology to fuse deep learning representations with expert knowledge graphs: the MonuMAI cultural heritage use case | Natalia Díaz Rodríguez, Alberto Lamas, Jules Sanchez, Gianni Franchi, Ivan Donadello, Siham Tabik, David Filliat, P. Cruz, Rosana Montes, Francisco Herrera | | 54 / 77 / 0 | 24 Apr 2021 |
| Tracking Peaceful Tractors on Social Media -- XAI-enabled analysis of Red Fort Riots 2021 | A. Agarwal, Basant Agarwal | | 8 / 1 / 0 | 24 Apr 2021 |
| Towards Trustworthy Deception Detection: Benchmarking Model Robustness across Domains, Modalities, and Languages | M. Glenski, Ellyn Ayton, Robin Cosbey, Dustin L. Arendt, Svitlana Volkova | | 40 / 7 / 0 | 23 Apr 2021 |
| Patch Shortcuts: Interpretable Proxy Models Efficiently Find Black-Box Vulnerabilities | Julia Rosenzweig, Joachim Sicking, Sebastian Houben, Michael Mock, Maram Akila | AAML | 42 / 3 / 0 | 22 Apr 2021 |
| Interpretation of multi-label classification models using shapley values | Shikun Chen | FAtt, TDI | 44 / 9 / 0 | 21 Apr 2021 |
| Revisiting The Evaluation of Class Activation Mapping for Explainability: A Novel Metric and Experimental Analysis | Samuele Poppi, Marcella Cornia, Lorenzo Baraldi, Rita Cucchiara | FAtt | 131 / 33 / 0 | 20 Apr 2021 |
| Bayesian subset selection and variable importance for interpretable prediction and classification | Daniel R. Kowal | | 28 / 10 / 0 | 20 Apr 2021 |
| Robustness Tests of NLP Machine Learning Models: Search and Semantically Replace | Rahul Singh, Karan Jindal, Yufei Yu, Hanyu Yang, Tarun Joshi, Matthew A. Campbell, Wayne B. Shoumaker | | 58 / 2 / 0 | 20 Apr 2021 |
| Interpretability in deep learning for finance: a case study for the Heston model | D. Brigo, Xiaoshan Huang, A. Pallavicini, Haitz Sáez de Ocáriz Borde | FAtt | 22 / 8 / 0 | 19 Apr 2021 |
| Improving Attribution Methods by Learning Submodular Functions | Piyushi Manupriya, Tarun Ram Menta, S. Jagarlapudi, V. Balasubramanian | TDI | 35 / 6 / 0 | 19 Apr 2021 |
| DA-DGCEx: Ensuring Validity of Deep Guided Counterfactual Explanations With Distribution-Aware Autoencoder Loss | Jokin Labaien, E. Zugasti, Xabier De Carlos | CML | 38 / 4 / 0 | 19 Apr 2021 |
| SurvNAM: The machine learning survival model explanation | Lev V. Utkin, Egor D. Satyukov, A. Konstantinov | AAML, FAtt | 44 / 28 / 0 | 18 Apr 2021 |
| GraphSVX: Shapley Value Explanations for Graph Neural Networks | Alexandre Duval, Fragkiskos D. Malliaros | FAtt | 22 / 87 / 0 | 18 Apr 2021 |
| On the Sensitivity and Stability of Model Interpretations in NLP | Fan Yin, Zhouxing Shi, Cho-Jui Hsieh, Kai-Wei Chang | FAtt | 27 / 33 / 0 | 18 Apr 2021 |
| Case-based Reasoning for Natural Language Queries over Knowledge Bases | Rajarshi Das, Manzil Zaheer, Dung Ngoc Thai, Ameya Godbole, Ethan Perez, Jay Yoon Lee, Lizhen Tan, L. Polymenakos, Andrew McCallum | | 36 / 163 / 0 | 18 Apr 2021 |
| Distributed NLI: Learning to Predict Human Opinion Distributions for Language Reasoning | Xiang Zhou, Yixin Nie, Joey Tianyi Zhou | UQCV | 22 / 28 / 0 | 18 Apr 2021 |
| Flexible Instance-Specific Rationalization of NLP Models | G. Chrysostomou, Nikolaos Aletras | | 36 / 14 / 0 | 16 Apr 2021 |