Nothing Else Matters: Model-Agnostic Explanations By Identifying Prediction Invariance
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
arXiv: 1611.05817
17 November 2016
Tags: FAtt
Papers citing "Nothing Else Matters: Model-Agnostic Explanations By Identifying Prediction Invariance" (15 of 15 papers shown)
neuralGAM: An R Package for Fitting Generalized Additive Neural Networks
Authors: Ines Ortega-Fernandez, Marta Sestelo
Citations: 0
Date: 13 May 2025
Recent Advances in Malware Detection: Graph Learning and Explainability
Authors: Hossein Shokouhinejad, Roozbeh Razavi-Far, Hesamodin Mohammadian, Mahdi Rabbani, Samuel Ansong, Griffin Higgins, Ali Ghorbani
Tags: AAML
Citations: 2
Date: 14 Feb 2025
T-Explainer: A Model-Agnostic Explainability Framework Based on Gradients
Authors: Evandro S. Ortigossa, Fábio F. Dias, Brian Barr, Claudio T. Silva, L. G. Nonato
Tags: FAtt
Citations: 2
Date: 25 Apr 2024
Comparison of decision trees with Local Interpretable Model-Agnostic Explanations (LIME) technique and multi-linear regression for explaining support vector regression model in terms of root mean square error (RMSE) values
Authors: Amit Thombre
Tags: FAtt
Citations: 1
Date: 10 Apr 2024
Ensemble Interpretation: A Unified Method for Interpretable Machine Learning
Authors: Chao Min, Guoyong Liao, Guo-quan Wen, Yingjun Li, Xing Guo
Tags: FAtt
Citations: 0
Date: 11 Dec 2023
To trust or not to trust an explanation: using LEAF to evaluate local linear XAI methods
Authors: E. Amparore, Alan Perotti, P. Bajardi
Tags: FAtt
Citations: 68
Date: 01 Jun 2021
Why model why? Assessing the strengths and limitations of LIME
Authors: Jurgen Dieber, S. Kirrane
Tags: FAtt
Citations: 97
Date: 30 Nov 2020
Explainable Deep Learning: A Field Guide for the Uninitiated
Authors: Gabrielle Ras, Ning Xie, Marcel van Gerven, Derek Doran
Tags: AAML, XAI
Citations: 371
Date: 30 Apr 2020
Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI
Authors: Alejandro Barredo Arrieta, Natalia Díaz Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, ..., S. Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila, Francisco Herrera
Tags: XAI
Citations: 6,125
Date: 22 Oct 2019
Can I Trust the Explainer? Verifying Post-hoc Explanatory Methods
Authors: Oana-Maria Camburu, Eleonora Giunchiglia, Jakob N. Foerster, Thomas Lukasiewicz, Phil Blunsom
Tags: FAtt, AAML
Citations: 60
Date: 04 Oct 2019
Understanding Individual Decisions of CNNs via Contrastive Backpropagation
Authors: Jindong Gu, Yinchong Yang, Volker Tresp
Tags: FAtt
Citations: 94
Date: 05 Dec 2018
Did the Model Understand the Question?
Authors: Pramod Kaushik Mudrakarta, Ankur Taly, Mukund Sundararajan, Kedar Dhamdhere
Tags: ELM, OOD, FAtt
Citations: 196
Date: 14 May 2018
A Symbolic Approach to Explaining Bayesian Network Classifiers
Authors: Andy Shih, Arthur Choi, Adnan Darwiche
Tags: FAtt
Citations: 237
Date: 09 May 2018
Explanation Methods in Deep Learning: Users, Values, Concerns and Challenges
Authors: Gabrielle Ras, Marcel van Gerven, W. Haselager
Tags: XAI
Citations: 217
Date: 20 Mar 2018
Interpretable Deep Convolutional Neural Networks via Meta-learning
Authors: Xuan Liu, Xiaoguang Wang, Stan Matwin
Tags: FaML
Citations: 38
Date: 02 Feb 2018