Model Agnostic Supervised Local Explanations
Gregory Plumb, Denali Molitor, Ameet Talwalkar
arXiv: 1807.02910 · 9 July 2018
Topics: FAtt, LRM, MILM
Papers citing "Model Agnostic Supervised Local Explanations" (50 of 50 shown):
1. Axiomatic Explainer Globalness via Optimal Transport
   Davin Hill, Josh Bone, A. Masoomi, Max Torop, Jennifer Dy
   13 Mar 2025 · 110 · 1 · 0

2. Integrated feature analysis for deep learning interpretation and class activation maps
   Yanli Li, Tahereh Hassanzadeh, D. Shamonin, Monique Reijnierse, A. H. V. D. H. Mil, B. Stoel
   01 Jul 2024 · 48 · 0 · 0

3. ShapG: new feature importance method based on the Shapley value
   Chi Zhao, Jing Liu, Elena Parilina
   Topics: FAtt
   29 Jun 2024 · 141 · 4 · 0

4. Explainable automatic industrial carbon footprint estimation from bank transaction classification using natural language processing
   Jaime González-González, Silvia García-Méndez, Francisco de Arriba-Pérez, Francisco J. González Castaño, Oscar Barba-Seara
   23 May 2024 · 38 · 8 · 0

5. Accurate estimation of feature importance faithfulness for tree models
   Mateusz Gajewski, Adam Karczmarz, Mateusz Rapicki, Piotr Sankowski
   04 Apr 2024 · 41 · 0 · 0

6. Information-Theoretic State Variable Selection for Reinforcement Learning
   Charles Westphal, Stephen Hailes, Mirco Musolesi
   21 Jan 2024 · 26 · 3 · 0

7. BELLA: Black box model Explanations by Local Linear Approximations
   N. Radulovic, Albert Bifet, Fabian M. Suchanek
   Topics: FAtt
   18 May 2023 · 44 · 1 · 0

8. HarsanyiNet: Computing Accurate Shapley Values in a Single Forward Propagation
   Lu Chen, Siyu Lou, Keyan Zhang, Jin Huang, Quanshi Zhang
   Topics: TDI, FAtt
   04 Apr 2023 · 34 · 9 · 0

9. Explaining text classifiers through progressive neighborhood approximation with realistic samples
   Yi Cai, Arthur Zimek, Eirini Ntoutsi, Gerhard Wunder
   Topics: AI4TS
   11 Feb 2023 · 24 · 0 · 0

10. Towards Meaningful Anomaly Detection: The Effect of Counterfactual Explanations on the Investigation of Anomalies in Multivariate Time Series
    Max Schemmer, Joshua Holstein, Niklas Bauer, Niklas Kühl, G. Satzger
    07 Feb 2023 · 38 · 2 · 0

11. Explaining Random Forests using Bipolar Argumentation and Markov Networks (Technical Report)
    Nico Potyka, Xiang Yin, Francesca Toni
    21 Nov 2022 · 26 · 10 · 0

12. Why Deep Surgical Models Fail?: Revisiting Surgical Action Triplet Recognition through the Lens of Robustness
    Ya-Hsin Cheng, Lihao Liu, Shujun Wang, Yueming Jin, Carola-Bibiane Schönlieb, Angelica I. Aviles-Rivero
    18 Sep 2022 · 28 · 7 · 0

13. Use-Case-Grounded Simulations for Explanation Evaluation
    Valerie Chen, Nari Johnson, Nicholay Topin, Gregory Plumb, Ameet Talwalkar
    Topics: FAtt, ELM
    05 Jun 2022 · 30 · 24 · 0

14. Towards a Theory of Faithfulness: Faithful Explanations of Differentiable Classifiers over Continuous Data
    Nico Potyka, Xiang Yin, Francesca Toni
    Topics: FAtt
    19 May 2022 · 24 · 2 · 0

15. The Solvability of Interpretability Evaluation Metrics
    Yilun Zhou, J. Shah
    18 May 2022 · 78 · 8 · 0

16. The Road to Explainability is Paved with Bias: Measuring the Fairness of Explanations
    Aparna Balagopalan, Haoran Zhang, Kimia Hamidieh, Thomas Hartvigsen, Frank Rudzicz, Marzyeh Ghassemi
    06 May 2022 · 45 · 78 · 0

17. Adapting and Evaluating Influence-Estimation Methods for Gradient-Boosted Decision Trees
    Jonathan Brophy, Zayd Hammoudeh, Daniel Lowd
    Topics: TDI
    30 Apr 2022 · 42 · 22 · 0

18. Exploring Hidden Semantics in Neural Networks with Symbolic Regression
    Yuanzhen Luo, Qiang Lu, Xilei Hu, Jake Luo, Zhiguang Wang
    22 Apr 2022 · 23 · 0 · 0

19. CAIPI in Practice: Towards Explainable Interactive Medical Image Classification
    E. Slany, Yannik Ott, Stephan Scheele, Jan Paulus, Ute Schmid
    06 Apr 2022 · 30 · 8 · 0

20. Statistics and Deep Learning-based Hybrid Model for Interpretable Anomaly Detection
    Thabang Mathonsi, Terence L van Zyl
    25 Feb 2022 · 40 · 0 · 0

21. Human-Centered Concept Explanations for Neural Networks
    Chih-Kuan Yeh, Been Kim, Pradeep Ravikumar
    Topics: FAtt
    25 Feb 2022 · 47 · 26 · 0

22. Prolog-based agnostic explanation module for structured pattern classification
    Gonzalo Nápoles, Fabian Hoitsma, A. Knoben, A. Jastrzębska, Maikel Leon Espinosa
    23 Dec 2021 · 25 · 13 · 0

23. Explainable Deep Learning in Healthcare: A Methodological Survey from an Attribution View
    Di Jin, Elena Sergeeva, W. Weng, Geeticka Chauhan, Peter Szolovits
    Topics: OOD
    05 Dec 2021 · 56 · 55 · 0

24. Counterfactual Shapley Additive Explanations
    Emanuele Albini, Jason Long, Danial Dervovic, Daniele Magazzeni
    27 Oct 2021 · 31 · 49 · 0

25. XPROAX-Local explanations for text classification with progressive neighborhood approximation
    Yi Cai, Arthur Zimek, Eirini Ntoutsi
    30 Sep 2021 · 27 · 5 · 0

26. Instance-Based Neural Dependency Parsing
    Hiroki Ouchi, Jun Suzuki, Sosuke Kobayashi, Sho Yokoi, Tatsuki Kuribayashi, Masashi Yoshikawa, Kentaro Inui
    28 Sep 2021 · 44 · 3 · 0

27. Improved Feature Importance Computations for Tree Models: Shapley vs. Banzhaf
    Adam Karczmarz, A. Mukherjee, Piotr Sankowski, Piotr Wygocki
    Topics: FAtt, TDI
    09 Aug 2021 · 30 · 6 · 0

28. On Locality of Local Explanation Models
    Sahra Ghalebikesabi, Lucile Ter-Minassian, Karla Diaz-Ordaz, Chris Holmes
    Topics: FedML, FAtt
    24 Jun 2021 · 28 · 39 · 0

29. Synthetic Benchmarks for Scientific Research in Explainable Machine Learning
    Yang Liu, Sujay Khandagale, Colin White, Willie Neiswanger
    23 Jun 2021 · 39 · 65 · 0

30. A Framework for Evaluating Post Hoc Feature-Additive Explainers
    Zachariah Carmichael, Walter J. Scheirer
    Topics: FAtt
    15 Jun 2021 · 51 · 4 · 0

31. Information-theoretic Evolution of Model Agnostic Global Explanations
    Sukriti Verma, Nikaash Puri, Piyush B. Gupta, Balaji Krishnamurthy
    Topics: FAtt
    14 May 2021 · 29 · 0 · 0

32. On the Computational Intelligibility of Boolean Classifiers
    Gilles Audemard, S. Bellart, Louenas Bounia, F. Koriche, Jean-Marie Lagniez, Pierre Marquis
    13 Apr 2021 · 24 · 56 · 0

33. Interpretable Machine Learning: Moving From Mythos to Diagnostics
    Valerie Chen, Jeffrey Li, Joon Sik Kim, Gregory Plumb, Ameet Talwalkar
    10 Mar 2021 · 32 · 29 · 0

34. Forest Guided Smoothing
    I. Verdinelli, Larry A. Wasserman
    08 Mar 2021 · 33 · 3 · 0

35. How can I choose an explainer? An Application-grounded Evaluation of Post-hoc Explanations
    Sérgio Jesus, Catarina Belém, Vladimir Balayan, João Bento, Pedro Saleiro, P. Bizarro, João Gama
    21 Jan 2021 · 136 · 119 · 0

36. Efficient Estimation of Influence of a Training Instance
    Sosuke Kobayashi, Sho Yokoi, Jun Suzuki, Kentaro Inui
    Topics: TDI
    08 Dec 2020 · 37 · 15 · 0

37. Why model why? Assessing the strengths and limitations of LIME
    Jurgen Dieber, S. Kirrane
    Topics: FAtt
    30 Nov 2020 · 26 · 97 · 0

38. Explaining Deep Neural Networks
    Oana-Maria Camburu
    Topics: XAI, FAtt
    04 Oct 2020 · 38 · 26 · 0

39. Reliable Post hoc Explanations: Modeling Uncertainty in Explainability
    Dylan Slack, Sophie Hilgard, Sameer Singh, Himabindu Lakkaraju
    Topics: FAtt
    11 Aug 2020 · 29 · 162 · 0

40. Adversarial Infidelity Learning for Model Interpretation
    Jian Liang, Bing Bai, Yuren Cao, Kun Bai, Fei-Yue Wang
    Topics: AAML
    09 Jun 2020 · 62 · 18 · 0

41. Evaluating and Aggregating Feature-based Model Explanations
    Umang Bhatt, Adrian Weller, J. M. F. Moura
    Topics: XAI
    01 May 2020 · 38 · 218 · 0

42. Generating Hierarchical Explanations on Text Classification via Feature Interaction Detection
    Hanjie Chen, Guangtao Zheng, Yangfeng Ji
    Topics: FAtt
    04 Apr 2020 · 38 · 92 · 0

43. Model Agnostic Multilevel Explanations
    Karthikeyan N. Ramamurthy, B. Vinzamuri, Yunfeng Zhang, Amit Dhurandhar
    12 Mar 2020 · 31 · 41 · 0

44. Causal Interpretability for Machine Learning -- Problems, Methods and Evaluation
    Raha Moraffah, Mansooreh Karami, Ruocheng Guo, A. Raglin, Huan Liu
    Topics: CML, ELM, XAI
    09 Mar 2020 · 32 · 213 · 0

45. Can I Trust the Explainer? Verifying Post-hoc Explanatory Methods
    Oana-Maria Camburu, Eleonora Giunchiglia, Jakob N. Foerster, Thomas Lukasiewicz, Phil Blunsom
    Topics: FAtt, AAML
    04 Oct 2019 · 34 · 60 · 0

46. FACE: Feasible and Actionable Counterfactual Explanations
    Rafael Poyiadzi, Kacper Sokol, Raúl Santos-Rodríguez, T. D. Bie, Peter A. Flach
    20 Sep 2019 · 27 · 365 · 0

47. DLIME: A Deterministic Local Interpretable Model-Agnostic Explanations Approach for Computer-Aided Diagnosis Systems
    Muhammad Rehman Zafar, N. Khan
    Topics: FAtt
    24 Jun 2019 · 14 · 153 · 0

48. Explainable AI for Trees: From Local Explanations to Global Understanding
    Scott M. Lundberg, G. Erion, Hugh Chen, A. DeGrave, J. Prutkin, B. Nair, R. Katz, J. Himmelfarb, N. Bansal, Su-In Lee
    Topics: FAtt
    11 May 2019 · 28 · 286 · 0

49. Regularizing Black-box Models for Improved Interpretability
    Gregory Plumb, Maruan Al-Shedivat, Ángel Alexander Cabrera, Adam Perer, Eric Xing, Ameet Talwalkar
    Topics: AAML
    18 Feb 2019 · 26 · 79 · 0

50. On the (In)fidelity and Sensitivity for Explanations
    Chih-Kuan Yeh, Cheng-Yu Hsieh, A. Suggala, David I. Inouye, Pradeep Ravikumar
    Topics: FAtt
    27 Jan 2019 · 39 · 449 · 0