A Unified Approach to Interpreting Model Predictions

22 May 2017
Scott M. Lundberg
Su-In Lee
    FAtt
arXiv:1705.07874

Papers citing "A Unified Approach to Interpreting Model Predictions"

Showing 50 of 3,922 citing papers
Statistical stability indices for LIME: obtaining reliable explanations for Machine Learning models
Giorgio Visani
Enrico Bagli
F. Chesani
A. Poluzzi
D. Capuzzo
FAtt
66
170
0
31 Jan 2020
TCMI: a non-parametric mutual-dependence estimator for multivariate continuous distributions
Benjamin Regler
Matthias Scheffler
L. Ghiringhelli
74
2
0
30 Jan 2020
One Explanation Does Not Fit All: The Promise of Interactive Explanations for Machine Learning Transparency
Kacper Sokol
Peter A. Flach
49
178
0
27 Jan 2020
An interpretable semi-supervised classifier using two different strategies for amended self-labeling
Isel Grau
Dipankar Sengupta
M. Lorenzo
A. Nowé
SSL
93
4
0
26 Jan 2020
Explainable Active Learning (XAL): An Empirical Study of How Local Explanations Impact Annotator Experience
Bhavya Ghai
Q. V. Liao
Yunfeng Zhang
Rachel K. E. Bellamy
Klaus Mueller
94
29
0
24 Jan 2020
Visual Summary of Value-level Feature Attribution in Prediction Classes with Recurrent Neural Networks
Chuan-Chi Wang
Xumeng Wang
K. Ma
FAtt, HAI
35
1
0
23 Jan 2020
Adequate and fair explanations
Nicholas M. Asher
Soumya Paul
Chris Russell
68
9
0
21 Jan 2020
Explaining Data-Driven Decisions made by AI Systems: The Counterfactual Approach
Carlos Fernandez
F. Provost
Xintian Han
CML
74
72
0
21 Jan 2020
An interpretable neural network model through piecewise linear approximation
Mengzhuo Guo
Qingpeng Zhang
Xiuwu Liao
D. Zeng
MILM, FAtt
51
8
0
20 Jan 2020
Machine learning and AI-based approaches for bioactive ligand discovery and GPCR-ligand recognition
S. Raschka
Benjamin Kaufman
AI4CE
81
67
0
17 Jan 2020
Adapting Grad-CAM for Embedding Networks
Lei Chen
Jianhui Chen
Hossein Hajimirsadeghi
Greg Mori
73
57
0
17 Jan 2020
GraphLIME: Local Interpretable Model Explanations for Graph Neural Networks
Q. Huang
M. Yamada
Yuan Tian
Dinesh Singh
D. Yin
Yi-Ju Chang
FAtt
104
361
0
17 Jan 2020
"Why is 'Chicago' deceptive?" Towards Building Model-Driven Tutorials
  for Humans
"Why is 'Chicago' deceptive?" Towards Building Model-Driven Tutorials for Humans
Vivian Lai
Han Liu
Chenhao Tan
93
143
0
14 Jan 2020
Interpretable feature subset selection: A Shapley value based approach
Sandhya Tripathi
N. Hemachandra
Prashant Trivedi
TDI, FAtt
73
2
0
12 Jan 2020
Explaining the Explainer: A First Theoretical Analysis of LIME
Damien Garreau
U. V. Luxburg
FAtt
61
183
0
10 Jan 2020
On Interpretability of Artificial Neural Networks: A Survey
Fenglei Fan
Jinjun Xiong
Mengzhou Li
Ge Wang
AAML, AI4CE
94
318
0
08 Jan 2020
Questioning the AI: Informing Design Practices for Explainable AI User Experiences
Q. V. Liao
D. Gruen
Sarah Miller
144
734
0
08 Jan 2020
Gradient Boosting on Decision Trees for Mortality Prediction in Transcatheter Aortic Valve Implantation
Marco Mamprin
J. Zelis
P. Tonino
S. Zinger
Peter H. N. de With
24
7
0
08 Jan 2020
Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making
Yunfeng Zhang
Q. V. Liao
Rachel K. E. Bellamy
126
688
0
07 Jan 2020
A Deep Learning Approach to Diagnosing Multiple Sclerosis from Smartphone Data
Patrick Schwab
W. Karlen
70
26
0
02 Jan 2020
A New Approach for Explainable Multiple Organ Annotation with Few Data
Régis Pierrard
Jean-Philippe Poli
Céline Hudelot
29
8
0
30 Dec 2019
Explain Your Move: Understanding Agent Actions Using Specific and Relevant Feature Attribution
Nikaash Puri
Sukriti Verma
Piyush B. Gupta
Dhruv Kayastha
Shripad Deshmukh
Balaji Krishnamurthy
Sameer Singh
FAtt, AAML
84
79
0
23 Dec 2019
Regularized Operating Envelope with Interpretability and Implementability Constraints
Qiyao Wang
Haiyan Wang
Chetan Gupta
Susumu Serita
14
0
0
21 Dec 2019
Explainability and Adversarial Robustness for RNNs
Alexander Hartl
Maximilian Bachl
J. Fabini
Tanja Zseby
AAML
59
32
0
20 Dec 2019
When Explanations Lie: Why Many Modified BP Attributions Fail
Leon Sixt
Maximilian Granz
Tim Landgraf
BDL, FAtt, XAI
96
132
0
20 Dec 2019
Temporal Fusion Transformers for Interpretable Multi-horizon Time Series Forecasting
Bryan Lim
Sercan O. Arik
Nicolas Loeff
Tomas Pfister
AI4TS
147
1,500
0
19 Dec 2019
Clusters in Explanation Space: Inferring disease subtypes from model explanations
Marc-Andre Schulz
M. Chapman-Rounds
Manisha Verma
D. Bzdok
K. Georgatzis
22
2
0
18 Dec 2019
Analysing Deep Reinforcement Learning Agents Trained with Domain Randomisation
Tianhong Dai
Kai Arulkumaran
Tamara Gerbert
Samyakh Tukra
Feryal M. P. Behbahani
Anil Anthony Bharath
87
28
0
18 Dec 2019
On the Explanation of Machine Learning Predictions in Clinical Gait Analysis
D. Slijepcevic
Fabian Horst
Sebastian Lapuschkin
Anna-Maria Raberger
Matthias Zeppelzauer
Wojciech Samek
C. Breiteneder
W. Schöllhorn
B. Horsak
115
51
0
16 Dec 2019
From Shallow to Deep Interactions Between Knowledge Representation, Reasoning and Machine Learning (Kay R. Amel group)
Zied Bouraoui
Antoine Cornuéjols
Thierry Denoeux
Sebastien Destercke
Didier Dubois
...
Jérôme Mengin
H. Prade
Steven Schockaert
M. Serrurier
Christel Vrain
128
14
0
13 Dec 2019
An Empirical Study on the Relation between Network Interpretability and Adversarial Robustness
Adam Noack
Isaac Ahern
Dejing Dou
Boyang Albert Li
OOD, AAML
158
10
0
07 Dec 2019
Preserving Causal Constraints in Counterfactual Explanations for Machine Learning Classifiers
Divyat Mahajan
Chenhao Tan
Amit Sharma
OOD, CML
147
207
0
06 Dec 2019
Counterfactual Explanation Algorithms for Behavioral and Textual Data
Yanou Ramon
David Martens
F. Provost
Theodoros Evgeniou
FAtt
126
88
0
04 Dec 2019
Explainable artificial intelligence model to predict acute critical illness from electronic health records
S. Lauritsen
Mads Kristensen
Mathias Vassard Olsen
Morten Skaarup Larsen
K. M. Lauritsen
Marianne Johansson Jørgensen
Jeppe Lange
B. Thiesson
65
305
0
03 Dec 2019
Automated Dependence Plots
David I. Inouye
Liu Leqi
Joon Sik Kim
Bryon Aragam
Pradeep Ravikumar
73
1
0
02 Dec 2019
Learning Word Ratings for Empathy and Distress from Document-Level User Responses
João Sedoc
Sven Buechel
Yehonathan Nachmany
Anneke Buffone
L. Ungar
40
29
0
02 Dec 2019
EMAP: Explanation by Minimal Adversarial Perturbation
M. Chapman-Rounds
Marc-Andre Schulz
Erik Pazos
K. Georgatzis
AAML, FAtt
47
6
0
02 Dec 2019
ACE -- An Anomaly Contribution Explainer for Cyber-Security Applications
Xiao Zhang
Manish Marwah
I-Ta Lee
M. Arlitt
Dan Goldwasser
42
14
0
01 Dec 2019
A Programmatic and Semantic Approach to Explaining and Debugging Neural Network Based Object Detectors
Edward J. Kim
D. Gopinath
C. Păsăreanu
Sanjit A. Seshia
47
26
0
01 Dec 2019
Sanity Checks for Saliency Metrics
Richard J. Tomsett
Daniel Harborne
Supriyo Chakraborty
Prudhvi K. Gurram
Alun D. Preece
XAI
125
170
0
29 Nov 2019
AR-Net: A simple Auto-Regressive Neural Network for time-series
Oskar Triebe
N. Laptev
Ram Rajagopal
AI4TS, AI4CE
102
59
0
27 Nov 2019
The relationship between trust in AI and trustworthy machine learning technologies
Ehsan Toreini
Mhairi Aitken
Kovila P. L. Coopamootoo
Karen Elliott
Carlos Vladimiro Gonzalez Zelaya
Aad van Moorsel
FaML
87
262
0
27 Nov 2019
Analysis of Explainers of Black Box Deep Neural Networks for Computer Vision: A Survey
Vanessa Buhrmester
David Münch
Michael Arens
MLAU, FaML, XAI, AAML
120
369
0
27 Nov 2019
Explaining Models by Propagating Shapley Values of Local Components
Hugh Chen
Scott M. Lundberg
Su-In Lee
FAtt, FedML
85
110
0
27 Nov 2019
Improving Feature Attribution through Input-specific Network Pruning
Ashkan Khakzar
Soroosh Baselizadeh
Saurabh Khanduja
Christian Rupprecht
S. T. Kim
Nassir Navab
FAtt
58
11
0
25 Nov 2019
A psychophysics approach for quantitative comparison of interpretable computer vision models
F. Biessmann
D. Refiano
60
5
0
24 Nov 2019
Domain Knowledge Aided Explainable Artificial Intelligence for Intrusion Detection and Response
Sheikh Rabiul Islam
W. Eberle
S. Ghafoor
Ambareen Siraj
Mike Rogers
69
39
0
22 Nov 2019
LionForests: Local Interpretation of Random Forests
Ioannis Mollas
Nick Bassiliades
I. Vlahavas
Grigorios Tsoumakas
90
12
0
20 Nov 2019
"How do I fool you?": Manipulating User Trust via Misleading Black Box
  Explanations
"How do I fool you?": Manipulating User Trust via Misleading Black Box Explanations
Himabindu Lakkaraju
Osbert Bastani
90
258
0
15 Nov 2019
ERASER: A Benchmark to Evaluate Rationalized NLP Models
Jay DeYoung
Sarthak Jain
Nazneen Rajani
Eric P. Lehman
Caiming Xiong
R. Socher
Byron C. Wallace
163
640
0
08 Nov 2019