A Unified Approach to Interpreting Model Predictions
Scott M. Lundberg, Su-In Lee
FAtt · 22 May 2017 · arXiv: 1705.07874

Papers citing "A Unified Approach to Interpreting Model Predictions"

Showing 50 of 3,921 citing papers (newest first).

XDeep: An Interpretation Tool for Deep Neural Networks
Fan Yang, Zijian Zhang, Haofan Wang, Yuening Li, Helen Zhou
XAI, HAI · 37 · 2 · 0 · 04 Nov 2019

Explaining black box decisions by Shapley cohort refinement
Masayoshi Mase, Art B. Owen, Benjamin B. Seiler
67 · 52 · 0 · 01 Nov 2019

EnergyStar++: Towards more accurate and explanatory building energy benchmarking
P. Arjunan, K. Poolla, Clayton Miller
34 · 116 · 0 · 30 Oct 2019

Weight of Evidence as a Basis for Human-Oriented Explanations
David Alvarez-Melis, Hal Daumé, Jennifer Wortman Vaughan, Hanna M. Wallach
XAI, FAtt · 91 · 20 · 0 · 29 Oct 2019

Feature relevance quantification in explainable AI: A causal problem
Dominik Janzing, Lenon Minorics, Patrick Blobaum
FAtt, CML · 109 · 286 · 0 · 29 Oct 2019

Rethinking Cooperative Rationalization: Introspective Extraction and Complement Control
Mo Yu, Shiyu Chang, Yang Zhang, Tommi Jaakkola
143 · 146 · 0 · 29 Oct 2019

bLIMEy: Surrogate Prediction Explanations Beyond LIME
Kacper Sokol, Alexander Hepburn, Raúl Santos-Rodríguez, Peter A. Flach
FAtt · 145 · 38 · 0 · 29 Oct 2019

A Game Theoretic Approach to Class-wise Selective Rationalization
Shiyu Chang, Yang Zhang, Mo Yu, Tommi Jaakkola
66 · 62 · 0 · 28 Oct 2019

CXPlain: Causal Explanations for Model Interpretation under Uncertainty
Patrick Schwab, W. Karlen
FAtt, CML · 143 · 211 · 0 · 27 Oct 2019

Data Augmentation for Skin Lesion using Self-Attention based Progressive Generative Adversarial Network
Ibrahim Saad Ali, Mamdouh Farouk Mohamed, Y. B. Mahdy
GAN, MedIm · 49 · 122 · 0 · 25 Oct 2019

Seeing What a GAN Cannot Generate
David Bau, Jun-Yan Zhu, Jonas Wulff, William S. Peebles, Hendrik Strobelt, Bolei Zhou, Antonio Torralba
GAN · 113 · 310 · 0 · 24 Oct 2019

Predicting In-game Actions from Interviews of NBA Players
Nadav Oved, Amir Feder, Roi Reichart
79 · 2 · 0 · 24 Oct 2019

Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI
Alejandro Barredo Arrieta, Natalia Díaz Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, ..., S. Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila, Francisco Herrera
XAI · 267 · 6,386 · 0 · 22 Oct 2019

Contextual Prediction Difference Analysis for Explaining Individual Image Classifications
Jindong Gu, Volker Tresp
FAtt · 51 · 8 · 0 · 21 Oct 2019

Many Faces of Feature Importance: Comparing Built-in and Post-hoc Feature Importance in Text Classification
Vivian Lai, Zheng Jon Cai, Chenhao Tan
FAtt · 60 · 19 · 0 · 18 Oct 2019

Personalized Treatment for Coronary Artery Disease Patients: A Machine Learning Approach
Dimitris Bertsimas, Agni Orfanoudaki, R. Weiner
65 · 41 · 0 · 18 Oct 2019

On Completeness-aware Concept-Based Explanations in Deep Neural Networks
Chih-Kuan Yeh, Been Kim, Sercan O. Arik, Chun-Liang Li, Tomas Pfister, Pradeep Ravikumar
FAtt · 331 · 307 · 0 · 17 Oct 2019

Do Explanations Reflect Decisions? A Machine-centric Strategy to Quantify the Performance of Explainability Algorithms
Z. Q. Lin, M. Shafiee, S. Bochkarev, Michael St. Jules, Xiao Yu Wang, A. Wong
FAtt · 83 · 81 · 0 · 16 Oct 2019

Asymmetric Shapley values: incorporating causal knowledge into model-agnostic explainability
Christopher Frye, C. Rowat, Ilya Feige
105 · 184 · 0 · 14 Oct 2019

Measuring Unfairness through Game-Theoretic Interpretability
Juliana Cesaro, Fabio Gagliardi Cozman
FAtt · 75 · 13 · 0 · 12 Oct 2019

NLS: an accurate and yet easy-to-interpret regression method
Victor Coscrato, M. Inácio, T. Botari, Rafael Izbicki
FAtt · 55 · 4 · 0 · 11 Oct 2019

Explaining image classifiers by removing input features using generative models
Chirag Agarwal, Anh Totti Nguyen
FAtt · 100 · 15 · 0 · 09 Oct 2019

Who's responsible? Jointly quantifying the contribution of the learning algorithm and training data
G. Yona, Amirata Ghorbani, James Zou
TDI · 61 · 13 · 0 · 09 Oct 2019

Make Up Your Mind! Adversarial Generation of Inconsistent Natural Language Explanations
Oana-Maria Camburu, Brendan Shillingford, Pasquale Minervini, Thomas Lukasiewicz, Phil Blunsom
AAML, GAN · 114 · 97 · 0 · 07 Oct 2019

Testing and verification of neural-network-based safety-critical control software: A systematic literature review
Jin Zhang, Jingyue Li
85 · 48 · 0 · 05 Oct 2019

Can I Trust the Explainer? Verifying Post-hoc Explanatory Methods
Oana-Maria Camburu, Eleonora Giunchiglia, Jakob N. Foerster, Thomas Lukasiewicz, Phil Blunsom
FAtt, AAML · 115 · 61 · 0 · 04 Oct 2019

The Bouncer Problem: Challenges to Remote Explainability
Erwan Le Merrer, Gilles Tredan
66 · 8 · 0 · 03 Oct 2019

Silas: High Performance, Explainable and Verifiable Machine Learning
Hadrien Bride, Zhe Hou, Jie Dong, J. Dong, Seyedali Mirjalili
66 · 7 · 0 · 03 Oct 2019

Synthesizing Action Sequences for Modifying Model Decisions
Goutham Ramakrishnan, Yun Chan Lee, Aws Albarghouthi
121 · 33 · 0 · 30 Sep 2019

MonoNet: Towards Interpretable Models by Learning Monotonic Features
An-phi Nguyen, María Rodríguez Martínez
FAtt · 60 · 13 · 0 · 30 Sep 2019

Interpretations are useful: penalizing explanations to align neural networks with prior knowledge
Laura Rieger, Chandan Singh, W. James Murdoch, Bin Yu
FAtt · 111 · 215 · 0 · 30 Sep 2019

Decision Explanation and Feature Importance for Invertible Networks
Juntang Zhuang, Nicha Dvornek, Xiaoxiao Li, Junlin Yang, James S. Duncan
AAML, FAtt · 61 · 5 · 0 · 30 Sep 2019

Multi-classifier prediction of knee osteoarthritis progression from incomplete imbalanced longitudinal data
P. Widera, P. Welsing, C. Ladel, J. Loughlin, F. Lafeber, F. Dop, J. Larkin, H. Weinans, A. Mobasheri, J. Bacardit
80 · 54 · 0 · 30 Sep 2019

Interpreting Undesirable Pixels for Image Classification on Black-Box Models
Sin-Han Kang, Hong G Jung, Seong-Whan Lee
FAtt · 50 · 3 · 0 · 27 Sep 2019

Towards Explainable Artificial Intelligence
Wojciech Samek, K. Müller
XAI · 93 · 449 · 0 · 26 Sep 2019

Data Valuation using Reinforcement Learning
Jinsung Yoon, Sercan O. Arik, Tomas Pfister
TDI · 93 · 183 · 0 · 25 Sep 2019

Explaining and Interpreting LSTMs
L. Arras, Jose A. Arjona-Medina, Michael Widrich, G. Montavon, Michael Gillhofer, K. Müller, Sepp Hochreiter, Wojciech Samek
FAtt, AI4TS · 76 · 79 · 0 · 25 Sep 2019

Model-Agnostic Linear Competitors -- When Interpretable Models Compete and Collaborate with Black-Box Models
Hassan Rafique, Tong Wang, Qihang Lin
51 · 4 · 0 · 23 Sep 2019

Towards Interpreting Recurrent Neural Networks through Probabilistic Abstraction
Guoliang Dong, Jingyi Wang, Jun Sun, Yang Zhang, Xinyu Wang, Ting Dai, J. Dong, Xingen Wang
FaML · 49 · 3 · 0 · 22 Sep 2019

FACE: Feasible and Actionable Counterfactual Explanations
Rafael Poyiadzi, Kacper Sokol, Raúl Santos-Rodríguez, T. D. Bie, Peter A. Flach
84 · 372 · 0 · 20 Sep 2019

Representation Learning for Electronic Health Records
W. Weng, Peter Szolovits
81 · 19 · 0 · 19 Sep 2019

InterpretML: A Unified Framework for Machine Learning Interpretability
Harsha Nori, Samuel Jenkins, Paul Koch, R. Caruana
AI4CE · 171 · 490 · 0 · 19 Sep 2019

Analysing Neural Language Models: Contextual Decomposition Reveals Default Reasoning in Number and Gender Assignment
Jaap Jumelet, Willem H. Zuidema, Dieuwke Hupkes
LRM · 73 · 37 · 0 · 19 Sep 2019

The Explanation Game: Explaining Machine Learning Models Using Shapley Values
Luke Merrick, Ankur Taly
FAtt, TDI · 59 · 33 · 0 · 17 Sep 2019

Measure Contribution of Participants in Federated Learning
Guan Wang, Charlie Xiaoqian Dang, Ziye Zhou
FedML · 111 · 200 · 0 · 17 Sep 2019

Towards a Rigorous Evaluation of XAI Methods on Time Series
U. Schlegel, Hiba Arnout, Mennatallah El-Assady, Daniela Oelke, Daniel A. Keim
XAI, AI4TS · 117 · 174 · 0 · 16 Sep 2019

Shapley Interpretation and Activation in Neural Networks
Yadong Li, Xin Cui
TDI, FAtt, LLMSV · 50 · 3 · 0 · 13 Sep 2019

FAT Forensics: A Python Toolbox for Algorithmic Fairness, Accountability and Transparency
Kacper Sokol, Raúl Santos-Rodríguez, Peter A. Flach
55 · 37 · 0 · 11 Sep 2019

Interpretable Biomanufacturing Process Risk and Sensitivity Analyses for Quality-by-Design and Stability Control
Wei Xie, Bo Wang, Cheng Li, D. Xie, Jared R. Auclair
22 · 20 · 0 · 10 Sep 2019

NormLime: A New Feature Importance Metric for Explaining Deep Neural Networks
Isaac Ahern, Adam Noack, Luis Guzman-Nateras, Dejing Dou, Boyang Albert Li, Jun Huan
FAtt · 55 · 40 · 0 · 10 Sep 2019