A Unified Approach to Interpreting Model Predictions

22 May 2017 · Scott M. Lundberg, Su-In Lee · FAtt
ArXiv (abs) · PDF · HTML
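This paper introduces SHAP (SHapley Additive exPlanations), a game-theoretic framework for per-feature attribution of individual model predictions. Below is a minimal sketch of computing SHAP values with the open-source `shap` package that accompanies the paper; the model and dataset choices (XGBoost, scikit-learn's breast-cancer data) are illustrative assumptions, not part of this listing.

```python
# Minimal SHAP usage sketch (illustrative model and data; assumes the
# open-source `shap`, `xgboost`, and `scikit-learn` packages are installed).
import shap
import xgboost
from sklearn.datasets import load_breast_cancer

# Load a small tabular dataset and fit a tree-ensemble classifier.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = xgboost.XGBClassifier().fit(X, y)

# TreeExplainer computes exact SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)  # one attribution per feature per sample

# Global summary of which features drive the model's predictions.
shap.summary_plot(shap_values, X)
```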

Papers citing "A Unified Approach to Interpreting Model Predictions"

16 of 3,916 citing papers shown.

Explanations of model predictions with live and breakDown packages
M. Staniak, P. Biecek · FAtt · 61 / 118 / 0 · 05 Apr 2018

The Challenge of Crafting Intelligible Intelligence
Daniel S. Weld, Gagan Bansal · 58 / 244 / 0 · 09 Mar 2018

Learning to Explain: An Information-Theoretic Perspective on Model Interpretation
Jianbo Chen, Le Song, Martin J. Wainwright, Michael I. Jordan · MLT, FAtt · 184 / 576 / 0 · 21 Feb 2018

Interpreting Neural Network Judgments via Minimal, Stable, and Symbolic Corrections
Xin Zhang, Armando Solar-Lezama, Rishabh Singh · FAtt · 115 / 63 / 0 · 21 Feb 2018

Consistent Individualized Feature Attribution for Tree Ensembles
Scott M. Lundberg, G. Erion, Su-In Lee · FAtt, TDI · 90 / 1,410 / 0 · 12 Feb 2018

Granger-causal Attentive Mixtures of Experts: Learning Important Features with Neural Networks
Patrick Schwab, Djordje Miladinovic, W. Karlen · CML · 107 / 57 / 0 · 06 Feb 2018

Interpreting CNNs via Decision Trees
Quanshi Zhang, Yu Yang, Ying Nian Wu, Song-Chun Zhu · FAtt · 104 / 324 / 0 · 01 Feb 2018

Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)
Been Kim, Martin Wattenberg, Justin Gilmer, Carrie J. Cai, James Wexler, F. Viégas, Rory Sayres · FAtt · 272 / 1,851 / 0 · 30 Nov 2017

MARGIN: Uncovering Deep Neural Networks using Graph Signal Analysis
Rushil Anirudh, Jayaraman J. Thiagarajan, R. Sridhar, T. Bremer · FAtt, AAML · 72 / 12 / 0 · 15 Nov 2017

Embedding Deep Networks into Visual Explanations
Zhongang Qi, Saeed Khorram, Fuxin Li · 41 / 27 / 0 · 15 Sep 2017

MAGIX: Model Agnostic Globally Interpretable Explanations
Nikaash Puri, Piyush B. Gupta, Pratiksha Agarwal, Sukriti Verma, Balaji Krishnamurthy · FAtt · 111 / 41 / 0 · 22 Jun 2017

Consistent feature attribution for tree ensembles
Scott M. Lundberg, Su-In Lee · FAtt · 80 / 122 / 0 · 19 Jun 2017

Contextual Explanation Networks
Maruan Al-Shedivat, Kumar Avinava Dubey, Eric Xing · CML · 112 / 83 / 0 · 29 May 2017

Learning Important Features Through Propagating Activation Differences
Avanti Shrikumar, Peyton Greenside, A. Kundaje · FAtt · 209 / 3,896 / 0 · 10 Apr 2017

Not Just a Black Box: Learning Important Features Through Propagating Activation Differences
Avanti Shrikumar, Peyton Greenside, A. Shcherbina, A. Kundaje · FAtt · 124 / 793 / 0 · 05 May 2016

"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin · FAtt, FaML · 1.3K / 17,225 / 0 · 16 Feb 2016