An unexpected unity among methods for interpreting model predictions (arXiv:1611.07478)
22 November 2016
Scott M. Lundberg, Su-In Lee
FAtt
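
For context on the title, summarizing the paper itself: its central observation is that several widely used explanation methods (among them LIME, DeepLIFT, layer-wise relevance propagation, and classical Shapley-value estimators) are all additive feature attribution methods, i.e. each explains a prediction with a model that is linear in simplified binary inputs:

    g(z') = \phi_0 + \sum_{i=1}^{M} \phi_i z'_i, \qquad z' \in \{0, 1\}^M,

where M is the number of simplified input features and \phi_i is the attribution credited to feature i. The methods differ only in how they assign the \phi_i; the paper argues that Shapley values yield the unique attributions satisfying a small set of desirable properties.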

Papers citing "An unexpected unity among methods for interpreting model predictions"

22 / 22 papers shown
  1. Mapping Knowledge Representations to Concepts: A Review and New Perspectives
     Lars Holmberg, P. Davidsson, Per Linde · 31 Dec 2022
  2. Easy to Decide, Hard to Agree: Reducing Disagreements Between Saliency Methods
     Josip Jukić, Martin Tutek, Jan Snajder · FAtt · 15 Nov 2022
  3. Machine Learning for a Sustainable Energy Future
     Zhenpeng Yao, Yanwei Lum, Andrew K. Johnston, L. M. Mejia-Mendoza, Xiaoxia Zhou, Yonggang Wen, Alán Aspuru-Guzik, E. Sargent, Z. Seh · 19 Oct 2022
  4. Detection of ADHD based on Eye Movements during Natural Viewing
     Shuwen Deng, Paul Prasse, D. R. Reich, S. Dziemian, Maja Stegenwallner-Schütz, Daniel G. Krakowczyk, Silvia Makowski, N. Langer, Tobias Scheffer, Lena A. Jäger · 04 Jul 2022
  5. Decorrelated Variable Importance
     I. Verdinelli, Larry A. Wasserman · FAtt · 21 Nov 2021
  6. Explaining Deep Reinforcement Learning Agents In The Atari Domain through a Surrogate Model
     Alexander Sieusahai, Matthew J. Guzdial · 07 Oct 2021
  7. Information-theoretic Evolution of Model Agnostic Global Explanations
     Sukriti Verma, Nikaash Puri, Piyush B. Gupta, Balaji Krishnamurthy · FAtt · 14 May 2021
  8. Why model why? Assessing the strengths and limitations of LIME
     Jurgen Dieber, S. Kirrane · FAtt · 30 Nov 2020
  9. Explainable Predictive Process Monitoring
     Musabir Musabayli, F. Maggi, Williams Rizzi, Josep Carmona, Chiara Di Francescomarino · 04 Aug 2020
  10. Making deep neural networks right for the right scientific reasons by interacting with their explanations
      P. Schramowski, Wolfgang Stammer, Stefano Teso, Anna Brugger, Xiaoting Shao, Hans-Georg Luigs, Anne-Katrin Mahlein, Kristian Kersting · 15 Jan 2020
  11. Technical Report: Partial Dependence through Stratification
      T. Parr, James D. Wilson · 15 Jul 2019
  12. Global Aggregations of Local Explanations for Black Box models
      I. V. D. Linden, H. Haned, Evangelos Kanoulas · FAtt · 05 Jul 2019
  13. Training Machine Learning Models by Regularizing their Explanations
      A. Ross · FaML · 29 Sep 2018
  14. Stakeholders in Explainable AI
      Alun D. Preece, Daniel Harborne, Dave Braines, Richard J. Tomsett, Supriyo Chakraborty · 29 Sep 2018
  15. Contrastive Explanations for Reinforcement Learning in terms of Expected Consequences
      J. V. D. Waa, J. Diggelen, K. Bosch, Mark Antonius Neerincx · OffRL · 23 Jul 2018
  16. Contrastive Explanations with Local Foil Trees
      J. V. D. Waa, M. Robeer, J. Diggelen, Matthieu J. S. Brinkhuis, Mark Antonius Neerincx · FAtt · 19 Jun 2018
  17. "Why Should I Trust Interactive Learners?" Explaining Interactive Queries of Classifiers to Users
      Stefano Teso, Kristian Kersting · FAtt, HAI · 22 May 2018
  18. Beyond Sparsity: Tree Regularization of Deep Models for Interpretability
      Mike Wu, M. C. Hughes, S. Parbhoo, Maurizio Zazzi, Volker Roth, Finale Doshi-Velez · AI4CE · 16 Nov 2017
  19. MAGIX: Model Agnostic Globally Interpretable Explanations
      Nikaash Puri, Piyush B. Gupta, Pratiksha Agarwal, Sukriti Verma, Balaji Krishnamurthy · FAtt · 22 Jun 2017
  20. Interpreting Blackbox Models via Model Extraction
      Osbert Bastani, Carolyn Kim, Hamsa Bastani · FAtt · 23 May 2017
  21. Right for the Right Reasons: Training Differentiable Models by Constraining their Explanations
      A. Ross, M. C. Hughes, Finale Doshi-Velez · FAtt · 10 Mar 2017
  22. Axiomatic Attribution for Deep Networks
      Mukund Sundararajan, Ankur Taly, Qiqi Yan · OOD, FAtt · 04 Mar 2017