ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Global Explanations of Neural Networks: Mapping the Landscape of Predictions

6 February 2019
Mark Ibrahim, Melissa Louie, C. Modarres, John Paisley
FAtt

Papers citing "Global Explanations of Neural Networks: Mapping the Landscape of Predictions"

14 / 14 papers shown
ShapG: new feature importance method based on the Shapley value
Chi Zhao, Jing Liu, Elena Parilina
FAtt · 29 Jun 2024
The Disagreement Problem in Explainable Machine Learning: A Practitioner's Perspective
Satyapriya Krishna, Tessa Han, Alex Gu, Steven Wu, S. Jabbari, Himabindu Lakkaraju
03 Feb 2022
Global Model Interpretation via Recursive Partitioning
Chengliang Yang, Anand Rangarajan, Sanjay Ranka
FAtt · 11 Feb 2018
Distilling a Neural Network Into a Soft Decision Tree
Nicholas Frosst, Geoffrey E. Hinton
27 Nov 2017
The (Un)reliability of saliency methods
Pieter-Jan Kindermans, Sara Hooker, Julius Adebayo, Maximilian Alber, Kristof T. Schütt, Sven Dähne, D. Erhan, Been Kim
FAtt, XAI · 02 Nov 2017
A Unified Approach to Interpreting Model Predictions
Scott M. Lundberg, Su-In Lee
FAtt · 22 May 2017
Learning how to explain neural networks: PatternNet and PatternAttribution
Pieter-Jan Kindermans, Kristof T. Schütt, Maximilian Alber, K. Müller, D. Erhan, Been Kim, Sven Dähne
XAI, FAtt · 16 May 2017
Learning Important Features Through Propagating Activation Differences
Avanti Shrikumar, Peyton Greenside, A. Kundaje
FAtt · 10 Apr 2017
Axiomatic Attribution for Deep Networks
Mukund Sundararajan, Ankur Taly, Qiqi Yan
OOD, FAtt · 04 Mar 2017
Towards A Rigorous Science of Interpretable Machine Learning
Finale Doshi-Velez, Been Kim
XAI, FaML · 28 Feb 2017
Layer-wise Relevance Propagation for Neural Networks with Local Renormalization Layers
Alexander Binder, G. Montavon, Sebastian Lapuschkin, K. Müller, Wojciech Samek
FAtt · 04 Apr 2016
XGBoost: A Scalable Tree Boosting System
Tianqi Chen, Carlos Guestrin
09 Mar 2016
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
FAtt, FaML · 16 Feb 2016
Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps
Karen Simonyan, Andrea Vedaldi, Andrew Zisserman
FAtt · 20 Dec 2013