arXiv:2409.13137
Cited By
Interpret the Predictions of Deep Networks via Re-Label Distillation (20 September 2024)
Yingying Hua, Shiming Ge, Daichi Zhang
Tags: FAtt

Papers citing "Interpret the Predictions of Deep Networks via Re-Label Distillation" (18 papers shown)

A Survey on Neural Network Interpretability (28 Dec 2020)
Yu Zhang, Peter Tiño, A. Leonardis, K. Tang
Tags: FaML, XAI. Citations: 682.

Interpretable Machine Learning -- A Brief History, State-of-the-Art and Challenges (19 Oct 2020)
Christoph Molnar, Giuseppe Casalicchio, B. Bischl
Tags: AI4TS, AI4CE. Citations: 403.

Explaining Neural Network Predictions for Functional Data Using Principal Component Analysis and Feature Importance (15 Oct 2020)
Katherine Goode, Daniel Ries, J. Zollweg
Citations: 3.

ICAM: Interpretable Classification via Disentangled Representations and Feature Attribution Mapping (15 Jun 2020)
Cher Bass, Mariana da Silva, Carole Sudre, Petru-Daniel Tudosiu, Stephen M. Smith, E. C. Robinson
Tags: FAtt. Citations: 40.

Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI (22 Oct 2019)
Alejandro Barredo Arrieta, Natalia Díaz Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, ..., S. Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila, Francisco Herrera
Tags: XAI. Citations: 6,269.

Understanding Deep Networks via Extremal Perturbations and Smooth Masks (18 Oct 2019)
Ruth C. Fong, Mandela Patrick, Andrea Vedaldi
Tags: AAML. Citations: 416.

Score-CAM: Score-Weighted Visual Explanations for Convolutional Neural Networks (03 Oct 2019)
Mehdi Neshat, Zifan Wang, Bradley Alexander, Fan Yang, Zijian Zhang, Sirui Ding, Markus Wagner, Xia Hu
Tags: FAtt. Citations: 1,074.

Interpretable and Fine-Grained Visual Explanations for Convolutional Neural Networks (07 Aug 2019)
Jörg Wagner, Jan M. Köhler, Tobias Gindele, Leon Hetzel, Thaddäus Wiedemer, Sven Behnke
Tags: AAML, FAtt. Citations: 122.

Improving the Interpretability of Deep Neural Networks with Knowledge Distillation (28 Dec 2018)
Xuan Liu, Xiaoguang Wang, Stan Matwin
Tags: HAI. Citations: 101.

RISE: Randomized Input Sampling for Explanation of Black-box Models (19 Jun 2018)
Vitali Petsiuk, Abir Das, Kate Saenko
Tags: FAtt. Citations: 1,171.

Explanation Methods in Deep Learning: Users, Values, Concerns and Challenges (20 Mar 2018)
Gabrielle Ras, Marcel van Gerven, W. Haselager
Tags: XAI. Citations: 219.

Visual Interpretability for Deep Learning: a Survey (02 Feb 2018)
Quanshi Zhang, Song-Chun Zhu
Tags: FaML, HAI. Citations: 821.

Grammar Variational Autoencoder (06 Mar 2017)
Matt J. Kusner, Brooks Paige, José Miguel Hernández-Lobato
Tags: BDL, DRL. Citations: 844.

Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization (07 Oct 2016)
Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, Dhruv Batra
Tags: FAtt. Citations: 20,070.

Top-down Neural Attention by Excitation Backprop (01 Aug 2016)
Jianming Zhang, Zhe Lin, Jonathan Brandt, Xiaohui Shen, Stan Sclaroff
Citations: 948.

"Why Should I Trust You?": Explaining the Predictions of Any Classifier (16 Feb 2016)
Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
Tags: FAtt, FaML. Citations: 17,027.

Distilling the Knowledge in a Neural Network (09 Mar 2015)
Geoffrey E. Hinton, Oriol Vinyals, J. Dean
Tags: FedML. Citations: 19,723.

Visualizing and Understanding Convolutional Networks (12 Nov 2013)
Matthew D. Zeiler, Rob Fergus
Tags: FAtt, SSL. Citations: 15,893.