Interpreting CNNs via Decision Trees
arXiv:1802.00121, v2 (latest)
1 February 2018
Quanshi Zhang, Yu Yang, Ying Nian Wu, Song-Chun Zhu
[FAtt]
Papers citing "Interpreting CNNs via Decision Trees" (34 of 134 shown)
Explaining Knowledge Distillation by Quantifying the Knowledge
  Xu Cheng, Zhefan Rao, Yilan Chen, Quanshi Zhang | 07 Mar 2020 | 94 / 122 / 0

What's the relationship between CNNs and communication systems?
  Hao Ge, X. Tu, Yanxiang Gong, M. Xie, Zheng Ma | 03 Mar 2020 | 32 / 0 / 0

Leveraging Rationales to Improve Human Task Performance
  Devleena Das, Sonia Chernova | 11 Feb 2020 | 87 / 50 / 0

Questioning the AI: Informing Design Practices for Explainable AI User Experiences
  Q. V. Liao, D. Gruen, Sarah Miller | 08 Jan 2020 | 156 / 734 / 0

Transparent Classification with Multilayer Logical Perceptrons and Random Binarization
  Zhuo Wang, Wei Zhang, Ning Liu, Jianyong Wang | 10 Dec 2019 | 61 / 30 / 0

DRNet: Dissect and Reconstruct the Convolutional Neural Network via Interpretable Manners
  Xiaolong Hu, Zhulin An, Chuanguang Yang, Hui Zhu, Kaiqiang Xu, Yongjun Xu | 20 Nov 2019 | 60 / 3 / 0

TAB-VCR: Tags and Attributes based Visual Commonsense Reasoning Baselines [LRM, ReLM]
  Jingxiang Lin, Unnat Jain, Alex Schwing | 31 Oct 2019 | 107 / 9 / 0

Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI [XAI]
  Alejandro Barredo Arrieta, Natalia Díaz Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, ..., S. Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila, Francisco Herrera | 22 Oct 2019 | 383 / 6,403 / 0

A Logic-Based Framework Leveraging Neural Networks for Studying the Evolution of Neurological Disorders
  Francesco Calimeri, Francesco Cauteruccio, Luca Cinelli, A. Marzullo, C. Stamile, G. Terracina, F. Durand-Dubief, D. Sappey-Marinier | 21 Oct 2019 | 48 / 19 / 0

A game method for improving the interpretability of convolution neural network [AI4CE]
  Jinwei Zhao, Qizhou Wang, Fuqiang Zhang, Wanli Qiu, Yufei Wang, Yu Liu, Guo Xie, Weigang Ma, Bin Wang, Xinhong Hei | 21 Oct 2019 | 63 / 0 / 0

Leveraging Model Interpretability and Stability to increase Model Robustness [AAML, FAtt]
  Leilei Gan, T. Michel, Alexandre Briot | 01 Oct 2019 | 48 / 1 / 0

Facial age estimation by deep residual decision making [CVBM]
  Shichao Li, K. Cheng | 28 Aug 2019 | 47 / 6 / 0

Evaluating Explanation Without Ground Truth in Interpretable Machine Learning [XAI, ELM]
  Fan Yang, Mengnan Du, Helen Zhou | 16 Jul 2019 | 74 / 67 / 0

Conservative Q-Improvement: Reinforcement Learning for an Interpretable Decision-Tree Policy [OffRL]
  Aaron M. Roth, Nicholay Topin, Pooyan Jamshidi, Manuela Veloso | 02 Jul 2019 | 95 / 48 / 0

Trepan Reloaded: A Knowledge-driven Approach to Explaining Artificial Neural Networks
  R. Confalonieri, Tillman Weyde, Tarek R. Besold, Fermín Moscoso del Prado Martín | 19 Jun 2019 | 58 / 24 / 0

Extracting Interpretable Concept-Based Decision Trees from CNNs [FAtt]
  Conner Chyung, Michael Tsang, Yan Liu | 11 Jun 2019 | 63 / 8 / 0

Interpretable Neural Network Decoupling
  Yuchao Li, Rongrong Ji, Shaohui Lin, Baochang Zhang, Chenqian Yan, Yongjian Wu, Feiyue Huang, Ling Shao | 04 Jun 2019 | 77 / 2 / 0

Concise Fuzzy System Modeling Integrating Soft Subspace Clustering and Sparse Learning
  Peng Xu, Zhaohong Deng, Chen Cui, Te Zhang, K. Choi, Suhang Gu, Jun Wang, Shitong Wang | 24 Apr 2019 | 59 / 32 / 0

Visualizing the decision-making process in deep neural decision forest [FAtt]
  Shichao Li, K. Cheng | 19 Apr 2019 | 69 / 7 / 0

Optimization Methods for Interpretable Differentiable Decision Trees in Reinforcement Learning [OffRL]
  I. D. Rodriguez, Taylor W. Killian, Ivan Dario Jimenez Rodriguez, Sung-Hyun Son, Matthew C. Gombolay | 22 Mar 2019 | 87 / 12 / 0

A novel method for extracting interpretable knowledge from a spiking neural classifier with time-varying synaptic weights
  Abeegithan Jeyasothy, Suresh Sundaram, Savitha Ramasamy, N. Sundararajan | 28 Feb 2019 | 48 / 4 / 0

Architecting Dependable Learning-enabled Autonomous Systems: A Survey
  Chih-Hong Cheng, Dhiraj Gulati, Rongjie Yan | 27 Feb 2019 | 56 / 4 / 0

Learning Decision Trees Recurrently Through Communication
  Stephan Alaniz, Diego Marcos, Bernt Schiele, Zeynep Akata | 05 Feb 2019 | 71 / 16 / 0

Interpretable CNNs for Object Classification
  Quanshi Zhang, Xin Eric Wang, Ying Nian Wu, Huilin Zhou, Song-Chun Zhu | 08 Jan 2019 | 61 / 54 / 0

Improving the Interpretability of Deep Neural Networks with Knowledge Distillation [HAI]
  Xuan Liu, Xiaoguang Wang, Stan Matwin | 28 Dec 2018 | 81 / 101 / 0

Explanatory Graphs for CNNs [FAtt, GNN]
  Quanshi Zhang, Xin Eric Wang, Ruiming Cao, Ying Nian Wu, Feng Shi, Song-Chun Zhu | 18 Dec 2018 | 49 / 3 / 0

Explaining Neural Networks Semantically and Quantitatively [FAtt]
  Runjin Chen, Hao Chen, Ge Huang, Jie Ren, Quanshi Zhang | 18 Dec 2018 | 81 / 56 / 0

Abduction-Based Explanations for Machine Learning Models [FAtt]
  Alexey Ignatiev, Nina Narodytska, Sasha Rubin | 26 Nov 2018 | 74 / 226 / 0

HSD-CNN: Hierarchically self decomposing CNN architecture using class specific filter sensitivity analysis
  Kasanagottu Sairam, J. Mukherjee, A. Patra, P. Das | 11 Nov 2018 | 72 / 5 / 0

Semantic bottleneck for computer vision tasks
  Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, P. Frossard | 06 Nov 2018 | 95 / 17 / 0

Explainable Neural Computation via Stack Neural Module Networks [LRM, OCL]
  Ronghang Hu, Jacob Andreas, Trevor Darrell, Kate Saenko | 23 Jul 2018 | 132 / 199 / 0

Object-Oriented Dynamics Predictor [AI4CE]
  Guangxiang Zhu, Zhiao Huang, Chongjie Zhang | 25 May 2018 | 98 / 35 / 0

Unsupervised Learning of Neural Networks to Explain Neural Networks [FAtt, SSL]
  Quanshi Zhang, Yu Yang, Yuchen Liu, Ying Nian Wu, Song-Chun Zhu | 18 May 2018 | 80 / 27 / 0

Visual Interpretability for Deep Learning: a Survey [FaML, HAI]
  Quanshi Zhang, Song-Chun Zhu | 02 Feb 2018 | 210 / 827 / 0