Defining and Quantifying the Emergence of Sparse Concepts in DNNs
Jie Ren, Mingjie Li, Qirui Chen, Huiqi Deng, Quanshi Zhang
11 November 2021 · arXiv:2111.06206

Papers citing "Defining and Quantifying the Emergence of Sparse Concepts in DNNs" (22 of 22 papers shown)
- Technical Report: Quantifying and Analyzing the Generalization Power of a DNN (11 May 2025). Yuxuan He, Junpeng Zhang, Lei Cheng, Hongyuan Zhang, Quanshi Zhang. Tags: AI4CE. Citations: 0.
- Interpreting Attributions and Interactions of Adversarial Attacks (16 Aug 2021). Xin Eric Wang, Shuyu Lin, Hao Zhang, Yufei Zhu, Quanshi Zhang. Tags: AAML, FAtt. Citations: 15.
- A Hypothesis for the Aesthetic Appreciation in Neural Networks (31 Jul 2021). Xu Cheng, Xin Eric Wang, Haotian Xue, Zhe Liang, Quanshi Zhang. Citations: 11.
- A Game-Theoretic Taxonomy of Visual Concepts in DNNs (21 Jun 2021). Xu Cheng, Chuntung Chu, Yi Zheng, Jie Ren, Quanshi Zhang. Citations: 21.
- Entropy-based Logic Explanations of Neural Networks (12 Jun 2021). Pietro Barbiero, Gabriele Ciravegna, Francesco Giannini, Pietro Lio, Marco Gori, S. Melacci. Tags: FAtt, XAI. Citations: 79.
- Explanations for Monotonic Classifiers (01 Jun 2021). Sasha Rubin, Thomas Gerspacher, M. Cooper, Alexey Ignatiev, Nina Narodytska. Tags: FAtt. Citations: 44.
- Improving KernelSHAP: Practical Shapley Value Estimation via Linear Regression (02 Dec 2020). Ian Covert, Su-In Lee. Tags: FAtt. Citations: 166.
- A Unified Approach to Interpreting and Boosting Adversarial Transferability (08 Oct 2020). Xin Eric Wang, Jie Ren, Shuyu Lin, Xiangming Zhu, Yisen Wang, Quanshi Zhang. Tags: AAML. Citations: 95.
- Mining Interpretable AOG Representations from Convolutional Networks via Active Question Answering (18 Dec 2018). Quanshi Zhang, Ruiming Cao, Ying Nian Wu, Song-Chun Zhu. Citations: 14.
- Neural Network Acceptability Judgments (31 May 2018). Alex Warstadt, Amanpreet Singh, Samuel R. Bowman. Citations: 1,390.
- Causal Learning and Explanation of Deep Neural Networks via Autoencoded Activations (02 Feb 2018). M. Harradon, Jeff Druce, Brian E. Ruttenberg. Tags: BDL, CML. Citations: 82.
- Beyond Word Importance: Contextual Decomposition to Extract Interactions from LSTMs (16 Jan 2018). W. James Murdoch, Peter J. Liu, Bin Yu. Citations: 209.
- Beyond Sparsity: Tree Regularization of Deep Models for Interpretability (16 Nov 2017). Mike Wu, M. C. Hughes, S. Parbhoo, Maurizio Zazzi, Volker Roth, Finale Doshi-Velez. Tags: AI4CE. Citations: 281.
- Interpreting CNN Knowledge via an Explanatory Graph (05 Aug 2017). Quanshi Zhang, Ruiming Cao, Feng Shi, Ying Nian Wu, Song-Chun Zhu. Tags: FAtt, GNN, SSL. Citations: 242.
- Real Time Image Saliency for Black Box Classifiers (22 May 2017). P. Dabkowski, Y. Gal. Citations: 589.
- Interpretable Explanations of Black Boxes by Meaningful Perturbation (11 Apr 2017). Ruth C. Fong, Andrea Vedaldi. Tags: FAtt, AAML. Citations: 1,514.
- Axiomatic Attribution for Deep Networks (04 Mar 2017). Mukund Sundararajan, Ankur Taly, Qiqi Yan. Tags: OOD, FAtt. Citations: 5,920.
- Not Just a Black Box: Learning Important Features Through Propagating Activation Differences (05 May 2016). Avanti Shrikumar, Peyton Greenside, A. Shcherbina, A. Kundaje. Tags: FAtt. Citations: 782.
- "Why Should I Trust You?": Explaining the Predictions of Any Classifier (16 Feb 2016). Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin. Tags: FAtt, FaML. Citations: 16,828.
- Learning Deep Features for Discriminative Localization (14 Dec 2015). Bolei Zhou, A. Khosla, Àgata Lapedriza, A. Oliva, Antonio Torralba. Tags: SSL, SSeg, FAtt. Citations: 9,280.
- Object Detectors Emerge in Deep Scene CNNs (22 Dec 2014). Bolei Zhou, A. Khosla, Àgata Lapedriza, A. Oliva, Antonio Torralba. Tags: ObjD. Citations: 1,279.
- Visualizing and Understanding Convolutional Networks (12 Nov 2013). Matthew D. Zeiler, Rob Fergus. Tags: FAtt, SSL. Citations: 15,849.