Network Dissection: Quantifying Interpretability of Deep Visual Representations
David Bau, Bolei Zhou, A. Khosla, A. Oliva, Antonio Torralba
arXiv:1704.05796, 19 April 2017
Tags: MILM, FAtt
Papers citing "Network Dissection: Quantifying Interpretability of Deep Visual Representations" (showing 50 of 787)
- Role Taxonomy of Units in Deep Neural Networks. Yang Zhao, Hao Zhang, Xiuyuan Hu. 02 Nov 2020.
- Quantifying Learnability and Describability of Visual Concepts Emerging in Representation Learning. Iro Laina, Ruth C. Fong, Andrea Vedaldi. Tags: OCL. 27 Oct 2020.
- Exemplary Natural Images Explain CNN Activations Better than State-of-the-Art Feature Visualization. Judy Borowski, Roland S. Zimmermann, Judith Schepers, Robert Geirhos, Thomas S. A. Wallis, Matthias Bethge, Wieland Brendel. Tags: FAtt. 23 Oct 2020.
- Deep Neural Mobile Networking. Chaoyun Zhang. 23 Oct 2020.
- Towards falsifiable interpretability research. Matthew L. Leavitt, Ari S. Morcos. Tags: AAML, AI4CE. 22 Oct 2020.
- What do CNN neurons learn: Visualization & Clustering. Haoyue Dai. Tags: SSL. 18 Oct 2020.
- Difference-in-Differences: Bridging Normalization and Disentanglement in PG-GAN. Xiao-Yang Liu, Jiajin Zhang, Siting Li, Zuotong Wu, Yang Yu. 16 Oct 2020.
- Interpreting Deep Learning Model Using Rule-based Method. Xiaojian Wang, Jingyuan Wang, Ke Tang. 15 Oct 2020.
- Distilling a Deep Neural Network into a Takagi-Sugeno-Kang Fuzzy Inference System. Xiangming Gu, Xiang Cheng. 10 Oct 2020.
- Unsupervised Point Cloud Pre-Training via Occlusion Completion. Hanchen Wang, Qi Liu, Xiangyu Yue, Joan Lasenby, Matt J. Kusner. Tags: 3DPC. 02 Oct 2020.
- VATLD: A Visual Analytics System to Assess, Understand and Improve Traffic Light Detection. Liang Gou, Lincan Zou, Nanxiang Li, M. Hofmann, A. Shekar, A. Wendt, Liu Ren. 27 Sep 2020.
- Tied Block Convolution: Leaner and Better CNNs with Shared Thinner Filters. Xudong Wang, Stella X. Yu. 25 Sep 2020.
- Improving Robustness and Generality of NLP Models Using Disentangled Representations. Jiawei Wu, Xiaoya Li, Xiang Ao, Yuxian Meng, Leilei Gan, Jiwei Li. Tags: OOD, DRL. 21 Sep 2020.
- Contextual Semantic Interpretability. Diego Marcos, Ruth C. Fong, Sylvain Lobry, Rémi Flamary, Nicolas Courty, D. Tuia. Tags: SSL. 18 Sep 2020.
- The Intriguing Relation Between Counterfactual Explanations and Adversarial Examples. Timo Freiesleben. Tags: GAN. 11 Sep 2020.
- CuratorNet: Visually-aware Recommendation of Art Images. Pablo Messina, Manuel Cartagena, Patricio Cerda, Felipe del-Rio, Denis Parra. 09 Sep 2020.
- Quantifying Explainability of Saliency Methods in Deep Neural Networks with a Synthetic Dataset. Erico Tjoa, Cuntai Guan. Tags: XAI, FAtt. 07 Sep 2020.
- A Survey on Machine Learning from Few Samples. Jiang Lu, Pinghua Gong, Jieping Ye, Jianwei Zhang, Changshui Zhang. 06 Sep 2020.
- Bluff: Interactively Deciphering Adversarial Attacks on Deep Neural Networks. Nilaksh Das, Haekyu Park, Zijie J. Wang, Fred Hohman, Robert Firstman, Emily Rogers, Duen Horng Chau. Tags: AAML. 05 Sep 2020.
- What is being transferred in transfer learning? Behnam Neyshabur, Hanie Sedghi, Chiyuan Zhang. 26 Aug 2020.
- Tackling COVID-19 through Responsible AI Innovation: Five Steps in the Right Direction. David Leslie. 15 Aug 2020.
- Abstracting Deep Neural Networks into Concept Graphs for Concept Level Interpretability. Avinash Kori, Parth Natekar, Ganapathy Krishnamurthi, Balaji Srinivasan. 14 Aug 2020.
- Survey of XAI in digital pathology. Milda Pocevičiūtė, Gabriel Eilertsen, Claes Lundström. 14 Aug 2020.
- Axiom-based Grad-CAM: Towards Accurate Visualization and Explanation of CNNs. Ruigang Fu, Qingyong Hu, Xiaohu Dong, Yulan Guo, Yinghui Gao, Biao Li. Tags: FAtt. 05 Aug 2020.
- Making Sense of CNNs: Interpreting Deep Representations & Their Invariances with INNs. Robin Rombach, Patrick Esser, Björn Ommer. 04 Aug 2020.
- Explainable Face Recognition. Jonathan R. Williford, Brandon B. May, J. Byrne. Tags: CVBM. 03 Aug 2020.
- Learning Task-oriented Disentangled Representations for Unsupervised Domain Adaptation. Pingyang Dai, Peixian Chen, Qiong Wu, Xiaopeng Hong, Qixiang Ye, Q. Tian, Rongrong Ji. Tags: OOD. 27 Jul 2020.
- Are Visual Explanations Useful? A Case Study in Model-in-the-Loop Prediction. Eric Chu, D. Roy, Jacob Andreas. Tags: FAtt, LRM. 23 Jul 2020.
- Interpretable Anomaly Detection with DIFFI: Depth-based Isolation Forest Feature Importance. Mattia Carletti, M. Terzi, Gian Antonio Susto. 21 Jul 2020.
- Volumetric Transformer Networks. Seungryong Kim, Sabine Süsstrunk, Mathieu Salzmann. Tags: ViT. 18 Jul 2020.
- Understanding and Diagnosing Vulnerability under Adversarial Attacks. Haizhong Zheng, Ziqi Zhang, Honglak Lee, A. Prakash. Tags: FAtt, AAML. 17 Jul 2020.
- Training Interpretable Convolutional Neural Networks by Differentiating Class-specific Filters. Haoyun Liang, Zhihao Ouyang, Yuyuan Zeng, Hang Su, Zihao He, Shutao Xia, Jun Zhu, Bo Zhang. 16 Jul 2020.
- When and how CNNs generalize to out-of-distribution category-viewpoint combinations. Spandan Madan, Timothy M. Henry, Jamell Dozier, Helen Ho, Nishchal Bhandari, Tomotake Sasaki, F. Durand, Hanspeter Pfister, Xavier Boix. Tags: OOD. 15 Jul 2020.
- Visualizing Transfer Learning. Róbert Szabó, Dániel Katona, M. Csillag, Adrián Csiszárik, D. Varga. 15 Jul 2020.
- Locality Guided Neural Networks for Explainable Artificial Intelligence. Randy Tan, N. Khan, L. Guan. 12 Jul 2020.
- Adversarially-Trained Deep Nets Transfer Better: Illustration on Image Classification. Francisco Utrera, Evan Kravitz, N. Benjamin Erichson, Rekha Khanna, Michael W. Mahoney. Tags: GAN. 11 Jul 2020.
- Scientific Discovery by Generating Counterfactuals using Image Translation. Arunachalam Narayanaswamy, Subhashini Venugopalan, D. Webster, L. Peng, G. Corrado, ..., Abigail E. Huang, Siva Balasubramanian, Michael P. Brenner, Phil Q. Nelson, A. Varadarajan. Tags: DiffM, MedIm. 10 Jul 2020.
- Concept Bottleneck Models. Pang Wei Koh, Thao Nguyen, Y. S. Tang, Stephen Mussmann, Emma Pierson, Been Kim, Percy Liang. 09 Jul 2020.
- Hierarchical nucleation in deep neural networks. Diego Doimo, Aldo Glielmo, A. Ansuini, Alessandro Laio. Tags: BDL, AI4CE. 07 Jul 2020.
- Are there any 'object detectors' in the hidden layers of CNNs trained to identify objects or scenes? E. Gale, Nicholas Martin, R. Blything, Anh Nguyen, J. Bowers. 02 Jul 2020.
- Robustness to Transformations Across Categories: Is Robustness To Transformations Driven by Invariant Neural Representations? Hojin Jang, Syed Suleman Abbas Zaidi, Xavier Boix, Neeraj Prasad, Sharon Gilad-Gutnick, S. Ben-Ami, P. Sinha. Tags: OOD. 30 Jun 2020.
- Building Interpretable Interaction Trees for Deep NLP Models. Die Zhang, Huilin Zhou, Hao Zhang, Xiaoyi Bao, Da Huo, Ruizhao Chen, Xu Cheng, Mengyue Wu, Quanshi Zhang. Tags: FAtt. 29 Jun 2020.
- Compositional Convolutional Neural Networks: A Robust and Interpretable Model for Object Recognition under Occlusion. Adam Kortylewski, Qing Liu, Angtian Wang, Yihong Sun, Alan Yuille. 28 Jun 2020.
- Video Representation Learning with Visual Tempo Consistency. Ceyuan Yang, Yinghao Xu, Bo Dai, Bolei Zhou. 28 Jun 2020.
- Invertible Concept-based Explanations for CNN Models with Non-negative Concept Activation Vectors. Ruihan Zhang, Prashan Madumal, Tim Miller, Krista A. Ehinger, Benjamin I. P. Rubinstein. Tags: FAtt. 27 Jun 2020.
- Proper Network Interpretability Helps Adversarial Robustness in Classification. Akhilan Boopathy, Sijia Liu, Gaoyuan Zhang, Cynthia Liu, Pin-Yu Chen, Shiyu Chang, Luca Daniel. Tags: AAML, FAtt. 26 Jun 2020.
- Compositional Explanations of Neurons. Jesse Mu, Jacob Andreas. Tags: FAtt, CoGe, MILM. 24 Jun 2020.
- The shape and simplicity biases of adversarially robust ImageNet-trained CNNs. Peijie Chen, Chirag Agarwal, Anh Totti Nguyen. Tags: AAML. 16 Jun 2020.
- GAN Memory with No Forgetting. Yulai Cong, Miaoyun Zhao, Jianqiao Li, Sijia Wang, Lawrence Carin. Tags: CLL. 13 Jun 2020.
- Learning Effective Representations for Person-Job Fit by Feature Fusion. Jun-hai Jiang, Songyun Ye, Wei Wang, Jingran Xu, Xia Luo. Tags: FaML. 12 Jun 2020.