Network Dissection: Quantifying Interpretability of Deep Visual Representations
arXiv 1704.05796, 19 April 2017
Authors: David Bau, Bolei Zhou, A. Khosla, A. Oliva, Antonio Torralba
Tags: MILM, FAtt
Links: arXiv (abs), PDF, HTML
Papers citing "Network Dissection: Quantifying Interpretability of Deep Visual Representations"
Showing 50 of 787 citing papers.
Self-conditioning pre-trained language models. Xavier Suau, Luca Zappella, N. Apostoloff. 30 Sep 2021.
TSM: Temporal Shift Module for Efficient and Scalable Video Understanding on Edge Device. Ji Lin, Chuang Gan, Kuan-Chieh Wang, Song Han. 27 Sep 2021.
Learning Interpretable Concept Groups in CNNs. Saurabh Varshneya, Antoine Ledent, Robert A. Vandermeulen, Yunwen Lei, Matthias Enders, Damian Borth, Marius Kloft. 21 Sep 2021.
Explaining Convolutional Neural Networks by Tagging Filters. Anna Nguyen, Daniel Hagenmayer, T. Weller, Michael Färber. Tags: FAtt. 20 Sep 2021.
Detection Accuracy for Evaluating Compositional Explanations of Units. Sayo M. Makinwa, Biagio La Rosa, Roberto Capobianco. Tags: FAtt, CoGe. 16 Sep 2021.
Cross-Model Consensus of Explanations and Beyond for Image Classification Models: An Empirical Study. Xuhong Li, Haoyi Xiong, Siyu Huang, Shilei Ji, Dejing Dou. 02 Sep 2021.
Understanding of Kernels in CNN Models by Suppressing Irrelevant Visual Features in Images. Jiafan Zhuang, Wanying Tao, Jianfei Xing, Wei Shi, Ruixuan Wang, Weishi Zheng. Tags: FAtt. 25 Aug 2021.
Interpreting Face Inference Models using Hierarchical Network Dissection. Divyang Teotia, Àgata Lapedriza, Sarah Ostadabbas. Tags: CVBM. 23 Aug 2021.
Towards Interpretable Deep Networks for Monocular Depth Estimation. Zunzhi You, Yi-Hsuan Tsai, W. Chiu, Guanbin Li. Tags: FAtt. 11 Aug 2021.
Interpreting Generative Adversarial Networks for Interactive Image Generation. Bolei Zhou. Tags: GAN. 10 Aug 2021.
COVID-view: Diagnosis of COVID-19 using Chest CT. Shreeraj Jadhav, Gaofeng Deng, M. Zawin, Arie Kaufman. 09 Aug 2021.
Spatiotemporal Contrastive Learning of Facial Expressions in Videos. Shuvendu Roy, Ali Etemad. 06 Aug 2021.
Where do Models go Wrong? Parameter-Space Saliency Maps for Explainability. Roman Levin, Manli Shu, Eitan Borgnia, Furong Huang, Micah Goldblum, Tom Goldstein. Tags: FAtt, AAML. 03 Aug 2021.
Shared Interest: Measuring Human-AI Alignment to Identify Recurring Patterns in Model Behavior. Angie Boggust, Benjamin Hoover, Arvindmani Satyanarayan, Hendrik Strobelt. 20 Jul 2021.
One Map Does Not Fit All: Evaluating Saliency Map Explanation on Multi-Modal Medical Images. Weina Jin, Xiaoxiao Li, Ghassan Hamarneh. Tags: FAtt. 11 Jul 2021.
Using Causal Analysis for Conceptual Deep Learning Explanation. Sumedha Singla, Stephen Wallace, Sofia Triantafillou, Kayhan Batmanghelich. Tags: CML. 10 Jul 2021.
Interpretable Compositional Convolutional Neural Networks. Wen Shen, Zhihua Wei, Shikun Huang, Binbin Zhang, Jiaqi Fan, Ping Zhao, Quanshi Zhang. Tags: FAtt. 09 Jul 2021.
Subspace Clustering Based Analysis of Neural Networks. Uday Singh Saini, Pravallika Devineni, Evangelos E. Papalexakis. Tags: GNN. 02 Jul 2021.
What do End-to-End Speech Models Learn about Speaker, Language and Channel Information? A Layer-wise and Neuron-level Analysis. Shammur A. Chowdhury, Nadir Durrani, Ahmed M. Ali. 01 Jul 2021.
Inverting and Understanding Object Detectors. Ang Cao, Justin Johnson. Tags: ObjD. 26 Jun 2021.
Towards Fully Interpretable Deep Neural Networks: Are We There Yet? Sandareka Wickramanayake, Wynne Hsu, Mong Li Lee. Tags: FaML, AI4CE. 24 Jun 2021.
Evaluation of Saliency-based Explainability Method. Sam Zabdiel Sunder Samuel, V. Kamakshi, Namrata Lodhi, N. C. Krishnan. Tags: FAtt, XAI. 24 Jun 2021.
Visual Probing: Cognitive Framework for Explaining Self-Supervised Image Representations. Witold Oleszkiewicz, Dominika Basaj, Igor Sieradzki, Michal Górszczak, Barbara Rychalska, K. Lewandowska, Tomasz Trzciński, Bartosz Zieliński. Tags: SSL. 21 Jun 2021.
A Game-Theoretic Taxonomy of Visual Concepts in DNNs. Xu Cheng, Chuntung Chu, Yi Zheng, Jie Ren, Quanshi Zhang. 21 Jun 2021.
Cogradient Descent for Dependable Learning. Runqi Wang, Baochang Zhang, Lian Zhuo, QiXiang Ye, David Doermann. 20 Jun 2021.
Guided Integrated Gradients: An Adaptive Path Method for Removing Noise. A. Kapishnikov, Subhashini Venugopalan, Besim Avci, Benjamin D. Wedin, Michael Terry, Tolga Bolukbasi. 17 Jun 2021.
Best of both worlds: local and global explanations with human-understandable concepts. Jessica Schrouff, Sebastien Baur, Shaobo Hou, Diana Mincu, Eric Loreaux, Ralph Blanes, James Wexler, Alan Karthikesalingam, Been Kim. Tags: FAtt. 16 Jun 2021.
On the Evolution of Neuron Communities in a Deep Learning Architecture. Sakib Mostafa, Debajyoti Mondal. Tags: GNN. 08 Jun 2021.
3DB: A Framework for Debugging Computer Vision Models. Guillaume Leclerc, Hadi Salman, Andrew Ilyas, Sai H. Vemprala, Logan Engstrom, ..., Pengchuan Zhang, Shibani Santurkar, Greg Yang, Ashish Kapoor, Aleksander Madry. 07 Jun 2021.
Improving Compositionality of Neural Networks by Decoding Representations to Inputs. Mike Wu, Noah D. Goodman, Stefano Ermon. Tags: AI4CE. 01 Jun 2021.
Drop Clause: Enhancing Performance, Interpretability and Robustness of the Tsetlin Machine. Jivitesh Sharma, Rohan Kumar Yadav, Ole-Christoffer Granmo, Lei Jiao. Tags: VLM. 30 May 2021.
The Definitions of Interpretability and Learning of Interpretable Models. Weishen Pan, Changshui Zhang. Tags: FaML, XAI. 29 May 2021.
Transparent Model of Unabridged Data (TMUD). Jie Xu, Min Ding. 23 May 2021.
A Comprehensive Taxonomy for Explainable Artificial Intelligence: A Systematic Survey of Surveys on Methods and Concepts. Gesina Schwalbe, Bettina Finzel. Tags: XAI. 15 May 2021.
The Low-Dimensional Linear Geometry of Contextualized Word Representations. Evan Hernandez, Jacob Andreas. Tags: MILM. 15 May 2021.
Cause and Effect: Hierarchical Concept-based Explanation of Neural Networks. Mohammad Nokhbeh Zaeem, Majid Komeili. Tags: CML. 14 May 2021.
Verification of Size Invariance in DNN Activations using Concept Embeddings. Gesina Schwalbe. Tags: 3DPC. 14 May 2021.
XAI Handbook: Towards a Unified Framework for Explainable AI. Sebastián M. Palacio, Adriano Lucieri, Mohsin Munir, Jörn Hees, Sheraz Ahmed, Andreas Dengel. 14 May 2021.
Boosting Light-Weight Depth Estimation Via Knowledge Distillation. Junjie Hu, Chenyou Fan, Hualie Jiang, Xiyue Guo, Yuan Gao, Xiangyong Lu, Tin Lun Lam. 13 May 2021.
Leveraging Sparse Linear Layers for Debuggable Deep Networks. Eric Wong, Shibani Santurkar, Aleksander Madry. Tags: FAtt. 11 May 2021.
Rationalization through Concepts. Diego Antognini, Boi Faltings. Tags: FAtt. 11 May 2021.
This Looks Like That... Does it? Shortcomings of Latent Space Prototype Interpretability in Deep Networks. Adrian Hoffmann, Claudio Fanconi, Rahul Rade, Jonas Köhler. 05 May 2021.
Do Feature Attribution Methods Correctly Attribute Features? Yilun Zhou, Serena Booth, Marco Tulio Ribeiro, J. Shah. Tags: FAtt, XAI. 27 Apr 2021.
Exploiting Explanations for Model Inversion Attacks. Xu Zhao, Wencan Zhang, Xiao Xiao, Brian Y. Lim. Tags: MIACV. 26 Apr 2021.
EigenGAN: Layer-Wise Eigen-Learning for GANs. Zhenliang He, Meina Kan, Shiguang Shan. Tags: GAN. 26 Apr 2021.
Neural Mean Discrepancy for Efficient Out-of-Distribution Detection. Xin Dong, Junfeng Guo, Ang Li, W. Ting, Cong Liu, H. T. Kung. Tags: OODD. 23 Apr 2021.
Equivariant Wavelets: Fast Rotation and Translation Invariant Wavelet Scattering Transforms. A. Saydjari, D. Finkbeiner. 22 Apr 2021.
Do Deep Neural Networks Forget Facial Action Units? -- Exploring the Effects of Transfer Learning in Health Related Facial Expression Recognition. Pooja Prajod, Dominik Schiller, Tobias Huber, Elisabeth André. Tags: CVBM. 15 Apr 2021.
An Interpretability Illusion for BERT. Tolga Bolukbasi, Adam Pearce, Ann Yuan, Andy Coenen, Emily Reif, Fernanda Viégas, Martin Wattenberg. Tags: MILM, FAtt. 14 Apr 2021.
Automatic Correction of Internal Units in Generative Neural Networks. A. Tousi, Haedong Jeong, Jiyeon Han, Hwanil Choi, Jaesik Choi. Tags: GAN. 13 Apr 2021.