Network Dissection: Quantifying Interpretability of Deep Visual Representations [MILM, FAtt]
David Bau, Bolei Zhou, A. Khosla, A. Oliva, Antonio Torralba
arXiv: 1704.05796 (19 April 2017)
Papers citing "Network Dissection: Quantifying Interpretability of Deep Visual Representations" (showing 50 of 787)
Concept Evolution in Deep Learning Training: A Unified Interpretation Framework and Discoveries (30 Mar 2022)
  Haekyu Park, Seongmin Lee, Benjamin Hoover, Austin P. Wright, Omar Shaikh, Rahul Duggal, Nilaksh Das, Kevin Wenliang Li, Judy Hoffman, Duen Horng Chau
Interpretable Vertebral Fracture Diagnosis (30 Mar 2022) [FAtt, MedIm]
  Paul Engstler, Matthias Keicher, D. Schinz, Kristina Mach, A. Gersing, ..., Anna-Sophia Dietrich, Benedikt Wiestler, Jan S. Kirschke, Ashkan Khakzar, Nassir Navab
Long-Tailed Recognition via Weight Balancing (27 Mar 2022) [MQ]
  Shaden Alshammari, Yu-Xiong Wang, Deva Ramanan, Shu Kong
HINT: Hierarchical Neuron Concept Explainer (27 Mar 2022)
  Andong Wang, Wei-Ning Lee, Xiaojuan Qi
Concept Embedding Analysis: A Review (25 Mar 2022)
  Gesina Schwalbe
Self-supervised Semantic Segmentation Grounded in Visual Concepts (25 Mar 2022) [SSL]
  Wenbin He, William C. Surmeier, A. Shekar, Liangke Gou, Liu Ren
Interactive Style Transfer: All is Your Palette (25 Mar 2022)
  Zheng Lin, Zhao Zhang, Kangkang Zhang, Bo Ren, Ming-Ming Cheng
HP-Capsule: Unsupervised Face Part Discovery by Hierarchical Parsing Capsule Network (21 Mar 2022) [OCL]
  Chang Yu, Xiangyu Zhu, Xiaomei Zhang, Zidu Wang, Zhaoxiang Zhang, Zhen Lei
Towards understanding deep learning with the natural clustering prior (15 Mar 2022)
  Simon Carbonnelle
Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement (15 Mar 2022)
  Leander Weber, Sebastian Lapuschkin, Alexander Binder, Wojciech Samek
Evaluating Explainable AI on a Multi-Modal Medical Imaging Task: Can Existing Algorithms Fulfill Clinical Requirements? (12 Mar 2022)
  Weina Jin, Xiaoxiao Li, Ghassan Hamarneh
Symmetry Group Equivariant Architectures for Physics (11 Mar 2022) [AI4CE]
  A. Bogatskiy, S. Ganguly, Thomas Kipf, Risi Kondor, David W. Miller, ..., Jan T. Offermann, M. Pettee, P. Shanahan, C. Shimmin, S. Thais
Sparse Subspace Clustering for Concept Discovery (SSCCD) (11 Mar 2022)
  Johanna Vielhaben, Stefan Blücher, Nils Strodthoff
DIME: Fine-grained Interpretations of Multimodal Models via Disentangled Local Explanations (03 Mar 2022)
  Yiwei Lyu, Paul Pu Liang, Zihao Deng, Ruslan Salakhutdinov, Louis-Philippe Morency
Measuring Self-Supervised Representation Quality for Downstream Classification using Discriminative Features (03 Mar 2022) [SSL]
  Neha Kalibhat, Kanika Narang, Hamed Firooz, Maziar Sanjabi, Soheil Feizi
A study on the distribution of social biases in self-supervised learning visual models (03 Mar 2022)
  Kirill Sirotkin, Pablo Carballeira, Marcos Escudero-Viñolo
ADVISE: ADaptive Feature Relevance and VISual Explanations for Convolutional Neural Networks (02 Mar 2022) [AAML, FAtt]
  Mohammad Mahdi Dehshibi, Mona Ashtari-Majlan, Gereziher W. Adhane, David Masip
Threading the Needle of On and Off-Manifold Value Functions for Shapley Explanations (24 Feb 2022) [FAtt, TDI]
  Chih-Kuan Yeh, Kuan-Yun Lee, Frederick Liu, Pradeep Ravikumar
An Analysis of Complex-Valued CNNs for RF Data-Driven Wireless Device Classification (20 Feb 2022)
  Jun Chen, Weng-Keen Wong, B. Hamdaoui, Abdurrahman Elmaghbub, K. Sivanesan, R. Dorrance, Lily L. Yang
Explaining, Evaluating and Enhancing Neural Networks' Learned Representations (18 Feb 2022) [FAtt]
  Marco Bertolini, Djork-Arné Clevert, F. Montanari
Guidelines and Evaluation of Clinical Explainable AI in Medical Image Analysis (16 Feb 2022) [ELM, XAI]
  Weina Jin, Xiaoxiao Li, M. Fatehi, Ghassan Hamarneh
Aligning Eyes between Humans and Deep Neural Network through Interactive Attention Alignment (06 Feb 2022) [HAI]
  Yuyang Gao, Tong Sun, Liang Zhao, Sungsoo Ray Hong
Concept Bottleneck Model with Additional Unsupervised Concepts (03 Feb 2022) [SSL]
  Yoshihide Sawada, Keigo Nakamura
Toward a traceable, explainable, and fair JD/Resume recommendation system (02 Feb 2022)
  Amine Barrak, Bram Adams, Payel Das
Debiased-CAM to mitigate systematic error with faithful visual explanations of machine learning (30 Jan 2022) [FAtt]
  Wencan Zhang, Mariella Dimiccoli, Brian Y. Lim
Natural Language Descriptions of Deep Visual Features (26 Jan 2022) [MILM]
  Evan Hernandez, Sarah Schwettmann, David Bau, Teona Bagashvili, Antonio Torralba, Jacob Andreas
Deeply Explain CNN via Hierarchical Decomposition (23 Jan 2022) [FAtt]
  Mingg-Ming Cheng, Peng-Tao Jiang, Linghao Han, Liang Wang, Philip Torr
From Anecdotal Evidence to Quantitative Evaluation Methods: A Systematic Review on Evaluating Explainable AI (20 Jan 2022) [ELM, XAI]
  Meike Nauta, Jan Trienes, Shreyasi Pathak, Elisa Nguyen, Michelle Peters, Yasmin Schmitt, Jorg Schlotterer, M. V. Keulen, C. Seifert
Enabling Verification of Deep Neural Networks in Perception Tasks Using Fuzzy Logic and Concept Embeddings (03 Jan 2022) [AAML]
  Gesina Schwalbe, Christian Wirth, Ute Schmid
PCACE: A Statistical Approach to Ranking Neurons for CNN Interpretability (31 Dec 2021) [FAtt]
  Sílvia Casacuberta, Esra Suel, Seth Flaxman
Forward Composition Propagation for Explainable Neural Reasoning (23 Dec 2021)
  Isel Grau, Gonzalo Nápoles, M. Bello, Yamisleydi Salgueiro, A. Jastrzębska
Neural-Symbolic Integration for Interactive Learning and Conceptual Grounding (22 Dec 2021) [NAI]
  Benedikt Wagner, Artur Garcez
RELAX: Representation Learning Explainability (19 Dec 2021) [FAtt]
  Kristoffer Wickstrøm, Daniel J. Trosten, Sigurd Løkse, Ahcène Boubekki, Karl Øyvind Mikalsen, Michael C. Kampffmeyer, Robert Jenssen
Masked Feature Prediction for Self-Supervised Visual Pre-Training (16 Dec 2021) [ViT]
  Chen Wei, Haoqi Fan, Saining Xie, Chaoxia Wu, Alan Yuille, Christoph Feichtenhofer
Decomposing the Deep: Finding Class Specific Filters in Deep CNNs (14 Dec 2021) [FAtt]
  Akshay Badola, Cherian Roy, V. Padmanabhan, R. Lal
HIVE: Evaluating the Human Interpretability of Visual Explanations (06 Dec 2021)
  Sunnie S. Y. Kim, Nicole Meister, V. V. Ramaswamy, Ruth C. Fong, Olga Russakovsky
Temporal-Spatial Causal Interpretations for Vision-Based Reinforcement Learning (06 Dec 2021)
  Wenjie Shi, Gao Huang, Shiji Song, Cheng Wu
Editing a classifier by rewriting its prediction rules (02 Dec 2021) [KELM]
  Shibani Santurkar, Dimitris Tsipras, Mahalaxmi Elango, David Bau, Antonio Torralba, Aleksander Madry
Label-Free Model Evaluation with Semi-Structured Dataset Representations (01 Dec 2021)
  Xiaoxiao Sun, Yunzhong Hou, Hongdong Li, Liang Zheng
Attribute-specific Control Units in StyleGAN for Fine-grained Image Manipulation (25 Nov 2021)
  Rui Wang, Jian Chen, Gang Yu, Li Sun, Changqian Yu, Changxin Gao, Nong Sang
Efficient Decompositional Rule Extraction for Deep Neural Networks (24 Nov 2021)
  Mateo Espinosa Zarlenga, Z. Shams, M. Jamnik
Acquisition of Chess Knowledge in AlphaZero (17 Nov 2021)
  Thomas McGrath, A. Kapishnikov, Nenad Tomašev, Adam Pearce, Demis Hassabis, Been Kim, Ulrich Paquet, Vladimir Kramnik
Unsupervised Part Discovery from Contrastive Reconstruction (11 Nov 2021) [OCL, SSL]
  Subhabrata Choudhury, Iro Laina, Christian Rupprecht, Andrea Vedaldi
Transparency of Deep Neural Networks for Medical Image Analysis: A Review of Interpretability Methods (01 Nov 2021)
  Zohaib Salahuddin, Henry C. Woodruff, A. Chatterjee, Philippe Lambin
Neural View Synthesis and Matching for Semi-Supervised Few-Shot Learning of 3D Pose (27 Oct 2021) [3DV]
  Angtian Wang, Shenxiao Mei, Alan Yuille, Adam Kortylewski
StyleAlign: Analysis and Applications of Aligned StyleGAN Models (21 Oct 2021)
  Zongze Wu, Yotam Nitzan, Eli Shechtman, Dani Lischinski
NeuroView: Explainable Deep Network Decision Making (15 Oct 2021) [FAtt]
  C. Barberan, Randall Balestriero, Richard G. Baraniuk
Quantifying Local Specialization in Deep Neural Networks (13 Oct 2021)
  Shlomi Hod, Daniel Filan, Stephen Casper, Andrew Critch, Stuart J. Russell
Robust Feature-Level Adversaries are Interpretability Tools (07 Oct 2021) [AAML]
  Stephen Casper, Max Nadeau, Dylan Hadfield-Menell, Gabriel Kreiman
Exploring the Common Principal Subspace of Deep Features in Neural Networks (06 Oct 2021)
  Haoran Liu, Haoyi Xiong, Yaqing Wang, Haozhe An, Dongrui Wu, Dejing Dou