Network Dissection: Quantifying Interpretability of Deep Visual Representations
  David Bau, Bolei Zhou, A. Khosla, A. Oliva, Antonio Torralba (MILM, FAtt). 19 April 2017.

Papers citing "Network Dissection: Quantifying Interpretability of Deep Visual Representations" (50 of 294 shown)
Do Feature Attribution Methods Correctly Attribute Features?
  Yilun Zhou, Serena Booth, Marco Tulio Ribeiro, J. Shah (FAtt, XAI). 27 Apr 2021.
Exploiting Explanations for Model Inversion Attacks
  Xu Zhao, Wencan Zhang, Xiao Xiao, Brian Y. Lim (MIACV). 26 Apr 2021.
EigenGAN: Layer-Wise Eigen-Learning for GANs
  Zhenliang He, Meina Kan, Shiguang Shan (GAN). 26 Apr 2021.
Equivariant Wavelets: Fast Rotation and Translation Invariant Wavelet Scattering Transforms
  A. Saydjari, D. Finkbeiner. 22 Apr 2021.
An Interpretability Illusion for BERT
  Tolga Bolukbasi, Adam Pearce, Ann Yuan, Andy Coenen, Emily Reif, Fernanda Viégas, Martin Wattenberg (MILM, FAtt). 14 Apr 2021.
TS-CAM: Token Semantic Coupled Attention Map for Weakly Supervised Object Localization
  Wei Gao, Fang Wan, Xingjia Pan, Zhiliang Peng, Qi Tian, Zhenjun Han, Bolei Zhou, QiXiang Ye (ViT, WSOL). 27 Mar 2021.
EX-RAY: Distinguishing Injected Backdoor from Natural Features in Neural Networks by Examining Differential Feature Symmetry
  Yingqi Liu, Guangyu Shen, Guanhong Tao, Zhenting Wang, Shiqing Ma, Xinming Zhang (AAML). 16 Mar 2021.
Intraclass clustering: an implicit learning ability that regularizes DNNs
  Simon Carbonnelle, Christophe De Vleeschouwer. 11 Mar 2021.
Explainable Person Re-Identification with Attribute-guided Metric Distillation
  Xiaodong Chen, Xinchen Liu, Wu Liu, Xiaoping Zhang, Yongdong Zhang, Tao Mei. 02 Mar 2021.
Explainability of deep vision-based autonomous driving systems: Review and challenges
  Éloi Zablocki, H. Ben-younes, P. Pérez, Matthieu Cord (XAI). 13 Jan 2021.
Explaining the Black-box Smoothly - A Counterfactual Approach
  Junyu Chen, Yong Du, Yufan He, W. Paul Segars, Ye Li (MedIm, FAtt). 11 Jan 2021.
TransPose: Keypoint Localization via Transformer
  Sen Yang, Zhibin Quan, Mu Nie, Wankou Yang (ViT). 28 Dec 2020.
Demystifying Deep Neural Networks Through Interpretation: A Survey
  Giang Dao, Minwoo Lee (FaML, FAtt). 13 Dec 2020.
The Three Ghosts of Medical AI: Can the Black-Box Present Deliver?
  Thomas P. Quinn, Stephan Jacobs, M. Senadeera, Vuong Le, S. Coghlan. 10 Dec 2020.
Debiased-CAM to mitigate image perturbations with faithful visual explanations of machine learning
  Wencan Zhang, Mariella Dimiccoli, Brian Y. Lim (FAtt). 10 Dec 2020.
StyleSpace Analysis: Disentangled Controls for StyleGAN Image Generation
  Zongze Wu, Dani Lischinski, Eli Shechtman (DRL). 25 Nov 2020.
Deep Interpretable Classification and Weakly-Supervised Segmentation of Histology Images via Max-Min Uncertainty
  Soufiane Belharbi, Jérôme Rony, Jose Dolz, Ismail Ben Ayed, Luke McCaffrey, Eric Granger. 14 Nov 2020.
Teaching with Commentaries
  Aniruddh Raghu, M. Raghu, Simon Kornblith, David Duvenaud, Geoffrey E. Hinton. 05 Nov 2020.
This Looks Like That, Because ... Explaining Prototypes for Interpretable Image Recognition
  Meike Nauta, Annemarie Jutte, Jesper C. Provoost, C. Seifert (FAtt). 05 Nov 2020.
Role Taxonomy of Units in Deep Neural Networks
  Yang Zhao, Hao Zhang, Xiuyuan Hu. 02 Nov 2020.
Quantifying Learnability and Describability of Visual Concepts Emerging in Representation Learning
  Iro Laina, Ruth C. Fong, Andrea Vedaldi (OCL). 27 Oct 2020.
Exemplary Natural Images Explain CNN Activations Better than State-of-the-Art Feature Visualization
  Judy Borowski, Roland S. Zimmermann, Judith Schepers, Robert Geirhos, Thomas S. A. Wallis, Matthias Bethge, Wieland Brendel (FAtt). 23 Oct 2020.
VATLD: A Visual Analytics System to Assess, Understand and Improve Traffic Light Detection
  Liang Gou, Lincan Zou, Nanxiang Li, M. Hofmann, A. Shekar, A. Wendt, Liu Ren. 27 Sep 2020.
Contextual Semantic Interpretability
  Diego Marcos, Ruth C. Fong, Sylvain Lobry, Rémi Flamary, Nicolas Courty, D. Tuia (SSL). 18 Sep 2020.
The Intriguing Relation Between Counterfactual Explanations and Adversarial Examples
  Timo Freiesleben (GAN). 11 Sep 2020.
CuratorNet: Visually-aware Recommendation of Art Images
  Pablo Messina, Manuel Cartagena, Patricio Cerda, Felipe del-Rio, Denis Parra. 09 Sep 2020.
Quantifying Explainability of Saliency Methods in Deep Neural Networks with a Synthetic Dataset
  Erico Tjoa, Cuntai Guan (XAI, FAtt). 07 Sep 2020.
Abstracting Deep Neural Networks into Concept Graphs for Concept Level Interpretability
  Avinash Kori, Parth Natekar, Ganapathy Krishnamurthi, Balaji Srinivasan. 14 Aug 2020.
Survey of XAI in digital pathology
  Milda Pocevičiūtė, Gabriel Eilertsen, Claes Lundström. 14 Aug 2020.
Axiom-based Grad-CAM: Towards Accurate Visualization and Explanation of CNNs
  Ruigang Fu, Qingyong Hu, Xiaohu Dong, Yulan Guo, Yinghui Gao, Biao Li (FAtt). 05 Aug 2020.
Explainable Face Recognition
  Jonathan R. Williford, Brandon B. May, J. Byrne (CVBM). 03 Aug 2020.
Interpretable Anomaly Detection with DIFFI: Depth-based Isolation Forest Feature Importance
  Mattia Carletti, M. Terzi, Gian Antonio Susto. 21 Jul 2020.
Volumetric Transformer Networks
  Seungryong Kim, Sabine Süsstrunk, Mathieu Salzmann (ViT). 18 Jul 2020.
Training Interpretable Convolutional Neural Networks by Differentiating Class-specific Filters
  Haoyun Liang, Zhihao Ouyang, Yuyuan Zeng, Hang Su, Zihao He, Shutao Xia, Jun Zhu, Bo Zhang. 16 Jul 2020.
Scientific Discovery by Generating Counterfactuals using Image Translation
  Arunachalam Narayanaswamy, Subhashini Venugopalan, D. Webster, L. Peng, G. Corrado, ..., Abigail E. Huang, Siva Balasubramanian, Michael P. Brenner, Phil Q. Nelson, A. Varadarajan (DiffM, MedIm). 10 Jul 2020.
Concept Bottleneck Models
  Pang Wei Koh, Thao Nguyen, Y. S. Tang, Stephen Mussmann, Emma Pierson, Been Kim, Percy Liang. 09 Jul 2020.
Proper Network Interpretability Helps Adversarial Robustness in Classification
  Akhilan Boopathy, Sijia Liu, Gaoyuan Zhang, Cynthia Liu, Pin-Yu Chen, Shiyu Chang, Luca Daniel (AAML, FAtt). 26 Jun 2020.
GAN Memory with No Forgetting
  Yulai Cong, Miaoyun Zhao, Jianqiao Li, Sijia Wang, Lawrence Carin (CLL). 13 Jun 2020.
Learning to Branch for Multi-Task Learning
  Pengsheng Guo, Chen-Yu Lee, Daniel Ulbricht. 02 Jun 2020.
Interpretable and Accurate Fine-grained Recognition via Region Grouping
  Zixuan Huang, Yin Li. 21 May 2020.
Explaining AI-based Decision Support Systems using Concept Localization Maps
  Adriano Lucieri, Muhammad Naseer Bajwa, Andreas Dengel, Sheraz Ahmed. 04 May 2020.
Under the Hood of Neural Networks: Characterizing Learned Representations by Functional Neuron Populations and Network Ablations
  Richard Meyes, Constantin Waubert de Puiseau, Andres Felipe Posada-Moreno, Tobias Meisen (AI4CE). 02 Apr 2020.
A Survey of Deep Learning for Scientific Discovery
  M. Raghu, Erica Schmidt (OOD, AI4CE). 26 Mar 2020.
Foundations of Explainable Knowledge-Enabled Systems
  Shruthi Chari, Daniel Gruen, Oshani Seneviratne, D. McGuinness. 17 Mar 2020.
Self-Supervised Discovering of Interpretable Features for Reinforcement Learning
  Wenjie Shi, Gao Huang, Shiji Song, Zhuoyuan Wang, Tingyu Lin, Cheng Wu (SSL). 16 Mar 2020.
TIME: A Transparent, Interpretable, Model-Adaptive and Explainable Neural Network for Dynamic Physical Processes
  Gurpreet Singh, Soumyajit Gupta, Matt Lease, Clint Dawson (AI4TS, AI4CE). 05 Mar 2020.
Selectivity considered harmful: evaluating the causal impact of class selectivity in DNNs
  Matthew L. Leavitt, Ari S. Morcos. 03 Mar 2020.
On Leveraging Pretrained GANs for Generation with Limited Data
  Miaoyun Zhao, Yulai Cong, Lawrence Carin. 26 Feb 2020.
Neuron Shapley: Discovering the Responsible Neurons
  Amirata Ghorbani, James Zou (FAtt, TDI). 23 Feb 2020.
Classifying the classifier: dissecting the weight space of neural networks
  Gabriel Eilertsen, Daniel Jonsson, Timo Ropinski, Jonas Unger, Anders Ynnerman. 13 Feb 2020.