Network Dissection: Quantifying Interpretability of Deep Visual Representations

19 April 2017
David Bau
Bolei Zhou
A. Khosla
A. Oliva
Antonio Torralba
MILM, FAtt
ArXiv (abs) · PDF · HTML

Papers citing "Network Dissection: Quantifying Interpretability of Deep Visual Representations"

Showing 50 of 787 citing papers.
Connecting Attributions and QA Model Behavior on Realistic Counterfactuals
Xi Ye
Rohan Nair
Greg Durrett
61
24
0
09 Apr 2021
Robust Semantic Interpretability: Revisiting Concept Activation Vectors
J. Pfau
A. Young
Jerome Wei
Maria L. Wei
Michael J. Keiser
FAtt
58
15
0
06 Apr 2021
Test-Time Adaptation for Super-Resolution: You Only Need to Overfit on a Few More Images
Mohammad Saeed Rad
Thomas Yu
Behzad Bozorgtabar
Jean-Philippe Thiran
74
4
0
06 Apr 2021
DeepEverest: Accelerating Declarative Top-K Queries for Deep Neural Network Interpretation
Dong He
Maureen Daum
Walter Cai
Magdalena Balazinska
HAI
55
6
0
06 Apr 2021
Towards Semantic Interpretation of Thoracic Disease and COVID-19 Diagnosis Models
Ashkan Khakzar
Sabrina Musatian
Jonas Buchberger
Icxel Valeriano Quiroz
Nikolaus Pinger
Soroosh Baselizadeh
Seong Tae Kim
Nassir Navab
83
13
0
04 Apr 2021
Estimating the Generalization in Deep Neural Networks via Sparsity
Yang Zhao
Hao Zhang
78
2
0
02 Apr 2021
Memorability: An image-computable measure of information utility
Zoya Bylinskii
L. Goetschalckx
Anelise Newman
A. Oliva
HAI
40
19
0
01 Apr 2021
Neural Response Interpretation through the Lens of Critical Pathways
Ashkan Khakzar
Soroosh Baselizadeh
Saurabh Khanduja
Christian Rupprecht
Seong Tae Kim
Nassir Navab
58
34
0
31 Mar 2021
MISA: Online Defense of Trojaned Models using Misattributions
Panagiota Kiourti
Wenchao Li
Anirban Roy
Karan Sikka
Susmit Jha
56
10
0
29 Mar 2021
FocusedDropout for Convolutional Neural Network
Tianshu Xie
Minghui Liu
Jiali Deng
Xuan Cheng
Xiaomin Wang
Meilin Liu
40
10
0
29 Mar 2021
TS-CAM: Token Semantic Coupled Attention Map for Weakly Supervised Object Localization
Wei Gao
Fang Wan
Xingjia Pan
Zhiliang Peng
Qi Tian
Zhenjun Han
Bolei Zhou
QiXiang Ye
ViT, WSOL
91
204
0
27 Mar 2021
Preserve, Promote, or Attack? GNN Explanation via Topology Perturbation
Yi Sun
Abel N. Valente
Sijia Liu
Dakuo Wang
AAML
71
7
0
25 Mar 2021
UniMoCo: Unsupervised, Semi-Supervised and Full-Supervised Visual Representation Learning
Zhigang Dai
Bolun Cai
Yugeng Lin
Junying Chen
SSL
68
6
0
19 Mar 2021
Interpretable Deep Learning: Interpretation, Interpretability, Trustworthiness, and Beyond
Xuhong Li
Haoyi Xiong
Xingjian Li
Xuanyu Wu
Xiao Zhang
Ji Liu
Jiang Bian
Dejing Dou
AAML, FaML, XAI, HAI
82
344
0
19 Mar 2021
Quantitative Performance Assessment of CNN Units via Topological Entropy Calculation
Yang Zhao
Hao Zhang
77
7
0
17 Mar 2021
EX-RAY: Distinguishing Injected Backdoor from Natural Features in Neural Networks by Examining Differential Feature Symmetry
Yingqi Liu
Guangyu Shen
Guanhong Tao
Zhenting Wang
Shiqing Ma
Xinming Zhang
AAML
97
8
0
16 Mar 2021
Intraclass clustering: an implicit learning ability that regularizes DNNs
Simon Carbonnelle
Christophe De Vleeschouwer
90
8
0
11 Mar 2021
Explainable Person Re-Identification with Attribute-guided Metric Distillation
Xiaodong Chen
Xinchen Liu
Wu Liu
Xiaoping Zhang
Yongdong Zhang
Tao Mei
108
47
0
02 Mar 2021
Wider Vision: Enriching Convolutional Neural Networks via Alignment to External Knowledge Bases
Xuehao Liu
Sarah Jane Delany
Susan Mckeever
27
0
0
22 Feb 2021
Evaluating the Interpretability of Generative Models by Interactive Reconstruction
A. Ross
Nina Chen
Elisa Zhao Hang
Elena L. Glassman
Finale Doshi-Velez
160
49
0
02 Feb 2021
The Mind's Eye: Visualizing Class-Agnostic Features of CNNs
Alexandros Stergiou
FAtt
29
3
0
29 Jan 2021
Position, Padding and Predictions: A Deeper Look at Position Information in CNNs
Md. Amirul Islam
M. Kowal
Sen Jia
Konstantinos G. Derpanis
Neil D. B. Bruce
61
58
0
28 Jan 2021
Shape or Texture: Understanding Discriminative Features in CNNs
Md. Amirul Islam
M. Kowal
Patrick Esser
Sen Jia
Bjorn Ommer
Konstantinos G. Derpanis
Neil D. B. Bruce
93
77
0
27 Jan 2021
Generating Attribution Maps with Disentangled Masked Backpropagation
Adria Ruiz
Antonio Agudo
Francesc Moreno
FAtt
40
1
0
17 Jan 2021
Towards interpreting ML-based automated malware detection models: a survey
Yuzhou Lin
Xiaolin Chang
126
7
0
15 Jan 2021
Explainability of deep vision-based autonomous driving systems: Review and challenges
Éloi Zablocki
H. Ben-younes
P. Pérez
Matthieu Cord
XAI
186
178
0
13 Jan 2021
Explaining the Black-box Smoothly - A Counterfactual Approach
Junyu Chen
Yong Du
Yufan He
W. Paul Segars
Ye Li
MedIm, FAtt
152
105
0
11 Jan 2021
Comprehensible Convolutional Neural Networks via Guided Concept Learning
Sandareka Wickramanayake
Wynne Hsu
Mong Li Lee
SSL
52
25
0
11 Jan 2021
Explainable AI and Adoption of Financial Algorithmic Advisors: an Experimental Study
D. David
Yehezkel S. Resheff
Talia Tron
44
24
0
05 Jan 2021
A Survey on Neural Network Interpretability
Yu Zhang
Peter Tiño
A. Leonardis
K. Tang
FaML, XAI
209
691
0
28 Dec 2020
TransPose: Keypoint Localization via Transformer
Sen Yang
Zhibin Quan
Mu Nie
Wankou Yang
ViT
205
271
0
28 Dec 2020
Analyzing Representations inside Convolutional Neural Networks
Uday Singh Saini
Evangelos E. Papalexakis
FAtt
36
2
0
23 Dec 2020
Image Translation via Fine-grained Knowledge Transfer
Xuanhong Chen
Ziang Liu
Ting Qiu
Bingbing Ni
Naiyuan Liu
Xiwei Hu
Yuhan Li
35
0
0
21 Dec 2020
Demystifying Deep Neural Networks Through Interpretation: A Survey
Giang Dao
Minwoo Lee
FaML, FAtt
66
1
0
13 Dec 2020
The Three Ghosts of Medical AI: Can the Black-Box Present Deliver?
Thomas P. Quinn
Stephan Jacobs
M. Senadeera
Vuong Le
S. Coghlan
58
117
0
10 Dec 2020
Influence-Driven Explanations for Bayesian Network Classifiers
Antonio Rago
Emanuele Albini
P. Baroni
Francesca Toni
94
9
0
10 Dec 2020
Deep Argumentative Explanations
Emanuele Albini
Piyawat Lertvittayakumjorn
Antonio Rago
Francesca Toni
AAML
52
5
0
10 Dec 2020
Debiased-CAM to mitigate image perturbations with faithful visual explanations of machine learning
Wencan Zhang
Mariella Dimiccoli
Brian Y. Lim
FAtt
72
18
0
10 Dec 2020
A Knowledge Driven Approach to Adaptive Assistance Using Preference Reasoning and Explanation
Jason R. Wilson
Leilani H. Gilpin
Irina Rabkina
35
4
0
05 Dec 2020
Detecting Trojaned DNNs Using Counterfactual Attributions
Karan Sikka
Indranil Sur
Susmit Jha
Anirban Roy
Ajay Divakaran
AAML
38
13
0
03 Dec 2020
Achievements and Challenges in Explaining Deep Learning based Computer-Aided Diagnosis Systems
Adriano Lucieri
Muhammad Naseer Bajwa
Andreas Dengel
Sheraz Ahmed
136
10
0
26 Nov 2020
StyleSpace Analysis: Disentangled Controls for StyleGAN Image Generation
Zongze Wu
Dani Lischinski
Eli Shechtman
DRL
138
483
0
25 Nov 2020
Interpretable Visual Reasoning via Induced Symbolic Space
Zhonghao Wang
Kai Wang
Mo Yu
Jinjun Xiong
Wen-mei W. Hwu
M. Hasegawa-Johnson
Humphrey Shi
LRM, OCL
63
20
0
23 Nov 2020
Learning Class Unique Features in Fine-Grained Visual Classification
Runkai Zheng
Zhijia Yu
Yinqi Zhang
C. Ding
Hei Victor Cheng
Li Liu
24
0
0
22 Nov 2020
Style Intervention: How to Achieve Spatial Disentanglement with Style-based Generators?
Yunfan Liu
Qi Li
Zhenan Sun
Tieniu Tan
58
19
0
19 Nov 2020
Deep Interpretable Classification and Weakly-Supervised Segmentation of Histology Images via Max-Min Uncertainty
Soufiane Belharbi
Jérôme Rony
Jose Dolz
Ismail Ben Ayed
Luke McCaffrey
Eric Granger
89
54
0
14 Nov 2020
One Explanation is Not Enough: Structured Attention Graphs for Image Classification
Vivswan Shitole
Li Fuxin
Minsuk Kahng
Prasad Tadepalli
Alan Fern
FAtt, GNN
70
38
0
13 Nov 2020
Debugging Tests for Model Explanations
Julius Adebayo
M. Muelly
Ilaria Liccardi
Been Kim
FAtt
78
181
0
10 Nov 2020
Teaching with Commentaries
Aniruddh Raghu
M. Raghu
Simon Kornblith
David Duvenaud
Geoffrey E. Hinton
99
24
0
05 Nov 2020
This Looks Like That, Because ... Explaining Prototypes for Interpretable Image Recognition
Meike Nauta
Annemarie Jutte
Jesper C. Provoost
C. Seifert
FAtt
111
65
0
05 Nov 2020