ResearchTrend.AI

Network Dissection: Quantifying Interpretability of Deep Visual Representations

David Bau, Bolei Zhou, A. Khosla, A. Oliva, Antonio Torralba [MILM, FAtt]
19 April 2017

Papers citing "Network Dissection: Quantifying Interpretability of Deep Visual Representations" (50 of 787 papers shown)
- Self-conditioning pre-trained language models. Xavier Suau, Luca Zappella, N. Apostoloff. 30 Sep 2021.
- TSM: Temporal Shift Module for Efficient and Scalable Video Understanding on Edge Device. Ji Lin, Chuang Gan, Kuan-Chieh Wang, Song Han. 27 Sep 2021.
- Learning Interpretable Concept Groups in CNNs. Saurabh Varshneya, Antoine Ledent, Robert A. Vandermeulen, Yunwen Lei, Matthias Enders, Damian Borth, Marius Kloft. 21 Sep 2021.
- Explaining Convolutional Neural Networks by Tagging Filters. Anna Nguyen, Daniel Hagenmayer, T. Weller, Michael Färber [FAtt]. 20 Sep 2021.
- Detection Accuracy for Evaluating Compositional Explanations of Units. Sayo M. Makinwa, Biagio La Rosa, Roberto Capobianco [FAtt, CoGe]. 16 Sep 2021.
- Cross-Model Consensus of Explanations and Beyond for Image Classification Models: An Empirical Study. Xuhong Li, Haoyi Xiong, Siyu Huang, Shilei Ji, Dejing Dou. 02 Sep 2021.
- Understanding of Kernels in CNN Models by Suppressing Irrelevant Visual Features in Images. Jiafan Zhuang, Wanying Tao, Jianfei Xing, Wei Shi, Ruixuan Wang, Weishi Zheng [FAtt]. 25 Aug 2021.
- Interpreting Face Inference Models using Hierarchical Network Dissection. Divyang Teotia, Àgata Lapedriza, Sarah Ostadabbas [CVBM]. 23 Aug 2021.
- Towards Interpretable Deep Networks for Monocular Depth Estimation. Zunzhi You, Yi-Hsuan Tsai, W. Chiu, Guanbin Li [FAtt]. 11 Aug 2021.
- Interpreting Generative Adversarial Networks for Interactive Image Generation. Bolei Zhou [GAN]. 10 Aug 2021.
- COVID-view: Diagnosis of COVID-19 using Chest CT. Shreeraj Jadhav, Gaofeng Deng, M. Zawin, Arie Kaufman. 09 Aug 2021.
- Spatiotemporal Contrastive Learning of Facial Expressions in Videos. Shuvendu Roy, Ali Etemad. 06 Aug 2021.
- Where do Models go Wrong? Parameter-Space Saliency Maps for Explainability. Roman Levin, Manli Shu, Eitan Borgnia, Furong Huang, Micah Goldblum, Tom Goldstein [FAtt, AAML]. 03 Aug 2021.
- Shared Interest: Measuring Human-AI Alignment to Identify Recurring Patterns in Model Behavior. Angie Boggust, Benjamin Hoover, Arvindmani Satyanarayan, Hendrik Strobelt. 20 Jul 2021.
- One Map Does Not Fit All: Evaluating Saliency Map Explanation on Multi-Modal Medical Images. Weina Jin, Xiaoxiao Li, Ghassan Hamarneh [FAtt]. 11 Jul 2021.
- Using Causal Analysis for Conceptual Deep Learning Explanation. Sumedha Singla, Stephen Wallace, Sofia Triantafillou, Kayhan Batmanghelich [CML]. 10 Jul 2021.
- Interpretable Compositional Convolutional Neural Networks. Wen Shen, Zhihua Wei, Shikun Huang, Binbin Zhang, Jiaqi Fan, Ping Zhao, Quanshi Zhang [FAtt]. 09 Jul 2021.
- Subspace Clustering Based Analysis of Neural Networks. Uday Singh Saini, Pravallika Devineni, Evangelos E. Papalexakis [GNN]. 02 Jul 2021.
- What do End-to-End Speech Models Learn about Speaker, Language and Channel Information? A Layer-wise and Neuron-level Analysis. Shammur A. Chowdhury, Nadir Durrani, Ahmed M. Ali. 01 Jul 2021.
- Inverting and Understanding Object Detectors. Ang Cao, Justin Johnson [ObjD]. 26 Jun 2021.
- Towards Fully Interpretable Deep Neural Networks: Are We There Yet? Sandareka Wickramanayake, Wynne Hsu, Mong Li Lee [FaML, AI4CE]. 24 Jun 2021.
- Evaluation of Saliency-based Explainability Method. Sam Zabdiel Sunder Samuel, V. Kamakshi, Namrata Lodhi, N. C. Krishnan [FAtt, XAI]. 24 Jun 2021.
- Visual Probing: Cognitive Framework for Explaining Self-Supervised Image Representations. Witold Oleszkiewicz, Dominika Basaj, Igor Sieradzki, Michal Górszczak, Barbara Rychalska, K. Lewandowska, Tomasz Trzciński, Bartosz Zieliński [SSL]. 21 Jun 2021.
- A Game-Theoretic Taxonomy of Visual Concepts in DNNs. Xu Cheng, Chuntung Chu, Yi Zheng, Jie Ren, Quanshi Zhang. 21 Jun 2021.
- Cogradient Descent for Dependable Learning. Runqi Wang, Baochang Zhang, Lian Zhuo, QiXiang Ye, David Doermann. 20 Jun 2021.
- Guided Integrated Gradients: An Adaptive Path Method for Removing Noise. A. Kapishnikov, Subhashini Venugopalan, Besim Avci, Benjamin D. Wedin, Michael Terry, Tolga Bolukbasi. 17 Jun 2021.
- Best of both worlds: local and global explanations with human-understandable concepts. Jessica Schrouff, Sebastien Baur, Shaobo Hou, Diana Mincu, Eric Loreaux, Ralph Blanes, James Wexler, Alan Karthikesalingam, Been Kim [FAtt]. 16 Jun 2021.
- On the Evolution of Neuron Communities in a Deep Learning Architecture. Sakib Mostafa, Debajyoti Mondal [GNN]. 08 Jun 2021.
- 3DB: A Framework for Debugging Computer Vision Models. Guillaume Leclerc, Hadi Salman, Andrew Ilyas, Sai H. Vemprala, Logan Engstrom, ..., Pengchuan Zhang, Shibani Santurkar, Greg Yang, Ashish Kapoor, Aleksander Madry. 07 Jun 2021.
- Improving Compositionality of Neural Networks by Decoding Representations to Inputs. Mike Wu, Noah D. Goodman, Stefano Ermon [AI4CE]. 01 Jun 2021.
- Drop Clause: Enhancing Performance, Interpretability and Robustness of the Tsetlin Machine. Jivitesh Sharma, Rohan Kumar Yadav, Ole-Christoffer Granmo, Lei Jiao [VLM]. 30 May 2021.
- The Definitions of Interpretability and Learning of Interpretable Models. Weishen Pan, Changshui Zhang [FaML, XAI]. 29 May 2021.
- Transparent Model of Unabridged Data (TMUD). Jie Xu, Min Ding. 23 May 2021.
- A Comprehensive Taxonomy for Explainable Artificial Intelligence: A Systematic Survey of Surveys on Methods and Concepts. Gesina Schwalbe, Bettina Finzel [XAI]. 15 May 2021.
- The Low-Dimensional Linear Geometry of Contextualized Word Representations. Evan Hernandez, Jacob Andreas [MILM]. 15 May 2021.
- Cause and Effect: Hierarchical Concept-based Explanation of Neural Networks. Mohammad Nokhbeh Zaeem, Majid Komeili [CML]. 14 May 2021.
- Verification of Size Invariance in DNN Activations using Concept Embeddings. Gesina Schwalbe [3DPC]. 14 May 2021.
- XAI Handbook: Towards a Unified Framework for Explainable AI. Sebastián M. Palacio, Adriano Lucieri, Mohsin Munir, Jörn Hees, Sheraz Ahmed, Andreas Dengel. 14 May 2021.
- Boosting Light-Weight Depth Estimation Via Knowledge Distillation. Junjie Hu, Chenyou Fan, Hualie Jiang, Xiyue Guo, Yuan Gao, Xiangyong Lu, Tin Lun Lam. 13 May 2021.
- Leveraging Sparse Linear Layers for Debuggable Deep Networks. Eric Wong, Shibani Santurkar, Aleksander Madry [FAtt]. 11 May 2021.
- Rationalization through Concepts. Diego Antognini, Boi Faltings [FAtt]. 11 May 2021.
- This Looks Like That... Does it? Shortcomings of Latent Space Prototype Interpretability in Deep Networks. Adrian Hoffmann, Claudio Fanconi, Rahul Rade, Jonas Köhler. 05 May 2021.
- Do Feature Attribution Methods Correctly Attribute Features? Yilun Zhou, Serena Booth, Marco Tulio Ribeiro, J. Shah [FAtt, XAI]. 27 Apr 2021.
- Exploiting Explanations for Model Inversion Attacks. Xu Zhao, Wencan Zhang, Xiao Xiao, Brian Y. Lim [MIACV]. 26 Apr 2021.
- EigenGAN: Layer-Wise Eigen-Learning for GANs. Zhenliang He, Meina Kan, Shiguang Shan [GAN]. 26 Apr 2021.
- Neural Mean Discrepancy for Efficient Out-of-Distribution Detection. Xin Dong, Junfeng Guo, Ang Li, W. Ting, Cong Liu, H. T. Kung [OODD]. 23 Apr 2021.
- Equivariant Wavelets: Fast Rotation and Translation Invariant Wavelet Scattering Transforms. A. Saydjari, D. Finkbeiner. 22 Apr 2021.
- Do Deep Neural Networks Forget Facial Action Units? Exploring the Effects of Transfer Learning in Health Related Facial Expression Recognition. Pooja Prajod, Dominik Schiller, Tobias Huber, Elisabeth André [CVBM]. 15 Apr 2021.
- An Interpretability Illusion for BERT. Tolga Bolukbasi, Adam Pearce, Ann Yuan, Andy Coenen, Emily Reif, Fernanda Viégas, Martin Wattenberg [MILM, FAtt]. 14 Apr 2021.
- Automatic Correction of Internal Units in Generative Neural Networks. A. Tousi, Haedong Jeong, Jiyeon Han, Hwanil Choi, Jaesik Choi [GAN]. 13 Apr 2021.