
Network Dissection: Quantifying Interpretability of Deep Visual Representations
arXiv:1704.05796

19 April 2017
David Bau
Bolei Zhou
A. Khosla
A. Oliva
Antonio Torralba
    MILM, FAtt

Papers citing "Network Dissection: Quantifying Interpretability of Deep Visual Representations"

50 of 787 citing papers shown, most recent first:

Zero-shot Class Unlearning via Layer-wise Relevance Analysis and Neuronal Path Perturbation
  Wenhan Chang, Tianqing Zhu, Ping Xiong, Yufeng Wu, Faqian Guan, Wanlei Zhou. 31 Oct 2024. [MU]

Lightweight Frequency Masker for Cross-Domain Few-Shot Semantic Segmentation
  Jintao Tong, Yixiong Zou, Yuhua Li, Ruixuan Li. 29 Oct 2024.

Exploiting Text-Image Latent Spaces for the Description of Visual Concepts
  Laines Schmalwasser, J. Gawlikowski, Joachim Denzler, Julia Niebling. 23 Oct 2024.

LG-CAV: Train Any Concept Activation Vector with Language Guidance
  Qihan Huang, Jie Song, Mengqi Xue, Han Zhang, Bingde Hu, Huiqiong Wang, Hao Jiang, Xingen Wang, Xiuming Zhang. 14 Oct 2024. [VLM]

Why pre-training is beneficial for downstream classification tasks?
  Xin Jiang, Xu Cheng, Zechao Li. 11 Oct 2024.

Audio Explanation Synthesis with Generative Foundation Models
  Alican Akman, Qiyang Sun, Björn W. Schuller. 10 Oct 2024.

Unlearning-based Neural Interpretations
  Ching Lam Choi, Alexandre Duplessis, Serge Belongie. 10 Oct 2024. [FAtt]

MINER: Mining the Underlying Pattern of Modality-Specific Neurons in Multimodal Large Language Models
  Kaichen Huang, Jiahao Huo, Yibo Yan, Kun Wang, Yutao Yue, Xuming Hu. 07 Oct 2024.

Localizing Memorization in SSL Vision Encoders
  Wenhao Wang, Adam Dziedzic, Michael Backes, Franziska Boenisch. 27 Sep 2024.

Unveiling Ontological Commitment in Multi-Modal Foundation Models
  Mert Keser, Gesina Schwalbe, Niki Amini-Naieni, Matthias Rottmann, Alois Knoll. 25 Sep 2024.

Concept-Based Explanations in Computer Vision: Where Are We and Where Could We Go?
  Jae Hee Lee, Georgii Mikriukov, Gesina Schwalbe, Stefan Wermter, D. Wolter. 20 Sep 2024.

Measuring Sound Symbolism in Audio-visual Models
  Wei-Cheng Tseng, Yi-Jen Shih, David Harwath, Raymond Mooney. 18 Sep 2024.

Trustworthy Conceptual Explanations for Neural Networks in Robot Decision-Making
  Som Sagar, Aditya Taparia, Harsh Mankodiya, Pranav M Bidare, Yifan Zhou, Ransalu Senanayake. 16 Sep 2024. [FAtt]

Optimal ablation for interpretability
  Maximilian Li, Lucas Janson. 16 Sep 2024. [FAtt]

Layerwise Change of Knowledge in Neural Networks
  Xu Cheng, Lei Cheng, Zhaoran Peng, Yang Xu, Tian Han, Quanshi Zhang. 13 Sep 2024. [KELM, FAtt]

Quantifying Emergence in Neural Networks: Insights from Pruning and Training Dynamics
  Faisal AlShinaifi, Zeyad Almoaigel, Johnny Jingze Li, Abdulla Kuleib, Gabriel A. Silva. 03 Sep 2024.

How to Measure Human-AI Prediction Accuracy in Explainable AI Systems
  Sujay Koujalgi, Andrew Anderson, Iyadunni Adenuga, Shikha Soneji, Rupika Dikkala, ..., Leo Soccio, Sourav Panda, Rupak Kumar Das, Margaret Burnett, Jonathan Dodge. 23 Aug 2024.

Smooth InfoMax -- Towards Easier Post-Hoc Interpretability
  Fabian Denoodt, Bart de Boer, José Oramas. 23 Aug 2024.

LCE: A Framework for Explainability of DNNs for Ultrasound Image Based on Concept Discovery
  Weiji Kong, Xun Gong, Juan Wang. 19 Aug 2024.

Improving Network Interpretability via Explanation Consistency Evaluation
  Hefeng Wu, Hao Jiang, Keze Wang, Ziyi Tang, Xianghuan He, Liang Lin. 08 Aug 2024. [FAtt, AAML]

Interpreting Global Perturbation Robustness of Image Models using Axiomatic Spectral Importance Decomposition
  Róisín Luo, James McDermott, C. O'Riordan. 02 Aug 2024. [AAML]

Faithful and Plausible Natural Language Explanations for Image Classification: A Pipeline Approach
  Adam Wojciechowski, Mateusz Lango, Ondrej Dusek. 30 Jul 2024. [FAtt]

Towards the Dynamics of a DNN Learning Symbolic Interactions
  Qihan Ren, Yang Xu, Junpeng Zhang, Yue Xin, Dongrui Liu, Quanshi Zhang. 27 Jul 2024.

Knowledge Mechanisms in Large Language Models: A Survey and Perspective
  Meng Wang, Yunzhi Yao, Ziwen Xu, Shuofei Qiao, Shumin Deng, ..., Yong Jiang, Pengjun Xie, Fei Huang, Huajun Chen, Ningyu Zhang. 22 Jul 2024.

Mask-Free Neuron Concept Annotation for Interpreting Neural Networks in Medical Domain
  Hyeon Bae Kim, Yong Hyun Ahn, Seong Tae Kim. 16 Jul 2024.

Understanding Visual Feature Reliance through the Lens of Complexity
  Thomas Fel, Louis Bethune, Andrew Kyle Lampinen, Thomas Serre, Katherine Hermann. 08 Jul 2024. [FAtt, CoGe]

Towards A Comprehensive Visual Saliency Explanation Framework for AI-based Face Recognition Systems
  Yuhang Lu, Zewei Xu, Touradj Ebrahimi. 08 Jul 2024. [CVBM, FAtt, XAI]

Concept Bottleneck Models Without Predefined Concepts
  Simon Schrodi, Julian Schur, Max Argus, Thomas Brox. 04 Jul 2024.

Towards Compositionality in Concept Learning
  Adam Stein, Aaditya Naik, Yinjun Wu, Mayur Naik, Eric Wong. 26 Jun 2024. [CoGe]

AlignedCut: Visual Concepts Discovery on Brain-Guided Universal Feature Space
  Huzheng Yang, James Gee, Jianbo Shi. 26 Jun 2024. [VOS]

InFiConD: Interactive No-code Fine-tuning with Concept-based Knowledge Distillation
  Jinbin Huang, Wenbin He, Liang Gou, Liu Ren, Chris Bryan. 25 Jun 2024.

AND: Audio Network Dissection for Interpreting Deep Acoustic Models
  Tung-Yu Wu, Yu-Xiang Lin, Tsui-Wei Weng. 24 Jun 2024.

MMNeuron: Discovering Neuron-Level Domain-Specific Interpretation in Multimodal Large Language Model
  Jiahao Huo, Yibo Yan, Boren Hu, Yutao Yue, Xuming Hu. 17 Jun 2024. [LRM, MLLM]

Don't Forget Too Much: Towards Machine Unlearning on Feature Level
  Heng Xu, Tianqing Zhu, Wanlei Zhou, Wei Zhao. 16 Jun 2024. [MU]

LLM-assisted Concept Discovery: Automatically Identifying and Explaining Neuron Functions
  N. Hoang-Xuan, Minh Nhat Vu, My T. Thai. 12 Jun 2024.

Graphical Perception of Saliency-based Model Explanations
  Yayan Zhao, Mingwei Li, Matthew Berger. 11 Jun 2024. [XAI, FAtt]

DiffusionPID: Interpreting Diffusion via Partial Information Decomposition
  Shaurya Dewan, Rushikesh Zawar, Prakanshul Saxena, Yingshan Chang, Andrew F. Luo, Yonatan Bisk. 07 Jun 2024. [DiffM]

EdgeSync: Faster Edge-model Updating via Adaptive Continuous Learning for Video Data Drift
  Peng Zhao, Runchu Dong, Guiqin Wang, Cong Zhao. 05 Jun 2024.

Searching for internal symbols underlying deep learning
  J. H. Lee, Sujith Vijayan. 31 May 2024. [AI4CE]

Applications of interpretable deep learning in neuroimaging: a comprehensive review
  Lindsay Munroe, Mariana da Silva, Faezeh Heidari, I. Grigorescu, Simon Dahan, E. C. Robinson, Maria Deprez, Po-Wah So. 30 May 2024. [AI4CE]

CoSy: Evaluating Textual Explanations of Neurons
  Laura Kopf, P. Bommer, Anna Hedström, Sebastian Lapuschkin, Marina M.-C. Höhne, Kirill Bykov. 30 May 2024.

I Bet You Did Not Mean That: Testing Semantic Importance via Betting
  Jacopo Teneggi, Jeremias Sulam. 29 May 2024. [FAtt]

Locally Testing Model Detections for Semantic Global Concepts
  Franz Motzkus, Georgii Mikriukov, Christian Hellert, Ute Schmid. 27 May 2024.

Crafting Interpretable Embeddings by Asking LLMs Questions
  Vinamra Benara, Chandan Singh, John X. Morris, Richard Antonello, Ion Stoica, Alexander G. Huth, Jianfeng Gao. 26 May 2024.

Linear Explanations for Individual Neurons
  Tuomas P. Oikarinen, Tsui-Wei Weng. 10 May 2024. [FAtt, MILM]

Improving Concept Alignment in Vision-Language Concept Bottleneck Models
  Nithish Muthuchamy Selvaraj, Xiaobao Guo, Bingquan Shen, A. Kong, Alex C. Kot. 03 May 2024. [VLM]

When a Relation Tells More Than a Concept: Exploring and Evaluating Classifier Decisions with CoReX
  Bettina Finzel, Patrick Hilme, Johannes Rabold, Ute Schmid. 02 May 2024.

Explainable AI (XAI) in Image Segmentation in Medicine, Industry, and Beyond: A Survey
  Rokas Gipiškis, Chun-Wei Tsai, Olga Kurasova. 02 May 2024.

Evaluating Concept-based Explanations of Language Models: A Study on Faithfulness and Readability
  Meng Li, Haoran Jin, Ruixuan Huang, Zhihao Xu, Defu Lian, Zijia Lin, Di Zhang, Xiting Wang. 29 Apr 2024. [LRM]

CA-Stream: Attention-based pooling for interpretable image recognition
  Felipe Torres, Hanwei Zhang, R. Sicre, Stéphane Ayache, Yannis Avrithis. 23 Apr 2024.