How Important Is a Neuron?
Kedar Dhamdhere, Mukund Sundararajan, Qiqi Yan
arXiv 1805.12233, 30 May 2018. Tags: FAtt, GNN

Papers citing "How Important Is a Neuron?" (27 of 27 papers shown)
Discovering Influential Neuron Path in Vision Transformers
Yifan Wang, Yifei Liu, Yingdong Shi, Chong Li, Anqi Pang, Sibei Yang, Jingyi Yu, Kan Ren
12 Mar 2025. Tags: ViT

SRViT: Vision Transformers for Estimating Radar Reflectivity from Satellite Observations at Scale
Jason Stock, Kyle Hilburn, Imme Ebert-Uphoff, Charles Anderson
20 Jun 2024

What Sketch Explainability Really Means for Downstream Tasks
Hmrishav Bandyopadhyay, Pinaki Nath Chowdhury, A. Bhunia, Aneeshan Sain, Tao Xiang, Yi-Zhe Song
14 Mar 2024

Boosting Adversarial Transferability via Fusing Logits of Top-1 Decomposed Feature
Juanjuan Weng, Zhiming Luo, Dazhen Lin, Shaozi Li, Zhun Zhong
02 May 2023. Tags: AAML, FedML

On the Robustness of Explanations of Deep Neural Network Models: A Survey
Amlan Jyoti, Karthik Balaji Ganesh, Manoj Gayala, Nandita Lakshmi Tunuguntla, Sandesh Kamath, V. Balasubramanian
09 Nov 2022. Tags: XAI, FAtt, AAML

Adapting to Non-Centered Languages for Zero-shot Multilingual Translation
Zhi Qu, Taro Watanabe
09 Sep 2022

Debiasing Deep Chest X-Ray Classifiers using Intra- and Post-processing Methods
Ricards Marcinkevics, Ece Ozkan, Julia E. Vogt
26 Jul 2022

Visual Explanations from Deep Networks via Riemann-Stieltjes Integrated Gradient-based Localization
Mirtha Lucas, Miguel A. Lerma, J. Furst, D. Raicu
22 May 2022. Tags: FAtt

Improving Adversarial Transferability via Neuron Attribution-Based Attacks
Jianping Zhang, Weibin Wu, Jen-tse Huang, Yizhan Huang, Wenxuan Wang, Yuxin Su, Michael R. Lyu
31 Mar 2022. Tags: AAML

Sparsity-Inducing Categorical Prior Improves Robustness of the Information Bottleneck
Anirban Samaddar, Sandeep Madireddy, Prasanna Balaprakash, Tapabrata Maiti, Gustavo de los Campos, Ian Fischer
04 Mar 2022

Identifying Suitable Tasks for Inductive Transfer Through the Analysis of Feature Attributions
Alexander Pugantsov, R. McCreadie
02 Feb 2022

Deeply Explain CNN via Hierarchical Decomposition
Ming-Ming Cheng, Peng-Tao Jiang, Linghao Han, Liang Wang, Philip H. S. Torr
23 Jan 2022. Tags: FAtt

Interpretable Low-Resource Legal Decision Making
R. Bhambhoria, Hui Liu, Samuel Dahan, Xiao-Dan Zhu
01 Jan 2022. Tags: ELM, AILaw

TorchEsegeta: Framework for Interpretability and Explainability of Image-based Deep Learning Models
S. Chatterjee, Arnab Das, Chirag Mandal, Budhaditya Mukhopadhyay, Manish Vipinraj, Aniruddh Shukla, R. Rao, Chompunuch Sarasaen, Oliver Speck, A. Nürnberger
16 Oct 2021. Tags: MedIm

Cartoon Explanations of Image Classifiers
Stefan Kolek, Duc Anh Nguyen, Ron Levie, Joan Bruna, Gitta Kutyniok
07 Oct 2021. Tags: FAtt

Neuron-level Interpretation of Deep NLP Models: A Survey
Hassan Sajjad, Nadir Durrani, Fahim Dalvi
30 Aug 2021. Tags: MILM, AI4CE

Robust and Interpretable Temporal Convolution Network for Event Detection in Lung Sound Recordings
Tharindu Fernando, Sridha Sridharan, Simon Denman, H. Ghaemmaghami, Clinton Fookes
30 Jun 2021

Software for Dataset-wide XAI: From Local Explanations to Global Insights with Zennit, CoRelAy, and ViRelAy
Christopher J. Anders, David Neumann, Wojciech Samek, K. Müller, Sebastian Lapuschkin
24 Jun 2021

Explaining in Style: Training a GAN to explain a classifier in StyleSpace
Oran Lang, Yossi Gandelsman, Michal Yarom, Yoav Wald, G. Elidan, ..., William T. Freeman, Phillip Isola, Amir Globerson, Michal Irani, Inbar Mosseri
27 Apr 2021. Tags: GAN

Robust Models Are More Interpretable Because Attributions Look Normal
Zifan Wang, Matt Fredrikson, Anupam Datta
20 Mar 2021. Tags: OOD, FAtt

MIMIC-IF: Interpretability and Fairness Evaluation of Deep Learning Models on MIMIC-IV Dataset
Chuizheng Meng, Loc Trinh, Nan Xu, Yan Liu
12 Feb 2021

Interpretability and Explainability: A Machine Learning Zoo Mini-tour
Ricards Marcinkevics, Julia E. Vogt
03 Dec 2020. Tags: XAI

Captum: A unified and generic model interpretability library for PyTorch
Narine Kokhlikyan, Vivek Miglani, Miguel Martin, Edward Wang, B. Alsallakh, ..., Alexander Melnikov, Natalia Kliushkina, Carlos Araya, Siqi Yan, Orion Reblitz-Richardson
16 Sep 2020. Tags: FAtt

Selectivity considered harmful: evaluating the causal impact of class selectivity in DNNs
Matthew L. Leavitt, Ari S. Morcos
03 Mar 2020

Neuron Shapley: Discovering the Responsible Neurons
Amirata Ghorbani, James Zou
23 Feb 2020. Tags: FAtt, TDI

From deep learning to mechanistic understanding in neuroscience: the structure of retinal prediction
Hidenori Tanaka, Aran Nayebi, Niru Maheswaranathan, Lane T. McIntosh, S. Baccus, Surya Ganguli
12 Dec 2019. Tags: FAtt

Convolutional Neural Networks for Sentence Classification
Yoon Kim
25 Aug 2014. Tags: AILaw, VLM