Discovering Influential Neuron Path in Vision Transformers

12 March 2025
Yifan Wang
Yifei Liu
Yingdong Shi
Chong Li
Anqi Pang
Sibei Yang
Jingyi Yu
Kan Ren
ViT
arXiv: 2503.09046

Papers citing "Discovering Influential Neuron Path in Vision Transformers"

45 / 45 papers shown
Uncovering Memorization Effect in the Presence of Spurious Correlations
Chenyu You
Haocheng Dai
Yifei Min
Jasjeet Sekhon
S. Joshi
James S. Duncan
101
3
0
01 Jan 2025
Mixture-of-Depths: Dynamically allocating compute in transformer-based language models
David Raposo
Sam Ritter
Blake A. Richards
Timothy Lillicrap
Peter C. Humphreys
Adam Santoro
MoE
78
84
0
02 Apr 2024
DISCOVER: Making Vision Networks Interpretable via Competition and Dissection
Konstantinos P. Panousis
S. Chatzis
66
5
0
07 Oct 2023
Rigorously Assessing Natural Language Explanations of Neurons
Jing-ling Huang
Atticus Geiger
Karel D'Oosterlinck
Zhengxuan Wu
Christopher Potts
MILM
62
27
0
19 Sep 2023
Scale Alone Does not Improve Mechanistic Interpretability in Vision Models
Roland S. Zimmermann
Thomas Klein
Wieland Brendel
58
15
0
11 Jul 2023
Neuron to Graph: Interpreting Language Model Neurons at Scale
Alex Foote
Neel Nanda
Esben Kran
Ioannis Konstas
Shay B. Cohen
Fazl Barez
MILM
53
26
0
31 May 2023
DINOv2: Learning Robust Visual Features without Supervision
Maxime Oquab
Timothée Darcet
Théo Moutakanni
Huy Q. Vo
Marc Szafraniec
...
Hervé Jégou
Julien Mairal
Patrick Labatut
Armand Joulin
Piotr Bojanowski
VLM
CLIP
SSL
284
3,340
0
14 Apr 2023
Evaluating Neuron Interpretation Methods of NLP Models
Yimin Fan
Fahim Dalvi
Nadir Durrani
Hassan Sajjad
54
8
0
30 Jan 2023
Scalable Diffusion Models with Transformers
William S. Peebles
Saining Xie
GNN
77
2,298
0
19 Dec 2022
What do Vision Transformers Learn? A Visual Exploration
Amin Ghiasi
Hamid Kazemi
Eitan Borgnia
Steven Reich
Manli Shu
Micah Goldblum
A. Wilson
Tom Goldstein
ViT
64
61
0
13 Dec 2022
WeightedSHAP: analyzing and improving Shapley based feature attributions
Yongchan Kwon
James Zou
TDI
FAtt
60
36
0
27 Sep 2022
CLIP-Dissect: Automatic Description of Neuron Representations in Deep Vision Networks
Tuomas P. Oikarinen
Tsui-Wei Weng
VLM
46
87
1
23 Apr 2022
Natural Language Descriptions of Deep Visual Features
Evan Hernandez
Sarah Schwettmann
David Bau
Teona Bagashvili
Antonio Torralba
Jacob Andreas
MILM
288
121
0
26 Jan 2022
Vision Transformer Slimming: Multi-Dimension Searching in Continuous Optimization Space
Arnav Chavan
Zhiqiang Shen
Zhuang Liu
Zechun Liu
Kwang-Ting Cheng
Eric P. Xing
ViT
79
70
0
03 Jan 2022
Masked Autoencoders Are Scalable Vision Learners
Kaiming He
Xinlei Chen
Saining Xie
Yanghao Li
Piotr Dollár
Ross B. Girshick
ViT
TPM
427
7,705
0
11 Nov 2021
Fast Model Editing at Scale
E. Mitchell
Charles Lin
Antoine Bosselut
Chelsea Finn
Christopher D. Manning
KELM
322
364
0
21 Oct 2021
Intriguing Properties of Vision Transformers
Muzammal Naseer
Kanchana Ranasinghe
Salman Khan
Munawar Hayat
Fahad Shahbaz Khan
Ming-Hsuan Yang
ViT
304
641
0
21 May 2021
Vision Transformers are Robust Learners
Sayak Paul
Pin-Yu Chen
ViT
57
312
0
17 May 2021
Emerging Properties in Self-Supervised Vision Transformers
Mathilde Caron
Hugo Touvron
Ishan Misra
Hervé Jégou
Julien Mairal
Piotr Bojanowski
Armand Joulin
613
6,029
0
29 Apr 2021
Knowledge Neurons in Pretrained Transformers
Damai Dai
Li Dong
Y. Hao
Zhifang Sui
Baobao Chang
Furu Wei
KELM
MU
79
451
0
18 Apr 2021
Learning Transferable Visual Models From Natural Language Supervision
Alec Radford
Jong Wook Kim
Chris Hallacy
Aditya A. Ramesh
Gabriel Goh
...
Amanda Askell
Pamela Mishkin
Jack Clark
Gretchen Krueger
Ilya Sutskever
CLIP
VLM
824
29,167
0
26 Feb 2021
On Explainability of Graph Neural Networks via Subgraph Explorations
Hao Yuan
Haiyang Yu
Jie Wang
Kang Li
Shuiwang Ji
FAtt
78
389
0
09 Feb 2021
Transformer Feed-Forward Layers Are Key-Value Memories
Mor Geva
R. Schuster
Jonathan Berant
Omer Levy
KELM
130
827
0
29 Dec 2020
Transformer Interpretability Beyond Attention Visualization
Hila Chefer
Shir Gur
Lior Wolf
124
659
0
17 Dec 2020
Influence Patterns for Explaining Information Flow in BERT
Kaiji Lu
Zifan Wang
Piotr (Peter) Mardziel
Anupam Datta
GNN
62
16
0
02 Nov 2020
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
Alexey Dosovitskiy
Lucas Beyer
Alexander Kolesnikov
Dirk Weissenborn
Xiaohua Zhai
...
Matthias Minderer
G. Heigold
Sylvain Gelly
Jakob Uszkoreit
N. Houlsby
ViT
553
40,739
0
22 Oct 2020
Analyzing Individual Neurons in Pre-trained Language Models
Nadir Durrani
Hassan Sajjad
Fahim Dalvi
Yonatan Belinkov
MILM
55
104
0
06 Oct 2020
Influence Paths for Characterizing Subject-Verb Number Agreement in LSTM Language Models
Kaiji Lu
Piotr (Peter) Mardziel
Klas Leino
Matt Fredrikson
Anupam Datta
60
10
0
03 May 2020
Neuron Shapley: Discovering the Responsible Neurons
Amirata Ghorbani
James Zou
FAtt
TDI
54
111
0
23 Feb 2020
A Multiscale Visualization of Attention in the Transformer Model
Jesse Vig
ViT
77
577
0
12 Jun 2019
What Is One Grain of Sand in the Desert? Analyzing Individual Neurons in Deep NLP Models
Fahim Dalvi
Nadir Durrani
Hassan Sajjad
Yonatan Belinkov
A. Bau
James R. Glass
MILM
54
188
0
21 Dec 2018
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Jacob Devlin
Ming-Wei Chang
Kenton Lee
Kristina Toutanova
VLM
SSL
SSeg
1.6K
94,511
0
11 Oct 2018
How Important Is a Neuron?
Kedar Dhamdhere
Mukund Sundararajan
Qiqi Yan
FAtt
GNN
53
130
0
30 May 2018
Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)
Been Kim
Martin Wattenberg
Justin Gilmer
Carrie J. Cai
James Wexler
F. Viégas
Rory Sayres
FAtt
205
1,837
0
30 Nov 2017
Grad-CAM++: Improved Visual Explanations for Deep Convolutional Networks
Aditya Chattopadhyay
Anirban Sarkar
Prantik Howlader
V. Balasubramanian
FAtt
101
2,289
0
30 Oct 2017
SmoothGrad: removing noise by adding noise
D. Smilkov
Nikhil Thorat
Been Kim
F. Viégas
Martin Wattenberg
FAtt
ODL
199
2,221
0
12 Jun 2017
Attention Is All You Need
Ashish Vaswani
Noam M. Shazeer
Niki Parmar
Jakob Uszkoreit
Llion Jones
Aidan Gomez
Lukasz Kaiser
Illia Polosukhin
3DV
654
130,942
0
12 Jun 2017
Network Dissection: Quantifying Interpretability of Deep Visual Representations
David Bau
Bolei Zhou
A. Khosla
A. Oliva
Antonio Torralba
MILM
FAtt
140
1,514
1
19 Apr 2017
Understanding Black-box Predictions via Influence Functions
Pang Wei Koh
Percy Liang
TDI
169
2,878
0
14 Mar 2017
Axiomatic Attribution for Deep Networks
Mukund Sundararajan
Ankur Taly
Qiqi Yan
OOD
FAtt
175
5,968
0
04 Mar 2017
Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization
Ramprasaath R. Selvaraju
Michael Cogswell
Abhishek Das
Ramakrishna Vedantam
Devi Parikh
Dhruv Batra
FAtt
266
19,929
0
07 Oct 2016
Learning Deep Features for Discriminative Localization
Bolei Zhou
A. Khosla
Àgata Lapedriza
A. Oliva
Antonio Torralba
SSL
SSeg
FAtt
239
9,305
0
14 Dec 2015
Visualizing and Understanding Recurrent Networks
A. Karpathy
Justin Johnson
Li Fei-Fei
HAI
111
1,100
0
05 Jun 2015
Visualizing and Understanding Neural Models in NLP
Jiwei Li
Xinlei Chen
Eduard H. Hovy
Dan Jurafsky
MILM
FAtt
75
707
0
02 Jun 2015
Visualizing and Understanding Convolutional Networks
Matthew D. Zeiler
Rob Fergus
FAtt
SSL
552
15,874
0
12 Nov 2013