Interpretability-Aware Vision Transformer
Yao Qiang, Chengyin Li, Prashant Khanduri, D. Zhu
ViT · arXiv:2309.08035 · 14 September 2023

Papers citing "Interpretability-Aware Vision Transformer"

Showing 50 of 67 citing papers.

Explainable and Interpretable Multimodal Large Language Models: A Comprehensive Survey
Yunkai Dang, Kaichen Huang, Jiahao Huo, Yibo Yan, Shijie Huang, ..., Kun Wang, Yong Liu, Jing Shao, Hui Xiong, Xuming Hu
LRM · 17 citations · 03 Dec 2024

Explainability in AI Based Applications: A Framework for Comparing Different Techniques
Arne Grobrugge, Nidhi Mishra, Johannes Jakubik, G. Satzger
1 citation · 28 Oct 2024

LVLM-Interpret: An Interpretability Tool for Large Vision-Language Models
Gabriela Ben-Melech Stan, Estelle Aflalo, R. Y. Rohekar, Anahita Bhiwandiwalla, Shao-Yen Tseng, Matthew Lyle Olson, Yaniv Gurwicz, Chenfei Wu, Nan Duan, Vasudev Lal
7 citations · 03 Apr 2024

Attention Guided CAM: Visual Explanations of Vision Transformer Guided by Self-Attention
Saebom Leem, Hyunseok Seo
ViT · 14 citations · 07 Feb 2024

AutoProSAM: Automated Prompting SAM for 3D Multi-Organ Segmentation
Chengyin Li, Prashant Khanduri, Yao Qiang, Rafi Ibn Sultan, I. Chetty, D. Zhu
MedIm · 13 citations · 28 Aug 2023

Fairness-aware Vision Transformer via Debiased Self-Attention
Yao Qiang, Chengyin Li, Prashant Khanduri, D. Zhu
ViT · 9 citations · 31 Jan 2023

Negative Flux Aggregation to Estimate Feature Attributions
X. Li, Deng Pan, Chengyin Li, Yao Qiang, D. Zhu
FAtt · 6 citations · 17 Jan 2023

Learning Compact Features via In-Training Representation Alignment
X. Li, Xiangrui Li, Deng Pan, Yao Qiang, D. Zhu
OOD · 3 citations · 23 Nov 2022

DiMBERT: Learning Vision-Language Grounded Representations with Disentangled Multimodal-Attention
Fenglin Liu, Xian Wu, Shen Ge, Xuancheng Ren, Wei Fan, Xu Sun, Yuexian Zou
VLM · 12 citations · 28 Oct 2022

OpenXAI: Towards a Transparent Evaluation of Model Explanations
Chirag Agarwal, Dan Ley, Satyapriya Krishna, Eshika Saxena, Martin Pawelczyk, Nari Johnson, Isha Puri, Marinka Zitnik, Himabindu Lakkaraju
XAI · 145 citations · 22 Jun 2022

B-cos Networks: Alignment is All We Need for Interpretability
Moritz D Boehle, Mario Fritz, Bernt Schiele
86 citations · 20 May 2022

Grad-SAM: Explaining Transformers via Gradient Self-Attention Maps
Oren Barkan, Edan Hauon, Avi Caciularu, Ori Katz, Itzik Malkiel, Omri Armstrong, Noam Koenigstein
38 citations · 23 Apr 2022

ViTOL: Vision Transformer for Weakly Supervised Object Localization
Saurav Gupta, Sourav Lakhotia, Abhay Rawat, Rahul Tallamraju
WSOL · 21 citations · 14 Apr 2022

LCTR: On Awakening the Local Continuity of Transformer for Weakly Supervised Object Localization
Zhiwei Chen, Changan Wang, Yabiao Wang, Guannan Jiang, Yunhang Shen, Ying Tai, Chengjie Wang, Wei Zhang, Liujuan Cao
WSOL · ViT · 46 citations · 10 Dec 2021

BEVT: BERT Pretraining of Video Transformers
Rui Wang, Dongdong Chen, Zuxuan Wu, Yinpeng Chen, Xiyang Dai, Mengchen Liu, Yu-Gang Jiang, Luowei Zhou, Lu Yuan
ViT · 207 citations · 02 Dec 2021

Improving Deep Learning Interpretability by Saliency Guided Training
Aya Abdelsalam Ismail, H. C. Bravo, Soheil Feizi
FAtt · 83 citations · 29 Nov 2021

Learning Interpretation with Explainable Knowledge Distillation
Raed Alharbi, Minh Nhat Vu, My T. Thai
15 citations · 12 Nov 2021

Video Swin Transformer
Ze Liu, Jia Ning, Yue Cao, Yixuan Wei, Zheng Zhang, Stephen Lin, Han Hu
ViT · 1,481 citations · 24 Jun 2021

IA-RED²: Interpretability-Aware Redundancy Reduction for Vision Transformers
Bowen Pan, Yikang Shen, Yi Ding, Zhangyang Wang, Rogerio Feris, A. Oliva
VLM · ViT · 160 citations · 23 Jun 2021

The effectiveness of feature attribution methods and its correlation with automatic evaluation scores
Giang Nguyen, Daeyoung Kim, Anh Totti Nguyen
FAtt · 89 citations · 31 May 2021

Twins: Revisiting the Design of Spatial Attention in Vision Transformers
Xiangxiang Chu, Zhi Tian, Yuqing Wang, Bo Zhang, Haibing Ren, Xiaolin K. Wei, Huaxia Xia, Chunhua Shen
ViT · 1,017 citations · 28 Apr 2021

VidTr: Video Transformer Without Convolutions
Yanyi Zhang, Xinyu Li, Chunhui Liu, Bing Shuai, Yi Zhu, Biagio Brattoli, Hao Chen, I. Marsic, Joseph Tighe
ViT · 196 citations · 23 Apr 2021

Going deeper with Image Transformers
Hugo Touvron, Matthieu Cord, Alexandre Sablayrolles, Gabriel Synnaeve, Hervé Jégou
ViT · 1,006 citations · 31 Mar 2021

TS-CAM: Token Semantic Coupled Attention Map for Weakly Supervised Object Localization
Wei Gao, Fang Wan, Xingjia Pan, Zhiliang Peng, Qi Tian, Zhenjun Han, Bolei Zhou, QiXiang Ye
ViT · WSOL · 202 citations · 27 Mar 2021

Swin Transformer: Hierarchical Vision Transformer using Shifted Windows
Ze Liu, Yutong Lin, Yue Cao, Han Hu, Yixuan Wei, Zheng Zhang, Stephen Lin, B. Guo
ViT · 21,347 citations · 25 Mar 2021

DeepViT: Towards Deeper Vision Transformer
Daquan Zhou, Bingyi Kang, Xiaojie Jin, Linjie Yang, Xiaochen Lian, Zihang Jiang, Qibin Hou, Jiashi Feng
ViT · 518 citations · 22 Mar 2021

Do Input Gradients Highlight Discriminative Features?
Harshay Shah, Prateek Jain, Praneeth Netrapalli
AAML · FAtt · 59 citations · 25 Feb 2021

Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions
Wenhai Wang, Enze Xie, Xiang Li, Deng-Ping Fan, Kaitao Song, Ding Liang, Tong Lu, Ping Luo, Ling Shao
ViT · 3,709 citations · 24 Feb 2021

Tokens-to-Token ViT: Training Vision Transformers from Scratch on ImageNet
Li-xin Yuan, Yunpeng Chen, Tao Wang, Weihao Yu, Yujun Shi, Zihang Jiang, Francis E. H. Tay, Jiashi Feng, Shuicheng Yan
ViT · 1,935 citations · 28 Jan 2021

Training data-efficient image transformers & distillation through attention
Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou
ViT · 6,757 citations · 23 Dec 2020

Transformer Interpretability Beyond Attention Visualization
Hila Chefer, Shir Gur, Lior Wolf
664 citations · 17 Dec 2020

An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, ..., Matthias Minderer, G. Heigold, Sylvain Gelly, Jakob Uszkoreit, N. Houlsby
ViT · 40,739 citations · 22 Oct 2020

A Framework to Learn with Interpretation
Jayneel Parekh, Pavlo Mozharovskyi, Florence d'Alché-Buc
AI4CE · FAtt · 30 citations · 19 Oct 2020

End-to-End Object Detection with Transformers
Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, Sergey Zagoruyko
ViT · 3DV · PINN · 13,002 citations · 26 May 2020

Quantifying Attention Flow in Transformers
Samira Abnar, Willem H. Zuidema
794 citations · 02 May 2020

Self-Attention Attribution: Interpreting Information Interactions Inside Transformer
Y. Hao, Li Dong, Furu Wei, Ke Xu
ViT · 223 citations · 23 Apr 2020

Interpretability of machine learning based prediction models in healthcare
Gregor Stiglic, Primož Kocbek, Nino Fijačko, Marinka Zitnik, K. Verbert, Leona Cilar
AI4CE · 383 citations · 20 Feb 2020

Understanding and Improving Knowledge Distillation
Jiaxi Tang, Rakesh Shivanna, Zhe Zhao, Dong Lin, Anima Singh, Ed H. Chi, Sagar Jain
131 citations · 10 Feb 2020

Concept Whitening for Interpretable Image Recognition
Zhi Chen, Yijie Bei, Cynthia Rudin
FAtt · 320 citations · 05 Feb 2020

Towards Explainable Deep Neural Networks (xDNN)
Plamen Angelov, Eduardo Soares
AAML · 261 citations · 05 Dec 2019

Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI
Alejandro Barredo Arrieta, Natalia Díaz Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, ..., S. Gil-Lopez, Daniel Molina, Richard Benjamins, Raja Chatila, Francisco Herrera
XAI · 6,251 citations · 22 Oct 2019

Understanding Deep Networks via Extremal Perturbations and Smooth Masks
Ruth C. Fong, Mandela Patrick, Andrea Vedaldi
AAML · 415 citations · 18 Oct 2019

Is Attention Interpretable?
Sofia Serrano, Noah A. Smith
683 citations · 09 Jun 2019

Saliency Learning: Teaching the Model Where to Pay Attention
Reza Ghaeini, Xiaoli Z. Fern, Hamed Shahbazi, Prasad Tadepalli
FAtt · XAI · 30 citations · 22 Feb 2019

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova
VLM · SSL · SSeg · 94,729 citations · 11 Oct 2018

Sanity Checks for Saliency Maps
Julius Adebayo, Justin Gilmer, M. Muelly, Ian Goodfellow, Moritz Hardt, Been Kim
FAtt · AAML · XAI · 1,963 citations · 08 Oct 2018

A Benchmark for Interpretability Methods in Deep Neural Networks
Sara Hooker, D. Erhan, Pieter-Jan Kindermans, Been Kim
FAtt · UQCV · 681 citations · 28 Jun 2018

This Looks Like That: Deep Learning for Interpretable Image Recognition
Chaofan Chen, Oscar Li, Chaofan Tao, A. Barnett, Jonathan Su, Cynthia Rudin
1,182 citations · 27 Jun 2018

On the Robustness of Interpretability Methods
David Alvarez-Melis, Tommi Jaakkola
526 citations · 21 Jun 2018

RISE: Randomized Input Sampling for Explanation of Black-box Models
Vitali Petsiuk, Abir Das, Kate Saenko
FAtt · 1,169 citations · 19 Jun 2018