Interpretability-Aware Vision Transformer

arXiv:2309.08035 · 14 September 2023
Yao Qiang, Chengyin Li, Prashant Khanduri, D. Zhu
Community tag: ViT

Papers citing "Interpretability-Aware Vision Transformer"

17 of 67 papers shown:
  • Distilling a Neural Network Into a Soft Decision Tree (Nicholas Frosst, Geoffrey E. Hinton; 27 Nov 2017)
  • Beyond Sparsity: Tree Regularization of Deep Models for Interpretability (Mike Wu, M. C. Hughes, S. Parbhoo, Maurizio Zazzi, Volker Roth, Finale Doshi-Velez; 16 Nov 2017) [AI4CE]
  • The (Un)reliability of saliency methods (Pieter-Jan Kindermans, Sara Hooker, Julius Adebayo, Maximilian Alber, Kristof T. Schütt, Sven Dähne, D. Erhan, Been Kim; 02 Nov 2017) [FAtt, XAI]
  • Interpretable Convolutional Neural Networks (Quanshi Zhang, Ying Nian Wu, Song-Chun Zhu; 02 Oct 2017) [FAtt]
  • Attention Is All You Need (Ashish Vaswani, Noam M. Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan Gomez, Lukasz Kaiser, Illia Polosukhin; 12 Jun 2017) [3DV]
  • Contextual Explanation Networks (Maruan Al-Shedivat, Kumar Avinava Dubey, Eric Xing; 29 May 2017) [CML]
  • A Unified Approach to Interpreting Model Predictions (Scott M. Lundberg, Su-In Lee; 22 May 2017) [FAtt]
  • Learning Important Features Through Propagating Activation Differences (Avanti Shrikumar, Peyton Greenside, A. Kundaje; 10 Apr 2017) [FAtt]
  • Interpretable Learning for Self-Driving Cars by Visualizing Causal Attention (Jinkyu Kim, John F. Canny; 30 Mar 2017) [FAtt, XAI, OOD, MILM, CML]
  • Right for the Right Reasons: Training Differentiable Models by Constraining their Explanations (A. Ross, M. C. Hughes, Finale Doshi-Velez; 10 Mar 2017) [FAtt]
  • Axiomatic Attribution for Deep Networks (Mukund Sundararajan, Ankur Taly, Qiqi Yan; 04 Mar 2017) [OOD, FAtt]
  • Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization (Ramprasaath R. Selvaraju, Michael Cogswell, Abhishek Das, Ramakrishna Vedantam, Devi Parikh, Dhruv Batra; 07 Oct 2016) [FAtt]
  • "Why Should I Trust You?": Explaining the Predictions of Any Classifier (Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin; 16 Feb 2016) [FAtt, FaML]
  • Learning Deep Features for Discriminative Localization (Bolei Zhou, A. Khosla, Àgata Lapedriza, A. Oliva, Antonio Torralba; 14 Dec 2015) [SSL, SSeg, FAtt]
  • Distilling the Knowledge in a Neural Network (Geoffrey E. Hinton, Oriol Vinyals, J. Dean; 09 Mar 2015) [FedML]
  • Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps (Karen Simonyan, Andrea Vedaldi, Andrew Zisserman; 20 Dec 2013) [FAtt]
  • A Kernel Method for the Two-Sample Problem (Arthur Gretton, Karsten Borgwardt, Malte J. Rasch, Bernhard Schölkopf, Alex Smola; 15 May 2008)