ResearchTrend.AI

T-TAME: Trainable Attention Mechanism for Explaining Convolutional Networks and Vision Transformers

7 March 2024
Mariano V. Ntrougkas
Nikolaos Gkalelis
Vasileios Mezaris
Topics: FAtt, ViT

Papers citing "T-TAME: Trainable Attention Mechanism for Explaining Convolutional Networks and Vision Transformers"

4 / 4 papers shown
| Title | Authors | Topics | Citations | Date |
|---|---|---|---|---|
| Are Transformers More Robust Than CNNs? | Yutong Bai, Jieru Mei, Alan Yuille, Cihang Xie | ViT, AAML | 257 | 10 Nov 2021 |
| Learn To Pay Attention | Saumya Jetley, Nicholas A. Lord, Namhoon Lee, Philip H. S. Torr | | 437 | 06 Apr 2018 |
| A disciplined approach to neural network hyper-parameters: Part 1 -- learning rate, batch size, momentum, and weight decay | L. Smith | | 1,019 | 26 Mar 2018 |
| ImageNet Large Scale Visual Recognition Challenge | Olga Russakovsky, Jia Deng, Hao Su, J. Krause, S. Satheesh, ..., A. Karpathy, A. Khosla, Michael S. Bernstein, Alexander C. Berg, Li Fei-Fei | VLM, ObjD | 39,198 | 01 Sep 2014 |