ResearchTrend.AI
Paper 2103.14586: Cited By
Understanding Robustness of Transformers for Image Classification

26 March 2021
Srinadh Bhojanapalli, Ayan Chakrabarti, Daniel Glasner, Daliang Li, Thomas Unterthiner, Andreas Veit
    ViT

Papers citing "Understanding Robustness of Transformers for Image Classification"

36 / 86 papers shown

1. Self-Supervised Learning for Videos: A Survey (18 Jun 2022)
   Madeline Chantry Schiappa, Y. S. Rawat, M. Shah [SSL]
2. One-Pixel Shortcut: on the Learning Preference of Deep Neural Networks (24 May 2022)
   Shutong Wu, Sizhe Chen, Cihang Xie, X. Huang [AAML]
3. Degradation-Aware Unfolding Half-Shuffle Transformer for Spectral Compressive Imaging (20 May 2022)
   Yuanhao Cai, Jing Lin, Haoqian Wang, Xin Yuan, Henghui Ding, Yulun Zhang, Radu Timofte, Luc Van Gool
4. It's DONE: Direct ONE-shot learning with quantile weight imprinting (28 Apr 2022)
   Kazufumi Hosoda, Keigo Nishida, S. Seno, Tomohiro Mashita, H. Kashioka, I. Ohzawa
5. Deeper Insights into the Robustness of ViTs towards Common Corruptions (26 Apr 2022)
   Rui Tian, Zuxuan Wu, Qi Dai, Han Hu, Yu-Gang Jiang [ViT, AAML]
6. Does Robustness on ImageNet Transfer to Downstream Tasks? (08 Apr 2022)
   Yutaro Yamada, Mayu Otani [OOD]
7. Give Me Your Attention: Dot-Product Attention Considered Harmful for Adversarial Patch Robustness (25 Mar 2022)
   Giulio Lovisotto, Nicole Finnie, Mauricio Muñoz, Chaithanya Kumar Mummadi, J. H. Metzen [AAML, ViT]
8. InvPT: Inverted Pyramid Multi-task Transformer for Dense Scene Understanding (15 Mar 2022)
   Hanrong Ye, Dan Xu [ViT]
9. Joint rotational invariance and adversarial training of a dual-stream Transformer yields state of the art Brain-Score for Area V4 (08 Mar 2022)
   William Berrios, Arturo Deza [MedIm, ViT]
10. 3D Common Corruptions and Data Augmentation (02 Mar 2022)
    Oğuzhan Fatih Kar, Teresa Yeo, Andrei Atanov, Amir Zamir [3DPC]
11. How Do Vision Transformers Work? (14 Feb 2022)
    Namuk Park, Songkuk Kim [ViT]
12. Are Transformers More Robust? Towards Exact Robustness Verification for Transformers (08 Feb 2022)
    B. Liao, Chih-Hong Cheng, Hasan Esen, Alois Knoll [AAML]
13. Transformers in Self-Supervised Monocular Depth Estimation with Unknown Camera Intrinsics (07 Feb 2022)
    Arnav Varma, Hemang Chawla, Bahram Zonooz, Elahe Arani [ViT, MDE]
14. GLPanoDepth: Global-to-Local Panoramic Depth Estimation (06 Feb 2022)
    Jia-Chi Bai, Shuichang Lai, Haoyu Qin, Jie Guo, Yanwen Guo [ViT, MDE]
15. Query Efficient Decision Based Sparse Attacks Against Black-Box Deep Learning Models (31 Jan 2022)
    Viet Vo, Ehsan Abbasnejad, D. Ranasinghe [AAML]
16. Video Transformers: A Survey (16 Jan 2022)
    Javier Selva, A. S. Johansen, Sergio Escalera, Kamal Nasrollahi, T. Moeslund, Albert Clapés [ViT]
17. Decision-based Black-box Attack Against Vision Transformers via Patch-wise Adversarial Removal (07 Dec 2021)
    Yucheng Shi, Yahong Han, Yu-an Tan, Xiaohui Kuang
18. DAFormer: Improving Network Architectures and Training Strategies for Domain-Adaptive Semantic Segmentation (29 Nov 2021)
    Lukas Hoyer, Dengxin Dai, Luc Van Gool [AI4CE]
19. OOD-CV: A Benchmark for Robustness to Out-of-Distribution Shifts of Individual Nuisances in Natural Images (29 Nov 2021)
    Bingchen Zhao, Shaozuo Yu, Wufei Ma, M. Yu, Shenxiao Mei, Angtian Wang, Ju He, Alan Yuille, Adam Kortylewski
20. Are Vision Transformers Robust to Patch Perturbations? (20 Nov 2021)
    Jindong Gu, Volker Tresp, Yao Qin [AAML, ViT]
21. Discrete Representations Strengthen Vision Transformer Robustness (20 Nov 2021)
    Chengzhi Mao, Lu Jiang, Mostafa Dehghani, Carl Vondrick, Rahul Sukthankar, Irfan Essa [ViT]
22. Are Transformers More Robust Than CNNs? (10 Nov 2021)
    Yutong Bai, Jieru Mei, Alan Yuille, Cihang Xie [ViT, AAML]
23. Are we ready for a new paradigm shift? A Survey on Visual Deep MLP (07 Nov 2021)
    Ruiyang Liu, Hai-Tao Zheng, Li Tao, Dun Liang, Haitao Zheng
24. Blending Anti-Aliasing into Vision Transformer (28 Oct 2021)
    Shengju Qian, Hao Shao, Yi Zhu, Mu Li, Jiaya Jia
25. Adversarial Token Attacks on Vision Transformers (08 Oct 2021)
    Ameya Joshi, Gauri Jagatap, C. Hegde [ViT]
26. Adversarial Robustness Comparison of Vision Transformer and MLP-Mixer to CNNs (06 Oct 2021)
    Philipp Benz, Soomin Ham, Chaoning Zhang, Adil Karjauv, In So Kweon [AAML, ViT]
27. Do Vision Transformers See Like Convolutional Neural Networks? (19 Aug 2021)
    M. Raghu, Thomas Unterthiner, Simon Kornblith, Chiyuan Zhang, Alexey Dosovitskiy [ViT]
28. Exploring Corruption Robustness: Inductive Biases in Vision Transformers and MLP-Mixers (24 Jun 2021)
    Katelyn Morrison, B. Gilby, Colton Lipchak, Adam Mattioli, Adriana Kovashka [ViT]
29. Rethinking Architecture Design for Tackling Data Heterogeneity in Federated Learning (10 Jun 2021)
    Liangqiong Qu, Yuyin Zhou, Paul Pu Liang, Yingda Xia, Feifei Wang, Ehsan Adeli, L. Fei-Fei, D. Rubin [FedML, AI4CE]
30. Reveal of Vision Transformers Robustness against Adversarial Attacks (07 Jun 2021)
    Ahmed Aldahdooh, W. Hamidouche, Olivier Déforges [ViT]
31. Intriguing Properties of Vision Transformers (21 May 2021)
    Muzammal Naseer, Kanchana Ranasinghe, Salman Khan, Munawar Hayat, Fahad Shahbaz Khan, Ming-Hsuan Yang [ViT]
32. Vision Transformers are Robust Learners (17 May 2021)
    Sayak Paul, Pin-Yu Chen [ViT]
33. MLP-Mixer: An all-MLP Architecture for Vision (04 May 2021)
    Ilya O. Tolstikhin, N. Houlsby, Alexander Kolesnikov, Lucas Beyer, Xiaohua Zhai, ..., Andreas Steiner, Daniel Keysers, Jakob Uszkoreit, Mario Lucic, Alexey Dosovitskiy
34. Transformers in Vision: A Survey (04 Jan 2021)
    Salman Khan, Muzammal Naseer, Munawar Hayat, Syed Waqas Zamir, Fahad Shahbaz Khan, M. Shah [ViT]
35. Scaling Laws for Neural Language Models (23 Jan 2020)
    Jared Kaplan, Sam McCandlish, T. Henighan, Tom B. Brown, B. Chess, R. Child, Scott Gray, Alec Radford, Jeff Wu, Dario Amodei
36. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications (17 Apr 2017)
    Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, M. Andreetto, Hartwig Adam [3DH]