ResearchTrend.AI

Vision Transformers are Robust Learners
arXiv: 2105.07581 · 17 May 2021
Sayak Paul, Pin-Yu Chen
Topic: ViT

Papers citing "Vision Transformers are Robust Learners"

32 / 82 papers shown
Title | Authors | Topics | Citations | Date
Assaying Out-Of-Distribution Generalization in Transfer Learning | F. Wenzel, Andrea Dittadi, Peter V. Gehler, Carl-Johann Simon-Gabriel, Max Horn, ..., Chris Russell, Thomas Brox, Bernt Schiele, Bernhard Schölkopf, Francesco Locatello | OOD, OODD, AAML | 71 | 19 Jul 2022
Multimodal Learning with Transformers: A Survey | P. Xu, Xiatian Zhu, David A. Clifton | ViT | 527 | 13 Jun 2022
Squeeze Training for Adversarial Robustness | Qizhang Li, Yiwen Guo, W. Zuo, Hao Chen | OOD | 9 | 23 May 2022
Simpler is Better: off-the-shelf Continual Learning Through Pretrained Backbones | Francesco Pelosin | VLM | 11 | 03 May 2022
Deeper Insights into the Robustness of ViTs towards Common Corruptions | Rui Tian, Zuxuan Wu, Qi Dai, Han Hu, Yu-Gang Jiang | ViT, AAML | 4 | 26 Apr 2022
Revisiting the Adversarial Robustness-Accuracy Tradeoff in Robot Learning | Mathias Lechner, Alexander Amini, Daniela Rus, T. Henzinger | AAML | 9 | 15 Apr 2022
3D Shuffle-Mixer: An Efficient Context-Aware Vision Learner of Transformer-MLP Paradigm for Dense Prediction in Medical Volume | Jianye Pang, Cheng Jiang, Yihao Chen, Jianbo Chang, M. Feng, Renzhi Wang, Jianhua Yao | ViT, MedIm | 11 | 14 Apr 2022
Scaling Up Your Kernels to 31x31: Revisiting Large Kernel Design in CNNs | Xiaohan Ding, Xinming Zhang, Yi Zhou, Jungong Han, Guiguang Ding, Jian Sun | VLM | 528 | 13 Mar 2022
How Do Vision Transformers Work? | Namuk Park, Songkuk Kim | ViT | 465 | 14 Feb 2022
How to Understand Masked Autoencoders | Shuhao Cao, Peng-Tao Xu, David A. Clifton | — | 40 | 08 Feb 2022
Transformers in Self-Supervised Monocular Depth Estimation with Unknown Camera Intrinsics | Arnav Varma, Hemang Chawla, Bahram Zonooz, Elahe Arani | ViT, MDE | 49 | 07 Feb 2022
Architecture Matters in Continual Learning | Seyed Iman Mirzadeh, Arslan Chaudhry, Dong Yin, Timothy Nguyen, Razvan Pascanu, Dilan Görür, Mehrdad Farajtabar | OOD, KELM | 58 | 01 Feb 2022
Video Transformers: A Survey | Javier Selva, A. S. Johansen, Sergio Escalera, Kamal Nasrollahi, T. Moeslund, Albert Clapés | ViT | 103 | 16 Jan 2022
Decision-based Black-box Attack Against Vision Transformers via Patch-wise Adversarial Removal | Yucheng Shi, Yahong Han, Yu-an Tan, Xiaohui Kuang | — | 30 | 07 Dec 2021
DAFormer: Improving Network Architectures and Training Strategies for Domain-Adaptive Semantic Segmentation | Lukas Hoyer, Dengxin Dai, Luc Van Gool | AI4CE | 450 | 29 Nov 2021
Are Vision Transformers Robust to Patch Perturbations? | Jindong Gu, Volker Tresp, Yao Qin | AAML, ViT | 60 | 20 Nov 2021
Discrete Representations Strengthen Vision Transformer Robustness | Chengzhi Mao, Lu Jiang, Mostafa Dehghani, Carl Vondrick, Rahul Sukthankar, Irfan Essa | ViT | 43 | 20 Nov 2021
Are Transformers More Robust Than CNNs? | Yutong Bai, Jieru Mei, Alan Yuille, Cihang Xie | ViT, AAML | 257 | 10 Nov 2021
Adversarial Token Attacks on Vision Transformers | Ameya Joshi, Gauri Jagatap, C. Hegde | ViT | 19 | 08 Oct 2021
Adversarial Robustness Comparison of Vision Transformer and MLP-Mixer to CNNs | Philipp Benz, Soomin Ham, Chaoning Zhang, Adil Karjauv, In So Kweon | AAML, ViT | 78 | 06 Oct 2021
Disrupting Adversarial Transferability in Deep Neural Networks | Christopher Wiedeman, Ge Wang | AAML | 8 | 27 Aug 2021
PatchCleanser: Certifiably Robust Defense against Adversarial Patches for Any Image Classifier | Chong Xiang, Saeed Mahloujifar, Prateek Mittal | VLM, AAML | 73 | 20 Aug 2021
Do Vision Transformers See Like Convolutional Neural Networks? | M. Raghu, Thomas Unterthiner, Simon Kornblith, Chiyuan Zhang, Alexey Dosovitskiy | ViT | 924 | 19 Aug 2021
A Simple Fix to Mahalanobis Distance for Improving Near-OOD Detection | Jie Jessie Ren, Stanislav Fort, J. Liu, Abhijit Guha Roy, Shreyas Padhy, Balaji Lakshminarayanan | UQCV | 216 | 16 Jun 2021
Delving Deep into the Generalization of Vision Transformers under Distribution Shifts | Chongzhi Zhang, Mingyuan Zhang, Shanghang Zhang, Daisheng Jin, Qiang-feng Zhou, Zhongang Cai, Haiyu Zhao, Xianglong Liu, Ziwei Liu | — | 102 | 14 Jun 2021
Rethinking Architecture Design for Tackling Data Heterogeneity in Federated Learning | Liangqiong Qu, Yuyin Zhou, Paul Pu Liang, Yingda Xia, Feifei Wang, Ehsan Adeli, L. Fei-Fei, D. Rubin | FedML, AI4CE | 174 | 10 Jun 2021
Reveal of Vision Transformers Robustness against Adversarial Attacks | Ahmed Aldahdooh, W. Hamidouche, Olivier Déforges | ViT | 56 | 07 Jun 2021
Intriguing Properties of Vision Transformers | Muzammal Naseer, Kanchana Ranasinghe, Salman Khan, Munawar Hayat, F. Khan, Ming-Hsuan Yang | ViT | 621 | 21 May 2021
Emerging Properties in Self-Supervised Vision Transformers | Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin | — | 5,785 | 29 Apr 2021
On the Adversarial Robustness of Vision Transformers | Rulin Shao, Zhouxing Shi, Jinfeng Yi, Pin-Yu Chen, Cho-Jui Hsieh | ViT | 137 | 29 Mar 2021
Instance adaptive adversarial training: Improved accuracy tradeoffs in neural nets | Yogesh Balaji, Tom Goldstein, Judy Hoffman | AAML | 103 | 17 Oct 2019
Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network | Wenzhe Shi, Jose Caballero, Ferenc Huszár, J. Totz, Andrew P. Aitken, Rob Bishop, Daniel Rueckert, Zehan Wang | SupR | 5,176 | 16 Sep 2016