arXiv 2311.17128 · Cited By
Vulnerability Analysis of Transformer-based Optical Character Recognition to Adversarial Attacks
28 November 2023
Lucas Beerens, D. Higham

Papers citing "Vulnerability Analysis of Transformer-based Optical Character Recognition to Adversarial Attacks" (18 of 18 papers shown)

Adversarial Ink: Componentwise Backward Error Attacks on Deep Learning
Lucas Beerens, D. Higham · AAML · 69 · 8 · 0 · 05 Jun 2023

Transformers in Speech Processing: A Survey
S. Latif, Aun Zaidi, Heriberto Cuayáhuitl, Fahad Shamshad, Moazzam Shoukat, Muhammad Usama, Junaid Qadir · 136 · 48 · 0 · 21 Mar 2023

TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models
Minghao Li, Tengchao Lv, Jingye Chen, Lei Cui, Yijuan Lu, D. Florêncio, Cha Zhang, Zhoujun Li, Furu Wei · ViT · 240 · 367 · 0 · 21 Sep 2021

Towards Transferable Adversarial Attacks on Vision Transformers
Zhipeng Wei, Jingjing Chen, Micah Goldblum, Zuxuan Wu, Tom Goldstein, Yu-Gang Jiang · ViT, AAML · 96 · 120 · 0 · 09 Sep 2021

Reveal of Vision Transformers Robustness against Adversarial Attacks
Ahmed Aldahdooh, W. Hamidouche, Olivier Déforges · ViT · 43 · 60 · 0 · 07 Jun 2021

Transformers in Vision: A Survey
Salman Khan, Muzammal Naseer, Munawar Hayat, Syed Waqas Zamir, Fahad Shahbaz Khan, M. Shah · ViT · 305 · 2,516 · 0 · 04 Jan 2021

Training data-efficient image transformers & distillation through attention
Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou · ViT · 389 · 6,793 · 0 · 23 Dec 2020

MiniLM: Deep Self-Attention Distillation for Task-Agnostic Compression of Pre-Trained Transformers
Wenhui Wang, Furu Wei, Li Dong, Hangbo Bao, Nan Yang, Ming Zhou · VLM · 156 · 1,267 · 0 · 25 Feb 2020

Handwritten Optical Character Recognition (OCR): A Comprehensive Systematic Literature Review (SLR)
Jamshed Memon, Maira Sami, Rizwan Ahmed Khan · 74 · 332 · 0 · 01 Jan 2020

Adversarial Attacks Against Automatic Speech Recognition Systems via Psychoacoustic Hiding
Lea Schonherr, Katharina Kohls, Steffen Zeiler, Thorsten Holz, D. Kolossa · AAML · 77 · 289 · 0 · 16 Aug 2018

On the Suitability of L_p-norms for Creating and Preventing Adversarial Examples
Mahmood Sharif, Lujo Bauer, Michael K. Reiter · AAML · 130 · 138 · 0 · 27 Feb 2018

ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks without Training Substitute Models
Pin-Yu Chen, Huan Zhang, Yash Sharma, Jinfeng Yi, Cho-Jui Hsieh · AAML · 83 · 1,882 · 0 · 14 Aug 2017

Towards Deep Learning Models Resistant to Adversarial Attacks
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu · SILM, OOD · 310 · 12,117 · 0 · 19 Jun 2017

Towards Evaluating the Robustness of Neural Networks
Nicholas Carlini, D. Wagner · OOD, AAML · 266 · 8,555 · 0 · 16 Aug 2016

Practical Black-Box Attacks against Machine Learning
Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, S. Jha, Z. Berkay Celik, A. Swami · MLAU, AAML · 75 · 3,678 · 0 · 08 Feb 2016

DeepFool: a simple and accurate method to fool deep neural networks
Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, P. Frossard · AAML · 151 · 4,897 · 0 · 14 Nov 2015

Explaining and Harnessing Adversarial Examples
Ian Goodfellow, Jonathon Shlens, Christian Szegedy · AAML, GAN · 280 · 19,066 · 0 · 20 Dec 2014

Intriguing properties of neural networks
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, D. Erhan, Ian Goodfellow, Rob Fergus · AAML · 277 · 14,927 · 1 · 21 Dec 2013