ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.


Vulnerability Analysis of Transformer-based Optical Character Recognition to Adversarial Attacks

28 November 2023
Lucas Beerens, D. Higham
arXiv:2311.17128 · abs / PDF / HTML

Papers citing "Vulnerability Analysis of Transformer-based Optical Character Recognition to Adversarial Attacks"

18 / 18 papers shown

  1. Adversarial Ink: Componentwise Backward Error Attacks on Deep Learning
     Lucas Beerens, D. Higham · AAML · 69 / 8 / 0 · 05 Jun 2023
  2. Transformers in Speech Processing: A Survey
     S. Latif, Aun Zaidi, Heriberto Cuayáhuitl, Fahad Shamshad, Moazzam Shoukat, Muhammad Usama, Junaid Qadir · 136 / 48 / 0 · 21 Mar 2023
  3. TrOCR: Transformer-based Optical Character Recognition with Pre-trained Models
     Minghao Li, Tengchao Lv, Jingye Chen, Lei Cui, Yijuan Lu, D. Florêncio, Cha Zhang, Zhoujun Li, Furu Wei · ViT · 240 / 367 / 0 · 21 Sep 2021
  4. Towards Transferable Adversarial Attacks on Vision Transformers
     Zhipeng Wei, Jingjing Chen, Micah Goldblum, Zuxuan Wu, Tom Goldstein, Yu-Gang Jiang · ViT, AAML · 96 / 120 / 0 · 09 Sep 2021
  5. Reveal of Vision Transformers Robustness against Adversarial Attacks
     Ahmed Aldahdooh, W. Hamidouche, Olivier Déforges · ViT · 43 / 60 / 0 · 07 Jun 2021
  6. Transformers in Vision: A Survey
     Salman Khan, Muzammal Naseer, Munawar Hayat, Syed Waqas Zamir, Fahad Shahbaz Khan, M. Shah · ViT · 305 / 2,516 / 0 · 04 Jan 2021
  7. Training data-efficient image transformers & distillation through attention
     Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Hervé Jégou · ViT · 387 / 6,768 / 0 · 23 Dec 2020
  8. MiniLM: Deep Self-Attention Distillation for Task-Agnostic Compression of Pre-Trained Transformers
     Wenhui Wang, Furu Wei, Li Dong, Hangbo Bao, Nan Yang, Ming Zhou · VLM · 156 / 1,267 / 0 · 25 Feb 2020
  9. Handwritten Optical Character Recognition (OCR): A Comprehensive Systematic Literature Review (SLR)
     Jamshed Memon, Maira Sami, Rizwan Ahmed Khan · 74 / 332 / 0 · 01 Jan 2020
  10. Adversarial Attacks Against Automatic Speech Recognition Systems via Psychoacoustic Hiding
     Lea Schonherr, Katharina Kohls, Steffen Zeiler, Thorsten Holz, D. Kolossa · AAML · 75 / 289 / 0 · 16 Aug 2018
  11. On the Suitability of $L_p$-norms for Creating and Preventing Adversarial Examples
     Mahmood Sharif, Lujo Bauer, Michael K. Reiter · AAML · 130 / 138 / 0 · 27 Feb 2018
  12. ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks without Training Substitute Models
     Pin-Yu Chen, Huan Zhang, Yash Sharma, Jinfeng Yi, Cho-Jui Hsieh · AAML · 80 / 1,882 / 0 · 14 Aug 2017
  13. Towards Deep Learning Models Resistant to Adversarial Attacks
     Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu · SILM, OOD · 310 / 12,069 / 0 · 19 Jun 2017
  14. Towards Evaluating the Robustness of Neural Networks
     Nicholas Carlini, D. Wagner · OOD, AAML · 266 / 8,555 / 0 · 16 Aug 2016
  15. Practical Black-Box Attacks against Machine Learning
     Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, S. Jha, Z. Berkay Celik, A. Swami · MLAU, AAML · 75 / 3,678 / 0 · 08 Feb 2016
  16. DeepFool: a simple and accurate method to fool deep neural networks
     Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, P. Frossard · AAML · 151 / 4,897 / 0 · 14 Nov 2015
  17. Explaining and Harnessing Adversarial Examples
     Ian Goodfellow, Jonathon Shlens, Christian Szegedy · AAML, GAN · 277 / 19,066 / 0 · 20 Dec 2014
  18. Intriguing properties of neural networks
     Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, D. Erhan, Ian Goodfellow, Rob Fergus · AAML · 277 / 14,927 / 1 · 21 Dec 2013