From Hero to Zéroe: A Benchmark of Low-Level Adversarial Attacks
12 October 2020
Steffen Eger, Yannik Benz
AAML

Papers citing "From Hero to Zéroe: A Benchmark of Low-Level Adversarial Attacks"

28 papers shown

FLUKE: A Linguistically-Driven and Task-Agnostic Framework for Robustness Evaluation
Yulia Otmakhova, Hung Thinh Truong, Rahmad Mahendra, Zenan Zhai, Rongxin Zhu, Daniel Beck, Jey Han Lau
ELM · 24 Apr 2025

Pay Attention to Real World Perturbations! Natural Robustness Evaluation in Machine Reading Comprehension
Yulong Wu, Viktor Schlegel, Riza Batista-Navarro
AAML · 23 Feb 2025

CERT-ED: Certifiably Robust Text Classification for Edit Distance
Zhuoqun Huang, Yipeng Wang, Seunghee Shin, Benjamin I. P. Rubinstein
AAML · 01 Aug 2024

Defensive Unlearning with Adversarial Training for Robust Concept Erasure in Diffusion Models
Yimeng Zhang, Xin Chen, Jinghan Jia, Yihua Zhang, Chongyu Fan, Jiancheng Liu, Mingyi Hong, Ke Ding, Sijia Liu
DiffM · 24 May 2024

Typos that Broke the RAG's Back: Genetic Attack on RAG Pipeline by Simulating Documents in the Wild via Low-level Perturbations
Sukmin Cho, Soyeong Jeong, Jeongyeon Seo, Taeho Hwang, Jong C. Park
SILM, AAML · 22 Apr 2024

SemRoDe: Macro Adversarial Training to Learn Representations That are Robust to Word-Level Attacks
Brian Formento, Wenjie Feng, Chuan-Sheng Foo, Anh Tuan Luu, See-Kiong Ng
AAML · 27 Mar 2024

ChatGPT Incorrectness Detection in Software Reviews
M. Tanzil, Junaed Younus Khan, Gias Uddin
25 Mar 2024

Pairing Orthographically Variant Literary Words to Standard Equivalents Using Neural Edit Distance Models
Craig Messner, Tom Lippincott
26 Jan 2024

Investigating Training Strategies and Model Robustness of Low-Rank Adaptation for Language Modeling in Speech Recognition
Yu Yu, Chao-Han Huck Yang, Tuan Dinh, Sungho Ryu, J. Kolehmainen, ..., Ankur Gandhe, Ariya Rastrow, Jia Xu, I. Bulyko, A. Stolcke
19 Jan 2024

PIXAR: Auto-Regressive Language Modeling in Pixel Space
Yintao Tai, Xiyang Liao, Alessandro Suglia, Antonio Vergari
MLLM · 06 Jan 2024

Break it, Imitate it, Fix it: Robustness by Generating Human-Like Attacks
Aradhana Sinha, Ananth Balashankar, Ahmad Beirami, Thi Avrahami, Jilin Chen, Alex Beutel
AAML · 25 Oct 2023

To Generate or Not? Safety-Driven Unlearned Diffusion Models Are Still Easy To Generate Unsafe Images ... For Now
Yimeng Zhang, Jinghan Jia, Xin Chen, Aochuan Chen, Yihua Zhang, Jiancheng Liu, Ke Ding, Sijia Liu
DiffM · 18 Oct 2023

Evaluating the Robustness of Text-to-image Diffusion Models against Real-world Attacks
Hongcheng Gao, Hao Zhang, Yinpeng Dong, Zhijie Deng
AAML · 16 Jun 2023

From Adversarial Arms Race to Model-centric Evaluation: Motivating a Unified Automatic Robustness Evaluation Framework
Yangyi Chen, Hongcheng Gao, Ganqu Cui, Lifan Yuan, Dehan Kong, ..., Longtao Huang, H. Xue, Zhiyuan Liu, Maosong Sun, Heng Ji
AAML, ELM · 29 May 2023

Byte-Level Grammatical Error Correction Using Synthetic and Curated Corpora
Svanhvít Lilja Ingólfsdóttir, Pétur Orri Ragnarsson, H. Jónsson, Haukur Barri Símonarson, Vilhjálmur Þorsteinsson, Vésteinn Snæbjarnarson
SyDa · 29 May 2023

Learning the Legibility of Visual Text Perturbations
D. Seth, Rickard Stureborg, Danish Pruthi, Bhuwan Dhingra
AAML · 09 Mar 2023

Character-level White-Box Adversarial Attacks against Transformers via Attachable Subwords Substitution
Aiwei Liu, Honghai Yu, Xuming Hu, Shuang Li, Li Lin, Fukun Ma, Yawen Yang, Lijie Wen
31 Oct 2022

ROSE: Robust Selective Fine-tuning for Pre-trained Language Models
Lan Jiang, Hao Zhou, Yankai Lin, Peng Li, Jie Zhou, R. Jiang
AAML · 18 Oct 2022

Layer or Representation Space: What makes BERT-based Evaluation Metrics Robust?
Doan Nam Long Vu, N. Moosavi, Steffen Eger
06 Sep 2022

Language Modelling with Pixels
Phillip Rust, Jonas F. Lotz, Emanuele Bugliarello, Elizabeth Salesky, Miryam de Lhoneux, Desmond Elliott
VLM · 14 Jul 2022

Adversarial Text Normalization
Joanna Bitton, Maya Pavlova, Ivan Evtimov
AAML · 08 Jun 2022

Exposing Fine-Grained Adversarial Vulnerability of Face Anti-Spoofing Models
Songlin Yang, Wei Wang, Chenye Xu, Ziwen He, Bo Peng, Jing Dong
AAML, CVBM · 30 May 2022

The Risks of Machine Learning Systems
Samson Tan, Araz Taeihagh, K. Baxter
21 Apr 2022

Towards Explainable Evaluation Metrics for Natural Language Generation
Christoph Leiter, Piyawat Lertvittayakumjorn, M. Fomicheva, Wei Zhao, Yang Gao, Steffen Eger
AAML, ELM · 21 Mar 2022

NL-Augmenter: A Framework for Task-Sensitive Natural Language Augmentation
Kaustubh D. Dhole, Varun Gangal, Sebastian Gehrmann, Aadesh Gupta, Zhenhao Li, ..., Tianbao Xie, Usama Yaseen, Michael A. Yee, Jing Zhang, Yue Zhang
06 Dec 2021

BERT-Defense: A Probabilistic Model Based on BERT to Combat Cognitively Inspired Orthographic Adversarial Attacks
Yannik Keller, J. Mackensen, Steffen Eger
AAML · 02 Jun 2021

Reliability Testing for Natural Language Processing Systems
Samson Tan, Shafiq Joty, K. Baxter, Araz Taeihagh, G. Bennett, Min-Yen Kan
06 May 2021

Token-Modification Adversarial Attacks for Natural Language Processing: A Survey
Tom Roth, Yansong Gao, A. Abuadbba, Surya Nepal, Wei Liu
AAML · 01 Mar 2021