BERT is Robust! A Case Against Synonym-Based Adversarial Examples in Text Classification

15 September 2021 · arXiv:2109.07403
J. Hauser, Zhao Meng, Damian Pascual, Roger Wattenhofer
Communities: OOD · SILM · AAML

Papers citing "BERT is Robust! A Case Against Synonym-Based Adversarial Examples in Text Classification"

10 / 10 papers shown
DSRM: Boost Textual Adversarial Training with Distribution Shift Risk Minimization
Songyang Gao, Shihan Dou, Yan Liu, Xiao Wang, Qi Zhang, Zhongyu Wei, Jin Ma, Yingchun Shan
OOD · 27 Jun 2023

Enhancing Robustness of AI Offensive Code Generators via Data Augmentation
Cristina Improta, Pietro Liguori, R. Natella, B. Cukic, Domenico Cotroneo
AAML · 08 Jun 2023

Exploiting Explainability to Design Adversarial Attacks and Evaluate Attack Resilience in Hate-Speech Detection Models
Pranath Reddy Kumbam, Sohaib Uddin Syed, Prashanth Thamminedi, S. Harish, Ian Perera, Bonnie J. Dorr
AAML · 29 May 2023

From Adversarial Arms Race to Model-centric Evaluation: Motivating a Unified Automatic Robustness Evaluation Framework
Yangyi Chen, Hongcheng Gao, Ganqu Cui, Lifan Yuan, Dehan Kong, ..., Longtao Huang, H. Xue, Zhiyuan Liu, Maosong Sun, Heng Ji
AAML · ELM · 29 May 2023

Can Large Language Models Be an Alternative to Human Evaluations?
Cheng-Han Chiang, Hung-yi Lee
ALM · LM&MA · 03 May 2023

Identifying Human Strategies for Generating Word-Level Adversarial Examples
Maximilian Mozes, Bennett Kleinberg, Lewis D. Griffin
AAML · 20 Oct 2022

Are Synonym Substitution Attacks Really Synonym Substitution Attacks?
Cheng-Han Chiang, Hung-yi Lee
AAML · 06 Oct 2022

The Dangers of Underclaiming: Reasons for Caution When Reporting How NLP Systems Fail
Sam Bowman
OffRL · 15 Oct 2021

Certified Robustness to Adversarial Word Substitutions
Robin Jia, Aditi Raghunathan, Kerem Göksel, Percy Liang
AAML · 03 Sep 2019

Generating Natural Language Adversarial Examples
M. Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani B. Srivastava, Kai-Wei Chang
AAML · 21 Apr 2018