Combating Adversarial Misspellings with Robust Word Recognition
arXiv:1905.11268
27 May 2019
Danish Pruthi, Bhuwan Dhingra, Zachary Chase Lipton

Papers citing "Combating Adversarial Misspellings with Robust Word Recognition"

50 / 163 papers shown
Bypassing Prompt Injection and Jailbreak Detection in LLM Guardrails
William Hackett, Lewis Birch, Stefan Trawicki, N. Suri, Peter Garraghan
15 Apr 2025

Exploring Gradient-Guided Masked Language Model to Detect Textual Adversarial Attacks
Xiaomei Zhang, Zhaoxi Zhang, Yanjun Zhang, Xufei Zheng, L. Zhang, Shengshan Hu, Shirui Pan
Tags: AAML
08 Apr 2025

Tougher Text, Smarter Models: Raising the Bar for Adversarial Defence Benchmarks
Yang Wang, Chenghua Lin
Tags: ELM
05 Jan 2025

Are Language Models Agnostic to Linguistically Grounded Perturbations? A Case Study of Indic Languages
Poulami Ghosh, Raj Dabre, Pushpak Bhattacharyya
Tags: AAML
14 Dec 2024

NMT-Obfuscator Attack: Ignore a sentence in translation with only one word
Sahar Sadrizadeh, César Descalzo, Ljiljana Dolamic, P. Frossard
Tags: AAML
19 Nov 2024

IAE: Irony-based Adversarial Examples for Sentiment Analysis Systems
Xiaoyin Yi, Jiacheng Huang
Tags: AAML
12 Nov 2024

Faithfulness and the Notion of Adversarial Sensitivity in NLP Explanations
Supriya Manna, Niladri Sett
Tags: AAML
26 Sep 2024

Reducing and Exploiting Data Augmentation Noise through Meta Reweighting Contrastive Learning for Text Classification
Guanyi Mou, Yichuan Li, Kyumin Lee
26 Sep 2024

An Effective, Robust and Fairness-aware Hate Speech Detection Framework
Guanyi Mou, Kyumin Lee
25 Sep 2024

Contextual Breach: Assessing the Robustness of Transformer-based QA Models
Asir Saadat, Nahian Ibn Asad, Md Farhan Ishmam
Tags: AAML
17 Sep 2024

Adversarial Evasion Attack Efficiency against Large Language Models
João Vitorino, Eva Maia, Isabel Praça
Tags: AAML
12 Jun 2024

Are AI-Generated Text Detectors Robust to Adversarial Perturbations?
Guanhua Huang, Yuchen Zhang, Zhe Li, Yongjian You, Mingze Wang, Zhouwang Yang
Tags: DeLMO
03 Jun 2024

Revisiting character-level adversarial attacks
Elias Abad Rocamora, Yongtao Wu, Fanghui Liu, Grigorios G. Chrysos, V. Cevher
Tags: AAML
07 May 2024

Adversarial Attacks and Defense for Conversation Entailment Task
Zhenning Yang, Ryan Krawec, Liang-Yuan Wu
Tags: AAML, SILM
01 May 2024

DENOISER: Rethinking the Robustness for Open-Vocabulary Action Recognition
Haozhe Cheng, Chen Ju, Haicheng Wang, Jinxiang Liu, Mengting Chen, Qiang Hu, Xiaoyun Zhang, Yanfeng Wang
Tags: DiffM, VLM
23 Apr 2024

Towards Robust Domain Generation Algorithm Classification
Arthur Drichel, Marc Meyer, Ulrike Meyer
Tags: AAML
09 Apr 2024

Multi-granular Adversarial Attacks against Black-box Neural Ranking Models
Yuansan Liu, Ruqing Zhang, J. Guo, Maarten de Rijke, Yixing Fan, Xueqi Cheng
Tags: AAML
02 Apr 2024

Adversarial Math Word Problem Generation
Roy Xie, Chengxuan Huang, Junlin Wang, Bhuwan Dhingra
Tags: AAML
27 Feb 2024

A Trembling House of Cards? Mapping Adversarial Attacks against Language Agents
Lingbo Mo, Zeyi Liao, Boyuan Zheng, Yu-Chuan Su, Chaowei Xiao, Huan Sun
Tags: AAML, LLMAG
15 Feb 2024

Learning High-Quality and General-Purpose Phrase Representations
Lihu Chen, Gaël Varoquaux, Fabian M. Suchanek
18 Jan 2024

Perturbed examples reveal invariances shared by language models
Ruchit Rawal, Mariya Toneva
Tags: AAML
07 Nov 2023

Do LLMs exhibit human-like response biases? A case study in survey design
Lindia Tjuatja, Valerie Chen, Sherry Tongshuang Wu, Ameet Talwalkar, Graham Neubig
07 Nov 2023

Finite-context Indexing of Restricted Output Space for NLP Models Facing Noisy Input
Minh Nguyen, Nancy F. Chen
21 Oct 2023

Toward Stronger Textual Attack Detectors
Pierre Colombo, Marine Picot, Nathan Noiry, Guillaume Staerman, Pablo Piantanida
21 Oct 2023

SmoothLLM: Defending Large Language Models Against Jailbreaking Attacks
Alexander Robey, Eric Wong, Hamed Hassani, George J. Pappas
Tags: AAML
05 Oct 2023

What Learned Representations and Influence Functions Can Tell Us About Adversarial Examples
Shakila Mahjabin Tonni, Mark Dras
Tags: TDI, AAML, GAN
19 Sep 2023

Context-aware Adversarial Attack on Named Entity Recognition
Shuguang Chen, Leonardo Neves, Thamar Solorio
Tags: AAML
16 Sep 2023

A Classification-Guided Approach for Adversarial Attacks against Neural Machine Translation
Sahar Sadrizadeh, Ljiljana Dolamic, P. Frossard
Tags: AAML, SILM
29 Aug 2023

LEAP: Efficient and Automated Test Method for NLP Software
Ming-Ming Xiao, Yan Xiao, Hai Dong, Shunhui Ji, Pengcheng Zhang
Tags: AAML
22 Aug 2023

Compensating Removed Frequency Components: Thwarting Voice Spectrum Reduction Attacks
Shu Wang, Kun Sun, Qi Li
Tags: AAML
18 Aug 2023

Text-CRS: A Generalized Certified Robustness Framework against Textual Adversarial Attacks
Xinyu Zhang, Hanbin Hong, Yuan Hong, Peng Huang, Binghui Wang, Zhongjie Ba, Kui Ren
Tags: SILM
31 Jul 2023

LEA: Improving Sentence Similarity Robustness to Typos Using Lexical Attention Bias
Mario Almagro, Emilio Almazán, Diego Ortego, David Jiménez
06 Jul 2023

SCAT: Robust Self-supervised Contrastive Learning via Adversarial Training for Text Classification
J. Wu, Dit-Yan Yeung
Tags: SILM
04 Jul 2023

Sample Attackability in Natural Language Adversarial Attacks
Vyas Raina, Mark J. F. Gales
Tags: SILM
21 Jun 2023

Evaluating the Robustness of Text-to-image Diffusion Models against Real-world Attacks
Hongcheng Gao, Hao Zhang, Yinpeng Dong, Zhijie Deng
Tags: AAML
16 Jun 2023

Expanding Scope: Adapting English Adversarial Attacks to Chinese
Hanyu Liu, Chengyuan Cai, Yanjun Qi
Tags: AAML
08 Jun 2023

PromptAttack: Probing Dialogue State Trackers with Adversarial Prompts
Xiangjue Dong, Yun He, Ziwei Zhu, James Caverlee
Tags: AAML
07 Jun 2023

VoteTRANS: Detecting Adversarial Text without Training by Voting on Hard Labels of Transformations
Hoang-Quoc Nguyen-Son, Seira Hidano, Kazuhide Fukushima, S. Kiyomoto, Isao Echizen
02 Jun 2023

From Adversarial Arms Race to Model-centric Evaluation: Motivating a Unified Automatic Robustness Evaluation Framework
Yangyi Chen, Hongcheng Gao, Ganqu Cui, Lifan Yuan, Dehan Kong, ..., Longtao Huang, H. Xue, Zhiyuan Liu, Maosong Sun, Heng Ji
Tags: AAML, ELM
29 May 2023

How do humans perceive adversarial text? A reality check on the validity and naturalness of word-based adversarial attacks
Salijona Dyrmishi, Salah Ghamizi, Maxime Cordy
Tags: AAML
24 May 2023

Another Dead End for Morphological Tags? Perturbed Inputs and Parsing
Alberto Muñoz-Ortiz, David Vilares
24 May 2023

Adversarial Demonstration Attacks on Large Language Models
Jiong Wang, Zi-yang Liu, Keun Hee Park, Zhuojun Jiang, Zhaoheng Zheng, Zhuofeng Wu, Muhao Chen, Chaowei Xiao
Tags: SILM
24 May 2023

The Best Defense is Attack: Repairing Semantics in Textual Adversarial Examples
Heng Yang, Ke Li
Tags: AAML
06 May 2023

Masked Language Model Based Textual Adversarial Example Detection
Xiaomei Zhang, Zhaoxi Zhang, Qi Zhong, Xufei Zheng, Yanjun Zhang, Shengshan Hu, L. Zhang
Tags: AAML
18 Apr 2023

Model-tuning Via Prompts Makes NLP Models Adversarially Robust
Mrigank Raman, Pratyush Maini, J. Zico Kolter, Zachary Chase Lipton, Danish Pruthi
Tags: AAML
13 Mar 2023

Learning the Legibility of Visual Text Perturbations
D. Seth, Rickard Stureborg, Danish Pruthi, Bhuwan Dhingra
Tags: AAML
09 Mar 2023

CitySpec with Shield: A Secure Intelligent Assistant for Requirement Formalization
Zirong Chen, Issa Li, Haoxiang Zhang, S. Preum, John A. Stankovic, Meiyi Ma
Tags: AI4TS
19 Feb 2023

RETVec: Resilient and Efficient Text Vectorizer
Elie Bursztein, Marina Zhang, Owen Vallis, Xinyu Jia, Alexey Kurakin
Tags: VLM
18 Feb 2023

READIN: A Chinese Multi-Task Benchmark with Realistic and Diverse Input Noises
Chenglei Si, Zhengyan Zhang, Yingfa Chen, Xiaozhi Wang, Zhiyuan Liu, Maosong Sun
Tags: AAML
14 Feb 2023

TextDefense: Adversarial Text Detection based on Word Importance Entropy
Lujia Shen, Xuhong Zhang, S. Ji, Yuwen Pu, Chunpeng Ge, Xing Yang, Yanghe Feng
Tags: AAML
12 Feb 2023