Combating Adversarial Misspellings with Robust Word Recognition

27 May 2019
Danish Pruthi, Bhuwan Dhingra, Zachary Chase Lipton

Papers citing "Combating Adversarial Misspellings with Robust Word Recognition"

Showing 50 of 163 citing papers.
• Less is More: Understanding Word-level Textual Adversarial Attack via n-gram Frequency Descend
  Ning Lu, Shengcai Liu, Zhirui Zhang, Qi Wang, Haifeng Liu, Jiaheng Zhang
  AAML · 06 Feb 2023
• On the Security Vulnerabilities of Text-to-SQL Models
  Xutan Peng, Yipeng Zhang, Jingfeng Yang, Mark Stevenson
  SILM · 28 Nov 2022
• Preserving Semantics in Textual Adversarial Attacks
  David Herel, Hugo Cisneros, Tomáš Mikolov
  AAML · 08 Nov 2022
• Rickrolling the Artist: Injecting Backdoors into Text Encoders for Text-to-Image Synthesis
  Lukas Struppek, Dominik Hintersdorf, Kristian Kersting
  SILM · 04 Nov 2022
• TASA: Deceiving Question Answering Models by Twin Answer Sentences Attack
  Yu Cao, Dianqi Li, Meng Fang, Dinesh Manocha, Jun Gao, Yibing Zhan, Dacheng Tao
  AAML · 27 Oct 2022
• ADDMU: Detection of Far-Boundary Adversarial Examples with Data and Model Uncertainty Estimation
  Fan Yin, Yao Li, Cho-Jui Hsieh, Kai-Wei Chang
  AAML · 22 Oct 2022
• TCAB: A Large-Scale Text Classification Attack Benchmark
  Kalyani Asthana, Zhouhang Xie, Wencong You, Adam Noack, Jonathan Brophy, Sameer Singh, Daniel Lowd
  21 Oct 2022
• Why Should Adversarial Perturbations be Imperceptible? Rethink the Research Paradigm in Adversarial NLP
  Yangyi Chen, Hongcheng Gao, Ganqu Cui, Fanchao Qi, Longtao Huang, Zhiyuan Liu, Maosong Sun
  SILM · 19 Oct 2022
• Probabilistic Categorical Adversarial Attack & Adversarial Training
  Han Xu, Pengfei He, Jie Ren, Yuxuan Wan, Zitao Liu, Hui Liu, Jiliang Tang
  AAML, SILM · 17 Oct 2022
• A.I. Robustness: a Human-Centered Perspective on Technological Challenges and Opportunities
  Andrea Tocchetti, Lorenzo Corti, Agathe Balayn, Mireia Yurrita, Philip Lippmann, Marco Brambilla, Jie-jin Yang
  17 Oct 2022
• HashFormers: Towards Vocabulary-independent Pre-trained Transformers
  Huiyin Xue, Nikolaos Aletras
  14 Oct 2022
• Analogy Generation by Prompting Large Language Models: A Case Study of InstructGPT
  B. Bhavya, Jinjun Xiong, Chengxiang Zhai
  LRM · 09 Oct 2022
• MockingBERT: A Method for Retroactively Adding Resilience to NLP Models
  Jan Jezabek, A. Singh
  SILM, KELM · 21 Aug 2022
• Rethinking Textual Adversarial Defense for Pre-trained Language Models
  Jiayi Wang, Rongzhou Bao, Zhuosheng Zhang, Hai Zhao
  AAML, SILM · 21 Jul 2022
• Language Modelling with Pixels
  Phillip Rust, Jonas F. Lotz, Emanuele Bugliarello, Elizabeth Salesky, Miryam de Lhoneux, Desmond Elliott
  VLM · 14 Jul 2022
• The Fallacy of AI Functionality
  Inioluwa Deborah Raji, Indra Elizabeth Kumar, Aaron Horowitz, Andrew D. Selbst
  20 Jun 2022
• Adversarial Text Normalization
  Joanna Bitton, Maya Pavlova, Ivan Evtimov
  AAML · 08 Jun 2022
• CodeAttack: Code-Based Adversarial Attacks for Pre-trained Programming Language Models
  Akshita Jha, Chandan K. Reddy
  SILM, ELM, AAML · 31 May 2022
• Certified Robustness Against Natural Language Attacks by Causal Intervention
  Haiteng Zhao, Chang Ma, Xinshuai Dong, A. Luu, Zhi-Hong Deng, Hanwang Zhang
  AAML · 24 May 2022
• Phrase-level Textual Adversarial Attack with Label Preservation
  Yibin Lei, Yu Cao, Dianqi Li, Dinesh Manocha, Meng Fang, Mykola Pechenizkiy
  AAML · 22 May 2022
• Detecting Textual Adversarial Examples Based on Distributional Characteristics of Data Representations
  Na Liu, Mark Dras, Wei Emma Zhang
  AAML · 29 Apr 2022
• "That Is a Suspicious Reaction!": Interpreting Logits Variation to Detect NLP Adversarial Attacks
  Edoardo Mosca, Shreyash Agarwal, Javier Rando, Georg Groh
  AAML · 10 Apr 2022
• Understanding, Detecting, and Separating Out-of-Distribution Samples and Adversarial Samples in Text Classification
  Cheng-Han Chiang, Hung-yi Lee
  OODD · 09 Apr 2022
• Input-specific Attention Subnetworks for Adversarial Detection
  Emil Biju, Anirudh Sriram, Pratyush Kumar, Mitesh M. Khapra
  AAML · 23 Mar 2022
• Towards Explainable Evaluation Metrics for Natural Language Generation
  Christoph Leiter, Piyawat Lertvittayakumjorn, M. Fomicheva, Wei-Ye Zhao, Yang Gao, Steffen Eger
  AAML, ELM · 21 Mar 2022
• A Prompting-based Approach for Adversarial Example Generation and Robustness Enhancement
  Yuting Yang, Pei Huang, Juan Cao, Jintao Li, Yun Lin, Jin Song Dong, Feifei Ma, Jian Zhang
  AAML, SILM · 21 Mar 2022
• On Robust Prefix-Tuning for Text Classification
  Zonghan Yang, Yang Liu
  VLM · 19 Mar 2022
• Perturbations in the Wild: Leveraging Human-Written Text Perturbations for Realistic Adversarial Attack and Defense
  Thai Le, Jooyoung Lee, Kevin Yen, Yifan Hu, Dongwon Lee
  AAML · 19 Mar 2022
• Distinguishing Non-natural from Natural Adversarial Samples for More Robust Pre-trained Language Model
  Jiayi Wang, Rongzhou Bao, Zhuosheng Zhang, Hai Zhao
  AAML · 19 Mar 2022
• Imputing Out-of-Vocabulary Embeddings with LOVE Makes Language Models Robust with Little Cost
  Lihu Chen, Gaël Varoquaux, Fabian M. Suchanek
  15 Mar 2022
• Generalized but not Robust? Comparing the Effects of Data Modification Methods on Out-of-Domain Generalization and Adversarial Robustness
  Tejas Gokhale, Swaroop Mishra, Man Luo, Bhavdeep Singh Sachdeva, Chitta Baral
  15 Mar 2022
• A Survey of Adversarial Defences and Robustness in NLP
  Shreyansh Goyal, Sumanth Doddapaneni, Mitesh M. Khapra, B. Ravindran
  AAML · 12 Mar 2022
• Detection of Word Adversarial Examples in Text Classification: Benchmark and Baseline via Robust Density Estimation
  Kiyoon Yoo, Jangho Kim, Jiho Jang, Nojun Kwak
  03 Mar 2022
• Robust Textual Embedding against Word-level Adversarial Attacks
  Yichen Yang, Xiaosen Wang, Kun He
  AAML · 28 Feb 2022
• White-Box Attacks on Hate-speech BERT Classifiers in German with Explicit and Implicit Character Level Defense
  Shahrukh Khan, Mahnoor Shahid, Navdeeppal Singh
  AAML · 11 Feb 2022
• Constrained Optimization with Dynamic Bound-scaling for Effective NLP Backdoor Defense
  Guangyu Shen, Yingqi Liu, Guanhong Tao, Qiuling Xu, Zhuo Zhang, Shengwei An, Shiqing Ma, Xinming Zhang
  AAML · 11 Feb 2022
• Hindi/Bengali Sentiment Analysis Using Transfer Learning and Joint Dual Input Learning with Self Attention
  Shahrukh Khan, Mahnoor Shahid
  11 Feb 2022
• On The Empirical Effectiveness of Unrealistic Adversarial Hardening Against Realistic Adversarial Attacks
  Salijona Dyrmishi, Salah Ghamizi, Thibault Simonetto, Yves Le Traon, Maxime Cordy
  AAML · 07 Feb 2022
• An Assessment of the Impact of OCR Noise on Language Models
  Konstantin Todorov, Giovanni Colavizza
  26 Jan 2022
• Identifying Adversarial Attacks on Text Classifiers
  Zhouhang Xie, Jonathan Brophy, Adam Noack, Wencong You, Kalyani Asthana, Carter Perkins, Sabrina Reis, Sameer Singh, Daniel Lowd
  AAML · 21 Jan 2022
• An Adversarial Benchmark for Fake News Detection Models
  Lorenzo Jaime Yu Flores, Sophie Hao
  03 Jan 2022
• Robust Natural Language Processing: Recent Advances, Challenges, and Future Directions
  Marwan Omar, Soohyeon Choi, Daehun Nyang, David A. Mohaisen
  03 Jan 2022
• Repairing Adversarial Texts through Perturbation
  Guoliang Dong, Jingyi Wang, Jun Sun, Sudipta Chattopadhyay, Xinyu Wang, Ting Dai, Jie Shi, J. Dong
  AAML · 29 Dec 2021
• How Should Pre-Trained Language Models Be Fine-Tuned Towards Adversarial Robustness?
  Xinshuai Dong, Anh Tuan Luu, Min Lin, Shuicheng Yan, Hanwang Zhang
  SILM, AAML · 22 Dec 2021
• Measure and Improve Robustness in NLP Models: A Survey
  Xuezhi Wang, Haohan Wang, Diyi Yang
  15 Dec 2021
• Adversarial Attacks Against Deep Generative Models on Data: A Survey
  Hui Sun, Tianqing Zhu, Zhiqiu Zhang, Dawei Jin, Wanlei Zhou
  AAML · 01 Dec 2021
• Triggerless Backdoor Attack for NLP Tasks with Clean Labels
  Leilei Gan, Jiwei Li, Tianwei Zhang, Xiaoya Li, Yuxian Meng, Fei Wu, Yi Yang, Shangwei Guo, Chun Fan
  AAML, SILM · 15 Nov 2021
• Understanding the Generalization Benefit of Model Invariance from a Data Perspective
  Sicheng Zhu, Bang An, Furong Huang
  10 Nov 2021
• Effective and Imperceptible Adversarial Textual Attack via Multi-objectivization
  Shengcai Liu, Ning Lu, W. Hong, Chao Qian, Ke Tang
  AAML · 02 Nov 2021
• Adversarial Attacks and Defenses for Social Network Text Processing Applications: Techniques, Challenges and Future Research Directions
  I. Alsmadi, Kashif Ahmad, Mahmoud Nazzal, Firoj Alam, Ala I. Al-Fuqaha, Abdallah Khreishah, A. Algosaibi
  AAML · 26 Oct 2021