Towards Robustness Against Natural Language Word Substitutions (arXiv:2107.13541)
28 July 2021
Xinshuai Dong, A. Luu, Rongrong Ji, Hong Liu
Tags: SILM, AAML
Papers citing "Towards Robustness Against Natural Language Word Substitutions" (showing 21 of 71)

Robust Natural Language Processing: Recent Advances, Challenges, and Future Directions
Marwan Omar, Soohyeon Choi, Daehun Nyang, David A. Mohaisen
03 Jan 2022

How Should Pre-Trained Language Models Be Fine-Tuned Towards Adversarial Robustness?
Xinshuai Dong, Anh Tuan Luu, Min-Bin Lin, Shuicheng Yan, Hanwang Zhang
Tags: SILM, AAML
22 Dec 2021

The King is Naked: on the Notion of Robustness for Natural Language Processing
Emanuele La Malfa, Marta Z. Kwiatkowska
13 Dec 2021

Detecting Textual Adversarial Examples through Randomized Substitution and Vote
Xiaosen Wang, Yifeng Xiong, Kun He
Tags: AAML
13 Sep 2021

TREATED: Towards Universal Defense against Textual Adversarial Attacks
Bin Zhu, Zhaoquan Gu, Le Wang, Zhihong Tian
Tags: AAML
13 Sep 2021

Towards Improving Adversarial Training of NLP Models
Jin Yong Yoo, Yanjun Qi
Tags: AAML
01 Sep 2021

ASR-GLUE: A New Multi-task Benchmark for ASR-Robust Natural Language Understanding
Lingyun Feng, Jianwei Yu, Deng Cai, Songxiang Liu, Haitao Zheng, Yan Wang
Tags: ELM
30 Aug 2021

Searching for an Effective Defender: Benchmarking Defense against Adversarial Word Substitution
Zongyi Li, Jianhan Xu, Jiehang Zeng, Linyang Li, Xiaoqing Zheng, Qi Zhang, Kai-Wei Chang, Cho-Jui Hsieh
Tags: AAML
29 Aug 2021

Evaluating the Robustness of Neural Language Models to Input Perturbations
M. Moradi, Matthias Samwald
Tags: AAML
27 Aug 2021

On The State of Data In Computer Vision: Human Annotations Remain Indispensable for Developing Deep Learning Models
Z. Emam, Andrew Kondrich, Sasha Harrison, Felix Lau, Yushi Wang, Aerin Kim, E. Branson
Tags: VLM
31 Jul 2021

Self-Supervised Contrastive Learning with Adversarial Perturbations for Defending Word Substitution-based Attacks
Zhao Meng, Yihan Dong, Mrinmaya Sachan, Roger Wattenhofer
Tags: AAML
15 Jul 2021

Certified Robustness to Text Adversarial Attacks by Randomized [MASK]
Jiehang Zeng, Xiaoqing Zheng, Jianhan Xu, Linyang Li, Liping Yuan, Xuanjing Huang
Tags: AAML
08 May 2021

Black-Box Dissector: Towards Erasing-based Hard-Label Model Stealing Attack
Yixu Wang, Jie Li, Hong Liu, Yan Wang, Yongjian Wu, Feiyue Huang, Rongrong Ji
Tags: AAML
03 May 2021

Improving Zero-Shot Cross-Lingual Transfer Learning via Robust Training
Kuan-Hao Huang, Wasi Uddin Ahmad, Nanyun Peng, Kai-Wei Chang
Tags: AAML
17 Apr 2021

Certified Robustness to Programmable Transformations in LSTMs
Yuhao Zhang, Aws Albarghouthi, Loris D'Antoni
Tags: AAML
15 Feb 2021

SHIELD: Defending Textual Neural Networks against Multiple Black-Box Adversarial Attacks with Stochastic Multi-Expert Patcher
Thai Le, Noseong Park, Dongwon Lee
Tags: AAML
17 Nov 2020

Certified Robustness to Adversarial Word Substitutions
Robin Jia, Aditi Raghunathan, Kerem Göksel, Percy Liang
Tags: AAML
03 Sep 2019

Generating Natural Language Adversarial Examples
M. Alzantot, Yash Sharma, Ahmed Elgohary, Bo-Jhang Ho, Mani B. Srivastava, Kai-Wei Chang
Tags: AAML
21 Apr 2018

Adversarial Example Generation with Syntactically Controlled Paraphrase Networks
Mohit Iyyer, John Wieting, Kevin Gimpel, Luke Zettlemoyer
Tags: AAML, GAN
17 Apr 2018

Adversarial examples in the physical world
Alexey Kurakin, Ian Goodfellow, Samy Bengio
Tags: SILM, AAML
08 Jul 2016

A Decomposable Attention Model for Natural Language Inference
Ankur P. Parikh, Oscar Täckström, Dipanjan Das, Jakob Uszkoreit
06 Jun 2016