TextBugger: Generating Adversarial Text Against Real-world Applications
arXiv: 1812.05271 (13 December 2018)
Jinfeng Li, S. Ji, Tianyu Du, Bo Li, Ting Wang
Topics: SILM, AAML
Papers citing "TextBugger: Generating Adversarial Text Against Real-world Applications" (showing 50 of 382)
Normal vs. Adversarial: Salience-based Analysis of Adversarial Samples for Relation Extraction (01 Apr 2021)
Luoqiu Li, Xiang Chen, Zhen Bi, Xin Xie, Shumin Deng, Ningyu Zhang, Chuanqi Tan, Mosha Chen, Huajun Chen (Topics: AAML)

BERT: A Review of Applications in Natural Language Processing and Understanding (22 Mar 2021)
M. V. Koroteev (Topics: VLM)

Model Extraction and Adversarial Transferability, Your BERT is Vulnerable! (18 Mar 2021)
Xuanli He, Lingjuan Lyu, Qiongkai Xu, Lichao Sun (Topics: MIACV, SILM)

Code-Mixing on Sesame Street: Dawn of the Adversarial Polyglots (17 Mar 2021)
Samson Tan, Chenyu You (Topics: AAML)

ReinforceBug: A Framework to Generate Adversarial Textual Examples (11 Mar 2021)
Bushra Sabir, M. Babar, R. Gaire (Topics: SILM, AAML)

T-Miner: A Generative Approach to Defend Against Trojan Attacks on DNN-based Text Classification (07 Mar 2021)
A. Azizi, I. A. Tahmid, Asim Waheed, Neal Mangaokar, Jiameng Pu, M. Javed, Chandan K. Reddy, Bimal Viswanath (Topics: AAML)

Token-Modification Adversarial Attacks for Natural Language Processing: A Survey (01 Mar 2021)
Tom Roth, Yansong Gao, A. Abuadbba, Surya Nepal, Wei Liu (Topics: AAML)

Enhancing Model Robustness By Incorporating Adversarial Knowledge Into Semantic Representation (23 Feb 2021)
Jinfeng Li, Tianyu Du, Xiangyu Liu, Rong Zhang, Hui Xue, S. Ji (Topics: AAML)

Certified Robustness to Programmable Transformations in LSTMs (15 Feb 2021)
Yuhao Zhang, Aws Albarghouthi, Loris D'Antoni (Topics: AAML)

RECAST: Enabling User Recourse and Interpretability of Toxicity Detection Models with Interactive Visualization (08 Feb 2021)
Austin P. Wright, Omar Shaikh, Haekyu Park, Will Epperson, Muhammed Ahmed, Stephane Pinel, Duen Horng Chau, Diyi Yang

Adv-OLM: Generating Textual Adversaries via OLM (21 Jan 2021)
Vijit Malik, A. Bhat, Ashutosh Modi

Generative Counterfactuals for Neural Networks via Attribute-Informed Perturbation (18 Jan 2021)
Fan Yang, Ninghao Liu, Mengnan Du, X. Hu (Topics: OOD)

Adversarial Machine Learning in Text Analysis and Generation (14 Jan 2021)
I. Alsmadi (Topics: AAML)

Robustness Testing of Language Understanding in Task-Oriented Dialog (30 Dec 2020)
Jiexi Liu, Ryuichi Takanobu, Jiaxin Wen, Dazhen Wan, Hongguang Li, Weiran Nie, Cheng Li, Wei Peng, Minlie Huang (Topics: ELM)

Generating Natural Language Attacks in a Hard Label Black Box Setting (29 Dec 2020)
Rishabh Maheshwary, Saket Maheshwary, Vikram Pudi (Topics: AAML)

A Deep Marginal-Contrastive Defense against Adversarial Attacks on 1D Models (08 Dec 2020)
Mohammed Hassanin, Nour Moustafa, M. Tahtali (Topics: AAML)

Adversarial Evaluation of Multimodal Models under Realistic Gray Box Assumption (25 Nov 2020)
Ivan Evtimov, Russ Howes, Brian Dolhansky, Hamed Firooz, Cristian Canton Ferrer (Topics: AAML)

A Sweet Rabbit Hole by DARCY: Using Honeypots to Detect Universal Trigger's Adversarial Attacks (20 Nov 2020)
Thai Le, Noseong Park, Dongwon Lee

SHIELD: Defending Textual Neural Networks against Multiple Black-Box Adversarial Attacks with Stochastic Multi-Expert Patcher (17 Nov 2020)
Thai Le, Noseong Park, Dongwon Lee (Topics: AAML)

Adversarial Black-Box Attacks On Text Classifiers Using Multi-Objective Genetic Optimization Guided By Deep Networks (08 Nov 2020)
Alex Mathai, Shreya Khare, Srikanth G. Tamilselvam, Senthil Mani (Topics: AAML)

Leveraging Extracted Model Adversaries for Improved Black Box Attacks (30 Oct 2020)
Naveen Jafer Nizar, Ari Kobren (Topics: MIACV)

GreedyFool: Multi-Factor Imperceptibility and Its Application to Designing a Black-box Adversarial Attack (14 Oct 2020)
Hui Liu, Bo Zhao, Minzhi Ji, Peng Liu (Topics: AAML)

EFSG: Evolutionary Fooling Sentences Generator (12 Oct 2020)
Marco Di Giovanni, Marco Brambilla (Topics: AAML)

Decamouflage: A Framework to Detect Image-Scaling Attacks on Convolutional Neural Networks (08 Oct 2020)
Bedeuro Kim, A. Abuadbba, Yansong Gao, Yifeng Zheng, Muhammad Ejaz Ahmed, Hyoungshick Kim, Surya Nepal

InfoBERT: Improving Robustness of Language Models from An Information Theoretic Perspective (05 Oct 2020)
Wei Ping, Shuohang Wang, Yu Cheng, Zhe Gan, R. Jia, Bo-wen Li, Jingjing Liu (Topics: AAML)

Second-Order NLP Adversarial Examples (05 Oct 2020)
John X. Morris (Topics: AAML)

A Geometry-Inspired Attack for Generating Natural Language Adversarial Examples (03 Oct 2020)
Zhao Meng, Roger Wattenhofer (Topics: GAN, AAML)

Adversarial Attacks Against Deep Learning Systems for ICD-9 Code Assignment (29 Sep 2020)
Sharan Raja, Rudraksh Tuwani (Topics: AAML)

OpenAttack: An Open-source Textual Adversarial Attack Toolkit (19 Sep 2020)
Guoyang Zeng, Fanchao Qi, Qianrui Zhou, Ting Zhang, Zixian Ma, Bairu Hou, Yuan Zang, Zhiyuan Liu, Maosong Sun (Topics: AAML)

Contextualized Perturbation for Textual Adversarial Attack (16 Sep 2020)
Dianqi Li, Yizhe Zhang, Hao Peng, Liqun Chen, Chris Brockett, Ming-Ting Sun, Bill Dolan (Topics: AAML, SILM)

Dynamically Computing Adversarial Perturbations for Recurrent Neural Networks (07 Sep 2020)
Shankar A. Deka, D. Stipanović, Claire Tomlin (Topics: AAML)

MALCOM: Generating Malicious Comments to Attack Neural Fake News Detection Models (01 Sep 2020)
Thai Le, Suhang Wang, Dongwon Lee

TextDecepter: Hard Label Black Box Attack on Text Classifiers (16 Aug 2020)
Sachin Saxena (Topics: AAML)

FireBERT: Hardening BERT-based classifiers against adversarial attack (10 Aug 2020)
Gunnar Mein, Kevin Hartman, Andrew Morris (Topics: SILM, AAML)

Adversarial Training with Fast Gradient Projection Method against Synonym Substitution based Text Attacks (09 Aug 2020)
Xiaosen Wang, Yichen Yang, Yihe Deng, Kun He (Topics: OOD, AAML)

Visual Attack and Defense on Text (07 Aug 2020)
Shengjun Liu, Ningkang Jiang, Yuanbin Wu (Topics: AAML)

Trojaning Language Models for Fun and Profit (01 Aug 2020)
Xinyang Zhang, Zheng-Wei Zhang, Shouling Ji, Ting Wang (Topics: SILM, AAML)

Natural Backdoor Attack on Text Data (29 Jun 2020)
Lichao Sun (Topics: SILM)

Blacklight: Scalable Defense for Neural Networks against Query-Based Black-Box Attacks (24 Jun 2020)
Huiying Li, Shawn Shan, Emily Wenger, Jiayun Zhang, Haitao Zheng, Ben Y. Zhao (Topics: AAML)

Adversarial Attacks and Defense on Texts: A Survey (28 May 2020)
A. Huq, Mst. Tasnim Pervin (Topics: AAML)

Reliability and Robustness analysis of Machine Learning based Phishing URL Detectors (18 May 2020)
Bushra Sabir, Muhammad Ali Babar, R. Gaire, A. Abuadbba (Topics: AAML)

Defense of Word-level Adversarial Attacks via Random Substitution Encoding (01 May 2020)
Zhaoyang Wang, Hongtao Wang (Topics: AAML, SILM)

Evaluating Neural Machine Comprehension Model Robustness to Noisy Inputs and Adversarial Attacks (01 May 2020)
Winston Wu, Dustin L. Arendt, Svitlana Volkova (Topics: AAML)

TextAttack: A Framework for Adversarial Attacks, Data Augmentation, and Adversarial Training in NLP (29 Apr 2020)
John X. Morris, Eli Lifland, Jin Yong Yoo, J. E. Grigsby, Di Jin, Yanjun Qi (Topics: SILM)

Reevaluating Adversarial Examples in Natural Language (25 Apr 2020)
John X. Morris, Eli Lifland, Jack Lanchantin, Yangfeng Ji, Yanjun Qi (Topics: SILM, AAML)

Testing Machine Translation via Referential Transparency (22 Apr 2020)
Pinjia He, Clara Meister, Z. Su

Train No Evil: Selective Masking for Task-Guided Pre-Training (21 Apr 2020)
Yuxian Gu, Zhengyan Zhang, Xiaozhi Wang, Zhiyuan Liu, Maosong Sun

Weight Poisoning Attacks on Pre-trained Models (14 Apr 2020)
Keita Kurita, Paul Michel, Graham Neubig (Topics: AAML, SILM)

Frequency-Guided Word Substitutions for Detecting Textual Adversarial Examples (13 Apr 2020)
Maximilian Mozes, Pontus Stenetorp, Bennett Kleinberg, Lewis D. Griffin (Topics: AAML)

Towards Evaluating the Robustness of Chinese BERT Classifiers (07 Apr 2020)
Wei Ping, Boyuan Pan, Xin Li, Bo-wen Li (Topics: AAML)