ResearchTrend.AI

Black-box Generation of Adversarial Text Sequences to Evade Deep Learning Classifiers
arXiv:1801.04354, 13 January 2018
Ji Gao, Jack Lanchantin, M. Soffa, Yanjun Qi
Tags: AAML

Papers citing "Black-box Generation of Adversarial Text Sequences to Evade Deep Learning Classifiers"

Showing 50 of 360 citing papers (title, date, authors, tags):
• Perturbing Inputs for Fragile Interpretations in Deep Natural Language Processing (11 Aug 2021). Sanchit Sinha, Hanjie Chen, Arshdeep Sekhon, Yangfeng Ji, Yanjun Qi. Tags: AAML, FAtt.
• Local Structure Matters Most: Perturbation Study in NLU (29 Jul 2021). Louis Clouâtre, Prasanna Parthasarathi, Amal Zouaq, Sarath Chandar.
• Towards Robustness Against Natural Language Word Substitutions (28 Jul 2021). Xinshuai Dong, A. Luu, Rongrong Ji, Hong Liu. Tags: SILM, AAML.
• A Differentiable Language Model Adversarial Attack on Text Classifiers (23 Jul 2021). I. Fursov, Alexey Zaytsev, Pavel Burnyshev, Ekaterina Dmitrieva, Nikita Klyuchnikov, A. Kravchenko, Ekaterina Artemova, Evgeny Burnaev. Tags: SILM.
• How Vulnerable Are Automatic Fake News Detection Methods to Adversarial Attacks? (16 Jul 2021). Camille Koenders, Johannes Filla, Nicolai Schneider, Vinicius Woloszyn. Tags: GNN.
• Understanding Adversarial Examples Through Deep Neural Network's Response Surface and Uncertainty Regions (30 Jun 2021). Juan Shu, B. Xi, Charles A. Kamhoua. Tags: AAML.
• The Threat of Offensive AI to Organizations (30 Jun 2021). Yisroel Mirsky, Ambra Demontis, J. Kotak, Ram Shankar, Deng Gelei, Liu Yang, Xinming Zhang, Wenke Lee, Yuval Elovici, Battista Biggio.
• Bad Characters: Imperceptible NLP Attacks (18 Jun 2021). Nicholas Boucher, Ilia Shumailov, Ross J. Anderson, Nicolas Papernot. Tags: AAML, SILM.
• Adversarial Attacks on Deep Models for Financial Transaction Records (15 Jun 2021). I. Fursov, Matvey Morozov, N. Kaploukhaya, Elizaveta Kovtun, Rodrigo Rivera-Castro, Gleb Gusev, Dmitrii Babaev, Ivan Kireev, Alexey Zaytsev, E. Burnaev. Tags: AAML.
• URLTran: Improving Phishing URL Detection Using Transformers (09 Jun 2021). Pranav Maneriker, Jack W. Stokes, Edir Garcia Lazo, Diana Carutasu, Farid Tajaddodianfar, A. Gururajan.
• Framing RNN as a kernel method: A neural ODE approach (02 Jun 2021). Adeline Fermanian, P. Marion, Jean-Philippe Vert, Gérard Biau.
• Defending Pre-trained Language Models from Adversarial Word Substitutions Without Performance Sacrifice (30 May 2021). Rongzhou Bao, Jiayi Wang, Hai Zhao. Tags: AAML.
• Certified Robustness to Text Adversarial Attacks by Randomized [MASK] (08 May 2021). Jiehang Zeng, Xiaoqing Zheng, Jianhan Xu, Linyang Li, Liping Yuan, Xuanjing Huang. Tags: AAML.
• Improved and Efficient Text Adversarial Attacks using Target Information (27 Apr 2021). M. Hossam, Trung Le, He Zhao, Viet Huynh, Dinh Q. Phung. Tags: AAML.
• Evaluating Deception Detection Model Robustness To Linguistic Variation (23 Apr 2021). M. Glenski, Ellyn Ayton, Robin Cosbey, Dustin L. Arendt, Svitlana Volkova. Tags: AAML.
• An Adversarially-Learned Turing Test for Dialog Generation Models (16 Apr 2021). Xiang Gao, Yizhe Zhang, Michel Galley, Bill Dolan. Tags: AAML.
• Gradient-based Adversarial Attacks against Text Transformers (15 Apr 2021). Chuan Guo, Alexandre Sablayrolles, Hervé Jégou, Douwe Kiela. Tags: SILM.
• Fall of Giants: How popular text-based MLaaS fall against a simple evasion attack (13 Apr 2021). Luca Pajola, Mauro Conti.
• Universal Spectral Adversarial Attacks for Deformable Shapes (07 Apr 2021). Arianna Rampini, Franco Pestarini, Luca Cosmo, Simone Melzi, Emanuele Rodolà. Tags: AAML.
• Normal vs. Adversarial: Salience-based Analysis of Adversarial Samples for Relation Extraction (01 Apr 2021). Luoqiu Li, Xiang Chen, Zhen Bi, Xin Xie, Shumin Deng, Ningyu Zhang, Chuanqi Tan, Mosha Chen, Huajun Chen. Tags: AAML.
• Model Extraction and Adversarial Transferability, Your BERT is Vulnerable! (18 Mar 2021). Xuanli He, Lingjuan Lyu, Qiongkai Xu, Lichao Sun. Tags: MIACV, SILM.
• ReinforceBug: A Framework to Generate Adversarial Textual Examples (11 Mar 2021). Bushra Sabir, M. Babar, R. Gaire. Tags: SILM, AAML.
• T-Miner: A Generative Approach to Defend Against Trojan Attacks on DNN-based Text Classification (07 Mar 2021). A. Azizi, I. A. Tahmid, Asim Waheed, Neal Mangaokar, Jiameng Pu, M. Javed, Chandan K. Reddy, Bimal Viswanath. Tags: AAML.
• Token-Modification Adversarial Attacks for Natural Language Processing: A Survey (01 Mar 2021). Tom Roth, Yansong Gao, A. Abuadbba, Surya Nepal, Wei Liu. Tags: AAML.
• Generalized Adversarial Distances to Efficiently Discover Classifier Errors (25 Feb 2021). Walter D. Bennette, Sally Dufek, Karsten Maurer, Sean Sisti, Bunyod Tusmatov.
• On Robustness of Neural Semantic Parsers (02 Feb 2021). Shuo Huang, Zhuang Li, Lizhen Qu, Lei Pan. Tags: AAML.
• ShufText: A Simple Black Box Approach to Evaluate the Fragility of Text Classification Models (30 Jan 2021). Rutuja Taware, Shraddha Varat, G. Salunke, Chaitanya Gawande, Geetanjali Kale, Rahul Khengare, Raviraj Joshi.
• Adv-OLM: Generating Textual Adversaries via OLM (21 Jan 2021). Vijit Malik, A. Bhat, Ashutosh Modi.
• Generative Counterfactuals for Neural Networks via Attribute-Informed Perturbation (18 Jan 2021). Fan Yang, Ninghao Liu, Mengnan Du, X. Hu. Tags: OOD.
• Adversarial Machine Learning in Text Analysis and Generation (14 Jan 2021). I. Alsmadi. Tags: AAML.
• Adversarially Robust and Explainable Model Compression with On-Device Personalization for Text Classification (10 Jan 2021). Yao Qiang, Supriya Tumkur Suresh Kumar, Marco Brocanelli, D. Zhu. Tags: AAML.
• Generating Natural Language Attacks in a Hard Label Black Box Setting (29 Dec 2020). Rishabh Maheshwary, Saket Maheshwary, Vikram Pudi. Tags: AAML.
• AdvExpander: Generating Natural Language Adversarial Examples by Expanding Text (18 Dec 2020). Zhihong Shao, Zitao Liu, Jiyong Zhang, Zhongqin Wu, Minlie Huang. Tags: AAML.
• A Deep Marginal-Contrastive Defense against Adversarial Attacks on 1D Models (08 Dec 2020). Mohammed Hassanin, Nour Moustafa, M. Tahtali. Tags: AAML.
• A Sweet Rabbit Hole by DARCY: Using Honeypots to Detect Universal Trigger's Adversarial Attacks (20 Nov 2020). Thai Le, Noseong Park, Dongwon Lee.
• SHIELD: Defending Textual Neural Networks against Multiple Black-Box Adversarial Attacks with Stochastic Multi-Expert Patcher (17 Nov 2020). Thai Le, Noseong Park, Dongwon Lee. Tags: AAML.
• Adversarial Black-Box Attacks On Text Classifiers Using Multi-Objective Genetic Optimization Guided By Deep Networks (08 Nov 2020). Alex Mathai, Shreya Khare, Srikanth G. Tamilselvam, Senthil Mani. Tags: AAML.
• WaveTransform: Crafting Adversarial Examples via Input Decomposition (29 Oct 2020). Divyam Anshumaan, Akshay Agarwal, Mayank Vatsa, Richa Singh. Tags: AAML.
• Char2Subword: Extending the Subword Embedding Space Using Robust Character Compositionality (24 Oct 2020). Gustavo Aguilar, Bryan McCann, Tong Niu, Nazneen Rajani, N. Keskar, Thamar Solorio.
• Geometry matters: Exploring language examples at the decision boundary (14 Oct 2020). Debajyoti Datta, Shashwat Kumar, Laura E. Barnes, Tom Fletcher. Tags: AAML.
• Explain2Attack: Text Adversarial Attacks via Cross-Domain Interpretability (14 Oct 2020). M. Hossam, Trung Le, He Zhao, Dinh Q. Phung. Tags: SILM, AAML.
• EFSG: Evolutionary Fooling Sentences Generator (12 Oct 2020). Marco Di Giovanni, Marco Brambilla. Tags: AAML.
• Second-Order NLP Adversarial Examples (05 Oct 2020). John X. Morris. Tags: AAML.
• Assessing Robustness of Text Classification through Maximal Safe Radius Computation (01 Oct 2020). Emanuele La Malfa, Min Wu, Luca Laurenti, Benjie Wang, Anthony Hartshorn, Marta Z. Kwiatkowska. Tags: AAML.
• Learning to Attack: Towards Textual Adversarial Attacking in Real-world Situations (19 Sep 2020). Yuan Zang, Bairu Hou, Fanchao Qi, Zhiyuan Liu, Xiaojun Meng, Maosong Sun.
• OpenAttack: An Open-source Textual Adversarial Attack Toolkit (19 Sep 2020). Guoyang Zeng, Fanchao Qi, Qianrui Zhou, Ting Zhang, Zixian Ma, Bairu Hou, Yuan Zang, Zhiyuan Liu, Maosong Sun. Tags: AAML.
• Contextualized Perturbation for Textual Adversarial Attack (16 Sep 2020). Dianqi Li, Yizhe Zhang, Hao Peng, Liqun Chen, Chris Brockett, Ming-Ting Sun, Bill Dolan. Tags: AAML, SILM.
• Searching for a Search Method: Benchmarking Search Algorithms for Generating NLP Adversarial Examples (09 Sep 2020). Jin Yong Yoo, John X. Morris, Eli Lifland, Yanjun Qi. Tags: AAML.
• Dynamically Computing Adversarial Perturbations for Recurrent Neural Networks (07 Sep 2020). Shankar A. Deka, D. Stipanović, Claire Tomlin. Tags: AAML.
• TextDecepter: Hard Label Black Box Attack on Text Classifiers (16 Aug 2020). Sachin Saxena. Tags: AAML.