ResearchTrend.AI

© 2025 ResearchTrend.AI, All rights reserved.
Black-box Generation of Adversarial Text Sequences to Evade Deep Learning Classifiers

13 January 2018 · arXiv:1801.04354
Ji Gao, Jack Lanchantin, M. Soffa, Yanjun Qi
AAML

Papers citing "Black-box Generation of Adversarial Text Sequences to Evade Deep Learning Classifiers"

50 / 360 papers shown
Mitigating backdoor attacks in LSTM-based Text Classification Systems by Backdoor Keyword Identification
  Chuanshuai Chen, Jiazhu Dai · SILM · 11 Jul 2020

Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain
  Ishai Rosenberg, A. Shabtai, Yuval Elovici, Lior Rokach · AAML · 05 Jul 2020

Blacklight: Scalable Defense for Neural Networks against Query-Based Black-Box Attacks
  Huiying Li, Shawn Shan, Emily Wenger, Jiayun Zhang, Haitao Zheng, Ben Y. Zhao · AAML · 24 Jun 2020

Systematic Attack Surface Reduction For Deployed Sentiment Analysis Models
  Josh Kalin, David A. Noever, Gerry V. Dozier · 19 Jun 2020

Differentiable Language Model Adversarial Attacks on Categorical Sequence Classifiers
  I. Fursov, A. Zaytsev, Nikita Klyuchnikov, A. Kravchenko, E. Burnaev · AAML, SILM · 19 Jun 2020
Adversarial Attacks and Detection on Reinforcement Learning-Based Interactive Recommender Systems
  Yuanjiang Cao, Xiaocong Chen, Lina Yao, Xianzhi Wang, W. Zhang · AAML · 14 Jun 2020

Adversarial Attacks and Defense on Texts: A Survey
  A. Huq, Mst. Tasnim Pervin · AAML · 28 May 2020

Reliability and Robustness analysis of Machine Learning based Phishing URL Detectors
  Bushra Sabir, Muhammad Ali Babar, R. Gaire, A. Abuadbba · AAML · 18 May 2020

NAT: Noise-Aware Training for Robust Neural Sequence Labeling
  Marcin Namysl, Sven Behnke, Joachim Kohler · NoLa · 14 May 2020

Defense of Word-level Adversarial Attacks via Random Substitution Encoding
  Zhaoyang Wang, Hongtao Wang · AAML, SILM · 01 May 2020

Evaluating Neural Machine Comprehension Model Robustness to Noisy Inputs and Adversarial Attacks
  Winston Wu, Dustin L. Arendt, Svitlana Volkova · AAML · 01 May 2020
TextAttack: A Framework for Adversarial Attacks, Data Augmentation, and Adversarial Training in NLP
  John X. Morris, Eli Lifland, Jin Yong Yoo, J. E. Grigsby, Di Jin, Yanjun Qi · SILM · 29 Apr 2020

Reevaluating Adversarial Examples in Natural Language
  John X. Morris, Eli Lifland, Jack Lanchantin, Yangfeng Ji, Yanjun Qi · SILM, AAML · 25 Apr 2020

Frequency-Guided Word Substitutions for Detecting Textual Adversarial Examples
  Maximilian Mozes, Pontus Stenetorp, Bennett Kleinberg, Lewis D. Griffin · AAML · 13 Apr 2020

BAE: BERT-based Adversarial Examples for Text Classification
  Siddhant Garg, Goutham Ramakrishnan · AAML, SILM · 04 Apr 2020

Gradient-based adversarial attacks on categorical sequence models via traversing an embedded world
  I. Fursov, Alexey Zaytsev, Nikita Klyuchnikov, A. Kravchenko, Evgeny Burnaev · AAML, SILM · 09 Mar 2020

Search Space of Adversarial Perturbations against Image Filters
  D. D. Thang, Toshihiro Matsui · AAML · 05 Mar 2020
Automatic Perturbation Analysis for Scalable Certified Robustness and Beyond
  Kaidi Xu, Zhouxing Shi, Huan Zhang, Yihan Wang, Kai-Wei Chang, Minlie Huang, B. Kailkhura, X. Lin, Cho-Jui Hsieh · AAML · 28 Feb 2020

Adv-BERT: BERT is not robust on misspellings! Generating nature adversarial samples on BERT
  Lichao Sun, Kazuma Hashimoto, Wenpeng Yin, Akari Asai, Jia Li, Philip Yu, Caiming Xiong · SILM, AAML · 27 Feb 2020

Robustness Verification for Transformers
  Zhouxing Shi, Huan Zhang, Kai-Wei Chang, Minlie Huang, Cho-Jui Hsieh · AAML · 16 Feb 2020

Adversarial Robustness for Code
  Pavol Bielik, Martin Vechev · AAML · 11 Feb 2020

FastWordBug: A Fast Method To Generate Adversarial Text Against NLP Applications
  Dou Goodman, Zhonghou Lv, Minghua Wang · AAML · 31 Jan 2020
Elephant in the Room: An Evaluation Framework for Assessing Adversarial Examples in NLP
  Ying Xu, Xu Zhong, Antonio Jimeno Yepes, Jey Han Lau · AAML · 22 Jan 2020

Humpty Dumpty: Controlling Word Meanings via Corpus Poisoning
  R. Schuster, Tal Schuster, Yoav Meri, Vitaly Shmatikov · AAML · 14 Jan 2020

Advbox: a toolbox to generate adversarial examples that fool neural networks
  Dou Goodman, Xin Hao, Yang Wang, Yuesheng Wu, Junfeng Xiong, Huan Zhang · AAML · 13 Jan 2020

Exploring and Improving Robustness of Multi Task Deep Neural Networks via Domain Agnostic Defenses
  Kashyap Coimbatore Murali · AAML, OOD · 11 Jan 2020

To Transfer or Not to Transfer: Misclassification Attacks Against Transfer Learned Text Classifiers
  Bijeeta Pal, Shruti Tople · AAML · 08 Jan 2020

Towards Robust Toxic Content Classification
  Keita Kurita, A. Belova, Antonios Anastasopoulos · AAML · 14 Dec 2019
Towards Security Threats of Deep Learning Systems: A Survey
  Yingzhe He, Guozhu Meng, Kai Chen, Xingbo Hu, Jinwen He · AAML, ELM · 28 Nov 2019

Smoothed Inference for Adversarially-Trained Models
  Yaniv Nemcovsky, Evgenii Zheltonozhskii, Chaim Baskin, Brian Chmiel, Maxim Fishman, A. Bronstein, A. Mendelson · AAML, FedML · 17 Nov 2019

SPARK: Spatial-aware Online Incremental Attack Against Visual Tracking
  Qing Guo, Xiaofei Xie, Felix Juefei-Xu, L. Ma, Zhongguo Li, Wanli Xue, Wei Feng, Yang Liu · AAML · 19 Oct 2019

FENCE: Feasible Evasion Attacks on Neural Networks in Constrained Environments
  Alesia Chernikova, Alina Oprea · AAML · 23 Sep 2019

Adversarial Attacks and Defenses in Images, Graphs and Text: A Review
  Han Xu, Yao Ma, Haochen Liu, Debayan Deb, Hui Liu, Jiliang Tang, Anil K. Jain · AAML · 17 Sep 2019

Generating Black-Box Adversarial Examples for Text Classifiers Using a Deep Reinforced Model
  Prashanth Vijayaraghavan, D. Roy · AAML · 17 Sep 2019
Learning to Discriminate Perturbations for Blocking Adversarial Attacks in Text Classification
  Yichao Zhou, Jyun-Yu Jiang, Kai-Wei Chang, Wei Wang · AAML · 06 Sep 2019

On-Device Text Representations Robust To Misspellings via Projections
  Chinnadhurai Sankar, Sujith Ravi, Zornitsa Kozareva · 14 Aug 2019

Is BERT Really Robust? A Strong Baseline for Natural Language Attack on Text Classification and Entailment
  Di Jin, Zhijing Jin, Qiufeng Wang, Peter Szolovits · SILM, AAML · 27 Jul 2019

Transferable Neural Projection Representations
  Chinnadhurai Sankar, Sujith Ravi, Zornitsa Kozareva · 04 Jun 2019

Generalizable Adversarial Attacks with Latent Variable Perturbation Modelling
  A. Bose, Andre Cianflone, William L. Hamilton · OOD, AAML · 26 May 2019

POPQORN: Quantifying Robustness of Recurrent Neural Networks
  Ching-Yun Ko, Zhaoyang Lyu, Tsui-Wei Weng, Luca Daniel, Ngai Wong, Dahua Lin · AAML · 17 May 2019
White-to-Black: Efficient Distillation of Black-Box Adversarial Attacks
  Yotam Gil, Yoav Chai, O. Gorodissky, Jonathan Berant · MLAU, AAML · 04 Apr 2019

MaskDGA: A Black-box Evasion Technique Against DGA Classifiers and Adversarial Defenses
  Lior Sidi, Asaf Nadler, A. Shabtai · AAML · 24 Feb 2019

Towards a Robust Deep Neural Network in Texts: A Survey
  Wenqi Wang, Benxiao Tang, Run Wang, Lina Wang, Aoshuang Ye · AAML · 12 Feb 2019

Defense Methods Against Adversarial Examples for Recurrent Neural Networks
  Ishai Rosenberg, A. Shabtai, Yuval Elovici, Lior Rokach · AAML, GAN · 28 Jan 2019

Universal Rules for Fooling Deep Neural Networks based Text Classification
  Di Li, Danilo Vasconcellos Vargas, Kouichi Sakurai · AAML · 22 Jan 2019

Adversarial Attacks on Deep Learning Models in Natural Language Processing: A Survey
  W. Zhang, Quan Z. Sheng, A. Alhazmi, Chenliang Li · AAML · 21 Jan 2019
Analysis Methods in Neural Language Processing: A Survey
  Yonatan Belinkov, James R. Glass · 21 Dec 2018

TextBugger: Generating Adversarial Text Against Real-world Applications
  Jinfeng Li, S. Ji, Tianyu Du, Bo Li, Ting Wang · SILM, AAML · 13 Dec 2018

Discrete Adversarial Attacks and Submodular Optimization with Applications to Text Classification
  Qi Lei, Lingfei Wu, Pin-Yu Chen, A. Dimakis, Inderjit S. Dhillon, Michael Witbrock · AAML · 01 Dec 2018

Adversarial Gain
  Peter Henderson, Koustuv Sinha, Nan Rosemary Ke, Joelle Pineau · AAML · 04 Nov 2018