Deep Text Classification Can be Fooled (arXiv:1704.08006)
Bin Liang, Hongcheng Li, Miaoqiang Su, Pan Bian, Xirong Li, Wenchang Shi
AAML
26 April 2017

Papers citing "Deep Text Classification Can be Fooled"

Showing 50 of 69 citing papers.

Spiking Convolutional Neural Networks for Text Classification
Changze Lv, Jianhan Xu, Xiaoqing Zheng
27 Jun 2024

Semantic Stealth: Adversarial Text Attacks on NLP Using Several Methods
Roopkatha Dey, Aivy Debnath, Sayak Kumar Dutta, Kaustav Ghosh, Arijit Mitra, Arghya Roy Chowdhury, Jaydip Sen
AAML, SILM
08 Apr 2024

A Modified Word Saliency-Based Adversarial Attack on Text Classification Models
Hetvi Waghela, Sneha Rakshit, Jaydip Sen
AAML
17 Mar 2024

Adversarial Testing for Visual Grounding via Image-Aware Property Reduction
Zhiyuan Chang, Mingyang Li, Junjie Wang, Cheng Li, Boyu Wu, Fanjiang Xu, Qing Wang
AAML
02 Mar 2024

Trustworthy Distributed AI Systems: Robustness, Privacy, and Governance
Wenqi Wei, Ling Liu
02 Feb 2024

A Critical Reflection on the Use of Toxicity Detection Algorithms in Proactive Content Moderation Systems
Mark Warner, Angelika Strohmayer, Matthew Higgs, Lynne Coventry
19 Jan 2024

From Adversarial Arms Race to Model-centric Evaluation: Motivating a Unified Automatic Robustness Evaluation Framework
Yangyi Chen, Hongcheng Gao, Ganqu Cui, Lifan Yuan, Dehan Kong, ..., Longtao Huang, H. Xue, Zhiyuan Liu, Maosong Sun, Heng Ji
AAML, ELM
29 May 2023

Modeling Adversarial Attack on Pre-trained Language Models as Sequential Decision Making
Xuanjie Fang, Sijie Cheng, Yang Liu, Wen Wang
AAML
27 May 2023

Another Dead End for Morphological Tags? Perturbed Inputs and Parsing
Alberto Muñoz-Ortiz, David Vilares
24 May 2023

A Survey of Safety and Trustworthiness of Large Language Models through the Lens of Verification and Validation
Xiaowei Huang, Wenjie Ruan, Wei Huang, Gao Jin, Yizhen Dong, ..., Sihao Wu, Peipei Xu, Dengyu Wu, André Freitas, Mustafa A. Mustafa
ALM
19 May 2023

Adversarial Amendment is the Only Force Capable of Transforming an Enemy into a Friend
Chong Yu, Tao Chen, Zhongxue Gan
AAML
18 May 2023

Tell Model Where to Attend: Improving Interpretability of Aspect-Based Sentiment Classification via Small Explanation Annotations
Zhenxiao Cheng, Jie Zhou, Wen Wu, Qin Chen, Liang He
21 Feb 2023

Identifying the Source of Vulnerability in Explanation Discrepancy: A Case Study in Neural Text Classification
Ruixuan Tang, Hanjie Chen, Yangfeng Ji
AAML, FAtt
10 Dec 2022

TASA: Deceiving Question Answering Models by Twin Answer Sentences Attack
Yu Cao, Dianqi Li, Meng Fang, Dinesh Manocha, Jun Gao, Yibing Zhan, Dacheng Tao
AAML
27 Oct 2022

ROSE: Robust Selective Fine-tuning for Pre-trained Language Models
Lan Jiang, Hao Zhou, Yankai Lin, Peng Li, Jie Zhou, R. Jiang
AAML
18 Oct 2022

Adversarial Robustness for Tabular Data through Cost and Utility Awareness
Klim Kireev, B. Kulynych, Carmela Troncoso
AAML
27 Aug 2022

A Context-Aware Approach for Textual Adversarial Attack through Probability Difference Guided Beam Search
Huijun Liu, Jie Yu, Shasha Li, Jun Ma, Bin Ji
AAML
17 Aug 2022

Catch Me If You Can: Deceiving Stance Detection and Geotagging Models to Protect Privacy of Individuals on Twitter
Dilara Doğan, Bahadir Altun, Muhammed Said Zengin, Mucahid Kutlu, Tamer Elsayed
23 Jul 2022

Rethinking Textual Adversarial Defense for Pre-trained Language Models
Jiayi Wang, Rongzhou Bao, Zhuosheng Zhang, Hai Zhao
AAML, SILM
21 Jul 2022

AEON: A Method for Automatic Evaluation of NLP Test Cases
Jen-tse Huang, Jianping Zhang, Wenxuan Wang, Pinjia He, Yuxin Su, Michael R. Lyu
13 May 2022

Testing the limits of natural language models for predicting human language judgments
Tal Golan, Matthew Siegelman, N. Kriegeskorte, Christopher A. Baldassano
07 Apr 2022

Adversarial Training for Improving Model Robustness? Look at Both Prediction and Interpretation
Hanjie Chen, Yangfeng Ji
OOD, AAML, VLM
23 Mar 2022

Defending Black-box Skeleton-based Human Activity Classifiers
He Wang, Yunfeng Diao, Zichang Tan, G. Guo
AAML
09 Mar 2022

Robust Natural Language Processing: Recent Advances, Challenges, and Future Directions
Marwan Omar, Soohyeon Choi, Daehun Nyang, David A. Mohaisen
03 Jan 2022

How Should Pre-Trained Language Models Be Fine-Tuned Towards Adversarial Robustness?
Xinhsuai Dong, Anh Tuan Luu, Min Lin, Shuicheng Yan, Hanwang Zhang
SILM, AAML
22 Dec 2021

The King is Naked: on the Notion of Robustness for Natural Language Processing
Emanuele La Malfa, Marta Z. Kwiatkowska
13 Dec 2021

Adversarial Attacks Against Deep Generative Models on Data: A Survey
Hui Sun, Tianqing Zhu, Zhiqiu Zhang, Dawei Jin, Wanlei Zhou
AAML
01 Dec 2021

Effective and Imperceptible Adversarial Textual Attack via Multi-objectivization
Shengcai Liu, Ning Lu, W. Hong, Chao Qian, Ke Tang
AAML
02 Nov 2021

Detecting Textual Adversarial Examples through Randomized Substitution and Vote
Xiaosen Wang, Yifeng Xiong, Kun He
AAML
13 Sep 2021

A Strong Baseline for Query Efficient Attacks in a Black Box Setting
Rishabh Maheshwary, Saket Maheshwary, Vikram Pudi
AAML
10 Sep 2021

Multi-granularity Textual Adversarial Attack with Behavior Cloning
Yangyi Chen, Jingtong Su, Wei Wei
AAML
09 Sep 2021

Evaluating the Robustness of Neural Language Models to Input Perturbations
M. Moradi, Matthias Samwald
AAML
27 Aug 2021

Towards Robustness Against Natural Language Word Substitutions
Xinshuai Dong, A. Luu, Rongrong Ji, Hong Liu
SILM, AAML
28 Jul 2021

We Can Always Catch You: Detecting Adversarial Patched Objects WITH or WITHOUT Signature
Binxiu Liang, Jiachun Li, Jianjun Huang
AAML
09 Jun 2021

Defending Against Backdoor Attacks in Natural Language Generation
Xiaofei Sun, Xiaoya Li, Yuxian Meng, Xiang Ao, Fei Wu, Jiwei Li, Tianwei Zhang
AAML, SILM
03 Jun 2021

03 Jun 2021
Robustness Tests of NLP Machine Learning Models: Search and Semantically
  Replace
Robustness Tests of NLP Machine Learning Models: Search and Semantically Replace
Rahul Singh
Karan Jindal
Yufei Yu
Hanyu Yang
Tarun Joshi
Matthew A. Campbell
Wayne B. Shoumaker
58
2
0
20 Apr 2021
Normal vs. Adversarial: Salience-based Analysis of Adversarial Samples for Relation Extraction
Luoqiu Li, Xiang Chen, Zhen Bi, Xin Xie, Shumin Deng, Ningyu Zhang, Chuanqi Tan, Mosha Chen, Huajun Chen
AAML
01 Apr 2021

BERT: A Review of Applications in Natural Language Processing and Understanding
M. V. Koroteev
VLM
22 Mar 2021

Adversarial Attack on Network Embeddings via Supervised Network Poisoning
Viresh Gupta, Tanmoy Chakraborty
AAML
14 Feb 2021

Generating Natural Language Attacks in a Hard Label Black Box Setting
Rishabh Maheshwary, Saket Maheshwary, Vikram Pudi
AAML
29 Dec 2020

A Deep Marginal-Contrastive Defense against Adversarial Attacks on 1D Models
Mohammed Hassanin, Nour Moustafa, M. Tahtali
AAML
08 Dec 2020

Self-Explaining Structures Improve NLP Models
Zijun Sun, Chun Fan, Qinghong Han, Xiaofei Sun, Yuxian Meng, Fei Wu, Jiwei Li
MILM, XAI, LRM, FAtt
03 Dec 2020

Do We Need Online NLU Tools?
Petr Lorenc, Petro Marek, Jan Pichl, Jakub Konrád, Jan Sedivý
19 Nov 2020

Adversarial Semantic Collisions
Congzheng Song, Alexander M. Rush, Vitaly Shmatikov
AAML
09 Nov 2020

Geometry matters: Exploring language examples at the decision boundary
Debajyoti Datta, Shashwat Kumar, Laura E. Barnes, Tom Fletcher
AAML
14 Oct 2020

Can Adversarial Weight Perturbations Inject Neural Backdoors?
Siddhant Garg, Adarsh Kumar, Vibhor Goel, Yingyu Liang
AAML
04 Aug 2020

Defense against Adversarial Attacks in NLP via Dirichlet Neighborhood Ensemble
Yi Zhou, Xiaoqing Zheng, Cho-Jui Hsieh, Kai-Wei Chang, Xuanjing Huang
SILM
20 Jun 2020

Differentiable Language Model Adversarial Attacks on Categorical Sequence Classifiers
I. Fursov, A. Zaytsev, Nikita Klyuchnikov, A. Kravchenko, E. Burnaev
AAML, SILM
19 Jun 2020

Chat as Expected: Learning to Manipulate Black-box Neural Dialogue Models
Haochen Liu, Zhiwei Wang, Tyler Derr, Jiliang Tang
AAML
27 May 2020

Frequency-Guided Word Substitutions for Detecting Textual Adversarial Examples
Maximilian Mozes, Pontus Stenetorp, Bennett Kleinberg, Lewis D. Griffin
AAML
13 Apr 2020