TextBugger: Generating Adversarial Text Against Real-world Applications
13 December 2018
Jinfeng Li, S. Ji, Tianyu Du, Bo Li, Ting Wang
SILM · AAML

Papers citing "TextBugger: Generating Adversarial Text Against Real-world Applications" (50 of 382 shown)
  • TASA: Deceiving Question Answering Models by Twin Answer Sentences Attack
    Yu Cao, Dianqi Li, Meng Fang, Dinesh Manocha, Jun Gao, Yibing Zhan, Dacheng Tao
    AAML · 27 Oct 2022
  • Disentangled Text Representation Learning with Information-Theoretic Perspective for Adversarial Robustness
    Jiahao Zhao, Wenji Mao
    DRL · OOD · 26 Oct 2022
  • Secure and Trustworthy Artificial Intelligence-Extended Reality (AI-XR) for Metaverses
    Adnan Qayyum, M. A. Butt, Hassan Ali, Muhammad Usman, O. Halabi, Ala I. Al-Fuqaha, Q. Abbasi, Muhammad Ali Imran, Junaid Qadir
    24 Oct 2022
  • ADDMU: Detection of Far-Boundary Adversarial Examples with Data and Model Uncertainty Estimation
    Fan Yin, Yao Li, Cho-Jui Hsieh, Kai-Wei Chang
    AAML · 22 Oct 2022
  • TCAB: A Large-Scale Text Classification Attack Benchmark
    Kalyani Asthana, Zhouhang Xie, Wencong You, Adam Noack, Jonathan Brophy, Sameer Singh, Daniel Lowd
    21 Oct 2022
  • An Empirical Analysis of SMS Scam Detection Systems
    Muhammad Salman, Muhammad Ikram, M. Kâafar
    19 Oct 2022
  • Probabilistic Inverse Modeling: An Application in Hydrology
    Somya Sharma, Rahul Ghosh, Arvind Renganathan, Xiang Li, Snigdhansu Chatterjee, John L. Nieber, C. Duffy, Vipin Kumar
    AI4CE · 12 Oct 2022
  • LLMEffiChecker: Understanding and Testing Efficiency Degradation of Large Language Models
    Simin Chen, Cong Liu, Mirazul Haque, Wei Yang
    07 Oct 2022
  • BootAug: Boosting Text Augmentation via Hybrid Instance Filtering Framework
    Heng Yang, Ke Li
    06 Oct 2022
  • PromptAttack: Prompt-based Attack for Language Models via Gradient Search
    Yundi Shi, Piji Li, Changchun Yin, Zhaoyang Han, Zhe Liu
    AAML · SILM · 05 Sep 2022
  • Semantic Preserving Adversarial Attack Generation with Autoencoder and Genetic Algorithm
    Xinyi Wang, S. Y. Enoch, Dan Dongseong Kim
    AAML · 25 Aug 2022
  • Catch Me If You Can: Deceiving Stance Detection and Geotagging Models to Protect Privacy of Individuals on Twitter
    Dilara Doğan, Bahadir Altun, Muhammed Said Zengin, Mucahid Kutlu, Tamer Elsayed
    23 Jul 2022
  • Rethinking Textual Adversarial Defense for Pre-trained Language Models
    Jiayi Wang, Rongzhou Bao, Zhuosheng Zhang, Hai Zhao
    AAML · SILM · 21 Jul 2022
  • Defending Substitution-Based Profile Pollution Attacks on Sequential Recommenders
    Zhenrui Yue, Huimin Zeng, Ziyi Kou, Lanyu Shang, Dong Wang
    AAML · 19 Jul 2022
  • A Universal Adversarial Policy for Text Classifiers
    Gallil Maimon, Lior Rokach
    AAML · 19 Jun 2022
  • Improving the Adversarial Robustness of NLP Models by Information Bottleneck
    Ce Zhang, Xiang Zhou, Yixin Wan, Xiaoqing Zheng, Kai-Wei Chang, Cho-Jui Hsieh
    11 Jun 2022
  • Kallima: A Clean-label Framework for Textual Backdoor Attacks
    Xiaoyi Chen, Yinpeng Dong, Zeyu Sun, Shengfang Zhai, Qingni Shen, Zhonghai Wu
    AAML · 03 Jun 2022
  • CodeAttack: Code-Based Adversarial Attacks for Pre-trained Programming Language Models
    Akshita Jha, Chandan K. Reddy
    SILM · ELM · AAML · 31 May 2022
  • Securing AI-based Healthcare Systems using Blockchain Technology: A State-of-the-Art Systematic Literature Review and Future Research Directions
    Rucha Shinde, S. Patil, K. Kotecha, V. Potdar, Ganeshsree Selvachandran, Ajith Abraham
    30 May 2022
  • Learning to Ignore Adversarial Attacks
    Yiming Zhang, Yan Zhou, Samuel Carton, Chenhao Tan
    23 May 2022
  • Phrase-level Textual Adversarial Attack with Label Preservation
    Yibin Lei, Yu Cao, Dianqi Li, Dinesh Manocha, Meng Fang, Mykola Pechenizkiy
    AAML · 22 May 2022
  • AEON: A Method for Automatic Evaluation of NLP Test Cases
    Jen-tse Huang, Jianping Zhang, Wenxuan Wang, Pinjia He, Yuxin Su, Michael R. Lyu
    13 May 2022
  • Sibylvariant Transformations for Robust Text Classification
    Fabrice Harel-Canada, Muhammad Ali Gulzar, Nanyun Peng, Miryung Kim
    AAML · VLM · 10 May 2022
  • A Simple Yet Efficient Method for Adversarial Word-Substitute Attack
    Tianle Li, Yi Yang
    AAML · 07 May 2022
  • Don't sweat the small stuff, classify the rest: Sample Shielding to protect text classifiers against adversarial attacks
    Jonathan Rusert, P. Srinivasan
    AAML · 03 May 2022
  • SemAttack: Natural Textual Attacks via Different Semantic Spaces
    Wei Ping, Chejian Xu, Xiangyu Liu, Yuk-Kit Cheng, Bo-wen Li
    SILM · AAML · 03 May 2022
  • BERTops: Studying BERT Representations under a Topological Lens
    Jatin Chauhan, Manohar Kaul
    02 May 2022
  • DDDM: a Brain-Inspired Framework for Robust Classification
    Xiyuan Chen, Xingyu Li, Yi Zhou, Tianming Yang
    AAML · DiffM · 01 May 2022
  • Improving robustness of language models from a geometry-aware perspective
    Bin Zhu, Zhaoquan Gu, Le Wang, Jinyin Chen, Qi Xuan
    AAML · 28 Apr 2022
  • Residue-Based Natural Language Adversarial Attack Detection
    Vyas Raina, Mark Gales
    AAML · 17 Apr 2022
  • Exploring the Universal Vulnerability of Prompt-based Learning Paradigm
    Lei Xu, Yangyi Chen, Ganqu Cui, Hongcheng Gao, Zhiyuan Liu
    SILM · VPVLM · 11 Apr 2022
  • Clues in Tweets: Twitter-Guided Discovery and Analysis of SMS Spam
    Siyuan Tang, Xianghang Mi, Ying Li, Xiaofeng Wang, Kai Chen
    04 Apr 2022
  • Can NMT Understand Me? Towards Perturbation-based Evaluation of NMT Models for Code Generation
    Pietro Liguori, Cristina Improta, S. D. Vivo, R. Natella, B. Cukic, Domenico Cotroneo
    AAML · 29 Mar 2022
  • Adversarial Training for Improving Model Robustness? Look at Both Prediction and Interpretation
    Hanjie Chen, Yangfeng Ji
    OOD · AAML · VLM · 23 Mar 2022
  • A Girl Has A Name, And It's ... Adversarial Authorship Attribution for Deobfuscation
    Wanyue Zhai, Jonathan Rusert, Zubair Shafiq, P. Srinivasan
    22 Mar 2022
  • On The Robustness of Offensive Language Classifiers
    Jonathan Rusert, Zubair Shafiq, P. Srinivasan
    AAML · 21 Mar 2022
  • On Robust Prefix-Tuning for Text Classification
    Zonghan Yang, Yang Liu
    VLM · 19 Mar 2022
  • Perturbations in the Wild: Leveraging Human-Written Text Perturbations for Realistic Adversarial Attack and Defense
    Thai Le, Jooyoung Lee, Kevin Yen, Yifan Hu, Dongwon Lee
    AAML · 19 Mar 2022
  • Distinguishing Non-natural from Natural Adversarial Samples for More Robust Pre-trained Language Model
    Jiayi Wang, Rongzhou Bao, Zhuosheng Zhang, Hai Zhao
    AAML · 19 Mar 2022
  • A Survey of Adversarial Defences and Robustness in NLP
    Shreyansh Goyal, Sumanth Doddapaneni, Mitesh M. Khapra, B. Ravindran
    AAML · 12 Mar 2022
  • iSEA: An Interactive Pipeline for Semantic Error Analysis of NLP Models
    Jun Yuan, Jesse Vig, Nazneen Rajani
    08 Mar 2022
  • MaMaDroid2.0 -- The Holes of Control Flow Graphs
    Harel Berger, Chen Hajaj, Enrico Mariconti, A. Dvir
    28 Feb 2022
  • Robust Textual Embedding against Word-level Adversarial Attacks
    Yichen Yang, Xiaosen Wang, Kun He
    AAML · 28 Feb 2022
  • Data-Driven Mitigation of Adversarial Text Perturbation
    Rasika Bhalerao, Mohammad Al-Rubaie, Anand Bhaskar, Igor L. Markov
    19 Feb 2022
  • RoPGen: Towards Robust Code Authorship Attribution via Automatic Coding Style Transformation
    Zhen Li, Guenevere Chen, Chen Chen, Yayi Zou, Shouhuai Xu
    AAML · AI4TS · 12 Feb 2022
  • Using Random Perturbations to Mitigate Adversarial Attacks on Sentiment Analysis Models
    Abigail Swenor, Jugal Kalita
    AAML · 11 Feb 2022
  • On The Empirical Effectiveness of Unrealistic Adversarial Hardening Against Realistic Adversarial Attacks
    Salijona Dyrmishi, Salah Ghamizi, Thibault Simonetto, Yves Le Traon, Maxime Cordy
    AAML · 07 Feb 2022
  • Identifying Adversarial Attacks on Text Classifiers
    Zhouhang Xie, Jonathan Brophy, Adam Noack, Wencong You, Kalyani Asthana, Carter Perkins, Sabrina Reis, Sameer Singh, Daniel Lowd
    AAML · 21 Jan 2022
  • TextHacker: Learning based Hybrid Local Search Algorithm for Text Hard-label Adversarial Attack
    Zhen Yu, Xiaosen Wang, Wanxiang Che, Kun He
    AAML · 20 Jan 2022
  • Cyberbullying Classifiers are Sensitive to Model-Agnostic Perturbations
    Chris Emmery, Ákos Kádár, Grzegorz Chrupała, Walter Daelemans
    17 Jan 2022