ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

TextBugger: Generating Adversarial Text Against Real-world Applications
arXiv:1812.05271 · 13 December 2018
Jinfeng Li, S. Ji, Tianyu Du, Bo Li, Ting Wang
Tags: SILM, AAML

Papers citing "TextBugger: Generating Adversarial Text Against Real-world Applications"

50 / 382 papers shown
Unpacking Robustness in Inflectional Languages: Adversarial Evaluation and Mechanistic Insights
  Paweł Walkowiak, Marek Klonowski, Marcin Oleksy, Arkadiusz Janz · AAML · 08 May 2025

CAMOUFLAGE: Exploiting Misinformation Detection Systems Through LLM-driven Adversarial Claim Transformation
  Mazal Bethany, Nishant Vishwamitra, Cho-Yu Chiang, Peyman Najafirad · AAML · 03 May 2025

aiXamine: Simplified LLM Safety and Security
  Fatih Deniz, Dorde Popovic, Yazan Boshmaf, Euisuh Jeong, M. Ahmad, Sanjay Chawla, Issa M. Khalil · ELM · 21 Apr 2025

Bypassing Prompt Injection and Jailbreak Detection in LLM Guardrails
  William Hackett, Lewis Birch, Stefan Trawicki, N. Suri, Peter Garraghan · 15 Apr 2025

Exploring Gradient-Guided Masked Language Model to Detect Textual Adversarial Attacks
  Xiaomei Zhang, Zhaoxi Zhang, Yanjun Zhang, Xufei Zheng, L. Zhang, Shengshan Hu, Shirui Pan · AAML · 08 Apr 2025

Enhancing LLM Robustness to Perturbed Instructions: An Empirical Study
  Aryan Agrawal, Lisa Alazraki, Shahin Honarvar, Marek Rei · 03 Apr 2025

Pay More Attention to the Robustness of Prompt for Instruction Data Mining
  Qiang Wang, Dawei Feng, Xu Zhang, Ao Shen, Yang Xu, Bo Ding, H. Wang · AAML · 31 Mar 2025

Playing the Fool: Jailbreaking LLMs and Multimodal LLMs with Out-of-Distribution Strategy
  Joonhyun Jeong, Seyun Bae, Yeonsung Jung, Jaeryong Hwang, Eunho Yang · AAML · 26 Mar 2025

FLEX: A Benchmark for Evaluating Robustness of Fairness in Large Language Models
  Dahyun Jung, Seungyoon Lee, Hyeonseok Moon, Chanjun Park, Heuiseok Lim · AAML, ALM, ELM · 25 Mar 2025

Investigating Neurons and Heads in Transformer-based LLMs for Typographical Errors
  Kohei Tsuji, Tatsuya Hiraoka, Yuchang Cheng, Eiji Aramaki, Tomoya Iwakura · 27 Feb 2025

SEA: Shareable and Explainable Attribution for Query-based Black-box Attacks
  Yue Gao, Ilia Shumailov, Kassem Fawaz · AAML · 21 Feb 2025

Confidence Elicitation: A New Attack Vector for Large Language Models
  Brian Formento, Chuan-Sheng Foo, See-Kiong Ng · AAML · 07 Feb 2025

Tougher Text, Smarter Models: Raising the Bar for Adversarial Defence Benchmarks
  Yang Wang, Chenghua Lin · ELM · 05 Jan 2025

Evaluating Concurrent Robustness of Language Models Across Diverse Challenge Sets
  Vatsal Gupta, Pranshu Pandya, Tushar Kataria, Vivek Gupta, Dan Roth · AAML · 03 Jan 2025

Adversarial Robustness through Dynamic Ensemble Learning
  Hetvi Waghela, Jaydip Sen, Sneha Rakshit · AAML · 20 Dec 2024

Human-Readable Adversarial Prompts: An Investigation into LLM Vulnerabilities Using Situational Context
  Nilanjana Das, Edward Raff, Manas Gaur · AAML · 20 Dec 2024

Are Language Models Agnostic to Linguistically Grounded Perturbations? A Case Study of Indic Languages
  Poulami Ghosh, Raj Dabre, Pushpak Bhattacharyya · AAML · 14 Dec 2024

BinarySelect to Improve Accessibility of Black-Box Attack Research
  Shatarupa Ghosh, Jonathan Rusert · AAML · 13 Dec 2024

WaterPark: A Robustness Assessment of Language Model Watermarking
  Jiacheng Liang, Zian Wang, Lauren Hong, Shouling Ji, Ting Wang · AAML · 20 Nov 2024

Transferable Adversarial Attacks against ASR
  Xiaoxue Gao, Zexin Li, Yiming Chen, Cong Liu, Yiming Li · AAML · 14 Nov 2024

Stochastic Monkeys at Play: Random Augmentations Cheaply Break LLM Safety Alignment
  Jason Vega, Junsheng Huang, Gaokai Zhang, Hangoo Kang, Minjia Zhang, Gagandeep Singh · 05 Nov 2024

ProTransformer: Robustify Transformers via Plug-and-Play Paradigm
  Zhichao Hou, Weizhi Gao, Yuchen Shen, Feiyi Wang, Xiaorui Liu · VLM · 30 Oct 2024

TaeBench: Improving Quality of Toxic Adversarial Examples
  Xuan Zhu, Dmitriy Bespalov, Liwen You, Ninad Kulkarni, Yanjun Qi · AAML · 08 Oct 2024

SteerDiff: Steering towards Safe Text-to-Image Diffusion Models
  Hongxiang Zhang, Yifeng He, Hao Chen · 03 Oct 2024

Scrambled text: training Language Models to correct OCR errors using synthetic data
  Jonathan Bourne · SyDa · 29 Sep 2024

Detecting Dataset Abuse in Fine-Tuning Stable Diffusion Models for Text-to-Image Synthesis
  Songrui Wang, Yubo Zhu, Wei Tong, Sheng Zhong · WIGM · 27 Sep 2024

Faithfulness and the Notion of Adversarial Sensitivity in NLP Explanations
  Supriya Manna, Niladri Sett · AAML · 26 Sep 2024

Reducing and Exploiting Data Augmentation Noise through Meta Reweighting Contrastive Learning for Text Classification
  Guanyi Mou, Yichuan Li, Kyumin Lee · 26 Sep 2024

SWE2: SubWord Enriched and Significant Word Emphasized Framework for Hate Speech Detection
  Guanyi Mou, Pengyi Ye, Kyumin Lee · 25 Sep 2024

An Effective, Robust and Fairness-aware Hate Speech Detection Framework
  Guanyi Mou, Kyumin Lee · 25 Sep 2024

Jailbreaking Text-to-Image Models with LLM-Based Agents
  Yingkai Dong, Zheng Li, Xiangtao Meng, Ning Yu, Shanqing Guo · LLMAG · 01 Aug 2024

Breaking Agents: Compromising Autonomous LLM Agents Through Malfunction Amplification
  Boyang Zhang, Yicong Tan, Yun Shen, Ahmed Salem, Michael Backes, Savvas Zannettou, Yang Zhang · LLMAG, AAML · 30 Jul 2024

Enhancing Adversarial Text Attacks on BERT Models with Projected Gradient Descent
  Hetvi Waghela, Jaydip Sen, Sneha Rakshit · AAML, SILM · 29 Jul 2024

On Behalf of the Stakeholders: Trends in NLP Model Interpretability in the Era of LLMs
  Nitay Calderon, Roi Reichart · 27 Jul 2024

Assessing Brittleness of Image-Text Retrieval Benchmarks from Vision-Language Models Perspective
  Mariya Hendriksen, Shuo Zhang, R. Reinanda, Mohamed Yahya, Edgar Meij, Maarten de Rijke · 21 Jul 2024

Human-Interpretable Adversarial Prompt Attack on Large Language Models with Situational Context
  Nilanjana Das, Edward Raff, Manas Gaur · AAML · 19 Jul 2024

Counterfactual Explainable Incremental Prompt Attack Analysis on Large Language Models
  Dong Shu, Mingyu Jin, Tianle Chen, Chong Zhang, Yongfeng Zhang · ELM, SILM · 12 Jul 2024

IDT: Dual-Task Adversarial Attacks for Privacy Protection
  Pedro Faustini, Shakila Mahjabin Tonni, Annabelle McIver, Qiongkai Xu, Mark Dras · SILM, AAML · 28 Jun 2024

Data-Driven Lipschitz Continuity: A Cost-Effective Approach to Improve Adversarial Robustness
  Erh-Chung Chen, Pin-Yu Chen, I-Hsin Chung, Che-Rung Lee · 28 Jun 2024

DiffuseDef: Improved Robustness to Adversarial Attacks via Iterative Denoising
  Zhenhao Li, Huichi Zhou, Marek Rei, Lucia Specia · DiffM · 28 Jun 2024

Spiking Convolutional Neural Networks for Text Classification
  Changze Lv, Jianhan Xu, Xiaoqing Zheng · 27 Jun 2024

Unmasking Database Vulnerabilities: Zero-Knowledge Schema Inference Attacks in Text-to-SQL Systems
  Đorđe Klisura, Anthony Rios · AAML · 20 Jun 2024

MaskPure: Improving Defense Against Text Adversaries with Stochastic Purification
  Harrison Gietz, Jugal Kalita · AAML · 18 Jun 2024

Saliency Attention and Semantic Similarity-Driven Adversarial Perturbation
  Hetvi Waghela, Jaydip Sen, Sneha Rakshit · AAML · 18 Jun 2024

E-Bench: Towards Evaluating the Ease-of-Use of Large Language Models
  Zhenyu Zhang, Bingguang Hao, Jinpeng Li, Zekai Zhang, Dongyan Zhao · 16 Jun 2024

Adversarial Evasion Attack Efficiency against Large Language Models
  João Vitorino, Eva Maia, Isabel Praça · AAML · 12 Jun 2024

Unveiling the Lexical Sensitivity of LLMs: Combinatorial Optimization for Prompt Enhancement
  Pengwei Zhan, Zhen Xu, Qian Tan, Jie Song, Ru Xie · 31 May 2024

Deep Learning Approaches for Detecting Adversarial Cyberbullying and Hate Speech in Social Networks
  S. Azumah, Nelly Elsayed, Zag ElSayed, Murat Ozer, Amanda La Guardia · 30 May 2024

Phantom: General Trigger Attacks on Retrieval Augmented Language Generation
  Harsh Chaudhari, Giorgio Severi, John Abascal, Matthew Jagielski, Christopher A. Choquette-Choo, Milad Nasr, Cristina Nita-Rotaru, Alina Oprea · SILM, AAML · 30 May 2024

Evaluating the Adversarial Robustness of Retrieval-Based In-Context Learning for Large Language Models
  Simon Chi Lok Yu, Jie He, Pasquale Minervini, Jeff Z. Pan · 24 May 2024