Poison Attacks against Text Datasets with Conditional Adversarially Regularized Autoencoder

6 October 2020
Alvin Chan
Yi Tay
Yew-Soon Ong
Aston Zhang
    SILM
arXiv:2010.02684

Papers citing "Poison Attacks against Text Datasets with Conditional Adversarially Regularized Autoencoder"

33 / 33 papers shown
The Ultimate Cookbook for Invisible Poison: Crafting Subtle Clean-Label Text Backdoors with Style Attributes
Wencong You
Daniel Lowd
24 Apr 2025
PoisonBench: Assessing Large Language Model Vulnerability to Data Poisoning
Tingchen Fu
Mrinank Sharma
Philip Torr
Shay B. Cohen
David M. Krueger
Fazl Barez
AAML
11 Oct 2024
Defending Code Language Models against Backdoor Attacks with Deceptive Cross-Entropy Loss
Guang Yang
Yu Zhou
Xiang Chen
Xiangyu Zhang
Terry Yue Zhuo
David Lo
Taolue Chen
AAML
12 Jul 2024
SpamDam: Towards Privacy-Preserving and Adversary-Resistant SMS Spam Detection
Yekai Li
Rufan Zhang
Wenxin Rong
Xianghang Mi
15 Apr 2024
Manipulating Predictions over Discrete Inputs in Machine Teaching
Xiaodong Wu
Yufei Han
H. Dahrouj
Jianbing Ni
Zhenwen Liang
Xiangliang Zhang
31 Jan 2024
Context Matters: Data-Efficient Augmentation of Large Language Models for Scientific Applications
Xiang Li
Haoran Tang
Siyu Chen
Ziwei Wang
Anurag Maravi
Marcin Abram
12 Dec 2023
Poisoned ChatGPT Finds Work for Idle Hands: Exploring Developers' Coding Practices with Insecure Suggestions from Poisoned AI Models
Sanghak Oh
Kiho Lee
Seonhye Park
Doowon Kim
Hyoungshick Kim
SILM
11 Dec 2023
Large Language Models Are Better Adversaries: Exploring Generative Clean-Label Backdoor Attacks Against Text Classifiers
Wencong You
Zayd Hammoudeh
Daniel Lowd
AAML
28 Oct 2023
Backdoor Attacks for In-Context Learning with Language Models
Nikhil Kandpal
Matthew Jagielski
Florian Tramèr
Nicholas Carlini
SILM
AAML
27 Jul 2023
Did You Train on My Dataset? Towards Public Dataset Protection with Clean-Label Backdoor Watermarking
Ruixiang Tang
Qizhang Feng
Ninghao Liu
Fan Yang
Xia Hu
20 Mar 2023
Attacks in Adversarial Machine Learning: A Systematic Survey from the Life-cycle Perspective
Baoyuan Wu
Zihao Zhu
Li Liu
Qingshan Liu
Zhaofeng He
Siwei Lyu
AAML
19 Feb 2023
Backdoor Learning for NLP: Recent Advances, Challenges, and Future Research Directions
Marwan Omar
SILM
AAML
14 Feb 2023
BDMMT: Backdoor Sample Detection for Language Models through Model Mutation Testing
Jiali Wei
Ming Fan
Wenjing Jiao
Wuxia Jin
Ting Liu
AAML
25 Jan 2023
TrojanPuzzle: Covertly Poisoning Code-Suggestion Models
H. Aghakhani
Wei Dai
Andre Manoel
Xavier Fernandes
Anant Kharkar
Christopher Kruegel
Giovanni Vigna
David Evans
B. Zorn
Robert Sim
SILM
06 Jan 2023
UPTON: Preventing Authorship Leakage from Public Text Release via Data Poisoning
Ziyao Wang
Thai Le
Dongwon Lee
17 Nov 2022
Rethinking the Reverse-engineering of Trojan Triggers
Zhenting Wang
Kai Mei
Hailun Ding
Juan Zhai
Shiqing Ma
27 Oct 2022
Detecting Backdoors in Deep Text Classifiers
Youyan Guo
Jun Wang
Trevor Cohn
SILM
11 Oct 2022
Attention Hijacking in Trojan Transformers
Weimin Lyu
Songzhu Zheng
Teng Ma
Haibin Ling
Chao Chen
09 Aug 2022
Kallima: A Clean-label Framework for Textual Backdoor Attacks
Xiaoyi Chen
Yinpeng Dong
Zeyu Sun
Shengfang Zhai
Qingni Shen
Zhonghai Wu
AAML
03 Jun 2022
BITE: Textual Backdoor Attacks with Iterative Trigger Injection
Jun Yan
Vansh Gupta
Xiang Ren
SILM
25 May 2022
A Study of the Attention Abnormality in Trojaned BERTs
Weimin Lyu
Songzhu Zheng
Teng Ma
Chao Chen
13 May 2022
A Survey on AI Sustainability: Emerging Trends on Learning Algorithms and Research Challenges
Zhenghua Chen
Min-man Wu
Alvin Chan
Xiaoli Li
Yew-Soon Ong
08 May 2022
Impact of Pretraining Term Frequencies on Few-Shot Reasoning
Yasaman Razeghi
Robert L Logan IV
Matt Gardner
Sameer Singh
ReLM
LRM
15 Feb 2022
Rank List Sensitivity of Recommender Systems to Interaction Perturbations
Sejoon Oh
Berk Ustun
Julian McAuley
Srijan Kumar
29 Jan 2022
Revisiting Methods for Finding Influential Examples
Karthikeyan K
Anders Søgaard
TDI
08 Nov 2021
CoProtector: Protect Open-Source Code against Unauthorized Training Usage with Data Poisoning
Zhensu Sun
Xiaoning Du
Fu Song
Mingze Ni
Li Li
25 Oct 2021
RAP: Robustness-Aware Perturbations for Defending against Backdoor Attacks on NLP Models
Wenkai Yang
Yankai Lin
Peng Li
Jie Zhou
Xu Sun
SILM
AAML
15 Oct 2021
Be Careful about Poisoned Word Embeddings: Exploring the Vulnerability of the Embedding Layers in NLP Models
Wenkai Yang
Lei Li
Zhiyuan Zhang
Xuancheng Ren
Xu Sun
Bin He
SILM
29 Mar 2021
Red Alarm for Pre-trained Models: Universal Vulnerability to Neuron-Level Backdoor Attacks
Zhengyan Zhang
Guangxuan Xiao
Yongwei Li
Tian Lv
Fanchao Qi
Zhiyuan Liu
Yasheng Wang
Xin Jiang
Maosong Sun
AAML
18 Jan 2021
Concealed Data Poisoning Attacks on NLP Models
Eric Wallace
Tony Zhao
Shi Feng
Sameer Singh
SILM
23 Oct 2020
BadNL: Backdoor Attacks against NLP Models with Semantic-preserving Improvements
Xiaoyi Chen
A. Salem
Dingfan Chen
Michael Backes
Shiqing Ma
Qingni Shen
Zhonghai Wu
Yang Zhang
SILM
01 Jun 2020
Generating Natural Language Adversarial Examples
M. Alzantot
Yash Sharma
Ahmed Elgohary
Bo-Jhang Ho
Mani B. Srivastava
Kai-Wei Chang
AAML
21 Apr 2018
Adversarial Example Generation with Syntactically Controlled Paraphrase Networks
Mohit Iyyer
John Wieting
Kevin Gimpel
Luke Zettlemoyer
AAML
GAN
17 Apr 2018