
Transferable Clean-Label Poisoning Attacks on Deep Neural Nets (arXiv:1905.05897)

15 May 2019
Chen Zhu, Wenjie Huang, Ali Shafahi, Hengduo Li, Gavin Taylor, Christoph Studer, Tom Goldstein

Papers citing "Transferable Clean-Label Poisoning Attacks on Deep Neural Nets"

50 of 68 citing papers shown
PureEBM: Universal Poison Purification via Mid-Run Dynamics of Energy-Based Models
Omead Brandon Pooladzandi, Jeffrey Q. Jiang, Sunay Bhat, Gregory Pottie
AAML · 28 May 2024

Partial train and isolate, mitigate backdoor attack
Yong Li, Han Gao
AAML · 26 May 2024

Trustworthy Distributed AI Systems: Robustness, Privacy, and Governance
Wenqi Wei, Ling Liu
02 Feb 2024

End-to-End Anti-Backdoor Learning on Images and Time Series
Yujing Jiang, Xingjun Ma, S. Erfani, Yige Li, James Bailey
06 Jan 2024

PACOL: Poisoning Attacks Against Continual Learners
Huayu Li, G. Ditzler
AAML · 18 Nov 2023

Transferable Availability Poisoning Attacks
Yiyong Liu, Michael Backes, Xiao Zhang
AAML · 08 Oct 2023

On the Exploitability of Instruction Tuning
Manli Shu, Jiong Wang, Chen Zhu, Jonas Geiping, Chaowei Xiao, Tom Goldstein
SILM · 28 Jun 2023

Mole Recruitment: Poisoning of Image Classifiers via Selective Batch Sampling
Ethan Wisdom, Tejas Gokhale, Chaowei Xiao, Yezhou Yang
30 Mar 2023

TrojanPuzzle: Covertly Poisoning Code-Suggestion Models
H. Aghakhani, Wei Dai, Andre Manoel, Xavier Fernandes, Anant Kharkar, Christopher Kruegel, Giovanni Vigna, David Evans, B. Zorn, Robert Sim
SILM · 06 Jan 2023

Backdoor Attacks Against Dataset Distillation
Yugeng Liu, Zheng Li, Michael Backes, Yun Shen, Yang Zhang
DD · 03 Jan 2023

Unlearnable Clusters: Towards Label-agnostic Unlearnable Examples
Jiaming Zhang, Xingjun Ma, Qiaomin Yi, Jitao Sang, Yugang Jiang, Yaowei Wang, Changsheng Xu
31 Dec 2022

Rethinking Backdoor Data Poisoning Attacks in the Context of Semi-Supervised Learning
Marissa Connor, Vincent Emanuele
SILM, AAML · 05 Dec 2022

Emerging Threats in Deep Learning-Based Autonomous Driving: A Comprehensive Survey
Huiyun Cao, Wenlong Zou, Yinkun Wang, Ting Song, Mengjun Liu
AAML · 19 Oct 2022

Data Poisoning Attacks Against Multimodal Encoders
Ziqing Yang, Xinlei He, Zheng Li, Michael Backes, Mathias Humbert, Pascal Berrang, Yang Zhang
AAML · 30 Sep 2022

Friendly Noise against Adversarial Noise: A Powerful Defense against Data Poisoning Attacks
Tianwei Liu, Yu Yang, Baharan Mirzasoleiman
AAML · 14 Aug 2022

Backdoor Attacks on Crowd Counting
Yuhua Sun, Tailai Zhang, Xingjun Ma, Pan Zhou, Jian Lou, Zichuan Xu, Xing Di, Yu Cheng, Lichao
AAML · 12 Jul 2022

Bugs in Machine Learning-based Systems: A Faultload Benchmark
Mohammad Mehdi Morovati, Amin Nikanjam, Foutse Khomh, Zhen Ming, Z. Jiang
24 Jun 2022

Natural Backdoor Datasets
Emily Wenger, Roma Bhattacharjee, A. Bhagoji, Josephine Passananti, Emilio Andere, Haitao Zheng, Ben Y. Zhao
AAML · 21 Jun 2022

Adversarial attacks and defenses in Speaker Recognition Systems: A survey
Jiahe Lan, Rui Zhang, Zheng Yan, Jie Wang, Yu Chen, Ronghui Hou
AAML · 27 May 2022

Indiscriminate Data Poisoning Attacks on Neural Networks
Yiwei Lu, Gautam Kamath, Yaoliang Yu
AAML · 19 Apr 2022

Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets
Florian Tramèr, Reza Shokri, Ayrton San Joaquin, Hoang Minh Le, Matthew Jagielski, Sanghyun Hong, Nicholas Carlini
MIACV · 31 Mar 2022

WaveFuzz: A Clean-Label Poisoning Attack to Protect Your Voice
Yunjie Ge, Qianqian Wang, Jingfeng Zhang, Juntao Zhou, Yunzhu Zhang, Chao Shen
AAML · 25 Mar 2022

Low-Rank Softmax Can Have Unargmaxable Classes in Theory but Rarely in Practice
Andreas Grivas, Nikolay Bogoychev, Adam Lopez
12 Mar 2022

Low-Loss Subspace Compression for Clean Gains against Multi-Agent Backdoor Attacks
Siddhartha Datta, N. Shadbolt
AAML · 07 Mar 2022

Holistic Adversarial Robustness of Deep Learning Models
Pin-Yu Chen, Sijia Liu
AAML · 15 Feb 2022

Jigsaw Puzzle: Selective Backdoor Attack to Subvert Malware Classifiers
Limin Yang, Zhi Chen, Jacopo Cortellazzi, Feargus Pendlebury, Kevin Tu, Fabio Pierazzi, Lorenzo Cavallaro, Gang Wang
AAML · 11 Feb 2022

Redactor: A Data-centric and Individualized Defense Against Inference Attacks
Geon Heo, Steven Euijong Whang
AAML · 07 Feb 2022

Improved Certified Defenses against Data Poisoning with (Deterministic) Finite Aggregation
Wenxiao Wang, Alexander Levine, S. Feizi
AAML · 05 Feb 2022

Can Adversarial Training Be Manipulated By Non-Robust Features?
Lue Tao, Lei Feng, Hongxin Wei, Jinfeng Yi, Sheng-Jun Huang, Songcan Chen
AAML · 31 Jan 2022

Backdoors Stuck At The Frontdoor: Multi-Agent Backdoor Attacks That Backfire
Siddhartha Datta, N. Shadbolt
AAML · 28 Jan 2022

Identifying a Training-Set Attack's Target Using Renormalized Influence Estimation
Zayd Hammoudeh, Daniel Lowd
TDI · 25 Jan 2022

Hiding Behind Backdoors: Self-Obfuscation Against Generative Models
Siddhartha Datta, N. Shadbolt
SILM, AAML, AI4CE · 24 Jan 2022

Robust and Privacy-Preserving Collaborative Learning: A Comprehensive Survey
Shangwei Guo, Xu Zhang, Feiyu Yang, Tianwei Zhang, Yan Gan, Tao Xiang, Yang Liu
FedML · 19 Dec 2021

Data Collection and Quality Challenges in Deep Learning: A Data-Centric AI Perspective
Steven Euijong Whang, Yuji Roh, Hwanjun Song, Jae-Gil Lee
13 Dec 2021

SoK: Anti-Facial Recognition Technology
Emily Wenger, Shawn Shan, Haitao Zheng, Ben Y. Zhao
PICV · 08 Dec 2021

Backdoor Pre-trained Models Can Transfer to All
Lujia Shen, S. Ji, Xuhong Zhang, Jinfeng Li, Jing Chen, Jie Shi, Chengfang Fang, Jianwei Yin, Ting Wang
AAML, SILM · 30 Oct 2021

Anti-Backdoor Learning: Training Clean Models on Poisoned Data
Yige Li, X. Lyu, Nodens Koren, Lingjuan Lyu, Bo Li, Xingjun Ma
OnRL · 22 Oct 2021

Poison Forensics: Traceback of Data Poisoning Attacks in Neural Networks
Shawn Shan, A. Bhagoji, Haitao Zheng, Ben Y. Zhao
AAML · 13 Oct 2021

SoK: Machine Learning Governance
Varun Chandrasekaran, Hengrui Jia, Anvith Thudi, Adelin Travers, Mohammad Yaghini, Nicolas Papernot
20 Sep 2021

Hard to Forget: Poisoning Attacks on Certified Machine Unlearning
Neil G. Marchant, Benjamin I. P. Rubinstein, Scott Alfeld
MU, AAML · 17 Sep 2021

Understanding the Limits of Unsupervised Domain Adaptation via Data Poisoning
Akshay Mehra, B. Kailkhura, Pin-Yu Chen, Jihun Hamm
AAML · 08 Jul 2021

Accumulative Poisoning Attacks on Real-time Data
Tianyu Pang, Xiao Yang, Yinpeng Dong, Hang Su, Jun Zhu
18 Jun 2021

Sleeper Agent: Scalable Hidden Trigger Backdoors for Neural Networks Trained from Scratch
Hossein Souri, Liam H. Fowl, Ramalingam Chellappa, Micah Goldblum, Tom Goldstein
SILM · 16 Jun 2021

De-Pois: An Attack-Agnostic Defense against Data Poisoning Attacks
Jian Chen, Xuxin Zhang, Rui Zhang, Chen Wang, Ling Liu
AAML · 08 May 2021

Poisoning the Unlabeled Dataset of Semi-Supervised Learning
Nicholas Carlini
AAML · 04 May 2021

A Master Key Backdoor for Universal Impersonation Attack against DNN-based Face Verification
Wei Guo, B. Tondi, Mauro Barni
AAML · 01 May 2021

Turning Federated Learning Systems Into Covert Channels
Gabriele Costa, Fabio Pinelli, S. Soderi, Gabriele Tolomei
FedML · 21 Apr 2021

The Hammer and the Nut: Is Bilevel Optimization Really Needed to Poison Linear Classifiers?
Antonio Emanuele Cinà, Sebastiano Vascon, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo
AAML · 23 Mar 2021

EX-RAY: Distinguishing Injected Backdoor from Natural Features in Neural Networks by Examining Differential Feature Symmetry
Yingqi Liu, Guangyu Shen, Guanhong Tao, Zhenting Wang, Shiqing Ma, Xinming Zhang
AAML · 16 Mar 2021

DP-InstaHide: Provably Defusing Poisoning and Backdoor Attacks with Differentially Private Data Augmentations
Eitan Borgnia, Jonas Geiping, Valeriia Cherepanova, Liam H. Fowl, Arjun Gupta, Amin Ghiasi, Furong Huang, Micah Goldblum, Tom Goldstein
02 Mar 2021