Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks

3 April 2018
Ali Shafahi, W. Ronny Huang, Mahyar Najibi, Octavian Suciu, Christoph Studer, Tudor Dumitras, Tom Goldstein
AAML

Papers citing "Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks"

50 of 259 citing papers shown.

Privacy and Robustness in Federated Learning: Attacks and Defenses
Lingjuan Lyu, Han Yu, Xingjun Ma, Chen Chen, Lichao Sun, Jun Zhao, Qiang Yang, Philip S. Yu
FedML · 07 Dec 2020

How Robust are Randomized Smoothing based Defenses to Data Poisoning?
Akshay Mehra, B. Kailkhura, Pin-Yu Chen, Jihun Hamm
OOD, AAML · 02 Dec 2020

Being Single Has Benefits. Instance Poisoning to Deceive Malware Classifiers
T. Shapira, David Berend, Ishai Rosenberg, Yang Liu, A. Shabtai, Yuval Elovici
AAML · 30 Oct 2020

Robust and Verifiable Information Embedding Attacks to Deep Neural Networks via Error-Correcting Codes
Jinyuan Jia, Binghui Wang, Neil Zhenqiang Gong
AAML · 26 Oct 2020

Concealed Data Poisoning Attacks on NLP Models
Eric Wallace, Tony Zhao, Shi Feng, Sameer Singh
SILM · 23 Oct 2020

Poison Attacks against Text Datasets with Conditional Adversarially Regularized Autoencoder
Alvin Chan, Yi Tay, Yew-Soon Ong, Aston Zhang
SILM · 06 Oct 2020

Block-wise Image Transformation with Secret Key for Adversarially Robust Defense
Maungmaung Aprilpyone, Hitoshi Kiya
02 Oct 2020

Bias Field Poses a Threat to DNN-based X-Ray Recognition
Binyu Tian, Qing Guo, Felix Juefei-Xu, W. L. Chan, Yupeng Cheng, Xiaohong Li, Xiaofei Xie, Shengchao Qin
AAML, AI4CE · 19 Sep 2020

Data Poisoning Attacks on Regression Learning and Corresponding Defenses
Nicolas Müller, Daniel Kowatsch, Konstantin Böttinger
AAML · 15 Sep 2020

Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching
Jonas Geiping, Liam H. Fowl, W. Ronny Huang, W. Czaja, Gavin Taylor, Michael Moeller, Tom Goldstein
AAML · 04 Sep 2020

Vulnerability-Aware Poisoning Mechanism for Online RL with Unknown Dynamics
Yanchao Sun, Da Huo, Furong Huang
AAML, OffRL, OnRL · 02 Sep 2020

On Attribution of Deepfakes
Baiwu Zhang, Jin Peng Zhou, Ilia Shumailov, Nicolas Papernot
WIGM · 20 Aug 2020

Practical Detection of Trojan Neural Networks: Data-Limited and Data-Free Cases
Ren Wang, Gaoyuan Zhang, Sijia Liu, Pin-Yu Chen, Jinjun Xiong, Meng Wang
AAML · 31 Jul 2020

Membership Leakage in Label-Only Exposures
Zheng Li, Yang Zhang
30 Jul 2020

Backdoor Attacks and Countermeasures on Deep Learning: A Comprehensive Review
Yansong Gao, Bao Gia Doan, Zhi-Li Zhang, Siqi Ma, Jiliang Zhang, Anmin Fu, Surya Nepal, Hyoungshick Kim
AAML · 21 Jul 2020

Transfer Learning without Knowing: Reprogramming Black-box Machine Learning Models with Scarce Data and Limited Resources
Yun-Yun Tsai, Pin-Yu Chen, Tsung-Yi Ho
AAML, MLAU, BDL · 17 Jul 2020

Data Poisoning Attacks Against Federated Learning Systems
Vale Tolpegin, Stacey Truex, Mehmet Emre Gursoy, Ling Liu
FedML · 16 Jul 2020

Reflection Backdoor: A Natural Backdoor Attack on Deep Neural Networks
Yunfei Liu, Xingjun Ma, James Bailey, Feng Lu
AAML · 05 Jul 2020

Backdoor Attacks Against Deep Learning Systems in the Physical World
Emily Wenger, Josephine Passananti, A. Bhagoji, Yuanshun Yao, Haitao Zheng, Ben Y. Zhao
AAML · 25 Jun 2020

Subpopulation Data Poisoning Attacks
Matthew Jagielski, Giorgio Severi, Niklas Pousette Harger, Alina Oprea
AAML, SILM · 24 Jun 2020

Just How Toxic is Data Poisoning? A Unified Benchmark for Backdoor and Data Poisoning Attacks
Avi Schwarzschild, Micah Goldblum, Arjun Gupta, John P. Dickerson, Tom Goldstein
AAML, TDI · 22 Jun 2020

Neural Anisotropy Directions
Guillermo Ortiz-Jiménez, Apostolos Modas, Seyed-Mohsen Moosavi-Dezfooli, P. Frossard
17 Jun 2020

An Embarrassingly Simple Approach for Trojan Attack in Deep Neural Networks
Ruixiang Tang, Mengnan Du, Ninghao Liu, Fan Yang, Xia Hu
AAML · 15 Jun 2020

Defending against GAN-based Deepfake Attacks via Transformation-aware Adversarial Faces
Chaofei Yang, Lei Ding, Yiran Chen, H. Li
AAML · 12 Jun 2020

Backdoors in Neural Models of Source Code
Goutham Ramakrishnan, Aws Albarghouthi
AAML, SILM · 11 Jun 2020

When Machine Unlearning Jeopardizes Privacy
Min Chen, Zhikun Zhang, Tianhao Wang, Michael Backes, Mathias Humbert, Yang Zhang
MIACV · 05 May 2020

Bullseye Polytope: A Scalable Clean-Label Poisoning Attack with Improved Transferability
H. Aghakhani, Dongyu Meng, Yu-Xiang Wang, Christopher Kruegel, Giovanni Vigna
AAML · 01 May 2020

Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness
Pu Zhao, Pin-Yu Chen, Payel Das, Karthikeyan N. Ramamurthy, Xue Lin
AAML · 30 Apr 2020

Weight Poisoning Attacks on Pre-trained Models
Keita Kurita, Paul Michel, Graham Neubig
AAML, SILM · 14 Apr 2020

PoisHygiene: Detecting and Mitigating Poisoning Attacks in Neural Networks
Junfeng Guo, Zelun Kong, Cong Liu
AAML · 24 Mar 2020

FTT-NAS: Discovering Fault-Tolerant Convolutional Neural Architecture
Xuefei Ning, Guangjun Ge, Wenshuo Li, Zhenhua Zhu, Yin Zheng, Xiaoming Chen, Zhen Gao, Yu Wang, Huazhong Yang
20 Mar 2020

Backdooring and Poisoning Neural Networks with Image-Scaling Attacks
Erwin Quiring, Konrad Rieck
AAML · 19 Mar 2020

Dynamic Backdoor Attacks Against Machine Learning Models
A. Salem, Rui Wen, Michael Backes, Shiqing Ma, Yang Zhang
AAML · 07 Mar 2020

Threats to Federated Learning: A Survey
Lingjuan Lyu, Han Yu, Qiang Yang
FedML · 04 Mar 2020

Explanation-Guided Backdoor Poisoning Attacks Against Malware Classifiers
Giorgio Severi, J. Meyer, Scott E. Coull, Alina Oprea
AAML, SILM · 02 Mar 2020

Regularisation Can Mitigate Poisoning Attacks: A Novel Analysis Based on Multiobjective Bilevel Optimisation
Javier Carnerero-Cano, Luis Muñoz-González, P. Spencer, Emil C. Lupu
AAML · 28 Feb 2020

On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping
Sanghyun Hong, Varun Chandrasekaran, Yigitcan Kaya, Tudor Dumitras, Nicolas Papernot
AAML · 26 Feb 2020

Defending against Backdoor Attack on Deep Neural Networks
Kaidi Xu, Sijia Liu, Pin-Yu Chen, Pu Zhao, Xue Lin
AAML · 26 Feb 2020

NNoculation: Catching BadNets in the Wild
A. Veldanda, Kang Liu, Benjamin Tan, Prashanth Krishnamurthy, Farshad Khorrami, Ramesh Karri, Brendan Dolan-Gavitt, S. Garg
AAML, OnRL · 19 Feb 2020

Preventing Clean Label Poisoning using Gaussian Mixture Loss
Muhammad Yaseen, Muneeb Aadil, Mariam Sargsyan
AAML · 10 Feb 2020

Radioactive data: tracing through training
Alexandre Sablayrolles, Matthijs Douze, Cordelia Schmid, Hervé Jégou
03 Feb 2020

Label-Consistent Backdoor Attacks
Alexander Turner, Dimitris Tsipras, A. Madry
AAML · 05 Dec 2019

REFIT: A Unified Watermark Removal Framework For Deep Learning Systems With Limited Data
Xinyun Chen, Wenxiao Wang, Chris Bender, Yiming Ding, R. Jia, Bo-wen Li, D. Song
AAML · 17 Nov 2019

The Threat of Adversarial Attacks on Machine Learning in Network Security -- A Survey
Olakunle Ibitoye, Rana Abou-Khamis, Mohamed el Shehaby, Ashraf Matrawy, M. O. Shafiq
AAML · 06 Nov 2019

Data Poisoning Attacks to Local Differential Privacy Protocols
Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong
AAML · 05 Nov 2019

Coverage Guided Testing for Recurrent Neural Networks
Wei Huang, Youcheng Sun, Xing-E. Zhao, James Sharp, Wenjie Ruan, Jie Meng, Xiaowei Huang
AAML · 05 Nov 2019

Hidden Trigger Backdoor Attacks
Aniruddha Saha, Akshayvarun Subramanya, Hamed Pirsiavash
30 Sep 2019

On Defending Against Label Flipping Attacks on Malware Detection Systems
R. Taheri, R. Javidan, Mohammad Shojafar, Zahra Pooranian, A. Miri, Mauro Conti
AAML · 13 Aug 2019

Universal Litmus Patterns: Revealing Backdoor Attacks in CNNs
Soheil Kolouri, Aniruddha Saha, Hamed Pirsiavash, Heiko Hoffmann
AAML · 26 Jun 2019

Poisoning Attacks with Generative Adversarial Nets
Luis Muñoz-González, Bjarne Pfitzner, Matteo Russo, Javier Carnerero-Cano, Emil C. Lupu
AAML · 18 Jun 2019