Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks
arXiv:1804.00792 · 3 April 2018
Ali Shafahi, W. Ronny Huang, Mahyar Najibi, Octavian Suciu, Christoph Studer, Tudor Dumitras, Tom Goldstein
AAML
Papers citing "Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks" (9 of 259 shown)
Lower Bounds for Adversarially Robust PAC Learning
Dimitrios I. Diochnos, Saeed Mahloujifar, Mohammad Mahmoody
AAML · 13 Jun 2019
Terminal Brain Damage: Exposing the Graceless Degradation in Deep Neural Networks Under Hardware Fault Attacks
Sanghyun Hong, Pietro Frigo, Yigitcan Kaya, Cristiano Giuffrida, Tudor Dumitras
AAML · 03 Jun 2019
Bypassing Backdoor Detection Algorithms in Deep Learning
T. Tan, Reza Shokri
FedML, AAML · 31 May 2019
Privacy Risks of Securing Machine Learning Models against Adversarial Examples
Liwei Song, Reza Shokri, Prateek Mittal
SILM, MIACV, AAML · 24 May 2019
Learning to Confuse: Generating Training Time Adversarial Data with Auto-Encoder
Ji Feng, Qi-Zhi Cai, Zhi-Hua Zhou
AAML · 22 May 2019
A Target-Agnostic Attack on Deep Models: Exploiting Security Vulnerabilities of Transfer Learning
Shahbaz Rezaei, Xin Liu
SILM, AAML · 08 Apr 2019
On the Security Relevance of Weights in Deep Learning
Kathrin Grosse, T. A. Trost, Marius Mosbach, Michael Backes, Dietrich Klakow
AAML · 08 Feb 2019
SentiNet: Detecting Localized Universal Attacks Against Deep Learning Systems
Edward Chou, Florian Tramèr, Giancarlo Pellegrino
AAML · 02 Dec 2018
Stronger Data Poisoning Attacks Break Data Sanitization Defenses
Pang Wei Koh, Jacob Steinhardt, Percy Liang
02 Nov 2018