Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks
arXiv 1804.00792 · 3 April 2018
Ali Shafahi, W. Ronny Huang, Mahyar Najibi, Octavian Suciu, Christoph Studer, Tudor Dumitras, Tom Goldstein
AAML
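For context on what the citing works below build on: the paper crafts clean-label poisons by feature collision, optimizing a poison image to sit near a chosen target instance in the network's feature space while staying visually close to a correctly labeled base image, so the poison passes human inspection. A minimal PyTorch-style sketch of that objective follows; the feature extractor f and the target and base tensors are placeholder assumptions, and the paper optimizes the same objective with forward-backward splitting rather than Adam.

import torch

def craft_poison(f, target, base, beta=0.1, lr=0.01, steps=1000):
    # Feature collision: min_p ||f(p) - f(t)||^2 + beta * ||p - b||^2
    poison = base.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([poison], lr=lr)
    with torch.no_grad():
        target_features = f(target)  # fixed target representation
    for _ in range(steps):
        optimizer.zero_grad()
        collision = (f(poison) - target_features).pow(2).sum()  # match the target in feature space
        proximity = (poison - base).pow(2).sum()                # stay visually close to the base image
        (collision + beta * proximity).backward()
        optimizer.step()
    return poison.detach()

Injected into the training set under the base image's correct label, such poisons cause a model fine-tuned on them to misclassify the target.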
Papers citing "Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks" (50 of 259 shown)
Enhanced Security and Privacy via Fragmented Federated Learning
N. Jebreel, J. Domingo-Ferrer, Alberto Blanco-Justicia, David Sánchez · FedML · 13 Jul 2022

Backdoor Attacks on Crowd Counting
Yuhua Sun, Tailai Zhang, Xingjun Ma, Pan Zhou, Jian Lou, Zichuan Xu, Xing Di, Yu Cheng, Lichao Sun · AAML · 12 Jul 2022

How many perturbations break this model? Evaluating robustness beyond adversarial accuracy
R. Olivier, Bhiksha Raj · AAML · 08 Jul 2022

Backdoor Attacks on Vision Transformers
Akshayvarun Subramanya, Aniruddha Saha, Soroush Abbasi Koohpayegani, Ajinkya Tejankar, Hamed Pirsiavash · ViT, AAML · 16 Jun 2022

Adversarial Patch Attacks and Defences in Vision-Based Tasks: A Survey
Abhijith Sharma, Yijun Bian, Phil Munz, Apurva Narayan · VLM, AAML · 16 Jun 2022

Architectural Backdoors in Neural Networks
Mikel Bober-Irizar, Ilia Shumailov, Yiren Zhao, Robert D. Mullins, Nicolas Papernot · AAML · 15 Jun 2022

Edge Security: Challenges and Issues
Xin Jin, Charalampos Katsis, Fan Sang, Jiahao Sun, A. Kundu, Ramana Rao Kompella · 14 Jun 2022

Certifying Data-Bias Robustness in Linear Regression
Anna P. Meyer, Aws Albarghouthi, Loris D'Antoni · 07 Jun 2022

CASSOCK: Viable Backdoor Attacks against DNN in The Wall of Source-Specific Backdoor Defences
Shang Wang, Yansong Gao, Anmin Fu, Zhi-Li Zhang, Yuqing Zhang, W. Susilo, Dongxi Liu · AAML · 31 May 2022

Integrity Authentication in Tree Models
Weijie Zhao, Yingjie Lao, Ping Li · 30 May 2022

Contributor-Aware Defenses Against Adversarial Backdoor Attacks
Glenn Dawson, Muhammad Umer, R. Polikar · AAML · 28 May 2022

Adversarial attacks and defenses in Speaker Recognition Systems: A survey
Jiahe Lan, Rui Zhang, Zheng Yan, Jie Wang, Yu Chen, Ronghui Hou · AAML · 27 May 2022

On Collective Robustness of Bagging Against Data Poisoning
Ruoxin Chen, Zenan Li, Jie Li, Chentao Wu, Junchi Yan · 26 May 2022

WeDef: Weakly Supervised Backdoor Defense for Text Classification
Lesheng Jin, Zihan Wang, Jingbo Shang · AAML · 24 May 2022

Verifying Neural Networks Against Backdoor Attacks
Long H. Pham, Jun Sun · AAML · 14 May 2022

PoisonedEncoder: Poisoning the Unlabeled Pre-training Data in Contrastive Learning
Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong · 13 May 2022

Subverting Fair Image Search with Generative Adversarial Perturbations
A. Ghosh, Matthew Jagielski, Chris L. Wilson · 05 May 2022

Detecting Backdoor Poisoning Attacks on Deep Neural Networks by Heatmap Clustering
Lukas Schulth, Christian Berghoff, Matthias Neu · AAML · 27 Apr 2022

Backdooring Explainable Machine Learning
Maximilian Noppel, Lukas Peter, Christian Wressnegger · AAML · 20 Apr 2022

Indiscriminate Data Poisoning Attacks on Neural Networks
Yiwei Lu, Gautam Kamath, Yaoliang Yu · AAML · 19 Apr 2022

Special Session: Towards an Agile Design Methodology for Efficient, Reliable, and Secure ML Systems
Shail Dave, Alberto Marchisio, Muhammad Abdullah Hanif, Amira Guesmi, Aviral Shrivastava, Ihsen Alouani, Mohamed Bennai · 18 Apr 2022

Machine Learning Security against Data Poisoning: Are We There Yet?
Antonio Emanuele Cinà, Kathrin Grosse, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo · AAML · 12 Apr 2022

Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets
Florian Tramèr, Reza Shokri, Ayrton San Joaquin, Hoang Minh Le, Matthew Jagielski, Sanghyun Hong, Nicholas Carlini · MIACV · 31 Mar 2022

Semi-Targeted Model Poisoning Attack on Federated Learning via Backward Error Analysis
Yuwei Sun, H. Ochiai, Jun Sakuma · AAML, FedML · 22 Mar 2022

Concept-based Adversarial Attacks: Tricking Humans and Classifiers Alike
Johannes Schneider, Giovanni Apruzzese · AAML · 18 Mar 2022

Sniper Backdoor: Single Client Targeted Backdoor Attack in Federated Learning
Gorka Abad, Servio Paguada, Oguzhan Ersoy, S. Picek, Víctor Julio Ramírez-Durán, A. Urbieta · FedML · 16 Mar 2022

Energy-Latency Attacks via Sponge Poisoning
Antonio Emanuele Cinà, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo · SILM · 14 Mar 2022

Low-Rank Softmax Can Have Unargmaxable Classes in Theory but Rarely in Practice
Andreas Grivas, Nikolay Bogoychev, Adam Lopez · 12 Mar 2022

Robustly-reliable learners under poisoning attacks
Maria-Florina Balcan, Avrim Blum, Steve Hanneke, Dravyansh Sharma · AAML, OOD · 08 Mar 2022

Low-Loss Subspace Compression for Clean Gains against Multi-Agent Backdoor Attacks
Siddhartha Datta, N. Shadbolt · AAML · 07 Mar 2022

Physical Backdoor Attacks to Lane Detection Systems in Autonomous Driving
Xingshuo Han, Guowen Xu, Yuanpu Zhou, Xuehuan Yang, Jiwei Li, Tianwei Zhang · AAML · 02 Mar 2022

Poisoning Attacks and Defenses on Artificial Intelligence: A Survey
M. A. Ramírez, Song-Kyoo Kim, H. A. Hamadi, Ernesto Damiani, Young-Ji Byon, Tae-Yeon Kim, C. Cho, C. Yeun · AAML · 21 Feb 2022

Holistic Adversarial Robustness of Deep Learning Models
Pin-Yu Chen, Sijia Liu · AAML · 15 Feb 2022

Jigsaw Puzzle: Selective Backdoor Attack to Subvert Malware Classifiers
Limin Yang, Zhi Chen, Jacopo Cortellazzi, Feargus Pendlebury, Kevin Tu, Fabio Pierazzi, Lorenzo Cavallaro, Gang Wang · AAML · 11 Feb 2022

Redactor: A Data-centric and Individualized Defense Against Inference Attacks
Geon Heo, Steven Euijong Whang · AAML · 07 Feb 2022

Improved Certified Defenses against Data Poisoning with (Deterministic) Finite Aggregation
Wenxiao Wang, Alexander Levine, S. Feizi · AAML · 05 Feb 2022

A Survey on Poisoning Attacks Against Supervised Machine Learning
Wenjun Qiu · AAML · 05 Feb 2022

Can Adversarial Training Be Manipulated By Non-Robust Features?
Lue Tao, Lei Feng, Hongxin Wei, Jinfeng Yi, Sheng-Jun Huang, Songcan Chen · AAML · 31 Jan 2022

Backdoors Stuck At The Frontdoor: Multi-Agent Backdoor Attacks That Backfire
Siddhartha Datta, N. Shadbolt · AAML · 28 Jan 2022

Identifying a Training-Set Attack's Target Using Renormalized Influence Estimation
Zayd Hammoudeh, Daniel Lowd · TDI · 25 Jan 2022

Hiding Behind Backdoors: Self-Obfuscation Against Generative Models
Siddhartha Datta, N. Shadbolt · SILM, AAML, AI4CE · 24 Jan 2022

Dangerous Cloaking: Natural Trigger based Backdoor Attacks on Object Detectors in the Physical World
Hua Ma, Yinshan Li, Yansong Gao, A. Abuadbba, Zhi-Li Zhang, Anmin Fu, Hyoungshick Kim, S. Al-Sarawi, Surya Nepal, Derek Abbott · 21 Jan 2022

LoMar: A Local Defense Against Poisoning Attack on Federated Learning
Xingyu Li, Zhe Qu, Shangqing Zhao, Bo Tang, Zhuo Lu, Yao-Hong Liu · AAML · 08 Jan 2022

Adversarial Attacks against Windows PE Malware Detection: A Survey of the State-of-the-Art
Xiang Ling, Lingfei Wu, Jiangyu Zhang, Zhenqing Qu, Wei Deng, ..., Chunming Wu, S. Ji, Tianyue Luo, Jingzheng Wu, Yanjun Wu · AAML · 23 Dec 2021

Robust and Privacy-Preserving Collaborative Learning: A Comprehensive Survey
Shangwei Guo, Xu Zhang, Feiyu Yang, Tianwei Zhang, Yan Gan, Tao Xiang, Yang Liu · FedML · 19 Dec 2021

Data Collection and Quality Challenges in Deep Learning: A Data-Centric AI Perspective
Steven Euijong Whang, Yuji Roh, Hwanjun Song, Jae-Gil Lee · 13 Dec 2021

SoK: Anti-Facial Recognition Technology
Emily Wenger, Shawn Shan, Haitao Zheng, Ben Y. Zhao · PICV · 08 Dec 2021

Adversarial Attacks Against Deep Generative Models on Data: A Survey
Hui Sun, Tianqing Zhu, Zhiqiu Zhang, Dawei Jin, Wanlei Zhou · AAML · 01 Dec 2021

Triggerless Backdoor Attack for NLP Tasks with Clean Labels
Leilei Gan, Jiwei Li, Tianwei Zhang, Xiaoya Li, Yuxian Meng, Fei Wu, Yi Yang, Shangwei Guo, Chun Fan · AAML, SILM · 15 Nov 2021

Get a Model! Model Hijacking Attack Against Machine Learning Models
A. Salem, Michael Backes, Yang Zhang · AAML · 08 Nov 2021