Generative Poisoning Attack Method Against Neural Networks
Chaofei Yang, Qing Wu, Hai Helen Li, Yiran Chen
3 March 2017 · arXiv:1703.01340 · AAML

Papers citing "Generative Poisoning Attack Method Against Neural Networks" (49 of 99 papers shown)

The Hammer and the Nut: Is Bilevel Optimization Really Needed to Poison Linear Classifiers?
Antonio Emanuele Cinà, Sebastiano Vascon, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo
AAML · 23 Mar 2021

Security and Privacy for Artificial Intelligence: Opportunities and Challenges
Ayodeji Oseni, Nour Moustafa, Helge Janicke, Peng Liu, Z. Tari, A. Vasilakos
AAML · 09 Feb 2021

Unlearnable Examples: Making Personal Data Unexploitable
Hanxun Huang, Xingjun Ma, S. Erfani, James Bailey, Yisen Wang
MIACV · 13 Jan 2021

Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses
Micah Goldblum, Dimitris Tsipras, Chulin Xie, Xinyun Chen, Avi Schwarzschild, D. Song, Aleksander Madry, Bo Li, Tom Goldstein
SILM · 18 Dec 2020

Backdoor Attacks on the DNN Interpretation System
Shihong Fang, A. Choromańska
FAtt, AAML · 21 Nov 2020

EEG-Based Brain-Computer Interfaces Are Vulnerable to Backdoor Attacks
Lubin Meng, Jian Huang, Zhigang Zeng, Xue Jiang, Shan Yu, T. Jung, Chin-Teng Lin, Ricardo Chavarriaga, Dongrui Wu
AAML · 30 Oct 2020

Being Single Has Benefits. Instance Poisoning to Deceive Malware Classifiers
T. Shapira, David Berend, Ishai Rosenberg, Yang Liu, A. Shabtai, Yuval Elovici
AAML · 30 Oct 2020

Bias Field Poses a Threat to DNN-based X-Ray Recognition
Binyu Tian, Qing Guo, Felix Juefei-Xu, W. L. Chan, Yupeng Cheng, Xiaohong Li, Xiaofei Xie, Shengchao Qin
AAML, AI4CE · 19 Sep 2020

Review and Critical Analysis of Privacy-preserving Infection Tracking and Contact Tracing
William J. Buchanan, Muhammad Ali Imran, M. Rehman, Lei Zhang, Q. Abbasi, C. Chrysoulas, D. Haynes, Nikolaos Pitropakis, Pavlos Papadopoulos
10 Sep 2020

Vulnerability-Aware Poisoning Mechanism for Online RL with Unknown Dynamics
Yanchao Sun, Da Huo, Furong Huang
AAML, OffRL, OnRL · 02 Sep 2020

Intrinsic Certified Robustness of Bagging against Data Poisoning Attacks
Jinyuan Jia, Xiaoyu Cao, Neil Zhenqiang Gong
SILM · 11 Aug 2020

Blackbox Trojanising of Deep Learning Models: Using non-intrusive network structure and binary alterations
Jonathan Pan
AAML · 02 Aug 2020

The Price of Tailoring the Index to Your Data: Poisoning Attacks on Learned Index Structures
Evgenios M. Kornaropoulos, Silei Ren, R. Tamassia
AAML · 01 Aug 2020

Towards Class-Oriented Poisoning Attacks Against Neural Networks
Bingyin Zhao, Yingjie Lao
SILM, AAML · 31 Jul 2020

Data Poisoning Attacks Against Federated Learning Systems
Vale Tolpegin, Stacey Truex, Mehmet Emre Gursoy, Ling Liu
FedML · 16 Jul 2020

You Autocomplete Me: Poisoning Vulnerabilities in Neural Code Completion
R. Schuster, Congzheng Song, Eran Tromer, Vitaly Shmatikov
SILM, AAML · 05 Jul 2020

Model-Targeted Poisoning Attacks with Provable Convergence
Fnu Suya, Saeed Mahloujifar, Anshuman Suri, David Evans, Yuan Tian
AAML · 30 Jun 2020

With Great Dispersion Comes Greater Resilience: Efficient Poisoning Attacks and Defenses for Linear Regression Models
Jialin Wen, Benjamin Zi Hao Zhao, Minhui Xue, Alina Oprea, Hai-feng Qian
AAML · 21 Jun 2020

Defending against GAN-based Deepfake Attacks via Transformation-aware Adversarial Faces
Chaofei Yang, Lei Ding, Yiran Chen, H. Li
AAML · 12 Jun 2020

Adversarial Machine Learning in Network Intrusion Detection Systems
Elie Alhajjar, P. Maxwell, Nathaniel D. Bastian
GAN, SILM, AAML · 23 Apr 2020

PoisHygiene: Detecting and Mitigating Poisoning Attacks in Neural Networks
Junfeng Guo, Zelun Kong, Cong Liu
AAML · 24 Mar 2020

RAB: Provable Robustness Against Backdoor Attacks
Maurice Weber, Xiaojun Xu, Bojan Karlas, Ce Zhang, Bo Li
AAML · 19 Mar 2020

Fawkes: Protecting Privacy against Unauthorized Deep Learning Models
Shawn Shan, Emily Wenger, Jiayun Zhang, Huiying Li, Haitao Zheng, Ben Y. Zhao
PICV, MU · 19 Feb 2020

Certified Robustness to Label-Flipping Attacks via Randomized Smoothing
Elan Rosenfeld, Ezra Winston, Pradeep Ravikumar, J. Zico Kolter
OOD, AAML · 07 Feb 2020

Humpty Dumpty: Controlling Word Meanings via Corpus Poisoning
R. Schuster, Tal Schuster, Yoav Meri, Vitaly Shmatikov
AAML · 14 Jan 2020

Backdoor Attacks against Transfer Learning with Pre-trained Deep Learning Models
Shuo Wang, Surya Nepal, Carsten Rudolph, M. Grobler, Shangyu Chen, Tianle Chen
AAML · 10 Jan 2020

Revealing Perceptible Backdoors, without the Training Set, via the Maximum Achievable Misclassification Fraction Statistic
Zhen Xiang, David J. Miller, Hang Wang, G. Kesidis
AAML · 18 Nov 2019

Detecting AI Trojans Using Meta Neural Analysis
Xiaojun Xu, Qi Wang, Huichen Li, Nikita Borisov, Carl A. Gunter, Bo Li
08 Oct 2019

Impact of Low-bitwidth Quantization on the Adversarial Robustness for Embedded Neural Networks
Rémi Bernhard, Pierre-Alain Moëllic, J. Dutertre
AAML, MQ · 27 Sep 2019

On Defending Against Label Flipping Attacks on Malware Detection Systems
R. Taheri, R. Javidan, Mohammad Shojafar, Zahra Pooranian, A. Miri, Mauro Conti
AAML · 13 Aug 2019

Poisoning Attacks with Generative Adversarial Nets
Luis Muñoz-González, Bjarne Pfitzner, Matteo Russo, Javier Carnerero-Cano, Emil C. Lupu
AAML · 18 Jun 2019

Robust or Private? Adversarial Training Makes Models More Vulnerable to Privacy Attacks
Felipe A. Mejia, Paul Gamble, Z. Hampel-Arias, M. Lomnitz, Nina Lopatina, Lucas Tindall, M. Barrios
SILM · 15 Jun 2019

Reconstruction and Membership Inference Attacks against Generative Models
Benjamin Hilprecht, Martin Härterich, Daniel Bernau
AAML, MIACV · 07 Jun 2019

Bypassing Backdoor Detection Algorithms in Deep Learning
T. Tan, Reza Shokri
FedML, AAML · 31 May 2019

A backdoor attack against LSTM-based text classification systems
Jiazhu Dai, Chuanshuai Chen
SILM · 29 May 2019

Evaluating Differentially Private Machine Learning in Practice
Bargav Jayaraman, David Evans
24 Feb 2019

A new Backdoor Attack in CNNs by training set corruption without label poisoning
Mauro Barni, Kassem Kallas, B. Tondi
AAML · 12 Feb 2019

Adversarial Samples on Android Malware Detection Systems for IoT Systems
Xiaolei Liu, Xiaojiang Du, Xiaosong Zhang, Qingxin Zhu, Mohsen Guizani
AAML · 12 Feb 2019

Detecting Backdoor Attacks on Deep Neural Networks by Activation Clustering
Bryant Chen, Wilka Carvalho, Wenjie Li, Heiko Ludwig, Benjamin Edwards, Chengyao Chen, Ziqiang Cao, Biplav Srivastava
AAML · 09 Nov 2018

Stronger Data Poisoning Attacks Break Data Sanitization Defenses
Pang Wei Koh, Jacob Steinhardt, Percy Liang
02 Nov 2018

VerIDeep: Verifying Integrity of Deep Neural Networks through Sensitive-Sample Fingerprinting
Zecheng He, Tianwei Zhang, R. Lee
FedML, AAML, MLAU · 09 Aug 2018

Enabling Trust in Deep Learning Models: A Digital Forensics Case Study
Aditya K, Slawomir Grzonkowski, Nhien-An Le-Khac
03 Aug 2018

Security and Privacy Issues in Deep Learning
Ho Bae, Jaehee Jang, Dahuin Jung, Hyemi Jang, Heonseok Ha, Hyungyu Lee, Sungroh Yoon
SILM, MIACV · 31 Jul 2018

Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks
Ali Shafahi, W. Ronny Huang, Mahyar Najibi, Octavian Suciu, Christoph Studer, Tudor Dumitras, Tom Goldstein
AAML · 03 Apr 2018

Technical Report: When Does Machine Learning FAIL? Generalized Transferability for Evasion and Poisoning Attacks
Octavian Suciu, R. Marginean, Yigitcan Kaya, Hal Daumé, Tudor Dumitras
AAML · 19 Mar 2018

BEBP: An Poisoning Method Against Machine Learning Based IDSs
Pan Li, Qiang Liu, Wentao Zhao, Dongxu Wang, Siqi Wang
AAML · 11 Mar 2018

Targeted Backdoor Attacks on Deep Learning Systems Using Data Poisoning
Xinyun Chen, Chang-rui Liu, Bo Li, Kimberly Lu, D. Song
AAML, SILM · 15 Dec 2017

Neural Trojans
Yuntao Liu, Yang Xie, Ankur Srivastava
AAML · 03 Oct 2017

Certified Defenses for Data Poisoning Attacks
Jacob Steinhardt, Pang Wei Koh, Percy Liang
AAML · 09 Jun 2017