Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning
arXiv:1804.00308 · 1 April 2018 · v3 (latest)
Matthew Jagielski, Alina Oprea, Battista Biggio, Chang Liu, Cristina Nita-Rotaru, Bo Li
AAML
Papers citing "Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning"
50 / 318 papers shown
EX-RAY: Distinguishing Injected Backdoor from Natural Features in Neural Networks by Examining Differential Feature Symmetry
Yingqi Liu, Guangyu Shen, Guanhong Tao, Zhenting Wang, Shiqing Ma, Xinming Zhang · AAML · 16 Mar 2021
Quantitative robustness of instance ranking problems
Tino Werner · 12 Mar 2021
Stochastic-HMDs: Adversarial Resilient Hardware Malware Detectors through Voltage Over-scaling
Md. Shohidul Islam, Ihsen Alouani, Khaled N. Khasawneh · AAML · 11 Mar 2021
Graph Computing for Financial Crime and Fraud Detection: Trends, Challenges and Outlook
Eren Kurshan, Hongda Shen · GNN · 02 Mar 2021
Financial Crime & Fraud Detection Using Graph Computing: Application Considerations & Outlook
Eren Kurshan, Hongda Shen, Haojie Yu · GNN, FaML · 02 Mar 2021
Oriole: Thwarting Privacy against Trustworthy Deep Learning Models
Liuqiao Chen, Hu Wang, Benjamin Zi Hao Zhao, Minhui Xue, Hai-feng Qian · PICV · 23 Feb 2021
Data Poisoning Attacks and Defenses to Crowdsourcing Systems
Minghong Fang, Minghao Sun, Qi Li, Neil Zhenqiang Gong, Jinhua Tian, Jia-Wei Liu · 18 Feb 2021
Making Paper Reviewing Robust to Bid Manipulation Attacks
Ruihan Wu, Chuan Guo, Felix Wu, Rahul Kidambi, Laurens van der Maaten, Kilian Q. Weinberger · AAML · 09 Feb 2021
Security and Privacy for Artificial Intelligence: Opportunities and Challenges
Ayodeji Oseni, Nour Moustafa, Helge Janicke, Peng Liu, Z. Tari, A. Vasilakos · AAML · 09 Feb 2021
Quantifying and Mitigating Privacy Risks of Contrastive Learning
Xinlei He, Yang Zhang · 08 Feb 2021
SAFELearning: Enable Backdoor Detectability In Federated Learning With Secure Aggregation
Zhuosheng Zhang, Jiarui Li, Shucheng Yu, C. Makaya · FedML · 04 Feb 2021
Machine learning pipeline for battery state of health estimation
D. Roman, Saurabh Saxena, Valentin Robu, Michael G. Pecht, David Flynn · 01 Feb 2021
Covert Model Poisoning Against Federated Learning: Algorithm Design and Optimization
Kang Wei, Jun Li, Ming Ding, Chuan Ma, Yo-Seb Jeon, H. Vincent Poor · FedML · 28 Jan 2021
Adversarial Vulnerability of Active Transfer Learning
Nicolas Müller, Konstantin Böttinger · AAML · 26 Jan 2021
Untargeted Poisoning Attack Detection in Federated Learning via Behavior Attestation
Ranwa Al Mallah, David López, Godwin Badu-Marfo, Bilal Farooq · AAML · 24 Jan 2021
Dataset Security for Machine Learning: Data Poisoning, Backdoor Attacks, and Defenses
Micah Goldblum, Dimitris Tsipras, Chulin Xie, Xinyun Chen, Avi Schwarzschild, Basel Alomair, Aleksander Madry, Yue Liu, Tom Goldstein · SILM · 18 Dec 2020
HaS-Nets: A Heal and Select Mechanism to Defend DNNs Against Backdoor Attacks for Data Collection Scenarios
Hassan Ali, Surya Nepal, S. Kanhere, S. Jha · AAML · 14 Dec 2020
Poisoning Semi-supervised Federated Learning via Unlabeled Data: Attacks and Defenses
Yi Liu, Lizhen Qu, Ruihui Zhao, Cong Wang, Dusit Niyato, Yefeng Zheng · 08 Dec 2020
Certified Robustness of Nearest Neighbors against Data Poisoning and Backdoor Attacks
Jinyuan Jia, Yupei Liu, Xiaoyu Cao, Neil Zhenqiang Gong · AAML · 07 Dec 2020
Privacy and Robustness in Federated Learning: Attacks and Defenses
Lingjuan Lyu, Han Yu, Xingjun Ma, Chen Chen, Lichao Sun, Jun Zhao, Qiang Yang, Philip S. Yu · FedML · 07 Dec 2020
PAC-Learning for Strategic Classification
Ravi Sundaram, A. Vullikanti, Haifeng Xu, Fan Yao · AAML · 06 Dec 2020
How Robust are Randomized Smoothing based Defenses to Data Poisoning?
Akshay Mehra, B. Kailkhura, Pin-Yu Chen, Jihun Hamm · OOD, AAML · 02 Dec 2020
Challenges in Deploying Machine Learning: a Survey of Case Studies
Andrei Paleyes, Raoul-Gabriel Urma, Neil D. Lawrence · 18 Nov 2020
Privacy Preservation in Federated Learning: An insightful survey from the GDPR Perspective
N. Truong, Kai Sun, Siyao Wang, Florian Guitton, Yike Guo · FedML · 10 Nov 2020
BaFFLe: Backdoor detection via Feedback-based Federated Learning
Sébastien Andreina, G. Marson, Helen Möllering, Ghassan O. Karame · FedML · 04 Nov 2020
Blockchain based Attack Detection on Machine Learning Algorithms for IoT based E-Health Applications
Thippa Reddy Gadekallu, Manoj M K, Sivarama Krishnan S, Neeraj Kumar, S. Hakak, S. Bhattacharya · OOD · 03 Nov 2020
Robust and Verifiable Information Embedding Attacks to Deep Neural Networks via Error-Correcting Codes
Jinyuan Jia, Binghui Wang, Neil Zhenqiang Gong · AAML · 26 Oct 2020
A Generative Model based Adversarial Security of Deep Learning and Linear Classifier Models
Ferhat Ozgur Catak, Samed Sivaslioglu, Kevser Sahinbas · AAML · 17 Oct 2020
BAAAN: Backdoor Attacks Against Autoencoder and GAN-Based Machine Learning Models
A. Salem, Yannick Sautter, Michael Backes, Mathias Humbert, Yang Zhang · AAML, SILM, AI4CE · 06 Oct 2020
Pocket Diagnosis: Secure Federated Learning against Poisoning Attack in the Cloud
Zhuo Ma, Jianfeng Ma, Yinbin Miao, Ximeng Liu, K. Choo, R. Deng · FedML · 23 Sep 2020
Data Poisoning Attacks on Regression Learning and Corresponding Defenses
Nicolas Müller, Daniel Kowatsch, Konstantin Böttinger · AAML · 15 Sep 2020
Review and Critical Analysis of Privacy-preserving Infection Tracking and Contact Tracing
William J. Buchanan, Muhammad Ali Imran, M. Rehman, Lei Zhang, Q. Abbasi, C. Chrysoulas, D. Haynes, Nikolaos Pitropakis, Pavlos Papadopoulos · 10 Sep 2020
Local and Central Differential Privacy for Robustness and Privacy in Federated Learning
Mohammad Naseri, Jamie Hayes, Emiliano De Cristofaro · FedML · 08 Sep 2020
Adversarial Attack on Large Scale Graph
Jintang Li, Tao Xie, Liang Chen, Fenfang Xie, Xiangnan He, Zibin Zheng · AAML · 08 Sep 2020
Defending Regression Learners Against Poisoning Attacks
Sandamal Weerasinghe, S. Erfani, T. Alpcan, C. Leckie, Justin Kopacz · AAML · 21 Aug 2020
Intrinsic Certified Robustness of Bagging against Data Poisoning Attacks
Jinyuan Jia, Xiaoyu Cao, Neil Zhenqiang Gong · SILM · 11 Aug 2020
The Price of Tailoring the Index to Your Data: Poisoning Attacks on Learned Index Structures
Evgenios M. Kornaropoulos, Silei Ren, R. Tamassia · AAML · 01 Aug 2020
Towards Class-Oriented Poisoning Attacks Against Neural Networks
Bingyin Zhao, Yingjie Lao · SILM, AAML · 31 Jul 2020
Dynamic Defense Against Byzantine Poisoning Attacks in Federated Learning
Nuria Rodríguez-Barroso, Eugenio Martínez-Cámara, M. V. Luzón, Francisco Herrera · FedML, AAML · 29 Jul 2020
AI Data poisoning attack: Manipulating game AI of Go
Junli Shen, Maocai Xia · AAML · 23 Jul 2020
Backdoor Attacks and Countermeasures on Deep Learning: A Comprehensive Review
Yansong Gao, Bao Gia Doan, Zhi-Li Zhang, Siqi Ma, Jiliang Zhang, Anmin Fu, Surya Nepal, Hyoungshick Kim · AAML · 21 Jul 2020
Data Poisoning Attacks Against Federated Learning Systems
Vale Tolpegin, Stacey Truex, Mehmet Emre Gursoy, Ling Liu · FedML · 16 Jul 2020
You Autocomplete Me: Poisoning Vulnerabilities in Neural Code Completion
R. Schuster, Congzheng Song, Eran Tromer, Vitaly Shmatikov · SILM, AAML · 05 Jul 2020
Subpopulation Data Poisoning Attacks
Matthew Jagielski, Giorgio Severi, Niklas Pousette Harger, Alina Oprea · AAML, SILM · 24 Jun 2020
With Great Dispersion Comes Greater Resilience: Efficient Poisoning Attacks and Defenses for Linear Regression Models
Jialin Wen, Benjamin Zi Hao Zhao, Minhui Xue, Alina Oprea, Hai-feng Qian · AAML · 21 Jun 2020
On Adversarial Bias and the Robustness of Fair Machine Learning
Hong Chang, Ta Duy Nguyen, S. K. Murakonda, Ehsan Kazemi, Reza Shokri · FaML, OOD, FedML · 15 Jun 2020
Defending SVMs against Poisoning Attacks: the Hardness and DBSCAN Approach
Hu Ding, Fan Yang, Jiawei Huang · AAML · 14 Jun 2020
Sponge Examples: Energy-Latency Attacks on Neural Networks
Ilia Shumailov, Yiren Zhao, Daniel Bates, Nicolas Papernot, Robert D. Mullins, Ross J. Anderson · SILM · 05 Jun 2020
VerifyTL: Secure and Verifiable Collaborative Transfer Learning
Zhuo Ma, Jianfeng Ma, Yinbin Miao, Ximeng Liu, Wei Zheng, K. Choo, R. Deng · AAML · 18 May 2020
Blind Backdoors in Deep Learning Models
Eugene Bagdasaryan, Vitaly Shmatikov · AAML, FedML, SILM · 08 May 2020