1804.00308
Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning
1 April 2018
Matthew Jagielski
Alina Oprea
Battista Biggio
Chang Liu
Cristina Nita-Rotaru
Bo Li
AAML
Papers citing "Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning" (50 of 318 papers shown)
A Survey on Poisoning Attacks Against Supervised Machine Learning
Wenjun Qiu
AAML
69
9
0
05 Feb 2022
Linear Model Against Malicious Adversaries with Local Differential Privacy
G. Miao
A. Ding
Samuel S. Wu
AAML
44
1
0
05 Feb 2022
Backdoors Stuck At The Frontdoor: Multi-Agent Backdoor Attacks That Backfire
Siddhartha Datta
N. Shadbolt
AAML
110
7
0
28 Jan 2022
Recommendation Unlearning
C. L. Philip Chen
Fei Sun
Hao Fei
Bolin Ding
MU
88
99
0
18 Jan 2022
Towards Adversarial Evaluations for Inexact Machine Unlearning
Shashwat Goel
Ameya Prabhu
Amartya Sanyal
Ser-Nam Lim
Philip Torr
Ponnurangam Kumaraguru
AAML
ELM
MU
118
59
0
17 Jan 2022
Adversarial Machine Learning Threat Analysis and Remediation in Open Radio Access Network (O-RAN)
Edan Habler
Ron Bitton
D. Avraham
D. Mimran
Eitan Klevansky
Oleg Brodt
Heiko Lehmann
Yuval Elovici
A. Shabtai
AAML
98
14
0
16 Jan 2022
Security for Machine Learning-based Software Systems: a survey of threats, practices and challenges
Huaming Chen
Muhammad Ali Babar
AAML
97
23
0
12 Jan 2022
LoMar: A Local Defense Against Poisoning Attack on Federated Learning
Xingyu Li
Zhe Qu
Shangqing Zhao
Bo Tang
Zhuo Lu
Yao-Hong Liu
AAML
102
97
0
08 Jan 2022
PORTFILER: Port-Level Network Profiling for Self-Propagating Malware Detection
Talha Ongun
Oliver Spohngellert
Benjamin Miller
Simona Boboila
Alina Oprea
Tina Eliassi-Rad
Jason Hiser
Alastair Nottingham
Jack W. Davidson
M. Veeraraghavan
27
11
0
27 Dec 2021
SoK: A Study of the Security on Voice Processing Systems
Robert Chang
Logan Kuo
Arthur Liu
Nader Sehatbakhsh
26
0
0
24 Dec 2021
Robust and Privacy-Preserving Collaborative Learning: A Comprehensive Survey
Shangwei Guo
Xu Zhang
Feiyu Yang
Tianwei Zhang
Yan Gan
Tao Xiang
Yang Liu
FedML
106
9
0
19 Dec 2021
On the Security & Privacy in Federated Learning
Gorka Abad
S. Picek
Víctor Julio Ramírez-Durán
A. Urbieta
126
11
0
10 Dec 2021
Adversarial Attacks Against Deep Generative Models on Data: A Survey
Hui Sun
Tianqing Zhu
Zhiqiu Zhang
Dawei Jin
Wanlei Zhou
AAML
146
44
0
01 Dec 2021
Living-Off-The-Land Command Detection Using Active Learning
Talha Ongun
Jack W. Stokes
Jonathan Bar Or
K. Tian
Farid Tajaddodianfar
Joshua Neil
C. Seifert
Alina Oprea
John C. Platt
AAML
42
24
0
30 Nov 2021
Trimming Stability Selection increases variable selection robustness
Tino Werner
54
3
0
23 Nov 2021
Get a Model! Model Hijacking Attack Against Machine Learning Models
A. Salem
Michael Backes
Yang Zhang
AAML
109
28
0
08 Nov 2021
10 Security and Privacy Problems in Large Foundation Models
Jinyuan Jia
Hongbin Liu
Neil Zhenqiang Gong
113
7
0
28 Oct 2021
Widen The Backdoor To Let More Attackers In
Siddhartha Datta
Giulio Lovisotto
Ivan Martinovic
N. Shadbolt
AAML
58
3
0
09 Oct 2021
Backdoor Attacks on Federated Learning with Lottery Ticket Hypothesis
Zeyuan Yin
Ye Yuan
Panfeng Guo
Pan Zhou
FedML
67
7
0
22 Sep 2021
Membership Inference Attacks Against Recommender Systems
Minxing Zhang
Zhaochun Ren
Zihan Wang
Fajie Yuan
Zhumin Chen
Pengfei Hu
Yang Zhang
MIACV
AAML
85
90
0
16 Sep 2021
On the Initial Behavior Monitoring Issues in Federated Learning
Ranwa Al Mallah
Godwin Badu-Marfo
Bilal Farooq
FedML
16
2
0
11 Sep 2021
Back to the Drawing Board: A Critical Evaluation of Poisoning Attacks on Production Federated Learning
Virat Shejwalkar
Amir Houmansadr
Peter Kairouz
Daniel Ramage
AAML
127
223
0
23 Aug 2021
Poison Ink: Robust and Invisible Backdoor Attack
Jie Zhang
Dongdong Chen
Qidong Huang
Jing Liao
Weiming Zhang
Huamin Feng
G. Hua
Nenghai Yu
AAML
78
90
0
05 Aug 2021
Derivative-free Alternating Projection Algorithms for General Nonconvex-Concave Minimax Problems
Zi Xu
Ziqi Wang
Jingjing Shen
Yuhong Dai
148
10
0
01 Aug 2021
Can You Hear It? Backdoor Attacks via Ultrasonic Triggers
Stefanos Koffas
Jing Xu
Mauro Conti
S. Picek
AAML
104
71
0
30 Jul 2021
Poisoning Online Learning Filters: DDoS Attacks and Countermeasures
W. Tann
E. Chang
AAML
28
0
0
27 Jul 2021
Understanding the Limits of Unsupervised Domain Adaptation via Data Poisoning
Akshay Mehra
B. Kailkhura
Pin-Yu Chen
Jihun Hamm
AAML
98
22
0
08 Jul 2021
Poisoning Attack against Estimating from Pairwise Comparisons
Ke Ma
Qianqian Xu
Jinshan Zeng
Xiaochun Cao
Qingming Huang
AAML
71
24
0
05 Jul 2021
Evaluating the Cybersecurity Risk of Real World, Machine Learning Production Systems
Ron Bitton
Nadav Maman
Inderjeet Singh
Satoru Momiyama
Yuval Elovici
A. Shabtai
111
19
0
05 Jul 2021
Bi-Level Poisoning Attack Model and Countermeasure for Appliance Consumption Data of Smart Homes
M. Billah
A. Anwar
Ziaur Rahman
S. Galib
58
8
0
01 Jul 2021
Sharing in a Trustless World: Privacy-Preserving Data Analytics with Potentially Cheating Participants
Tham Nguyen
Hassan Jameel Asghar
Raghav Bhaskar
Dali Kaafar
F. Farokhi
35
0
0
18 Jun 2021
Bad Characters: Imperceptible NLP Attacks
Nicholas Boucher
Ilia Shumailov
Ross J. Anderson
Nicolas Papernot
AAML
SILM
116
107
0
18 Jun 2021
CAN-LOC: Spoofing Detection and Physical Intrusion Localization on an In-Vehicle CAN Bus Based on Deep Features of Voltage Signals
Efrat Levy
A. Shabtai
B. Groza
Pal-Stefan Murvay
Yuval Elovici
49
22
0
15 Jun 2021
GAL: Gradient Assisted Learning for Decentralized Multi-Organization Collaborations
Enmao Diao
Jie Ding
Vahid Tarokh
FedML
84
17
0
02 Jun 2021
A Gradient Method for Multilevel Optimization
Ryo Sato
Mirai Tanaka
Akiko Takeda
56
18
0
28 May 2021
Honest-but-Curious Nets: Sensitive Attributes of Private Inputs Can Be Secretly Coded into the Classifiers' Outputs
Mohammad Malekzadeh
Anastasia Borovykh
Deniz Gündüz
MIACV
87
42
0
25 May 2021
DeepObliviate: A Powerful Charm for Erasing Data Residual Memory in Deep Neural Networks
Yingzhe He
Guozhu Meng
Kai Chen
Jinwen He
Xingbo Hu
MU
59
28
0
13 May 2021
Incompatibility Clustering as a Defense Against Backdoor Poisoning Attacks
Charles Jin
Melinda Sun
Martin Rinard
AAML
38
6
0
08 May 2021
De-Pois: An Attack-Agnostic Defense against Data Poisoning Attacks
Jian Chen
Xuxin Zhang
Rui Zhang
Chen Wang
Ling Liu
AAML
72
88
0
08 May 2021
Exploiting Vulnerabilities in Deep Neural Networks: Adversarial and Fault-Injection Attacks
Faiq Khalid
Muhammad Abdullah Hanif
Mohamed Bennai
AAML
SILM
78
9
0
05 May 2021
Poisoning the Unlabeled Dataset of Semi-Supervised Learning
Nicholas Carlini
AAML
225
68
0
04 May 2021
Broadly Applicable Targeted Data Sample Omission Attacks
Guy Barash
E. Farchi
Sarit Kraus
Onn Shehory
AAML
40
0
0
04 May 2021
Hidden Backdoors in Human-Centric Language Models
Shaofeng Li
Hui Liu
Tian Dong
Benjamin Zi Hao Zhao
Minhui Xue
Haojin Zhu
Jialiang Lu
SILM
160
155
0
01 May 2021
Property Inference Attacks on Convolutional Neural Networks: Influence and Implications of Target Model's Complexity
Mathias Parisot
Balázs Pejó
Dayana Spagnuelo
MIACV
149
34
0
27 Apr 2021
Turning Federated Learning Systems Into Covert Channels
Gabriele Costa
Fabio Pinelli
S. Soderi
Gabriele Tolomei
FedML
78
13
0
21 Apr 2021
Manipulating SGD with Data Ordering Attacks
Ilia Shumailov
Zakhar Shumaylov
Dmitry Kazhdan
Yiren Zhao
Nicolas Papernot
Murat A. Erdogdu
Ross J. Anderson
AAML
156
98
0
19 Apr 2021
FedCom: A Byzantine-Robust Local Model Aggregation Rule Using Data Commitment for Federated Learning
Bo Zhao
Peng Sun
Liming Fang
Tao Wang
Ke Jiang
FedML
64
4
0
16 Apr 2021
Mitigating Adversarial Attack for Compute-in-Memory Accelerator Utilizing On-chip Finetune
Shanshi Huang
Hongwu Jiang
Shimeng Yu
AAML
54
3
0
13 Apr 2021
Privacy and Trust Redefined in Federated Machine Learning
Pavlos Papadopoulos
Will Abramson
A. Hall
Nikolaos Pitropakis
William J. Buchanan
74
42
0
29 Mar 2021
Graph Unlearning
Min Chen
Zhikun Zhang
Tianhao Wang
Michael Backes
Mathias Humbert
Yang Zhang
MU
106
150
0
27 Mar 2021