ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning
arXiv:1804.00308 v3 (latest) · Cited By

1 April 2018
Matthew Jagielski
Alina Oprea
Battista Biggio
Chang Liu
Cristina Nita-Rotaru
Bo Li
    AAML

Papers citing "Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning"

50 / 318 papers shown
AI Maintenance: A Robustness Perspective
Pin-Yu Chen
Payel Das
86
14
0
08 Jan 2023
Sublinear Time Algorithms for Several Geometric Optimization (With Outliers) Problems In Machine Learning
Hu Ding
155
0
0
07 Jan 2023
Distributed Machine Learning for UAV Swarms: Computing, Sensing, and Semantics
Yahao Ding
Zhaohui Yang
Quoc-Viet Pham
Zhaoyang Zhang
M. Shikh-Bahaei
86
38
0
03 Jan 2023
XMAM:X-raying Models with A Matrix to Reveal Backdoor Attacks for Federated Learning
Jianyi Zhang
Fangjiao Zhang
Qichao Jin
Zhiqiang Wang
Xiaodong Lin
X. Hei
AAML, FedML
92
1
0
28 Dec 2022
FairRoad: Achieving Fairness for Recommender Systems with Optimized Antidote Data
Minghong Fang
Jia-Wei Liu
Michinari Momma
Yi Sun
76
4
0
13 Dec 2022
Selective Amnesia: On Efficient, High-Fidelity and Blind Suppression of Backdoor Effects in Trojaned Machine Learning Models
Rui Zhu
Di Tang
Siyuan Tang
Wenyuan Xu
Haixu Tang
AAML, FedML
101
14
0
09 Dec 2022
Vicious Classifiers: Data Reconstruction Attack at Inference Time
Mohammad Malekzadeh
Deniz Gunduz
AAML, MIACV
66
0
0
08 Dec 2022
Hijack Vertical Federated Learning Models As One Party
Pengyu Qiu
Xuhong Zhang
Shouling Ji
Changjiang Li
Yuwen Pu
Xing Yang
Ting Wang
FedML
124
6
0
01 Dec 2022
Adversarial Attacks are a Surprisingly Strong Baseline for Poisoning Few-Shot Meta-Learners
E. T. Oldewage
J. Bronskill
Richard Turner
68
3
0
23 Nov 2022
Analysis and Detectability of Offline Data Poisoning Attacks on Linear Dynamical Systems
Alessio Russo
AAML
40
3
0
16 Nov 2022
M-to-N Backdoor Paradigm: A Multi-Trigger and Multi-Target Attack to Deep Learning Models
Linshan Hou
Zhongyun Hua
Yuhong Li
Yifeng Zheng
Leo Yu Zhang
AAML
109
3
0
03 Nov 2022
Amplifying Membership Exposure via Data Poisoning
Yufei Chen
Chao Shen
Yun Shen
Cong Wang
Yang Zhang
AAML
125
33
0
01 Nov 2022
New data poison attacks on machine learning classifiers for mobile exfiltration
M. A. Ramírez
Sangyoung Yoon
Ernesto Damiani
H. A. Hamadi
C. Ardagna
Nicola Bena
Young-Ji Byon
Tae-Yeon Kim
C. Cho
C. Yeun
AAML
83
4
0
20 Oct 2022
A.I. Robustness: a Human-Centered Perspective on Technological Challenges and Opportunities
Andrea Tocchetti
Lorenzo Corti
Agathe Balayn
Mireia Yurrita
Philip Lippmann
Marco Brambilla
Jie Yang
87
14
0
17 Oct 2022
Data Poisoning Attacks Against Multimodal Encoders
Ziqing Yang
Xinlei He
Zheng Li
Michael Backes
Mathias Humbert
Pascal Berrang
Yang Zhang
AAML
179
52
0
30 Sep 2022
GAGA: Deciphering Age-path of Generalized Self-paced Regularizer
Xingyu Qu
Diyang Li
Xiaohan Zhao
Bin Gu
81
1
0
15 Sep 2022
Federated Learning based on Defending Against Data Poisoning Attacks in IoT
Jiayin Li
Wenzhong Guo
Xingshuo Han
Jianping Cai
Ximeng Liu
AAML
127
1
0
14 Sep 2022
Defend Data Poisoning Attacks on Voice Authentication
Ke Li
Cameron Baird
D. Lin
AAML
86
9
0
09 Sep 2022
Reducing Certified Regression to Certified Classification for General Poisoning Attacks
Zayd Hammoudeh
Daniel Lowd
AAML
82
10
0
29 Aug 2022
SNAP: Efficient Extraction of Private Properties with Poisoning
Harsh Chaudhari
John Abascal
Alina Oprea
Matthew Jagielski
Florian Tramèr
Jonathan R. Ullman
MIACV
111
33
0
25 Aug 2022
Auditing Membership Leakages of Multi-Exit Networks
Zheng Li
Yiyong Liu
Xinlei He
Ning Yu
Michael Backes
Yang Zhang
AAML
73
34
0
23 Aug 2022
An Input-Aware Mimic Defense Theory and its Practice
Jiale Fu
Yali Yuan
Jiajun He
Sichu Liang
Zhe Huang
Hongyu Zhu
AAML
67
0
0
22 Aug 2022
Training-Time Attacks against k-Nearest Neighbors
Ara Vartanian
Will Rosenbaum
Scott Alfeld
53
1
0
15 Aug 2022
Testing the Robustness of Learned Index Structures
Matthias Bachfischer
Renata Borovica-Gajic
Benjamin I. P. Rubinstein
AAML
49
1
0
23 Jul 2022
Careful What You Wish For: on the Extraction of Adversarially Trained Models
Kacem Khaled
Gabriela Nicolescu
F. Magalhães
MIACV, AAML
78
4
0
21 Jul 2022
Using Anomaly Detection to Detect Poisoning Attacks in Federated Learning Applications
Ali Raza
Shujun Li
K. Tran
L. Koehl
Kim Duc Tran
AAML
155
4
0
18 Jul 2022
Enhanced Security and Privacy via Fragmented Federated Learning
N. Jebreel
J. Domingo-Ferrer
Alberto Blanco-Justicia
David Sánchez
FedML
91
28
0
13 Jul 2022
A law of adversarial risk, interpolation, and label noise
Daniel Paleka
Amartya Sanyal
NoLa, AAML
113
10
0
08 Jul 2022
Federated and Transfer Learning: A Survey on Adversaries and Defense Mechanisms
Ehsan Hallaji
R. Razavi-Far
M. Saif
AAML, FedML
79
13
0
05 Jul 2022
Defending against the Label-flipping Attack in Federated Learning
N. Jebreel
J. Domingo-Ferrer
David Sánchez
Alberto Blanco-Justicia
AAML
73
37
0
05 Jul 2022
FL-Defender: Combating Targeted Attacks in Federated Learning
N. Jebreel
J. Domingo-Ferrer
AAML, FedML
105
61
0
02 Jul 2022
Threat Assessment in Machine Learning based Systems
L. Tidjon
Foutse Khomh
61
17
0
30 Jun 2022
Measuring the Effect of Training Data on Deep Learning Predictions via Randomized Experiments
Jinkun Lin
Anqi Zhang
Mathias Lécuyer
Jinyang Li
Aurojit Panda
S. Sen
TDI, FedML
79
55
0
20 Jun 2022
Edge Security: Challenges and Issues
Xin Jin
Charalampos Katsis
Fan Sang
Jiahao Sun
A. Kundu
Ramana Rao Kompella
95
9
0
14 Jun 2022
Certifying Data-Bias Robustness in Linear Regression
Anna P. Meyer
Aws Albarghouthi
Loris D'Antoni
65
3
0
07 Jun 2022
Circumventing Backdoor Defenses That Are Based on Latent Separability
Xiangyu Qi
Tinghao Xie
Yiming Li
Saeed Mahloujifar
Prateek Mittal
AAML
131
11
0
26 May 2022
Quarantine: Sparsity Can Uncover the Trojan Attack Trigger for Free
Tianlong Chen
Zhenyu Zhang
Yihua Zhang
Shiyu Chang
Sijia Liu
Zhangyang Wang
AAML
80
25
0
24 May 2022
SafeNet: The Unreasonable Effectiveness of Ensembles in Private Collaborative Learning
Harsh Chaudhari
Matthew Jagielski
Alina Oprea
85
7
0
20 May 2022
Autonomy and Intelligence in the Computing Continuum: Challenges, Enablers, and Future Directions for Orchestration
Henna Kokkonen
Lauri Lovén
Naser Hossein Motlagh
Abhishek Kumar
Juha Partala
...
M. Bennis
Sasu Tarkoma
Schahram Dustdar
Susanna Pirttikangas
J. Riekki
107
27
0
03 May 2022
Backdooring Explainable Machine Learning
Maximilian Noppel
Lukas Peter
Christian Wressnegger
AAML
79
5
0
20 Apr 2022
Adversarial Analysis of the Differentially-Private Federated Learning in Cyber-Physical Critical Infrastructures
Md Tamjid Hossain
S. Badsha
Hung M. La
Haoting Shen
Shafkat Islam
Ibrahim Khalil
X. Yi
AAML
65
3
0
06 Apr 2022
Breaking the De-Pois Poisoning Defense
Alaa Anani
M. C. Ghanem
L. A. Khaliq
AAML
55
0
0
03 Apr 2022
Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets
Florian Tramèr
Reza Shokri
Ayrton San Joaquin
Hoang Minh Le
Matthew Jagielski
Sanghyun Hong
Nicholas Carlini
MIACV
141
123
0
31 Mar 2022
Robust Unlearnable Examples: Protecting Data Against Adversarial Learning
Shaopeng Fu
Fengxiang He
Yang Liu
Li Shen
Dacheng Tao
80
26
0
28 Mar 2022
WaveFuzz: A Clean-Label Poisoning Attack to Protect Your Voice
Yunjie Ge
Qianqian Wang
Jingfeng Zhang
Juntao Zhou
Yunzhu Zhang
Chao Shen
AAML
98
6
0
25 Mar 2022
A Tutorial on Adversarial Learning Attacks and Countermeasures
Cato Pauling
Michael Gimson
Muhammed Qaid
Ahmad Kida
Basel Halak
AAML
94
11
0
21 Feb 2022
Poisoning Attacks and Defenses on Artificial Intelligence: A Survey
M. A. Ramírez
Song-Kyoo Kim
H. A. Hamadi
Ernesto Damiani
Young-Ji Byon
Tae-Yeon Kim
C. Cho
C. Yeun
AAML
94
37
0
21 Feb 2022
Trusted AI in Multi-agent Systems: An Overview of Privacy and Security for Distributed Learning
Chuan Ma
Jun Li
Kang Wei
Bo Liu
Ming Ding
Long Yuan
Zhu Han
H. Vincent Poor
106
48
0
18 Feb 2022
Holistic Adversarial Robustness of Deep Learning Models
Pin-Yu Chen
Sijia Liu
AAML
105
16
0
15 Feb 2022
AnoMili: Spoofing Prevention and Explainable Anomaly Detection for the 1553 Military Avionic Bus
Efrat Levy
Nadav Maman
A. Shabtai
Yuval Elovici
47
15
0
14 Feb 2022