ResearchTrend.AI

Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning
arXiv:1804.00308 · 1 April 2018
Matthew Jagielski
Alina Oprea
Battista Biggio
Chang Liu
Cristina Nita-Rotaru
Bo Li
    AAML

Papers citing "Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning"

35 / 135 papers shown

1. Data Poisoning Attacks on Regression Learning and Corresponding Defenses
   Nicolas Müller, Daniel Kowatsch, Konstantin Böttinger · AAML · 15 Sep 2020

2. Defending Regression Learners Against Poisoning Attacks
   Sandamal Weerasinghe, S. Erfani, T. Alpcan, C. Leckie, Justin Kopacz · AAML · 21 Aug 2020

3. Dynamic Defense Against Byzantine Poisoning Attacks in Federated Learning
   Nuria Rodríguez-Barroso, Eugenio Martínez-Cámara, M. V. Luzón, Francisco Herrera · FedML, AAML · 29 Jul 2020

4. Backdoor Attacks and Countermeasures on Deep Learning: A Comprehensive Review
   Yansong Gao, Bao Gia Doan, Zhi-Li Zhang, Siqi Ma, Jiliang Zhang, Anmin Fu, Surya Nepal, Hyoungshick Kim · AAML · 21 Jul 2020

5. Data Poisoning Attacks Against Federated Learning Systems
   Vale Tolpegin, Stacey Truex, Mehmet Emre Gursoy, Ling Liu · FedML · 16 Jul 2020

6. Subpopulation Data Poisoning Attacks
   Matthew Jagielski, Giorgio Severi, Niklas Pousette Harger, Alina Oprea · AAML, SILM · 24 Jun 2020

7. Sponge Examples: Energy-Latency Attacks on Neural Networks
   Ilia Shumailov, Yiren Zhao, Daniel Bates, Nicolas Papernot, Robert D. Mullins, Ross J. Anderson · SILM · 05 Jun 2020

8. Blind Backdoors in Deep Learning Models
   Eugene Bagdasaryan, Vitaly Shmatikov · AAML, FedML, SILM · 08 May 2020

9. Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness
   Pu Zhao, Pin-Yu Chen, Payel Das, Karthikeyan N. Ramamurthy, Xue Lin · AAML · 30 Apr 2020

10. PoisHygiene: Detecting and Mitigating Poisoning Attacks in Neural Networks
    Junfeng Guo, Zelun Kong, Cong Liu · AAML · 24 Mar 2020

11. Dynamic Backdoor Attacks Against Machine Learning Models
    A. Salem, Rui Wen, Michael Backes, Shiqing Ma, Yang Zhang · AAML · 07 Mar 2020

12. Explanation-Guided Backdoor Poisoning Attacks Against Malware Classifiers
    Giorgio Severi, J. Meyer, Scott E. Coull, Alina Oprea · AAML, SILM · 02 Mar 2020

13. Entangled Watermarks as a Defense against Model Extraction
    Hengrui Jia, Christopher A. Choquette-Choo, Varun Chandrasekaran, Nicolas Papernot · WaLM, AAML · 27 Feb 2020

14. On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping
    Sanghyun Hong, Varun Chandrasekaran, Yigitcan Kaya, Tudor Dumitras, Nicolas Papernot · AAML · 26 Feb 2020

15. NNoculation: Catching BadNets in the Wild
    A. Veldanda, Kang Liu, Benjamin Tan, Prashanth Krishnamurthy, Farshad Khorrami, Ramesh Karri, Brendan Dolan-Gavitt, S. Garg · AAML, OnRL · 19 Feb 2020

16. Adversarial Machine Learning -- Industry Perspectives
    Ram Shankar Siva Kumar, Magnus Nyström, J. Lambert, Andrew Marshall, Mario Goertzel, Andi Comissoneru, Matt Swann, Sharon Xia · AAML, SILM · 04 Feb 2020

17. Cronus: Robust and Heterogeneous Collaborative Learning with Black-Box Knowledge Transfer
    Hong Chang, Virat Shejwalkar, Reza Shokri, Amir Houmansadr · FedML · 24 Dec 2019

18. White-Box Target Attack for EEG-Based BCI Regression Problems
    Lubin Meng, Chin-Teng Lin, T. Jung, Dongrui Wu · AAML · 07 Nov 2019

19. Data Poisoning Attacks to Local Differential Privacy Protocols
    Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong · AAML · 05 Nov 2019

20. Differentiable Convex Optimization Layers
    Akshay Agrawal, Brandon Amos, Shane T. Barratt, Stephen P. Boyd, Steven Diamond, Zico Kolter · 28 Oct 2019

21. Deep Neural Rejection against Adversarial Examples
    Angelo Sotgiu, Ambra Demontis, Marco Melis, Battista Biggio, Giorgio Fumera, Xiaoyi Feng, Fabio Roli · AAML · 01 Oct 2019

22. Big Data Analytics for Large Scale Wireless Networks: Challenges and Opportunities
    Hongning Dai, Raymond Chi-Wing Wong, Hao Wang, Zibin Zheng, A. Vasilakos · AI4CE, GNN · 02 Sep 2019

23. Helen: Maliciously Secure Coopetitive Learning for Linear Models
    Wenting Zheng, Raluca A. Popa, Joseph E. Gonzalez, Ion Stoica · FedML · 16 Jul 2019

24. Poisoning Attacks with Generative Adversarial Nets
    Luis Muñoz-González, Bjarne Pfitzner, Matteo Russo, Javier Carnerero-Cano, Emil C. Lupu · AAML · 18 Jun 2019

25. Privacy Risks of Securing Machine Learning Models against Adversarial Examples
    Liwei Song, Reza Shokri, Prateek Mittal · SILM, MIACV, AAML · 24 May 2019

26. Learning to Confuse: Generating Training Time Adversarial Data with Auto-Encoder
    Ji Feng, Qi-Zhi Cai, Zhi-Hua Zhou · AAML · 22 May 2019

27. Attacking Graph-based Classification via Manipulating the Graph Structure
    Binghui Wang, Neil Zhenqiang Gong · AAML · 01 Mar 2019

28. Contamination Attacks and Mitigation in Multi-Party Machine Learning
    Jamie Hayes, O. Ohrimenko · AAML, FedML · 08 Jan 2019

29. Stealing Neural Networks via Timing Side Channels
    Vasisht Duddu, D. Samanta, D. V. Rao, V. Balas · AAML, MLAU, FedML · 31 Dec 2018

30. Analyzing Federated Learning through an Adversarial Lens
    A. Bhagoji, Supriyo Chakraborty, Prateek Mittal, S. Calo · FedML · 29 Nov 2018

31. Stronger Data Poisoning Attacks Break Data Sanitization Defenses
    Pang Wei Koh, Jacob Steinhardt, Percy Liang · 02 Nov 2018

32. Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks
    Ambra Demontis, Marco Melis, Maura Pintor, Matthew Jagielski, Battista Biggio, Alina Oprea, Cristina Nita-Rotaru, Fabio Roli · SILM, AAML · 08 Sep 2018

33. Data Poisoning Attacks in Contextual Bandits
    Yuzhe Ma, Kwang-Sung Jun, Lihong Li, Xiaojin Zhu · AAML · 17 Aug 2018

34. ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models
    A. Salem, Yang Zhang, Mathias Humbert, Pascal Berrang, Mario Fritz, Michael Backes · MIACV, MIALM · 04 Jun 2018

35. Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning
    Battista Biggio, Fabio Roli · AAML · 08 Dec 2017