Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning

1 April 2018
Matthew Jagielski, Alina Oprea, Battista Biggio, Chang Liu, Cristina Nita-Rotaru, Bo Li
AAML
arXiv: 1804.00308

Papers citing "Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning"

Showing 18 of 318 citing papers.
Biscotti: A Ledger for Private and Secure Peer-to-Peer Machine Learning
Muhammad Shayan, Clement Fung, Chris J. M. Yoon, Ivan Beschastnikh
FedML
24 Nov 2018

Oversight of Unsafe Systems via Dynamic Safety Envelopes
David Manheim
22 Nov 2018

An Optimal Control View of Adversarial Machine Learning
Xiaojin Zhu
AAML
11 Nov 2018

FAdeML: Understanding the Impact of Pre-Processing Noise Filtering on Adversarial Machine Learning
Faiq Khalid, Muhammad Abdullah Hanif, Semeen Rehman, Junaid Qadir, Mohamed Bennai
AAML
04 Nov 2018

TrISec: Training Data-Unaware Imperceptible Security Attacks on Deep Neural Networks
Faiq Khalid, Muhammad Abdullah Hanif, Semeen Rehman, Rehan Ahmed, Mohamed Bennai
AAML
02 Nov 2018

Towards Adversarial Malware Detection: Lessons Learned from PDF-based Attacks
Davide Maiorca, Battista Biggio, Giorgio Giacinto
AAML
02 Nov 2018

Stronger Data Poisoning Attacks Break Data Sanitization Defenses
Pang Wei Koh, Jacob Steinhardt, Percy Liang
02 Nov 2018

Law and Adversarial Machine Learning
Ramnath Kumar, David R. O'Brien, Kendra Albert, Salome Vilojen
AILaw, AAML
25 Oct 2018

Security Matters: A Survey on Adversarial Machine Learning
Guofu Li, Pengjia Zhu, Jin Li, Zhemin Yang, Ning Cao, Zhiyi Chen
AAML
16 Oct 2018

Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks
Ambra Demontis, Marco Melis, Maura Pintor, Matthew Jagielski, Battista Biggio, Alina Oprea, Cristina Nita-Rotaru, Fabio Roli
SILM, AAML
08 Sep 2018

Data Poisoning Attacks in Contextual Bandits
Yuzhe Ma, Kwang-Sung Jun, Lihong Li, Xiaojin Zhu
AAML
17 Aug 2018

Mitigation of Adversarial Attacks through Embedded Feature Selection
Ziyi Bao, Luis Muñoz-González, Emil C. Lupu
AAML
16 Aug 2018

Security and Privacy Issues in Deep Learning
Ho Bae, Jaehee Jang, Dahuin Jung, Hyemi Jang, Heonseok Ha, Hyungyu Lee, Sungroh Yoon
SILM, MIACV
31 Jul 2018

On the adversarial robustness of robust estimators
Lifeng Lai, Erhan Bayraktar
11 Jun 2018

PAC-learning in the presence of evasion adversaries
Daniel Cullina, A. Bhagoji, Prateek Mittal
AAML
05 Jun 2018

ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models
A. Salem, Yang Zhang, Mathias Humbert, Pascal Berrang, Mario Fritz, Michael Backes
MIACV, MIALM
04 Jun 2018

Label Sanitization against Label Flipping Poisoning Attacks
Andrea Paudice, Luis Muñoz-González, Emil C. Lupu
AAML
02 Mar 2018

Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning
Battista Biggio, Fabio Roli
AAML
08 Dec 2017