Technical Report: When Does Machine Learning FAIL? Generalized Transferability for Evasion and Poisoning Attacks

19 March 2018
Octavian Suciu, R. Marginean, Yigitcan Kaya, Hal Daumé, Tudor Dumitras
AAML

Papers citing "Technical Report: When Does Machine Learning FAIL? Generalized Transferability for Evasion and Poisoning Attacks"

33 of 133 papers shown.

Bullseye Polytope: A Scalable Clean-Label Poisoning Attack with Improved Transferability
H. Aghakhani, Dongyu Meng, Yu-Xiang Wang, Christopher Kruegel, Giovanni Vigna
AAML | 01 May 2020

A Framework for Enhancing Deep Neural Networks Against Adversarial Malware
Deqiang Li, Qianmu Li, Yanfang Ye, Shouhuai Xu
AAML | 15 Apr 2020

A Separation Result Between Data-oblivious and Data-aware Poisoning Attacks
Samuel Deng, Sanjam Garg, S. Jha, Saeed Mahloujifar, Mohammad Mahmoody, Abhradeep Thakurta
26 Mar 2020

PoisHygiene: Detecting and Mitigating Poisoning Attacks in Neural Networks
Junfeng Guo, Zelun Kong, Cong Liu
AAML | 24 Mar 2020

Dynamic Backdoor Attacks Against Machine Learning Models
A. Salem, Rui Wen, Michael Backes, Shiqing Ma, Yang Zhang
AAML | 07 Mar 2020

Explanation-Guided Backdoor Poisoning Attacks Against Malware Classifiers
Giorgio Severi, J. Meyer, Scott E. Coull, Alina Oprea
AAML, SILM | 02 Mar 2020

On the Effectiveness of Mitigating Data Poisoning Attacks with Gradient Shaping
Sanghyun Hong, Varun Chandrasekaran, Yigitcan Kaya, Tudor Dumitras, Nicolas Papernot
AAML | 26 Feb 2020

Fawkes: Protecting Privacy against Unauthorized Deep Learning Models
Shawn Shan, Emily Wenger, Jiayun Zhang, Huiying Li, Haitao Zheng, Ben Y. Zhao
PICV, MU | 19 Feb 2020

Can't Boil This Frog: Robustness of Online-Trained Autoencoder-Based Anomaly Detectors to Adversarial Poisoning Attacks
Moshe Kravchik, A. Shabtai
AAML | 07 Feb 2020

Local Model Poisoning Attacks to Byzantine-Robust Federated Learning
Minghong Fang, Xiaoyu Cao, Jinyuan Jia, Neil Zhenqiang Gong
AAML, OOD, FedML | 26 Nov 2019

The Threat of Adversarial Attacks on Machine Learning in Network Security -- A Survey
Olakunle Ibitoye, Rana Abou-Khamis, Mohamed el Shehaby, Ashraf Matrawy, M. O. Shafiq
AAML | 06 Nov 2019

Intriguing Properties of Adversarial ML Attacks in the Problem Space [Extended Version]
Jacopo Cortellazzi, Feargus Pendlebury, Daniel Arp, Erwin Quiring, Fabio Pierazzi, Lorenzo Cavallaro
AAML | 05 Nov 2019

A Tale of Evil Twins: Adversarial Inputs versus Poisoned Models
Ren Pang, Hua Shen, Xinyang Zhang, S. Ji, Yevgeniy Vorobeychik, Xiaopu Luo, Alex Liu, Ting Wang
AAML | 05 Nov 2019

Orchestrating the Development Lifecycle of Machine Learning-Based IoT Applications: A Taxonomy and Survey
Bin Qian, Jie Su, Z. Wen, D. N. Jha, Yinhao Li, ..., Albert Y. Zomaya, Omer F. Rana, Lizhe Wang, Maciej Koutny, R. Ranjan
11 Oct 2019

Hidden Trigger Backdoor Attacks
Aniruddha Saha, Akshayvarun Subramanya, Hamed Pirsiavash
30 Sep 2019

FENCE: Feasible Evasion Attacks on Neural Networks in Constrained Environments
Alesia Chernikova, Alina Oprea
AAML | 23 Sep 2019

Adversarial Vulnerability Bounds for Gaussian Process Classification
M. Smith, Kathrin Grosse, Michael Backes, Mauricio A. Alvarez
AAML | 19 Sep 2019

Defending against Machine Learning based Inference Attacks via Adversarial Examples: Opportunities and Challenges
Jinyuan Jia, Neil Zhenqiang Gong
AAML, SILM | 17 Sep 2019

Explaining Vulnerabilities to Adversarial Machine Learning through Visual Analytics
Yuxin Ma, Tiankai Xie, Jundong Li, Ross Maciejewski
AAML | 17 Jul 2019

Adversarial Security Attacks and Perturbations on Machine Learning and Deep Learning Methods
Arif Siddiqi
AAML | 17 Jul 2019

Universal Litmus Patterns: Revealing Backdoor Attacks in CNNs
Soheil Kolouri, Aniruddha Saha, Hamed Pirsiavash, Heiko Hoffmann
AAML | 26 Jun 2019

Terminal Brain Damage: Exposing the Graceless Degradation in Deep Neural Networks Under Hardware Fault Attacks
Sanghyun Hong, Pietro Frigo, Yigitcan Kaya, Cristiano Giuffrida, Tudor Dumitras
AAML | 03 Jun 2019

An Investigation of Data Poisoning Defenses for Online Learning
Yizhen Wang, Somesh Jha, Kamalika Chaudhuri
AAML | 28 May 2019

Transferable Clean-Label Poisoning Attacks on Deep Neural Nets
Chen Zhu, Yifan Jiang, Ali Shafahi, Hengduo Li, Gavin Taylor, Christoph Studer, Tom Goldstein
15 May 2019

Better the Devil you Know: An Analysis of Evasion Attacks using Out-of-Distribution Adversarial Examples
Vikash Sehwag, A. Bhagoji, Liwei Song, Chawin Sitawarin, Daniel Cullina, M. Chiang, Prateek Mittal
OODD | 05 May 2019

Gotta Catch 'Em All: Using Honeypots to Catch Adversarial Attacks on Neural Networks
Shawn Shan, Emily Wenger, Bolun Wang, Yangqiu Song, Haitao Zheng, Ben Y. Zhao
18 Apr 2019

Attacking Graph-based Classification via Manipulating the Graph Structure
Binghui Wang, Neil Zhenqiang Gong
AAML | 01 Mar 2019

Stronger Data Poisoning Attacks Break Data Sanitization Defenses
Pang Wei Koh, Jacob Steinhardt, Percy Liang
02 Nov 2018

Security Analysis of Deep Neural Networks Operating in the Presence of Cache Side-Channel Attacks
Sanghyun Hong, Michael Davinroy, Yigitcan Kaya, S. Locke, Ian Rackow, Kevin Kulda, Dana Dachman-Soled, Tudor Dumitras
MIACV | 08 Oct 2018

Why Do Adversarial Attacks Transfer? Explaining Transferability of Evasion and Poisoning Attacks
Ambra Demontis, Marco Melis, Maura Pintor, Matthew Jagielski, Battista Biggio, Alina Oprea, Cristina Nita-Rotaru, Fabio Roli
SILM, AAML | 08 Sep 2018

ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models
A. Salem, Yang Zhang, Mathias Humbert, Pascal Berrang, Mario Fritz, Michael Backes
MIACV, MIALM | 04 Jun 2018

Fine-Pruning: Defending Against Backdooring Attacks on Deep Neural Networks
Kang Liu, Brendan Dolan-Gavitt, S. Garg
AAML | 30 May 2018

Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks
Ali Shafahi, Yifan Jiang, Mahyar Najibi, Octavian Suciu, Christoph Studer, Tudor Dumitras, Tom Goldstein
AAML | 03 Apr 2018