Towards Security Threats of Deep Learning Systems: A Survey
arXiv:1911.12562 (v2, latest) · 28 November 2019
Yingzhe He, Guozhu Meng, Kai Chen, Xingbo Hu, Jinwen He
Topics: AAML, ELM

Papers citing "Towards Security Threats of Deep Learning Systems: A Survey"
46 papers shown

Support Vector Machines under Adversarial Label Contamination
Huang Xiao, Battista Biggio, B. Nelson, Han Xiao, Claudia Eckert, Fabio Roli
AAML · 56 · 231 · 0 · 01 Jun 2022

Is Data Clustering in Adversarial Settings Secure?
Battista Biggio, I. Pillai, Samuel Rota Buló, Andrea Valenza, Marcello Pelillo, Fabio Roli
AAML · 48 · 130 · 0 · 25 Nov 2018

Data Poisoning Attacks against Online Learning
Yizhen Wang, Kamalika Chaudhuri
AAML · 62 · 93 · 0 · 27 Aug 2018

Security and Privacy Issues in Deep Learning
Ho Bae, Jaehee Jang, Dahuin Jung, Hyemi Jang, Heonseok Ha, Hyungyu Lee, Sungroh Yoon
SILM, MIACV · 118 · 78 · 0 · 31 Jul 2018

Efficient Deep Learning on Multi-Source Private Data
Nicholas Hynes, Raymond Cheng, Basel Alomair
FedML · 62 · 102 · 0 · 17 Jul 2018

Machine Learning with Membership Privacy using Adversarial Regularization
Milad Nasr, Reza Shokri, Amir Houmansadr
FedML, MIACV · 52 · 472 · 0 · 16 Jul 2018

Algorithms that Remember: Model Inversion Attacks and Data Protection Law
Michael Veale, Reuben Binns, L. Edwards
46 · 197 · 0 · 12 Jul 2018

Privacy-preserving Machine Learning through Data Obfuscation
Tianwei Zhang, Zecheng He, R. Lee
67 · 80 · 0 · 05 Jul 2018

Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning
Matthew Jagielski, Alina Oprea, Battista Biggio, Chang-rui Liu, Cristina Nita-Rotaru, Yue Liu
AAML · 85 · 761 · 0 · 01 Apr 2018

DeepGauge: Multi-Granularity Testing Criteria for Deep Learning Systems
Lei Ma, Felix Juefei Xu, Fuyuan Zhang, Jiyuan Sun, Minhui Xue, ..., Ting Su, Li Li, Yang Liu, Jianjun Zhao, Yadong Wang
ELM · 67 · 622 · 0 · 20 Mar 2018

On the Suitability of $L_p$-norms for Creating and Preventing Adversarial Examples
Mahmood Sharif, Lujo Bauer, Michael K. Reiter
AAML · 130 · 138 · 0 · 27 Feb 2018

Stealing Hyperparameters in Machine Learning
Binghui Wang, Neil Zhenqiang Gong
AAML · 144 · 466 · 0 · 14 Feb 2018

CommanderSong: A Systematic Approach for Practical Adversarial Voice Recognition
Xuejing Yuan, Yuxuan Chen, Yue Zhao, Yunhui Long, Xiaokang Liu, Kai Chen, Shengzhi Zhang, Heqing Huang, Xiaofeng Wang, Carl A. Gunter
AAML · 66 · 354 · 0 · 24 Jan 2018

Gazelle: A Low Latency Framework for Secure Neural Network Inference
Chiraag Juvekar, Vinod Vaikuntanathan, A. Chandrakasan
65 · 893 · 0 · 16 Jan 2018

Black-box Generation of Adversarial Text Sequences to Evade Deep Learning Classifiers
Ji Gao, Jack Lanchantin, M. Soffa, Yanjun Qi
AAML · 137 · 721 · 0 · 13 Jan 2018

Audio Adversarial Examples: Targeted Attacks on Speech-to-Text
Nicholas Carlini, D. Wagner
AAML · 97 · 1,083 · 0 · 05 Jan 2018

Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing their Input Gradients
A. Ross, Finale Doshi-Velez
AAML · 152 · 682 · 0 · 26 Nov 2017

Mitigating Adversarial Effects Through Randomization
Cihang Xie, Jianyu Wang, Zhishuai Zhang, Zhou Ren, Alan Yuille
AAML · 113 · 1,059 · 0 · 06 Nov 2017

EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples
Pin-Yu Chen, Yash Sharma, Huan Zhang, Jinfeng Yi, Cho-Jui Hsieh
AAML · 66 · 641 · 0 · 13 Sep 2017

Towards Proving the Adversarial Robustness of Deep Neural Networks
Guy Katz, Clark W. Barrett, D. Dill, Kyle D. Julian, Mykel J. Kochenderfer
AAML, OOD · 80 · 118 · 0 · 08 Sep 2017

Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization
Luis Muñoz-González, Battista Biggio, Ambra Demontis, Andrea Paudice, Vasin Wongrassamee, Emil C. Lupu, Fabio Roli
AAML · 99 · 633 · 0 · 29 Aug 2017

DeepTest: Automated Testing of Deep-Neural-Network-driven Autonomous Cars
Yuchi Tian, Kexin Pei, Suman Jana, Baishakhi Ray
AAML · 64 · 1,359 · 0 · 28 Aug 2017

Efficient Defenses Against Adversarial Attacks
Valentina Zantedeschi, Maria-Irina Nicolae, Ambrish Rawat
AAML · 46 · 297 · 0 · 21 Jul 2017

Towards Deep Learning Models Resistant to Adversarial Attacks
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu
SILM, OOD · 310 · 12,069 · 0 · 19 Jun 2017

Certified Defenses for Data Poisoning Attacks
Jacob Steinhardt, Pang Wei Koh, Percy Liang
AAML · 105 · 755 · 0 · 09 Jun 2017

Towards Robust Detection of Adversarial Examples
Tianyu Pang, Chao Du, Yinpeng Dong, Jun Zhu
AAML · 69 · 18 · 0 · 02 Jun 2017

Black-Box Attacks against RNN based Malware Detection Algorithms
Weiwei Hu, Ying Tan
44 · 150 · 0 · 23 May 2017

Ensemble Adversarial Training: Attacks and Defenses
Florian Tramèr, Alexey Kurakin, Nicolas Papernot, Ian Goodfellow, Dan Boneh, Patrick McDaniel
AAML · 177 · 2,725 · 0 · 19 May 2017

The Space of Transferable Adversarial Examples
Florian Tramèr, Nicolas Papernot, Ian Goodfellow, Dan Boneh, Patrick McDaniel
AAML, SILM · 90 · 557 · 0 · 11 Apr 2017

Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning
Briland Hitaj, G. Ateniese, Fernando Perez-Cruz
FedML · 120 · 1,404 · 0 · 24 Feb 2017

Adversarial examples for generative models
Jernej Kos, Ian S. Fischer, Basel Alomair
GAN · 72 · 274 · 0 · 22 Feb 2017

Adversarial Machine Learning at Scale
Alexey Kurakin, Ian Goodfellow, Samy Bengio
AAML · 472 · 3,144 · 0 · 04 Nov 2016

Universal adversarial perturbations
Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, Omar Fawzi, P. Frossard
AAML · 145 · 2,527 · 0 · 26 Oct 2016
Membership Inference Attacks against Machine Learning Models
Reza Shokri, M. Stronati, Congzheng Song, Vitaly Shmatikov
SLR, MIALM, MIACV · 261 · 4,135 · 0 · 18 Oct 2016
Stealing Machine Learning Models via Prediction APIs
Florian Tramèr, Fan Zhang, Ari Juels, Michael K. Reiter, Thomas Ristenpart
SILM, MLAU · 107 · 1,807 · 0 · 09 Sep 2016

Towards Evaluating the Robustness of Neural Networks
Nicholas Carlini, D. Wagner
OOD, AAML · 266 · 8,555 · 0 · 16 Aug 2016

Adversarial examples in the physical world
Alexey Kurakin, Ian Goodfellow, Samy Bengio
SILM, AAML · 543 · 5,897 · 0 · 08 Jul 2016

Bag of Tricks for Efficient Text Classification
Armand Joulin, Edouard Grave, Piotr Bojanowski, Tomas Mikolov
VLM · 175 · 4,622 · 0 · 06 Jul 2016

Deep Learning with Differential Privacy
Martín Abadi, Andy Chu, Ian Goodfellow, H. B. McMahan, Ilya Mironov, Kunal Talwar, Li Zhang
FedML, SyDa · 216 · 6,130 · 0 · 01 Jul 2016

Crafting Adversarial Input Sequences for Recurrent Neural Networks
Nicolas Papernot, Patrick McDaniel, A. Swami, Richard E. Harang
AAML, GAN, SILM · 51 · 456 · 0 · 28 Apr 2016

Practical Black-Box Attacks against Machine Learning
Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, S. Jha, Z. Berkay Celik, A. Swami
MLAU, AAML · 75 · 3,678 · 0 · 08 Feb 2016

DeepFool: a simple and accurate method to fool deep neural networks
Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, P. Frossard
AAML · 151 · 4,897 · 0 · 14 Nov 2015

Learning with a Strong Adversary
Ruitong Huang, Bing Xu, Dale Schuurmans, Csaba Szepesvári
AAML · 79 · 358 · 0 · 10 Nov 2015

Intriguing properties of neural networks
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, D. Erhan, Ian Goodfellow, Rob Fergus
AAML · 277 · 14,927 · 1 · 21 Dec 2013

Hacking Smart Machines with Smarter Ones: How to Extract Meaningful Data from Machine Learning Classifiers
G. Ateniese, G. Felici, L. Mancini, A. Spognardi, Antonio Villani, Domenico Vitali
84 · 462 · 0 · 19 Jun 2013

Poisoning Attacks against Support Vector Machines
Battista Biggio, B. Nelson, Pavel Laskov
AAML · 115 · 1,593 · 0 · 27 Jun 2012