ResearchTrend.AI
© 2025 ResearchTrend.AI, All rights reserved.

Security and Privacy for Artificial Intelligence: Opportunities and Challenges
arXiv: 2102.04661
9 February 2021
Ayodeji Oseni
Nour Moustafa
Helge Janicke
Peng Liu
Z. Tari
A. Vasilakos
AAML

Papers citing "Security and Privacy for Artificial Intelligence: Opportunities and Challenges"

50 / 74 papers shown
Adversarial Examples in Modern Machine Learning: A Review
R. Wiyatno
Anqi Xu
Ousmane Amadou Dia
A. D. Berker
AAML
99
105
0
13 Nov 2019
Biometric Backdoors: A Poisoning Attack Against Unsupervised Template Updating
Giulio Lovisotto
Simon Eberz
Ivan Martinovic
AAML
73
36
0
22 May 2019
Adversarial Training for Free!
Ali Shafahi
Mahyar Najibi
Amin Ghiasi
Zheng Xu
John P. Dickerson
Christoph Studer
L. Davis
Gavin Taylor
Tom Goldstein
AAML
136
1,253
0
29 Apr 2019
Towards Federated Learning at Scale: System Design
Keith Bonawitz
Hubert Eichner
W. Grieskamp
Dzmitry Huba
A. Ingerman
...
H. B. McMahan
Timon Van Overveldt
David Petrou
Daniel Ramage
Jason Roselander
FedML
126
2,675
0
04 Feb 2019
Stronger Data Poisoning Attacks Break Data Sanitization Defenses
Pang Wei Koh
Jacob Steinhardt
Percy Liang
89
243
0
02 Nov 2018
Security Matters: A Survey on Adversarial Machine Learning
Guofu Li
Pengjia Zhu
Jin Li
Zhemin Yang
Ning Cao
Zhiyi Chen
AAML
76
25
0
16 Oct 2018
Adversarial Attacks and Defences: A Survey
Anirban Chakraborty
Manaar Alam
Vishal Dey
Anupam Chattopadhyay
Debdeep Mukhopadhyay
AAML, OOD
89
683
0
28 Sep 2018
Query-Efficient Black-Box Attack by Active Learning
Pengcheng Li
Jinfeng Yi
Lijun Zhang
AAML, MLAU
51
55
0
13 Sep 2018
Adversarial Examples: Opportunities and Challenges
Jiliang Zhang
Chen Li
AAML
55
234
0
13 Sep 2018
Security and Privacy Issues in Deep Learning
Ho Bae
Jaehee Jang
Dahuin Jung
Hyemi Jang
Heonseok Ha
Hyungyu Lee
Sungroh Yoon
SILM, MIACV
118
79
0
31 Jul 2018
Black-box Adversarial Attacks with Limited Queries and Information
Andrew Ilyas
Logan Engstrom
Anish Athalye
Jessy Lin
MLAU, AAML
165
1,208
0
23 Apr 2018
Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks
Ali Shafahi
Wenjie Huang
Mahyar Najibi
Octavian Suciu
Christoph Studer
Tudor Dumitras
Tom Goldstein
AAML
89
1,097
0
03 Apr 2018
Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning
Matthew Jagielski
Alina Oprea
Battista Biggio
Chang-rui Liu
Cristina Nita-Rotaru
Yue Liu
AAML
85
764
0
01 Apr 2018
Technical Report: When Does Machine Learning FAIL? Generalized Transferability for Evasion and Poisoning Attacks
Octavian Suciu
R. Marginean
Yigitcan Kaya
Hal Daumé
Tudor Dumitras
AAML
83
287
0
19 Mar 2018
Certified Robustness to Adversarial Examples with Differential Privacy
Mathias Lécuyer
Vaggelis Atlidakis
Roxana Geambasu
Daniel J. Hsu
Suman Jana
SILM, AAML
96
939
0
09 Feb 2018
Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality
Xingjun Ma
Yue Liu
Yisen Wang
S. Erfani
S. Wijewickrema
Grant Schoenebeck
Basel Alomair
Michael E. Houle
James Bailey
AAML
114
742
0
08 Jan 2018
Audio Adversarial Examples: Targeted Attacks on Speech-to-Text
Nicholas Carlini
D. Wagner
AAML
101
1,083
0
05 Jan 2018
Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey
Naveed Akhtar
Ajmal Mian
AAML
99
1,870
0
02 Jan 2018
Adversarial Examples: Attacks and Defenses for Deep Learning
Xiaoyong Yuan
Pan He
Qile Zhu
Xiaolin Li
SILM, AAML
101
1,625
0
19 Dec 2017
Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning
Battista Biggio
Fabio Roli
AAML
135
1,409
0
08 Dec 2017
Towards Robust Neural Networks via Random Self-ensemble
Xuanqing Liu
Minhao Cheng
Huan Zhang
Cho-Jui Hsieh
FedML, AAML
101
423
0
02 Dec 2017
CryptoDL: Deep Neural Networks over Encrypted Data
Ehsan Hesamifard
Hassan Takabi
Mehdi Ghasemi
55
381
0
14 Nov 2017
One pixel attack for fooling deep neural networks
Jiawei Su
Danilo Vasconcellos Vargas
Kouichi Sakurai
AAML
150
2,327
0
24 Oct 2017
Mitigating Evasion Attacks to Deep Neural Networks via Region-based Classification
Xiaoyu Cao
Neil Zhenqiang Gong
AAML
71
211
0
17 Sep 2017
EAD: Elastic-Net Attacks to Deep Neural Networks via Adversarial Examples
Pin-Yu Chen
Yash Sharma
Huan Zhang
Jinfeng Yi
Cho-Jui Hsieh
AAML
66
641
0
13 Sep 2017
Ensemble Methods as a Defense to Adversarial Perturbations Against Deep Neural Networks
Thilo Strauss
Markus Hanselmann
Andrej Junginger
Holger Ulmer
AAML
76
136
0
11 Sep 2017
Security Evaluation of Pattern Classifiers under Attack
Battista Biggio
Giorgio Fumera
Fabio Roli
AAML
81
444
0
02 Sep 2017
Towards Poisoning of Deep Learning Algorithms with Back-gradient Optimization
Luis Muñoz-González
Battista Biggio
Ambra Demontis
Andrea Paudice
Vasin Wongrassamee
Emil C. Lupu
Fabio Roli
AAML
110
633
0
29 Aug 2017
Is Deep Learning Safe for Robot Vision? Adversarial Examples against the iCub Humanoid
Marco Melis
Ambra Demontis
Battista Biggio
Gavin Brown
Giorgio Fumera
Fabio Roli
AAML
52
98
0
23 Aug 2017
BadNets: Identifying Vulnerabilities in the Machine Learning Model Supply Chain
Tianyu Gu
Brendan Dolan-Gavitt
S. Garg
SILM
130
1,782
0
22 Aug 2017
Evasion Attacks against Machine Learning at Test Time
Battista Biggio
Igino Corona
Davide Maiorca
B. Nelson
Nedim Srndic
Pavel Laskov
Giorgio Giacinto
Fabio Roli
AAML
163
2,160
0
21 Aug 2017
ZOO: Zeroth Order Optimization based Black-box Attacks to Deep Neural Networks without Training Substitute Models
Pin-Yu Chen
Huan Zhang
Yash Sharma
Jinfeng Yi
Cho-Jui Hsieh
AAML
90
1,887
0
14 Aug 2017
Towards Crafting Text Adversarial Samples
Suranjana Samanta
S. Mehta
AAML
77
222
0
10 Jul 2017
UPSET and ANGRI : Breaking High Performance Image Classifiers
Sayantan Sarkar
Ankan Bansal
U. Mahbub
Rama Chellappa
AAML
68
108
0
04 Jul 2017
Towards Deep Learning Models Resistant to Adversarial Attacks
Aleksander Madry
Aleksandar Makelov
Ludwig Schmidt
Dimitris Tsipras
Adrian Vladu
SILM, OOD
319
12,151
0
19 Jun 2017
Adversarial Example Defenses: Ensembles of Weak Defenses are not Strong
Warren He
James Wei
Xinyun Chen
Nicholas Carlini
Basel Alomair
AAML
102
242
0
15 Jun 2017
Feature Squeezing Mitigates and Detects Carlini/Wagner Adversarial Examples
Weilin Xu
David Evans
Yanjun Qi
AAML
59
42
0
30 May 2017
MagNet: a Two-Pronged Defense against Adversarial Examples
Dongyu Meng
Hao Chen
AAML
56
1,208
0
25 May 2017
Evading Classifiers by Morphing in the Dark
Hung Dang
Yue Huang
E. Chang
AAML
97
124
0
22 May 2017
Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods
Nicholas Carlini
D. Wagner
AAML
131
1,867
0
20 May 2017
Ensemble Adversarial Training: Attacks and Defenses
Florian Tramèr
Alexey Kurakin
Nicolas Papernot
Ian Goodfellow
Dan Boneh
Patrick McDaniel
AAML
177
2,729
0
19 May 2017
The Space of Transferable Adversarial Examples
Florian Tramèr
Nicolas Papernot
Ian Goodfellow
Dan Boneh
Patrick McDaniel
AAML, SILM
102
558
0
11 Apr 2017
Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks
Weilin Xu
David Evans
Yanjun Qi
AAML
97
1,273
0
04 Apr 2017
Adversarial Transformation Networks: Learning to Generate Adversarial Examples
S. Baluja
Ian S. Fischer
GAN
79
286
0
28 Mar 2017
Generative Poisoning Attack Method Against Neural Networks
Chaofei Yang
Qing Wu
Hai Helen Li
Yiran Chen
AAML
68
218
0
03 Mar 2017
Robustness to Adversarial Examples through an Ensemble of Specialists
Mahdieh Abbasi
Christian Gagné
AAML
93
109
0
22 Feb 2017
On the (Statistical) Detection of Adversarial Examples
Kathrin Grosse
Praveen Manoharan
Nicolas Papernot
Michael Backes
Patrick McDaniel
AAML
86
714
0
21 Feb 2017
Generating Adversarial Malware Examples for Black-Box Attacks Based on GAN
Weiwei Hu
Ying Tan
GAN
89
463
0
20 Feb 2017
Adversarial Machine Learning at Scale
Alexey Kurakin
Ian Goodfellow
Samy Bengio
AAML
477
3,148
0
04 Nov 2016
Universal adversarial perturbations
Seyed-Mohsen Moosavi-Dezfooli
Alhussein Fawzi
Omar Fawzi
P. Frossard
AAML
154
2,533
0
26 Oct 2016