Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning

8 December 2017
Battista Biggio
Fabio Roli
    AAML

Papers citing "Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning"

50 / 590 papers shown
The Attack Generator: A Systematic Approach Towards Constructing Adversarial Attacks
F. Assion
Peter Schlicht
Florens Greßner
W. Günther
Fabian Hüger
Nico M. Schmidt
Umair Rasheed
AAML
75
14
0
17 Jun 2019
Evaluating the Robustness of Nearest Neighbor Classifiers: A Primal-Dual Perspective
Lu Wang
Xuanqing Liu
Jinfeng Yi
Zhi Zhou
Cho-Jui Hsieh
AAML
83
22
0
10 Jun 2019
Do Image Classifiers Generalize Across Time?
Vaishaal Shankar
Achal Dave
Rebecca Roelofs
Deva Ramanan
Benjamin Recht
Ludwig Schmidt
142
83
0
05 Jun 2019
Architecture Selection via the Trade-off Between Accuracy and Robustness
Zhun Deng
Cynthia Dwork
Jialiang Wang
Yao-Min Zhao
AAML
98
3
0
04 Jun 2019
Voice Mimicry Attacks Assisted by Automatic Speaker Verification
Ville Vestman
Tomi Kinnunen
Rosa González Hautamäki
Md. Sahidullah
84
37
0
03 Jun 2019
The Adversarial Machine Learning Conundrum: Can The Insecurity of ML Become The Achilles' Heel of Cognitive Networks?
Muhammad Usama
Junaid Qadir
Ala I. Al-Fuqaha
M. Hamdi
AAML
48
19
0
03 Jun 2019
Unlabeled Data Improves Adversarial Robustness
Y. Carmon
Aditi Raghunathan
Ludwig Schmidt
Percy Liang
John C. Duchi
130
754
0
31 May 2019
Securing Connected & Autonomous Vehicles: Challenges Posed by Adversarial Machine Learning and The Way Forward
A. Qayyum
Muhammad Usama
Junaid Qadir
Ala I. Al-Fuqaha
AAML
94
191
0
29 May 2019
Empirically Measuring Concentration: Fundamental Limits on Intrinsic Robustness
Saeed Mahloujifar
Xiao Zhang
Mohammad Mahmoody
David Evans
64
22
0
29 May 2019
Adversarial Robustness Guarantees for Classification with Gaussian Processes
Arno Blaas
A. Patané
Luca Laurenti
L. Cardelli
Marta Z. Kwiatkowska
Stephen J. Roberts
GP, AAML
89
21
0
28 May 2019
Adversarially Robust Learning Could Leverage Computational Hardness
Sanjam Garg
S. Jha
Saeed Mahloujifar
Mohammad Mahmoody
AAML
163
24
0
28 May 2019
Privacy Risks of Securing Machine Learning Models against Adversarial Examples
Liwei Song
Reza Shokri
Prateek Mittal
SILM, MIA, CV, AAML
88
248
0
24 May 2019
Predicting Model Failure using Saliency Maps in Autonomous Driving Systems
Sina Mohseni
Akshay V. Jagadeesh
Zhangyang Wang
78
14
0
19 May 2019
Genuinely Distributed Byzantine Machine Learning
El-Mahdi El-Mhamdi
R. Guerraoui
Arsany Guirguis
Lê Nguyên Hoang
Sébastien Rouault
FedML, OOD
68
19
0
05 May 2019
Detecting Adversarial Examples through Nonlinear Dimensionality Reduction
Francesco Crecchi
D. Bacciu
Battista Biggio
AAML
83
10
0
30 Apr 2019
CryptoNN: Training Neural Networks over Encrypted Data
Runhua Xu
J. Joshi
Chong Li
64
76
0
15 Apr 2019
Adversarial Learning in Statistical Classification: A Comprehensive Review of Defenses Against Attacks
David J. Miller
Zhen Xiang
G. Kesidis
AAML
74
35
0
12 Apr 2019
Malware Evasion Attack and Defense
Yonghong Huang
Utkarsh Verma
Celeste Fralick
G. Infante-Lopez
B. Kumar
Carl Woodward
AAML
65
16
0
07 Apr 2019
Statistical Guarantees for the Robustness of Bayesian Neural Networks
L. Cardelli
Marta Kwiatkowska
Luca Laurenti
Nicola Paoletti
A. Patané
Matthew Wicker
AAML
89
54
0
05 Mar 2019
TamperNN: Efficient Tampering Detection of Deployed Neural Nets
Erwan Le Merrer
Gilles Tredan
MLAU, AAML
21
9
0
01 Mar 2019
Quantifying Perceptual Distortion of Adversarial Examples
Matt Jordan
N. Manoj
Surbhi Goel
A. Dimakis
68
39
0
21 Feb 2019
There are No Bit Parts for Sign Bits in Black-Box Attacks
Abdullah Al-Dujaili
Una-May O’Reilly
AAML
116
20
0
19 Feb 2019
Do ImageNet Classifiers Generalize to ImageNet?
Benjamin Recht
Rebecca Roelofs
Ludwig Schmidt
Vaishaal Shankar
OOD, SSeg, VLM
138
1,732
0
13 Feb 2019
When Causal Intervention Meets Adversarial Examples and Image Masking for Deep Neural Networks
Chao-Han Huck Yang
Yi-Chieh Liu
Pin-Yu Chen
Xiaoli Ma
Y. Tsai
BDL, AAML, CML
80
21
0
09 Feb 2019
On the security relevance of weights in deep learning
Kathrin Grosse
T. A. Trost
Marius Mosbach
Michael Backes
Dietrich Klakow
AAML
58
6
0
08 Feb 2019
Computational Limitations in Robust Classification and Win-Win Results
Akshay Degwekar
Preetum Nakkiran
Vinod Vaikuntanathan
67
39
0
04 Feb 2019
The Efficacy of SHIELD under Different Threat Models
Cory Cornelius
Nilaksh Das
Shang-Tse Chen
Li Chen
Michael E. Kounavis
Duen Horng Chau
AAML
71
11
0
01 Feb 2019
Optimal Attack against Autoregressive Models by Manipulating the Environment
Yiding Chen
Xiaojin Zhu
AAML
53
11
0
01 Feb 2019
Adversarial Examples Are a Natural Consequence of Test Error in Noise
Nic Ford
Justin Gilmer
Nicholas Carlini
E. D. Cubuk
AAML
132
320
0
29 Jan 2019
RED-Attack: Resource Efficient Decision based Attack for Machine Learning
Faiq Khalid
Hassan Ali
Muhammad Abdullah Hanif
Semeen Rehman
Rehan Ahmed
Mohamed Bennai
AAML
71
14
0
29 Jan 2019
Adversarial Attacks on Deep Learning Models in Natural Language Processing: A Survey
W. Zhang
Quan Z. Sheng
A. Alhazmi
Chenliang Li
AAML
114
57
0
21 Jan 2019
Explaining Vulnerabilities of Deep Learning to Adversarial Malware Binaries
Christian Scano
Battista Biggio
Giovanni Lagorio
Fabio Roli
A. Armando
AAML
80
131
0
11 Jan 2019
Characterizing and evaluating adversarial examples for Offline Handwritten Signature Verification
L. G. Hafemann
R. Sabourin
Luiz Eduardo Soares de Oliveira
AAML
55
44
0
10 Jan 2019
Towards resilient machine learning for ransomware detection
Li-Wei Chen
Chih-Yuan Yang
Anindya Paul
R. Sahita
AAML
36
22
0
21 Dec 2018
PROVEN: Certifying Robustness of Neural Networks with a Probabilistic Approach
Tsui-Wei Weng
Pin-Yu Chen
Lam M. Nguyen
M. Squillante
Ivan Oseledets
Luca Daniel
AAML
82
30
0
18 Dec 2018
Training Set Camouflage
Ayon Sen
Scott Alfeld
Xuezhou Zhang
Ara Vartanian
Yuzhe Ma
Xiaojin Zhu
13
7
0
13 Dec 2018
Adversarial Attacks, Regression, and Numerical Stability Regularization
A. Nguyen
Edward Raff
AAML
52
30
0
07 Dec 2018
The Limitations of Model Uncertainty in Adversarial Settings
Kathrin Grosse
David Pfaff
M. Smith
Michael Backes
AAML
63
34
0
06 Dec 2018
Interpretable Deep Learning under Fire
Xinyang Zhang
Ningfei Wang
Hua Shen
S. Ji
Xiapu Luo
Ting Wang
AAML, AI4CE
138
173
0
03 Dec 2018
Disentangling Adversarial Robustness and Generalization
David Stutz
Matthias Hein
Bernt Schiele
AAML, OOD
311
285
0
03 Dec 2018
Model-Reuse Attacks on Deep Learning Systems
Yujie Ji
Xinyang Zhang
S. Ji
Xiapu Luo
Ting Wang
SILM, AAML
189
187
0
02 Dec 2018
Discrete Adversarial Attacks and Submodular Optimization with Applications to Text Classification
Qi Lei
Lingfei Wu
Pin-Yu Chen
A. Dimakis
Inderjit S. Dhillon
Michael Witbrock
AAML
102
92
0
01 Dec 2018
CNN-Cert: An Efficient Framework for Certifying Robustness of Convolutional Neural Networks
Akhilan Boopathy
Tsui-Wei Weng
Pin-Yu Chen
Sijia Liu
Luca Daniel
AAML
158
138
0
29 Nov 2018
Bilateral Adversarial Training: Towards Fast Training of More Robust Models Against Adversarial Attacks
Jianyu Wang
Haichao Zhang
OOD, AAML
87
119
0
26 Nov 2018
Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses
Jérôme Rony
L. G. Hafemann
Luiz Eduardo Soares de Oliveira
Ismail Ben Ayed
R. Sabourin
Eric Granger
AAML
78
299
0
23 Nov 2018
Theoretical Analysis of Adversarial Learning: A Minimax Approach
Zhuozhuo Tu
Jingwei Zhang
Dacheng Tao
AAML
72
68
0
13 Nov 2018
An Optimal Control View of Adversarial Machine Learning
Xiaojin Zhu
AAML
55
25
0
11 Nov 2018
Can We Use Speaker Recognition Technology to Attack Itself? Enhancing Mimicry Attacks Using Automatic Target Speaker Selection
Tomi Kinnunen
Rosa González Hautamäki
Ville Vestman
Md. Sahidullah
70
5
0
09 Nov 2018
FAdeML: Understanding the Impact of Pre-Processing Noise Filtering on Adversarial Machine Learning
Faiq Khalid
Muhammad Abdullah Hanif
Semeen Rehman
Junaid Qadir
Mohamed Bennai
AAML
85
34
0
04 Nov 2018
Efficient Neural Network Robustness Certification with General Activation Functions
Huan Zhang
Tsui-Wei Weng
Pin-Yu Chen
Cho-Jui Hsieh
Luca Daniel
AAML
124
765
0
02 Nov 2018