ResearchTrend.AI

Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning
8 December 2017
Battista Biggio, Fabio Roli
arXiv: 1712.03141 [AAML]

Papers citing "Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning"

50 / 590 papers shown
Do Adversarially Robust ImageNet Models Transfer Better? (16 Jul 2020)
Hadi Salman, Andrew Ilyas, Logan Engstrom, Ashish Kapoor, Aleksander Madry

A Survey of Privacy Attacks in Machine Learning (15 Jul 2020)
M. Rigaki, Sebastian Garcia [PILM, AAML]

Adversarial Examples and Metrics (14 Jul 2020)
Nico Döttling, Kathrin Grosse, Michael Backes, Ian Molloy [AAML]

Improved Detection of Adversarial Images Using Deep Neural Networks (10 Jul 2020)
Yutong Gao, Yi-Lun Pan [AAML]

Certifying Decision Trees Against Evasion Attacks by Program Analysis (06 Jul 2020)
Stefano Calzavara, Pietro Ferrara, Claudio Lucchese [AAML]

On Data Augmentation and Adversarial Risk: An Empirical Analysis (06 Jul 2020)
Hamid Eghbalzadeh, Khaled Koutini, Paul Primus, Verena Haunschmid, Michal Lewandowski, Werner Zellinger, Bernhard A. Moser, Gerhard Widmer [AAML]

Understanding and Improving Fast Adversarial Training (06 Jul 2020)
Maksym Andriushchenko, Nicolas Flammarion [AAML]

Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain (05 Jul 2020)
Ishai Rosenberg, A. Shabtai, Yuval Elovici, Lior Rokach [AAML]

Adversarial Deep Ensemble: Evasion Attacks and Defenses for Malware Detection (30 Jun 2020)
Deqiang Li, Qianmu Li [AAML]

Best-Effort Adversarial Approximation of Black-Box Malware Classifiers (28 Jun 2020)
A. Ali, Birhanu Eshete [AAML]

Bit Error Robustness for Energy-Efficient DNN Accelerators (24 Jun 2020)
David Stutz, Nandhini Chandramoorthy, Matthias Hein, Bernt Schiele [MQ]

X-ModalNet: A Semi-Supervised Deep Cross-Modal Network for Classification of Remote Sensing Data (24 Jun 2020)
Danfeng Hong, Naoto Yokoya, Gui-Song Xia, J. Chanussot, X. Zhu

With Great Dispersion Comes Greater Resilience: Efficient Poisoning Attacks and Defenses for Linear Regression Models (21 Jun 2020)
Jialin Wen, Benjamin Zi Hao Zhao, Minhui Xue, Alina Oprea, Hai-feng Qian [AAML]

Graph Backdoor (21 Jun 2020)
Zhaohan Xi, Ren Pang, S. Ji, Ting Wang [AI4CE, AAML]

The Dilemma Between Data Transformations and Adversarial Robustness for Time Series Application Systems (18 Jun 2020)
Sheila Alemany, N. Pissinou [AAML]

A Survey of Machine Learning Methods and Challenges for Windows Malware Classification (15 Jun 2020)
Edward Raff, Charles K. Nicholas [AAML]

Improving Adversarial Robustness via Unlabeled Out-of-Domain Data (15 Jun 2020)
Zhun Deng, Linjun Zhang, Amirata Ghorbani, James Zou

Defending SVMs against Poisoning Attacks: the Hardness and DBSCAN Approach (14 Jun 2020)
Hu Ding, Fan Yang, Jiawei Huang [AAML]

Backdoor Smoothing: Demystifying Backdoor Attacks on Deep Neural Networks (11 Jun 2020)
Kathrin Grosse, Taesung Lee, Battista Biggio, Youngja Park, Michael Backes, Ian Molloy [AAML]

Adversarial Attack Vulnerability of Medical Image Analysis Systems: Unexplored Factors (11 Jun 2020)
Gerda Bortsova, C. González-Gonzalo, S. Wetstein, Florian Dubost, Ioannis Katramados, ..., Bram van Ginneken, J. Pluim, M. Veta, Clara I. Sánchez, Marleen de Bruijne [AAML, MedIm]

Trade-offs between membership privacy & adversarially robust learning (08 Jun 2020)
Jamie Hayes [SILM]

Consistency Regularization for Certified Robustness of Smoothed Classifiers (07 Jun 2020)
Jongheon Jeong, Jinwoo Shin [AAML]

Unique properties of adversarially trained linear classifiers on Gaussian data (06 Jun 2020)
Jamie Hayes [AAML]

Domain Knowledge Alleviates Adversarial Attacks in Multi-Label Classifiers (06 Jun 2020)
S. Melacci, Gabriele Ciravegna, Angelo Sotgiu, Ambra Demontis, Battista Biggio, Marco Gori, Fabio Roli

Generating Artificial Outliers in the Absence of Genuine Ones -- a Survey (05 Jun 2020)
Georg Steinbuss, Klemens Böhm

Sponge Examples: Energy-Latency Attacks on Neural Networks (05 Jun 2020)
Ilia Shumailov, Yiren Zhao, Daniel Bates, Nicolas Papernot, Robert D. Mullins, Ross J. Anderson [SILM]

Rethinking Empirical Evaluation of Adversarial Robustness Using First-Order Attack Methods (01 Jun 2020)
Kyungmi Lee, A. Chandrakasan [ELM, AAML]

Keyed Non-Parametric Hypothesis Tests (25 May 2020)
Yao Cheng, Cheng-Kang Chu, Hsiao-Ying Lin, Marius Lombard-Platet, D. Naccache [AAML]

Adversarial Attack on Hierarchical Graph Pooling Neural Networks (23 May 2020)
Haoteng Tang, Guixiang Ma, Yurong Chen, Lei Guo, Wei Wang, Bo Zeng, Liang Zhan [AAML]

Reliability and Robustness analysis of Machine Learning based Phishing URL Detectors (18 May 2020)
Bushra Sabir, Muhammad Ali Babar, R. Gaire, A. Abuadbba [AAML]

Encryption Inspired Adversarial Defense for Visual Classification (16 May 2020)
Maungmaung Aprilpyone, Hitoshi Kiya

Ethical Adversaries: Towards Mitigating Unfairness with Adversarial Machine Learning (14 May 2020)
Pieter Delobelle, Paul Temple, Gilles Perrouin, Benoit Frénay, P. Heymans, Bettina Berendt [AAML, FaML]

Spanning Attack: Reinforce Black-box Attacks with Unlabeled Data (11 May 2020)
Lu Wang, Huan Zhang, Jinfeng Yi, Cho-Jui Hsieh, Yuan Jiang [AAML]

Blind Backdoors in Deep Learning Models (08 May 2020)
Eugene Bagdasaryan, Vitaly Shmatikov [AAML, FedML, SILM]

Adversarial Training against Location-Optimized Adversarial Patches (05 May 2020)
Sukrut Rao, David Stutz, Bernt Schiele [AAML]

Do Gradient-based Explanations Tell Anything About Adversarial Robustness to Android Malware? (04 May 2020)
Marco Melis, Michele Scalas, Ambra Demontis, Davide Maiorca, Battista Biggio, Giorgio Giacinto, Fabio Roli [AAML, FAtt]

Bridging Mode Connectivity in Loss Landscapes and Adversarial Robustness (30 Apr 2020)
Pu Zhao, Pin-Yu Chen, Payel Das, Karthikeyan N. Ramamurthy, Xue Lin [AAML]

Bias Busters: Robustifying DL-based Lithographic Hotspot Detectors Against Backdooring Attacks (26 Apr 2020)
Kang Liu, Benjamin Tan, Gaurav Rajavendra Reddy, S. Garg, Yiorgos Makris, Ramesh Karri [AAML]

A Black-box Adversarial Attack Strategy with Adjustable Sparsity and Generalizability for Deep Image Classifiers (24 Apr 2020)
Arka Ghosh, S. S. Mullick, Shounak Datta, Swagatam Das, R. Mallipeddi, A. Das [AAML]

Systematic Evaluation of Backdoor Data Poisoning Attacks on Image Classifiers (24 Apr 2020)
Loc Truong, Chace Jones, Brian Hutchinson, Andrew August, Brenda Praggastis, Robert J. Jasper, Nicole Nichols, Aaron Tuor [AAML]

Probabilistic Safety for Bayesian Neural Networks (21 Apr 2020)
Matthew Wicker, Luca Laurenti, A. Patané, Marta Z. Kwiatkowska [AAML]

The Attacker's Perspective on Automatic Speaker Verification: An Overview (19 Apr 2020)
Rohan Kumar Das, Xiaohai Tian, Tomi Kinnunen, Haizhou Li [AAML]

Protecting Classifiers From Attacks. A Bayesian Approach (18 Apr 2020)
Víctor Gallego, Roi Naveiro, A. Redondo, D. Insua, Fabrizio Ruggeri [AAML]

Poisoning Attacks on Algorithmic Fairness (15 Apr 2020)
David Solans, Battista Biggio, Carlos Castillo [AAML]

Feature Partitioning for Robust Tree Ensembles and their Certification in Adversarial Scenarios (07 Apr 2020)
Stefano Calzavara, Claudio Lucchese, Federico Marcuzzi, S. Orlando [AAML]

Adversarial Genetic Programming for Cyber Security: A Rising Application Domain Where GP Matters (07 Apr 2020)
Una-May O’Reilly, J. Toutouh, M. Pertierra, Daniel Prado Sanchez, Dennis Garcia, Anthony Erb Luogo, Jonathan Kelly, Erik Hemberg [SILM, AAML]

Functionality-preserving Black-box Optimization of Adversarial Windows Malware (30 Mar 2020)
Christian Scano, Battista Biggio, Giovanni Lagorio, Fabio Roli, A. Armando [AAML]

Policy Teaching via Environment Poisoning: Training-time Adversarial Attacks against Reinforcement Learning (28 Mar 2020)
Amin Rakhsha, Goran Radanović, R. Devidze, Xiaojin Zhu, Adish Singla [AAML, OffRL]

A Separation Result Between Data-oblivious and Data-aware Poisoning Attacks (26 Mar 2020)
Samuel Deng, Sanjam Garg, S. Jha, Saeed Mahloujifar, Mohammad Mahmoody, Abhradeep Thakurta

Vulnerabilities of Connectionist AI Applications: Evaluation and Defence (18 Mar 2020)
Christian Berghoff, Matthias Neu, Arndt von Twickel [AAML]