Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning
Battista Biggio, Fabio Roli
8 December 2017
arXiv: 1712.03141
AAML

Papers citing "Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning"

Showing 50 of 590 citing papers.

The Feasibility and Inevitability of Stealth Attacks
I. Tyukin, D. Higham, Alexander Bastounis, Eliyas Woldegeorgis, Alexander N. Gorban
AAML
26 Jun 2021

Estimating the Robustness of Classification Models by the Structure of the Learned Feature-Space
Kalun Ho, Franz-Josef Pfreundt, J. Keuper, Margret Keuper
OOD, UQCV
23 Jun 2021

Adversarial Training Helps Transfer Learning via Better Representations
Zhun Deng, Linjun Zhang, Kailas Vodrahalli, Kenji Kawaguchi, James Zou
GAN
18 Jun 2021

Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples
Maura Pintor, Christian Scano, Angelo Sotgiu, Ambra Demontis, Nicholas Carlini, Battista Biggio, Fabio Roli
AAML
18 Jun 2021

Bad Characters: Imperceptible NLP Attacks
Nicholas Boucher, Ilia Shumailov, Ross J. Anderson, Nicolas Papernot
AAML, SILM
18 Jun 2021

Modeling Realistic Adversarial Attacks against Network Intrusion Detection Systems
Giovanni Apruzzese, M. Andreolini, Luca Ferretti, Mirco Marchetti, M. Colajanni
AAML
17 Jun 2021

Model Extraction and Adversarial Attacks on Neural Networks using Switching Power Information
Tommy Li, Cory E. Merkel
AAML
15 Jun 2021

Certification of embedded systems based on Machine Learning: A survey
Guillaume Vidot, Christophe Gabreau, I. Ober, Iulian Ober
14 Jun 2021

AdaMatch: A Unified Approach to Semi-Supervised Learning and Domain Adaptation
David Berthelot, Rebecca Roelofs, Kihyuk Sohn, Nicholas Carlini, Alexey Kurakin
08 Jun 2021

Markpainting: Adversarial Machine Learning meets Inpainting
David Khachaturov, Ilia Shumailov, Yiren Zhao, Nicolas Papernot, Ross J. Anderson
WIGM
01 Jun 2021

Gradient-based Data Subversion Attack Against Binary Classifiers
Rosni Vasu, Sanjay Seetharaman, Shubham Malaviya, Manish Shukla, S. Lodha
AAML
31 May 2021

A BIC-based Mixture Model Defense against Data Poisoning Attacks on Classifiers
Xi Li, David J. Miller, Zhen Xiang, G. Kesidis
AAML
28 May 2021

On the Robustness of Domain Constraints
Ryan Sheatsley, Blaine Hoak, Eric Pauley, Yohan Beugin, Mike Weisman, Patrick McDaniel
AAML, OOD
18 May 2021

Lightweight Distributed Gaussian Process Regression for Online Machine Learning
Zhenyuan Yuan, Minghui Zhu
11 May 2021

De-Pois: An Attack-Agnostic Defense against Data Poisoning Attacks
Jian Chen, Xuxin Zhang, Rui Zhang, Chen Wang, Ling Liu
AAML
08 May 2021

Topological Uncertainty: Monitoring trained neural networks through persistence of activation graphs
Théo Lacombe, Yuichi Ike, Mathieu Carrière, Frédéric Chazal, Marc Glisse, Yuhei Umeda
07 May 2021

On the Adversarial Robustness of Quantized Neural Networks
Micah Gorsline, James T. Smith, Cory E. Merkel
AAML
01 May 2021

Adversarial Example Detection for DNN Models: A Review and Experimental Comparison
Ahmed Aldahdooh, W. Hamidouche, Sid Ahmed Fezza, Olivier Déforges
AAML
01 May 2021

IPatch: A Remote Adversarial Patch
Yisroel Mirsky
AAML
30 Apr 2021

Influence Based Defense Against Data Poisoning Attacks in Online Learning
Sanjay Seetharaman, Shubham Malaviya, KV Rosni, Manish Shukla, S. Lodha
TDI, AAML
24 Apr 2021

Turning Federated Learning Systems Into Covert Channels
Gabriele Costa, Fabio Pinelli, S. Soderi, Gabriele Tolomei
FedML
21 Apr 2021

Prospective Artificial Intelligence Approaches for Active Cyber Defence
Neil Dhir, H. Hoeltgebaum, N. Adams, M. Briers, A. Burke, Paul Jones
AAML
20 Apr 2021

Provable Robustness of Adversarial Training for Learning Halfspaces with Noise
Difan Zou, Spencer Frei, Quanquan Gu
19 Apr 2021

Robust Learning Meets Generative Models: Can Proxy Distributions Improve Adversarial Robustness?
Vikash Sehwag, Saeed Mahloujifar, Tinashe Handina, Sihui Dai, Chong Xiang, M. Chiang, Prateek Mittal
OOD
19 Apr 2021

Lower Bounds on Cross-Entropy Loss in the Presence of Test-time Adversaries
A. Bhagoji, Daniel Cullina, Vikash Sehwag, Prateek Mittal
AAML, OOD
16 Apr 2021

Random and Adversarial Bit Error Robustness: Energy-Efficient and Secure DNN Accelerators
David Stutz, Nandhini Chandramoorthy, Matthias Hein, Bernt Schiele
AAML, MQ
16 Apr 2021

Evaluating Standard Feature Sets Towards Increased Generalisability and Explainability of ML-based Network Intrusion Detection
Mohanad Sarhan, S. Layeghy, Marius Portmann
15 Apr 2021

A Backdoor Attack against 3D Point Cloud Classifiers
Zhen Xiang, David J. Miller, Siheng Chen, Xi Li, G. Kesidis
3DPC, AAML
12 Apr 2021

Relating Adversarially Robust Generalization to Flat Minima
David Stutz, Matthias Hein, Bernt Schiele
OOD
09 Apr 2021

Adversarial Robustness Guarantees for Gaussian Processes
A. Patané, Arno Blaas, Luca Laurenti, L. Cardelli, Stephen J. Roberts, Marta Z. Kwiatkowska
GP, AAML
07 Apr 2021

Achieving Transparency Report Privacy in Linear Time
Chien-Lun Chen, L. Golubchik, R. Pal
31 Mar 2021

A Variational Inequality Approach to Bayesian Regression Games
Wenshuo Guo, Michael I. Jordan, Tianyi Lin
24 Mar 2021

Black-box Detection of Backdoor Attacks with Limited Information and Data
Yinpeng Dong, Xiao Yang, Zhijie Deng, Tianyu Pang, Zihao Xiao, Hang Su, Jun Zhu
AAML
24 Mar 2021

The Hammer and the Nut: Is Bilevel Optimization Really Needed to Poison Linear Classifiers?
Antonio Emanuele Cinà, Sebastiano Vascon, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo
AAML
23 Mar 2021

Explainable Adversarial Attacks in Deep Neural Networks Using Activation Profiles
G. Cantareira, R. Mello, F. Paulovich
AAML
18 Mar 2021

ReinforceBug: A Framework to Generate Adversarial Textual Examples
Bushra Sabir, M. Babar, R. Gaire
SILM, AAML
11 Mar 2021

Proof-of-Learning: Definitions and Practice
Hengrui Jia, Mohammad Yaghini, Christopher A. Choquette-Choo, Natalie Dullerud, Anvith Thudi, Varun Chandrasekaran, Nicolas Papernot
AAML
09 Mar 2021

Pseudo-labeling for Scalable 3D Object Detection
Benjamin Caine, Rebecca Roelofs, Vijay Vasudevan, Jiquan Ngiam, Yuning Chai, Zhiwen Chen, Jonathon Shlens
02 Mar 2021

Adversarial Examples can be Effective Data Augmentation for Unsupervised Machine Learning
Chia-Yi Hsu, Pin-Yu Chen, Songtao Lu, Sijia Liu, Chia-Mu Yu
AAML
02 Mar 2021

Fast Minimum-norm Adversarial Attacks through Adaptive Norm Constraints
Maura Pintor, Fabio Roli, Wieland Brendel, Battista Biggio
AAML
25 Feb 2021

Non-Singular Adversarial Robustness of Neural Networks
Yu-Lin Tsai, Chia-Yi Hsu, Chia-Mu Yu, Pin-Yu Chen
AAML, OOD
23 Feb 2021

Model-Based Domain Generalization
Alexander Robey, George J. Pappas, Hamed Hassani
OOD
23 Feb 2021

Universal Adversarial Examples and Perturbations for Quantum Classifiers
Weiyuan Gong, D. Deng
AAML
15 Feb 2021

Resilient Machine Learning for Networked Cyber Physical Systems: A Survey for Machine Learning Security to Securing Machine Learning for CPS
Felix O. Olowononi, D. Rawat, Chunmei Liu
14 Feb 2021

Realizable Universal Adversarial Perturbations for Malware
Raphael Labaca-Castro, Luis Muñoz-González, Feargus Pendlebury, Gabi Dreo Rodosek, Fabio Pierazzi, Lorenzo Cavallaro
AAML
12 Feb 2021

Fairness-Aware PAC Learning from Corrupted Data
Nikola Konstantinov, Christoph H. Lampert
11 Feb 2021

Defense Against Reward Poisoning Attacks in Reinforcement Learning
Kiarash Banihashem, Adish Singla, Goran Radanović
AAML
10 Feb 2021

Bayesian Inference with Certifiable Adversarial Robustness
Matthew Wicker, Luca Laurenti, A. Patané, Zhoutong Chen, Zheng Zhang, Marta Z. Kwiatkowska
AAML, BDL
10 Feb 2021

Better Safe Than Sorry: Preventing Delusive Adversaries with Adversarial Training
Lue Tao, Lei Feng, Jinfeng Yi, Sheng-Jun Huang, Songcan Chen
AAML
09 Feb 2021

Security and Privacy for Artificial Intelligence: Opportunities and Challenges
Ayodeji Oseni, Nour Moustafa, Helge Janicke, Peng Liu, Z. Tari, A. Vasilakos
AAML
09 Feb 2021