Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning

8 December 2017
Battista Biggio
Fabio Roli
    AAML

Papers citing "Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning"

50 / 590 papers shown
On managing vulnerabilities in AI/ML systems
Jonathan M. Spring
April Galyardt
A. Householder
Nathan M. VanHoudnos
64
19
0
22 Jan 2021
Heating up decision boundaries: isocapacitory saturation, adversarial scenarios and generalization bounds
B. Georgiev
L. Franken
Mayukh Mukherjee
AAML
31
1
0
15 Jan 2021
Towards a Robust and Trustworthy Machine Learning System Development: An Engineering Perspective
Pulei Xiong
Scott Buffett
Shahrear Iqbal
Philippe Lamontagne
M. Mamun
Heather Molyneaux
OOD
81
15
0
08 Jan 2021
The Effect of Prior Lipschitz Continuity on the Adversarial Robustness of Bayesian Neural Networks
Arno Blaas
Stephen J. Roberts
BDL, AAML
85
2
0
07 Jan 2021
Robust Machine Learning Systems: Challenges, Current Trends, Perspectives, and the Road Ahead
Mohamed Bennai
Mahum Naseer
T. Theocharides
C. Kyrkou
O. Mutlu
Lois Orosa
Jungwook Choi
OOD
139
101
0
04 Jan 2021
With False Friends Like These, Who Can Notice Mistakes?
Lue Tao
Lei Feng
Jinfeng Yi
Songcan Chen
AAML
70
6
0
29 Dec 2020
Characterizing the Evasion Attackability of Multi-label Classifiers
Zhuo Yang
Yufei Han
Xiangliang Zhang
AAML
38
10
0
17 Dec 2020
Machine Learning for Detecting Data Exfiltration: A Review
Bushra Sabir
Faheem Ullah
M. Babar
R. Gaire
AAML
70
33
0
17 Dec 2020
Mitigating Bias in Calibration Error Estimation
Rebecca Roelofs
Nicholas Cain
Jonathon Shlens
Michael C. Mozer
100
95
0
15 Dec 2020
Developing Future Human-Centered Smart Cities: Critical Analysis of Smart City Security, Interpretability, and Ethical Challenges
Kashif Ahmad
Majdi Maabreh
M. Ghaly
Khalil Khan
Junaid Qadir
Ala I. Al-Fuqaha
115
157
0
14 Dec 2020
Risk Management Framework for Machine Learning Security
J. Breier
A. Baldwin
H. Balinsky
Yang Liu
AAML
31
3
0
09 Dec 2020
TrollHunter [Evader]: Automated Detection [Evasion] of Twitter Trolls During the COVID-19 Pandemic
Peter Jachim
Filipo Sharevski
Paige Treebridge
32
27
0
04 Dec 2020
Use the Spear as a Shield: A Novel Adversarial Example based Privacy-Preserving Technique against Membership Inference Attacks
Mingfu Xue
Chengxiang Yuan
Can He
Zhiyu Wu
Yushu Zhang
Zhe Liu
Weiqiang Liu
MIACV
16
12
0
27 Nov 2020
Rethinking Uncertainty in Deep Learning: Whether and How it Improves Robustness
Yilun Jin
Lixin Fan
Kam Woh Ng
Ce Ju
Qiang Yang
AAML, OOD
27
1
0
27 Nov 2020
Simple statistical methods for unsupervised brain anomaly detection on MRI are competitive to deep learning methods
Victor Saase
H. Wenz
T. Ganslandt
C. Groden
M. Maros
21
5
0
25 Nov 2020
Omni: Automated Ensemble with Unexpected Models against Adversarial Evasion Attack
Rui Shu
Tianpei Xia
Laurie A. Williams
Tim Menzies
AAML
70
16
0
23 Nov 2020
Policy Teaching in Reinforcement Learning via Environment Poisoning Attacks
Amin Rakhsha
Goran Radanović
R. Devidze
Xiaojin Zhu
Adish Singla
AAML, OffRL
87
29
0
21 Nov 2020
Challenges in Deploying Machine Learning: a Survey of Case Studies
Andrei Paleyes
Raoul-Gabriel Urma
Neil D. Lawrence
71
409
0
18 Nov 2020
Adversarially Robust Classification based on GLRT
Bhagyashree Puranik
Upamanyu Madhow
Ramtin Pedarsani
VLM, AAML
58
4
0
16 Nov 2020
Getting Passive Aggressive About False Positives: Patching Deployed Malware Detectors
Edward Raff
Bobby Filar
James Holt
85
7
0
22 Oct 2020
Optimism in the Face of Adversity: Understanding and Improving Deep Learning through Adversarial Robustness
Guillermo Ortiz-Jiménez
Apostolos Modas
Seyed-Mohsen Moosavi-Dezfooli
P. Frossard
AAML
121
48
0
19 Oct 2020
Against All Odds: Winning the Defense Challenge in an Evasion Competition with Diversification
Erwin Quiring
Lukas Pirch
Michael Reimsbach
Dan Arp
Konrad Rieck
AAML
45
13
0
19 Oct 2020
FADER: Fast Adversarial Example Rejection
Francesco Crecchi
Marco Melis
Angelo Sotgiu
D. Bacciu
Battista Biggio
AAML
57
15
0
18 Oct 2020
Mischief: A Simple Black-Box Attack Against Transformer Architectures
Adrian de Wynter
AAML
74
1
0
16 Oct 2020
Reverse Engineering Imperceptible Backdoor Attacks on Deep Neural Networks for Detection and Training Set Cleansing
Zhen Xiang
David J. Miller
G. Kesidis
81
23
0
15 Oct 2020
Toward Few-step Adversarial Training from a Frequency Perspective
H. Wang
Cory Cornelius
Brandon Edwards
Jason Martin
AAML
43
4
0
13 Oct 2020
Diagnosing and Preventing Instabilities in Recurrent Video Processing
T. Tanay
Aivar Sootla
Matteo Maggioni
P. Dokania
Philip Torr
A. Leonardis
Greg Slabaugh
66
7
0
10 Oct 2020
Transcending Transcend: Revisiting Malware Classification in the Presence of Concept Drift
Federico Barbero
Feargus Pendlebury
Fabio Pierazzi
Lorenzo Cavallaro
81
75
0
08 Oct 2020
Decamouflage: A Framework to Detect Image-Scaling Attacks on Convolutional Neural Networks
Bedeuro Kim
A. Abuadbba
Yansong Gao
Yifeng Zheng
Muhammad Ejaz Ahmed
Hyoungshick Kim
Surya Nepal
22
4
0
08 Oct 2020
Assessing Robustness of Text Classification through Maximal Safe Radius Computation
Emanuele La Malfa
Min Wu
Luca Laurenti
Benjie Wang
Anthony Hartshorn
Marta Z. Kwiatkowska
AAML
70
18
0
01 Oct 2020
Geometric Disentanglement by Random Convex Polytopes
M. Joswig
M. Kaluba
Lukas Ruff
65
3
0
29 Sep 2020
Advancing the Research and Development of Assured Artificial Intelligence and Machine Learning Capabilities
Tyler J. Shipp
Daniel Clouse
Michael J. De Lucia
Metin B. Ahiskali
Kai Steverson
Jonathan M. Mullin
Nathaniel D. Bastian
21
4
0
24 Sep 2020
A Unifying Review of Deep and Shallow Anomaly Detection
Lukas Ruff
Jacob R. Kauffmann
Robert A. Vandermeulen
G. Montavon
Wojciech Samek
Marius Kloft
Thomas G. Dietterich
Klaus-Robert Müller
UQCV
148
806
0
24 Sep 2020
Pocket Diagnosis: Secure Federated Learning against Poisoning Attack in the Cloud
Zhuo Ma
Jianfeng Ma
Yinbin Miao
Ximeng Liu
K. Choo
R. Deng
FedML
118
33
0
23 Sep 2020
Adversarial Concept Drift Detection under Poisoning Attacks for Robust Data Stream Mining
Lukasz Korycki
Bartosz Krawczyk
AAML
123
23
0
20 Sep 2020
The Intriguing Relation Between Counterfactual Explanations and Adversarial Examples
Timo Freiesleben
GAN
102
64
0
11 Sep 2020
A black-box adversarial attack for poisoning clustering
Antonio Emanuele Cinà
Alessandro Torcinovich
Marcello Pelillo
AAML
121
41
0
09 Sep 2020
SoK: Certified Robustness for Deep Neural Networks
Linyi Li
Tao Xie
Yue Liu
AAML
123
131
0
09 Sep 2020
Fuzzy Unique Image Transformation: Defense Against Adversarial Attacks On Deep COVID-19 Models
A. Tripathi
Ashish Mishra
AAML, MedIm
40
10
0
08 Sep 2020
Detection Defense Against Adversarial Attacks with Saliency Map
Dengpan Ye
Chuanxi Chen
Changrui Liu
Hao Wang
Shunzhi Jiang
AAML
57
28
0
06 Sep 2020
Examining Machine Learning for 5G and Beyond through an Adversarial Lens
Muhammad Usama
Rupendra Nath Mitra
Inaam Ilahi
Junaid Qadir
M. Marina
AAML
46
25
0
05 Sep 2020
Practical Cross-modal Manifold Alignment for Grounded Language
A. Nguyen
Luke E. Richards
Gaoussou Youssouf Kebe
Edward Raff
Kasra Darvish
Frank Ferraro
Cynthia Matuszek
20
4
0
01 Sep 2020
Adversarially Robust Learning via Entropic Regularization
Gauri Jagatap
Ameya Joshi
A. B. Chowdhury
S. Garg
Chinmay Hegde
OOD
125
11
0
27 Aug 2020
Avoiding Negative Side Effects due to Incomplete Knowledge of AI Systems
Sandhya Saisubramanian
S. Zilberstein
Ece Kamar
99
22
0
24 Aug 2020
Defending Regression Learners Against Poisoning Attacks
Sandamal Weerasinghe
S. Erfani
T. Alpcan
C. Leckie
Justin Kopacz
AAML
23
0
0
21 Aug 2020
Extrapolating false alarm rates in automatic speaker verification
A. Sholokhov
Tomi Kinnunen
Ville Vestman
Kong Aik Lee
42
1
0
08 Aug 2020
Adversarial Examples on Object Recognition: A Comprehensive Survey
A. Serban
E. Poll
Joost Visser
AAML
118
73
0
07 Aug 2020
Trojaning Language Models for Fun and Profit
Xinyang Zhang
Zheng Zhang
Shouling Ji
Ting Wang
SILM, AAML
98
140
0
01 Aug 2020
A General Framework For Detecting Anomalous Inputs to DNN Classifiers
Jayaram Raghuram
Varun Chandrasekaran
S. Jha
Suman Banerjee
AAML
106
35
0
29 Jul 2020
Transfer Learning without Knowing: Reprogramming Black-box Machine Learning Models with Scarce Data and Limited Resources
Yun-Yun Tsai
Pin-Yu Chen
Tsung-Yi Ho
AAML, MLAU, BDL
82
99
0
17 Jul 2020