"Real Attackers Don't Compute Gradients": Bridging the Gap Between
  Adversarial ML Research and Practice

"Real Attackers Don't Compute Gradients": Bridging the Gap Between Adversarial ML Research and Practice

29 December 2022
Giovanni Apruzzese, Hyrum S. Anderson, Savino Dambra, D. Freeman, Fabio Pierazzi, Kevin A. Roundy
Topics: AAML

Papers citing "Real Attackers Don't Compute Gradients": Bridging the Gap Between Adversarial ML Research and Practice

43 / 93 papers shown
  • Adversarial Machine Learning -- Industry Perspectives · Ramnath Kumar, Magnus Nyström, J. Lambert, Andrew Marshall, Mario Goertzel, Andi Comissoneru, Matt Swann, Sharon Xia · AAML, SILM · 04 Feb 2020
  • Humpty Dumpty: Controlling Word Meanings via Corpus Poisoning · R. Schuster, Tal Schuster, Yoav Meri, Vitaly Shmatikov · AAML · 14 Jan 2020
  • Fast is better than free: Revisiting adversarial training · Eric Wong, Leslie Rice, J. Zico Kolter · AAML, OOD · 12 Jan 2020
  • Analyzing Information Leakage of Updates to Natural Language Models · Santiago Zanella Béguelin, Lukas Wutschitz, Shruti Tople, Victor Rühle, Andrew Paverd, O. Ohrimenko, Boris Köpf, Marc Brockschmidt · ELM, MIACV, FedML, PILM, KELM · 17 Dec 2019
  • Who is Real Bob? Adversarial Attacks on Speaker Recognition Systems · Guangke Chen, Sen Chen, Lingling Fan, Xiaoning Du, Zhe Zhao, Fu Song, Yang Liu · AAML · 03 Nov 2019
Hear "No Evil", See "Kenansville": Efficient and Transferable Black-Box
  Attacks on Speech Recognition and Voice Identification Systems
Hear "No Evil", See "Kenansville": Efficient and Transferable Black-Box Attacks on Speech Recognition and Voice Identification Systems
H. Abdullah
Muhammad Sajidur Rahman
Washington Garcia
Logan Blue
Kevin Warren
Anurag Swarnim Yadav
T. Shrimpton
Patrick Traynor
AAML
50
89
0
11 Oct 2019
  • Detecting AI Trojans Using Meta Neural Analysis · Xiaojun Xu, Qi Wang, Huichen Li, Nikita Borisov, Carl A. Gunter, Yue Liu · 08 Oct 2019
  • MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples · Jinyuan Jia, Ahmed Salem, Michael Backes, Yang Zhang, Neil Zhenqiang Gong · 23 Sep 2019
  • High Accuracy and High Fidelity Extraction of Neural Networks · Matthew Jagielski, Nicholas Carlini, David Berthelot, Alexey Kurakin, Nicolas Papernot · MLAU, MIACV · 03 Sep 2019
  • Federated Learning: Challenges, Methods, and Future Directions · Tian Li, Anit Kumar Sahu, Ameet Talwalkar, Virginia Smith · FedML · 21 Aug 2019
  • Hybrid Batch Attacks: Finding Black-box Adversarial Examples with Limited Queries · Fnu Suya, Jianfeng Chi, David Evans, Yuan Tian · AAML · 19 Aug 2019
  • Demon in the Variant: Statistical Analysis of DNNs for Robust Backdoor Contamination Detection · Di Tang, Xiaofeng Wang, Haixu Tang, Kehuan Zhang · AAML · 02 Aug 2019
  • Stolen Memories: Leveraging Model Memorization for Calibrated White-Box Membership Inference · Klas Leino, Matt Fredrikson · MIACV · 27 Jun 2019
  • Quantitative Verification of Neural Networks And its Security Applications · Teodora Baluta, Shiqi Shen, Shweta Shinde, Kuldeep S. Meel, P. Saxena · AAML · 25 Jun 2019
  • Terminal Brain Damage: Exposing the Graceless Degradation in Deep Neural Networks Under Hardware Fault Attacks · Sanghyun Hong, Pietro Frigo, Yigitcan Kaya, Cristiano Giuffrida, Tudor Dumitras · AAML · 03 Jun 2019
  • Misleading Authorship Attribution of Source Code using Adversarial Learning · Erwin Quiring, Alwin Maier, Konrad Rieck · 29 May 2019
  • Privacy Risks of Securing Machine Learning Models against Adversarial Examples · Liwei Song, Reza Shokri, Prateek Mittal · SILM, MIACV, AAML · 24 May 2019
  • A critique of the DeepSec Platform for Security Analysis of Deep Learning Models · Nicholas Carlini · ELM · 17 May 2019
  • Adversarial Training for Free! · Ali Shafahi, Mahyar Najibi, Amin Ghiasi, Zheng Xu, John P. Dickerson, Christoph Studer, L. Davis, Gavin Taylor, Tom Goldstein · AAML · 29 Apr 2019
  • Gotta Catch 'Em All: Using Honeypots to Catch Adversarial Attacks on Neural Networks · Shawn Shan, Emily Wenger, Bolun Wang, Yangqiu Song, Haitao Zheng, Ben Y. Zhao · 18 Apr 2019
  • HopSkipJumpAttack: A Query-Efficient Decision-Based Attack · Jianbo Chen, Michael I. Jordan, Martin J. Wainwright · AAML · 03 Apr 2019
  • Updates-Leak: Data Set Inference and Reconstruction Attacks in Online Learning · A. Salem, Apratim Bhattacharyya, Michael Backes, Mario Fritz, Yang Zhang · FedML, AAML, MIACV · 01 Apr 2019
  • Attacking Graph-based Classification via Manipulating the Graph Structure · Binghui Wang, Neil Zhenqiang Gong · AAML · 01 Mar 2019
  • On Evaluating Adversarial Robustness · Nicholas Carlini, Anish Athalye, Nicolas Papernot, Wieland Brendel, Jonas Rauber, Dimitris Tsipras, Ian Goodfellow, Aleksander Madry, Alexey Kurakin · ELM, AAML · 18 Feb 2019
  • CT-GAN: Malicious Tampering of 3D Medical Imagery using Deep Learning · Yisroel Mirsky, Tom Mahler, I. Shelef, Yuval Elovici · MedIm · 11 Jan 2019
  • TextBugger: Generating Adversarial Text Against Real-world Applications · Jinfeng Li, S. Ji, Tianyu Du, Bo Li, Ting Wang · SILM, AAML · 13 Dec 2018
  • Interpretable Deep Learning under Fire · Xinyang Zhang, Ningfei Wang, Hua Shen, S. Ji, Xiapu Luo, Ting Wang · AAML, AI4CE · 03 Dec 2018
  • AdVersarial: Perceptual Ad Blocking meets Adversarial Machine Learning · K. Makarychev, Pascal Dupré, Yury Makarychev, Giancarlo Pellegrino, Dan Boneh · AAML · 08 Nov 2018
  • Exploring Connections Between Active Learning and Model Extraction · Varun Chandrasekaran, Kamalika Chaudhuri, Irene Giacomelli, Shane Walker, Songbai Yan · MIACV · 05 Nov 2018
  • Procedural Noise Adversarial Examples for Black-Box Attacks on Deep Convolutional Networks · Kenneth T. Co, Luis Muñoz-González, Sixte de Maupeou, Emil C. Lupu · AAML · 30 Sep 2018
  • Motivating the Rules of the Game for Adversarial Example Research · Justin Gilmer, Ryan P. Adams, Ian Goodfellow, David G. Andersen, George E. Dahl · AAML · 18 Jul 2018
  • Adversarial Perturbations Against Real-Time Video Classification Systems · Shasha Li, Ajaya Neupane, S. Paul, Chengyu Song, S. Krishnamurthy, Amit K. Roy-Chowdhury, A. Swami · AAML · 02 Jul 2018
  • ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models · A. Salem, Yang Zhang, Mathias Humbert, Pascal Berrang, Mario Fritz, Michael Backes · MIACV, MIALM · 04 Jun 2018
  • Wild Patterns: Ten Years After the Rise of Adversarial Machine Learning · Battista Biggio, Fabio Roli · AAML · 08 Dec 2017
  • Improving Robustness of ML Classifiers against Realizable Evasion Attacks Using Conserved Features · Liang Tong, Yue Liu, Chen Hajaj, Chaowei Xiao, Ning Zhang, Yevgeniy Vorobeychik · AAML, OOD · 28 Aug 2017
  • Evasion Attacks against Machine Learning at Test Time · Battista Biggio, Igino Corona, Davide Maiorca, B. Nelson, Nedim Srndic, Pavel Laskov, Giorgio Giacinto, Fabio Roli · AAML · 21 Aug 2017
  • Towards Deep Learning Models Resistant to Adversarial Attacks · Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu · SILM, OOD · 19 Jun 2017
  • Membership Inference Attacks against Machine Learning Models · Reza Shokri, M. Stronati, Congzheng Song, Vitaly Shmatikov · SLR, MIALM, MIACV · 18 Oct 2016
  • Stealing Machine Learning Models via Prediction APIs · Florian Tramèr, Fan Zhang, Ari Juels, Michael K. Reiter, Thomas Ristenpart · SILM, MLAU · 09 Sep 2016
  • Towards Evaluating the Robustness of Neural Networks · Nicholas Carlini, D. Wagner · OOD, AAML · 16 Aug 2016
  • Practical Black-Box Attacks against Machine Learning · Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, S. Jha, Z. Berkay Celik, A. Swami · MLAU, AAML · 08 Feb 2016
  • Intriguing properties of neural networks · Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, D. Erhan, Ian Goodfellow, Rob Fergus · AAML · 21 Dec 2013
  • Poisoning Attacks against Support Vector Machines · Battista Biggio, B. Nelson, Pavel Laskov · AAML · 27 Jun 2012