ResearchTrend.AI
"Real Attackers Don't Compute Gradients": Bridging the Gap Between Adversarial ML Research and Practice
29 December 2022
Giovanni Apruzzese, Hyrum S. Anderson, Savino Dambra, D. Freeman, Fabio Pierazzi, Kevin A. Roundy
AAML · arXiv:2212.14315

Papers citing "Real Attackers Don't Compute Gradients": Bridging the Gap Between Adversarial ML Research and Practice

50 / 93 papers shown

OverThink: Slowdown Attacks on Reasoning LLMs
A. Kumar, Jaechul Roh, A. Naseh, Marzena Karpinska, Mohit Iyyer, Amir Houmansadr, Eugene Bagdasarian
LRM · 04 Feb 2025

Defending against Adversarial Malware Attacks on ML-based Android Malware Detection Systems
Ping He, Lorenzo Cavallaro, Shouling Ji
AAML · 23 Jan 2025

Position: A taxonomy for reporting and describing AI security incidents
L. Bieringer, Kevin Paeth, Andreas Wespi, Kathrin Grosse, Alexandre Alahi
19 Dec 2024

From ML to LLM: Evaluating the Robustness of Phishing Webpage Detection Models against Adversarial Attacks
Aditya Kulkarni, Vivek Balachandran, D. Divakaran, Tamal Das
AAML · 29 Jul 2024

Evaluating the Effectiveness and Robustness of Visual Similarity-based Phishing Detection Models
Fujiao Ji, Kiho Lee, Hyungjoon Koo, Wenhao You, Euijin Choo, Hyoungshick Kim, Doowon Kim
AAML · 30 May 2024
Wild Networks: Exposure of 5G Network Infrastructures to Adversarial Examples
Giovanni Apruzzese, Rodion Vladimirov, A.T. Tastemirova, Pavel Laskov
AAML · 04 Jul 2022

Threat Assessment in Machine Learning based Systems
L. Tidjon, Foutse Khomh
30 Jun 2022

The Role of Machine Learning in Cybersecurity
Giovanni Apruzzese, Pavel Laskov, Edgardo Montes de Oca, Wissam Mallouli, Luis Brdalo Rapa, A. Grammatopoulos, Fabio Di Franco
20 Jun 2022

Post-breach Recovery: Protection against White-box Adversarial Examples for Leaked DNN Models
Shawn Shan, Wen-Luan Ding, Emily Wenger, Haitao Zheng, Ben Y. Zhao
AAML · 21 May 2022

SoK: The Impact of Unlabelled Data in Cyberthreat Detection
Giovanni Apruzzese, Pavel Laskov, A.T. Tastemirova
18 May 2022

Concept-based Adversarial Attacks: Tricking Humans and Classifiers Alike
Johannes Schneider, Giovanni Apruzzese
AAML · 18 Mar 2022
Black-box Adversarial Attacks on Commercial Speech Platforms with Minimal Information
Baolin Zheng, Peipei Jiang, Qian Wang, Qi Li, Chao Shen, Cong Wang, Yunjie Ge, Qingyang Teng, Shenyi Zhang
AAML · 19 Oct 2021

Poison Forensics: Traceback of Data Poisoning Attacks in Neural Networks
Shawn Shan, A. Bhagoji, Haitao Zheng, Ben Y. Zhao
AAML · 13 Oct 2021

A Framework for Cluster and Classifier Evaluation in the Absence of Reference Labels
R. Joyce, Edward Raff, Charles K. Nicholas
23 Sep 2021

A Hard Label Black-box Adversarial Attack Against Graph Neural Networks
Jiaming Mu, Binghui Wang, Qi Li, Kun Sun, Mingwei Xu, Zhuotao Liu
AAML · 21 Aug 2021

Accuracy on the Line: On the Strong Correlation Between Out-of-Distribution and In-Distribution Generalization
John Miller, Rohan Taori, Aditi Raghunathan, Shiori Sagawa, Pang Wei Koh, Vaishaal Shankar, Percy Liang, Y. Carmon, Ludwig Schmidt
OODD, OOD · 09 Jul 2021
Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples
Maura Pintor, Christian Scano, Angelo Sotgiu, Ambra Demontis, Nicholas Carlini, Battista Biggio, Fabio Roli
AAML · 18 Jun 2021

Modeling Realistic Adversarial Attacks against Network Intrusion Detection Systems
Giovanni Apruzzese, M. Andreolini, Luca Ferretti, Mirco Marchetti, M. Colajanni
AAML · 17 Jun 2021

On the Robustness of Domain Constraints
Ryan Sheatsley, Blaine Hoak, Eric Pauley, Yohan Beugin, Mike Weisman, Patrick McDaniel
AAML, OOD · 18 May 2021

Poisoning the Unlabeled Dataset of Semi-Supervised Learning
Nicholas Carlini
AAML · 04 May 2021

Fall of Giants: How popular text-based MLaaS fall against a simple evasion attack
Luca Pajola, Mauro Conti
13 Apr 2021

T-Miner: A Generative Approach to Defend Against Trojan Attacks on DNN-based Text Classification
A. Azizi, I. A. Tahmid, Asim Waheed, Neal Mangaokar, Jiameng Pu, M. Javed, Chandan K. Reddy, Bimal Viswanath
AAML · 07 Mar 2021
WaveGuard: Understanding and Mitigating Audio Adversarial Examples
Shehzeen Samarah Hussain, Paarth Neekhara, Shlomo Dubnov, Julian McAuley, F. Koushanfar
AAML · 04 Mar 2021

Dompteur: Taming Audio Adversarial Examples
Thorsten Eisenhofer, Lea Schonherr, Joel Frank, Lars Speckemeier, D. Kolossa, Thorsten Holz
AAML · 10 Feb 2021

Quantifying and Mitigating Privacy Risks of Contrastive Learning
Xinlei He, Yang Zhang
08 Feb 2021

Robust Adversarial Attacks Against DNN-Based Wireless Communication Systems
Alireza Bahramali, Milad Nasr, Amir Houmansadr, Dennis Goeckel, Don Towsley
AAML · 01 Feb 2021

Adversary Instantiation: Lower Bounds for Differentially Private Machine Learning
Milad Nasr, Shuang Song, Abhradeep Thakurta, Nicolas Papernot, Nicholas Carlini
MIACV, FedML · 11 Jan 2021

Data Poisoning Attacks to Deep Learning Based Recommender Systems
Hai Huang, Jiaming Mu, Neil Zhenqiang Gong, Qi Li, Bin Liu, Mingwei Xu
AAML · 07 Jan 2021

Practical Blind Membership Inference Attack via Differential Comparisons
Bo Hui, Yuchen Yang, Haolin Yuan, Philippe Burlina, Neil Zhenqiang Gong, Yinzhi Cao
MIACV · 05 Jan 2021

FLTrust: Byzantine-robust Federated Learning via Trust Bootstrapping
Xiaoyu Cao, Minghong Fang, Jia Liu, Neil Zhenqiang Gong
FedML · 27 Dec 2020
Extracting Training Data from Large Language Models
Nicholas Carlini, Florian Tramèr, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, ..., Tom B. Brown, Basel Alomair, Ulfar Erlingsson, Alina Oprea, Colin Raffel
MLAU, SILM · 14 Dec 2020

Practical No-box Adversarial Attacks against DNNs
Qizhang Li, Yiwen Guo, Hao Chen
AAML · 04 Dec 2020

Deep-Dup: An Adversarial Weight Duplication Attack Framework to Crush Deep Neural Network in Multi-Tenant FPGA
Adnan Siraj Rakin, Yukui Luo, Xiaolin Xu, Deliang Fan
AAML · 05 Nov 2020

Against All Odds: Winning the Defense Challenge in an Evasion Competition with Diversification
Erwin Quiring, Lukas Pirch, Michael Reimsbach, Dan Arp, Konrad Rieck
AAML · 19 Oct 2020

DeepDyve: Dynamic Verification for Deep Neural Networks
Yu Li, Min Li, Bo Luo, Ye Tian, Qiang Xu
AAML · 21 Sep 2020

Adversarial Watermarking Transformer: Towards Tracing Text Provenance with Data Hiding
Sahar Abdelnabi, Mario Fritz
WaLM · 07 Sep 2020

SIGL: Securing Software Installations Through Deep Graph Learning
Xueyuan Han, Xiao Yu, Thomas Pasquier, Ding Li, J. Rhee, James W. Mickens, Margo Seltzer, Haifeng Chen
26 Aug 2020

Membership Leakage in Label-Only Exposures
Zheng Li, Yang Zhang
30 Jul 2020
SoK: The Faults in our ASRs: An Overview of Attacks against Automatic Speech Recognition and Speaker Identification Systems
H. Abdullah, Kevin Warren, Vincent Bindschaedler, Nicolas Papernot, Patrick Traynor
AAML · 13 Jul 2020

SLAP: Improving Physical Adversarial Examples with Short-Lived Adversarial Perturbations
Giulio Lovisotto, H.C.M. Turner, Ivo Sluganovic, Martin Strohmeier, Ivan Martinovic
AAML · 08 Jul 2020

You Autocomplete Me: Poisoning Vulnerabilities in Neural Code Completion
R. Schuster, Congzheng Song, Eran Tromer, Vitaly Shmatikov
SILM, AAML · 05 Jul 2020

Hermes Attack: Steal DNN Models with Lossless Inference Accuracy
Yuankun Zhu, Yueqiang Cheng, Husheng Zhou, Yantao Lu
MIACV, AAML · 23 Jun 2020

Graph Backdoor
Zhaohan Xi, Ren Pang, S. Ji, Ting Wang
AI4CE, AAML · 21 Jun 2020

Blind Backdoors in Deep Learning Models
Eugene Bagdasaryan, Vitaly Shmatikov
AAML, FedML, SILM · 08 May 2020

When Machine Unlearning Jeopardizes Privacy
Min Chen, Zhikun Zhang, Tianhao Wang, Michael Backes, Mathias Humbert, Yang Zhang
MIACV · 05 May 2020

Information Leakage in Embedding Models
Congzheng Song, A. Raghunathan
MIACV · 31 Mar 2020

Systematic Evaluation of Privacy Risks of Machine Learning Models
Liwei Song, Prateek Mittal
MIACV · 24 Mar 2020

TSS: Transformation-Specific Smoothing for Robustness Certification
Linyi Li, Maurice Weber, Xiaojun Xu, Luka Rimanic, B. Kailkhura, Tao Xie, Ce Zhang, Yue Liu
AAML · 27 Feb 2020

Entangled Watermarks as a Defense against Model Extraction
Hengrui Jia, Christopher A. Choquette-Choo, Varun Chandrasekaran, Nicolas Papernot
WaLM, AAML · 27 Feb 2020

Mind Your Weight(s): A Large-scale Study on Insufficient Machine Learning Model Protection in Mobile Apps
Zhichuang Sun, Ruimin Sun, Long Lu, Alan Mislove
18 Feb 2020