Data-Free Hard-Label Robustness Stealing Attack
Xiaojian Yuan, Kejiang Chen, Wen Huang, Jie Zhang, Weiming Zhang, Neng H. Yu
arXiv 2312.05924, 10 December 2023. [AAML]
Papers citing "Data-Free Hard-Label Robustness Stealing Attack" (21 / 21 papers shown)
"Momentum Adversarial Distillation: Handling Large Distribution Shifts in Data-Free Knowledge Distillation." Kien Do, Hung Le, D. Nguyen, Dang Nguyen, Haripriya Harikumar, T. Tran, Santu Rana, Svetha Venkatesh. 21 Sep 2022.
"Improving Robustness using Generated Data." Sven Gowal, Sylvestre-Alvise Rebuffi, Olivia Wiles, Florian Stimberg, D. A. Calian, Timothy A. Mann. 18 Oct 2021.
"Preventing Catastrophic Forgetting and Distribution Mismatch in Knowledge Distillation via Synthetic Data." Kuluhan Binici, N. Pham, T. Mitra, K. Leman. 11 Aug 2021.
"Black-Box Dissector: Towards Erasing-based Hard-Label Model Stealing Attack." Yixu Wang, Jie Li, Hong Liu, Yan Wang, Yongjian Wu, Feiyue Huang, Rongrong Ji. 03 May 2021. [AAML]
"Robust Machine Learning Systems: Challenges, Current Trends, Perspectives, and the Road Ahead." Mohamed Bennai, Mahum Naseer, T. Theocharides, C. Kyrkou, O. Mutlu, Lois Orosa, Jungwook Choi. 04 Jan 2021. [OOD]
"Data-Free Model Extraction." Jean-Baptiste Truong, Pratyush Maini, R. Walls, Nicolas Papernot. 30 Nov 2020. [MIACV]
"Black-Box Ripper: Copying black-box models using generative evolutionary algorithms." Antonio Bărbălău, Adrian Cosma, Radu Tudor Ionescu, Marius Popescu. 21 Oct 2020. [MIACV, MLAU]
"MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation." Sanjay Kariyappa, A. Prakash, Moinuddin K. Qureshi. 06 May 2020. [AAML]
"Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks." Francesco Croce, Matthias Hein. 03 Mar 2020. [AAML]
"Zero-shot Knowledge Transfer via Adversarial Belief Matching." P. Micaelli, Amos Storkey. 23 May 2019.
"Zero-Shot Knowledge Distillation in Deep Networks." Gaurav Kumar Nayak, Konda Reddy Mopuri, Vaisakh Shaj, R. Venkatesh Babu, Anirban Chakraborty. 20 May 2019.
"Adversarially Robust Generalization Requires More Data." Ludwig Schmidt, Shibani Santurkar, Dimitris Tsipras, Kunal Talwar, Aleksander Madry. 30 Apr 2018. [OOD, AAML]
"MobileNetV2: Inverted Residuals and Linear Bottlenecks." Mark Sandler, Andrew G. Howard, Menglong Zhu, A. Zhmoginov, Liang-Chieh Chen. 13 Jan 2018.
"Data-Free Knowledge Distillation for Deep Neural Networks." Raphael Gontijo-Lopes, Stefano Fenu, Thad Starner. 19 Oct 2017.
"Towards Deep Learning Models Resistant to Adversarial Attacks." Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu. 19 Jun 2017. [SILM, OOD]
"Membership Inference Attacks against Machine Learning Models." Reza Shokri, M. Stronati, Congzheng Song, Vitaly Shmatikov. 18 Oct 2016. [SLR, MIALM, MIACV]
"Stealing Machine Learning Models via Prediction APIs." Florian Tramèr, Fan Zhang, Ari Juels, Michael K. Reiter, Thomas Ristenpart. 09 Sep 2016. [SILM, MLAU]
"Towards Evaluating the Robustness of Neural Networks." Nicholas Carlini, D. Wagner. 16 Aug 2016. [OOD, AAML]
16 Aug 2016
Wide Residual Networks
Sergey Zagoruyko
N. Komodakis
353
8,000
0
23 May 2016
"Rethinking the Inception Architecture for Computer Vision." Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, Z. Wojna. 02 Dec 2015. [3DV, BDL]
"Explaining and Harnessing Adversarial Examples." Ian Goodfellow, Jonathon Shlens, Christian Szegedy. 20 Dec 2014. [AAML, GAN]