2302.02162
AUTOLYCUS: Exploiting Explainable AI (XAI) for Model Extraction Attacks against Interpretable Models
4 February 2023
Abdullah Çaglar Öksüz, Anisa Halimi, Erman Ayday
Tags: ELM, AAML
Papers citing "AUTOLYCUS: Exploiting Explainable AI (XAI) for Model Extraction Attacks against Interpretable Models" (8 papers):
1. DualCF: Efficient Model Extraction Attack from Counterfactual Explanations. Yongjie Wang, Hangwei Qian, Chunyan Miao. Tags: AAML. 13 May 2022.
2. MEGEX: Data-Free Model Extraction Attack against Gradient-Based Explainable AI. T. Miura, Satoshi Hasegawa, Toshiki Shibahara. Tags: SILM, MIACV. 19 Jul 2021.
3. Exploiting Explanations for Model Inversion Attacks. Xu Zhao, Wencan Zhang, Xiao Xiao, Brian Y. Lim. Tags: MIACV. 26 Apr 2021.
4. Label-Only Membership Inference Attacks. Christopher A. Choquette-Choo, Florian Tramèr, Nicholas Carlini, Nicolas Papernot. Tags: MIACV, MIALM. 28 Jul 2020.
5. Exploring Connections Between Active Learning and Model Extraction. Varun Chandrasekaran, Kamalika Chaudhuri, Irene Giacomelli, Shane Walker, Songbai Yan. Tags: MIACV. 5 Nov 2018.
6. Membership Inference Attacks against Machine Learning Models. Reza Shokri, M. Stronati, Congzheng Song, Vitaly Shmatikov. Tags: SLR, MIALM, MIACV. 18 Oct 2016.
7. Learning Deep Features for Discriminative Localization. Bolei Zhou, A. Khosla, Àgata Lapedriza, A. Oliva, Antonio Torralba. Tags: SSL, SSeg, FAtt. 14 Dec 2015.
8. Agnostic Active Learning Without Constraints. A. Beygelzimer, Daniel J. Hsu, John Langford, Tong Zhang. Tags: VLM. 14 Jun 2010.