Post-breach Recovery: Protection against White-box Adversarial Examples for Leaked DNN Models
Shawn Shan, Wenxin Ding, Emily Wenger, Haitao Zheng, Ben Y. Zhao
21 May 2022 · arXiv:2205.10686 · AAML
Papers citing "Post-breach Recovery: Protection against White-box Adversarial Examples for Leaked DNN Models" (4 of 4 papers shown)
"Real Attackers Don't Compute Gradients": Bridging the Gap Between Adversarial ML Research and Practice
Giovanni Apruzzese, Hyrum S. Anderson, Savino Dambra, D. Freeman, Fabio Pierazzi, Kevin A. Roundy
AAML
29 Dec 2022
Poison Forensics: Traceback of Data Poisoning Attacks in Neural Networks
Shawn Shan, A. Bhagoji, Haitao Zheng, Ben Y. Zhao
AAML
13 Oct 2021
Adversarial Attack across Datasets
Yunxiao Qin, Yuanhao Xiong, Jinfeng Yi, Lihong Cao, Cho-Jui Hsieh
AAML
13 Oct 2021
Adversarial examples in the physical world
Alexey Kurakin, Ian Goodfellow, Samy Bengio
SILM · AAML
08 Jul 2016