Fawkes: Protecting Privacy against Unauthorized Deep Learning Models

19 February 2020
Shawn Shan, Emily Wenger, Jiayun Zhang, Huiying Li, Haitao Zheng, Ben Y. Zhao
PICV, MU

Papers citing "Fawkes: Protecting Privacy against Unauthorized Deep Learning Models" (35 papers shown)
Making an Invisibility Cloak: Real World Adversarial Attacks on Object Detectors
Zuxuan Wu, Ser-Nam Lim, L. Davis, Tom Goldstein
AAML · 117 / 265 / 0 · 31 Oct 2019

Deep k-NN Defense against Clean-label Data Poisoning Attacks
Neehar Peri, Neal Gupta, Wenjie Huang, Liam H. Fowl, Chen Zhu, Soheil Feizi, Tom Goldstein, John P. Dickerson
AAML · 44 / 6 / 0 · 29 Sep 2019

AdvHat: Real-world adversarial attack on ArcFace Face ID system
Stepan Alekseevich Komkov, Aleksandr Petiushko
AAML, CVBM · 54 / 285 / 0 · 23 Aug 2019

Hiding Faces in Plain Sight: Disrupting AI Face Synthesis with Adversarial Perturbations
Yuezun Li, Xin Yang, Baoyuan Wu, Siwei Lyu
AAML, PICV, CVBM · 74 / 38 / 0 · 21 Jun 2019

Transferable Clean-Label Poisoning Attacks on Deep Neural Nets
Chen Zhu, Wenjie Huang, Ali Shafahi, Hengduo Li, Gavin Taylor, Christoph Studer, Tom Goldstein
83 / 285 / 0 · 15 May 2019

AnonymousNet: Natural Face De-Identification with Measurable Privacy
Tao Li, Lei Lin
PICV · 64 / 147 / 0 · 19 Apr 2019

Fooling automated surveillance cameras: adversarial patches to attack person detection
Simen Thys, W. V. Ranst, Toon Goedemé
AAML · 107 / 569 / 0 · 18 Apr 2019

Detecting Backdoor Attacks on Deep Neural Networks by Activation Clustering
Bryant Chen, Wilka Carvalho, Wenjie Li, Heiko Ludwig, Benjamin Edwards, Chengyao Chen, Ziqiang Cao, Biplav Srivastava
AAML · 89 / 796 / 0 · 09 Nov 2018

Privacy-Protective-GAN for Face De-identification
Yifan Wu, Fan Yang, Haibin Ling
CVBM, PICV · 42 / 60 / 0 · 23 Jun 2018

Robustness May Be at Odds with Accuracy
Dimitris Tsipras, Shibani Santurkar, Logan Engstrom, Alexander Turner, Aleksander Madry
AAML · 102 / 1,781 / 0 · 30 May 2018

A Hybrid Model for Identity Obfuscation by Face Replacement
Qianru Sun, A. Tewari, Weipeng Xu, Mario Fritz, Christian Theobalt, Bernt Schiele
CVBM, PICV · 61 / 127 / 0 · 13 Apr 2018

Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks
Ali Shafahi, Wenjie Huang, Mahyar Najibi, Octavian Suciu, Christoph Studer, Tudor Dumitras, Tom Goldstein
AAML · 86 / 1,090 / 0 · 03 Apr 2018

Technical Report: When Does Machine Learning FAIL? Generalized Transferability for Evasion and Poisoning Attacks
Octavian Suciu, R. Marginean, Yigitcan Kaya, Hal Daumé, Tudor Dumitras
AAML · 81 / 286 / 0 · 19 Mar 2018

Detection of Adversarial Training Examples in Poisoning Attacks through Anomaly Detection
Andrea Paudice, Luis Muñoz-González, András Gyorgy, Emil C. Lupu
AAML · 58 / 146 / 0 · 08 Feb 2018

Natural and Effective Obfuscation by Head Inpainting
Qianru Sun, Liqian Ma, Seong Joon Oh, Luc Van Gool, Bernt Schiele, Mario Fritz
PICV · 352 / 204 / 0 · 24 Nov 2017

Countering Adversarial Images using Input Transformations
Chuan Guo, Mayank Rana, Moustapha Cissé, Laurens van der Maaten
AAML · 114 / 1,405 / 0 · 31 Oct 2017

VGGFace2: A dataset for recognising faces across pose and age
Qiong Cao, Li Shen, Weidi Xie, Omkar M. Parkhi, Andrew Zisserman
CVBM · 95 / 2,630 / 0 · 23 Oct 2017

Machine Learning Models that Remember Too Much
Congzheng Song, Thomas Ristenpart, Vitaly Shmatikov
VLM · 70 / 516 / 0 · 22 Sep 2017

Towards Deep Learning Models Resistant to Adversarial Attacks
Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, Adrian Vladu
SILM, OOD · 307 / 12,069 / 0 · 19 Jun 2017

Certified Defenses for Data Poisoning Attacks
Jacob Steinhardt, Pang Wei Koh, Percy Liang
AAML · 92 / 755 / 0 · 09 Jun 2017

Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods
Nicholas Carlini, D. Wagner
AAML · 123 / 1,857 / 0 · 20 May 2017

Level Playing Field for Million Scale Face Recognition
A. Nech, Ira Kemelmacher-Shlizerman
CVBM · 72 / 191 / 0 · 01 May 2017

Generative Poisoning Attack Method Against Neural Networks
Chaofei Yang, Qing Wu, Hai Helen Li, Yiran Chen
AAML · 59 / 218 / 0 · 03 Mar 2017

Detecting Adversarial Samples from Artifacts
Reuben Feinman, Ryan R. Curtin, S. Shintre, Andrew B. Gardner
AAML · 93 / 893 / 0 · 01 Mar 2017

Delving into Transferable Adversarial Examples and Black-box Attacks
Yanpei Liu, Xinyun Chen, Chang-rui Liu, D. Song
AAML · 140 / 1,737 / 0 · 08 Nov 2016

Densely Connected Convolutional Networks
Gao Huang, Zhuang Liu, Laurens van der Maaten, Kilian Q. Weinberger
PINN, 3DV · 772 / 36,813 / 0 · 25 Aug 2016

Towards Evaluating the Robustness of Neural Networks
Nicholas Carlini, D. Wagner
OOD, AAML · 266 / 8,555 / 0 · 16 Aug 2016

Adversarial examples in the physical world
Alexey Kurakin, Ian Goodfellow, Samy Bengio
SILM, AAML · 540 / 5,897 / 0 · 08 Jul 2016

Deep Learning with Differential Privacy
Martín Abadi, Andy Chu, Ian Goodfellow, H. B. McMahan, Ilya Mironov, Kunal Talwar, Li Zhang
FedML, SyDa · 207 / 6,130 / 0 · 01 Jul 2016

Transferability in Machine Learning: from Phenomena to Black-Box Attacks using Adversarial Samples
Nicolas Papernot, Patrick McDaniel, Ian Goodfellow
SILM, AAML · 114 / 1,739 / 0 · 24 May 2016

Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning
Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, Alexander A. Alemi
377 / 14,253 / 0 · 23 Feb 2016

Explaining and Harnessing Adversarial Examples
Ian Goodfellow, Jonathon Shlens, Christian Szegedy
AAML, GAN · 277 / 19,066 / 0 · 20 Dec 2014

Learning Face Representation from Scratch
Dong Yi, Zhen Lei, Tianran Ouyang, Stan Z. Li
CVBM · 89 / 2,011 / 0 · 28 Nov 2014

How transferable are features in deep neural networks?
J. Yosinski, Jeff Clune, Yoshua Bengio, Hod Lipson
OOD · 231 / 8,336 / 0 · 06 Nov 2014

Intriguing properties of neural networks
Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, D. Erhan, Ian Goodfellow, Rob Fergus
AAML · 270 / 14,927 / 1 · 21 Dec 2013