The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks
Yuheng Zhang, R. Jia, Hengzhi Pei, Wenxiao Wang, Bo-wen Li, D. Song
arXiv:1911.07135, 17 November 2019 (AAML)
Papers citing "The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks" (39 of 89 papers shown)
IOP-FL: Inside-Outside Personalization for Federated Medical Image Segmentation
Meirui Jiang, Hongzheng Yang, Chen Cheng, Qianming Dou
16 Apr 2022

Commonality in Natural Images Rescues GANs: Pretraining GANs with Generic and Privacy-free Synthetic Data
Kyungjune Baek, Hyunjung Shim
11 Apr 2022

Ensemble learning using individual neonatal data for seizure detection
A. Borovac, S. Gudmundsson, G. Thorvardsson, S. M. Moghadam, P. Nevalainen, N. Stevenson, S. Vanhatalo, T. Runarsson
11 Apr 2022 (FedML)

FedVLN: Privacy-preserving Federated Vision-and-Language Navigation
Kaiwen Zhou, Qing Guo
28 Mar 2022 (FedML)

Label-Only Model Inversion Attacks via Boundary Repulsion
Mostafa Kahla, Si-An Chen, H. Just, R. Jia
03 Mar 2022

Differentially Private Graph Classification with GNNs
Tamara T. Mueller, Johannes C. Paetzold, Chinmay Prabhakar, Dmitrii Usynin, Daniel Rueckert, Georgios Kaissis
05 Feb 2022

Variational Model Inversion Attacks
Kuan-Chieh Jackson Wang, Yanzhe Fu, Ke Li, Ashish Khisti, R. Zemel, Alireza Makhzani
26 Jan 2022 (MIACV)

Are Your Sensitive Attributes Private? Novel Model Inversion Attribute Inference Attacks on Classification Models
Shagufta Mehnaz, S. V. Dibbo, Ehsanul Kabir, Ninghui Li, E. Bertino
23 Jan 2022 (MIACV)

Reconstructing Training Data with Informed Adversaries
Borja Balle, Giovanni Cherubin, Jamie Hayes
13 Jan 2022 (MIACV, AAML)

Get a Model! Model Hijacking Attack Against Machine Learning Models
A. Salem, Michael Backes, Yang Zhang
08 Nov 2021 (AAML)

Black-box Adversarial Attacks on Commercial Speech Platforms with Minimal Information
Baolin Zheng, Peipei Jiang, Qian Wang, Qi Li, Chao Shen, Cong Wang, Yunjie Ge, Qingyang Teng, Shenyi Zhang
19 Oct 2021 (AAML)

SoK: Machine Learning Governance
Varun Chandrasekaran, Hengrui Jia, Anvith Thudi, Adelin Travers, Mohammad Yaghini, Nicolas Papernot
20 Sep 2021

UnSplit: Data-Oblivious Model Inversion, Model Stealing, and Label Inference Attacks Against Split Learning
Ege Erdogan, Alptekin Kupcu, A. E. Cicek
20 Aug 2021 (FedML, MIACV)

Advances in adversarial attacks and defenses in computer vision: A survey
Naveed Akhtar, Ajmal Mian, Navid Kardan, M. Shah
01 Aug 2021 (AAML)

Trustworthy AI: A Computational Perspective
Haochen Liu, Yiqi Wang, Wenqi Fan, Xiaorui Liu, Yaxin Li, Shaili Jain, Yunhao Liu, Anil K. Jain, Jiliang Tang
12 Jul 2021 (FaML)

Survey: Leakage and Privacy at Inference Time
Marija Jegorova, Chaitanya Kaul, Charlie Mayor, Alison Q. O'Neil, Alexander Weir, Roderick Murray-Smith, Sotirios A. Tsaftaris
04 Jul 2021 (PILM, MIACV)

TOHAN: A One-step Approach towards Few-shot Hypothesis Adaptation
Haoang Chi, Feng Liu, Wenjing Yang, L. Lan, Tongliang Liu, Bo Han, William Cheung, James T. Kwok
11 Jun 2021

GraphMI: Extracting Private Graph Data from Graph Neural Networks
Zaixi Zhang, Qi Liu, Zhenya Huang, Hao Wang, Chengqiang Lu, Chuanren Liu, Enhong Chen
05 Jun 2021

Property Inference Attacks on Convolutional Neural Networks: Influence and Implications of Target Model's Complexity
Mathias Parisot, Balázs Pejó, Dayana Spagnuelo
27 Apr 2021 (MIACV)

Exploiting Explanations for Model Inversion Attacks
Xu Zhao, Wencan Zhang, Xiao Xiao, Brian Y. Lim
26 Apr 2021 (MIACV)

See through Gradients: Image Batch Recovery via GradInversion
Hongxu Yin, Arun Mallya, Arash Vahdat, J. Álvarez, Jan Kautz, Pavlo Molchanov
15 Apr 2021 (FedML)

Privacy and Trust Redefined in Federated Machine Learning
Pavlos Papadopoulos, Will Abramson, A. Hall, Nikolaos Pitropakis, William J. Buchanan
29 Mar 2021

DataLens: Scalable Privacy Preserving Training via Gradient Compression and Aggregation
Wei Ping, Fan Wu, Yunhui Long, Luka Rimanic, Ce Zhang, Bo-wen Li
20 Mar 2021 (FedML)

ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models
Yugeng Liu, Rui Wen, Xinlei He, A. Salem, Zhikun Zhang, Michael Backes, Emiliano De Cristofaro, Mario Fritz, Yang Zhang
04 Feb 2021 (AAML)

Unleashing the Tiger: Inference Attacks on Split Learning
Dario Pasquini, G. Ateniese, M. Bernaschi
04 Dec 2020 (FedML)

Feature Inference Attack on Model Predictions in Vertical Federated Learning
Xinjian Luo, Yuncheng Wu, Xiaokui Xiao, Beng Chin Ooi
20 Oct 2020 (FedML, AAML)

R-GAP: Recursive Gradient Attack on Privacy
Junyi Zhu, Matthew Blaschko
15 Oct 2020 (FedML)

Knowledge-Enriched Distributional Model Inversion Attacks
Si-An Chen, Mostafa Kahla, R. Jia, Guo-Jun Qi
08 Oct 2020

Federated Learning for Computational Pathology on Gigapixel Whole Slide Images
Ming Y. Lu, Dehan Kong, Jana Lipkova, Richard J. Chen, Rajendra Singh, Drew F. K. Williamson, Tiffany Y. Chen, Faisal Mahmood
21 Sep 2020 (FedML, MedIm)

Malicious Network Traffic Detection via Deep Learning: An Information Theoretic View
Erick Galinkin
16 Sep 2020 (AAML)

Improving Robustness to Model Inversion Attacks via Mutual Information Regularization
Tianhao Wang, Yuheng Zhang, R. Jia
11 Sep 2020

Membership Leakage in Label-Only Exposures
Zheng Li, Yang Zhang
30 Jul 2020

Privacy-preserving Artificial Intelligence Techniques in Biomedicine
Reihaneh Torkzadehmahani, Reza Nasirigerdeh, David B. Blumenthal, T. Kacprowski, M. List, ..., Harald H. H. W. Schmidt, A. Schwalber, Christof Tschohl, Andrea Wohner, Jan Baumbach
22 Jul 2020

A Survey of Privacy Attacks in Machine Learning
M. Rigaki, Sebastian Garcia
15 Jul 2020 (PILM, AAML)

ARIANN: Low-Interaction Privacy-Preserving Deep Learning via Function Secret Sharing
T. Ryffel, Pierre Tholoniat, D. Pointcheval, Francis R. Bach
08 Jun 2020 (FedML)

MAZE: Data-Free Model Stealing Attack Using Zeroth-Order Gradient Estimation
Sanjay Kariyappa, A. Prakash, Moinuddin K. Qureshi
06 May 2020 (AAML)

Exploiting Defenses against GAN-Based Feature Inference Attacks in Federated Learning
Xinjian Luo, Xiangqi Zhu
27 Apr 2020 (FedML)

Machine Unlearning: Linear Filtration for Logit-based Classifiers
Thomas Baumhauer, Pascal Schöttle, Matthias Zeppelzauer
07 Feb 2020 (MU)

G-PATE: Scalable Differentially Private Data Generator via Private Aggregation of Teacher Discriminators
Yunhui Long, Wei Ping, Zhuolin Yang, B. Kailkhura, Aston Zhang, C.A. Gunter, Bo-wen Li
21 Jun 2019