Impact of Dataset Properties on Membership Inference Vulnerability of Deep Transfer Learning

Main: 10 pages
15 figures
Bibliography: 3 pages
18 tables
Appendix: 32 pages
Abstract
Membership inference attacks (MIAs) are used to test the practical privacy of machine learning models. MIAs complement the formal guarantees of differential privacy (DP) under a more realistic adversary model. We analyse the MIA vulnerability of fine-tuned neural networks both empirically and theoretically, the latter using a simplified model of fine-tuning. We show that the vulnerability of non-DP models, when measured as the attacker advantage at a fixed false positive rate, decreases according to a simple power law as the number of examples per class increases. A similar power law applies even for the most vulnerable points, but the dataset size needed to adequately protect the most vulnerable points is very large.
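The two quantities the abstract relies on, attacker advantage at a fixed false positive rate and its power-law decay in examples per class, can be illustrated with a short sketch. This is not the paper's code: the function names, the synthetic data, and the fitted constants are all assumptions made for illustration only.

```python
# Hypothetical sketch (not the paper's code): measure MIA attacker advantage
# at a fixed FPR and fit the power-law decay the abstract describes.
import numpy as np
from scipy.optimize import curve_fit

def advantage_at_fpr(member_scores, nonmember_scores, fpr=0.01):
    """Attacker advantage (TPR - FPR) at a fixed FPR; the threshold is set
    so that a fraction `fpr` of non-member scores would be flagged."""
    threshold = np.quantile(nonmember_scores, 1.0 - fpr)
    tpr = np.mean(member_scores > threshold)
    return tpr - fpr

def power_law(shots, a, b):
    """Advantage ~ a * shots**(-b): a simple power law in examples per class."""
    return a * shots ** (-b)

rng = np.random.default_rng(0)

# Example of the per-model measurement the fit would consume
# (synthetic attack scores for training and held-out points):
members = rng.normal(1.0, 1.0, 10_000)
nonmembers = rng.normal(0.0, 1.0, 10_000)
print(f"advantage at 1% FPR: {advantage_at_fpr(members, nonmembers):.3f}")

# Synthetic advantage measurements generated from a power law plus noise,
# standing in for MIA evaluations at increasing examples per class.
shots = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0, 128.0])
adv = power_law(shots, 0.4, 0.5) * rng.lognormal(0.0, 0.05, size=shots.shape)

(a_hat, b_hat), _ = curve_fit(power_law, shots, adv, p0=(0.5, 0.5))
print(f"fitted: advantage ≈ {a_hat:.3f} * shots^(-{b_hat:.3f})")
```

Setting the threshold from the non-member score distribution is one standard way to fix the FPR; the constants 0.4 and 0.5 in the synthetic data are arbitrary and carry no relation to the paper's results.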
