
Membership Inference Attack Should Move On to Distributional Statistics for Distilled Generative Models

Main: 6 pages
2 figures
Bibliography: 1 page
4 tables
Appendix: 15 pages
Abstract

To detect unauthorized data usage in training large-scale generative models (e.g., ChatGPT or Midjourney), membership inference attacks (MIAs) have proven effective in distinguishing a single training instance (a member) from a single non-training instance (a non-member). This success is mainly credited to a memorization effect: models tend to perform better on a member than on a non-member. However, we find that standard MIAs fail against distilled generative models (i.e., student models) that are increasingly deployed in practice for efficiency (e.g., ChatGPT 4o-mini). Trained exclusively on data generated by a large-scale model (the teacher model), the student model lacks direct exposure to any members (the teacher's training data), nullifying the memorization effect that standard MIAs rely on. This finding reveals a serious privacy loophole: generation-service providers could deploy a student model whose teacher was potentially trained on unauthorized data, yet claim the deployed model is clean because it was never directly trained on such data. Hence, are distilled models inherently unauditable for upstream privacy violations, and should we discard them when we care about privacy? We contend no, as we uncover a memory chain connecting the student model to the teacher's member data: the distribution of student-generated data aligns more closely with the distribution of the teacher's members than with that of non-members, so unauthorized data usage can be detected even when direct instance-level memorization is absent. This leads us to posit that MIAs on distilled generative models should shift from instance-level scores to distribution-level statistics. We further propose three principles of distribution-based MIAs for detecting unauthorized training data through distilled generative models, and validate our position with an exemplar framework. We conclude by discussing the implications of our position.
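To make the distribution-level idea concrete, below is a minimal sketch of a set-level membership test. The abstract does not name the statistic used in the paper's exemplar framework, so this sketch assumes maximum mean discrepancy (MMD) with an RBF kernel as one possible choice; the names `distributional_mia`, `student_samples`, `candidate_set`, and `reference_set` are hypothetical. Instead of scoring a single instance, it compares how closely the student-generated distribution matches a candidate dataset versus a known non-member reference set.

```python
# Illustrative sketch only: MMD is one possible distribution-level statistic,
# not necessarily the one used in the paper's exemplar framework.
import numpy as np


def rbf_kernel(x: np.ndarray, y: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """Pairwise RBF kernel matrix between rows of x and rows of y."""
    sq_dists = (
        np.sum(x**2, axis=1)[:, None]
        + np.sum(y**2, axis=1)[None, :]
        - 2.0 * x @ y.T
    )
    return np.exp(-sq_dists / (2.0 * sigma**2))


def mmd2(x: np.ndarray, y: np.ndarray, sigma: float = 1.0) -> float:
    """Biased estimate of the squared MMD between sample sets x and y."""
    return (
        rbf_kernel(x, x, sigma).mean()
        - 2.0 * rbf_kernel(x, y, sigma).mean()
        + rbf_kernel(y, y, sigma).mean()
    )


def distributional_mia(
    student_samples: np.ndarray,
    candidate_set: np.ndarray,
    reference_set: np.ndarray,
    sigma: float = 1.0,
) -> bool:
    """Flag the candidate set as 'member' if the student-generated
    distribution sits closer to it than to a known non-member reference set."""
    d_candidate = mmd2(student_samples, candidate_set, sigma)
    d_reference = mmd2(student_samples, reference_set, sigma)
    # True => candidate set was likely used to train the teacher model.
    return d_candidate < d_reference
```

The key design point, following the paper's position, is that the decision is made over a whole set of candidate instances (in some feature or embedding space), exploiting the memory chain from teacher members to the student's generated distribution rather than any per-instance memorization score.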

@article{li2025_2502.02970,
  title={Membership Inference Attack Should Move On to Distributional Statistics for Distilled Generative Models},
  author={Muxing Li and Zesheng Ye and Yixuan Li and Andy Song and Guangquan Zhang and Feng Liu},
  journal={arXiv preprint arXiv:2502.02970},
  year={2025}
}