
Membership Inference Attacks as Privacy Tools: Reliability, Disparity and Ensemble

Main: 18 pages · Bibliography: 3 pages · Appendix: 1 page · 28 figures · 7 tables
Abstract

Membership inference attacks (MIAs) pose a significant threat to the privacy of machine learning models and are widely used as tools for privacy assessment, auditing, and machine unlearning. Prior MIA research has focused primarily on performance metrics such as AUC, accuracy, and TPR@low FPR, either by developing new methods to improve these metrics or by using them to evaluate privacy solutions. We find that this focus overlooks the disparities among different attacks. These disparities, both between distinct attack methods and between multiple instantiations of the same method, have crucial implications for the reliability and completeness of MIAs as privacy evaluation tools. In this paper, we systematically investigate these disparities through a novel framework based on coverage and stability analysis. Extensive experiments reveal significant disparities in MIAs, their potential causes, and their broader implications for privacy evaluation. To address these challenges, we propose an ensemble framework with three distinct strategies that harness the strengths of state-of-the-art MIAs while accounting for their disparities. This framework not only enables the construction of more powerful attacks but also provides a more robust and comprehensive methodology for privacy evaluation.
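To make the ensemble idea concrete: one simple way to combine several MIAs (not necessarily one of the paper's three strategies) is to take a majority vote over each attack's per-sample membership decisions. The sketch below is purely illustrative; `ensemble_majority_vote` and the toy decision arrays are hypothetical names, not from the paper.

```python
import numpy as np

def ensemble_majority_vote(attack_decisions):
    """Combine binary membership decisions from several MIAs by majority vote.

    attack_decisions: array-like of shape (n_attacks, n_samples),
    where entry [i, j] is attack i's {0, 1} decision for sample j
    (1 = predicted member of the training set).
    Returns one {0, 1} decision per sample.
    """
    votes = np.asarray(attack_decisions)
    # A sample is flagged as a member if more than half of the attacks say so.
    return (votes.mean(axis=0) > 0.5).astype(int)

# Toy example: three attacks that disagree on some samples.
decisions = [
    [1, 0, 1, 1],  # attack A
    [1, 0, 0, 1],  # attack B
    [0, 0, 1, 1],  # attack C
]
print(ensemble_majority_vote(decisions))  # -> [1 0 1 1]
```

Because the attacks cover partly different subsets of true members (the disparity the paper highlights), an ensemble like this can flag samples that any single attack would miss, at the cost of also aggregating their errors.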

@article{wang2025_2506.13972,
  title={Membership Inference Attacks as Privacy Tools: Reliability, Disparity and Ensemble},
  author={Zhiqi Wang and Chengyu Zhang and Yuetian Chen and Nathalie Baracaldo and Swanand Kadhe and Lei Yu},
  journal={arXiv preprint arXiv:2506.13972},
  year={2025}
}