Lower Bounds on the MMSE of Adversarially Inferring Sensitive Features

13 May 2025
Monica Welfert
Nathan Stromberg
Mario Díaz
Lalitha Sankar
Abstract

We propose an adversarial evaluation framework for sensitive feature inference based on minimum mean-squared error (MMSE) estimation with a finite sample size and linear predictive models. Our approach establishes theoretical lower bounds on the true MMSE of inferring sensitive features from noisy observations of other correlated features. These bounds are expressed in terms of the empirical MMSE under a restricted hypothesis class and a non-negative error term. The error term captures both the estimation error due to the finite number of samples and the approximation error from using a restricted hypothesis class. For linear predictive models, we derive closed-form bounds on the approximation error, which are order-optimal in terms of the noise variance, for several classes of relationships between the sensitive and non-sensitive features, including linear mappings, binary symmetric channels, and class-conditional multivariate Gaussian distributions. We also present a new lower bound that relies on the MSE, computed on a hold-out validation dataset, of the MMSE estimator learned from finite samples within a restricted hypothesis class. Through empirical evaluation, we demonstrate that our framework serves as an effective tool for MMSE-based adversarial evaluation of sensitive feature inference, balancing theoretical guarantees with practical efficiency.
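As an illustrative sketch of the setup the abstract describes (not the paper's implementation), the snippet below fits a least-squares linear adversary that infers a sensitive feature from noisy correlated features and reports its MSE on a hold-out validation set; the data-generating linear mapping, dimensions, and noise level are all hypothetical choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical problem sizes: samples, feature dimension, noise std.
n, d, sigma = 2000, 5, 0.5
w_true = rng.normal(size=d)          # assumed linear mapping S -> X
S = rng.normal(size=n)               # sensitive feature
# Noisy observations of features correlated with S
X = np.outer(S, w_true) + sigma * rng.normal(size=(n, d))

# Split into training and hold-out validation sets
n_tr = n // 2
X_tr, X_val = X[:n_tr], X[n_tr:]
S_tr, S_val = S[:n_tr], S[n_tr:]

# Fit a least-squares linear predictor of S from X
# (the restricted hypothesis class in this sketch)
theta, *_ = np.linalg.lstsq(X_tr, S_tr, rcond=None)

# Hold-out MSE of the learned adversary; quantities like this are
# the empirical ingredients that enter the paper's lower bounds
val_mse = np.mean((S_val - X_val @ theta) ** 2)
print(f"hold-out MSE of linear adversary: {val_mse:.4f}")
```

In this synthetic linear-Gaussian case the hold-out MSE lands close to the true MMSE, which is what makes such empirical quantities natural building blocks for finite-sample lower bounds.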

@article{welfert2025_2505.09004,
  title={Lower Bounds on the MMSE of Adversarially Inferring Sensitive Features},
  author={Monica Welfert and Nathan Stromberg and Mario D{\'i}az and Lalitha Sankar},
  journal={arXiv preprint arXiv:2505.09004},
  year={2025}
}