From Head to Tail: Efficient Black-box Model Inversion Attack via Long-tailed Learning

20 March 2025
Ziang Li, Hongguang Zhang, Juan Wang, Meihui Chen, Hongxin Hu, Wenzhe Yi, Xiaoyang Xu, Mengda Yang, Chenjun Ma
Abstract

Model Inversion Attacks (MIAs) aim to reconstruct private training data from models, leading to privacy leakage, particularly in facial recognition systems. Although many studies have enhanced the effectiveness of white-box MIAs, less attention has been paid to improving efficiency and utility under limited attacker capabilities. Existing black-box MIAs necessitate an impractical number of queries, incurring significant overhead. Therefore, we analyze the limitations of existing MIAs and introduce Surrogate Model-based Inversion with Long-tailed Enhancement (SMILE), a high-resolution-oriented and query-efficient MIA for the black-box setting. We begin by analyzing the initialization of MIAs from a data distribution perspective and propose a long-tailed surrogate training method to obtain high-quality initial points. We then enhance the attack's effectiveness by employing a gradient-free black-box optimization algorithm selected by NGOpt. Our experiments show that SMILE outperforms existing state-of-the-art black-box MIAs while requiring only about 5% of the query overhead.
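The core of SMILE's black-box stage, as the abstract describes it, is query-limited, gradient-free refinement of a latent code against the target model. The sketch below illustrates that idea using Nevergrad's NGOpt algorithm selector, which the abstract names. The generator, the query_target scoring function, and all parameter values are hypothetical placeholders chosen for illustration; this is a minimal sketch of the optimization pattern, not the authors' released implementation.

# Minimal sketch of a query-limited, gradient-free optimization stage
# using Nevergrad's NGOpt algorithm selector.
# `generator` (maps a latent vector to an image) and `query_target`
# (returns the black-box model's confidence for the target identity)
# are hypothetical placeholders, not part of the paper's code.
import numpy as np
import nevergrad as ng

def attack_one_identity(generator, query_target, target_class,
                        latent_dim=512, query_budget=1000, init_latent=None):
    """Refine an initial latent point using black-box queries only."""
    # Start from the high-quality initial point produced by the
    # long-tailed surrogate training stage (passed in as init_latent).
    init = init_latent if init_latent is not None else np.zeros(latent_dim)
    param = ng.p.Array(init=init)

    # NGOpt selects a suitable gradient-free optimizer for the given
    # dimensionality and query budget.
    optimizer = ng.optimizers.NGOpt(parametrization=param, budget=query_budget)

    for _ in range(query_budget):
        candidate = optimizer.ask()
        image = generator(candidate.value)              # local generator, no target query
        confidence = query_target(image, target_class)  # one black-box query
        optimizer.tell(candidate, 1.0 - confidence)     # minimize (1 - confidence)

    best = optimizer.provide_recommendation()
    return generator(best.value)

In SMILE's setting, the initial latent point would come from the long-tailed surrogate training stage, so the query budget is spent refining an already strong candidate rather than searching from scratch, which is what keeps the query overhead low.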

@article{li2025_2503.16266,
  title={From Head to Tail: Efficient Black-box Model Inversion Attack via Long-tailed Learning},
  author={Ziang Li and Hongguang Zhang and Juan Wang and Meihui Chen and Hongxin Hu and Wenzhe Yi and Xiaoyang Xu and Mengda Yang and Chenjun Ma},
  journal={arXiv preprint arXiv:2503.16266},
  year={2025}
}