ResearchTrend.AI
Rethinking Machine Unlearning in Image Generation Models

3 June 2025
Renyang Liu
Wenjie Feng
Tianwei Zhang
Wei Zhou
Xueqi Cheng
See-Kiong Ng
Topics: MU, VLM
Main: 13 pages, 10 figures, 10 tables; Bibliography: 4 pages; Appendix: 1 page
Abstract

With the surge and widespread application of image generation models, data privacy and content safety have become major concerns, attracting great attention from users, service providers, and policymakers. Machine unlearning (MU) is recognized as a cost-effective and promising means to address these challenges. Despite some advancements, image generation model unlearning (IGMU) still faces notable gaps in practice, e.g., unclear task discrimination and unlearning guidelines, the lack of an effective evaluation framework, and unreliable evaluation metrics. These can hinder the understanding of unlearning mechanisms and the design of practical unlearning algorithms. We perform exhaustive assessments of existing state-of-the-art unlearning algorithms and evaluation standards, and discover several critical flaws and challenges in IGMU tasks. Driven by these limitations, we make several core contributions to facilitate the comprehensive understanding, standardized categorization, and reliable evaluation of IGMU. Specifically, (1) we design CatIGMU, a novel hierarchical task categorization framework. It provides detailed implementation guidance for IGMU, assisting in the design of unlearning algorithms and the construction of testbeds. (2) We introduce EvalIGMU, a comprehensive evaluation framework. It includes reliable quantitative metrics across five critical aspects. (3) We construct DataIGM, a high-quality unlearning dataset, which can be used for extensive evaluations of IGMU, training content detectors for judgment, and benchmarking state-of-the-art unlearning algorithms. With EvalIGMU and DataIGM, we discover that most existing IGMU algorithms cannot handle unlearning well across different evaluation dimensions, especially for preservation and robustness. Code and models are available at this https URL.
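The abstract does not specify any particular unlearning algorithm, but the baseline against which MU methods are judged is exact unlearning: retraining from scratch on the data that remains after removing the forget set. The toy sketch below illustrates that baseline, and the "preservation" and "efficacy" checks in the spirit of the evaluation dimensions mentioned above, on a linear least-squares model standing in for an image generator (an illustrative assumption; the paper concerns image generation models, and all names here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in "model": noisy linear regression instead of an image generator.
X = rng.normal(size=(200, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=200)

forget = slice(0, 20)     # data whose influence must be removed
retain = slice(20, 200)   # data whose behavior must be preserved

def fit(X, y):
    # Exact least-squares minimizer for the given data.
    return np.linalg.lstsq(X, y, rcond=None)[0]

def mse(w, X, y):
    r = X @ w - y
    return float(r @ r / len(y))

w_full = fit(X, y)                  # original model, trained on all data
w_gold = fit(X[retain], y[retain])  # exact unlearning: retrain without forget set

# Preservation: the retrained model fits the retain set at least as well,
# since it is the exact minimizer on that subset.
assert mse(w_gold, X[retain], y[retain]) <= mse(w_full, X[retain], y[retain])

# Efficacy: the forget data no longer influences the weights.
assert np.linalg.norm(w_full - w_gold) > 0
```

Retraining like this is exact but expensive for large generative models, which is why approximate MU algorithms (the subject of the paper's benchmarking) trade exactness for cost, and why preservation and robustness must then be measured rather than assumed.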

@article{liu2025_2506.02761,
  title={Rethinking Machine Unlearning in Image Generation Models},
  author={Renyang Liu and Wenjie Feng and Tianwei Zhang and Wei Zhou and Xueqi Cheng and See-Kiong Ng},
  journal={arXiv preprint arXiv:2506.02761},
  year={2025}
}