Controllable Unlearning for Image-to-Image Generative Models via ε-Constrained Optimization

20 February 2025
Xiaohua Feng
Chaochao Chen
Yuyuan Li
Li Zhang
Longfei Li
Jun Zhou
Xiaolin Zheng
Abstract

While generative models have made significant advancements in recent years, they also raise concerns such as privacy breaches and biases. Machine unlearning has emerged as a viable solution, aiming to remove specific training data, e.g., data containing private information or bias, from trained models. In this paper, we study the machine unlearning problem in Image-to-Image (I2I) generative models. Previous studies mainly treat it as a single-objective optimization problem and offer a single solution, neglecting the varied user expectations towards the trade-off between complete unlearning and model utility. To address this issue, we propose a controllable unlearning framework that uses a control coefficient ε to govern this trade-off. We reformulate I2I generative model unlearning as an ε-constrained optimization problem and solve it with a gradient-based method to find the optimal solutions at the unlearning boundaries, which define the valid range of the control coefficient. Within this range, every solution produced by the framework is theoretically guaranteed to be Pareto optimal. We also analyze the convergence rate of our framework under various control functions. Extensive experiments on two benchmark datasets across three mainstream I2I models demonstrate the effectiveness of our controllable unlearning framework.
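
The ε-constrained view can be read as: minimize an unlearning objective subject to a bound ε on how much model utility may degrade. The toy sketch below is only an illustration of that idea under assumptions, not the authors' algorithm: the quadratic losses f_forget and f_retain, the switching-gradient update, and all names are hypothetical stand-ins for the paper's unlearning and utility objectives.

# Hypothetical sketch: epsilon-constrained trade-off on a toy problem.
# f_forget: objective we want to minimize (push parameters away from the forget set).
# f_retain: utility degradation on retained data; we require f_retain(theta) <= epsilon.
# The switching-gradient rule below is a generic constrained-optimization heuristic,
# not the paper's exact gradient-based method.
import numpy as np

def f_forget(theta):            # toy "unlearning completeness" loss
    return np.sum((theta - 3.0) ** 2)

def grad_forget(theta):
    return 2.0 * (theta - 3.0)

def f_retain(theta):            # toy "utility degradation" measure
    return np.sum(theta ** 2)

def grad_retain(theta):
    return 2.0 * theta

def epsilon_constrained_unlearn(theta, epsilon, lr=0.05, steps=500):
    """Minimize f_forget subject to f_retain(theta) <= epsilon."""
    for _ in range(steps):
        if f_retain(theta) <= epsilon:
            theta = theta - lr * grad_forget(theta)   # feasible: make unlearning progress
        else:
            theta = theta - lr * grad_retain(theta)   # infeasible: restore utility first
    return theta

theta0 = np.zeros(2)
for eps in (0.5, 2.0, 8.0):     # larger epsilon -> more complete unlearning allowed
    sol = epsilon_constrained_unlearn(theta0.copy(), eps)
    print(eps, round(f_forget(sol), 3), round(f_retain(sol), 3))

Sweeping ε from small to large moves the returned parameters from utility-preserving solutions toward more complete unlearning, mirroring the controllable trade-off the abstract describes; in the paper this control is accompanied by Pareto-optimality guarantees within the valid range of ε.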

@article{feng2025_2408.01689,
  title={Controllable Unlearning for Image-to-Image Generative Models via $\varepsilon$-Constrained Optimization},
  author={Xiaohua Feng and Yuyuan Li and Chaochao Chen and Li Zhang and Longfei Li and Jun Zhou and Xiaolin Zheng},
  journal={arXiv preprint arXiv:2408.01689},
  year={2025}
}