Forgetting Any Data at Any Time: A Theoretically Certified Unlearning Framework for Vertical Federated Learning

24 February 2025
Linian Wang
Leye Wang
Abstract

Privacy concerns in machine learning are heightened by regulations such as the GDPR, which enforces the "right to be forgotten" (RTBF), driving the emergence of machine unlearning as a critical research field. Vertical Federated Learning (VFL) enables collaborative model training by aggregating a sample's features across distributed parties while preserving data privacy at each source. This paradigm has seen widespread adoption in healthcare, finance, and other privacy-sensitive domains. However, existing VFL systems lack robust mechanisms to comply with RTBF requirements, as unlearning methodologies for VFL remain underexplored. In this work, we introduce the first VFL framework with theoretically guaranteed unlearning capabilities, enabling the removal of any data at any time. Unlike prior approaches -- which impose restrictive assumptions on model architectures or data types for removal -- our solution is model- and data-agnostic, offering universal compatibility. Moreover, our framework supports asynchronous unlearning, eliminating the need for all parties to be simultaneously online during the forgetting process. These advancements address critical gaps in current VFL systems, ensuring compliance with RTBF while maintaining operational efficiency. We make all our implementations publicly available at this https URL.
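To make the VFL setting concrete, the toy sketch below (not the paper's method; all names and the data layout are illustrative assumptions) shows how each party holds a different feature subset of the same samples, keyed by a shared sample ID, and how a forgetting request can be serviced by each party independently -- which is what allows the asynchronous handling the abstract describes. Note that certified unlearning additionally requires removing the sample's influence from the trained model, which this data-layout sketch does not attempt.

```python
import numpy as np

# Hypothetical full feature matrix; in real VFL no single party holds this.
rng = np.random.default_rng(0)
n_samples = 6
X_full = rng.normal(size=(n_samples, 4))

# Vertical partition: party A holds features 0-1, party B holds features 2-3,
# both indexed by the same shared sample IDs.
party_a = {i: X_full[i, :2] for i in range(n_samples)}
party_b = {i: X_full[i, 2:] for i in range(n_samples)}

def forget(party_store, sample_id):
    """Service an RTBF request at one party by deleting the sample's
    local features; parties need not be online at the same time."""
    party_store.pop(sample_id, None)

# Sample 3 requests removal; each party forgets on its own schedule.
forget(party_a, 3)
forget(party_b, 3)
assert 3 not in party_a and 3 not in party_b
```

Retraining the joint model from the remaining samples would be the naive baseline; the paper's contribution is avoiding that cost with a theoretical unlearning certificate.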

@article{wang2025_2502.17081,
  title={Forgetting Any Data at Any Time: A Theoretically Certified Unlearning Framework for Vertical Federated Learning},
  author={Linian Wang and Leye Wang},
  journal={arXiv preprint arXiv:2502.17081},
  year={2025}
}