
A Taxonomy of Attacks and Defenses in Split Learning

9 May 2025
Aqsa Shabbir, Halil İbrahim Kanpak, Alptekin Küpçü, Sinem Sav
Abstract

Split Learning (SL) has emerged as a promising paradigm for distributed deep learning, allowing resource-constrained clients to offload portions of their model computation to servers while maintaining collaborative learning. However, recent research has demonstrated that SL remains vulnerable to a range of privacy and security threats, including information leakage, model inversion, and adversarial attacks. While various defense mechanisms have been proposed, a systematic understanding of the attack landscape and corresponding countermeasures is still lacking. In this study, we present a comprehensive taxonomy of attacks and defenses in SL, categorizing them along three key dimensions: employed strategies, constraints, and effectiveness. Furthermore, we identify key open challenges and research gaps in SL based on our systematization, highlighting potential future directions.
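The split described in the abstract — a resource-constrained client computing only the first layers of a model up to a "cut layer" and offloading the remainder to a server — can be sketched as follows. This is a minimal NumPy illustration of the forward pass only; the layer sizes, weights, and function names are hypothetical and not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions: 4 input features, cut layer of width 8, 3 outputs.
W_client = rng.normal(size=(4, 8))   # layers held by the resource-constrained client
W_server = rng.normal(size=(8, 3))   # layers offloaded to the server

def client_forward(x):
    # The client computes up to the cut layer and transmits only these
    # "smashed" activations -- not the raw input -- to the server.
    return np.maximum(x @ W_client, 0.0)  # ReLU at the cut layer

def server_forward(smashed):
    # The server completes the forward pass from the cut layer onward.
    return smashed @ W_server

x = rng.normal(size=(2, 4))          # a batch of 2 client samples
smashed = client_forward(x)
logits = server_forward(smashed)
print(smashed.shape, logits.shape)
```

The privacy threats the paper surveys (e.g., information leakage and model inversion) target exactly the intermediate `smashed` activations exchanged in this protocol, since they can retain recoverable information about the client's raw input.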

@article{shabbir2025_2505.05872,
  title={A Taxonomy of Attacks and Defenses in Split Learning},
  author={Aqsa Shabbir and Halil İbrahim Kanpak and Alptekin Küpçü and Sinem Sav},
  journal={arXiv preprint arXiv:2505.05872},
  year={2025}
}