Bilateral Differentially Private Vertical Federated Boosted Decision Trees

30 April 2025
Bokang Zhang, Zhikun Zhang, Haodong Jiang, Yong-Jin Liu, Lihao Zheng, Yuxiao Zhou, Shuaiting Huang, Junfeng Wu
FedML
Abstract

Federated learning is a distributed machine learning paradigm that enables collaborative training across multiple parties while preserving data privacy. Gradient Boosting Decision Trees (GBDT), such as XGBoost, have gained popularity for their high performance and strong interpretability, and there has been growing interest in adapting XGBoost to federated settings via cryptographic techniques. However, these approaches do not always provide rigorous theoretical privacy guarantees, and they often come with high computational costs in both time and space. In this paper, we propose MaskedXGBoost, a variant of vertical federated XGBoost with a bilateral differential privacy guarantee. We construct well-calibrated noise to perturb the intermediate information and protect privacy. The noise is structured so that part of it lies in the null space of the arithmetic operation used for split score evaluation in XGBoost, which lets us achieve consistently better utility than other perturbation methods at lower overhead than encryption-based techniques. We provide a theoretical utility analysis and empirically verify privacy preservation. Experiments on multiple datasets validate that our algorithm outperforms existing alternatives in both utility and efficiency.
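To see why null-space-structured noise can preserve utility, recall that XGBoost evaluates a candidate split only through per-bucket gradient and Hessian sums: the split gain is Gain = 1/2 [ G_L^2/(H_L+lambda) + G_R^2/(H_R+lambda) - (G_L+G_R)^2/(H_L+H_R+lambda) ] - gamma. Any per-instance perturbation whose bucket sums are zero therefore lies in the null space of the score evaluation and hides individual gradients at no utility cost; only a small calibrated component is needed for the differential privacy guarantee. The Python sketch below illustrates this idea under our own simplifying assumptions; it is not the paper's protocol, and the names (null_space_mask, mask_gradients, mask_scale, sigma_dp) are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

def null_space_mask(bucket_ids, n_buckets, scale, rng):
    # Draw per-instance noise, then recenter it within each bucket so the
    # noise sums to exactly zero per bucket: it lies in the null space of
    # the bucket-summation operator and cannot affect split scores.
    eps = rng.normal(0.0, scale, size=bucket_ids.shape)
    for b in range(n_buckets):
        idx = bucket_ids == b
        if idx.any():
            eps[idx] -= eps[idx].mean()
    return eps

def mask_gradients(g, bucket_ids, n_buckets, mask_scale, sigma_dp, rng):
    # Heavy null-space masking hides individual gradients; a small,
    # calibrated Gaussian term (hypothetical sigma_dp) carries the DP budget.
    return (g
            + null_space_mask(bucket_ids, n_buckets, mask_scale, rng)
            + rng.normal(0.0, sigma_dp, size=g.shape))

# Toy example: 8 instances falling into 3 histogram buckets of one feature.
g = rng.normal(size=8)                         # per-instance gradients
bucket_ids = np.array([0, 0, 1, 1, 1, 2, 2, 2])
g_masked = mask_gradients(g, bucket_ids, 3, mask_scale=10.0, sigma_dp=0.1, rng=rng)

# The receiving party sees only g_masked, yet its bucket sums match the true
# sums up to the small DP noise, so split gains stay nearly exact.
true_sums = np.array([g[bucket_ids == b].sum() for b in range(3)])
masked_sums = np.array([g_masked[bucket_ids == b].sum() for b in range(3)])
print(true_sums, masked_sums)

Because the mask scale can be made large without touching the bucket sums, the privacy-utility trade-off in this toy setting is governed only by the small Gaussian component, which matches the intuition behind the paper's claimed utility advantage over unstructured perturbation.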

@article{zhang2025_2504.21739,
  title={Bilateral Differentially Private Vertical Federated Boosted Decision Trees},
  author={Bokang Zhang and Zhikun Zhang and Haodong Jiang and Yang Liu and Lihao Zheng and Yuxiao Zhou and Shuaiting Huang and Junfeng Wu},
  journal={arXiv preprint arXiv:2504.21739},
  year={2025}
}