Adapting to Linear Separable Subsets with Large-Margin in Differentially Private Learning

30 May 2025
Erchi Wang, Yuqing Zhu, Yu-Xiang Wang
Main: 13 pages, 3 figures, 2 tables; Bibliography: 3 pages; Appendix: 22 pages
Abstract

This paper studies the problem of differentially private empirical risk minimization (DP-ERM) for binary linear classification. We obtain an efficient $(\varepsilon,\delta)$-DP algorithm with an empirical zero-one risk bound of $\tilde{O}\left(\frac{1}{\gamma^2 \varepsilon n} + \frac{|S_{\mathrm{out}}|}{\gamma n}\right)$, where $n$ is the number of data points, $S_{\mathrm{out}}$ is an arbitrary subset of data one can remove, and $\gamma$ is the margin of linear separation of the remaining data points (after $S_{\mathrm{out}}$ is removed). Here, $\tilde{O}(\cdot)$ hides only logarithmic terms. In the agnostic case, we improve on existing results when the number of outliers is small. Our algorithm is highly adaptive because it does not require knowing the margin parameter $\gamma$ or the outlier subset $S_{\mathrm{out}}$. We also derive a utility bound for the advanced private hyperparameter tuning algorithm.
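As a quick illustration of how the two terms in the bound trade off, the minimal Python sketch below (not from the paper) evaluates them for a hypothetical parameter setting. The constants and logarithmic factors hidden by $\tilde{O}(\cdot)$ are not specified in the abstract, so only the relative magnitudes of the terms are meaningful; the function name and parameter values are invented for illustration.

# Illustrative only: evaluates the two terms of the empirical zero-one
# risk bound O~(1/(gamma^2 * eps * n) + |S_out|/(gamma * n)) from the
# abstract, ignoring the constants and log factors hidden by O~(.).

def risk_bound_terms(n: int, gamma: float, eps: float, n_out: int):
    """Return (privacy term, outlier term) of the bound, up to O~ factors."""
    privacy_term = 1.0 / (gamma ** 2 * eps * n)
    outlier_term = n_out / (gamma * n)
    return privacy_term, outlier_term

# Hypothetical setting: n = 10^5 points, margin gamma = 0.1 after
# removing 50 outliers, privacy budget eps = 1.
priv, out = risk_bound_terms(n=100_000, gamma=0.1, eps=1.0, n_out=50)
print(f"privacy term ~ {priv:.2e}, outlier term ~ {out:.2e}")

In this hypothetical setting the outlier term dominates ($5 \times 10^{-3}$ vs. $1 \times 10^{-3}$), consistent with the abstract's point that the improvement in the agnostic case is most pronounced when $|S_{\mathrm{out}}|$ is small.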

@article{wang2025_2505.24737,
  title={Adapting to Linear Separable Subsets with Large-Margin in Differentially Private Learning},
  author={Erchi Wang and Yuqing Zhu and Yu-Xiang Wang},
  journal={arXiv preprint arXiv:2505.24737},
  year={2025}
}