
On Traceability in $\ell_p$ Stochastic Convex Optimization

24 February 2025
Sasha Voitovych
Mahdi Haghifam
Idan Attias
Gintare Karolina Dziugaite
Roi Livni
Daniel M. Roy
Main: 11 pages · Bibliography: 5 pages · Appendix: 36 pages · 1 table
Abstract

In this paper, we investigate the necessity of traceability for accurate learning in stochastic convex optimization (SCO) under $\ell_p$ geometries. Informally, we say a learning algorithm is $m$-traceable if, by analyzing its output, it is possible to identify at least $m$ of its training samples. Our main results uncover a fundamental tradeoff between traceability and excess risk in SCO. For every $p \in [1,\infty)$, we establish the existence of an excess risk threshold below which every sample-efficient learner is traceable with a number of samples that is a constant fraction of its training set. For $p \in [1,2]$, this threshold coincides with the best excess risk of differentially private (DP) algorithms, i.e., above this threshold there exist algorithms that are not traceable, which corresponds to a sharp phase transition. For $p \in (2,\infty)$, this threshold instead gives novel lower bounds for DP learning, partially closing an open problem in this setup. En route to establishing these results, we prove a sparse variant of the fingerprinting lemma, which is of independent interest to the community.
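The informal notion of $m$-traceability above can be sketched in symbols. This is a rough rendering under assumed notation (the tracer $\mathcal{T}$, algorithm $\mathcal{A}$, sample $S$, and constant $c$ are illustrative, not the paper's exact formal definition):

```latex
% Sketch: algorithm A is m-traceable over distribution D if some tracer T,
% given only the output A(S), identifies at least m of the n training samples
% with constant probability. Notation here is assumed, not quoted from the paper.
\[
\exists\, \mathcal{T} \quad \text{s.t.} \quad
\Pr_{S \sim \mathcal{D}^n}
  \Bigl[\, \bigl|\, \mathcal{T}\bigl(\mathcal{A}(S)\bigr) \cap S \,\bigr| \ge m \,\Bigr]
\;\ge\; c .
\]
```

Under this reading, the paper's main tradeoff says that below a certain excess-risk threshold, every sample-efficient learner admits such a tracer with $m = \Omega(n)$.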

@article{voitovych2025_2502.17384,
  title={On Traceability in $\ell_p$ Stochastic Convex Optimization},
  author={Sasha Voitovych and Mahdi Haghifam and Idan Attias and Gintare Karolina Dziugaite and Roi Livni and Daniel M. Roy},
  journal={arXiv preprint arXiv:2502.17384},
  year={2025}
}