Privacy Amplification in Differentially Private Zeroth-Order Optimization with Hidden States

30 May 2025
Eli Chien
Wei-Ning Chen
Pan Li
Main: 9 pages · 1 figure · Bibliography: 4 pages · Appendix: 15 pages
Abstract

Zeroth-order optimization has emerged as a promising approach for fine-tuning large language models on domain-specific data, particularly under differential privacy (DP) and memory constraints. While first-order methods have been extensively studied from a privacy perspective, the privacy analysis and algorithmic design of zeroth-order methods remain significantly underexplored. A critical open question concerns hidden-state DP analysis: although convergent privacy bounds are known for first-order methods, it has remained unclear whether similar guarantees can be established for zeroth-order methods. In this work, we provide an affirmative answer by proving a convergent DP bound for zeroth-order optimization. Our analysis generalizes the celebrated privacy amplification-by-iteration framework to smooth loss functions in the zeroth-order setting. Furthermore, it yields improved DP zeroth-order algorithm designs that were previously unknown in the literature.
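To make the setting concrete, the following is a minimal sketch of a generic differentially private zeroth-order update: a two-point finite-difference estimate of the directional derivative along a random direction, clipped to bound sensitivity and privatized with Gaussian noise. This is an illustrative composition of standard ingredients (in the spirit of DP-SGD applied to zeroth-order gradient estimates), not the paper's algorithm; all names, parameters, and the toy objective are assumptions for illustration.

```python
import numpy as np

def dp_zo_step(theta, loss, lr=0.1, mu=1e-3, clip=1.0, sigma=1.0, rng=None):
    """One illustrative DP zeroth-order step (not the paper's method).

    Estimates the directional derivative of `loss` at `theta` along a
    random Gaussian direction z via a two-point finite difference,
    clips the scalar estimate to bound its sensitivity, adds Gaussian
    noise (the Gaussian mechanism), and takes a step along z.
    """
    rng = rng or np.random.default_rng()
    z = rng.standard_normal(theta.shape)              # random direction
    g = (loss(theta + mu * z) - loss(theta - mu * z)) / (2 * mu)
    g = float(np.clip(g, -clip, clip))                # bound sensitivity
    g_priv = g + sigma * rng.standard_normal()        # privatize the scalar
    return theta - lr * g_priv * z                    # update along z

# Toy usage: minimize f(x) = ||x||^2 with the noiseless variant (sigma=0)
rng = np.random.default_rng(0)
theta = np.ones(5)
for _ in range(300):
    theta = dp_zo_step(theta, lambda t: float(t @ t), sigma=0.0, rng=rng)
```

Note that only a scalar (the clipped finite-difference estimate) is perturbed per step, which is one reason zeroth-order methods are attractive under DP and memory constraints; the hidden-state question the paper addresses is whether releasing only the final iterate, rather than every noisy step, amplifies these per-step guarantees.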

@article{chien2025_2506.00158,
  title={Privacy Amplification in Differentially Private Zeroth-Order Optimization with Hidden States},
  author={Eli Chien and Wei-Ning Chen and Pan Li},
  journal={arXiv preprint arXiv:2506.00158},
  year={2025}
}